\section{Introduction} For automatic summarization, there are two main methods: extractive and abstractive. Extractive methods use certain scoring rules or ranking methods to select a certain number of important sentences from the source texts. For example, \cite{cao2016attsum} proposed to use Convolutional Neural Networks (CNN) to represent queries and sentences, and adopted a greedy algorithm combined with a \emph{pair-wise ranking algorithm} for extraction. Based on Recurrent Neural Networks (RNN), \cite{nallapati2017summarunner} constructed a sequence classifier and obtained the highest extractive scores on the CNN/Daily Mail corpus. Abstractive summarization models, in contrast, attempt to simulate the process of how human beings write summaries, and need to analyze, paraphrase, and reorganize the source texts. Abstractive models are known to suffer from two main problems: out-of-vocabulary (OOV) words and duplicate words. \cite{see2017get} proposed an improved pointer mechanism named \emph{pointer-generator} to handle OOV words, and a variant of the coverage vector called \emph{coverage} to deal with duplicate words. \cite{nema2017diversity} created the \emph{diverse cell} structure to handle the duplicate words problem in query-based summarization. \cite{paulus2017deep} were the first to propose a \emph{reinforcement learning} based neural network model, which obtained state-of-the-art scores on the CNN/Daily Mail corpus. Both extractive and abstractive methods have their merits. In this paper, we combine extractive and abstractive methods at the sentence level. In the extractive process, we find that there are some ambiguous words in the source texts. The different meanings of each word can be acquired through the synonym dictionary WordNet. First, a \emph{WordNet-based Lesk algorithm} is utilized to analyze word semantics.
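To illustrate the simplified Lesk idea mentioned above (a toy Python sketch, not the system described in this paper; the tiny sense inventory below is hypothetical, whereas the actual algorithm uses WordNet glosses):

```python
# A minimal sketch of simplified Lesk: pick the sense of an ambiguous
# word whose gloss (sense description) shares the most words with the
# surrounding context. The sense inventory here is hypothetical.

SENSES = {
    "bank": {
        "financial": "an institution that accepts deposits and lends money",
        "river": "sloping land beside a body of water such as a river",
    },
}

def simplified_lesk(word, context_words):
    """Return the sense whose gloss overlaps most with the context."""
    best_sense, best_overlap = None, -1
    for sense, gloss in SENSES[word].items():
        overlap = len(set(gloss.split()) & set(context_words))
        if overlap > best_overlap:
            best_sense, best_overlap = sense, overlap
    return best_sense

sense = simplified_lesk("bank", ["he", "sat", "on", "the", "river", "shore"])
```

With WordNet, the gloss lookup would simply be replaced by the synset definitions of each candidate sense.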
Then we apply the \emph{modified sentence ranking algorithm} to extract a specified number of sentences according to the sentence semantic information. During the abstractive part, based on the \emph{seq2seq model}, we add a new encoder derived from the extracted sentences and apply a \emph{dual attention} mechanism during decoding. As far as we know, this is the first time that joint training of sentence-level extractive and abstractive models has been conducted. Additionally, we combine the \emph{pointer-generator} and \emph{coverage} mechanisms to handle OOV words and duplicate words. Our contributions in this paper are mainly summarized as follows: \begin{itemize} \item Considering the semantics of words and sentences, we improve the \emph{sentence ranking algorithm} based on the \emph{WordNet-based simplified Lesk algorithm} to obtain important sentences from the source texts. \item We construct two parallel encoders from the extracted sentences and the source texts separately, and make use of a \emph{seq2seq dual attentional model} for joint training. \item We adopt the \emph{pointer-generator} and \emph{coverage} mechanisms to deal with the OOV words and duplicate words problems. Our results are competitive with the state-of-the-art scores. \end{itemize} \section{Our Method} Our method is based on the \emph{seq2seq attentional model}, which is implemented with reference to \cite{nallapati2016abstractive}; the attention distribution $\vec{\alpha}_t$ is calculated as in \cite{bahdanau2014neural}. The architecture of our model is composed of eight parts, as shown in Figure \ref{fig1}. We construct two encoders (\textcircled{\scriptsize 2}\textcircled{\scriptsize 4}) based on the source texts and the extracted sentences, and use a \emph{dual attentional} decoder (\textcircled{\scriptsize 1}\textcircled{\scriptsize 3}\textcircled{\scriptsize 5}\textcircled{\scriptsize 6}) to generate summaries.
Finally, we combine the \emph{pointer-generator} (\textcircled{\scriptsize 7}) and \emph{coverage} (\textcircled{\scriptsize 8}) mechanisms to manage the OOV and duplicate words problems. \begin{figure}[h!] \begin{center} \includegraphics[width=0.95\textwidth]{fig1.pdf} \caption{A \emph{dual attentional encoders-decoder model} with \emph{pointer-generator} network.} \label{fig1} \end{center} \end{figure} \subsection{Seq2seq dual attentional model} \subsubsection{Encoders-decoder model. } Referring to \cite{bahdanau2014neural}, we use two single-layer bidirectional Long Short-Term Memory (BiLSTM) encoders, namely the source and extractive encoders, and a single-layer unidirectional LSTM (UniLSTM) decoder, as shown in Figure \ref{fig1}. At encoding time $i$, the source texts and the extracted sentences input the word embeddings $\vec{w}_i^s$ and $\vec{w}_i^e$ into the two encoders, respectively, and the corresponding hidden states $\overleftrightarrow{\vec{h}}_i^s$ and $\overleftrightarrow{\vec{h}}_i^e$ are generated. At decoding step $t$, the decoder receives the word embedding of the previous word, which is taken from the reference summary during training, or provided by the decoder itself during testing. We then obtain the state $\vec{s}_t$ and produce the vocabulary distribution $P({\vec{y}_t})$.
We calculate $\overleftrightarrow{\vec{h}}_i^s$ by the following formulas: \begin{equation} \overrightarrow{\vec{h}}_i^s = LSTM(\vec{w}_i^s, \overrightarrow{\vec{h}}_{i-1}^s) \end{equation} \begin{equation} \overleftarrow{\vec{h}}_i^s = LSTM(\vec{w}_i^s, \overleftarrow{\vec{h}}_{i+1}^s) \end{equation} \begin{equation} \overleftrightarrow{\vec{h}}_i^s = [\overrightarrow{\vec{h}}_i^s;\ \overleftarrow{\vec{h}}_i^s] \end{equation} Similarly, $\overleftrightarrow{\vec{h}}_i^e$ is obtained as follows: \begin{equation} \overrightarrow{\vec{h}}_i^e = LSTM(\vec{w}_i^e, \overrightarrow{\vec{h}}_{i-1}^e) \end{equation} \begin{equation} \overleftarrow{\vec{h}}_i^e = LSTM(\vec{w}_i^e, \overleftarrow{\vec{h}}_{i+1}^e) \end{equation} \begin{equation} \overleftrightarrow{\vec{h}}_i^e = [\overrightarrow{\vec{h}}_i^e;\ \overleftarrow{\vec{h}}_i^e] \end{equation} \subsubsection{Dual attention mechanism. } At the $t^{th}$ step, we need not only the previous hidden state $\vec{s}_{t-1}$, but also the context vectors $\vec{c}_{t-1}^s$, $\vec{c}_{t-1}^e$, $\vec{c}_t^s$, $\vec{c}_t^e$ obtained from the corresponding attention distributions \cite{bahdanau2014neural} to gain the state $\vec{s}_t$ and the vocabulary distribution $P({\vec{y}_t})$.
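The bidirectional encoding of Eqs. (1)-(3) can be sketched numerically. The toy NumPy pass below (random untrained weights, illustrative dimensions rather than the paper's 128/256) shows how forward and backward LSTM states are concatenated at each position:

```python
import numpy as np

rng = np.random.default_rng(0)
d_emb, d_hid, T = 8, 16, 5          # toy sizes, not the paper's 128/256

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    # Stacked pre-activations for input, forget, output gates and candidate.
    i, f, o, g = np.split(W @ x + U @ h + b, 4)
    c_new = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
    h_new = sigmoid(o) * np.tanh(c_new)
    return h_new, c_new

def run_lstm(inputs, params):
    W, U, b = params
    h = c = np.zeros(d_hid)
    states = []
    for x in inputs:
        h, c = lstm_step(x, h, c, W, U, b)
        states.append(h)
    return states

def bilstm(inputs, fwd_params, bwd_params):
    fwd = run_lstm(inputs, fwd_params)                 # forward pass, Eq. (1)
    bwd = run_lstm(inputs[::-1], bwd_params)[::-1]     # backward pass, Eq. (2)
    # Eq. (3): concatenate forward and backward states at each position.
    return [np.concatenate([f, b]) for f, b in zip(fwd, bwd)]

def init_params():
    return (rng.normal(0, 0.1, (4 * d_hid, d_emb)),
            rng.normal(0, 0.1, (4 * d_hid, d_hid)),
            np.zeros(4 * d_hid))

words = [rng.normal(size=d_emb) for _ in range(T)]     # embeddings w_i^s
h_bi = bilstm(words, init_params(), init_params())     # states h_i^s
```

The extractive encoder (Eqs. 4-6) is computed identically over the extracted-sentence embeddings, with its own parameters.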
Firstly, for the source encoder, we calculate the context vector ${\vec{c}}_t^s$ in the following way ($\textbf{V}^s$, $\textbf{W}_1^s$, $\textbf{W}_2^s$, $\vec{b}^s$ are learnable parameters): \begin{equation} {{e}}_{i,\ t}^s = {\textbf{V}^s}^T\cdot tanh{(\textbf{W}_1^s\cdot\vec{s}_t+\textbf{W}_2^s\cdot\overleftrightarrow{\vec{h}}_i^s+\vec{b}^s)} \end{equation} \begin{equation} {{\alpha}}_{i,\ t}^s = \frac{exp({{e}}_{i,\ t}^s)}{\sum_{j=1}^{n_s}exp({{e}}_{j,\ t}^s)} \end{equation} \begin{equation} {\vec{c}}_t^s = \sum_{i=1}^{n_s}{{\alpha}}_{i,\ t}^s\cdot\overleftrightarrow{\vec{h}}_i^s \end{equation} Secondly, for the extractive encoder, we use the same method to compute the context vector ${\vec{c}}_t^e$ ($\textbf{V}^e$, $\textbf{W}_1^e$, $\textbf{W}_2^e$, $\vec{b}^e$ are learnable parameters): \begin{equation} {{e}}_{i,\ t}^e = {\textbf{V}^e}^T\cdot tanh{(\textbf{W}_1^e\cdot\vec{s}_t+\textbf{W}_2^e\cdot\overleftrightarrow{\vec{h}}_i^e+\vec{b}^e)} \end{equation} \begin{equation} {{\alpha}}_{i,\ t}^e = \frac{exp({{e}}_{i,\ t}^e)}{\sum_{j=1}^{n_e}exp({{e}}_{j,\ t}^e)} \end{equation} \begin{equation} {\vec{c}}_t^e = \sum_{i=1}^{n_e}{{\alpha}}_{i,\ t}^e\cdot\overleftrightarrow{\vec{h}}_i^e \end{equation} Thirdly, we get the gated context vector ${\vec{c}}_t^g$ by calculating the weighted sum of the context vectors ${\vec{c}}_t^s$ and ${\vec{c}}_t^e$, where the weight is given by a \emph{gate network} computed from the concatenation of ${\vec{c}}_t^s$ and ${\vec{c}}_t^e$ via a \emph{multi-layer perceptron (MLP)}.
Details are shown below ($\sigma$ is the Sigmoid function; $\textbf{W}^g$, $\vec{b}^g$ are learnable parameters): \begin{equation} \vec{g}_t = \sigma(\textbf{W}^g\cdot[\vec{c}_t^s;\ \vec{c}_t^e]+\vec{b}^g) \end{equation} \begin{equation} {\vec{c}}_t^{g} = \vec{g}_t\cdot{\vec{c}}_t^s+(\vec{1}-\vec{g}_t)\cdot{\vec{c}}_t^e \end{equation} In the same way, we can obtain the hidden state ${\vec{s}}_t$ and predict the probability distribution $P({\vec{y}_t})$ at time $t$ ($\textbf{W}_1^{in}$, $\textbf{W}_2^{in}$, $\vec{b}_{in}$, $\textbf{W}_1^{out}$, $\textbf{W}_2^{out}$, $\vec{b}_{out}$ are learnable parameters). \begin{equation} {\vec{s}}_t = LSTM({\vec{s}}_{t-1},\ \textbf{W}_1^{in}\cdot{\vec{y}}_{t-1}+\textbf{W}_2^{in}\cdot{\vec{c}}_{t-1}^{g}+\vec{b}_{in}) \end{equation} \begin{equation} P({\vec{y}}_t | {\vec{y}}_{<t}, \vec{x}) = softmax(\textbf{W}_1^{out}\cdot{\vec{s}}_t+\textbf{W}_2^{out}\cdot{\vec{c}}_t^{g}+\vec{b}_{out}) \end{equation} \subsection{WordNet-based Sentence Ranking Algorithm} To extract the important sentences, we adopt a \emph{WordNet-based sentence ranking algorithm}. WordNet\footnote{\tt http://www.nltk.org/howto/wordnet.html} is a lexical database for the English language, which groups English words into sets of synonyms called synsets and provides short definitions and usage examples. \cite{pal2014approach} used the \emph{simplified Lesk approach} based on WordNet to extract abstracts. We build on this algorithm to design our \emph{sentence ranking algorithm}, which is used to construct the extractive encoder. For a sentence $\vec{x} = (x_1, x_2, ..., x_n)$, after filtering out the stop words and unambiguous tokens through WordNet, we obtain a reserved subsequence $\vec{x}^{'} = (x_{i_1}, x_{i_2}, ..., x_{i_m})$.
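The dual attention computation above can be illustrated as follows (a toy NumPy sketch with random, untrained parameters and illustrative dimensions; not the trained implementation):

```python
import numpy as np

rng = np.random.default_rng(1)
d_h, d_s, n_s, n_e = 6, 4, 7, 3      # toy dimensions and sequence lengths

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def attend(s_t, H, V, W1, W2, b):
    # Additive attention: scores, normalized distribution, context vector.
    e = np.array([V @ np.tanh(W1 @ s_t + W2 @ h + b) for h in H])
    alpha = softmax(e)
    return alpha @ H                  # c_t = sum_i alpha_{i,t} * h_i

def make_params():
    return (rng.normal(size=d_h), rng.normal(size=(d_h, d_s)),
            rng.normal(size=(d_h, d_h)), rng.normal(size=d_h))

s_t = rng.normal(size=d_s)            # decoder state s_t
H_s = rng.normal(size=(n_s, d_h))     # source encoder states
H_e = rng.normal(size=(n_e, d_h))     # extractive encoder states

c_s = attend(s_t, H_s, *make_params())          # source context c_t^s
c_e = attend(s_t, H_e, *make_params())          # extractive context c_t^e

# Gate network: a sigmoid gate over the concatenated contexts mixes them.
W_g = rng.normal(size=(d_h, 2 * d_h))
b_g = rng.normal(size=d_h)
g_t = 1.0 / (1.0 + np.exp(-(W_g @ np.concatenate([c_s, c_e]) + b_g)))
c_g = g_t * c_s + (1.0 - g_t) * c_e             # gated context c_t^g
```

The gate lets the decoder decide, per dimension, how much to rely on the source encoder versus the extractive encoder at each step.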
Since some words have too many senses, which would result in excessive computation, we set a window size $n_{win}$ (default value 5), sort $\vec{x}^{'}$ in descending order by the number of senses, and keep the first $n_{sav}$ ($n_{sav} = min(m, n_{win})$) words to get $\vec{x}^{''} = (x_{s_1}, x_{s_2}, ..., x_{s_{n_{sav}}})$. Next, we count the common words between sense descriptions to obtain each word's weight. Finally, we sum the word weights of each sentence and compute an average sentence weight. Taking a sentence $\vec{x}^{''} = (x_1, x_2, x_3)$ for instance, assume that $x_1$ has two senses $\vec{m}_a$ and $\vec{m}_b$, $x_2$ has two senses $\vec{m}_c$ and $\vec{m}_d$, and $x_3$ has two senses $\vec{m}_e$ and $\vec{m}_f$. Considering $x_1$ as the keyword, we measure the number of common words between each pair of sense descriptions of $x_1$ and another word. Table \ref{tab1} shows all possible matches of the senses of $x_1$, $x_2$, $x_3$. For the two senses of $x_1$, we separately obtain the sum of co-occurrence word pairs for each sense: for $\vec{m}_a$, $count_{\vec{m}_a}$ = $count_{ac}$ + $count_{ad}$ + $count_{ae}$ + $count_{af}$; for $\vec{m}_b$, $count_{\vec{m}_b}$ = $count_{bc}$ + $count_{bd}$ + $count_{be}$ + $count_{bf}$. The higher score $count_{x_1}$ ($count_{\vec{m}_a}$ or $count_{\vec{m}_b}$) is assigned to the keyword $x_1$. \begin{table}[h!]
\begin{center} \caption{The number of common words between each pair of sense descriptions.}\label{tab1} \begin{tabular}{c|c} \hline \ Pair of senses \ & \ common words in sense description \ \\ \hline $\vec{m}_a$ and $\vec{m}_c$ & $count_{ac}$\\ $\vec{m}_a$ and $\vec{m}_d$ & $count_{ad}$\\ $\vec{m}_b$ and $\vec{m}_c$ & $count_{bc}$\\ $\vec{m}_b$ and $\vec{m}_d$ & $count_{bd}$\\ $\vec{m}_a$ and $\vec{m}_e$ & $count_{ae}$\\ $\vec{m}_a$ and $\vec{m}_f$ & $count_{af}$\\ $\vec{m}_b$ and $\vec{m}_e$ & $count_{be}$\\ $\vec{m}_b$ and $\vec{m}_f$ & $count_{bf}$\\ \hline \end{tabular} \end{center} \end{table} In this way, we can acquire the average weight of sentence $\vec{x}$. \begin{equation} weight_{avg} = \frac{1}{n_{sav}}\sum_{i=1}^{n_{sav}} count_{x_i} \end{equation} Assume that document $\vec{D} = (\vec{x}_1, \vec{x}_2, ..., \vec{x}_N)$ contains a total of $N$ sentences. We sort them in descending order according to their average weights, and then extract the top $n_{top}$ sentences (default value 3). \subsection{Pointer-generator and coverage mechanisms} \subsubsection{Pointer-generator network. } \emph{Pointer-generator} is an effective method to solve the problem of OOV words, and its structure is expanded in Figure \ref{fig1}. We borrow the method improved by \cite{see2017get}. $p_{gen}$ is defined as a switch that decides whether to generate a word from the vocabulary or copy a word from the source encoder attention distribution. We maintain an extended vocabulary consisting of the vocabulary and all words in the source texts.
For the decoding step $t$ and decoder input $\vec{x}_t$, we define $p_{gen}$ as: \begin{equation} p_{gen} = \sigma(\textbf{W}_1^p\cdot\vec{c}_t^s+\textbf{W}_2^p\cdot\vec{s}_t+\textbf{W}_3^p\cdot\vec{x}_t+\vec{b}^p) \end{equation} \begin{equation} P_{vocab} = P({\vec{y}}_t | {\vec{y}}_{<t}, \vec{x}) \end{equation} \begin{equation} P(\vec{w}_t) = p_{gen}P_{vocab}(\vec{w}_t)+(1-p_{gen})\sum_{i: \vec{w}_i=\vec{w}_t}{\alpha}_{i,\ t}^s \end{equation} where $\vec{w}_t$ is the value of $\vec{x}_t$, and $\textbf{W}_1^p$, $\textbf{W}_2^p$, $\textbf{W}_3^p$, $\vec{b}^p$ are learnable parameters. \subsubsection{Coverage mechanism. } Duplicate words are a critical problem in the \emph{seq2seq model}, and become even more serious when generating long, multi-sentence texts. \cite{see2017get} made some minor modifications to the \emph{coverage model} \cite{tu2016modeling}, which is also displayed in Figure \ref{fig1}. First, we calculate the sum of the attention distributions from the previous decoder steps ($1, 2, 3, ..., t-1$) to get a coverage vector $\vec{cov}_t^s$: \begin{equation} \vec{cov}_t^s = \sum_{t'=0}^{t-1}\vec{\alpha}_{t'}^s \end{equation} Then, we make use of the coverage vector $\vec{cov}_t^s$ to update the attention distribution: \begin{equation} {{e}}_{i,\ t}^s = {\textbf{V}^s}^T\cdot tanh{(\textbf{W}_1^s\cdot\vec{s}_t+\textbf{W}_2^s\cdot\overleftrightarrow{\vec{h}}_i^s+\textbf{W}_3^s\cdot{cov}_{i,\ t}^s+\vec{b}^s)} \end{equation} Finally, we define the coverage loss function $covloss_t$ to penalize duplicate words appearing at decoding time $t$, and renew the total loss: \begin{equation} covloss_t = \sum_{i}min({\alpha}_{i,\ t}^s,\ {cov}_{i,\ t}^s) \end{equation} \begin{equation} loss_t = -log(P(\vec{w}_t^*)) + {\lambda}\ covloss_t \end{equation} where $\vec{w}_t^*$ is the target word at the $t^{th}$ step, $-log(P(\vec{w}_t^*))$ is the primary loss for timestep $t$ during training, the hyperparameter $\lambda$ (default value 1.0) is the weight for $covloss_t$,
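A small numeric sketch of the pointer-generator mixture and the coverage loss defined above (hand-picked toy probabilities; the vocabulary and source tokens are hypothetical):

```python
import numpy as np

# Toy vocabulary and source sequence; "quokka" is OOV for the vocabulary
# and can only be produced by copying from the source attention.
vocab = ["the", "cat", "sat", "[UNK]"]
source = ["the", "quokka", "sat"]
ext_vocab = vocab + ["quokka"]                       # extended vocabulary

p_gen = 0.6                                          # switch probability
p_vocab = np.array([0.5, 0.2, 0.2, 0.1])             # P_vocab over vocab
alpha = np.array([0.1, 0.7, 0.2])                    # source attention

# Final distribution: mix generation and copying over the extended vocab.
p_final = np.zeros(len(ext_vocab))
p_final[:len(vocab)] = p_gen * p_vocab
for i, w in enumerate(source):
    p_final[ext_vocab.index(w)] += (1 - p_gen) * alpha[i]

# Coverage vector: sum of attention from previous decoder steps.
prev_alphas = [np.array([0.6, 0.3, 0.1]), np.array([0.2, 0.5, 0.3])]
cov = np.sum(prev_alphas, axis=0)

# Coverage loss: covloss_t = sum_i min(alpha_{i,t}, cov_{i,t}).
covloss = np.minimum(alpha, cov).sum()
```

Note that the mixture remains a valid probability distribution over the extended vocabulary, and the coverage loss penalizes attending again to already-covered source positions.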
$\textbf{W}_1^s$, $\textbf{W}_2^s$, $\textbf{W}_3^s$, $\vec{b}^s$ are learnable parameters. \section{Experiments} \subsection{Dataset} The CNN/Daily Mail dataset\footnote{\tt https://cs.nyu.edu/\~{}kcho/DMQA/} is widely used in public automatic summarization evaluation; it contains online news articles (781 tokens on average) paired with multi-sentence summaries (56 tokens on average). \cite{see2017get} provided the data processing script, and we take advantage of it to obtain the non-anonymized version of the data, including 287,226 training pairs, 13,368 validation pairs and 11,490 test pairs, whereas \cite{nallapati2017summarunner,nallapati2016abstractive} used the anonymized version. During training, we find that 114 of the 287,226 articles are empty, so we utilize the remaining 287,112 pairs. Then, we perform the splitting preprocessing on the data pairs with the help of the Stanford CoreNLP toolkit\footnote{\tt https://stanfordnlp.github.io/CoreNLP/}, convert them into binary files, and build the vocab file for convenient data reading. \subsection{Implementation} \subsubsection{Model parameters configuration. } The parameters of the controlled experimental models are described as follows. For all models, the word embeddings are 128-dimensional and the RNN hidden states are 256-dimensional for the source encoders, extractive encoders and decoders. Contrary to \cite{nallapati2016abstractive}, we learn the word embeddings from scratch during training, because our training dataset is large enough. We apply the Adagrad optimizer with learning rate 0.15 and an initial accumulator value of 0.1, and employ gradient clipping with a maximum gradient norm of 2. For the one-encoder models, we set the vocabulary size to 50k for both the source encoder and the target decoder.
We also tried a vocabulary size of 150k and discovered that, when the model is trained to convergence, the time cost doubles while the test scores drop slightly. Our analysis is that the model's parameters grow excessively as the vocabulary enlarges, leading to overfitting during training. Meanwhile, for the models with two encoders, we set the vocabulary size to 40k. Each pair of the dataset consists of an article and a multi-sentence summary. We truncate the article to 400 tokens and limit the summary to 100 tokens at both training and testing time. During decoding, we generate at least 35 words with the \emph{beam search algorithm}. Data truncation not only reduces memory consumption and speeds up training and testing, but also improves the experimental results, because the vital information of news texts is mainly concentrated in the first half. We train on a single GeForce GTX 1080 GPU with 8114 MiB of memory; the batch size is set to 16 and the beam size is 4 for \emph{beam search} in decoding mode. The \emph{seq2seq dual attentional models} without \emph{pointer-generator} were trained for about two days. Models with \emph{pointer-generator} expedite the training, reducing the time cost to about one day. When we add \emph{coverage}, the coverage loss weight $\lambda$ is set to 1.0, and the model needs about one hour for training. \subsubsection{Controlled experiments. } In order to figure out how each part of our models contributes to the test results, we implemented all the models based on the released Tensorflow codes\footnote{\tt https://github.com/tensorflow/models/tree/master/research/textsum} and conducted a series of experiments. The baseline model is a general \emph{seq2seq attentional model}, whose encoder is a BiLSTM and whose decoder is a UniLSTM.
The second baseline model is our \emph{encoders-decoder dual attention model}, which contains two BiLSTM encoders and one UniLSTM decoder. This model combines the extractive and abstractive methods to perform joint training effectively through the \emph{dual attention mechanism}. For the above two basic models, in order to examine how the OOV and duplicate words are treated, we introduce the \emph{pointer-generator} and \emph{coverage} mechanisms step by step. For the second baseline, these two tricks only involve the source encoder, because the source encoder already covers all the tokens in the extractive encoder. For the extractive encoder, we adopt two extraction methods. One is the \emph{leading three (lead-3)} sentences technique, which is simple but indeed a strong baseline. The other is the \emph{modified sentence ranking algorithm} based on WordNet that we explain in detail in Section 2; it considers the semantic relations of words and sentences in the source texts. \subsection{Results} ROUGE \cite{lin2004rouge} is a set of metrics with a software package used for evaluating automatic summarization and machine translation results. It counts the number of overlapping basic units, including n-grams and longest common subsequences (LCS). We use pyrouge\footnote{\tt https://pypi.org/project/pyrouge/0.1.3/}, a Python wrapper, to obtain ROUGE-1, ROUGE-2 and ROUGE-L scores, and list the $\textbf{F}_1$ scores in Table \ref{tab2}. \begin{table}[h!] \begin{center} \caption{ROUGE $\textbf{F}_1$ scores on the CNN/Daily Mail non-anonymized test set for all the controlled experiment models mentioned above. According to the official ROUGE usage description, all our ROUGE scores have a 95\% confidence interval of at most $\pm$0.25. \emph{PGN}, \emph{Cov}, \emph{ML}, \emph{RL} are abbreviations for \emph{pointer-generator}, \emph{coverage}, \emph{mixed-objective learning} and \emph{reinforcement learning}.
Models with subscript $_a$ were trained and tested on the anonymized CNN/Daily Mail dataset, and models marked with $^*$ are the state-of-the-art extractive and abstractive summarization models on the anonymized dataset to date. } \label{tab2} \begin{tabular*}{\textwidth}{p{0.50\textwidth}|>{\hfil}p{0.167\textwidth}<{\hfil}|>{\hfil}p{0.167\textwidth}<{\hfil}|>{\hfil}p{0.130\textwidth}<{\hfil}} \hline \multirow{2}{*}{\ \ Models} & \multicolumn{3}{c}{ROUGE $\textbf{F}_1$ scores} \\ \cline{2-4} & 1 & 2 & L \\ \hline \ \ Seq2seq + Attn & 31.50 & 11.95 & 28.85 \\ \ \ Seq2seq + Attn (150k) & 30.67 & 11.32 & 28.11 \\ \ \ Seq2seq + Attn + PGN & 36.58 & 15.76 & 33.33 \\ \ \ Seq2seq + Attn + PGN + Cov & \textbf{39.16} & \textbf{16.98} & \textbf{35.81} \\ \hline \ \ Lead-3 + Dual-attn + PGN & 37.26 & 16.12 & 33.87 \\ \ \ WordNet + Dual-attn + PGN & 36.91 & 15.97 & 33.58 \\ \ \ Lead-3 + Dual-attn + PGN + Cov & \textbf{39.41} & \textbf{17.30} & 35.92 \\ \ \ WordNet + Dual-attn + PGN + Cov & 39.32 & 17.15 & \textbf{36.02} \\ \hline \ \ Lead-3 (\cite{see2017get}) & 40.34 & 17.70 & 36.57 \\ \ \ Lead-3 (\cite{nallapati2017summarunner})$_{a}$ & 39.20 & 15.70 & 35.50 \\ \ \ SummaRuNNer (\cite{nallapati2017summarunner})$_{a}^*$ & \textbf{39.60} & \textbf{16.20} & \textbf{35.30} \\ \cdashline{1-4}[2pt/2pt] \ \ RL + Intra-attn (\cite{paulus2017deep})$_{a}^*$ & \textbf{41.16} & 15.75 & \textbf{39.08} \\ \ \ ML + RL + Intra-attn (\cite{paulus2017deep})$_{a}$ & 39.87 & \textbf{15.82} & 36.90 \\ \hline \end{tabular*} \end{center} \end{table} We carry out the experiments on the original dataset, i.e., the non-anonymized version of the data. For the top three models in Table \ref{tab2}, the ROUGE scores are slightly higher than those reported by \cite{see2017get}, except for the ROUGE-L score of \emph{Seq2seq + Attn + PGN}, which is 0.09 points lower than the former result.
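As a reference for how these scores are computed, the unigram variant can be sketched as follows (a simplified ROUGE-1 $F_1$ with clipped counts; the official toolkit additionally handles stemming, multiple references, ROUGE-2 and ROUGE-L):

```python
from collections import Counter

def rouge1_f1(candidate, reference):
    """Simplified ROUGE-1 F1: clipped unigram overlap between summaries."""
    cand, ref = Counter(candidate.split()), Counter(reference.split())
    overlap = sum(min(cand[w], ref[w]) for w in cand)
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

score = rouge1_f1("the cat sat on the mat", "the cat lay on the mat")
```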
For the fourth model, we did not reproduce the results of \cite{see2017get}; ROUGE-1, ROUGE-2 and ROUGE-L decreased by an average of 0.41 points. For the four models in the middle, we apply the \emph{dual attention} mechanism to integrate extraction with abstraction for joint training and decoding. These model variants, equipped with \emph{PGN} alone or with both \emph{PGN} and \emph{Cov}, achieve better results than the corresponding vanilla attentional models. We conclude that the extractive encoders play a role: we obtain higher ROUGE-1 and ROUGE-2 scores with the \emph{Lead-3 + Dual-attn + PGN + Cov} model, and a better ROUGE-L score with the \emph{WordNet + Dual-attn + PGN + Cov} model. Consider the five models at the bottom, two of which give the state-of-the-art scores for the extractive and abstractive methods. Our scores are already comparable to them. It is worth mentioning that, based on the \emph{dual attention}, our models combining either \emph{Lead-3} or \emph{WordNet} with \emph{PGN} and \emph{Cov} have exceeded the previous best ROUGE-2 scores. Note, however, that the previous \emph{SummaRuNNer} and \emph{RL}-related models are based on the anonymized dataset, and these differences may cause some deviations in the comparison of experimental results. We give the generated summaries of the different models for one selected test article. From Figure \ref{fig2}, we can see that the red words represent key information about \emph{who}, \emph{what}, \emph{where} and \emph{when}. We can match the corresponding keywords in the remaining seven summaries to find out whether they cover all the significant points, and check whether they are expressed in a concise and coherent way. It can be seen from Figure \ref{fig2} that most of the models lose several vital points, and the \emph{Lead-3 + Dual-attn + PGN} model suffers from fairly serious repetition.
Our model \emph{WordNet + Dual-attn + PGN + Cov} retains the main key information and has better readability and semantic correctness. \begin{figure}[h!] \begin{center} \includegraphics[width=0.93\textwidth]{fig2.pdf} \caption{Summaries for all the models of one test article example.} \label{fig2} \end{center} \end{figure} \section{Related Work} Up to now, automatic summarization with extractive and abstractive methods has been actively researched. On the one hand, extractive techniques extract topic-related keywords and significant sentences from the source texts to constitute summaries. \cite{cheng2016neural} proposed a \emph{seq2seq model} with a hierarchical encoder and attentional decoder to solve extractive summarization tasks at the word and sentence levels. Recently, \cite{nallapati2017summarunner} put forward \emph{SummaRuNNer}, an RNN based sequence model for extractive summarization, which achieved the previous state-of-the-art performance. On the other hand, abstractive methods establish an intrinsic semantic representation and use natural language generation techniques to produce summaries that are closer to what human beings express. \cite{bahdanau2014neural} applied the combination of the \emph{seq2seq model} and the \emph{attention} mechanism to machine translation for the first time. \cite{rush2015neural} exploited the \emph{seq2seq model} for sentence compression, laying the groundwork for subsequent summarization with different granularities. \cite{lopyrev2015generating} used an \emph{encoder-decoder with attention} method to generate news headlines. \cite{zhou2017selective} added a \emph{selective gate network} to the basic model in order to control which part of the information flows from encoder to decoder. \cite{tan2017abstractive} proposed a model based on a graph and the \emph{attention} mechanism to strengthen the positioning of vital information in source texts.
To handle rare and unseen words, \cite{gu2016incorporating,gulcehre2016pointing} proposed the \emph{COPYNET model} and the \emph{pointing} mechanism, and \cite{zeng2016efficient} created read-again and copy mechanisms. \cite{nallapati2016abstractive} combined the basic model with the \emph{large vocabulary trick (LVT)}, a \emph{feature-rich encoder}, a \emph{pointer-generator}, and \emph{hierarchical attention}; in addition to the \emph{pointer-generator}, the other tricks of that paper also contributed to the experimental results. \cite{see2017get} presented an updated version of the \emph{pointer-generator} which proved to be better. As for duplicate words, in order to solve the problems of over- and under-translation, \cite{tu2016modeling} came up with a \emph{coverage} mechanism that exploits historical information for attention calculation, and \cite{see2017get} provided an improved version. \cite{nema2017diversity} introduced a series of \emph{diverse cell} structures to address duplicate words. So far, few papers have considered structural or semantic issues at the language level in the field of summarization. \cite{filippova2008dependency} presented a novel unsupervised method that used a pruned dependency tree for sentence compression. Based on a Chinese short text summarization dataset (LCSTS) and the \emph{attentional seq2seq model}, \cite{ma2017improving} proposed to enhance semantic relevance by calculating the cosine similarities of summaries and source texts. \section{Conclusion} In this paper, we construct a \emph{dual attentional seq2seq model} comprising source and extractive encoders to generate summaries. In addition, we put forward the \emph{modified sentence ranking algorithm} to extract a specific number of highly weighted sentences, for the purpose of strengthening the semantic representation of the extractive encoder.
Furthermore, we introduce the \emph{pointer-generator} and \emph{coverage} mechanisms into our models to solve the problems of OOV and duplicate words. On the non-anonymized CNN/Daily Mail dataset, our results are close to the state-of-the-art ROUGE $\textbf{F}_1$ scores. Moreover, we obtain the highest abstractive ROUGE-2 $\textbf{F}_1$ scores, as well as summaries with better readability and higher semantic accuracy. In future work, we plan to unify the \emph{reinforcement learning} method with our abstractive models. \section*{Acknowledgments} We thank the anonymous reviewers for their insightful comments on this paper. This work was partially supported by the National Natural Science Foundation of China (61572049 and 61333018). The correspondence author is Sujian Li. \bibliographystyle{splncs04}
\section{Introduction} During the last decade, circuit QED based on superconducting circuits has become a promising platform to investigate strong coupling between light and matter as well as to enable quantum information processing technology \cite{Schoelkopf,Clarke,Wendin}. Some of the exciting results include the following: strong coupling between a superconducting qubit and a single photon \cite{Wallraff}, resolving photon-number states \cite{Schuster}, synthesizing arbitrary quantum states \cite{Hofheinz}, three-qubit quantum error correction \cite{Reed}, implementation of a Toffoli gate \cite{Fedorov}, quantum feedback control \cite{Vijay} and architectures for a superconducting quantum computer \cite{Mariantoni}. The nonlinear properties of Josephson junctions have also been used to study the dynamical Casimir effect \cite{Chris2} and to build quantum limited amplifiers \cite{Beltran,Bergeal}. More recently, theoretical and experimental work has begun to investigate the strong interaction between light and a single atom even without a cavity \cite{Tey,Hwang,Wrigge,Gerhardt}. In this system, the destructive interference between the excited dipole radiation and the incident field gives rise to the extinction of the forward propagating wave for a weak incident field. This was first demonstrated for a single atom/molecule in 3D space, where the extinction of the forward incident wave did not exceed 12$\%$ \cite{Tey,Wrigge}, due to the spatial mode mismatch between the incident and scattered waves. Taking advantage of the huge dipole moment of the artificial atom and the confinement of the propagating fields in a 1D open transmission line \cite{Astafiev1,Hoi,Hoi2,Hoi3,Abdumalikov,Astafiev2,Shen2,Chang}, strong coupling between an artificial atom and a propagating field can be achieved. Extinctions in excess of 99\% have been observed \cite{Hoi,Hoi2}.
This system represents a potential key component in the field of microwave quantum optics, which is the central scope of this article. This paper is organized as follows. The elastic scattering properties of the single artificial atom are presented in section 2. Well-known quantum optics effects, such as the Mollow triplet and Autler-Townes splitting, are presented in section 3. In section 4, we demonstrate two quantum devices based on these effects which operate at the single-photon level in the microwave regime, namely the single-photon router and the photon-number filter. In section 5, we discuss the possibilities of a quantum network using these devices. \begin{figure} \includegraphics[width=\columnwidth]{Figure1} \caption{(a) Top: A micrograph of the artificial atom, a superconducting transmon qubit embedded in a 1D open transmission line. (Zoom In) Scanning-electron micrograph of the SQUID loop of the transmon. Bottom: the corresponding circuit model. (b) Measured transmittance, $T=|t|^2$, on resonance as a function of the incoming probe power, $P_p$, for samples 1 and 2. At low power, very little is transmitted, whereas at high power practically everything is transmitted. (Inset) A weak, resonant coherent field is reflected by the atom.} \end{figure} \section{Elastic Scattering} In Fig. 1a, a transmon qubit \cite{Koch} is embedded in a 1D open transmission line with a characteristic impedance $Z_0\simeq 50$ $\Omega$. The 0-1 transition energy of the transmon, $ \hbar\omega_{10}(\Phi)\approx \sqrt{8E_J(\Phi)E_C}-E_C$, is determined by two energies, where $E_C=e^2/2C_\Sigma$ is the charging energy, $C_\Sigma$ is the total capacitance of the transmon, $E_J(\Phi)=E_{J}|\cos(\pi\Phi/\Phi_0)|$ is the Josephson energy which can be tuned by the external flux $\Phi$, $E_{J}$ is the maximum Josephson energy and $\Phi_0=h/2e$ is the magnetic flux quantum. With a coherent state input, we investigate the transmission and reflection properties of the field.
The input field, transmitted field and the reflected field are denoted as $V_{in}$, $V_{T}$ and $V_{R}$, respectively, as indicated in the bottom panel of Fig. 1a. By definition, the transmission coefficient $t=V_{T}/V_{in} =1+r$. The reflection coefficient, $r$, can be expressed as \cite{Astafiev1}, \begin{eqnarray} r=\frac{V_{R}}{V_{in}} =-r_0\frac{1-i\delta\omega_p/\gamma_{10}}{1+(\delta\omega_p/\gamma_{10})^2+\Omega_p^2/\Gamma_{10}\gamma_{10}}, \end{eqnarray} where the maximum reflection amplitude is given by $r_0=\Gamma_{10}/2\gamma_{10}$. Here $\Gamma_{10}$ and $\Gamma_{\phi}$ are the relaxation rate and pure dephasing rate of the 0-1 transition of the atom, respectively, $\gamma_{10}=\Gamma_{10}/2+\Gamma_{\phi}$ is the 0-1 decoherence rate and $\delta\omega_p=\omega_p-\omega_{10}$ is the detuning between the applied probe frequency, $\omega_p$, and the 0-1 transition frequency, $\omega_{10}$. We see that both $r_0$ and $\gamma_{10}$ depend only on $\Gamma_\phi$ and $\Gamma_{10}$. $\Omega_p$ is the Rabi frequency induced by the probe, which is proportional to $V_{in}$ \cite{Koch}, \begin{eqnarray} \Omega_p=\frac{2e}{\hbar}\frac{C_c}{C_{\Sigma}}\left(\frac{E_J}{8E_C}\right)^{1/4}\sqrt{P_pZ_0}, \end{eqnarray} where $P_p=|V_{in}|^2/2Z_0$ is the probe power. The relaxation process is dominated by coupling to the 1D transmission line through the coupling capacitance $C_c$ (see bottom panel of Fig. 1a); assuming that photon emission to the transmission line dominates the relaxation, we find $\Gamma_{10}\simeq\omega_{10}^2C_c^2Z_0/(2 C_\Sigma)$. \begin{figure*} \includegraphics[width=\columnwidth]{Figure2} \caption{$t$ as a function of $P_p$ and $\omega_p$ (sample 1). a) The magnitude response. b) The phase response. Top panel: experimental data. Bottom panel: line cuts for 5 different powers, as indicated by the arrows on the top panel. The experimental data (markers) are fit simultaneously using Eq. 1 (curves).
The magnitude response demonstrates the strong coupling between the atom and resonant propagating microwaves, while the phase response shows anomalous dispersion \cite{Astafiev1}.} \end{figure*} According to Eq.(1), for a weak $(\Omega_p\ll\gamma_{10})$, resonant probe $(\delta\omega_p=0)$, in the absence of both pure dephasing $(\Gamma_{\phi}=0)$ and unknown relaxation channels, we should see full reflection ($|r|=1$) of the incoming probe field \cite{Zumofen,Chang,Shen2}. In that case, we also have full extinction, $|t|=0$, of the propagating wave. This full extinction (perfect reflection) can be described as a coherent interference of the incoming wave and the wave scattered by the atom. This is what we observe in Fig. 1b, where we measure the transmittance, $T=|t|^2$, on resonance as a function of $P_p$ for two samples. We see an extinction of the resonant microwaves of up to $90\%$ ($99\%$) for sample 1 (2) at low incident probe power, where $\Omega_p\ll\gamma_{10}$. For increasing $P_p$, we see the strong nonlinearity of the atom, which becomes saturated by the incident microwave photons. Since the atom can only scatter one photon at a time, at high incident power, $\Omega_p\gg\gamma_{10}$, most of the photons pass the atom without interaction and are thus transmitted. Therefore $|t|$ tends towards unity for increasing $P_p$, consistent with Eq.(1). We define the average probe photon number arriving at the transmon per interaction time as $\left \langle N_p \right \rangle=P_p/(\hbar\omega_p(\Gamma_{10}/2\pi))$. We measure $t$ as a function of $P_p$ and $\omega_p$. In Fig. 2, the experimental magnitude, $|t|$, and phase response, $\varphi_p$, for sample 1 are shown in a and b, respectively. The top and bottom panels display 2D plots and the corresponding line cuts indicated by the arrows, respectively.
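Eq. (1) and the definition of $\left \langle N_p \right \rangle$ can be evaluated numerically. The following is an illustrative sketch with our own helper names (rates in units of $2\pi\cdot$MHz, so only ratios matter in Eq. (1)), using the sample-1 fit parameters quoted below:

```python
import math

HBAR = 1.054571817e-34  # J*s

def reflection(dw, Gamma10, Gamma_phi, Omega_p):
    """Reflection coefficient r of Eq. (1); all rate/detuning arguments
    must share the same (angular-frequency) units."""
    gamma10 = Gamma10 / 2.0 + Gamma_phi
    r0 = Gamma10 / (2.0 * gamma10)
    return -r0 * (1.0 - 1j * dw / gamma10) / (
        1.0 + (dw / gamma10) ** 2 + Omega_p ** 2 / (Gamma10 * gamma10))

def mean_photons(P_dBm, f_p_GHz, Gamma10_MHz):
    """<Np> = Pp / (hbar * omega_p * (Gamma10 / 2pi)): average probe
    photons arriving per interaction time."""
    P_W = 1e-3 * 10.0 ** (P_dBm / 10.0)
    return P_W / (HBAR * 2 * math.pi * f_p_GHz * 1e9 * Gamma10_MHz * 1e6)

# Sample 1 (Gamma10/2pi = 73 MHz, Gamma_phi/2pi = 18 MHz):
r_weak = reflection(0.0, 73.0, 18.0, 1e-3)   # weak resonant probe: |r| -> r0
T_weak = abs(1.0 + r_weak) ** 2               # transmittance ~0.11 (~90% extinction)
r_strong = reflection(0.0, 73.0, 18.0, 1e3)   # saturated atom: |t| -> 1
```

The weak-probe limit reproduces $r_0\approx0.67$ and the $\sim90\%$ power extinction, while a strong drive saturates the atom and restores near-unity transmission, as seen in Fig. 1b. A probe power of order $-131$ dBm at a few GHz corresponds to well below one photon per interaction time, i.e. the single-photon regime.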
For $\left \langle N_p \right \rangle\ll1$, the magnitude response shows the strong extinction of resonant microwaves, up to 70$\%$ in amplitude or $\sim 90\%$ in power (Fig. 1b). The solid curves of Fig. 2 show fits to all magnitude and phase curves simultaneously, with three fitting parameters, $\Gamma_{10}/2\pi=73$\,MHz, $\Gamma_{\phi}/2\pi=18$\,MHz and $\omega_{10}/2\pi=7.1$\,GHz. This corresponds to $C_c=25$\,fF, $\gamma_{10}/2\pi=55$\,MHz and $r_0=0.67$. We find very good agreement between theory and experiment. We also see that $r$ varies as a function of $P_p$ and $\omega_p$, as expected (data not shown). To further characterize sample 1, the resonance dip in transmission in Fig. 2a is mapped as a function of $\Phi$ with a weak probe, $\Omega_p\ll\gamma_{10}$ (see Fig. 3a). If we increase $P_p$ to a level such that the 0-1 transition is saturated, two-photon (0-2) transitions occur, as indicated by the grey curve of Fig. 3b. The transition frequency corresponds to $(\omega_{10}+\omega_{21})/2$, where $\omega_{21}$ is the 1-2 transition frequency. We use a Cooper Pair Box \cite{Koch} Hamiltonian with 50 charge states to fit the spectrum of the atom. We extract $E_{J}/h=12.7$\,GHz, $E_C/h=590$\,MHz for sample 1. The extracted parameters are summarized in Table 1. Note that one of the Josephson junctions is broken in sample 3; therefore, the transition frequency could not be tuned with $\Phi$. \begin{table} \begin{tabular}{ccccccccc} \hline Sample & $E_J/h$ & $E_C/h$ & $E_J/E_C$ & $\omega_{10}/2\pi$ & $\omega_{21}/2\pi$ & $\Gamma_{10}/2\pi$ & $\Gamma_{\phi}/2\pi$ & Ext. \\ \hline 1 & $12.7$ & $0.59$ & $21.6$ & $7.1$ & $6.38$ & $0.073$ & $0.018$ & $90\%$ \\ \hline 2 & $10.7$ & $0.35$ & $31$ & $5.13$ & $4.74$ & $0.041$ & $0.001$ & $99\%$ \\ \hline 3 & $-$ & $-$ & $-$ & $4.88$ & $4.12$ & $0.017$ & $0.0085$ & $75\%$ \\ \hline \end{tabular} \caption{\label{Params} Parameters for samples 1, 2 and 3.
All values in GHz (except for the extinction and $E_J/E_C$).} \end{table} The extinction efficiency of sample 2 is much better than that of sample 1. This is because sample 1 has a low $E_J/E_C\sim21.6$, which is barely in the transmon limit. For this value of $E_J/E_C$, charge noise still plays an important role, as the energy of the 0-1 transition is still charge dependent \cite{Koch}. For sample 1 we find a charge dispersion of 7\,MHz, and the dephasing is dominated by charge noise. By increasing $E_J/E_C$ to 31, we see much less dephasing in sample 2, which gives nearly perfect extinction of propagating resonant microwaves. Note that the anharmonicity between $\omega_{10}$ and $\omega_{21}$ of sample 2 is close to $E_C$. This is not quite the case for sample 1 due to its low $E_J/E_C$ \cite{Koch}. \begin{figure*} \includegraphics[width=\columnwidth]{Figure3} \caption{ $|t|$ as a function of $\Phi$ for sample 1. a) At weak probe power, where $\Omega_p\ll\gamma_{10}$. The black curve is the theory fit to the 0-1 transition. b) At high probe power, where $\Omega_p\gg\gamma_{10}$. The black and blue curves correspond to the 0-1 and 1-2 transitions, respectively. The grey curve is the two-photon (0-2) transition. The red dashed line indicates the flux bias point and the corresponding $\omega_{10}$, $\omega_{20}/2$, $\omega_{21}$ for Fig. 2 and Fig. 4a, c and d. There is a stray resonance around 6.1 GHz.} \end{figure*} \section{Mollow Triplet and Autler-Townes Splitting} As we mentioned in the previous section, the transmon also has higher-level transitions; in particular, we are interested in the 1-2 transition at frequency $\omega_{21}$. By using two-tone spectroscopy, the $\omega_{21}$ transition can be directly measured. We can saturate the $\omega_{10}$ transition by applying a pump field at $\omega_{10}=7.1$ GHz, and measure the transmission properties using a weak probe at $\omega_{p}$.
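The level-structure claims above can be checked with a short numerical sketch (our own helper function; the leading-order transmon expressions are an approximation, not the 50-charge-state fit used in the paper):

```python
import math

def f10_f21(EJ_GHz, EC_GHz):
    """Leading-order transmon levels: f10 ~ sqrt(8*EJ*EC) - EC and
    f21 ~ sqrt(8*EJ*EC) - 2*EC, so the anharmonicity f10 - f21 ~ EC."""
    fp = math.sqrt(8.0 * EJ_GHz * EC_GHz)
    return fp - EC_GHz, fp - 2.0 * EC_GHz

# Measured anharmonicities (Table 1): sample 2 is close to EC,
# while sample 1 (low EJ/EC) deviates more, as stated in the text
anh1 = 7.10 - 6.38   # 0.72 GHz vs EC/h = 0.59 GHz
anh2 = 5.13 - 4.74   # 0.39 GHz vs EC/h = 0.35 GHz

# Two-photon (0-2) transition sits midway between the one-photon steps
f02_half = (7.10 + 6.38) / 2.0   # 6.74 GHz, as seen in Fig. 3b
```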
As the pump power is increased, the population of the first excited state increases; therefore, we start to observe photon scattering from the 1-2 transition, which appears as a dip in transmission at $\omega_{p}=\omega_{21}$, see Fig. 4a. The dip in transmission grows until the 0-1 transition becomes fully saturated. From this, we extract $\omega_{21}/2\pi=6.38$\,GHz for sample 1. Therefore, the two-photon (0-2) transition should occur at 6.74 GHz, consistent with the observation in Fig. 3b. The linewidth of the $\omega_{21}$ transition is around 120\,MHz; this dephasing mainly comes from the charge dispersion. Further increasing the pump power at $\omega_{10}$, we observe the well-known Mollow triplet \cite{Astafiev1,Mollow} (Fig. 4b, sample 3). The Rabi splitting of the triplet can be used to calibrate the power at the atom. The Mollow triplet can be explained in the dressed-state picture, where each of the two lowest levels splits by the Rabi frequency. These four states give three different transitions, indicated by the red, brown and blue arrows in the inset of Fig. 4b. Note that the way we observe the triplet here is different from that in \cite{Astafiev1}: we probe the transmission of these triplet transitions instead of looking at the emission spectrum. We see that the center transition is much less visible, because we pump at the frequency which saturates the transition. \begin{figure*} \includegraphics[width=\columnwidth]{Figure4} \caption{ Two-tone spectroscopy. a) As the frequency of a weak probe field is swept, a second microwave drive is continuously applied at $\omega_{10}$ with increasing powers. We see another dip gradually appear in the probe transmission response. b) $T$ as a function of probe frequency and pump power. As the power at $\omega_{10}$ further increases, we see the Mollow triplet. The dashed lines indicate the calculated positions of the triplet. (Inset) Schematic of the triplet transitions in the dressed-state picture.
Note that we use sample 3, where $\omega_{10}/2\pi=4.88$ GHz. c) Magnitude response of the probe as a second microwave drive is applied at $\omega_{21}$ with variable power, $P_c$. As $P_c$ increases, we see induced transmission at $\omega_p=\omega_{10}$. With a strong drive applied, the Autler-Townes splitting appears with magnitude $\Omega_{c}/2\pi$ (black dashed lines). d) Phase response of the probe. } \end{figure*} With a weak resonant probe field, $\Omega_p\ll\gamma_{10}$, $\omega_p=\omega_{10}$, and a strong resonant, $\omega_c=\omega_{21}$, control field, the 0-1 resonance dip splits by $\Omega_c$ \cite{Abdumalikov}; this is known as the Autler-Townes splitting (ATS) \cite{Autler}. The magnitude and phase response are shown in Fig. 4c and 4d, respectively. In the magnitude response, we see that the transmon becomes transparent for the probe at $\omega_{p}=\omega_{10}$ at sufficiently high control power. In the phase response, we see that the probe phase, $\varphi_p$, depends on the control power, $P_c$. In the following application section, we demonstrate two devices based on these effects which could be utilized in a microwave quantum network. By making use of the ATS, we demonstrate a router for single photons. By using the high nonlinearity of the atom, we demonstrate a photon-number filter, where we convert classical coherent microwaves to a nonclassical microwave field. \section{Applications} \subsection{The Single-Photon Router} The operating principle of the single-photon router is as follows. In the time domain (see Fig. 5a), we input a constant weak probe in the single-photon regime, $\left \langle N_p \right \rangle\ll 1$, at $\omega_{p}=\omega_{10}$. We then apply a strong control pulse, around 30\,dB above the probe power, at the $\omega_{21}$ frequency. When the control is off, the probe photons are reflected by the atom, and delivered to output port 1.
When the control is on, the probe photons are transmitted based on the ATS, and delivered to output port 2. We measure the reflected and transmitted probe power simultaneously in the time domain. This is crucial to verify that the microwave photon transport is a fully coherent process, i.e., that the transmission dip seen in Fig. 2a arises because the photons are being reflected (not dissipated). Note that we are phase sensitive, since we measure $\left \langle V\right \rangle^2$ rather than $\left \langle V^2 \right \rangle$; this means that we are only sensitive to the coherent part of the signal. The experimental setup is shown in Fig. 5a. \begin{figure*} \includegraphics[width=\columnwidth]{Figure5} \caption{The single-photon router, data for sample 2. (a) Measurement setup and the control pulse sequence. A strong control pulse at $\omega_{c}=\omega_{21}$ is used to route a weak continuous microwave at $\omega_p=\omega_{10}$. Depending on whether the control pulse is on or off, the probe field is delivered to output port 2 or 1, respectively. (b), (c) Normalized on-off ratio (see text) of the transmittance (T) and reflectance (R) of $\omega_{p}$, measured simultaneously. b) The control pulse is shaped as a square pulse with $1\,\mu s$ duration. c) A Gaussian pulse with a duration of 10\,ns; we see up to a 99$\%$ on-off ratio. The black curve in (c) is a Gaussian fit to the data. } \end{figure*} The results are shown in Fig. 5b,c with two different control pulses for sample 2. In Fig. 5b (c), we use a square (Gaussian) control pulse with a duration of 1 $\mu s$ (10 ns). As expected, when the control signal is on, the probe power of the transmitted signal is increased and we see a corresponding decrease in the reflected probe signal. A $99\%$ probe on-off ratio is achieved in both reflection and transmission for sample 2. We also see that the on-off ratio does not depend on the control time. In Fig.
5c, the time resolution of our digitizer detector/arbitrary waveform generator is 5 ns, which prevents us from accurately measuring pulses shorter than about 10 ns. The response follows the waveform of the control pulse nicely. The ringing appearing in Fig. 5b is an artifact of the digitizer. In the setup of Fig. 5a, we send $\omega_{10}$ and $\omega_{21}$ in opposite directions with respect to each other. We can also send the pulses in the same direction by using a microwave combiner at one of the input ports and get the same results, as expected. Note that we use the on-off ratio here \cite{Hoi}, because it is not possible to do a full calibration of the background reflections in the line and the leakage through circulator 1 [Fig. 5(a)]. The switching speed of our router for sample 1 (2) is predicted to be $1/\Gamma_{10}\sim 2$\,ns (4\,ns). We show that the router works well down to the time limit of our instruments. By engineering the relaxation rate, it should be possible to achieve switching times in the sub-nanosecond regime. In addition, the routing efficiency, $R=|r_0|^2$, can be improved by further reducing $\Gamma_{\phi}$. The improvement in sample 2 compared to sample 1 was achieved by increasing the $E_J/E_C$ ratio, which reduced the sensitivity to charge noise and therefore the dephasing. Our router can also easily be cascaded to distribute photons to many output channels. Fig. 6a shows 4 atoms (A,B,C,D) in series, each separated by a circulator. The $\omega_{10}$ of the atoms are the same, while the $\omega_{21}$ are different. This arrangement can be designed in a straightforward manner by controlling the ratio $E_J/E_C$. By turning on and off control tones at the various 1-2 transition frequencies of the different atoms, we can determine the output channel of the probe field, according to the table of Fig. 6b. For instance, if we want to send the probe field to channel 4, we apply three control tones at $\omega_{21,A}$, $\omega_{21,B}$, $\omega_{21,C}$.
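The routing table of Fig. 6b reduces to a simple rule, sketched below with illustrative helper names (the strings "w21,X" stand for the tone frequencies $\omega_{21,X}$): to reach port $k$, the control tones of the first $k-1$ atoms are switched on, making them transparent via the ATS, and the probe reflects off the next atom in line.

```python
def control_tones(channel, atoms=("A", "B", "C", "D")):
    """Control tones to switch on so the probe exits at a given port of
    the cascaded router. An atom with its control on is transparent
    (ATS), so the probe passes it; the first atom with its control off
    reflects the probe into that port. Ports run from 1 to len(atoms)+1,
    where the last port is full transmission past every atom."""
    if not 1 <= channel <= len(atoms) + 1:
        raise ValueError("no such output channel")
    # to reach port k, make atoms 1..k-1 transparent
    return [f"w21,{a}" for a in atoms[:channel - 1]]

# Example from the text: routing to channel 4 needs tones at A, B and C
tones = control_tones(4)
```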
Note that regardless of the number of output channels, all the control tones and the probe tone can be input through the same input port. Theoretically, the maximum number of output channels depends on the ratio of the anharmonicity to the width of the 1-2 transition, $\gamma_{21}$. Thus, there is a trade-off between efficiency and the number of outputs. \begin{figure*} \includegraphics[width=\columnwidth]{Figure6} \caption{Multiport router. (a) Cartoon of a multiport router: the single-photon router cascaded to many output channels. Here we show a 5-port router using 4 atoms (A,B,C,D) in series, each separated by a circulator. The $\omega_{10}$ of the atoms are the same, while the 1-2 transition frequencies, $\omega_{21,A}\neq\omega_{21,B}\neq\omega_{21,C}\neq\omega_{21,D}$, are different. By turning on and off control tones at the various 1-2 transition frequencies, we can determine the output channel of the probe field, according to the table in b). } \end{figure*} \subsection{The photon-number filter} In Fig. 1b, we demonstrated the nonlinear nature of the artificial atom. This naturally comes from the fact that the atom can only reflect one photon at a time. To reveal the nonclassical character of the reflected field, we investigate its statistics. In particular, in this section, we show that the reflected field is antibunched \cite{Chang}. In addition, we also show that the transmitted field is superbunched \cite{Chang}. The incident coherent state can be written as a superposition of photon-number states with a Poissonian distribution. For a weak probe field with $\left \langle N_p \right \rangle<0.5$, this coherent field can be approximated using a basis of the first three photon-number states. A one-photon incident state is reflected by the atom, leading to antibunching in the reflected field.
Together with the zero-photon state, the reflected field still maintains first-order coherence, as there is a well-defined phase between the zero- and one-photon states. Because the atom is not able to scatter more than one photon at a time, a two-photon incident state has a much higher probability of transmission, leading to superbunching in the transmitted field \cite{Zheng, Chang}. In this sense, our single artificial atom acts as a photon-number filter, which filters out and reflects the one-photon number state from a coherent state. This process leads to a photon-number redistribution between the reflected and transmitted fields \cite{Zheng}. A schematic illustration of the measurement setup is shown in Fig. 7a. This allows us to measure Hanbury Brown-Twiss \cite{HANBURY} type power-power correlations. We apply a resonant coherent microwave field at $\omega_{p}=\omega_{10}$; depending on whether we send the input through circulator 1 or 2, we measure the statistics of the reflected or transmitted field, respectively. The signal then propagates to a beam splitter, which in the microwave domain is realized by a hybrid coupler, where the outputs of the beam splitter are connected to two nominally identical HEMT amplifiers with system noise temperatures of approximately 7 K. We assume that the amplifier noise generated in the two independent detection chains is uncorrelated. After further amplification, the two voltage amplitudes of the outputs are captured by a pair of vector digitizers.
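The truncation argument can be checked numerically. This is an illustrative sketch (our own helper function, not the paper's analysis): for $\left \langle N_p \right \rangle = 0.4$, the zero-, one- and two-photon components carry over 99% of the weight, while the full Poissonian distribution reproduces the coherent-state reference value $g^{(2)}(0)=1$ used below.

```python
import math

def poisson_pn(mean_n, n):
    """Photon-number distribution P(n) of a coherent state with <n> = mean_n."""
    return math.exp(-mean_n) * mean_n ** n / math.factorial(n)

mean_n = 0.4
p = [poisson_pn(mean_n, n) for n in range(4)]
weight_012 = p[0] + p[1] + p[2]   # weight carried by |0>, |1>, |2>

# The full Poissonian gives g2(0) = <n(n-1)>/<n>^2 = 1, the coherent-state
# value; truncating the sum at n = 40 is numerically exact here
g2_coherent = sum(n * (n - 1) * poisson_pn(mean_n, n)
                  for n in range(40)) / mean_n ** 2
```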
The second-order correlation function \cite{Loudon} provides a statistical tool to characterize the field; it can be expressed as \begin{eqnarray*} g^{(2)}(\tau)=1+\frac{\left \langle \Delta P_1(t)\Delta P_2(t+\tau)\right \rangle}{[\left \langle P_1(t) \right \rangle - \left \langle P_{1,N}(t) \right \rangle][\left \langle P_2(t)\right \rangle-\left \langle P_{2,N}(t) \right \rangle]}, \end{eqnarray*} where $\tau$ is the delay time between the two digitizers and $P_1,P_2$ are the output powers in ports 1 and 2, respectively, see Fig. 7a. $P_{1,N},P_{2,N}$ are the amplifier noise powers in ports 1 and 2, respectively, measured when the incident source is off. Therefore, $[\left \langle P_i(t)\right \rangle-\left \langle P_{i,N}(t) \right \rangle]$ represents the net power of the field from output port $i$, where $i=1,2$. $\left<\Delta P_1\Delta P_2\right>$ is the covariance of the output powers in ports 1 and 2, defined as $\left<(P_1- \left< P_1\right>)(P_2- \left< P_2\right>)\right>$. \begin{figure*} \includegraphics[width=\columnwidth]{Figure7} \caption{Second-order correlation function of the reflected field generated by the artificial atom (sample 2). a) A schematic illustration of the physical setup, including circulators (labelled 1, 2) and the hybrid coupler which acts as a beam splitter for the Hanbury Brown-Twiss measurements \cite{HANBURY}. b) $g^{(2)}$ of a resonant reflected field as a function of delay time. We see the antibunched behavior of the reflected field. Inset: $g^{(2)}(0)$ as a function of incident power. Complete theory including all four non-idealities: black curve, BW = 55 MHz at 50 mK; partial theory including finite temperature and BW, but not leakage and trigger jitter (see text): green curve, BW = 55 MHz at 0 mK, and blue curve, BW = 1 GHz at 0 mK. As the BW decreases or the incident power increases, the degree of antibunching decreases. The error bar indicated for each data set (markers) is the same for all the points.
c) Influence of BW, temperature, leakage and jitter on antibunching. The solid curves in b) and c) are the theory curves. For the curves with leakage, we assume the phase between the leakage field and the field reflected by the atom is $0.37\pi$. } \end{figure*} We had a trigger jitter of $\pm1$ sample between the two digitizers. To minimize the effect of this trigger jitter, we oversample and then digitally filter (average) the data in all the $g^{(2)}$ measurements. Here, the sampling frequency is set to $10^8$ samples/s with a digital filter of bandwidth BW = 55 MHz applied to each digitizer for all measurements. For a coherent state, with the qubit detuned from $\omega_{10}$, we find $g^{(2)}(\tau)=1$. In Fig. 7b, we plot the measured $g^{(2)}(\tau)$ of the field reflected from our atom. At low powers, where $\left \langle N_p \right \rangle\ll 1$, we clearly observe antibunching of the field \cite{Chang}. The trace here was averaged over $2.4\times10^{11}$ measured quadrature field samples. The antibunching at $P_p=-131$ dBm ($\left \langle N_p \right \rangle\sim0.4$), $g^{(2)}(0)=0.55\pm 0.04$, reveals the nonclassical character of the field. Ideally, we would find $g^{(2)}(0)=0$, as the atom can only reflect one photon at a time. The non-zero $g^{(2)}(0)$ we measure originates from four effects: 1) a thermal field at 50 mK temperature, 2) a finite filter BW, 3) trigger jitter between the two digitizers and 4) stray fields, including background reflections in the line and leakage through circulator 1 [Fig. 7(a)]. The complete theory curves include all four non-idealities; the partial theory curves include 1) and 2), but not 3) and 4). The effects of these factors on the measured antibunching are shown in the theory plot of Fig. 7c. For a small BW, the sampling time is long and the atom is able to scatter multiple photons within it. If BW $\ll\Gamma_{10}, \Omega_p$, the antibunching dip we measure vanishes entirely.
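The covariance estimator defined above can be sketched in a few lines (our own function and variable names; the real analysis additionally involves the digital filter and the jitter correction described in the text):

```python
import numpy as np

def g2(P1, P2, N1, N2, tau_samples=0):
    """Estimate g2(tau) from simultaneously sampled output powers P1, P2
    of the two detection chains, given amplifier-noise power records
    N1, N2 taken with the source off."""
    P1 = np.asarray(P1, float)
    P2 = np.asarray(P2, float)
    if tau_samples:
        P1, P2 = P1[:-tau_samples], P2[tau_samples:]
    cov = np.mean((P1 - P1.mean()) * (P2 - P2.mean()))
    net1 = P1.mean() - np.mean(N1)   # net signal power, port 1
    net2 = P2.mean() - np.mean(N2)   # net signal power, port 2
    return 1.0 + cov / (net1 * net2)

# Sanity check: uncorrelated power fluctuations in the two chains
# (coherent-state-like input) give g2 ~ 1
rng = np.random.default_rng(0)
P1 = 1.0 + 0.01 * rng.standard_normal(100000)
P2 = 1.0 + 0.01 * rng.standard_normal(100000)
val = g2(P1, P2, [0.0], [0.0])
```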
This interplay between BW and $\Omega_p$ yields a power-dependent $g^{(2)}(0)$, as shown in the inset of Fig. 7b. In the ideal case, i.e., for a sufficiently wide BW (1 GHz) at 0 mK, the theory gives $g^{(2)}(0)=0$, as expected. In Fig. 8a, we see superbunching of the photons \cite{Chang}, with $g^{(2)}(\tau=0)=2.31\pm 0.09>2$ at $P_p=-129$ dBm ($\left \langle N_p \right \rangle\simeq 0.8$) for the transmitted field. Superbunching occurs because the one-photon state of the incident field has been selectively reflected, and thus filtered out of the transmitted signal, while the two-photon state is more likely to be transmitted. The three-photon and higher number states are negligible. The transmitted state generated from our qubit is thus bunched even more than a thermal state, which has $g^{(2)}_{therm}(\tau=0)=2$. Fig. 8b shows the theoretical curves of $g^{(2)}(\tau)$ for the transmitted field under the influence of various effects. For the case of BW = 1 GHz at 0 mK, indicated by the black curve, $g^{(2)}$ exhibits very strong bunching at $\tau=0$. At a later $\tau$ ($\sim 15$ ns), $g^{(2)}$ for the transmitted field even appears antibunched \cite{Chang}; this is, however, not resolved in the experimental data. For the other curves, we see the degradation of the superbunching due to the influence of BW, temperature and jitter. In Fig. 8c, we plot $g^{(2)}(0)$ as a function of incident power, and clearly see the (super)bunching behavior decrease as the incident power increases. For high powers, where $\left \langle N_p \right \rangle\gg1$, we find $g^{(2)}(\tau)=1$. This is because most of the coherent signal then passes through the transmission line without interacting with the qubit, owing to saturation of the atomic response. We also plot the theoretical curves (blue) at 0 mK for two different BW. \begin{figure*} \includegraphics[width=\columnwidth]{Figure8} \caption{Second-order correlation function of the transmitted field generated by the artificial atom (sample 2).
a) $g^{(2)}$ of the resonant transmitted microwaves as a function of delay time for five different incident powers. The peculiar feature of $g^{(2)}$ around zero in the theory curves in a) is due to the trigger jitter model. b) Influence of BW, temperature and jitter on superbunching. c) $g^{(2)}(0)$ of the resonant transmitted field as a function of incident power. The result for a coherent state is also plotted. We see that the transmitted field statistics (red curve) approach those of a coherent field at high incident power, as expected. For BW = 1 GHz at 0 mK, we see very strong bunching at low incident power. The error bar indicated for each data set (markers) is the same for all the points. The solid curves in a), b) and c) are the theory curves. For all measurements shown here we find $g^{(2)}(\infty)=1$, as expected. } \end{figure*} A single-mode resonator is used to model the digital filter. The theoretical curves in Figs. 7 and 8 are based on a master equation describing both the transmon and the resonator, using the formalism of cascaded quantum systems \cite{Peropadre1}. The trigger jitter is modeled as follows: the value of $g^{(2)}(\tau)$ at each point is replaced by the average of $g^{(2)}(\tau-10\,\mathrm{ns})$, $g^{(2)}(\tau)$ and $g^{(2)}(\tau+10\,\mathrm{ns})$. We extract 50 mK from all these fits, with no additional free fitting parameters. As we have shown, the single artificial atom acts as a photon-number filter, which selectively filters out the one-photon number state from a coherent state. This provides a novel way to generate single microwave photons \cite{Mallet,Bozyigit,Wilson4}. \section{Discussion} Microwave quantum optics with a single artificial atom opens up a novel way of building a quantum network based on superconducting circuits. In such a system, superconducting processors can act as quantum nodes, linked by quantum channels that transfer flying photons (quantum information) from site to site on chip with high fidelity.
In this way, the single-photon router can switch quantum information on nanosecond timescales with 99\% efficiency, with the possibility of multiple outputs. The photon-number filter can act as a source of flying microwave photons. These components have the advantage of a wide frequency range compared to cavity-based systems \cite{Stojan,Bozyigit,Sandberg}. In addition, the recent development of a cross-Kerr phase shifter at the single-photon level based on superconducting circuits is also beneficial for a microwave quantum network \cite{Hoi3}. While microwave quantum optics with artificial atoms is a promising technology for quantum information processing, optical photons have clear advantages for long-distance quantum communication via a quantum channel. The development of a hybrid quantum network would combine the advantages of both systems. The early stages of an optical-microwave interface have been demonstrated \cite{Kubo1,Kubo2,Matthias}, with other potential coupling mechanisms under investigation \cite{Kielpinski,Wang,Kim}. \section{Summary} Based on superconducting circuits, we study various fundamental quantum optical effects with a single artificial atom, such as photon scattering, the Mollow triplet and the Autler-Townes splitting. We further demonstrate two potential elements for an on-chip quantum network: the single-photon router and the photon-number filter. \section{Acknowledgments} We acknowledge financial support from the EU through the ERC and the project PROMISCE, from the Swedish Research Council and the Wallenberg foundation. B.P. acknowledges support from CSIC JAE-PREDOC2009 Grant. We would also like to acknowledge Thomas M. Stace, Bixuan Fan, G. J. Milburn and O. Astafiev for fruitful discussions. \section{References}
\newcommand{\sect}[1]{\setcounter{equation}{0}\section{#1}} \renewcommand{\theequation}{\thesection.\arabic{equation}} \newcommand{\subsect}[1]{\setcounter{equation}{0}\subsection{#1}} \title{Leibniz Rules and Reality Conditions} \author{Gaetano Fiore$^1$, \ John Madore$^{2,3}$ \and $\strut^1$Dip. di Matematica e Applicazioni, Fac. di Ingegneria\\ Universit\`a di Napoli, V. Claudio 21, 80125 Napoli \and $\strut^2$Max-Planck-Institut f\"ur Physik (Werner-Heisenberg-Institut)\\ F\"ohringer Ring 6, D-80805 M\"unchen \and $\strut^3$Laboratoire de Physique Th\'eorique et Hautes Energies\\ Universit\'e de Paris-Sud, B\^atiment 211, F-91405 Orsay } \date{} \maketitle \abstract{An analysis is made of reality conditions within the context of noncommutative geometry. We show that if a covariant derivative satisfies a given left Leibniz rule then a right Leibniz rule is equivalent to the reality condition. We show also that the matrix which determines the reality condition must satisfy the Yang-Baxter condition if the extension of the covariant derivative to tensor products is to satisfy the reality condition. This is equivalent to the braid condition for the matrix which determines the right Leibniz rule.} \vfill \noindent Preprint 98-13, Dip. Matematica e Applicazioni, Universit\`a di Napoli\\ \medskip \eject \parskip 4pt plus2pt minus2pt \sect{Introduction and motivation} In noncommutative geometry (or algebra), reality conditions are not as natural as they can be in the commutative case; the product of two hermitian elements is no longer necessarily hermitian. The product of two hermitian differential forms is also not necessarily hermitian. It is our purpose here to analyze this problem in some detail.
If the reality condition is to be extended to a covariant derivative then we shall show that there is a unique correspondence between its existence and the existence of a left and right Leibniz rule. We shall show also that the matrix which determines the reality condition must satisfy the Yang-Baxter condition if the extension of the covariant derivative to tensor products is to be well-defined. This is equivalent to the braid condition for the matrix which determines the right Leibniz rule. This condition is necessary when discussing the reality of the curvature form. There is not as yet a completely satisfactory definition of either a linear connection or a metric within the context of noncommutative geometry, but there are definitions which seem to work in certain cases. In the present article we choose one particular definition~\cite{DubMadMasMou96}. We refer to a recent review article~\cite{Mad97} for a list of some other examples and references to alternative definitions. More details of one alternative version can be found, for example, in the book by Landi~\cite{Lan97}. For a general introduction to the more mathematical aspects of the subject we refer to the book by Connes~\cite{Con94}. Although we expect our results to have a more general validity, we shall prove them only in a particular version of noncommutative geometry which can be considered as a noncommutative extension of the moving-frame formalism of E. Cartan. This implies that we suppose that the module of 1-forms is free as a right or left module. As a bimodule it will always be projective with one generator, the generalized `Dirac operator'. More details can be found elsewhere~\cite{Mad95, DubMadMasMou96}. We shall use here the expressions `connection' and `covariant derivative' synonymously. In the second section we describe briefly what we mean by the frame formalism and we recall the particular definition of a covariant derivative which we use. In the third section we discuss the reality condition.
We describe here the relation between the map which determines the right Leibniz rule and the map which determines the reality condition. The last section contains a generalization to higher tensor powers. \sect{The frame formalism} The starting point is a noncommutative algebra $\c{A}$ and over $\c{A}$ a differential calculus~\cite{Con94} $\Omega^*(\c{A})$. We recall that a differential calculus is completely determined by the left and right module structure of the $\c{A}$-module of 1-forms $\Omega^1(\c{A})$. We shall restrict our attention to the case where this module is free of rank $n$ as a left or right module and possesses a special basis $\theta^a$, $1\leq a \leq n$, which commutes with the elements $f$ of the algebra: \begin{equation} [f, \theta^a] = 0. \label{fund} \end{equation} In particular, if the geometry has a commutative limit then the associated manifold must be parallelizable. We shall refer to the $\theta^a$ as a `frame' or `Stehbein'. The integer $n$ plays the role of `dimension'; it can be greater than the dimension of the limit manifold but in this case the frame will have a singular limit. We suppose further~\cite{DimMad96} that the basis is dual to a set of inner derivations $e_a = \mbox{ad}\, \lambda_a$. This means that the differential is given by the expression \begin{equation} df = e_a f \theta^a = [\lambda_a, f] \theta^a. \label{defdiff} \end{equation} One can rewrite this equation as \begin{equation} df = -[\theta,f], \label{extra} \end{equation} if one introduces~\cite{Con94} the `Dirac operator' \begin{equation} \theta = - \lambda_a \theta^a. \label{dirac} \end{equation} There is a bimodule map $\pi$ of the space $\Omega^1(\c{A}) \otimes_\c{A} \Omega^1(\c{A})$ onto the space $\Omega^2(\c{A})$ of 2-forms and we can write \begin{equation} \theta^a \theta^b = P^{ab}{}_{cd} \theta^c \otimes \theta^d \label{proj} \end{equation} where, because of (\ref{fund}), the $P^{ab}{}_{cd}$ belong to the center $\c{Z}(\c{A})$ of $\c{A}$. 
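As an aside, the defining property of the inner derivations $e_a = \mbox{ad}\, \lambda_a$ used in~(\ref{defdiff}), namely the Leibniz rule $e_a(fg) = (e_a f)g + f(e_a g)$, is easy to check numerically on a matrix algebra. The following sketch is purely illustrative (the random matrices are stand-ins for elements of $M_4(\b{C})$ and for one generator $\lambda_a$; none of the names below come from the text):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

def rand_c():
    # Random complex matrix standing in for an element of M_n(C).
    return rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

f, g, lam = rand_c(), rand_c(), rand_c()

def e(h):
    # Inner derivation e = ad(lambda): e(h) = [lambda, h].
    return lam @ h - h @ lam

# ad(lambda) is a derivation: e(fg) = e(f) g + f e(g).
assert np.allclose(e(f @ g), e(f) @ g + f @ e(g))
```

Since $e$ is a commutator, it also annihilates the identity matrix, the matrix-algebra analogue of $df = 0$ for constant $f$.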
We shall suppose that the center is trivial, $\c{Z}(\c{A}) = \b{C}$, and therefore the components $P^{ab}{}_{cd}$ are complex numbers. Define the Maurer-Cartan elements $C^a{}_{bc} \in \c{A}$ by the equation \begin{equation} d\theta^a = - {1\over 2} C^a{}_{bc} \theta^b \theta^c. \end{equation} Because of~(\ref{proj}) we can suppose that $C^a{}_{bc} P^{bc}{}_{de} = C^a{}_{de}$. It follows from the equation $d(\theta^a f - f \theta^a) = 0$ that there exist elements $F^a{}_{bc}$ of the center such that \begin{equation} C^a{}_{bc} = F^a{}_{bc} - 2 \lambda_e P^{(ae)}{}_{bc} \label{consis1} \end{equation} where $(ab)$ means symmetrization of the indices $a$ and $b$. If on the other hand we define $K_{ab}$ by the equation \begin{equation} d\theta + \theta^2 = {1\over 2} K_{ab} \theta^a \theta^b, \label{consis2} \end{equation} then it follows from~(\ref{extra}) and the identity $d^2 = 0$ that the $K_{ab}$ must belong to the center. Finally it can be shown~\cite{DimMad96, MadMou98} that in order that~(\ref{consis1}) and (\ref{consis2}) be consistent with one another the original $\lambda_a$ must satisfy the condition \begin{equation} 2 \lambda_c \lambda_d P^{cd}{}_{ab} - \lambda_c F^c{}_{ab} - K_{ab} = 0. \label{manca} \end{equation} This gives the set of $\lambda_a$ the structure of a twisted Lie algebra with a central extension. We propose as definition of a linear connection a map~\cite{Kos60, CunQui95} \begin{equation} \Omega^1(\c{A}) \buildrel D \over \longrightarrow \Omega^1(\c{A}) \otimes_\c{A} \Omega^1(\c{A}) \label{2.2.4} \end{equation} which satisfies both a left Leibniz rule \begin{equation} D (f \xi) = df \otimes \xi + f D\xi \label{2.2.2} \end{equation} and a right Leibniz rule~\cite{DubMadMasMou96} \begin{equation} D(\xi f) = \sigma (\xi \otimes df) + (D\xi) f \label{second} \end{equation} for arbitrary $f \in \c{A}$ and $\xi \in \Omega^1(\c{A})$.
We have here introduced a generalized permutation \begin{equation} \Omega^1(\c{A}) \otimes_\c{A} \Omega^1(\c{A}) \buildrel \sigma \over \longrightarrow \Omega^1(\c{A}) \otimes_\c{A} \Omega^1(\c{A}) \label{2.2.5} \end{equation} in order to define a right Leibniz rule which is consistent with the left one. It is necessarily bilinear. A linear connection is therefore a couple $(D, \sigma)$. It can be shown that a necessary as well as sufficient condition for torsion to be right-linear is that $\sigma$ satisfy the consistency condition \begin{equation} \pi \circ (\sigma + 1) = 0. \label{2.2.6} \end{equation} Using the fact that $\pi$ is a projection one sees that the most general solution to this equation is given by \begin{equation} 1 + \sigma = ( 1 - \pi) \circ \tau \label{2.2.7} \end{equation} where $\tau$ is an arbitrary bilinear map \begin{equation} \Omega^1(\c{A}) \otimes \Omega^1(\c{A}) \buildrel \tau \over \longrightarrow \Omega^1(\c{A}) \otimes \Omega^1(\c{A}). \label{2.2.8} \end{equation} If we choose $\tau = 2$ then we find $\sigma = 1 - 2 \pi$ and $\sigma^2 = 1$. The eigenvalues of $\sigma$ are then equal to $\pm 1$. The map~(\ref{2.2.4}) has a natural extension~\cite{Kos60} \begin{equation} \Omega^*(\c{A}) \buildrel D \over \longrightarrow \Omega^*(\c{A}) \otimes_\c{A} \Omega^1(\c{A}) \label{2.2.4ex} \end{equation} to the entire tensor algebra given by a graded Leibniz rule. This general formalism can be applied in particular to differential calculi with a frame. Since $\Omega^1(\c{A})$ is a free module the maps $\sigma$ and $\tau$ can be defined by their action on the basis elements: \begin{equation} \sigma (\theta^a \otimes \theta^b) = S^{ab}{}_{cd} \theta^c \otimes \theta^d, \qquad \tau (\theta^a \otimes \theta^b) = T^{ab}{}_{cd} \theta^c \otimes \theta^d.
\label{2.2.9} \end{equation} By the sequence of identities \begin{equation} f S^{ab}{}_{cd} \theta^c \otimes \theta^d = \sigma (f \theta^a \otimes \theta^b) = \sigma (\theta^a \otimes \theta^b f) = S^{ab}{}_{cd} f \theta^c \otimes \theta^d \label{2.2.10} \end{equation} and the corresponding ones for $T^{ab}{}_{cd}$ we conclude that the coefficients $S^{ab}{}_{cd}$ and $T^{ab}{}_{cd}$ must lie in $\c{Z}(\c{A})$. From~(\ref{2.2.7}) the most general form for $S^{ab}{}_{cd}$ is \begin{equation} S^{ab}{}_{cd} = T^{ab}{}_{ef} (\delta^e_c \delta^f_d - P^{ef}{}_{cd}) - \delta^a_c \delta^b_d. \label{2.2.11} \end{equation} A covariant derivative can be defined also by its action on the basis elements: \begin{equation} D\theta^a = - \omega^a{}_{bc} \theta^b \otimes \theta^c. \label{2.2.12} \end{equation} The coefficients here are elements of the algebra. They are restricted by~(\ref{fund}) and the two Leibniz rules. The torsion 2-form is defined as usual as \begin{equation} \Theta^a = d \theta^a - \pi \circ D \theta^a. \label{deftorsion} \end{equation} If $F^a{}_{bc} = 0$ then it is easy to check~\cite{DubMadMasMou96} that \begin{equation} D_{(0)} \theta^a = - \theta \otimes \theta^a + \sigma (\theta^a \otimes \theta) \label{2.2.14} \end{equation} defines a torsion-free covariant derivative. The most general $D$ for fixed $\sigma$ is of the form \begin{equation} D = D_{(0)} + \chi \label{2.2.15} \end{equation} where $\chi$ is an arbitrary bimodule morphism \begin{equation} \Omega^1(\c{A}) \buildrel \chi \over \longrightarrow \Omega^1(\c{A}) \otimes \Omega^1(\c{A}). \label{2.2.16} \end{equation} If we write \begin{equation} \chi (\theta^a) = - \chi^a{}_{bc} \theta^b \otimes \theta^c \label{2.2.17} \end{equation} we conclude that $\chi^a{}_{bc} \in \c{Z}(\c{A})$. In general a covariant derivative is torsion-free provided the condition \begin{equation} \omega^a{}_{de} P^{de}{}_{bc} = {1\over 2}C^a{}_{bc} \label{2.2.19} \end{equation} is satisfied.
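In the simplest commutative-limit situation the projection $\pi$ is the antisymmetrizer and the choice $\tau = 2$ makes $\sigma$ the ordinary flip. The following numerical sketch (illustrative only; it assumes this particular $\pi$, not the general one) confirms that $\sigma = 1 - 2\pi$ then satisfies $\sigma^2 = 1$ and the torsion condition $\pi \circ (\sigma + 1) = 0$:

```python
import numpy as np

n = 3
d = n * n
Id = np.eye(d)

# Flip operator E on C^n (x) C^n:  E(v (x) w) = w (x) v.
E = np.eye(d).reshape(n, n, n, n).transpose(1, 0, 2, 3).reshape(d, d)

# Commutative-limit choice of the projection pi onto 2-forms:
# the antisymmetrizer P^{ab}_{cd} = (delta^a_c delta^b_d - delta^a_d delta^b_c)/2.
pi = 0.5 * (Id - E)
assert np.allclose(pi @ pi, pi)           # pi is a projection

# The choice tau = 2 gives sigma = 1 - 2 pi, which here is just the flip E.
sigma = Id - 2 * pi
assert np.allclose(sigma, E)
assert np.allclose(sigma @ sigma, Id)     # sigma^2 = 1, eigenvalues +-1
assert np.allclose(pi @ (sigma + Id), 0)  # torsion condition pi o (sigma + 1) = 0
```

In the noncommutative case $\sigma$ need not square to the identity; the check only verifies the classical endpoint of the construction.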
The covariant derivative~(\ref{2.2.15}) is torsion-free if and only if \begin{equation} \pi \circ \chi = 0. \label{2.2.20} \end{equation} One can define a metric by the condition \begin{equation} g(\theta^a \otimes \theta^b) = g^{ab} \label{2.2.21} \end{equation} where the coefficients $g^{ab}$ are elements of $\c{A}$. To be well defined on all elements of the tensor product $\Omega^1(\c{A}) \otimes_\c{A} \Omega^1(\c{A})$ the metric must be bilinear and by the sequence of identities \begin{equation} f g^{ab} = g(f \theta^a \otimes \theta^b) = g(\theta^a \otimes \theta^b f) = g^{ab} f \label{2.2.22} \end{equation} one concludes that the coefficients must lie in $\c{Z}(\c{A})$. We define the metric to be symmetric if \begin{equation} g \circ \sigma \propto g. \end{equation} This is a natural generalization of the situation in ordinary differential geometry where symmetry is with respect to the flip which defines the forms. If $g^{ab} = g^{ba}$ then by a linear transformation of the original $\lambda_a$ one can make $g^{ab}$ the components of the Euclidean (or Minkowski) metric in dimension $n$. It will not necessarily then be symmetric in the sense that we have just used the word. The covariant derivative~(\ref{2.2.12}) is compatible with the metric if and only if~\cite{DimMad96} \begin{equation} \omega^a{}_{bc} + \omega_{cd}{}^e S^{ad}{}_{be} = 0. \label{2.2.23} \end{equation} This is a `twisted' form of the usual condition that $g_{ad}\omega^d{}_{bc}$ be antisymmetric in the two indices $a$ and $c$ which in turn expresses the fact that for fixed $b$ the $\omega^a{}_{bc}$ form a representation of the Lie algebra of the Euclidean group $SO(n)$ (or the Lorentz group). When $F^a{}_{bc} = 0$ the condition that~(\ref{2.2.12}) be metric compatible can be written~\cite{DimMad96} as \begin{equation} S^{ae}{}_{df} g^{fg} S^{bc}{}_{eg} = g^{ab} \delta^c_d.
\label{2.2.24} \end{equation} Introduce the standard notation $\sigma_{12} = \sigma \otimes 1$, $\sigma_{23} = 1 \otimes \sigma$, to extend to three factors of a module any operator $\sigma$ defined on a tensor product of two factors. Then there is a natural continuation of the map (\ref{2.2.4}) to the tensor product $\Omega^1(\c{A}) \otimes_\c{A} \Omega^1(\c{A})$ given by the map \begin{equation} D_2(\xi \otimes \eta) = D\xi \otimes \eta + \sigma_{12} (\xi \otimes D\eta). \label{2.2.4e} \end{equation} The map $D_2 \circ D$ has no nice properties but if one introduces the notation $\pi_{12} = \pi \otimes 1$ then by analogy with the commutative case one can set \begin{equation} D^2 = \pi_{12} \circ D_2 \circ D \end{equation} and formally define the curvature as the map \begin{equation} \mbox{Curv}:\, \Omega^1(\c{A}) \longrightarrow \Omega^2(\c{A}) \otimes_\c{A} \Omega^1(\c{A}) \label{curv} \end{equation} given by $\mbox{Curv} = D^2$. This coincides with the composition of the first two maps of the series of~(\ref{2.2.4ex}). Because of the condition (\ref{2.2.6}) Curv is left linear. It can be written out in terms of the frame as \begin{equation} \mbox{Curv} (\theta^a) = - {1 \over 2} R^a{}_{bcd} \theta^c \theta^d \otimes \theta^b. \label{2.16} \end{equation} Similarly one can define a Ricci map \begin{equation} \mbox{Ric} (\theta^a) = {1 \over 2} R^a{}_{bcd} \theta^c g(\theta^d \otimes \theta^b). \label{2.17} \end{equation} It is given by \begin{equation} \mbox{Ric} \, (\theta^a) = R^a{}_b \theta^b. \label{2.18} \end{equation} The above definition of curvature is not satisfactory in the noncommutative case~\cite{DubMadMasMou96}. For example, from~(\ref{2.16}) one sees that Curv can only be right linear if $R^a{}_{bcd} \in \c{Z}(\c{A})$. The curvature $\mbox{Curv}_{(0)}$ of the covariant derivative $D_{(0)}$ defined in (\ref{2.2.14}) can be readily calculated.
One finds after a short calculation that it is given by the expression \begin{equation} \mbox{Curv}_{(0)} (\theta^a) = \theta^2 \otimes \theta^a + \pi_{12} \sigma_{12}\sigma_{23} \sigma_{12} (\theta^a \otimes \theta \otimes \theta). \label{2.19} \end{equation} If $\xi = \xi_a \theta^a$ is a general 1-form then since Curv is left linear one can write \begin{equation} \mbox{Curv}_{(0)} (\xi ) = \xi_a \theta^2 \otimes \theta^a + \pi_{12} \sigma_{12}\sigma_{23} \sigma_{12} (\xi \otimes \theta \otimes \theta). \label{2.20} \end{equation} The lack of right-linearity of Curv is particularly evident in this last formula. \sect{The involution} Suppose now that $\c{A}$ is a $*$-algebra. We would like to choose the differential calculus such that the reality condition $(df)^* = df^*$ holds. This can at times be difficult~\cite{olezu}. We must require that the derivations $e_a$ satisfy the reality condition \begin{equation} (e_a f^*)^* = e_a f \end{equation} which in turn implies that the $\lambda_a$ are antihermitian. One finds that for general $f \in \c{A}$ and $\xi \in \Omega^1(\c{A})$ one has \begin{equation} (f \xi)^* = \xi^* f^*, \qquad (\xi f)^* = f^* \xi^*. \end{equation} From the duality condition we find that \begin{equation} (\theta^a)^* = \theta^a, \qquad \theta^* = - \theta. \label{RealTheta} \end{equation} There are elements $I^{ab}{}_{cd}, J^{ab}{}_{cd} \in \c{Z}(\c{A})$ such that \begin{equation} (\theta^a \theta^b)^* = \imath(\theta^a \theta^b) = I^{ab}{}_{cd} \theta^c \theta^d, \qquad (\theta^a \otimes \theta^b)^* = \jmath_2(\theta^a \otimes \theta^b) = J^{ab}{}_{cd} \theta^c \otimes \theta^d. \end{equation} We can suppose that \begin{equation} I^{ab}{}_{cd} P^{cd}{}_{ef} = I^{ab}{}_{ef}. \end{equation} We have then \begin{equation} (I^{ab}{}_{cd})^* I^{cd}{}_{ef} = P^{ab}{}_{ef}, \qquad (J^{ab}{}_{cd})^* J^{cd}{}_{ef} = \delta^a_e \delta^b_f. 
\end{equation} The compatibility condition with the product \begin{equation} \pi \circ \jmath_2 = \imath \circ \pi \end{equation} becomes \begin{equation} (P^{ab}{}_{cd})^* J^{cd}{}_{ef} = I^{ab}{}_{cd} P^{cd}{}_{ef} = I^{ab}{}_{ef}. \label{compatibility} \end{equation} But since the frame is hermitian and associated to derivations we have from~(\ref{defdiff}) \begin{equation} (dfdg)^* = [d(f\,dg)]^* = d(f\,dg)^* = d(dg^*\,f) = - dg^*df^* \end{equation} for arbitrary $f$ and $g$. It follows that \begin{equation} (e_a f e_b g)^* I^{ab}{}_{cd} \theta^c \theta^d = - e_b g^* e_a f^* \theta^b \theta^a = - (e_a f e_b g)^* \theta^b \theta^a \end{equation} and we must conclude that \begin{equation} I^{ab}{}_{cd} = - P^{ba}{}_{cd}. \label{I-P} \end{equation} It can be shown~\cite{MadMou98} that the right-hand side satisfies a weak form of the Yang-Baxter equation, which would imply some sort of braid condition on the left-hand side. The compatibility condition with the product implies then that \begin{equation} (P^{ab}{}_{cd})^* P^{dc}{}_{ef} = P^{ba}{}_{ef}. \end{equation} For general $\xi, \eta \in \Omega^1(\c{A})$ it follows from~(\ref{I-P}) that \begin{equation} (\xi \eta)^* = - \eta^* \xi^*. \label{antisym} \end{equation} In particular \begin{equation} (\theta^a \theta^b)^* = - \theta^b \theta^a. \end{equation} The product of two frame elements is then hermitian if and only if they anticommute. More generally one can extend the involution to the entire algebra of forms by setting \begin{equation} (\alpha \beta)^* = (-1)^{pq} \beta^* \alpha^* \label{sign} \end{equation} if $\alpha \in \Omega^p(\c{A})$ and $\beta \in \Omega^q(\c{A})$. When the frame exists one has necessarily also the relations \begin{equation} (f \xi \eta)^* = (\xi \eta)^* f^*, \qquad (f \xi \otimes \eta)^* = (\xi \otimes \eta)^* f^* \end{equation} for arbitrary $f \in \c{A}$.
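For the commutative-limit choice in which $P^{ab}{}_{cd}$ is the antisymmetrizer (so that $I^{ab}{}_{cd} = -P^{ba}{}_{cd}$ with $P$ real), the compatibility relations above can be verified numerically. The sketch below treats only this special case and uses index contractions via `einsum`:

```python
import numpy as np

n = 3
I2 = np.eye(n)

# Commutative-limit P (the antisymmetrizer), as a rank-4 tensor P[a, b, c, d].
P = 0.5 * (np.einsum('ac,bd->abcd', I2, I2) - np.einsum('ad,bc->abcd', I2, I2))

# I^{ab}_{cd} = -P^{ba}_{cd}; here P is real, so conjugation is trivial.
Itens = -P.transpose(1, 0, 2, 3)

# Compatibility with the product: (P^{ab}_{cd})^* P^{dc}_{ef} = P^{ba}_{ef}.
assert np.allclose(np.einsum('abcd,dcef->abef', P.conj(), P),
                   P.transpose(1, 0, 2, 3))

# (I^{ab}_{cd})^* I^{cd}_{ef} = P^{ab}_{ef}.
assert np.allclose(np.einsum('abcd,cdef->abef', Itens.conj(), Itens), P)
```

Both identities follow here from $P$ being a real projector that is antisymmetric in each index pair; the noncommutative case is constrained, not fixed, by these relations.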
If in particular $P^{ab}{}_{cd}$ is given by \begin{equation} P^{ab}{}_{cd} = {1 \over 2} (\delta^a_c \delta^b_d - \delta^a_d \delta^b_c) \end{equation} we can choose $\jmath_2$ to be the identity. In this case the $F^c{}_{ab}$ are hermitian and the $K_{ab}$ anti-hermitian elements of $\c{Z}(\c{A})$. An involution can be introduced on the algebra of forms even if they are not defined using derivations~\cite{Con95}. We require that the metric be real; if $\xi$ and $\eta$ are hermitian 1-forms then $g(\xi \otimes \eta)$ should be an hermitian element of the algebra. The reality condition for the metric becomes therefore \begin{equation} g((\xi \otimes \eta)^*) = (g(\xi \otimes \eta))^* \label{reality} \end{equation} and puts further constraints \begin{equation} S^{ab}{}_{cd} g^{cd} = (g^{ba})^* \end{equation} on the matrix of coefficients $g^{ab}$. We shall also require the reality condition \begin{equation} D\xi^* = (D\xi)^* \label{basic} \end{equation} on the connection, which can also be rewritten in the form \begin{equation} D\circ\jmath_1=\jmath_2\circ D. \label{prima} \end{equation} This must be consistent with the Leibniz rules. There is little one can conclude in general but if the differential is based on real derivations then from the equalities \begin{equation} (D(f\xi))^* = D((f\xi)^*) = D(\xi^* f^*) \end{equation} one finds the conditions \begin{equation} (df \otimes \xi)^* + (f D\xi)^* = \sigma(\xi^* \otimes df^*) + (D\xi^*) f^*. \end{equation} Since this must be true for arbitrary $f$ and $\xi$ we conclude that \begin{equation} (df \otimes \xi)^* = \sigma(\xi^* \otimes df^*) \end{equation} and \begin{equation} (f D\xi)^* = (D\xi^*) f^*. \end{equation} We shall suppose~\cite{DubMadMasMou95, KasMadTes97} that the involution is such that in general \begin{equation} (\xi \otimes \eta)^* = \sigma(\eta^* \otimes \xi^*). \label{TPI} \end{equation} A change in $\sigma$ therefore implies a change in the definition of an hermitian tensor.
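As an elementary consistency check, for the classical flip $S^{ab}{}_{cd} = \delta^a_d \delta^b_c$ and Euclidean coefficients $g^{ab} = \delta^{ab}$ the metric reality constraint reduces to the symmetry of $g$, and $\sigma$ is `unitary' in the sense that $(S^{ba}{}_{cd})^* S^{dc}{}_{ef} = \delta^a_e \delta^b_f$. This is a sketch of that special case only:

```python
import numpy as np

n = 3
I2 = np.eye(n)

# Classical flip: S^{ab}_{cd} = delta^a_d delta^b_c, as a rank-4 tensor.
S = np.einsum('ad,bc->abcd', I2, I2)

# Euclidean metric coefficients g^{ab} = delta^{ab}.
g = np.eye(n)

# Reality constraint on the metric: S^{ab}_{cd} g^{cd} = (g^{ba})^*.
assert np.allclose(np.einsum('abcd,cd->ab', S, g), g.T.conj())

# 'Unitarity' of sigma: (S^{ba}_{cd})^* S^{dc}_{ef} = delta^a_e delta^b_f.
assert np.allclose(np.einsum('bacd,dcef->abef', S.conj(), S),
                   np.einsum('ae,bf->abef', I2, I2))
```

A genuinely noncommutative $S$ would have to satisfy the same two contractions without being the flip.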
From the compatibility conditions~(\ref{compatibility}) and~(\ref{2.2.6}) one can deduce~(\ref{antisym}). The condition that the star operation be in fact an involution places a constraint on the map $\sigma$: \begin{equation} (\sigma(\eta^* \otimes \xi^*))^* = (\xi \otimes \eta). \label{invconst} \end{equation} It is clear that there is an intimate connection between the reality condition and the right-Leibniz rule. The expression~(\ref{TPI}) for the involution on tensor products becomes the identity \begin{equation} J^{ab}{}_{cd} = S^{ba}{}_{cd}. \label{J-S} \end{equation} This is consistent with~(\ref{I-P}) because of~(\ref{2.2.6}). It also forces the constraint \begin{equation} (S^{ba}{}_{cd})^* S^{dc}{}_{ef} = \delta^a_e \delta^b_f \label{s-unitary} \end{equation} on $\sigma$. Equation~(\ref{J-S}) can also be read from right to left as a definition of the right-Leibniz rule in terms of the hermitian structure. The condition that the connection~(\ref{2.2.12}) be real can be written as \begin{equation} (\omega^a{}_{bc})^* = \omega^a{}_{de} (J^{de}{}_{bc})^*. \label{real1st} \end{equation} One verifies immediately that the connection~(\ref{2.2.14}) is real. In order for the curvature to be real we must require that the extension of the involution to the tensor product of three elements of $\Omega^1(\c{A})$ be such that \begin{equation} \pi_{12} \circ D_2(\xi \otimes \eta)^* = \Big(\pi_{12} \circ D_2(\xi \otimes \eta)\Big)^*. \end{equation} We shall impose a stronger condition. We shall require that $D_2$ be real: \begin{equation} D_2(\xi \otimes \eta)^* = (D_2(\xi \otimes \eta))^*. \label{strong} \end{equation} This condition can be made more explicit when a frame exists. In this case the map $D_2$ is given by \begin{equation} D_2 (\theta^a \otimes \theta^b) = - (\omega^a{}_{pq} \delta^b_r + S^{ac}{}_{pq} \omega^b{}_{cr}) \theta^p \otimes \theta^q \otimes \theta^r.
\label{explici} \end{equation} To solve the reality condition~(\ref{strong}) we introduce elements $J^{abc}{}_{def} \in \c{Z}(\c{A})$ such that \begin{equation} (\theta^a \otimes \theta^b \otimes \theta^c)^* = \jmath_3(\theta^a \otimes \theta^b \otimes \theta^c) = J^{abc}{}_{def} \theta^d \otimes \theta^e \otimes \theta^f. \label{involution} \end{equation} Using~(\ref{J-S}) one finds then that the equality \begin{equation} D_2 \circ \jmath_2 = \jmath_3 \circ D_2 \label{dopo} \end{equation} can be written in the form \begin{equation} J^{ab}{}_{pq} (\omega^p{}_{de} \delta^q_f + J^{rp}{}_{de} \omega^q{}_{rf}) = \Big((\omega^a{}_{pq})^* \delta^b_r + (J^{sa}{}_{pq})^* (\omega^b{}_{sr})^*\Big) J^{pqr}{}_{def}. \label{4.2.43} \end{equation} This equation must be solved for $J^{abc}{}_{def}$ as a function of $J^{ab}{}_{cd}$. One cannot simply cancel the factor $\omega^a{}_{bc}$ since it satisfies constraints. As a test case we choose~(\ref{2.2.14}). We find that~(\ref{4.2.43}) is satisfied provided \begin{equation} J^{abc}{}_{def} = J^{ab}{}_{pq}J^{pc}{}_{dr}J^{qr}{}_{ef} = J^{bc}{}_{pq}J^{aq}{}_{rf}J^{rp}{}_{de}. \label{Y-B} \end{equation} The second equality is the Yang-Baxter equation written out with indices. Using this equation it follows that~(\ref{involution}) is indeed an involution: \begin{equation} (J^{abc}{}_{pqr})^* J^{pqr}{}_{def} = \delta^a_d \delta^b_e \delta^c_f. \end{equation} Using Equation~(\ref{Y-B}), Equation~(\ref{4.2.43}) can be written in the form \begin{equation} J^{ab}{}_{pe} \omega^p{}_{cd} - J^{ap}{}_{de} \omega^b{}_{cp} + J^{ab}{}_{pq} J^{rp}{}_{cd} \omega^q{}_{re} - J^{qb}{}_{cp} J^{rp}{}_{de} \omega^a{}_{qr} = 0. \label{real2nd} \end{equation} The connection then must satisfy two reality conditions, Equation~(\ref{real1st}) and Equation~(\ref{real2nd}). The second condition can be rewritten more concisely in the form \begin{equation} D_2 \circ \sigma = \sigma_{23} \circ D_2.
\label{equi} \end{equation} In fact, using Equations~(\ref{explici}) and~(\ref{2.2.2}) one finds \[ \begin{array}{l} D_2\Big(\sigma(f\theta^b\otimes\theta^a)\Big) - \sigma_{23} \circ \Big(D_2(f\theta^b\otimes\theta^a)\Big) =\\ S^{ba}{}_{pq}D_2(f\theta^p\otimes\theta^q)- df \otimes \sigma(\theta^b\otimes\theta^a)- f(\omega^b{}_{cp} \delta^a_r + S^{bq}{}_{cp} \omega^a{}_{qr}) \sigma_{23}(\theta^c \otimes \theta^p \otimes \theta^r) = \\ f\Big(S^{ba}{}_{pq}(\omega^p{}_{cd} \delta^q_e + S^{pr}{}_{cd} \omega^q{}_{re}) - (\omega^b{}_{cp} \delta^a_r + S^{bq}{}_{cp} \omega^a{}_{qr}) S^{pr}{}_{de}\Big)\theta^c\otimes\theta^d\otimes\theta^e. \end{array} \] Because of (\ref{J-S}), the right-hand side of this equation vanishes if and only if the left-hand side of Equation~(\ref{real2nd}) is zero. One can check that equations (\ref{equi}) and (\ref{strong}) are equivalent, once the definitions of $\jmath_2,\jmath_3$ and the property~(\ref{basic}) are postulated. It is reasonable to suppose that even in the absence of a frame the constraints~(\ref{s-unitary}) and the Yang-Baxter condition hold. The former has in fact already been written~(\ref{invconst}) in general. The map $\jmath_3$ can be written as \begin{equation} (\xi\otimes\eta\otimes\zeta)^* \equiv \jmath_3(\xi\otimes\eta\otimes\zeta) = \sigma_{12}\sigma_{23} \sigma_{12} (\zeta^*\otimes\eta^*\otimes\xi^*). \label{defj3} \end{equation} Because of~(\ref{J-S}) the Yang-Baxter condition for $\jmath_2$ becomes the braid equation \begin{equation} \sigma_{12}\sigma_{23}\sigma_{12}=\sigma_{23}\sigma_{12} \sigma_{23} \label{genbraid} \end{equation} for the map $\sigma$. \sect{Higher tensor and wedge powers} Just as we defined $D_2$ in~(\ref{2.2.4e}), we can introduce a set $D_n$ of covariant derivatives \begin{equation} D_n: \, \bigotimes_1^n \Omega^1(\c{A}) \longrightarrow \bigotimes_1^{n+1} \Omega^1(\c{A}) \end{equation} for arbitrary integer $n$ by using $\sigma$ to place the operator $D$ in its natural position to the left.
For instance, \begin{equation} D_3 = \Big(D\otimes 1 \otimes 1 + \sigma_{12}(1 \otimes D\otimes 1)+ \sigma_{12}\sigma_{23}(1\otimes 1\otimes D)\Big) \label{example} \end{equation} If the condition~(\ref{genbraid}) is satisfied then these $D_n$ will also be real in the sense that \begin{equation} D_n \circ \jmath_n = \jmath_{n+1} \circ D_n \label{genreality1} \end{equation} where the $\jmath_n$ are the natural extensions of $\jmath_2$ and $\jmath_3$. For instance, $\jmath_4$ is defined by \begin{equation} (\xi\otimes\eta\otimes\zeta\otimes \omega)^* \equiv \jmath_4(\xi\otimes\eta\otimes\zeta\otimes \omega) = \sigma_{12}\sigma_{23}\sigma_{12}\sigma_{34}\sigma_{23}\sigma_{12} (\omega^*\otimes\zeta^*\otimes\eta^*\otimes\xi^*). \label{defj4} \end{equation} The general rule to construct $\jmath_n$ is the following. Let $\epsilon$ denote the ``flip'', the permutator of two objects, $\epsilon(\xi\otimes\eta)=\eta\otimes\xi$, and more generally let $\epsilon_n$ denote the inverse-order permutator of $n$ objects. For instance, the action of $\epsilon_3$ is given by \begin{equation} \epsilon_3(\zeta\otimes\eta\otimes\xi)=\xi\otimes\eta\otimes\zeta. \label{deco} \end{equation} The maps $\epsilon,\epsilon_n$ are $\b{C}$-bilinear but not $\c{A}$-bilinear, and are involutive. One can decompose $\epsilon_n$ as a product of $\epsilon_{i(i\!+\!1)}$. One finds for $n=3$ \begin{equation} \epsilon_3=\epsilon_{12}\epsilon_{23}\epsilon_{12}= \epsilon_{23}\epsilon_{12}\epsilon_{23}. \end{equation} The second equality expresses the fact that $\epsilon$ fulfils the braid equation. In a more abstract but compact notation the definitions (\ref{TPI}), (\ref{defj3}) and (\ref{defj4}) can be written in the form \begin{eqnarray} \jmath_2 &=&\sigma\,\ell_2, \label{defj2abs}\\ \jmath_3 &=&\sigma_{12}\sigma_{23}\sigma_{12} \,\ell_3, \label{defj3abs}\\ \jmath_4 &=& \sigma_{12}\sigma_{23}\sigma_{12}\sigma_{34}\sigma_{23}\sigma_{12} \,\ell_4. 
\label{defj4abs} \end{eqnarray} We have here defined the involution on the 1-forms as $\jmath_1$, and \begin{equation} \ell_n = (\underbrace{\jmath_1 \otimes \ldots \otimes \jmath_1}_{\mbox{$n$~times}})\, \epsilon_n. \end{equation} The map $\ell_n$ is clearly an involution, since $\epsilon_n$ commutes with the tensor product of the $\jmath_1$'s. The products of $\sigma$'s appearing in the definitions of $\jmath_3,\jmath_4$ are obtained from the decompositions of $\epsilon_3,\epsilon_4$ by replacing each $\epsilon_{i(i\!+\!1)}$ by $\sigma_{i(i\!+\!1)}$. In this way, $\jmath_3,\jmath_4$ have the correct classical limit, since in this limit $\sigma$ becomes the ordinary flip $\epsilon$. In the same way as different equivalent decompositions of $\epsilon_3,\epsilon_4$ are possible, so different products of $\sigma$ factors in (\ref{defj3abs}), (\ref{defj4abs}) are allowed; they are all equal, once Equation~(\ref{genbraid}) is fulfilled. The same rules described for $n=3,4$ should be used to define $\jmath_n$ for $n>4$. The definition of $\jmath_n$ can also be given an equivalent recursive form which will be useful for the proofs below, namely \begin{eqnarray} \jmath_3 &=& \sigma_{12}\sigma_{23} \epsilon_{23} \epsilon_{12} (\jmath_1\otimes \jmath_2), \label{poi1} \\ \jmath_4 &=&\sigma_{12}\sigma_{23}\sigma_{34} \epsilon_{34} \epsilon_{23} \epsilon_{12}(\jmath_1\otimes \jmath_3), \label{poi2} \\ &=&\sigma_{23}\sigma_{34}\sigma_{12}\sigma_{23} \epsilon_{23} \epsilon_{12} \epsilon_{34} \epsilon_{23}(\jmath_2\otimes \jmath_2), \label{poi3} \end{eqnarray} and so forth to higher orders. Again, these definitions are unambiguous because of the braid equation~(\ref{genbraid}). Now we wish to show that, if the braid equation is fulfilled and $\jmath_2$ is an involution, that is, Equation~(\ref{invconst}) is satisfied, then $\jmath_n$ is also an involution for $n>2$.
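Before turning to the general argument, the $n=3$ statement can be checked numerically in the commutative limit, where $\sigma$ is the ordinary flip $\epsilon$ and $\jmath_1$ is complex conjugation. This is a toy verification of the braid equation and of $(\jmath_3)^2 = 1$ for that special case only, not a substitute for the proof below:

```python
import numpy as np

n = 2
d = n ** 3

# Flip E on C^n (x) C^n, embedded as sigma_12 and sigma_23 on the triple product.
E = np.eye(n * n).reshape(n, n, n, n).transpose(1, 0, 2, 3).reshape(n * n, n * n)
s12 = np.kron(E, np.eye(n))
s23 = np.kron(np.eye(n), E)

# Braid equation: sigma_12 sigma_23 sigma_12 = sigma_23 sigma_12 sigma_23.
b = s12 @ s23 @ s12
assert np.allclose(b, s23 @ s12 @ s23)

# With j_1 complex conjugation, ell_3 reverses the three factors, i.e. it acts by
# the permutation matrix b itself, so (j_3)^2 v = b (b v.conj()).conj() = b b.conj() v.
assert np.allclose(b @ b.conj(), np.eye(d))  # j_3 is an involution in this toy case
```

Here $b$ is the inverse-order permutation on three factors, which is real and squares to the identity; in the general case the same conclusion requires the braid equation and~(\ref{invconst}).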
Note that the constraint~(\ref{invconst}) in the more abstract notation introduced above becomes \begin{equation} \jmath_2=\jmath_2^{-1}= \epsilon\circ(\jmath_1\otimes\jmath_1)\circ\sigma^{-1}. \label{defj2ab} \end{equation} As a first step one checks that for $i=1,...,n\!-\!1$ \begin{equation} \sigma_{i(i\!+\!1)}\,\ell_n = \ell_n\,\sigma_{(n\!-\!i)(n\!+\!1\!-\!i)}^{-1}. \label{fifa} \end{equation} The latter relation can be proved recursively. We show in particular how the relation for $n=3$ follows from the relation for $n=2$: \begin{equation} \begin{array}{lcl} \sigma_{12}\,\ell_3 &\stackrel{(\ref{deco})}{=} &\sigma_{12}(\jmath_1\otimes\jmath_1\otimes\jmath_1) \, \epsilon_{12} \epsilon_{23} \epsilon_{12}\,\sigma_{23}\sigma_{23}^{-1}\\ &\stackrel{(\ref{defj2abs})}{=} & (\jmath_2\otimes\jmath_1)\, \epsilon_{23} \epsilon_{12}\, \sigma_{23}\sigma_{23}^{-1} \\ &=& (\jmath_2\sigma\otimes\jmath_1)\, \epsilon_{23} \epsilon_{12}\,\sigma_{23}^{-1}\\ &\stackrel{(\ref{defj2ab})}{=} & (\jmath_1\otimes\jmath_1\otimes\jmath_1) \epsilon_{12} \epsilon_{23} \epsilon_{12}\,\sigma_{23}^{-1}\\ &\stackrel{(\ref{deco})}{=}& \ell_3\sigma_{23}^{-1}. \nonumber \end{array} \label{semplice} \end{equation} Now it is immediate to show that $\jmath_n$ is an involution. Again, we explicitly reconsider the case $n=3$: \begin{eqnarray} (\jmath_3)^2 &=& \sigma_{12}\sigma_{23}\sigma_{12}\,\ell_3\,\sigma_{23}\sigma_{12} \sigma_{23}\,\ell_3 \nonumber\\ &\stackrel{(\ref{semplice})}{=} & \ell_3\, \sigma^{-1}_{23}\sigma^{-1}_{12}\sigma^{-1}_{23} \sigma_{23}\sigma_{12}\sigma_{23}\,\ell_3 \nonumber\\ &=&1. \nonumber \end{eqnarray} In order to prove~(\ref{genreality1}) it is useful to prove first a direct consequence of relation~(\ref{equi}): \begin{equation} D_n\circ\sigma_{(i\!-\!1)i}=\sigma_{i(i\!+\!1)}\circ D_n. \label{lemma} \end{equation} The recursive proof is straightforward.
For instance, \[ D_3\sigma_{23}=[D\otimes 1 \otimes 1 +\sigma_{12} (1\otimes D_2)] \sigma_{23}\stackrel{(\ref{equi})}{=}\sigma_{34}(D\otimes 1 \otimes 1) + \sigma_{12}\sigma_{34}(1\otimes D_2)=\sigma_{34} D_3. \] Now (\ref{genreality1}) can be proved recursively. For instance, \begin{equation} \begin{array}{lcl} D_3 \jmath_3 &\stackrel{(\ref{poi1})}{=}& D_3\sigma_{12}\sigma_{23} \epsilon_{23} \epsilon_{12} (\jmath_1\otimes\jmath_2)\\ &\stackrel{(\ref{lemma})}{=} &\sigma_{23}\sigma_{34} D_3 \epsilon_{23} \epsilon_{12} (\jmath_1\otimes\jmath_2)\\ &\stackrel{(\ref{example})}{=}&\sigma_{23} \sigma_{34} [D_2\otimes 1+\sigma_{12}\sigma_{23}(1\otimes 1\otimes D)] \epsilon_{23} \epsilon_{12} (\jmath_1\otimes\jmath_2)\\ &=&\sigma_{23}\sigma_{34} [ \epsilon_{34} \epsilon_{23} \epsilon_{12} (1\otimes D_2) +\sigma_{12}\sigma_{23} \epsilon_{23} \epsilon_{12} \epsilon_{34} \epsilon_{23} (D\otimes 1\otimes 1)] (\jmath_1\otimes\jmath_2)\\ &\stackrel{(\ref{dopo})}{=}& \sigma_{23}\sigma_{34} [ \epsilon_{34} \epsilon_{23} \epsilon_{12} (\jmath_1\otimes\jmath_3 D_2) +\sigma_{12}\sigma_{23} \epsilon_{23} \epsilon_{12} \epsilon_{34} \epsilon_{23}(\jmath_2 D\otimes\jmath_2)]\\ &\stackrel{(\ref{poi2}),(\ref{poi3})}{=} &\sigma_{12}^{-1}\jmath_4(1\otimes D_2) + \jmath_4(D\otimes 1\otimes 1)\\ &=&\jmath_4[\sigma_{12}(1\otimes D_2) + (D\otimes 1\otimes 1)]\\ &\stackrel{(\ref{example})}{=}& \jmath_4 D_3. \nonumber \end{array} \end{equation} For the second-to-last equality we have used the relation $\sigma_{12}^{-1}\jmath_4 = \jmath_4\sigma_{12}$, which can be easily proven using Equations~(\ref{genbraid}) and~(\ref{fifa}). For further developments it is convenient to interpret $\sigma$ as a ``braiding'', in the sense of Majid \cite{majid}. This is possible because of Equation~(\ref{genbraid}).
In that framework, the bilinear map $\sigma$ can be naturally extended first to higher tensor powers of $\Omega^1(\c{A})$, \begin{equation} \sigma:(\underbrace{\Omega^1\otimes\ldots\otimes\Omega^1}_{\mbox{$p$ times}}) \otimes(\underbrace{\Omega^1\otimes\ldots\otimes\Omega^1}_{\mbox{$k$ times}}) \rightarrow\underbrace{\Omega^1\otimes\ldots\otimes\Omega^1}_{\mbox{$p\!+\!k$ times}}. \end{equation} This extension can be found by applying iteratively the rules \begin{equation} \begin{array}{l} \sigma\Big((\xi\otimes\eta)\otimes \zeta\Big) = \sigma_{12}\sigma_{23}(\xi\otimes\eta\otimes \zeta), \\ \sigma\Big(\xi\otimes(\eta\otimes \zeta)\Big)= \sigma_{23}\sigma_{12}(\xi\otimes\eta\otimes \zeta). \end{array} \end{equation} Here $\xi,\eta,\zeta$ are elements of three arbitrary tensor powers of $\Omega^1(\c{A})$. It is easy to show that there is no ambiguity in the iterated definitions, and that the extended map still satisfies the braid equation (\ref{genbraid}). These are general properties of a braiding. Thereafter, by applying $p\!+\!k\!-\!2$ times the projector $\pi$ to the previous equation, so as to transform the relevant tensor products into wedge products, $\sigma$ can be extended also as a map \begin{equation} \sigma:\Omega^p(\c{A})\otimes\Omega^k(\c{A})\rightarrow \Omega^k(\c{A})\otimes \Omega^p(\c{A}). \end{equation} For instance, we shall define $\sigma$ on $\Omega^2\otimes\Omega^1$ and $\Omega^1\otimes\Omega^2$ respectively through \begin{equation} \begin{array}{lcl} \sigma(\xi\eta\otimes \zeta) & = &\pi_{23}\sigma\Big((\xi\otimes\eta)\otimes \zeta\Big),\\ \sigma(\xi\otimes\eta\zeta) & = &\pi_{12}\sigma\Big(\xi\otimes(\eta\otimes \zeta)\Big). \end{array} \end{equation} Under suitable assumptions on $\pi$, the extended $\sigma$ still satisfies the braid equation (\ref{genbraid}). 
It follows that the same formulae presented above in this section can be used to extend the involutions $\jmath_n$ to tensor powers of higher degree forms in a compatible way with the action of $\pi$, that is, in such a way that $\jmath_2\circ\pi_{12}=\pi_{12}\circ\jmath_3$, and so forth. Finally, also the covariant derivatives $D_n$ can be extended to tensor powers of higher degree forms in such a way that~(\ref{genreality1}) is still satisfied. These results will be shown in detail elsewhere. \section*{Acknowledgment} One of the authors (JM) would like to thank J. Wess for his hospitality and the Max-Planck-Institut f\"ur Physik in M\"unchen for financial support.
\section{Acknowledgments} The authors of this paper were supported in part by National Natural Science Foundation of China under Grant 62202164, the National Key R\&D Program of China through grant 2021YFB1714800, S\&T Program of Hebei through grant 21340301D and the Fundamental Research Funds for the Central Universities 2022MS018. Prof. Philip S. Yu is supported in part by NSF under grants III-1763325, III-1909323, III-2106758, and SaTC-1930941. \section{Introduction} Continual graph learning is emerging as a hot research topic in which a model successively learns from a graph sequence with different tasks \cite{DBLP:journals/corr/abs-2202-10688}. In general, it aims at gradually learning new knowledge without \emph{catastrophically forgetting} the old knowledge across sequentially arriving tasks. Centered on combating forgetting, a series of methods \cite{DBLP:conf/ijcai/KimYK22,DBLP:conf/ijcnn/GalkeFZS21} have been proposed recently. Despite the success of prior works, continual graph learning still faces tremendous challenges. \emph{\textbf{Challenge 1}: An adaptive Riemannian representation space.} To the best of our knowledge, existing methods work with Euclidean space, the zero-curvature Riemannian space \cite{DBLP:journals/corr/abs-2111-15422,DBLP:conf/aaai/0002C21,DBLP:conf/cikm/WangSWW20}. However, in continual graph learning, the curvature of a graph remains unknown until its arrival. In particular, the negatively curved Riemannian space, hyperbolic space, is well-suited for graphs presenting hierarchical patterns or tree-like structures \cite{krioukov2010hyperbolic,nickel2017poincare}. The underlying geometry shifts to the positively curved hyperspherical space when cyclical patterns (e.g., triangles or cliques) become dominant \cite{BachmannBG20}. Even more challenging, the curvature usually varies over the coming graph sequence, as shown in the case study. 
Thus, it calls for a smart graph encoder in the Riemannian space with \emph{adaptive curvature} for each coming graph successively. \emph{\textbf{Challenge 2}: Continual graph learning without supervision.} Existing continual graph learners \cite{MCGL-NAS,FGN} are trained in the supervised fashion, and thereby rely on abundant labels for each learning task. Labeling graphs requires either manual annotation or paying for permission in practice. It is particularly hard and even impossible when graphs are continuously emerging on-the-fly. In this case, \emph{self-supervised learning} is indeed appealing, so that we can acquire knowledge from the unlabeled data themselves. Though self-supervised learning on graphs is being extensively studied \cite{VelickovicFHLBH19,QiuCDZYDWT20,DBLP:conf/aaai/YinWHXZ22}, existing methods are trained offline. That is, they are not applicable for continual graph learning, and naive application results in catastrophic forgetting in the successive learning process \cite{DBLP:journals/pami/LangeAMPJLST22,DBLP:conf/nips/KeLMXS21}. Unfortunately, self-supervised continual graph learning is surprisingly under-investigated in the literature. Consequently, it is vital to explore how to learn and memorize knowledge free of labels for continual graph learning in adaptive Riemannian spaces. Thus, we propose the challenging yet practical problem of \emph{self-supervised continual graph learning in adaptive Riemannian spaces}. In this paper, we propose a novel self-supervised \underline{Rie}mannian \underline{Gra}ph \underline{C}ontinual L\underline{e}arner (\textbf{RieGrace}). To address the first challenge, we design an Adaptive Riemannian GCN (AdaRGCN), which is able to \emph{shift among any hyperbolic or hyperspherical space adaptive to each graph}. In AdaRGCN, we formulate a unified Riemannian graph convolutional network (RGCN) of arbitrary curvature, and design a CurvNet inspired by Forman-Ricci curvature in Riemannian geometry. 
CurvNet is a neural module in charge of curvature adaptation, so that we induce a Riemannian space shaped by the curvature learnt from the task graph. To address the second challenge, we propose a novel \emph{label-free Lorentz distillation approach to consolidate knowledge without catastrophic forgetting}. Specifically, we create a teacher-student AdaRGCN for the graph sequence. When receiving a new graph, the student is created from the teacher. The student distills from its own intermediate layer to acquire knowledge of the current graph (intra-distillation) and, meanwhile, distills from the teacher to preserve the past knowledge (inter-distillation). In our approach, we propose to consolidate knowledge via contrastive distillation, but it is particularly challenging to contrast between different Riemannian spaces. To bridge this gap, we formulate a novel \emph{Generalized Lorentz Projection} (GLP). We prove GLP is closed on Riemannian spaces, and show its relationship to the well-known Lorentz transformation. In short, noteworthy contributions are summarized below: \begin{itemize} \item \emph{Problem}. We propose the problem of self-supervised continual graph learning in adaptive Riemannian spaces, which is the first attempt, to the best of our knowledge, to study continual graph learning in non-Euclidean space. \item \emph{Methodology}. We present a novel RieGrace, where we design a unified RGCN with CurvNet to shift curvature among hyperbolic or hyperspherical spaces adaptive to each graph, and propose the Label-free Lorentz Distillation with GLP for self-supervised continual learning. \item \emph{Experiments}. Extensive experiments on the benchmark datasets show that RieGrace even outperforms state-of-the-art supervised methods, and the case study gives further insight into the curvature over the graph sequence with the notion of embedding distortion. 
\end{itemize} \vspace{-0.1in} \section{Preliminaries} \vspace{-0.02in} In this section, we first introduce the fundamentals of Riemannian geometry necessary to understand this work, and then formulate the studied problem, \emph{self-supervised continual graph learning in general Riemannian space}. In short, we are interested in how to learn an encoder $\mathbf \Phi$ that is able to sequentially learn on coming graphs $G_1, \dots, G_T$ in adaptive Riemannian spaces without external supervision. \vspace{-0.09in} \subsection{Riemannian Geometry} \vspace{-0.03in} \subsubsection{Riemannian Manifold.} A Riemannian manifold $(\mathcal M, g)$ is a smooth manifold $\mathcal M$ equipped with a Riemannian metric $g$. Each point $\mathbf x$ on the manifold is associated with a \emph{tangent space} $\mathcal T_\mathbf x\mathcal M$ that locally resembles Euclidean space. The Riemannian metric $g$ is the collection of inner products at each point $\mathbf x \in \mathcal M$ regarding its tangent space. For $\mathbf x \in \mathcal M$, the \emph{exponential map} at $\mathbf x$, $exp_\mathbf x(\mathbf v): \mathcal T_\mathbf x\mathcal M \to \mathcal M$, projects a vector in the tangent space at $\mathbf x$ onto the manifold, and the \emph{logarithmic map} $log_\mathbf x(\mathbf y): \mathcal M \to \mathcal T_\mathbf x\mathcal M $ is the inverse operator. \vspace{-0.03in} \subsubsection{Curvature.} In Riemannian geometry, the \emph{curvature} is the notion to measure how a smooth manifold deviates from being flat. If the curvature is uniformly distributed, the manifold $\mathcal M$ is called the space of constant curvature $\kappa$. In particular, the space is \emph{hyperspherical} $\mathbb S$ with $\kappa>0$ when it is positively curved, and \emph{hyperbolic} $\mathbb H$ with $\kappa<0$ when negatively curved. Euclidean space is the flat space with $\kappa=0$, and can be considered a special case in Riemannian geometry. 
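To make the exponential map concrete, the following sketch (ours, not the paper's code) implements it for the two constant-curvature model spaces, using the standard closed forms for the hypersphere ($\kappa>0$) and the hyperboloid ($\kappa<0$); the function names are illustrative assumptions:

```python
import math

def inner_k(x, y, kappa):
    """Curvature-aware inner product: Minkowski for kappa<0, Euclidean for kappa>0."""
    sign = 1.0 if kappa > 0 else -1.0
    return sign * x[0] * y[0] + sum(a * b for a, b in zip(x[1:], y[1:]))

def cos_k(t, kappa):  # curvature-aware cosine: cos on the sphere, cosh on the hyperboloid
    return math.cos(t) if kappa > 0 else math.cosh(t)

def sin_k(t, kappa):  # curvature-aware sine
    return math.sin(t) if kappa > 0 else math.sinh(t)

def exp_map(x, v, kappa):
    """Project a (nonzero) tangent vector v at x onto the manifold M^{d,kappa}."""
    alpha = math.sqrt(abs(kappa)) * math.sqrt(abs(inner_k(v, v, kappa)))
    return [cos_k(alpha, kappa) * xi + sin_k(alpha, kappa) / alpha * vi
            for xi, vi in zip(x, v)]

# Sanity check: exp_map lands on the manifold, i.e. <y, y>_kappa = 1/kappa.
origin = [1.0, 0.0, 0.0]   # origin of M^{2,kappa} for |kappa| = 1
tangent = [0.0, 0.5, 0.0]  # a tangent vector at the origin (first coordinate is 0)
for kappa in (1.0, -1.0):
    y = exp_map(origin, tangent, kappa)
    assert abs(inner_k(y, y, kappa) - 1.0 / kappa) < 1e-9
```

The same pattern, with the inverse curvature-aware trigonometric functions, yields the logarithmic map.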
\vspace{-0.05in} \subsection{Problem Formulation} In continual graph learning, we receive a sequence of disjoint tasks $\mathcal T=\{\mathcal T_1, \dots, \mathcal T_t, \dots, \mathcal T_T\}$, and each task is defined on a graph $G=\{\mathcal V, \mathcal E\}$, where $\mathcal V=\{v_1, \cdots, v_N\}$ is the node set, and $\mathcal E=\{(v_i, v_j)\} \subset \mathcal V \times \mathcal V$ is the edge set. Each node $v_i$ is associated with a node feature $\mathbf x_i$ and a category $y_i \in \mathcal Y_k$, where $\mathcal Y_k$ is the label set of $k$ categories. \vspace{-0.02in} \newtheorem*{def1}{Definition 1 (Graph Sequence)} \begin{def1} The sequence of tasks in continual graph learning is described as a graph sequence $\mathcal G=\{G_1, \dots, G_T\}$, and each graph $G_t$ corresponds to a task $\mathcal T_t$. Each task contains a training node set $\mathcal V_t^{tr}$ and a testing node set $\mathcal V_t^{te}$ with node features $X_t^{tr}$ and $X_t^{te}$. \vspace{-0.02in} \end{def1} In this paper, we study task-incremental learning in an \emph{adaptive Riemannian space} whose curvature is able to successively match each task graph. When a new graph arrives, the learnt parameters are memorized but historical graphs are dropped, and additionally, no labels are provided in the learning process. We give the formal definition as follows: \vspace{-0.02in} \newtheorem*{prob}{Definition 2 (Self-Supervised Continual Graph Learning in Adaptive Riemannian Space)} \begin{prob} Given a graph sequence $\mathcal G$ with tasks $\mathcal T$, we aim at learning an encoder $\mathbf \Phi: v \to \mathbf h \in \mathcal M^{d,\kappa}$ in the absence of labels in an adaptive Riemannian space, so that the encoder is able to continuously consolidate the knowledge for the current task without catastrophically forgetting the knowledge for previous ones. 
\vspace{-0.02in} \end{prob} Essentially different from the continual graph learners of prior works, we study a more challenging yet practical setting: i) rather than Euclidean space, the encoder $\mathbf \Phi$ works with an adaptive Riemannian space suitable for each task, and ii) is able to learn and memorize knowledge without labels for continuously emerging graphs on-the-fly. \begin{figure*} \centering \vspace{-0.15in} \includegraphics[width=0.97\linewidth]{IlluV1} \vspace{-0.13in} \caption{Overall architecture of \textbf{RieGrace}. We design the AdaRGCN which successively adapts its curvature for the current task graph with CurvNet, and propose Label-free Lorentz Distillation for continual graph learning. In each learning session, i) the student is created from the teacher with the same architecture, ii) jointly performs intra-distillation from itself and inter-distillation from the teacher with GLP to consolidate knowledge, and iii) becomes the teacher for the next learning session. } \label{illu} \vspace{-0.2in} \end{figure*} \vspace{-0.03in} \section{Methodology} \vspace{-0.03in} To address this problem, we propose a novel Self-supervised \underline{Rie}mannian \underline{Gra}ph \underline{C}ontinual L\underline{e}arner (\textbf{RieGrace}). We illustrate the overall architecture of RieGrace in Figure 1. In a nutshell, we first design a unified graph convolutional network (AdaRGCN) on the Riemannian manifold shaped by the learnt curvature \emph{adaptive to each coming graph}. Then, we propose a \emph{label-free Lorentz distillation approach} to consolidate knowledge without catastrophic forgetting. \vspace{-0.07in} \subsubsection{Representation Space.} First of all, we introduce the Riemannian manifolds we use in this paper before we construct RieGrace on them. 
We opt for the hyperboloid (Lorentz) model for hyperbolic space and the corresponding hypersphere model for hyperspherical space with the unified formalism, owing to the numerical stability and closed form expressions \cite{HGNN}. Formally, we have a $d$-dimensional manifold of curvature $\kappa$, $\mathcal M^{d, \kappa}=\{ \mathbf x \in \mathbb R^{d+1} | \ \langle \mathbf{x}, \mathbf{x} \rangle_\kappa= \frac{1}{\kappa}\} $ with $\kappa\neq 0$, whose \emph{origin} is denoted as $\mathcal O = (|\kappa|^{-\frac{1}{2}}, 0, \cdots, 0) \in \mathcal M^{d,\kappa}$. The curvature-aware inner product $\langle \cdot, \cdot \rangle_\kappa$ is defined as \vspace{-0.07in} \begin{equation} \langle \mathbf{x}, \mathbf{y} \rangle_\kappa=\mathbf{x}^{\top} \operatorname{diag}(sign(\kappa),1, \cdots, 1) \mathbf{y}, \vspace{-0.07in} \label{manifold} \end{equation} and thus the tangent space at $\mathbf x$ is given as $\mathcal T_\mathbf x\mathcal M^{d,\kappa}=\{\mathbf v \in \mathbb R^{d+1} | \ \langle \mathbf{v}, \mathbf{x} \rangle_\kappa= 0\}$. In particular, for the positive curvature, $\mathcal M^{d, \kappa}$ is the hypersphere model $\mathbb S^{d, \kappa}$ and $\langle \cdot, \cdot \rangle_\kappa$ is the standard inner product on $\mathbb R^{d+1}$. For the negative curvature, $\mathcal M^{d, \kappa}$ is the hyperboloid model $\mathbb H^{d, \kappa}$ and $\langle \cdot, \cdot \rangle_\kappa$ is the Minkowski inner product. The operators with the unified formalism on $\mathcal M^{d, \kappa}$ are summarized in Table \ref{tab:ops}, where $\alpha=\sqrt{|\kappa|}\|\mathbf{v}\|_{\kappa}$, $\beta = \kappa\langle\mathbf{x}, \mathbf{y}\rangle_{\kappa}$ and $\| \mathbf{v} \|_{\kappa}^2=\langle \mathbf{v}, \mathbf{v} \rangle_\kappa$ for $\mathbf v \in \mathcal T_\mathbf x\mathcal M^{d, \kappa}$. We utilize the curvature-aware trigonometric functions of \citet{SkopekGB20}. 
\begin{table} \centering \begin{tabular}{|l|c|} \hline \textbf{Operator} & \textbf{Unified formalism in $\mathcal M^{d, \kappa}$}\\ \hline Distance Metric & $ d_{\mathcal M}(\mathbf{x}, \mathbf{y})=\frac{1}{\sqrt{|\kappa|}} \cos^{-1}_{\kappa}\left( \kappa \langle\mathbf{x}, \mathbf{y}\rangle_{\kappa}\right) $\\ \hline Exponential Map & $ exp _{\mathbf{x}}^{\kappa}(\mathbf{v})=\cos_{\kappa}\left(\alpha \right) \mathbf{x}+ \frac{\sin_{\kappa}\left(\alpha \right)}{\alpha}\mathbf{v} $ \\ Logarithmic Map & $ log _{\mathbf{x}}^{\kappa}(\mathbf{y})=\frac{\cos^{-1}_{\kappa} \left(\beta \right)}{\sin _{\kappa}\left(\cos^{-1}_{\kappa}\left( \beta \right)\right)}\left(\mathbf{y}-\beta \mathbf{x}\right) $ \\ \hline Scalar Multiply & $ r \otimes_{\kappa} \mathbf{x}=exp _{\mathcal O}^{\kappa}\left(r \ log_{\mathcal O}^{\kappa}(\mathbf{x})\right) $\\ \hline \end{tabular} \vspace{-0.1in} \caption{Curvature-aware operations in manifold $\mathcal M^{d, \kappa}$.} \vspace{-0.2in} \label{tab:ops} \end{table} \vspace{-0.09in} \subsection{Adaptive Riemannian GCN} \vspace{-0.03in} Recall that the curvature of task graph remains unknown until its arrival. We propose an adaptive Riemannian GCN (AdaRGCN), a unified GCN of arbitrary curvature coupled with a CurvNet, a neural module for curvature adaptation. AdaRGCN shifts among hyperbolic and hyperspherical spaces accordingly to match the geometric pattern of each graph, essentially distinguishing itself from prior works. \vspace{-0.05in} \subsubsection{Unified GCN of Arbitrary Curvature.} Recent studies in Riemannian graph learning mainly focus on the design of GCNs in manifold $\mathcal M^{d,\kappa}$ with negative curvatures (hyperboloid model), but the unified GCN of arbitrary curvature has rarely been touched yet. To bridge this gap, we propose a unified GCN of arbitrary curvature, generalizing from the zero-curvature GAT \cite{velickovic2018graph}. 
Specifically, we introduce the operators with unified formalism on $\mathcal M^{d,\kappa}$. Feature transformation is a basic operation in neural networks. For $\mathbf{h} \in \mathcal M^{d,\kappa}$, we perform the transformation via the $\kappa$-left-multiplication $\boxtimes_\kappa$ defined by $exp _{\mathcal O}^{\kappa}(\cdot)$ and $log _{\mathcal O}^{\kappa}(\cdot)$, \vspace{-0.05in} \begin{equation} \mathbf W \boxtimes_{\kappa} \mathbf{h}= exp _{\mathcal O}^{\kappa}\left(\left[ \ 0 \ \| \ \mathbf W \ log_{\mathcal O}^{\kappa}(\mathbf{h})_{[1:d]} \right]\right), \vspace{-0.05in} \label{ft} \end{equation} where $\mathbf{W} \in \mathbb R^{d' \times d}$ is the weight matrix, and $[ \cdot \| \cdot ]$ denotes concatenation. Note that $[log _{\mathcal O}^{\kappa}(\mathbf{h})]_0=0$ holds, $\forall \mathbf{h} \in \mathcal M^{d,\kappa}$. The advantage of Eq. (\ref{ft}) is that \emph{the zero-padded, transformed vector lies in the tangent space $\mathcal T_\mathcal O\mathcal M^{d,\kappa}$ for any $\mathbf W$, so that we can utilize $exp _{\mathcal O}^{\kappa}(\cdot)$ safely}, which is not guaranteed in the direct combination formalism of $exp _{\mathcal O}^{\kappa}( \mathbf Wlog _{\mathcal O}^{\kappa}(\mathbf{h}))$. Similarly, we give the formulation for applying a function $f(\cdot)$, \vspace{-0.035in} \begin{equation} f_{\kappa}(\mathbf{h})= exp _{\mathcal O}^{\kappa}\left(\left[ \ 0\ \| \ f(log_{\mathcal O}^{\kappa}(\mathbf{h}))_{[1:d]} \right]\right). \vspace{-0.035in} \end{equation} Neighborhood aggregation is essentially a weighted \emph{arithmetic mean} and also the \emph{geometric centroid} of the neighborhood features \cite{DBLP:conf/icml/WuSZFYW19}. The Fr\'{e}chet mean follows this meaning in Riemannian space, but unfortunately does not have a closed form solution \cite{DBLP:conf/icml/LawLSZ19}. Alternatively, we define neighborhood aggregation as the geometric centroid of squared distance, in the spirit of the Fr\'{e}chet mean, to enjoy both mathematical meaning and efficiency. 
Given a set of neighborhood features $\mathbf{h}_{j}\in \mathcal M^{d,\kappa}$ centered around $v_i$, the closed form aggregation is derived as follows: \vspace{-0.07in} \begin{equation} \resizebox{0.905\hsize}{!}{$ AGG_\kappa(\{\mathbf{h}_{j}, \nu_{ij} \}_i)= \frac{1}{\sqrt{|\kappa|} } \sum\nolimits_{j \in \bar{\mathcal N}_i}\frac{\nu_{ij}\mathbf{h}_{j}}{\left| ||\sum\nolimits_{j \in \bar{\mathcal N}_i} \nu_{ij}\mathbf{h}_{j}||_\kappa \right|}, $} \label{agg} \vspace{-0.07in} \end{equation} where $\bar{\mathcal N}_i$ is the neighborhood of $v_i$ including itself, and $\nu_{ij}$ is the attention weight. Different from \citet{DBLP:conf/icml/LawLSZ19,ZhangWSLS21}, we generalize the centroid from hyperbolic space to the Riemannian space of arbitrary curvature $\mathcal M^{d,\kappa}$, and theoretically show its connection to the gyromidpoint of the $\kappa$-stereographical model. Now, we prove that the \emph{arithmetic mean} in Eq. (\ref{agg}) is the closed form expression of the \emph{geometric centroid}. \newtheorem*{prop1}{Proposition 1} \begin{prop1} Given a set of points $\mathbf{h}_{j}\in \mathcal M^{d,\kappa}$ each attached with a weight $\nu_{ij}$, $j \in \Omega$, the centroid of squared distance $ \mathbf{c}$ in the manifold is given as the minimization problem: \vspace{-0.1in} \begin{equation} \min _{ \mathbf c \in \mathcal{M}^{d, \kappa}} \ \ \sum\nolimits_{j \in \Omega} \nu_{i j} d_{\mathcal M}^{\ 2}\left(\mathbf{h}_{j}, \mathbf{c}\right), \vspace{-0.07in} \end{equation} Eq. (\ref{agg}) is the closed form solution, $\mathbf c=AGG_\kappa(\{\mathbf{h}_{j}, \nu_{ij} \}_i)$. \end{prop1} \vspace{-0.1in} \begin{proof} We have $ \mathbf{c} = \arg \min _{ \mathbf c \in \mathcal{M}^{d, \kappa}} \sum_{j \in \Omega} \nu_{i j} d_{\mathcal M}^{\ 2}\left(\mathbf{h}_{j}, \mathbf{c}\right) $, and $ \mathbf{c}$ is in the manifold $\mathbf c \in \mathcal{M}^{d, \kappa}$, i.e., $\langle \mathbf{c}, \mathbf{c} \rangle_\kappa=\frac{1}{\kappa}$. Please refer to the Appendix for the details. 
\end{proof} \vspace{-0.1in} An attention mechanism is equipped for neighborhood aggregation, as the importance of neighbor nodes is usually different. We measure the importance between a neighbor $v_j$ and the center node $v_i$ by an attention function in the tangent space, \vspace{-0.1in} \begin{equation} ATT_\kappa(\mathbf{x}_i, \mathbf{x}_j, \boldsymbol{\theta})=\boldsymbol{\theta}^\top \left[ log_{\mathcal O}^{\kappa}(\mathbf{x}_i) || log_{\mathcal O}^{\kappa}(\mathbf{x}_j) \right], \end{equation} parameterized by $\boldsymbol{\theta}$, and then the attention weight is given by $\nu_{ij}=Softmax_{j \in \mathcal N_i}(ATT_\kappa(\mathbf{x}_i, \mathbf{x}_j, \boldsymbol{\theta}))$. We formulate the convolutional layer on $\mathcal{M}^{d, \kappa}$ with the proposed operators. The message passing in the $l^{th}$ layer is \vspace{-0.07in} \begin{equation} \mathbf{h}_i^{(l)}=\delta_\kappa\left(AGG_\kappa(\{\mathbf{x}_{j}, \nu_{ij} \}_i) \right), \mathbf{x}_i=\mathbf{W} \boxtimes_{\kappa} \mathbf{h}_i^{(l-1)}, \vspace{-0.05in} \end{equation} where $\delta_\kappa(\cdot)$ is the nonlinearity. Consequently, we build the unified GCN by stacking multiple convolutional layers, and its curvature is adaptively learnt for each task graph with a novel neural module designed as follows. \vspace{-0.07in} \subsubsection{Curvature Adaptation.} We aim to learn the curvature of any graph with a function $f: G \to \kappa$, so that the Riemannian space is able to successively match the geometric pattern of the task graph. To this end, we design a simple yet effective network, named CurvNet, based on the notion of Forman-Ricci curvature in Riemannian geometry. 
\emph{Theory on Graph Curvature}: Forman-Ricci curvature defines the curvature for an edge $(v_i, v_j)$, and \citet{DBLP:journals/compnet/WeberSJ17} give a reformulation in terms of the neighborhoods of its two end nodes, \vspace{-0.05in} \begin{equation} \resizebox{0.909\hsize}{!}{$ F_{ij}=w_i+w_j-\sum\nolimits_{l\in \mathcal N_i}\sqrt{\frac{\gamma_{ij}}{\gamma_{il}}}w_l-\sum\nolimits_{k\in \mathcal N_j}\sqrt{\frac{\gamma_{ij}}{\gamma_{ik}}}w_k, $} \vspace{-0.05in} \end{equation} where $w_i$ and $\gamma_{ij}$ are the weights associated with nodes and edges, respectively. $w_i$ is defined by the degree information of the nodes connecting to $v_i$, and $\gamma_{ij}=\frac{w_i}{\sqrt{w_i^2+w_j^2}}$. According to \cite{DBLP:conf/aaai/CruceruBG21}, $v_i$'s curvature is then given by averaging $F_{ij}$ over its neighborhood. In other words, the curvature of a node is induced by the node weights over its 2-hop neighborhood. We propose \textbf{CurvNet}, a 2-layer graph convolutional net, to approximate the map from node weights to node curvatures. CurvNet aggregates and transforms the information over the 2-hop neighborhood by stacking the convolutional layer \vspace{-0.07in} \begin{equation} \mathbf Z^{(l)}=\operatorname{GCN}(\mathbf Z^{(l-1)}, \mathbf M^{(l)}), \vspace{-0.07in} \end{equation} twice, where $\mathbf M^{(l)}$ denotes the $l^{th}$-layer parameters. CurvNet can be built with any GCN, and we utilize the GCN of \citet{kipf2016semi} in practice. The input features are node weights defined by degree information, $\mathbf Z^{(0)}=\mathbf A\operatorname{diag}(d_1,\cdots, d_N)$. $\mathbf A$ is the adjacency matrix, and $d_i$ is the degree of $v_i$. The graph curvature $\kappa$ is given as the mean of node curvatures \cite{DBLP:conf/aaai/CruceruBG21}, and accordingly, we readout the graph curvature by $\kappa=MeanPooling(\mathbf Z^{(2)})$. 
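The quantity CurvNet approximates can be sketched directly. The following is our own minimal implementation of the reformulated $F_{ij}$ read literally, with node weights instantiated as degrees (one plausible reading of ``degree information''); it is illustrative, not the paper's CurvNet:

```python
import math

def forman_edge_curvature(adj, w, i, j):
    """Forman-Ricci curvature of edge (v_i, v_j), following the reformulation
    literally: F_ij = w_i + w_j - sum_{l in N_i} sqrt(g_ij/g_il) w_l
                              - sum_{k in N_j} sqrt(g_ij/g_ik) w_k,
    with g_ab = w_a / sqrt(w_a^2 + w_b^2)."""
    gamma = lambda a, b: w[a] / math.sqrt(w[a] ** 2 + w[b] ** 2)
    g_ij = gamma(i, j)
    return (w[i] + w[j]
            - sum(math.sqrt(g_ij / gamma(i, l)) * w[l] for l in adj[i])
            - sum(math.sqrt(g_ij / gamma(i, k)) * w[k] for k in adj[j]))

def graph_curvature(adj):
    """Node curvature = average of F_ij over the neighbourhood; graph curvature
    = mean of node curvatures. Node weights are taken to be degrees here --
    an assumed instantiation of the 'degree information' in the text."""
    w = {i: len(adj[i]) for i in adj}  # adj: dict node -> set of neighbours
    node_curv = {i: sum(forman_edge_curvature(adj, w, i, j)
                        for j in adj[i]) / len(adj[i]) for i in adj}
    return sum(node_curv.values()) / len(node_curv)

# On a triangle (a clique), symmetry forces every edge curvature to coincide.
triangle = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
print(graph_curvature(triangle))
```

CurvNet replaces this fixed formula by a learnable 2-hop aggregation, which is why a 2-layer GCN suffices architecturally.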
\vspace{-0.07in} \subsection{Label-free Lorentz Distillation} To consolidate knowledge free of labels, we propose the \emph{Label-free Lorentz Distillation} approach for continual graph learning, in which we create a teacher-student AdaRGCN as shown in Figure 1. In each learning session, the student acquires knowledge for the current task graph $G_t$ by distilling from itself, \emph{intra-distillation}, and preserves past knowledge by distilling from the teacher, \emph{inter-distillation}. The student that has finished intra- and inter-distillation becomes the teacher when the new task $G_{t+1}$ arrives, so that we successively consolidate knowledge in the graph sequence without catastrophic forgetting. \begin{algorithm} \caption{\textbf{RieGrace}. Self-Supervised Continual Graph Learning in Adaptive Riemannian Spaces} \KwIn{Current task $ G_t$, Parameters learnt from previous tasks $ G_1, \cdots, G_{t-1}$ } \KwOut{Parameters of AdaRGCN} \While{not converging}{ \tcp{Teacher-Student AdaRGCN} Freeze the parameters of the teacher network\; $\mathbf X^{t,H} \gets \text{AdaRGCN}_{teacher}$\; $\{\mathbf X^{s,H}, \mathbf X^{s,L}\}\gets \text{AdaRGCN}_{student}$\; \tcp{Label-Free Distillation (GLP)} \For{each node $v_i$ in $G_t$}{ \emph{Intra-distillation}: Learn for the current task by contrasting with Eq. (\ref{self1})\; \emph{Inter-distillation}: Learn from the teacher by contrasting with Eq. (\ref{teach1})\; } \tcp{Update Student Parameters} Compute gradients of the overall objective: \vspace{-0.05in} $$ \nabla_{\mathbf{\Theta}_{student}, \{\mathbf W, \mathbf b\} }\ \ \mathcal J_{intra}+ \lambda\mathcal J_{inter}. $$ \vspace{-0.2in} } \end{algorithm} In our approach, we propose to distill knowledge via a contrastive loss in Riemannian space. 
Though knowledge distillation has been applied to video and text \cite{GuoJY23}, and similar ideas on graphs have been proposed in Euclidean space \cite{DBLP:conf/aaai/0006P00LZ022,DBLP:conf/iclr/TianKI20}, they \emph{cannot} be applied to Riemannian space owing to the essential distinction in geometry. Specifically, it lacks a method to \emph{contrast between Riemannian spaces with either different dimension or different curvature} for the distillation. To bridge this gap, we propose a novel formulation, the \emph{Generalized Lorentz Projection.} \vspace{-0.1in} \subsubsection{Generalized Lorentz Projection (GLP) \& Lorentz Layer.} We aim to contrast between $\mathbf x \in \mathcal M^{d_1, \kappa_1}$ and $\mathbf y \in \mathcal M^{d_2, \kappa_2}$. The obstacle is that both dimension and curvature are incomparable ($d_1 \neq d_2$, $\kappa_1 \neq \kappa_2$). A naive way is to use logarithmic and exponential maps with a tangent space. However, these maps range to infinity and tend to suffer from stability issues \cite{DBLP:conf/acl/ChenHLZLLSZ22}. Such shortcomings weaken their ability for the distillation, as shown in the experiment. Fortunately, the \emph{Lorentz transformation} in Einstein's special theory of relativity maps directly between Riemannian spaces, and can be decomposed into a combination of a Lorentz boost $\mathbf B$ and a rotation $\mathbf R$ \cite{Dragon2012}. Formally, for $\mathbf x\in \mathcal M^{d,\kappa}$, $\mathbf B\mathbf x\in \mathcal M^{d,\kappa}$ and $\mathbf R\mathbf x\in \mathcal M^{d,\kappa}$ given blocked $\mathbf B, \mathbf R \in \mathbb R^{(d+1)\times(d+1)}$ with positive semi-definiteness and special orthogonality, respectively. Though the clean formalism is appealing, it fails to tackle our challenge: i) The constraints on definiteness or orthogonality render the optimization problematic. ii) Both dimension and curvature are fixed, i.e., they cannot be changed over time. 
Recently, \citet{DBLP:conf/acl/ChenHLZLLSZ22} make an effort to support different dimensions, but are still restricted to the same curvature. Indeed, it is difficult to ensure closedness of the operation, especially when the curvatures (i.e., the shapes of the manifolds) are different. In this work, we propose a novel \emph{Generalized Lorentz Projection} (GLP) in the spirit of the Lorentz transformation, so as to map between \emph{Riemannian spaces with different dimensions or curvatures}. To avoid the constrained optimization, we reformulate GLP to learn a transformation matrix $\mathbf{W}\in \mathbb R^{d_2 \times d_1}$. The rationale behind this is that $\mathbf{W}$ linearly transforms both dimension and curvature with a carefully designed formulation based on a Lorentz-type multiplication. Formally, given $\mathbf x\in \mathcal M^{d_1,\kappa_1}$ and the target manifold $\mathcal M^{d_2,\kappa_2}$ to map onto, $GLP^{d_1,\kappa_1\to d_2,\kappa_2}_\mathbf x(\cdot)$ at $\mathbf x$ is defined as follows, \vspace{-0.04in} \begin{equation} GLP^{d_1,\kappa_1\to d_2,\kappa_2}_\mathbf x\left( \left[\begin{array}{cc} w & \mathbf{0}^{\top} \\ \mathbf{0} & \mathbf{W} \end{array}\right] \right)=\left[\begin{array}{cc} w_0 & \mathbf{0}^{\top} \\ \mathbf{0} & \mathbf{W} \end{array}\right], \vspace{-0.03in} \end{equation} so that we have \vspace{-0.06in} \begin{equation} GLP^{d_1,\kappa_1\to d_2,\kappa_2}_\mathbf x\left( \left[\begin{array}{cc} w & \mathbf{0}^{\top} \\ \mathbf{0} & \mathbf{W} \end{array}\right] \right) \left[\begin{array}{c} x_0 \\ \mathbf x_s \end{array}\right]=\left[\begin{array}{c} w_0x_0 \\ \mathbf{W}\mathbf x_s \end{array}\right], \vspace{-0.03in} \end{equation} where $w \in \mathbb R$, $w_0=\sqrt{\frac{|\kappa_1|}{|\kappa_2|}\cdot\frac{1-\kappa_2\ell(\mathbf W, \mathbf x_s)}{1-\kappa_1 \langle \mathbf x_s, \mathbf x_s\rangle}}$, and $\ell(\mathbf W, \mathbf x_s)=\left\| \mathbf W\mathbf x_s \right\|^2$. (The derivation is given in the Appendix.) 
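As a numerical sanity check (our own sketch; the dimensions, curvatures, and weight matrix below are arbitrary choices, not values from the paper), one can verify that a point projected by GLP lands exactly on the target manifold, i.e. $\langle \mathbf L\mathbf x, \mathbf L\mathbf x \rangle_{\kappa_2}=\frac{1}{\kappa_2}$:

```python
import math

def inner_k(x, y, kappa):
    # curvature-aware inner product: sign(kappa) * x0*y0 + <x_s, y_s>
    sign = 1.0 if kappa > 0 else -1.0
    return sign * x[0] * y[0] + sum(a * b for a, b in zip(x[1:], y[1:]))

def glp_apply(x, W, k1, k2):
    """Apply GLP at x in M^{d1,k1}, mapping onto M^{d2,k2} with spatial matrix W."""
    xs = x[1:]
    Wx = [sum(wi * xi for wi, xi in zip(row, xs)) for row in W]
    ell = sum(v * v for v in Wx)          # ||W x_s||^2
    dot = sum(v * v for v in xs)          # <x_s, x_s>
    w0 = math.sqrt(abs(k1) / abs(k2) * (1 - k2 * ell) / (1 - k1 * dot))
    return [w0 * x[0]] + Wx

# A point on the hyperboloid M^{2,-1}: x0 = sqrt(1 + ||x_s||^2)
xs = [0.3, 0.4]
x = [math.sqrt(1 + 0.3 ** 2 + 0.4 ** 2)] + xs
assert abs(inner_k(x, x, -1.0) + 1.0) < 1e-9          # x lies on M^{2,-1}

W = [[0.2, 0.1], [0.0, 0.3], [0.1, 0.1]]              # d1 = 2 -> d2 = 3
y = glp_apply(x, W, -1.0, 0.5)                        # hyperbolic -> hyperspherical
assert abs(inner_k(y, y, 0.5) - 1.0 / 0.5) < 1e-9     # y lies on M^{3,0.5}
```

The check succeeds because $w_0$ is constructed precisely so that the rescaled time-like coordinate absorbs the change of curvature and of $\|\mathbf W\mathbf x_s\|$.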
Next, we study theoretical aspects of the proposed GLP. First and foremost, we prove that GLP is \textbf{closed} in Riemannian spaces with different dimensions or curvatures, so that the mapping is done correctly. \newtheorem*{prop3}{Proposition 2} \begin{prop3} $GLP^{d_1,\kappa_1\to d_2,\kappa_2}_\mathbf x\left(\bar{ \mathbf W }\right)\mathbf x \in \mathcal M^{d_2,\kappa_2}$ holds, $\forall \mathbf x \in \mathcal M^{d_1,\kappa_1}$, where $ \bar{\mathbf{W} }=\operatorname{diag}([w, \mathbf W])$. \end{prop3} \vspace{-0.1in} \begin{proof} $\mathbf L=GLP^{d_1,\kappa_1\to d_2,\kappa_2}_\mathbf x(\bar{ \mathbf W })$, and $\langle \mathbf L\mathbf x, \mathbf L\mathbf x \rangle_{\kappa_2} =\frac{1}{\kappa_2}$ holds. Please refer to the Appendix for the details. \end{proof} \noindent Second, we prove that GLP matrices cover all valid Lorentz rotations. That is, the proposed GLP can be considered as a generalization of the Lorentz rotation. \newtheorem*{prop4}{Proposition 3} \begin{prop4} The set of GLP matrices projecting within $\mathcal M^{d_1, \kappa_1}$ is $\mathcal W_\mathbf x=\{\text{GLP}^{d_1,\kappa_1\to d_1,\kappa_1}_\mathbf x( \mathbf W) \}$. The Lorentz rotation set is $\mathcal Q=\{\mathbf R\}$. $\mathcal Q \subseteq \mathcal W_\mathbf x$ holds, $\forall \mathbf x \in \mathcal M^{d_1,\kappa_1}$. \end{prop4} \vspace{-0.1in} \begin{proof} $\forall \mathbf R$, $GLP^{d_1,\kappa_1\to d_1,\kappa_1}_{\mathbf x}(\mathbf R)=\mathbf R$ holds, analogous to Parseval's theorem. Please refer to the Appendix for the details and further theoretical analysis. \end{proof} Now, we are ready to score the similarity between $\mathbf x \in \mathcal M^{d_1,\kappa_1}$ and $\mathbf y \in \mathcal M^{d_2,\kappa_2}$. 
Specifically, we add the bias for GLP, and formulate a \emph{Lorentz Layer} (LL) as follows: \vspace{-0.05in} \begin{equation} \resizebox{0.885\hsize}{!}{$ LL^{d_1,\kappa_1\to d_2,\kappa_2}_\mathbf x\left(\mathbf W, \mathbf b, \left[\begin{array}{c} x_0 \\ \mathbf{x}_s \end{array}\right] \right) =\left[\begin{array}{c} w_0x_0\\ \mathbf{W}\mathbf{x}_s+\mathbf{b} \end{array}\right], $} \vspace{-0.05in} \end{equation} where $\mathbf W \in \mathbb R^{d_2 \times d_1}$ and $\mathbf b \in \mathbb R^{d_2}$ denote the weight and bias, respectively, and $\ell( \mathbf W, \mathbf x_s )=\left\| \mathbf W\mathbf x_s +\mathbf b\right\|^2$ is used in $w_0$. It is easy to verify $LL^{d_1,\kappa_1\to d_2,\kappa_2}_\mathbf x(\mathbf W,\mathbf b, \mathbf x)\in \mathcal M^{d_2,\kappa_2}$. In this way, $\mathbf x \in \mathcal M^{d_1,\kappa_1}$ is comparable with $\mathbf y \in \mathcal M^{d_2,\kappa_2}$ after flowing over a Lorentz layer. Accordingly, we define the generalized Lorentz similarity function as follows, \vspace{-0.05in} \begin{equation} Sim^\mathcal L(\mathbf x, \mathbf y)=d_\mathcal M(LL^{d_1,\kappa_1\to d_2,\kappa_2}_\mathbf x(\mathbf W,\mathbf b, \mathbf x), \mathbf y). \label{sim} \end{equation} \vspace{-0.05in} \subsubsection{Consolidate Knowledge with Intra- \& Inter-distillation.} In Label-free Lorentz Distillation, we jointly perform intra-distillation and inter-distillation with GLP to learn and memorize knowledge for continual graph learning, respectively. \emph{In intra-distillation}, the student distills knowledge from its own intermediate layer, so that contrastive learning is enabled without augmentation. 
Specifically, we first create \emph{high-level view} and \emph{low-level view} for each node by output layer encoding and shallow layer encoding, and then formulate the InfoNCE loss \cite{abs-1807-03748} to evaluate the agreement between different views, \vspace{-0.12in} \begin{equation} \resizebox{1.02\hsize}{!}{$ \mathcal J( \mathbf x_i^{s, L},\mathbf x_i^{s, H}) =-\log \frac{\exp Sim^\mathcal L( \mathbf x_i^{s, L},\mathbf x_i^{s, H})}{\sum_{j=1}^{|\mathcal V|}\mathbb I\{i \neq j\}\exp Sim^\mathcal L( \mathbf x_i^{s, L},\mathbf x_j^{s, H})}, $} \label{self1} \vspace{-0.05in} \end{equation} where $\mathbf x_i^{s, L}$ and $\mathbf x_i^{s, H}$ denote the low-level view and high-level view of the student network, respectively. $\mathbb I\{ \cdot \} \in \{0, 1\}$ is an indicator that returns $1$ iff the condition $(\cdot)$ holds. \emph{In inter-distillation}, the student distills knowledge from the teacher by contrasting their high-level views. We formulate the teacher-student distillation objective via the InfoNCE loss, \vspace{-0.12in} \begin{equation} \resizebox{1.02\hsize}{!}{$ \mathcal J(\mathbf x_i^{t, H}, \mathbf x_i^{s, H}) =-\log \frac{\exp Sim^\mathcal L(\mathbf x_i^{t, H}, \mathbf x_i^{s, H})}{\sum_{j=1}^{|\mathcal V|}\mathbb I\{i \neq j\}\exp Sim^\mathcal L(\mathbf x_i^{t, H}, \mathbf x_j^{s, H})}, $} \vspace{-0.05in} \label{teach1} \end{equation} where $\mathbf x_i^{t, H}$ and $\mathbf x_i^{s, H}$ denote the high-level view of the teacher and the student, respectively. Finally, with $Sim^\mathcal L(\mathbf x, \mathbf y)$ defined in Eq. (\ref{sim}), we formulate the learning objective of RieGrace as follows, \vspace{-0.07in} \begin{equation} \mathcal J_{overall} =\mathcal J_{intra}+\lambda \mathcal J_{inter}, \vspace{-0.09in} \end{equation} where $\lambda$ is a balance weight. 
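The two distillation objectives share the same InfoNCE form and differ only in which views feed the similarity. A minimal sketch, assuming the pairwise similarity scores are precomputed (function names are ours; $sim[i][j]$ stands for $Sim^\mathcal L(\mathbf x_i^{s,L}, \mathbf x_j^{s,H})$ in intra-distillation, or $Sim^\mathcal L(\mathbf x_i^{t,H}, \mathbf x_j^{s,H})$ in inter-distillation):

```python
# Sketch of the InfoNCE objectives and the overall loss J_overall.
import math

def info_nce(sim, i):
    """-log( exp(sim[i][i]) / sum_{j != i} exp(sim[i][j]) ), per the equations."""
    pos = math.exp(sim[i][i])
    neg = sum(math.exp(sim[i][j]) for j in range(len(sim)) if j != i)
    return -math.log(pos / neg)

def overall_loss(sim_intra, sim_inter, lam=1.0):
    """J_overall = J_intra + lambda * J_inter, summed over all nodes."""
    n = len(sim_intra)
    j_intra = sum(info_nce(sim_intra, i) for i in range(n))
    j_inter = sum(info_nce(sim_inter, i) for i in range(n))
    return j_intra + lam * j_inter
```

Note that, per the indicator $\mathbb I\{i \neq j\}$, the denominator ranges over negative pairs only.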
We have contrastive loss $\mathcal J_{intra}=\sum\nolimits_{i=1}^{|\mathcal V|}\mathcal J(\mathbf x_i^{s, L}, \mathbf x_i^{s, H})$ and $\mathcal J_{inter}=\sum\nolimits_{i=1}^{|\mathcal V|}\mathcal J(\mathbf x_i^{t, H}, \mathbf x_i^{s, H})$. We summarize the overall training process of RieGrace in Algorithm 1, whose computational complexity is $O(|\mathcal V|^2)$ in the same order as typical contrastive models in Euclidean space, e.g., \cite{HassaniA20}. However, RieGrace is able to consolidate knowledge of the task graph sequence \emph{in the adaptive Riemannian spaces free of labels}. \begin{table*} \centering \vspace{-0.12in} \resizebox{1.02\linewidth}{!}{ \begin{tabular}{ c l | c c | c c| c c| c c| cc} \toprule & \multirow{2}{*}{\textbf{Method}} & \multicolumn{2}{c|}{\textbf{Cora}} & \multicolumn{2}{c|}{\textbf{Citeseer}} & \multicolumn{2}{c|}{\textbf{Actor}} & \multicolumn{2}{c|}{\textbf{ogbn-arXiv}} & \multicolumn{2}{c}{\textbf{Reddit}} \\ & & PM & FM & PM & FM & PM & FM & PM & FM & PM & FM\\ \toprule \multirow{6}{*}{\rotatebox{90}{ Euclidean } } &JOINT & $ 93.9(0.9) $ & $-$ & $ 79.3(0.8)$ & $ -$ & $57.1(0.9)$ & $-$ & $82.2(0.3)$ & $-$ & $ 96.3(0.7)$ & $-$\\ \cline{2-12} & ERGNN & $ 71.1(2.5)$ & $-34.3(1.0)$ & $65.5(0.3)$ & $-20.4(3.9)$ & $51.4(2.2)$ & $-\ \ 7.2(3.2)$ & $63.5(2.4)$ & $-19.5(1.9)$ & $ 95.3(1.0)$ & $-23.1(1.7)$\\ & TWP & $81.3(3.2)$ & $-14.4(1.5)$ & $69.8(1.5)$ & $-\ \ 8.9(2.6)$ & $54.0(1.8)$ & $-\ \ 2.1(1.9)$ & $75.8(0.5)$ & $-\ \ 5.9(0.3) $ & $95.4(1.4)$ & $-\ \ \note{1.4}(1.5)$ \\ & HPN & $93.6(1.5)$ & $-\ \ \note{1.7}(0.7)$ & $79.0(0.9)$ & $-\ \ \note{1.5}(0.3)$ & $56.8(1.4)$ & $-\ \ 1.5(0.9)$ & $81.2(0.7)$ & $+\ \ \note{0.7}(0.1)$ & $95.3(0.6)$ & $-\ \ 3.6(1.0)$ \\ & FGN & $85.5(1.4)$ & $-\ \ 2.3(1.0)$ & $73.3(0.9)$ & $-\ \ 2.2(1.7)$ & $53.6(0.7)$ & $-\ \ 3.8(1.6)$ & $49.4(0.3)$ & $-14.8(2.2)$ & $79.0(1.8)$ & $-12.2(0.4)$ \\ & MSCGL & $79.8(2.7)$ & $-\ \ 4.9(1.6)$ & $68.7(2.4)$ & $-\ \ 1.8(0.1)$ & $55.9(3.3)$ & $+\ \ \note{1.3}(1.7)$ & 
$64.8(1.2)$ & $-\ \ 1.9(1.0)$ & $96.1(2.5)$ & $-\ \ 1.9(0.3)$ \\ & DyGRAIN & $82.5(1.0)$ & $-\ \ 3.7(0.2)$ & $69.2(0.6)$ & $-\ \ 5.5(0.3)$ & $56.1(1.2)$ & $-\ \ 2.9(0.3)$ & $71.9(0.2)$ & $-\ \ 4.6(0.1)$ & $93.3(0.4)$ & $-\ \ 3.1(0.2)$ \\ \midrule \multirow{7}{*}{\rotatebox{90}{ Riemannian }} & HGCN & $90.6(1.8)$ & $-33.1(2.3)$ & $80.8(0.9)$ & $-21.6(0.3)$ & $56.1(1.7)$ & $-\ \ 6.3(1.6)$ & $82.0(1.5)$ & $-12.7(1.6)$ & $96.7(1.2)$ & $-33.7(0.9)$ \\ & HGCNwF & $88.7(2.5)$ & $-34.6(4.1)$ & $76.1(3.3)$ & $-19.9(1.5)$ & $52.8(2.9)$ & $-\ \ 8.2(2.5)$ & $78.9(2.4)$ & $-13.6(0.3)$ & $90.5(3.3)$ & $-25.0(1.7)$ \\ & LGCN & $91.7(0.9)$ & $-11.9(1.9)$ & $81.5(1.2)$ &$-\ \ 9.3(2.5)$ & $\note{60.2}(3.3)$ & $-11.2(0.2)$ & $\note{82.5}(0.2)$ & $-20.8(1.1)$ & $96.1(2.4)$ & $-\ \ 9.6(2.1)$ \\ & LGCNwF & $92.3(2.0)$ & $-\ \ 5.5(1.2)$& $80.3(0.7)$ & $-10.2(0.7)$ & $57.5(1.5)$ & $-10.9(2.4)$ & $81.3(1.8)$ & $-18.2(1.9)$ & $95.5(0.6)$ & $-\ \ 4.9(1.5)$ \\ & $\kappa$-GCN & $\note{93.9}(0.3)$ & $-22.0(0.4)$ & $79.8(2.9)$ & $-15.7(1.6)$ & $56.3(3.6)$ & $-\ \ 3.1(0.9)$ & $81.6(0.3)$ & $-\ \ 9.8(1.2)$ & $\note{96.7}(2.7)$ & $-18.6(3.3)$ \\ & $\kappa$-GCNwF & $92.0(1.9)$ & $-11.3(2.4)$ & $\note{81.0}(0.5)$ & $-\ \ 6.1(1.2)$ & $59.7(2.0)$ & $+\ \ 0.6(0.3)$ & $79.9(1.9)$ & $-\ \ 5.1(2.0)$ & $94.1(1.0)$ & $-11.5(2.4)$ \\ \cline{2-12} & \textbf{RieGrace} & $\mathbf{95.2}(0.8)$ & $-\ \ \mathbf{1.2}(0.7)$&$\mathbf{83.6}(2.4)$ &$-\ \ \mathbf{1.3}(0.6)$ & $\mathbf{61.9}(1.2)$ & $+\ \ \mathbf{1.9}(1.1)$ & $\mathbf{83.9}(0.3)$ & $+\ \ \mathbf{1.2}(0.5)$ & $\mathbf{97.9}(1.8)$ & $-\ \ \mathbf{1.1}(1.5)$ \\ \bottomrule \end{tabular} } \vspace{-0.09in} \caption{Node classification on Cora, Citeseer, Actor, ogbn-arXiv and Reddit. We report both PM(\%) and FM(\%). Confidence intervals are given in brackets. The best scores are in \textbf{bold}, and the second \underline{underlined}. 
} \vspace{-0.15in} \label{results} \end{table*} \vspace{-0.1in} \section{Experiment} \vspace{-0.05in} We conduct extensive experiments on a variety of datasets with the aim to answer the following research questions (\emph{RQs}): \vspace{-0.02in} \begin{itemize} \item \textbf{\emph{RQ1}}: How does the proposed \emph{RieGrace} perform? \item \textbf{\emph{RQ2}}: How does the proposed component, either \emph{CurvNet} or \emph{GLP}, contribute to the success of RieGrace? \item \textbf{\emph{RQ3}}: How does the \emph{curvature} change over the graph sequence in continual learning? \end{itemize} \vspace{-0.11in} \subsubsection{Datasets.} We choose five benchmark datasets, i.e., \textbf{Cora} and \textbf{Citeseer} \cite{DBLP:journals/aim/SenNBGGE08}, \textbf{Actor} \cite{DBLP:conf/kdd/TangSWY09}, \textbf{ogbn-arXiv} \cite{DBLP:conf/nips/MikolovSCCD13} and \textbf{Reddit} \cite{hamilton2017inductive}. The setting of the graph sequence (task continuum) on Cora, Citeseer, Actor and ogbn-arXiv follows \citet{DBLP:journals/corr/abs-2111-15422}, and the setting on Reddit follows \citet{DBLP:conf/aaai/0002C21}. \vspace{-0.07in} \subsubsection{Euclidean Baseline.} We choose several strong baselines, i.e., \textbf{ERGNN} \cite{DBLP:conf/aaai/0002C21}, \textbf{TWP} \cite{DBLP:conf/aaai/LiuYW21}, \textbf{HPN} \cite{DBLP:journals/corr/abs-2111-15422}, \textbf{FGN} \cite{FGN}, \textbf{MSCGL} \cite{MCGL-NAS} and \textbf{DyGRAIN} \cite{DBLP:conf/ijcai/KimYK22}. ERGNN, TWP and DyGRAIN are implemented with the GAT backbone \cite{velickovic2018graph}, which generally achieves the best results as reported. We also include joint training with GAT (\textbf{JOINT}) that trains all the tasks jointly. \emph{Since no catastrophic forgetting exists, it approximates the upper bound in Euclidean space w.r.t. GAT}. MSCGL is designed for multimodal graphs, and we use the corresponding unimodal version to fit the benchmarks. 
Existing methods are trained in a supervised fashion; to our knowledge, we propose the first self-supervised model for continual graph learning. \vspace{-0.07in} \subsubsection{Riemannian Baseline.} In the literature, there is no continual graph learner in Riemannian space. Alternatively, we fine-tune the offline Riemannian GNNs in each learning session, in order to show the forgetting of continual learning in Riemannian space. Specifically, we choose \textbf{HGCN} \cite{HGCN}, \textbf{LGCN} \cite{ZhangWSLS21}, and $\mathbf \kappa$-\textbf{GCN} \cite{BachmannBG20}. In addition, we implement the supervised LwF \cite{DBLP:journals/pami/LiH18a}, originally proposed for CNNs, on these Riemannian GNNs (denoted by the -\textbf{wF} suffix), in order to show that adapting existing methods to Riemannian GNNs tends to result in inferior performance. \vspace{-0.07in} \subsubsection{Evaluation Metric.} Following \citet{MCGL-NAS,DBLP:conf/aaai/0002C21,DBLP:conf/nips/Lopez-PazR17}, we utilize Performance Mean (PM) and Forgetting Mean (FM) to measure the learning and memorizing abilities, respectively. Negative FM means the existence of forgetting, and positive FM indicates positive knowledge transfer between tasks. \vspace{-0.07in} \subsubsection{Euclidean Input.} The input features $\mathbf x$ are Euclidean by default. To bridge this gap, we formulate an input transformation for Riemannian models, $\Gamma_\kappa: \mathbb R^d \to \mathcal M^{d, \kappa}$. Specifically, we have $\Gamma_\kappa(\mathbf x)=\exp_\mathcal O^\kappa([0||\mathbf x])$, where $\kappa$ is either given by CurvNet in RieGrace, or set as a parameter in other models. \vspace{-0.07in} \subsubsection{Model Configuration.} In our model, we stack the convolutional layer twice with a 2-layer CurvNet. The balance weight is $\lambda=1$. As a self-supervised model, RieGrace first learns encodings without labels, and then the encodings are directly utilized for training and testing, similar to \citet{VelickovicFHLBH19}. 
Grid search is performed for hyperparameters, e.g., the learning rate over $[0.001, 0.005, 0.008, 0.01]$. \noindent (\emph{Appendix} gives the details on datasets, baselines, metrics, implementation as well as the further mathematics.) \vspace{-0.1in} \subsection{Main Results (\emph{RQ1})} \vspace{-0.02in} Node classification is utilized as the learning task for the evaluation. Traditional classifiers operate in Euclidean space and cannot be applied to Riemannian spaces due to the essential distinction in geometry. For Riemannian methods, we extend the classification method proposed in \citet{HGNN} to Riemannian spaces of arbitrary curvature with the distance metric $d_\mathcal M$ given in Table \ref{tab:ops}. For fair comparisons, we perform $10$ independent runs for each model, and report the mean value with $95\%$ confidence interval in Table \ref{results}. The dimension is set to $16$ for Riemannian models, and follows the original settings for Euclidean models. As shown in Table \ref{results}, traditional continual learning methods suffer from forgetting in general, though MSCGL, HPN, $\kappa$-GCNwF and our RieGrace have positive knowledge transfer in a few cases. Our self-supervised RieGrace achieves the best results in both PM and FM, even outperforming the supervised models. The reason is two-fold: i) RieGrace successively matches each task graph with adaptive Riemannian spaces, improving the learning ability. ii) RieGrace learns from the teacher to preserve past knowledge in the label-free Lorentz distillation, improving the memorizing ability. 
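As a reading aid for the PM/FM columns reported above, the metric computation can be sketched as follows, assuming the common continual-learning definitions from the cited literature (PM: mean accuracy over all tasks after the final task; FM: mean drop from the accuracy measured right after each task was learned); the exact protocol follows the cited works.

```python
# Hedged sketch of the PM/FM protocol; acc[t][i] is the accuracy on task i
# after training on task t (both zero-indexed).
def performance_mean(acc):
    """PM: mean accuracy over all tasks after the final task."""
    final = acc[-1]
    return sum(final) / len(final)

def forgetting_mean(acc):
    """FM: mean of (final accuracy - accuracy right after learning) over all
    but the last task; negative values indicate forgetting."""
    T = len(acc)
    diffs = [acc[-1][i] - acc[i][i] for i in range(T - 1)]
    return sum(diffs) / len(diffs)

# Three tasks; accuracy on earlier tasks degrades after later training.
acc = [
    [90.0,  0.0,  0.0],
    [85.0, 88.0,  0.0],
    [80.0, 86.0, 92.0],
]
# performance_mean(acc) = 86.0, forgetting_mean(acc) = -6.0 (forgetting)
```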
\begin{table} \vspace{-0.01in} \centering \resizebox{0.91\linewidth}{!}{ \begin{tabular}{ c | c c| c c } \toprule \multirow{2}{*}{\textbf{Variant}} & \multicolumn{2}{c|}{ \textbf{Citeseer}} & \multicolumn{2}{c}{\textbf{Actor} } \\ & PM & FM & PM & FM \\ \toprule $\mathbb{S}$w/oL & $66.7(0.3)$ & $-\ \ 6.7(0.9) $ & $51.6(0.8)$ & $-7.1(0.7) $ \\ $\mathbb{S}$ & $70.2(1.5)$ & $-\ \ 5.3(1.0) $ & $53.4(3.1)$ & $-0.9(0.2) $ \\ \midrule $\mathbb{E}$ & $69.8(0.9)$ & $-11.9(0.3)$ & $52.9(2.7)$ & $-4.3(1.6)$ \\ \midrule $\mathbb{H}$w/oL & $77.1(3.5)$ & $-\ \ 8.2(0.8)$ & $53.3(1.5)$ & $-8.9(0.7)$ \\ $\mathbb{H}$ & $80.9(0.2)$ & $-\ \ 5.7(2.1)$ & $56.6(2.4)$ & $-4.8(0.1)$ \\ \midrule $\mathcal{M}$w/oL & $\note{81.2}(1.8)$ & $-\ \ \note{3.9}(2.2)$ & $\note{58.5}(0.6)$ & $+\note{0.5}(1.3)$ \\ \textbf{Full} & $\mathbf{83.6}(2.4)$ &$-\ \ \mathbf{1.3}(0.6)$ & $\mathbf{61.9}(1.2)$ & $+ \mathbf{1.9}(1.1)$ \\ \bottomrule \end{tabular} } \vspace{-0.09in} \caption{Ablation study on Citeseer and Actor. Confidence intervals are given in brackets. The best scores are in \textbf{bold}.} \vspace{-0.24in} \label{ablation} \end{table} \vspace{-0.10in} \subsection{Ablation Study (\emph{RQ2})} \vspace{-0.02in} We conduct an ablation study to show how each proposed component contributes to the success of RieGrace. To this end, we design two kinds of variants described as follows: \noindent{\emph{i) To verify the importance of GLP directly mapping between Riemannian spaces,}} we design the variants that involve a tangent space for the mapping, denoted by the -w/oL suffix. Specifically, we replace the Lorentz layer by logarithmic and exponential maps in the corresponding models. \noindent{\emph{ii) To verify the importance of CurvNet supporting curvature adaptation to any positively or negatively curved spaces,}} we design the variants restricted to hyperspherical, Euclidean and hyperbolic spaces, denoted by $\mathbb S$, $\mathbb E$ and $\mathbb H$, respectively. 
Specifically, we replace CurvNet by the parameter $\kappa$ of the given sign, and we use the corresponding Euclidean operators for the $\mathbb E$ variant. We have six variants in total. We report their PM and FM in Table \ref{ablation}, and find that: i) The proposed RieGrace with GLP beats the tangent space-based variants. It suggests that introducing an additional tangent space weakens the performance for contrastive distillation. ii) The proposed RieGrace with CurvNet outperforms the constrained-space variants ($\mathbb S$, $\mathbb E$ or $\mathbb H$). We will give further discussion in the case study. \vspace{-0.07in} \subsection{Case Study and Discussion (\emph{RQ3})} \vspace{-0.02in} We conduct a case study on \textbf{ogbn-arXiv} to investigate the curvature over the graph sequence in continual learning. We begin by evaluating the effectiveness of CurvNet. To this end, we leverage the metric of embedding distortion, which is minimized with the proper curvature \cite{DBLP:conf/icml/SalaSGR18}. Specifically, given an embedding $\Psi: v_i \in \mathcal V \to \mathbf x_i \in \mathcal M^{d,\kappa}$ on a graph $G$, the embedding distortion is defined as $D_{G, \mathcal M}=\frac{1}{|\mathcal V|^2} \sum\nolimits_{i,j \in \mathcal V}\left| 1-\frac{d_\mathcal M(\mathbf x_i, \mathbf x_j)}{d_G(v_i, v_j)}\right| $, where $d_\mathcal M(\mathbf x_i, \mathbf x_j)$ and $d_G(v_i, v_j)$ denote the embedding distance and graph distance, respectively. The graph distance $d_G(v_i, v_j)$ is defined on the shortest path between $v_i$ and $v_j$ regarding $d_\mathcal M$, e.g., if the shortest path between $v_A$ and $v_B$ is $v_A \to v_C \to v_B$, then we have $d_G(v_A, v_B)=d_\mathcal M(\mathbf x_A, \mathbf x_C)+d_\mathcal M(\mathbf x_C, \mathbf x_B)$. We compare CurvNet with the combinational method proposed in \cite{BachmannBG20}, termed ComC. 
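The distortion metric above translates directly into code. A sketch, assuming the pairwise distances are precomputed and the diagonal terms ($i=j$) are excluded from the sum:

```python
# Sketch of the embedding distortion D_{G,M}; d_M[i][j] is the embedding
# distance and d_G[i][j] the graph distance (over the shortest path w.r.t.
# d_M, as defined above). Normalization by |V|^2 follows the formula.
def distortion(d_M, d_G):
    n = len(d_M)
    total = sum(abs(1.0 - d_M[i][j] / d_G[i][j])
                for i in range(n) for j in range(n) if i != j)
    return total / (n * n)

# Toy example: two nodes whose embedding distance halves the graph distance.
d_M = [[0.0, 1.0], [1.0, 0.0]]
d_G = [[0.0, 2.0], [2.0, 0.0]]
# distortion(d_M, d_G) = (|1 - 0.5| + |1 - 0.5|) / 4 = 0.25
```

A lower value indicates that the chosen curvature better matches the graph, which is the criterion used to compare CurvNet against ComC and ZeroC.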
We report the distortion $D_{G, \mathcal M}$ in $16$-dimensional Riemannian spaces with the curvature estimated by CurvNet and ComC in Table \ref{curvature}, where $D_{G, \mathcal M}$ of $128$-dimensional Euclidean space is also listed (ZeroC). As shown in Table \ref{curvature}, our CurvNet gives a better curvature estimation than ComC, and ZeroC results in larger distortion even with high dimension. Next, we estimate the curvature over the graph sequence via CurvNet, which is jointly learned with RieGrace. We illustrate the shape of the Riemannian space with the corresponding curvatures with a $2$-dimensional visualization on ogbn-arXiv in Figure \ref{manifold}. As shown in Figure \ref{manifold}, rather than remaining in a single type of space, \emph{the underlying geometry varies from positively curved hyperspherical spaces to negatively curved hyperbolic spaces in the graph sequence}. It suggests the necessity of curvature adaptation supporting the shift between any positive and negative values. The observation in both Table \ref{curvature} and Figure \ref{manifold} indeed motivates our study, and essentially explains the inferior performance of existing Euclidean methods and the superior performance of our RieGrace. \begin{table} \centering \vspace{-0.05in} \resizebox{0.9\linewidth}{!}{ \begin{tabular}{ c | c c c } \toprule $D_{G, \mathcal M}$ & Task Graph 1 & Task Graph 2 & Task Graph 3 \\ \toprule CurvNet & $ \mathbf{0.435} (0.027) $ & $\mathbf{0.490}(0.010) $ & $\mathbf{0.367}(0.082)$ \\ ComC & $ 0.507 (0.012) $ & $0.653 (0.007) $ & $ 0.524(0.033)$ \\ ZeroC & $ 5.118(0.129) $ & $3.967(0.022) $ & $ 4.025(0.105)$ \\ \bottomrule \end{tabular} } \vspace{-0.05in} \caption{Embedding distortion $D_{G, \mathcal M}$ with different curvatures on ogbn-arXiv. 
Confidence intervals are given in brackets.} \vspace{-0.2in} \label{curvature} \end{table} \begin{figure} \vspace{-0.1in} \centering \resizebox{1.02\linewidth}{!}{ \subfigure[$G_1$, $\kappa=0.227$]{ \includegraphics[width=0.325\linewidth]{sub1}} \hspace{-0.025\linewidth} \subfigure[$G_2$, $\kappa=-0.536$]{ \includegraphics[width=0.325\linewidth]{sub2}} \hspace{-0.025\linewidth} \subfigure[$G_3$, $\kappa=-1.073$]{ \includegraphics[width=0.325\linewidth]{sub3}} } \vspace{-0.18in} \caption{Illustration of the Riemannian spaces in the task graphs $G_t$ on ogbn-arXiv. $\kappa$ is the learnt curvature.} \vspace{-0.22in} \label{manifold} \end{figure} \vspace{-0.1in} \section{Related Work} \vspace{-0.03in} \subsubsection{Continual Graph Learning.} Existing studies can be roughly divided into three categories, i.e., replay (or rehearsal), regularization and architectural methods \cite{DBLP:journals/corr/abs-2202-10688}. Replay methods retrain representative samples in the memory or pseudo-samples to avoid catastrophic forgetting, e.g., ERGNN \cite{DBLP:conf/aaai/0002C21} introduces a well-designed strategy to select the samples. HPN \cite{DBLP:journals/corr/abs-2111-15422} extends knowledge with the prototypes learnt from old tasks. Regularization methods append a regularization term to the loss to preserve as much past knowledge as possible, e.g., TWP \cite{DBLP:conf/aaai/LiuYW21} preserves important parameters for both task-related and topology-related goals. MSCGL \cite{MCGL-NAS} is designed for multimodal graphs with neural architecture search. DyGRAIN \cite{DBLP:conf/ijcai/KimYK22} explores the adaptation of receptive fields while distilling knowledge. Architectural methods modify the neural architecture of the graph model itself, such as FGN \cite{FGN}. Meanwhile, continual graph learning has been applied to recommendation systems \cite{DBLP:conf/cikm/XuZGGTC20}, traffic flow prediction \cite{DBLP:conf/ijcai/ChenWX21}, etc. 
In addition, \citet{DBLP:conf/cikm/WangSWW20} mainly focus on a related but different problem under the time-incremental setting. Recently, \citet{DBLP:conf/wsdm/TanDG022,DBLP:journals/corr/abs-2205-13954} study few-shot class-incremental learning on graphs, whose setting is essentially different from ours. Since no existing work addresses self-supervised continual graph learning, we are devoted to bridging this gap in this work. \vspace{-0.05in} \subsubsection{Riemannian Representation Learning.} It has achieved great success in a variety of applications \cite{mathieu2019continuous,HAN,nagano2019wrapped,DBLP:conf/icdm/0008Z0WDSY20}. Here, we focus on Riemannian models on graphs. In hyperbolic space, \citet{nickel2017poincare,suzuki2019hyperbolic} introduce shallow models, while HGCN \cite{HGCN}, HGNN \cite{HGNN} and LGCN \cite{ZhangWSLS21} generalize convolutional networks with different formalisms under the static setting. Recently, HVGNN \cite{HVGNN} and HTGN \cite{HTGN} extend hyperbolic graph neural networks to temporal graphs. Beyond hyperbolic space, \citet{DBLP:conf/icml/SalaSGR18} study the matrix manifold of Riemannian spaces. $\kappa$-GCN \cite{BachmannBG20} extends GCN to constant-curvature spaces with the $\kappa$-stereographical model, but its formalism cannot be applied to our problem. \citet{DBLP:conf/www/YangCPLYX22} model the graph in the dual space of Euclidean and hyperbolic ones. \citet{GuSGR19} and \citet{DBLP:conf/www/WangWSWNAXYC21} explore the mixed-curvature spaces, and \citet{SelfMGNN} propose the first self-supervised GNN in mixed-curvature spaces. \citet{NEURIPS2021_b91b1fac} and \citet{DBLP:conf/kdd/XiongZNXP0S22} study graph learning on a kind of pseudo-Riemannian manifold, the ultrahyperbolic space. Recently, \citet{DBLP:conf/cikm/0008YPY22} propose a novel GNN on general Riemannian manifolds with time-varying curvature. 
All existing Riemannian models adopt offline training, and, to the best of our knowledge, we propose the first continual graph learner in Riemannian space. \vspace{-0.07in} \section{Conclusion} \vspace{-0.02in} In this paper, we propose the first self-supervised continual graph learner in adaptive Riemannian spaces, RieGrace. Specifically, we first formulate a unified GNN coupled with CurvNet, so that the Riemannian space is shaped by the learnt curvature adaptive to each task graph. Then, we propose the Label-free Lorentz Distillation approach to consolidate knowledge without catastrophic forgetting, where we perform contrastive distillation in Riemannian spaces with the proposed GLP. Extensive experiments on the benchmark datasets show the superiority of RieGrace.
\section*{Acknowledgments} The authors thank anonymous reviewers for comments that improved the readability and correctness of the paper. \bibliographystyle{plain} \section{Definitions} Section~\ref{sec:grammar-defns} defines standard context-free grammars, as well as a special type called \emph{symbol-pair grammars}, used in Section~\ref{sec:expressive-power}. Section~\ref{sec:is-defns} defines insertion systems, with a small number of modifications from the definitions given in~\cite{Dabby-2013a} designed to ease readability. Section~\ref{sec:expressive-power-defn} formalizes the notion of expressive power used in~\cite{Dabby-2013a}. \subsection{Grammars} \label{sec:grammar-defns} A \emph{context-free grammar} $\mathcal{G}$ is a 4-tuple $\mathcal{G} = (\Sigma, \Gamma, \Delta, S)$. The sets $\Sigma$ and $\Gamma$ are the \emph{terminal} and \emph{non-terminal symbols} of the grammar. The set $\Delta$ consists of \emph{production rules} or simply \emph{rules}, each of the form $L \rightarrow R_1 R_2 \cdots R_j$ with $L \in \Gamma$ and $R_i \in \Sigma \cup \Gamma$. Finally, the symbol $S \in \Gamma$ is a special \emph{start symbol}. The \emph{language of $\mathcal{G}$}, denoted $L(\mathcal{G})$, is the set of finite strings that can be \emph{derived} by starting with $S$, and repeatedly replacing a non-terminal symbol found on the left-hand side of some rule in $\Delta$ with the sequence of symbols on the right-hand side of the rule. The \emph{size} of $\mathcal{G}$ is $|\Delta|$, the number of rules in $\mathcal{G}$. If every rule in $\Delta$ is of the form $L \rightarrow R_1 R_2$ or $L \rightarrow t$, with $R_1, R_2 \in \Gamma$ and $t \in \Sigma$, then the grammar is said to be in \emph{Chomsky normal form}. 
A \emph{symbol-pair grammar}, used in Section~\ref{sec:expressive-power}, is a context-free grammar in Chomsky normal form such that each non-terminal symbol is in fact a symbol pair $(a, d)$, and each production rule has the form $(a, d) \rightarrow (a, b) (c, d)$ or $(a, d) \rightarrow t$. \subsection{Insertion systems} \label{sec:is-defns} Dabby and Chen~\cite{Dabby-2013b,Dabby-2013a} describe both a physical implementation and formal model of insertion systems. We briefly review the physical implementation, then give formal definitions. \textbf{Physical implementation.} Short strands of DNA, called \emph{monomers}, are bonded via complementary base sequences to form linear sequences of monomers called \emph{polymers}. Additional monomers are \emph{inserted} into the gap between two adjacent monomers, called an \emph{insertion site}, by bonding to the adjacent monomers and breaking the existing bond between them via a strand displacement reaction (see Figure~\ref{fig:figure}). Each insertion then creates two new insertion sites for additional monomers to be inserted, allowing \emph{construction} of arbitrarily long polymers. \begin{figure}[ht] \centering \includegraphics[scale=0.9]{figure.pdf} \caption{The two types of insertions. Each symbol denotes a DNA subsequence or its complement. The directionality of DNA and hairpin design using generic subsequence symbols $z$, $z^*$ creates these distinct types. This figure is loosely based on Figures~2 and~3 of~\cite{Dabby-2013a}.} \label{fig:figure} \end{figure} Each monomer consists of four base sequences that form specific bonds, and only two of these can form bonds during insertion due to the monomer's hairpin design. This design gives each insertion site or monomer one of two \emph{signs} such that a monomer can only be inserted into a site with identical sign. \textbf{Formal model.} An \emph{insertion system} $\mathcal{S}$ is a 4-tuple $\mathcal{S} = (\Sigma, \Delta, Q, R)$. 
The first element, $\Sigma$, is a set of symbols. Each symbol $s \in \Sigma$ has a \emph{complement} $s^*$. We denote the complement of a symbol $s$ as $\overline{s}$, i.e. $\overline{s} = s^*$ and $\overline{s^*} = s$. The set $\Delta$ is a set of \emph{monomer types}, each assigned a \emph{concentration}. Each monomer is specified by a signed quadruple $(a, b, c, d)^+$ or $(a, b, c, d)^-$, where $a, b, c, d \in \Sigma \cup \{s^* : s \in \Sigma\}$, and is \emph{positive} or \emph{negative} according to its sign. The concentration of each monomer type is a real number between~0 and~1, and the sum of all concentrations is at most~1. The last two elements, $Q = (a, b)$ and $R = (c, d)$, are special two-symbol monomers that together form the \emph{initiator} of $\mathcal{S}$. It is required that either $\overline{a} = d$ or $\overline{b} = c$. The \emph{size} of $\mathcal{S}$ is $|\Delta|$, the number of monomer types in $\mathcal{S}$. A \emph{polymer} is a sequence of monomers $Q m_1 m_2 \dots m_n R$ where $m_i \in \Delta$ such that for each pair of adjacent monomers $(w, x, a, b) (c, d, y, z)$, either $\overline{a} = d$ or $\overline{b} = c$.\footnote{For readability, the signs of monomers belonging to a polymer are omitted.} The \emph{length} of a polymer is the number of monomers it contains (including $Q$ and $R$). The gap between every pair of adjacent monomers $(w, x, a, b) (c, d, y, z)$ in a polymer is an \emph{insertion site}, written $(a, b) (c, d)$. Monomers can be \emph{inserted} into an insertion site $(a, b) (c, d)$ according to the following rules (seen in Figure~\ref{fig:figure}): \begin{enumerate} \item If $\overline{a} = d$ and $\overline{b} \neq c$, then any monomer $(\overline{b}, e, f, \overline{c})^+$ can be inserted. 
\item If $\overline{a} \neq d$ and $\overline{b} = c$, then any monomer $(e, \overline{a}, \overline{d}, f)^-$ can be inserted.\footnote{In~\cite{Dabby-2013a}, this rule is described as a monomer $(\overline{d}, f, e, \overline{a})^-$ that is inserted into the polymer as $(e, \overline{a}, \overline{d}, f)$.} \end{enumerate} A \emph{positive} or \emph{negative} insertion site accepts only positive or negative monomers, respectively. A \emph{dead} insertion site accepts no monomers and has the form $(a, b) (\overline{b}, \overline{a})$. An \emph{insertion sequence} is a sequence of insertions, each specified by the site and monomer types, such that each site is created by the previous insertion. A monomer is inserted after time $t$, where $t$ is an exponential random variable with rate equal to the concentration of the monomer type. The set of all polymers \emph{constructed} by an insertion system is recursively defined as any polymer constructed by inserting a monomer into a polymer constructed by the system, beginning with the initiator. Note that the insertion rules guarantee by induction that for every insertion site $(a, b) (c, d)$, either $\overline{a} = d$ or $\overline{b} = c$. We say that a polymer is \emph{terminal} if no monomer can be inserted into any insertion site in the polymer, and that an insertion system \emph{deterministically constructs} a polymer $P$ (i.e. is \emph{deterministic}) if every polymer constructed by the system is either $P$ or is non-terminal and has length less than that of $P$ (i.e. can become $P$). The \emph{string representation} of a polymer is the sequence of symbols found on the polymer from left to right, e.g. $(a, b) (b^*, a, d, c) (c^*, a)$ has string representation $abb^*adcc^*a$. We call the set of string representations of all terminal polymers of an insertion system $\mathcal{S}$ the \emph{language} of $\mathcal{S}$, denoted $L(\mathcal{S})$. 
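The two insertion rules can be captured by a small checker. This is a sketch under our own encoding (symbols as strings, with the complement of $s$ written as `s*`), not code from~\cite{Dabby-2013a}:

```python
# Checker for the two insertion rules: a site (a,b)(c,d) is positive,
# negative, or dead, and accepts only monomers of the matching sign.
def comp(s):
    """Complement: a <-> a*."""
    return s[:-1] if s.endswith("*") else s + "*"

def insertable(site, monomer, sign):
    """True iff the signed monomer (w, x, y, z) fits insertion site (a,b)(c,d)."""
    (a, b), (c, d) = site
    w, x, y, z = monomer
    if comp(a) == d and comp(b) != c:        # rule 1: positive site
        return sign == "+" and w == comp(b) and z == comp(c)
    if comp(a) != d and comp(b) == c:        # rule 2: negative site
        return sign == "-" and x == comp(a) and y == comp(d)
    return False                             # dead site, e.g. (a,b)(b*,a*)

# Positive site (a,b)(c,a*) accepts (b*, e, f, c*)^+ for any e, f.
print(insertable((("a", "b"), ("c", "a*")), ("b*", "e", "f", "c*"), "+"))  # True
```

Note how the dead-site form $(a, b)(\overline{b}, \overline{a})$ satisfies neither rule's precondition, so no monomer is accepted there.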
\subsection{Expressive power} \label{sec:expressive-power-defn} Intuitively, a system \emph{expresses} another if the terminal polymers or strings created by the system ``look'' like the terminal polymers or strings created by the other system. In the simplest instance, a symbol-pair grammar $\mathcal{G}'$ is said to \emph{express} a context-free grammar $\mathcal{G}$ if $L(\mathcal{G}') = L(\mathcal{G})$. Similarly, a grammar $\mathcal{G}$ is said to \emph{express} an insertion system $\mathcal{S}$ if $L(\mathcal{S}) = L(\mathcal{G})$, i.e. if the set of string representations of the terminal polymers of $\mathcal{S}$ equals the language of $\mathcal{G}$. An insertion system $\mathcal{S} = (\Sigma', \Delta', Q', R')$ is said to express a grammar $\mathcal{G} = (\Sigma, \Gamma, \Delta, S)$ if there exists a function $g : \Sigma' \cup \{s^* : s \in \Sigma'\} \rightarrow \Sigma \cup \{\varepsilon\}$ and integer $\kappa$ such that \begin{enumerate} \item $\{g(s_1') g(s_2') \dots g(s_n') : s_1' s_2' \dots s_n' \in L(\mathcal{S})\} = L(\mathcal{G})$. \item No $\kappa$ consecutive symbols of a string in $L(\mathcal{S})$ are mapped to $\varepsilon$ by $g$. \end{enumerate} The string representations of polymers have both complementary symbol and length requirements that imply they are unable to capture even simple languages, e.g. $\{aa \dots a\}$, despite intuition and claims to the contrary, e.g. Theorem 3.2 of~\cite{Dabby-2013a} that claims insertion systems express all regular languages. Allowing $g$ to output $\varepsilon$ enables locally ``cleaning up'' string representations to eliminate complementary pairs and other debris, while $\kappa$ ensures there is a limit on the amount that can be ``swept under the rug'' locally. A feasible stricter definition could instead use a function $g: \Delta' \rightarrow \Sigma$ (monomer types of $\mathcal{S}$ to terminal symbols of $\mathcal{G}$); it is open whether the results presented here would hold under such a definition. 
\section{The Expressive Power of Insertion Systems} \label{sec:expressive-power} Dabby and Chen proved that any insertion system has a context-free grammar expressing it. They construct such a grammar by creating a non-terminal for every possible insertion site and a production rule for every monomer type insertable into the site. For instance, the insertion site $(a,b)(c^*,a^*)$ and monomer type $(b^*, d^*, e, c)^+$ induce non-terminal symbol $A_{(a, b)(c^*, a^*)}$ and production rule $A_{(a, b)(c^*, a^*)} \rightarrow A_{(a,b)(b^*, d^*)} A_{(e, c)(c^*,a^*)}$. Here we give a reduction in the other direction, resolving in the affirmative the question posed by Dabby and Chen of whether context-free grammars and insertion systems have the same expressive power: \begin{theorem} \label{thm:IS-express-CFG} For every context-free grammar $G$, there exists an insertion system that expresses $G$. \end{theorem} The primary difficulty in proving Theorem~\ref{thm:IS-express-CFG} lies in developing a way to simulate the ``complete'' replacement that occurs during derivation with the ``incomplete'' replacement that occurs at an insertion site during insertion. For instance, $bcAbc \Rightarrow bcDDbc$ via a production rule $A \rightarrow DD$, and $A$ is completely replaced by $DD$. On the other hand, inserting a monomer $(b^*, d, d, c)^+$ into a site $(a, b) (c^*, a^*)$ yields the consecutive sites $(a, b) (b^*, d)$ and $(d, c) (c^*, a^*)$, with $(a, b) (c^*, a^*)$ only partially replaced -- the left side of the first site and the right side of the second site together form the initial site. This behavior constrains how replacement can be captured by insertion sites, and the $\kappa$ parameter of the definition of expression (Section~\ref{sec:expressive-power-defn}) prevents eliminating the issue via additional insertions. We overcome this difficulty by proving Theorem~\ref{thm:IS-express-CFG} in two steps. 
First, we prove that symbol-pair grammars, a constrained type of grammar with incomplete replacements, are able to express context-free grammars (Lemma~\ref{lem:PG-express-CFG}). Second, we prove symbol-pair grammars can be expressed by insertion systems (Lemma~\ref{lem:IS-express-PG}). \begin{lemma} \label{lem:PG-express-CFG} For every context-free grammar $\mathcal{G}$, there exists a symbol-pair grammar that expresses $\mathcal{G}$. \end{lemma} \begin{proof} Let $\mathcal{G} = (\Sigma, \Gamma, \Delta, S)$. Let $n = |\Gamma|$. Start by putting $\mathcal{G}$ into Chomsky normal form and then relabeling the non-terminals of $\mathcal{G}$ to $A_0, A_1, \dots, A_{n-1}$, with $S = A_0$. Now we define a symbol-pair grammar $\mathcal{G}' = (\Sigma', \Gamma', \Delta', S')$ such that $L(\mathcal{G}') = L(\mathcal{G})$. Let $\Sigma' = \Sigma$ and $\Gamma' = \{(a, d) : 0 \leq a,d < n \}$; we treat the symbols in the pairs of $\Gamma'$ as both symbols and integers. For each production rule $A_i \rightarrow A_j A_k$ in $\Delta$, add to $\Delta'$ the set of rules $(a, d) \rightarrow (a, b) (c, d)$, with $0 \leq a < n$, $d = (i - a) \bmod n$, $b = (j - a) \bmod n$, and $c = (k - d) \bmod n$. For each production rule $A_i \rightarrow t$ in $\Delta$, add to $\Delta'$ the set of rules $(a, d) \rightarrow t$, with $0 \leq a < n$ and $d = (i - a) \bmod n$. Let $S' = (0, 0)$. We claim that a partial derivation $P'$ of $\mathcal{G}'$ exists if and only if the partial derivation $P$ obtained by replacing each non-terminal $(a, d)$ in $P'$ with $A_{(a + d) \bmod n}$ is a partial derivation of $\mathcal{G}$. By construction, a rule $(a, d) \rightarrow (a, b) (c, d)$ is in $\Delta'$ if and only if the rule $A_{(a + d) \bmod n} \rightarrow A_{(a + b) \bmod n} A_{(c + d) \bmod n}$ is in $\Delta$. Similarly, a rule $(a, d) \rightarrow t$ is in $\Delta'$ if and only if the rule $A_{(a + d) \bmod n} \rightarrow t$ is in $\Delta$. Also, $S' = (0, 0)$ and $S = A_{(0 + 0) \bmod n}$.
So the claim holds by induction. Since the partial derivations of $\mathcal{G}'$ and $\mathcal{G}$ are in correspondence, the completed derivations are as well and $L(\mathcal{G}') = L(\mathcal{G})$. So $\mathcal{G}'$ expresses $\mathcal{G}$. \end{proof} \begin{lemma} \label{lem:IS-express-PG} For every symbol-pair grammar $\mathcal{G}$, there exists an insertion system that expresses $\mathcal{G}$. \end{lemma} \begin{proof} Let $\mathcal{G} = (\Sigma, \Gamma, \Delta, S)$. The symbol-pair grammar $\mathcal{G}$ is expressed by an insertion system $\mathcal{S} = (\Sigma', \Delta', Q', R')$ that we now define. Let $\Sigma' = \{s_a, s_b : (a, b) \in \Gamma\} \cup \{u, x\} \cup \Sigma$. Let $\Delta' = \Delta_1' \cup \Delta_2' \cup \Delta_3' \cup \Delta_4'$, where \begin{align*} \Delta_1' &= \{(s_b, u^*, s_b^*, x)^- : (a, d) \rightarrow (a, b) (c, d) \in \Delta \}\\ \Delta_2' &= \{(s_a^*, s_b, s_c^*, s_d^*)^+ : (a, d) \rightarrow (a, b) (c, d) \in \Delta \}\\ \Delta_3' &= \{(x, s_c, u, s_c)^- : (a, d) \rightarrow (a, b) (c, d) \in \Delta \}\\ \Delta_4' &= \{(s_a^*, t, x, s_d^*)^+ : (a, d) \rightarrow t \in \Delta \} \end{align*} Let $Q' = (u, a)$ and $R' = (b, u^*)$, where $S = (a, b)$.
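As a sanity check, the compilation from symbol-pair rules to the four monomer families above is easy to mechanize. The sketch below is our own illustration (the string encoding of symbols and complements is invented); it builds $\Delta_1'$ through $\Delta_4'$ for a toy rule set.

```python
# Sketch (ours): compiling symbol-pair rules into the four monomer
# families Delta_1' .. Delta_4' of Lemma IS-express-PG. Symbols are
# encoded as strings like "s0", with a trailing "*" marking a complement.

def star(s):
    return s + "*"

def compile_monomers(binary_rules, terminal_rules):
    """binary_rules: ((a, d), (a, b), (c, d)) triples encoding rules
    (a, d) -> (a, b)(c, d); terminal_rules: ((a, d), t) pairs."""
    D1, D2, D3, D4 = set(), set(), set(), set()
    for (a, d), (_, b), (c, _) in binary_rules:
        sa, sb, sc, sd = ("s%d" % v for v in (a, b, c, d))
        D1.add(((sb, star("u"), star(sb), "x"), "-"))
        D2.add(((star(sa), sb, star(sc), star(sd)), "+"))
        D3.add((("x", sc, "u", sc), "-"))
    for (a, d), t in terminal_rules:
        sa, sd = "s%d" % a, "s%d" % d
        D4.add(((star(sa), t, "x", star(sd)), "+"))
    return D1, D2, D3, D4

# Toy rule (0,0) -> (0,1)(2,0) with terminal rules (0,1) -> p and
# (2,0) -> q, matching the worked example that follows.
D1, D2, D3, D4 = compile_monomers(
    [((0, 0), (0, 1), (2, 0))], [((0, 1), "p"), ((2, 0), "q")])
assert (("s0*", "s1", "s2*", "s0*"), "+") in D2
assert (("x", "s2", "u", "s2"), "-") in D3
```

The first assertion reproduces the $(s_0^*, s_1, s_2^*, s_0^*)^+$ monomer inserted in the example derivation below.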
For instance, the following insertions simulate applying the production rule $(0, 0) \rightarrow (0, 1) (2, 0)$ to $(0, 0)$, where $\diamond$ denotes the available insertion sites and bold the inserted monomer: \begin{center} $\begin{array}{c} (u, s_0) \diamond (s_0, u^*)\\ (u, s_0) \diamond \bm{(s_0^*, s_1, s_2^*, s_0^*)} \diamond (s_0, u^*)\\ (u, s_0) \diamond \bm{(s_1, u^*, s_1^*, x)} (s_0^*, s_1, s_2^*, s_0^*) \diamond (s_0, u^*)\\ (u, s_0) \diamond (s_1, u^*, s_1^*, x) (s_0^*, s_1, s_2^*, s_0^*) \bm{(x, s_2, u, s_2)} \diamond (s_0, u^*)\\ (u, s_0) \diamond (s_1, u^*) \dots (u, s_2) \diamond (s_0, u^*)\\ \end{array}$ \end{center} The subsequent applications of production rules $(0, 1) \rightarrow p$ and $(2, 0) \rightarrow q$ to the string $(0, 1) (2, 0)$ are simulated by the following insertions: \begin{center} $\begin{array}{c} (u, s_0) \diamond (s_1, u^*) \dots (u, s_2) \diamond (s_0, u^*)\\ (u, s_0) \bm{(s_0^*, p, x, s_1^*)} (s_1, u^*) \dots (u, s_2) \diamond (s_0, u^*)\\ (u, s_0) (s_0^*, p, x, s_1^*) (s_1, u^*) \dots (u, s_2) \bm{(s_2^*, q, x, s_0^*)} (s_0, u^*)\\ (u, s_0) (s_0^*, p, x, s_1^*) \dots (s_2^*, q, x, s_0^*) (s_0, u^*)\\ \end{array}$ \end{center} \textbf{Insertion types.} We first prove that for any polymer constructed by $\mathcal{S}$, only three types of insertions of a monomer $m_2$ between two adjacent monomers $m_1 m_3$ are possible: \begin{enumerate} \item $m_1 \in \Delta_2'$, $m_2 \in \Delta_3'$, $m_3 \in \Delta_1'$. \item $m_1 \in \Delta_3'$, $m_2 \in \Delta_2' \cup \Delta_4'$, $m_3 \in \Delta_1'$. \item $m_1 \in \Delta_3'$, $m_2 \in \Delta_1'$, $m_3 \in \Delta_2'$. \end{enumerate} Moreover, for every adjacent $m_1 m_3$ pair satisfying one of these conditions, an insertion of some type $m_2$ from the specified set is possible. Consider each possible combination of $m_1 \in \Delta_i'$ and $m_3 \in \Delta_j'$ with $i, j \in \{1, 2, 3, 4\}$.
Observe that for an insertion to occur at insertion site $(a, b) (c, d)$, the symbols $\overline{a}$, $\overline{b}$, $\overline{c}$, and $\overline{d}$ must each occur on some monomer. Then since $x^*$ and $t^*$ (for $t \in \Sigma$) do not appear on any monomers, any $i, j$ with $i \in \{1, 4\}$ or $j \in \{3, 4\}$ cannot occur. This leaves monomer pairs $(\Delta_i', \Delta_j')$ with $(i, j) \in \{(2, 1), (2, 2), (3, 1), (3, 2)\}$. Insertion sites between $(\Delta_2', \Delta_1')$ pairs have the form $(s_c^*, s_d^*) (s_d, u^*)$, so an inserted monomer must have the form $(\underline{~~}, s_c, u, \underline{~~})^-$ and is in $\Delta_3'$. An insertion site $(s_c^*, s_d^*) (s_d, u^*)$ implies a rule of the form $(a, d) \rightarrow (a, b) (c, d)$ in $\Delta$, so there exists a monomer $(x, s_c, u, s_c)^- \in \Delta_3'$ that can be inserted. Insertion sites between $(\Delta_3', \Delta_2')$ pairs have the form $(u, s_c) (s_c^*, s_b)$, so an inserted monomer must have the form $(\underline{~~}, u^*, s_b^*, \underline{~~})^-$ and thus is in $\Delta_1'$. An insertion site $(u, s_c) (s_c^*, s_b)$ implies a rule of the form $(c, d) \rightarrow (c, b) (e, d)$ in $\Delta$, so there exists a monomer $(s_b, u^*, s_b^*, x)^- \in \Delta_1'$ that can be inserted. Insertion sites between $(\Delta_2', \Delta_2')$ pairs can only occur once a monomer $m_2 \in \Delta_2'$ has been inserted between a pair of adjacent monomers $m_1 m_3$ with either $m_1 \in \Delta_2'$ or $m_3 \in \Delta_2'$, but not both. But we just proved that all such possible insertions only permit $m_2 \in \Delta_3' \cup \Delta_1'$. Moreover, the initial insertion site between $Q'$ and $R'$ has the form $(u, s_a) (s_b, u^*)$ of an insertion site with $m_1 \in \Delta_3'$ and $m_3 \in \Delta_1'$. So no pair of adjacent monomers $m_1 m_3$ are ever both from $\Delta_2'$ and no insertion site between $(\Delta_2', \Delta_2')$ pairs can ever exist.
Insertion sites between $(\Delta_3', \Delta_1')$ pairs have the form $(u, s_c) (s_b, u^*)$, so an inserted monomer must have the form $(s_c^*, \underline{~~}, \underline{~~}, s_b^*)^+$ and is in $\Delta_2'$ or $\Delta_4'$. We prove by induction that for each such insertion site $(u, s_c) (s_b, u^*)$ that $(c, b) \in \Gamma$. First, observe that this is true for the insertion site $(u, s_a) (s_b, u^*)$ between $Q'$ and $R'$, since $(a, b) = S \in \Gamma$. Next, suppose this is true for all insertion sites of some polymer and a monomer $m_2 \in \Delta_2' \cup \Delta_4'$ is about to be inserted into the polymer between monomers from $\Delta_3'$ and $\Delta_1'$. Inserting a monomer $m_2 \in \Delta_4'$ only reduces the set of insertion sites between monomers in $\Delta_3'$ and $\Delta_1'$, and the inductive hypothesis holds. Inserting a monomer $m_2 \in \Delta_2'$ induces new $(\Delta_3', \Delta_2')$ and $(\Delta_2', \Delta_1')$ insertion site pairs between $m_1 m_2$ and $m_2 m_3$. These pairs must accept two monomers $m_4 \in \Delta_1'$ and $m_5 \in \Delta_3'$, inducing a sequence of monomers $m_1 m_4 m_2 m_5 m_3$ with adjacent pairs $(\Delta_3', \Delta_1')$, $(\Delta_1', \Delta_2')$, $(\Delta_2', \Delta_3')$, $(\Delta_3', \Delta_1')$. Only the first and last pairs permit insertion and both are $(\Delta_3', \Delta_1')$ pairs. Now consider the details of the three insertions yielding $m_1 m_4 m_2 m_5 m_3$, starting with $m_1 m_3$. The initial insertion site $m_1 m_3$ must have the form $(u, s_a) (s_d, u^*)$.
So the sequence of insertions has the following form, with the last two insertions interchangeable: \begin{center} $\begin{array}{c} (u, s_a) \diamond (s_d, u^*)\\ (u, s_a) \diamond \bm{(s_a^*, s_b, s_c^*, s_d^*)} \diamond (s_d, u^*)\\ (u, s_a) \diamond \bm{(s_b, u^*, s_b^*, x)} (s_a^*, s_b, s_c^*, s_d^*) \diamond (s_d, u^*)\\ (u, s_a) \diamond (s_b, u^*, s_b^*, x) (s_a^*, s_b, s_c^*, s_d^*) \bm{(x, s_c, u, s_c)} \diamond (s_d, u^*)\\ \end{array}$ \end{center} Notice the two resulting $(\Delta_3', \Delta_1')$ pair insertion sites $(u, s_a) (s_b, u^*)$ and $(u, s_c) (s_d, u^*)$. Since the monomer $m_2 \in \Delta_2'$ exists, there is a rule $(a, d) \rightarrow (a, b) (c, d) \in \Delta$ and $(a, b), (c, d) \in \Gamma$, fulfilling the inductive hypothesis. So for every insertion site $(u, s_c) (s_b, u^*)$ between a $(\Delta_3', \Delta_1')$ pair there exists a non-terminal $(c, b) \in \Gamma$. So for every adjacent monomer pair $m_1 m_3$ with $m_1 \in \Delta_3'$ and $m_3 \in \Delta_1'$, there exists a monomer $m_2 \in \Delta_2' \cup \Delta_4'$ that can be inserted between $m_1$ and $m_3$. \textbf{Partial derivations and terminal polymers.} Next, consider the sequence of insertion sites between $(\Delta_3', \Delta_1')$ pairs in a polymer constructed by a modified version of $\mathcal{S}$ lacking the monomers of $\Delta_4'$. We claim that a polymer with a sequence $(u, s_{a_1}) (s_{b_1}, u^*), (u, s_{a_2}) (s_{b_2}, u^*), \dots, (u, s_{a_i}) (s_{b_i}, u^*)$ of $(\Delta_3', \Delta_1')$ insertion sites is constructed if and only if there is a partial derivation $(a_1, b_1) (a_2, b_2) \dots (a_i, b_i)$ of a string in $L(\mathcal{G})$. This follows directly from the previous proof by observing that two new adjacent $(\Delta_3', \Delta_1')$ pair insertion sites $(u, s_a) (s_b, u^*)$ and $(u, s_c) (s_d, u^*)$ can replace a $(\Delta_3', \Delta_1')$ pair insertion site if and only if there exists a rule $(a, d) \rightarrow (a, b) (c, d) \in \Delta$.
Observe that any string in $L(\mathcal{G})$ can be derived by first deriving a partial derivation containing only non-terminals, then applying only rules of the form $(a, d) \rightarrow t$. Similarly, since the monomers of $\Delta_4'$ never form half of a valid insertion site, any terminal polymer of $\mathcal{S}$ can be constructed by first generating a polymer containing only monomers in $\Delta_1' \cup \Delta_2' \cup \Delta_3'$, then only inserting monomers from $\Delta_4'$. Also note that the types of insertions possible in $\mathcal{S}$ imply that in any terminal polymer, any triple of adjacent monomers $m_1 m_2 m_3$ with $m_1 \in \Delta_i'$, $m_2 \in \Delta_j'$, and $m_3 \in \Delta_k'$, that $(i, j, k) \in \{(4, 1, 2), (1, 2, 3), (2, 3, 4), (3, 4, 1)\}$, with the first and last monomers of the polymer in $\Delta_4'$. \textbf{Expression.} Define the following piecewise function $g : \Sigma' \cup \{ s^* : s \in \Sigma' \} \rightarrow \Sigma \cup \{ \varepsilon \}$ that maps every symbol to $\varepsilon$ except the terminal symbols of $\Sigma$, which appear only as second symbols of monomers in $\Delta_4'$. \begin{displaymath} g(s) = \left\{ \begin{array}{ll} s, & \text{if } s \in \Sigma \\ \varepsilon, & \text{otherwise} \end{array} \right. \end{displaymath} Observe that every string in $L(\mathcal{S})$ has length $2 + 4 \cdot (4n - 3) + 2 = 16n-8$ for some $n \geq 1$. Also, for each string $s_1' s_2' \dots s_{16n-8}' \in L(\mathcal{S})$, $g(s_1') g(s_2') \dots g(s_{16n-8}') = \varepsilon^3 t_1 \varepsilon^{15} t_2 \varepsilon^{15} \dots t_n \varepsilon^4$. There is a terminal polymer with string representation in $L(\mathcal{S})$ yielding the sequence $t_1 t_2 \dots t_n$ if and only if the polymer can be constructed by first generating a terminal polymer excluding $\Delta_4'$ monomers with a sequence of $(\Delta_3', \Delta_1')$ insertion pairs $(a_1, b_1) (a_2, b_2) \dots (a_n, b_n)$ followed by a sequence of insertions of monomers from $\Delta_4'$ with second symbols $t_1 t_2 \dots t_n$.
Such a generation is possible if and only if $(a_1, b_1) (a_2, b_2) \dots (a_n, b_n)$ is a partial derivation of a string in $L(\mathcal{G})$ and $(a_1, b_1) \rightarrow t_1, (a_2, b_2) \rightarrow t_2, \dots, (a_n, b_n) \rightarrow t_n \in \Delta$. So applying the function $g$ to the string representations of the terminal polymers of $\mathcal{S}$ gives exactly $L(\mathcal{G})$. Moreover, the second symbol in every fourth monomer in a terminal polymer of $\mathcal{S}$ maps to a symbol of $\Sigma$ using $g$. So $\mathcal{S}$ expresses $\mathcal{G}$ with the function $g$ and $\kappa = 16$. \end{proof} \section{Introduction} In this work we study a theoretical model of \emph{algorithmic self-assembly}, in which simple particles aggregate in a distributed manner to carry out complex functionality. Perhaps the most well-studied theoretical model of algorithmic self-assembly is the \emph{abstract Tile Assembly Model (aTAM)} of Winfree~\cite{Winfree-1998a}, in which square \emph{tiles} irreversibly attach to a growing polyomino-shaped assembly according to matching edge colors. This model is capable of Turing-universal computation~\cite{Winfree-1998a}, self-simulation~\cite{Doty-2012b}, and efficient assembly of general (scaled) shapes~\cite{Soloveichik-2007a} and squares~\cite{Adleman-2001a,Rothemund-2000a}. Despite this power, the model is incapable of assembling shapes efficiently; a single row of $n$ tiles requires $n$ tile types and $\Omega(n^2)$ expected assembly time, and any shape with $n$ tiles requires $\Omega(\sqrt{n})$ expected time~\cite{Adleman-2001a}, even if the shape is assembled non-deterministically~\cite{Chen-2012a}. Such a limitation may not seem so significant, except that a wide range of biological systems form complex assemblies in time polylogarithmic in the assembly size, as noted in~\cite{Dabby-2013a,Woods-2013b}.
These biological systems are capable of such growth because their particles (e.g.\ living cells) \emph{actively} carry out geometric reconfiguration. In the interest of both understanding naturally occurring biological systems and creating synthetic systems with additional capabilities, several models of \emph{active self-assembly} have been proposed recently. These include the graph grammars of Klavins et al.~\cite{Klavins-2004b,Klavins-2004a}, the \emph{nubots} model of Woods et al.~\cite{Chen-2014a,Chen-2013a,Woods-2013b}, and the insertion systems of Dabby and Chen~\cite{Dabby-2013a}. Both graph grammars and nubots are capable of a topologically rich set of assemblies and reconfigurations, but rely on stateful particles forming complex bond arrangements. In contrast, insertion systems consist of stateless particles forming a single chain of bonds. Indeed, all insertion systems are captured as a special case of nubots in which a linear polymer is assembled via parallel insertion-like reconfigurations, as in Theorem 5.1 of~\cite{Woods-2013a}. The simplicity of insertion systems makes their implementation in matter a more immediately attainable goal; Dabby and Chen~\cite{Dabby-2013b,Dabby-2013a} describe a direct implementation of these systems in DNA. We are careful to make a distinction between \emph{active self-assembly}, where assemblies undergo reconfiguration, and \emph{active tile self-assembly}~\cite{Gautam-2013a,Hendricks-2013a,Jonoska-2014a,Jonoska-2014b,Keenan-2013b,Majumder-2008a,Padilla-2012a,Padilla-2014a}, where tile-based assemblies change their bond structure. Active self-assembly enables exponential assembly rates by enabling insertion of new particles throughout the assembly, while active tile self-assembly does not, since the $\Omega(\sqrt{n})$ expected-time lower bound of Chen and Doty~\cite{Chen-2012a} still applies. 
\section{Negative Results for Polymer Growth} \label{sec:negative-results} Here we show that the construction in the previous section is the best possible. We start by proving a helpful lemma on the number of insertion sites that accept at least one monomer type, which we call \emph{usable} insertion sites. \begin{lemma} \label{lem:usable-insertion-sites-ub} Any insertion system with $k$ monomer types has at most $4k^{3/2}$ usable insertion sites. \end{lemma} \begin{proof} Let $\mathcal{S} = (\Sigma, \Delta, Q, R)$ be an insertion system with $k = |\Delta|$ monomer types, and relabel the symbols in $\Sigma \cup \{s^* : s \in \Sigma\}$ as $s_1, s_2, \dots, s_{4k}$, with some of these symbols possibly unused. Define the sets $L_i = \{ (s_a, s_b, s_i, s_c)^{\pm} \in \Delta \}$ and $R_i = \{ (s_a, s_i, s_b, s_c)^{\pm} \in \Delta \}$. We consider the number of usable insertion sites of $\mathcal{S}$, and define $U_i = \{ (s_i, s_b) (s_c, \overline{s_i}) : \text{the site is usable} \}$. Since each monomer type can only be inserted into one site in each $U_i$, $|U_i| \leq k$, and since each usable site requires a distinct pair of left and right monomer types, $|U_i| \leq |L_i| \cdot |R_i|$. So $|U_i| \leq \min(k, |L_i| \cdot |R_i|)$. Since each monomer type appears in exactly one $L_i$ and one $R_i$, $\sum_{i=1}^{4k}{|L_i|} = \sum_{i=1}^{4k}{|R_i|} = k$. Consider maximizing $\sum_{i=1}^{4k}\min(k, |L_i| \cdot |R_i|)$ subject to $\sum_{i=1}^{4k}{|L_i|} = \sum_{i=1}^{4k}{|R_i|} = k$. Clearly $|L_i| \cdot |R_i| \leq \max(|L_i|, |R_i|)^2$, and if we define $B_i = L_i \cup R_i$, then $|L_i| \cdot |R_i| \leq |B_i|^2$. Then $\sum_{i=1}^{4k}|U_i| \leq \sum_{i=1}^{4k}|B_i|^2$ with $\sum_{i=1}^{4k}|B_i| \leq 2k$, and since each term is capped at $k$ we may assume $|B_i| \leq \sqrt{k}$. So $\sum_{i=1}^{4k}|U_i| \leq (\sqrt{k})^2 \cdot 2\sqrt{k}$ and thus $\sum_{i=1}^{4k}|U_i| \leq 2k^{3/2}$.
So the set of all usable sites of the form $(s_i, s_b) (s_c, \overline{s_i})$ has size at most $2k^{3/2}$. A similar argument using the monomer sets $L_i' = \{ (s_a, s_b, s_c, s_i)^{\pm} \in \Delta \}$, $R_i' = \{ (s_i, s_a, s_b, s_c)^{\pm} \in \Delta \}$, and insertion site set $U_i' = \{ (s_b, s_i) (\overline{s_i}, s_c) : \text{the site is usable} \}$ suffices to prove that the set of all usable sites of the form $(s_b, s_i) (\overline{s_i}, s_c)$ also has size at most $2k^{3/2}$. Since these describe all usable sites, $\mathcal{S}$ has at most $4k^{3/2}$ total usable sites. \end{proof} \begin{theorem} \label{thm:monomer-types-lb} Any polymer deterministically constructed by an insertion system with $k$ monomer types has length $2^{O(k^{3/2})}$. \end{theorem} \begin{proof} Let $\mathcal{S}$ be a system with $k$ monomer types that deterministically constructs a polymer. By Lemma~\ref{lem:usable-insertion-sites-ub}, $\mathcal{S}$ has $O(k^{3/2})$ usable sites. As observed by Dabby and Chen, $\mathcal{S}$ can be expressed by a grammar $\mathcal{G}_{\mathcal{S}}$ with at most $4k^{3/2}$ non-terminal symbols, where each insertion site $(a, b) (c, d)$ corresponds to a non-terminal $A_{a,b,c,d}$, and each monomer type $(e, f, g, h)^{\pm}$ insertable into the site corresponds to a rule $A_{a, b, c, d} \rightarrow A_{a, b, e, f} A_{g, h, c, d}$. Let $\sigma$ be a string in $L(\mathcal{G}_{\mathcal{S}})$ of length $n$. So the (binary) derivation tree of any derivation of $\sigma$ contains a path of length at least $\log_2{n}$. If $\log_2{n} > 4k^{3/2}$, then this path must contain at least two occurrences of the same non-terminal symbol. The portion of the path between these two occurrences can be pumped to derive strings of arbitrary lengths, so $L(\mathcal{G}_{\mathcal{S}})$ is infinite. So $L(\mathcal{S}) \neq L(\mathcal{G}_{\mathcal{S}})$ and $\mathcal{G}_{\mathcal{S}}$ does not express $\mathcal{S}$, a contradiction.
Thus $\log_2{n} \leq 4k^{3/2}$ for every string in $L(\mathcal{G}_{\mathcal{S}})$ and the length of the polymer deterministically constructed by $\mathcal{S}$ is $2^{O(k^{3/2})}$. \end{proof} \begin{theorem} \label{thm:deterministic-growth-speed-lb} Deterministically constructing a polymer of length $n$ takes $\Omega(\log^{5/3}(n))$ expected time. \end{theorem} \begin{proof} The proof approach is to prove a lower bound on the expected time to carry out an insertion sequence of length $\Omega(\log{n})$ involving (by Lemma~\ref{lem:usable-insertion-sites-ub}) $\Omega(\log^{2/3}{n})$ distinct monomer types. This is converted into a minimization problem for the expected time, whose optimal solutions are shown algebraically to be $\Omega(\log^{5/3}(n))$. \textbf{A long insertion sequence.} Since each insertion only increases the number of insertion sites by one, the system must carry out an insertion sequence of length at least $\log_2{n}$ when constructing the polymer. No insertion site appears twice in this sequence, since otherwise the system (non-deterministically) constructs polymers of arbitrary length. Suppose, for the sake of contradiction, that an insertion site in the sequence accepts monomer types $m_1$ and $m_2$, and inserts $m_1$ into some polymer. Then all polymers constructed by the system without $m_1$ and, separately, by the system without $m_2$ are also constructed by the original system, and each of these two systems constructs polymers not constructed by the other. So the system cannot deterministically construct a polymer, a contradiction, and so no insertion site in the sequence accepts more than one monomer type. Thus the $\log_2{n}$ (or more) distinct insertion sites appearing in the insertion sequence each accept a unique monomer type. The remainder of the proof develops a lower bound for the total expected time of the insertions in this sequence.
\textbf{An optimization problem.} By linearity of expectation, the total expected time of the insertions is equal to the sum of the expected time for each insertion. Because each insertion site accepts a unique monomer type, the expected time to carry out the insertion is equal to the reciprocal of the concentration of this type. Let $k$ be the number of monomer types inserted into the sites in the subsequence. Let $c_1, c_2, \dots, c_k$ be the concentrations of these types, and $x_1, x_2, \dots, x_k$ be the number of times a monomer of each type is inserted during the subsequence. Then the total expected time for all of the insertions in the subsequence is $\sum_{i=1}^{k}x_i/c_i$. Moreover, these variables are subject to the following constraints: \begin{enumerate} \itemsep5pt \item $\sum_{i=1}^{k}x_i \geq \log_2{n}/2$ (total number of insertions is at least $\log_2{n}/2$). \item $\sum_{i=1}^{k}c_i \leq 1$ (total concentration is at most 1). \item $k \geq \log^{2/3}(n)/4$ (the number of monomer types is at least $\log^{2/3}(n)/4$, by Lemma~\ref{lem:usable-insertion-sites-ub}). \end{enumerate} \textbf{Minimizing expected time.} Consider minimizing the total expected time subject to these constraints, starting by proving that $x_i/c_i = x_j/c_j$ for all $1 \leq i, j \leq k$. That is, the ratio of the number of times a monomer type is inserted in the subsequence to the type's concentration is equal for all types. Assume, without loss of generality, that $x_i/c_i > x_j/c_j$ and $c_i, c_j > 0$. Then it can be shown algebraically that the following two statements hold: \begin{enumerate} \item If $c_j \geq c_i$, then for sufficiently small $\varepsilon > 0$, $\frac{x_i}{c_i} + \frac{x_j}{c_j} > \frac{x_i}{c_i + \varepsilon} + \frac{x_j}{c_j - \varepsilon}$. \item If $c_j < c_i$, then for sufficiently small $\varepsilon > 0$, $\frac{x_i}{c_i} + \frac{x_j}{c_j} > \frac{x_i}{c_i - \varepsilon} + \frac{x_j}{c_j + \varepsilon}$.
\end{enumerate} Since the ratios of every pair of monomer types are equal, $$\frac{c_i}{1} \leq \frac{c_i}{\sum_{i=1}^{k}{c_i}} = \frac{x_i}{\sum_{i=1}^{k}{x_i}} \leq \frac{2x_i}{\log{n}}$$ So $\log(n)/2 \leq x_i/c_i$ and $k\log(n)/2 \leq \sum_{i=1}^{k}x_i/c_i$. By Lemma~\ref{lem:usable-insertion-sites-ub}, since the insertion subsequence has length at least $\log(n)/2$ and no repeated insertion sites, $k \geq \log^{2/3}(n)/4$. So the total expected time is at least $k\log(n)/2 \geq \log^{5/3}(n)/8$. \end{proof} \section{Positive Results for Polymer Growth} \label{sec:positive-results} Dabby and Chen also consider the size and speed of constructing finite polymers. They give a construction achieving the following result: \begin{theorem}[\cite{Dabby-2013a}] \label{thm:dabby-chen-fast} For any positive integer $r$, there exists an insertion system with $O(r^2)$ monomer types that deterministically constructs a polymer of length $n = 2^{\Theta(r)}$ in $O(\log^3{n})$ expected time. Moreover, the expected time has an exponentially decaying tail probability. \end{theorem} Here we improve on this construction significantly in both polymer length and expected running time. In Section~\ref{sec:negative-results}, we prove that this construction is the best possible with respect to both the polymer length and construction time. \begin{theorem} \label{thm:types-extreme-ub} For any positive integer $r$, there exists an insertion system with $O(r^2)$ monomer types that deterministically constructs a polymer of length~$n = 2^{\Theta(r^3)}$ in $O(\log^{5/3}(n))$ expected time. Moreover, the expected time has an exponentially decaying tail probability. \end{theorem} \begin{proof} The approach is to implement a three-variable counter where each variable ranges over the values $0$ to $r$, effectively carrying out the execution of a triple for-loop.
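The counter can be sketched abstractly before detailing its implementation via insertions. The following is our own illustration (states $(a, b, c)$ stand for the insertion sites $(s_a, s_b)(s_c, s_a^*)$ used in the construction); it only checks that the three increment rules visit all $(r+1)^3$ states.

```python
# Abstract sketch (ours) of the three-variable counter: state (a, b, c)
# stands for the insertion site (s_a, s_b)(s_c, s_a*). Inner increments
# advance b, middle increments advance c, outer increments advance a.

def counter_states(r):
    a = b = c = 0
    states = [(a, b, c)]
    while True:
        if b < r:                     # inner increment
            b += 1
        elif c < r:                   # middle increment: reset b, bump c
            b, c = 0, c + 1
        elif a < r:                   # outer increment: reset b and c
            a, b, c = a + 1, 0, 0
        else:                         # counter exhausted
            return states
        states.append((a, b, c))

# The counter walks through all (r + 1)^3 states, i.e. Theta(r^3) steps.
assert len(counter_states(2)) == 27
assert counter_states(2)[-1] == (2, 2, 2)
```

The duplication performed during inner increments, described later in the proof, is what turns this $\Theta(r^3)$-step walk into a $2^{\Theta(r^3)}$-length polymer.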
Insertion sites of the form $(s_a, s_b) (s_c, s_a^*)$ are used to encode the state of the counter, where $a$, $b$, and $c$ are the variables of the outer, inner, and middle loops, respectively. Three types of variable increments are carried out by the counter: \begin{enumerate}[leftmargin=2cm] \item[Inner:] If $b < r$, then $(s_a, s_b) (s_c, s_a^*) \leadsto (s_a, s_{b+1}) (s_c, s_a^*)$. \item[Middle:] If $b = r$ and $c < r$, then $(s_a, s_b) (s_c, s_a^*) \leadsto (s_a, s_0) (s_{c+1}, s_a^*)$. \item[Outer:] If $b = c = r$ and $a < r$, then $(s_a, s_b) (s_c, s_a^*) \leadsto (s_{a+1}, s_0) (s_0, s_{a+1}^*)$. \end{enumerate} For $r = 2$, these increment types give an insertion sequence of the following form from left to right: \begin{center} \begin{tabular}{r@{\hskip 2pt}c@{\hskip 2pt}l@{\hskip 40pt}r@{\hskip 2pt}c@{\hskip 2pt}l@{\hskip 40pt}r@{\hskip 2pt}c@{\hskip 2pt}l} $(s_0, s_0)$&&$(s_0, s_0^*)$&$(s_1, s_0)$&&$(s_0, s_1^*)$&$(s_2, s_0)$&&$(s_0, s_2^*)$\\[-1 pt] &\rotatebox[origin=c]{270}{$\leadsto$}&{\tiny inner$\times 2$}& &\rotatebox[origin=c]{270}{$\leadsto$}&{\tiny inner$\times 2$}& &\rotatebox[origin=c]{270}{$\leadsto$}&{\tiny inner$\times 2$} \\[-1 pt] $(s_0, s_2)$&&$(s_0, s_0^*)$&$(s_1, s_2)$&&$(s_0, s_1^*)$&$(s_2, s_2)$&&$(s_0, s_2^*)$\\[-1 pt] &\rotatebox[origin=c]{270}{$\leadsto$}&{\tiny middle}& &\rotatebox[origin=c]{270}{$\leadsto$}&{\tiny middle}& &\rotatebox[origin=c]{270}{$\leadsto$}&{\tiny middle}\\[-1 pt] $(s_0, s_0)$&&$(s_1, s_0^*)$&$(s_1, s_0)$&&$(s_1, s_1^*)$&$(s_2, s_0)$&&$(s_1, s_2^*)$\\[-1 pt] &\rotatebox[origin=c]{270}{$\leadsto$}&{\tiny inner$\times 2$}& &\rotatebox[origin=c]{270}{$\leadsto$}&{\tiny inner$\times 2$}& &\rotatebox[origin=c]{270}{$\leadsto$}&{\tiny inner$\times 2$} \\[-1 pt] $(s_0, s_2)$&&$(s_1, s_0^*)$&$(s_1, s_2)$&&$(s_1, s_1^*)$&$(s_2, s_2)$&&$(s_1, s_2^*)$\\[-1 pt] &\rotatebox[origin=c]{270}{$\leadsto$}&{\tiny middle}& &\rotatebox[origin=c]{270}{$\leadsto$}&{\tiny middle}& &\rotatebox[origin=c]{270}{$\leadsto$}&{\tiny 
middle}\\[-1 pt] $(s_0, s_0)$&&$(s_2, s_0^*)$&$(s_1, s_0)$&&$(s_2, s_1^*)$&$(s_2, s_0)$&&$(s_2, s_2^*)$\\[-1 pt] &\rotatebox[origin=c]{270}{$\leadsto$}&{\tiny inner$\times 2$}& &\rotatebox[origin=c]{270}{$\leadsto$}&{\tiny inner$\times 2$}& &\rotatebox[origin=c]{270}{$\leadsto$}&{\tiny inner$\times 2$} \\[-1 pt] $(s_0, s_2)$&&$(s_2, s_0^*)$&$(s_1, s_2)$&&$(s_2, s_1^*)$&$(s_2, s_2)$&&$(s_2, s_2^*)$\\[-1 pt] &\rotatebox[origin=c]{270}{$\leadsto$}&{\tiny outer}& &\rotatebox[origin=c]{270}{$\leadsto$}&{\tiny outer}& &&\\[-1 pt] $(s_1, s_0)$&&$(s_0, s_1^*)$&$(s_2, s_0)$&&$(s_0, s_2^*)$&\\[-1 pt] \end{tabular} \end{center} A site is \emph{modified} by an insertion sequence that yields a new usable site where all other sites created by the insertion sequence are unusable. For instance, we modify a site $(s_a, \bm{s_b}) (s_c, s_a^*)$ to become $(s_a, \bm{s_d}) (s_c, s_a^*)$, written $(s_a, s_b) (s_c, s_a^*) \leadsto (s_a, s_d) (s_c, s_a^*)$, by adding the monomer types $(s_b^*, x, u, s_c^*)^+$ and $(x, u^*, s_a, s_d)^-$ to the system, where $x$ is a special symbol whose complement is not found on any monomer. These two monomer types cause the following insertion sequence, using $\diamond$ to indicate the site being modified and the inserted monomer shown in bold: \begin{center} $\begin{array}{c} (s_a, s_b) \diamond (s_c, s_a^*)\\ (s_a, s_b)\bm{(s_b^*, x, u, s_c^*)} \diamond (s_c, s_a^*)\\ (s_a, s_b) (s_b^*, x, u, s_c^*) \bm{(x, u^*, s_a, s_d)} \diamond (s_c, s_a^*) \end{array}$ \end{center} We call this simple modification, where a single symbol in the insertion site is replaced with another symbol, a \emph{replacement}. There are four types of replacements, seen in Table~\ref{tab:replacements}, that can each be implemented by a pair of corresponding monomers. \renewcommand{\arraystretch}{1.25} \begin{table}[ht!] 
\begin{center} \begin{tabular}{| c | c |} \hline Replacement & Monomers \\ \hline $(s_a, \bm{s_b}) (s_c, s_a^*) \leadsto (s_a, \bm{s_d}) (s_c, s_a^*)$ & $(s_b^*, x, u, s_c^*)^+$, $(x, u^*, s_a, s_d)^-$ \\ $(s_a, s_b) (\bm{s_c}, s_a^*) \leadsto (s_a, s_b) (\bm{s_d}, s_a^*)$ & $(s_b^*, u, x, s_c^*)^+$, $(s_d, s_a^*, u^*, x)^-$ \\ $(\bm{s_b}, s_a) (s_a^*, s_c) \leadsto (\bm{s_d}, s_a) (s_a^*, s_c)$ & $(x, s_b^*, s_c^*, u)^-$, $(u^*, x, s_d, s_a)^+$ \\ $(s_b, s_a) (s_a^*, \bm{s_c}) \leadsto (s_b, s_a) (s_a^*, \bm{s_d})$ & $(u, s_b^*, s_c^*, x)^-$, $(s_a^*, s_d, x, u^*)^+$ \\ \hline \end{tabular} \end{center} \caption{The four types of replacement steps and monomer pairs that implement them. The symbol $u$ can be any symbol, and $x$ is a special symbol whose complement does not appear on any monomer.} \label{tab:replacements} \end{table} Each of the three increment types are implemented using a sequence of site modifications. The resulting triple for-loop carries out a sequence of $\Theta(r^3)$ insertions to construct a $\Theta(r^3)$-length polymer. A $2^{\Theta(r^3)}$-length polymer is achieved by simultaneously duplicating each site during each inner increment. In the remainder of the proof, we detail the implementation of each increment type, starting with the simplest: middle increments. \textbf{Middle increment.} A middle increment of a site $(s_a, s_b) (s_c, s_a^*)$ occurs when the site has the form $(s_a, s_r) (s_c, s_a^*)$ with $0 \leq c < r$, performing the modification $(s_a, s_r) (s_c, s_a^*) \leadsto (s_a, s_0) (s_{c+1}, s_a^*)$. We implement middle increments using a sequence of three replacements: $$ (s_a, s_r) (s_c, s_a^*) \overset{1}{\leadsto} (s_a, s_r) (s_{f_1(c)}, s_a^*) \overset{2}{\leadsto} (s_a, s_0) (s_{f_1(c)}, s_a^*) \overset{3}{\leadsto} (s_a, s_0) (s_{c+1}, s_a^*) $$ where $f_i(n) = n + 2ir^2$. 
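The role of the offset functions can be checked directly. This sketch (ours) verifies that the images of $\{0, \dots, r\}$ under distinct $f_i(n) = n + 2ir^2$ are pairwise disjoint, and disjoint from the raw values $\{0, \dots, r\}$ themselves, so symbols introduced in different replacement steps cannot collide.

```python
# Check the separation property of f_i(n) = n + 2*i*r^2: for distinct
# step indices i, the images of {0, ..., r} never overlap (and never
# overlap the raw values), so monomers compiled for different
# replacement steps use disjoint symbols.

def f(i, n, r):
    return n + 2 * i * r * r

def images_disjoint(r, steps=10):
    sets = [set(range(r + 1))]                      # raw values, no offset
    sets += [{f(i, n, r) for n in range(r + 1)} for i in range(1, steps + 1)]
    return all(a.isdisjoint(b)
               for idx, a in enumerate(sets)
               for b in sets[idx + 1:])

assert all(images_disjoint(r) for r in range(1, 20))
```

The image of $f_i$ is the interval $[2ir^2, 2ir^2 + r]$, and consecutive intervals are separated by a gap of $2r^2 - r > 0$ for all $r \geq 1$.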
Use of the function $f$ avoids unintended interactions between monomers, since for any $n_1, n_2 \in \{0, 1, \dots, r\}$, $f_i(n_1) \neq f_j(n_2)$ for all $i \neq j$. Compiling this sequence of replacements into monomer types gives the following monomers: \begin{enumerate}[label=Step \arabic*:, leftmargin=2cm] \item $(s_r^*, s_{f_2(c)}, x, s_c^*)^+$ and $(s_{f_1(c)}, s_a^*, s_{f_2(c)}^*, x)^-$. \item $(s_r^*, x, s_{f_3(c)}, s_{f_1(c)}^*)^+$ and $(x, s_{f_3(c)}^*, s_a, s_0)^-$. \item $(s_0^*, s_{f_4(c+1)}, x, s_{f_1(c)}^*)^+$ and $(s_{c+1}, s_a^*, s_{f_4(c+1)}^*, x)^-$. \end{enumerate} This set of monomers results in the following sequence of insertions: \begin{center} $\begin{array}{c} (s_a, s_r) \diamond (s_c, s_a^*)\\ (s_a, s_r) \diamond \bm{(s_r^*, s_{f_2(c)}, x, s_c^*)} (s_c, s_a^*)\\ (s_a, s_r) \diamond \bm{(s_{f_1(c)}, s_a^*, s_{f_2(c)}^*, x)} (s_r^*, s_{f_2(c)}, x, s_c^*) (s_c, s_a^*)\\ (s_a, s_r) \diamond (s_{f_1(c)}, s_a^*)\\ (s_a, s_r) \bm{(s_r^*, x, s_{f_3(c)}, s_{f_1(c)}^*)} \diamond (s_{f_1(c)}, s_a^*)\\ (s_a, s_r) (s_r^*, x, s_{f_3(c)}, s_{f_1(c)}^*) \bm{(x, s_{f_3(c)}^*, s_a, s_0)} \diamond (s_{f_1(c)}, s_a^*)\\ (s_a, s_0) \diamond (s_{f_1(c)}, s_a^*)\\ (s_a, s_0) \diamond \bm{(s_0^*, s_{f_4(c+1)}, x, s_{f_1(c)}^*)} (s_{f_1(c)}, s_a^*)\\ (s_a, s_0) \diamond \bm{(s_{c+1}, s_a^*, s_{f_4(c+1)}^*, x)} (s_0^*, s_{f_4(c+1)}, x, s_{f_1(c)}^*) (s_{f_1(c)}, s_a^*) \\ (s_a, s_0) \diamond (s_{c+1}, s_a^*)\\ \end{array}$ \end{center} Since each inserted monomer has an instance of $x$, all other insertion sites created are unusable. This is true of the insertions used for outer increments and duplications as well. \textbf{Outer increment.} An outer increment of the site $(s_a, s_b) (s_c, s_a^*)$ occurs when the site has the form $(s_a, s_r) (s_r, s_a^*)$ with $0 \leq a < r$. 
We implement this step using a four-step sequence of three normal replacements and a special quadruple replacement (Step~2): \begin{center} $\begin{array}{c} (s_a, s_r) (s_r, s_a^*) \overset{1}{\leadsto} (s_a, s_{f_6(a)}^*) (s_r, s_a^*) \overset{2}{\leadsto} (s_{a+1}, s_{f_7(r)}) (s_{f_6(a)}, s_{a+1}^*)\\ (s_{a+1}, s_{f_7(r)}) (s_{f_6(a)}, s_{a+1}^*) \overset{3}{\leadsto} (s_{a+1}, s_0) (s_{f_6(a)}, s_{a+1}^*) \overset{4}{\leadsto} (s_{a+1}, s_0) (s_0, s_{a+1}^*) \end{array}$ \end{center} As with middle increments, we compile replacement steps~1,~3, and~4 into monomers using Table~\ref{tab:replacements}: \begin{enumerate}[label=Step \arabic*:, leftmargin=2cm] \item $(s_r^*, x, s_{f_5(r)}, s_r^*)^+$ and $(x, s_{f_5(r)}^*, s_a, s_{f_6(a)}^*)^-$. \item $(s_{f_6(a)}, s_{a+1}^*, x, s_r^*)^+$ and $(x, s_a^*, s_{a+1}, s_{f_7(r)})^-$. \item $(s_{f_7(r)}^*, x, s_{f_8(r)}, s_{f_6(a)}^*)^+$ and $(x, s_{f_8(r)}^*, s_{a+1}, s_0)^-$. \item $(s_0^*, s_{f_9(a)}, x, s_{f_6(a)}^*)^+$ and $(s_0, s_{a+1}^*, s_{f_9(a)}^*, x)^-$.
\end{enumerate} Here is the sequence of insertions, using $\diamond$ to indicate the site being modified and the inserted monomer shown in bold: \begin{center} $\begin{array}{c} (s_a, s_r) \diamond (s_r, s_a^*) \\ (s_a, s_r) \bm{(s_r^*, x, s_{f_5(r)}, s_r^*)} \diamond (s_r, s_a^*) \\ (s_a, s_r) (s_r^*, x, s_{f_5(r)}, s_r^*) \bm{(x, s_{f_5(r)}^*, s_a, s_{f_6(a)}^*)} \diamond (s_r, s_a^*) \\ (s_a, s_{f_6(a)}^*) \diamond (s_r, s_a^*) \\ (s_a, s_{f_6(a)}^*) \diamond \bm{(s_{f_6(a)}, s_{a+1}^*, x, s_r^*)} (s_r, s_a^*) \\ (s_a, s_{f_6(a)}^*) \bm{(x, s_a^*, s_{a+1}, s_{f_7(r)})} \diamond (s_{f_6(a)}, s_{a+1}^*, x, s_r^*) (s_r, s_a^*) \\ (s_{a+1}, s_{f_7(r)}) \diamond (s_{f_6(a)}, s_{a+1}^*) \\ (s_{a+1}, s_{f_7(r)}) \bm{(s_{f_7(r)}^*, x, s_{f_8(r)}, s_{f_6(a)}^*)} \diamond (s_{f_6(a)}, s_{a+1}^*) \\ (s_{a+1}, s_{f_7(r)}) (s_{f_7(r)}^*, x, s_{f_8(r)}, s_{f_6(a)}^*) \bm{(x, s_{f_8(r)}^*, s_{a+1}, s_0)} \diamond (s_{f_6(a)}, s_{a+1}^*) \\ (s_{a+1}, s_0) \diamond (s_{f_6(a)}, s_{a+1}^*) \\ (s_{a+1}, s_0) \diamond \bm{(s_0^*, s_{f_9(a)}, x, s_{f_6(a)}^*)} (s_{f_6(a)}, s_{a+1}^*) \\ (s_{a+1}, s_0) \diamond \bm{(s_0, s_{a+1}^*, s_{f_9(a)}^*, x)} (s_0^*, s_{f_9(a)}, x, s_{f_6(a)}^*) (s_{f_6(a)}, s_{a+1}^*) \\ (s_{a+1}, s_0) \diamond (s_0, s_{a+1}^*) \\ \end{array}$ \end{center} \textbf{Inner increment.} The inner increment has two phases. The first phase (Steps~1-2) performs duplication, modifying the initial site to a pair of sites: $(s_a, s_b) (s_c, s_a^*) \leadsto (s_a, s_b) (s_{f_{11}(c)}, s_a^*) \dots (s_a, s_{b+1}) (s_c, s_a^*)$, yielding an incremented version of the original site and one other site. The second phase (Steps~3-5) is $(s_a, s_b) (s_{f_{11}(c)}, s_a^*) \leadsto (s_a, s_{b+1}) (s_c, s_a^*)$, transforming the second site into an incremented version of the original site. For the first phase, we use the three monomers: \begin{enumerate}[leftmargin=2cm] \item[Step 1:] $(s_b^*, s_{f_{10}(c)}, s_{f_{10}(b+1)}, s_c^*)^+$.
\item[Step 2:] $(s_{f_{11}(c)}, s_a^*, s_{f_{10}(c)}^*, x)^-$ and $(x, s_{f_{10}(b+1)}^*, s_a, s_{b+1})^-$. \end{enumerate} The resulting sequence of insertions is \begin{center} $\begin{array}{c} (s_a, s_b) \diamond (s_c, s_a^*) \\ (s_a, s_b) \diamond \bm{(s_b^*, s_{f_{10}(c)}, s_{f_{10}(b+1)}, s_c^*)} \diamond (s_c, s_a^*) \\ (s_a, s_b) \diamond \bm{(s_{f_{11}(c)}, s_a^*, s_{f_{10}(c)}^*, x)} (s_b^*, s_{f_{10}(c)}, s_{f_{10}(b+1)}, s_c^*) \diamond (s_c, s_a^*) \\ (s_a, s_b) \diamond (s_{f_{11}(c)}, s_a^*, s_{f_{10}(c)}^*, x) (s_b^*, s_{f_{10}(c)}, s_{f_{10}(b+1)}, s_c^*) \bm{(x, s_{f_{10}(b+1)}^*, s_a, s_{b+1})} \diamond (s_c, s_a^*) \\ (s_a, s_b) \diamond (s_{f_{11}(c)}, s_a^*) \dots (s_a, s_{b+1}) \diamond (s_c, s_a^*) \\ \end{array}$ \end{center} The last two insertions occur independently and may happen in the opposite order of the sequence depicted here. In the second phase, the site $(s_a, s_b) (s_{f_{11}(c)}, s_a^*)$ is transformed into $(s_a, s_{b+1}) (s_c, s_a^*)$ by a sequence of replacement steps: \begin{center} $\begin{array}{c} (s_a, s_b) (s_{f_{11}(c)}, s_a^*) \overset{3}{\leadsto} (s_a, s_{f_{12}(b)}) (s_{f_{11}(c)}, s_a^*) \overset{4}{\leadsto} (s_a, s_{f_{12}(b)}) (s_c, s_a^*) \overset{5}{\leadsto} (s_a, s_{b+1}) (s_c, s_a^*)\\ \end{array}$ \end{center} As with previous sequences of replacement steps, we compile this sequence into a set of monomers: \begin{enumerate}[leftmargin=2cm] \item[Step 3:] $(s_b^*, x, s_{f_{13}(b)}, s_{f_{11}(c)}^*)^+$ and $(x, s_{f_{13}(b)}^*, s_a, s_{f_{12}(b)})^-$. \item[Step 4:] $(s_{f_{12}(b)}^*, s_{f_{14}(c)}, x, s_{f_{11}(c)}^*)^+$ and $(s_c, s_a^*, s_{f_{14}(c)}^*, x)^-$. \item[Step 5:] $(s_{f_{12}(b)}^*, x, s_{f_{15}(b+1)}, s_c^*)^+$ and $(x, s_{f_{15}(b+1)}^*, s_a, s_{b+1})^-$. 
\end{enumerate} The resulting sequence of insertions is \begin{center} $\begin{array}{c} (s_a, s_b) \diamond (s_{f_{11}(c)}, s_a^*) \\ (s_a, s_b) \bm{(s_b^*, x, s_{f_{13}(b)}, s_{f_{11}(c)}^*)} \diamond (s_{f_{11}(c)}, s_a^*) \\ (s_a, s_b) (s_b^*, x, s_{f_{13}(b)}, s_{f_{11}(c)}^*) \bm{(x, s_{f_{13}(b)}^*, s_a, s_{f_{12}(b)})} \diamond (s_{f_{11}(c)}, s_a^*) \\ (s_a, s_{f_{12}(b)}) \diamond (s_{f_{11}(c)}, s_a^*) \\ (s_a, s_{f_{12}(b)}) \diamond \bm{(s_{f_{12}(b)}^*, s_{f_{14}(c)}, x, s_{f_{11}(c)}^*)} (s_{f_{11}(c)}, s_a^*) \\ (s_a, s_{f_{12}(b)}) \diamond \bm{(s_c, s_a^*, s_{f_{14}(c)}^*, x)} (s_{f_{12}(b)}^*, s_{f_{14}(c)}, x, s_{f_{11}(c)}^*) (s_{f_{11}(c)}, s_a^*) \\ (s_a, s_{f_{12}(b)}) \diamond (s_c, s_a^*) \\ (s_a, s_{f_{12}(b)}) \bm{(s_{f_{12}(b)}^*, x, s_{f_{15}(b+1)}, s_c^*)} \diamond (s_c, s_a^*) \\ (s_a, s_{f_{12}(b)}) (s_{f_{12}(b)}^*, x, s_{f_{15}(b+1)}, s_c^*) \bm{(x, s_{f_{15}(b+1)}^*, s_a, s_{b+1})} \diamond (s_c, s_a^*) \\ (s_a, s_{b+1}) \diamond (s_c, s_a^*) \\ \end{array}$ \end{center} When combined, the two phases of duplication modify $(s_a, s_b) (s_c, s_a^*)$ to become $(s_a, s_{b+1}) (s_c, s_a^*) \dots (s_a, s_{b+1}) (s_c, s_a^*)$, where all sites between the duplicated sites are unusable. Notice that although we need to duplicate $\Theta(r^3)$ distinct sites, only $\Theta(r^2)$ monomers are used in the implementation since each monomer either does not depend on $a$, e.g. $(s_b^*, x, s_{f_{13}(b)}, s_{f_{11}(c)}^*)^+$, or does not depend on $c$, e.g. $(x, s_{f_{13}(b)}^*, s_a, s_{f_{12}(b)})^-$. 
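At the level of the counter, the three increment types combine as follows: inner increments (duplications) advance $b$ and double the number of sites, middle increments advance $c$, and outer increments advance $a$. A simplified Python model of this counter (a sketch that tracks only site values, not the inserted monomers):

```python
from functools import lru_cache

def terminal_sites(r):
    """Number of terminal sites descending from one initiator site (0, 0, 0)."""
    @lru_cache(maxsize=None)
    def count(a, b, c):
        if b < r:
            return 2 * count(a, b + 1, c)  # inner increment: site duplicates
        if c < r:
            return count(a, 0, c + 1)      # middle increment: b resets, c advances
        if a < r:
            return count(a + 1, 0, 0)      # outer increment: b, c reset, a advances
        return 1                           # counter has reached (r, r, r)
    return count(0, 0, 0)

# Each of the r*(r+1)^2 inner increments doubles the number of sites, so
# the final polymer length is 2^{Theta(r^3)}.
for r in range(1, 5):
    assert terminal_sites(r) == 2 ** (r * (r + 1) ** 2)
```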
\renewcommand{\arraystretch}{1.25} \begin{table} \begin{center} \begin{tabular}{| c | l l |} \hline Step & \multicolumn{2}{|c|}{Inner monomer types ($b < r$)} \\ \hline 1 & \multicolumn{2}{|c|}{$(s_b^*, s_{f_{10}(c)}, s_{f_{10}(b+1)}, s_c^*)^+$} \\ 2 & $(s_{f_{11}(c)}, s_a^*, s_{f_{10}(c)}^*, x)^-$ & $(x, s_{f_{10}(b+1)}^*, s_a, s_{b+1})^-$ \\ 3 & $(s_b^*, x, s_{f_{13}(b)}, s_{f_{11}(c)}^*)^+$ & $(x, s_{f_{13}(b)}^*, s_a, s_{f_{12}(b)})^-$ \\ 4 & $(s_{f_{12}(b)}^*, s_{f_{14}(c)}, x, s_{f_{11}(c)}^*)^+$ & $(s_c, s_a^*, s_{f_{14}(c)}^*, x)^-$ \\ 5 & $(s_{f_{12}(b)}^*, x, s_{f_{15}(b+1)}, s_c^*)^+$ & $(x, s_{f_{15}(b+1)}^*, s_a, s_{b+1})^-$ \\ [3pt] \hline Step & \multicolumn{2}{|c|}{Middle monomer types ($c < r$)} \\ \hline 1 & $(s_r^*, s_{f_2(c)}, x, s_c^*)^+$ & $(s_{f_1(c)}, s_a^*, s_{f_2(c)}^*, x)^-$ \\ 2 & $(s_r^*, x, s_{f_3(c)}, s_{f_1(c)}^*)^+$ & $(x, s_{f_3(c)}^*, s_a, s_0)^-$ \\ 3 & $(s_0^*, s_{f_4(c+1)}, x, s_{f_1(c)}^*)^+$ & $(s_{c+1}, s_a^*, s_{f_4(c+1)}^*, x)^-$ \\ [3pt] \hline Step & \multicolumn{2}{|c|}{Outer monomer types ($a < r$)} \\ \hline 1 & $(s_r^*, x, s_{f_5(r)}, s_r^*)^+$ & $(x, s_{f_5(r)}^*, s_a, s_{f_6(a)}^*)^-$ \\ 2 & $(s_{f_6(a)}, s_{a+1}^*, x, s_r^*)^+$ & $(x, s_a^*, s_{a+1}, s_{f_7(r)})^-$ \\ 3 & $(s_{f_7(r)}^*, x, s_{f_8(r)}, s_{f_6(a)}^*)^+$ & $(x, s_{f_8(r)}^*, s_{a+1}, s_0)^-$ \\ 4 & $(s_0^*, s_{f_9(a)}, x, s_{f_6(a)}^*)^+$ & $(s_0, s_{a+1}^*, s_{f_9(a)}^*, x)^-$ \\ [3pt] \hline \end{tabular} \end{center} \caption{The set of all monomer types used to deterministically construct a polymer of size $2^{\Theta(r^3)}$ using $O(r^2)$ monomer types.} \label{tab:all-monomers-types-extreme-ub} \end{table} \textbf{Putting it together.} The system starts with the initiator $(s_0, s_0) (s_0, s_0^*)$. Each increment of the counter occurs either through an inner increment (a duplication), a middle increment, or an outer increment. The total set of monomers is seen in Table~\ref{tab:all-monomers-types-extreme-ub}.
There are at most $(r+1)^2$ monomer types in each family (each row of Table~\ref{tab:all-monomers-types-extreme-ub}) and $O(r^2)$ monomer types total. The system is deterministic if no pair of monomers can be inserted into any insertion site appearing during construction. It can be verified by inspection of Table~\ref{tab:all-monomers-types-extreme-ub} that any two positive monomers have distinct pairs of first and fourth symbols, and any pair of negative monomers has distinct pairs of second and third symbols. So no two monomers can be inserted into the same site and thus the system is deterministic. The size $P_i$ of a subpolymer with an initiator encoding some value $i$ between $0$ and $(r+1)^3-1$ can be bounded by $2P_{i+2} + 9 \leq P_i \leq 2P_{i+1} + 9$, since either $i+1$ or $i+2$ is an inner increment step and no step inserts more than 9 monomers. Moreover, $P_{(r+1)^3-2} \geq 1$. So $P_0 + 2$, the size of the terminal polymer, is $2^{\Theta(r^3)}$. \textbf{Running time.} Define the concentration of each monomer type to be equal. There are $12r^2 + 24r + 3 \leq 39r^2$ monomer types, so each monomer type has concentration at least $1/(39r^2)$. The polymer is complete as soon as every counter's variables have reached the value $a = b = c = r$, i.e. every site encoding a counter has been modified to become $(s_r, s_r) (s_r, s_r^*)$ and the monomer $(s_r^*, x, s_{f_5(r)}, s_r^*)^+$ has been inserted. There are fewer than $2^{r^3}$ such insertions, and each insertion requires at most $9 \cdot (r+1)^3 \leq 72r^3$ previous insertions to occur. So an upper bound on the expected time $T_r$ for each such insertion is described as a sum of $72r^3$ random variables, each with expected time $39r^2$.
The Chernoff bound for independent exponential random variables~\cite{Chernoff-1952} implies the following upper bound on $T_r$: \begin{align*} {\rm Prob}[T_r > 39r^2 \cdot 72r^3(1 + \delta)] &\leq e^{-39 \cdot 72r^5 \delta^2 / (2 + \delta)} \\ &\leq e^{-r^5 \delta^2 / (2 + \delta)} \\ &\leq e^{-r^5 \delta^2 / (2\delta)} \text{~for~all~} \delta \geq 2 \\ &\leq e^{-r^5 \delta / 2} \\ \end{align*} Let $T_{\mathcal{S}_r}$ be the total running time of the system. Then we can bound $T_{\mathcal{S}_r}$ from above using the bound for $T_r$: \begin{align*} {\rm Prob}[T_{\mathcal{S}_r} > 39r^2 \cdot 72r^3(1 + \delta)] &\leq 2^{r^3} \cdot e^{-r^5 \delta / 2} \\ &\leq 2^{r^3} 2^{-r^5 \delta/2} \\ &\leq 2^{r^3 - r^5 \delta/2} \\ &\leq 2^{r^5 \delta/4 - r^5 \delta/2} \text{~for~all~} \delta \geq 4 \\ &\leq 2^{-r^5\delta/4} \end{align*} So ${\rm Prob}[T_{\mathcal{S}_r} > 39r^2 \cdot 72r^3(1 + \delta)] \leq 2^{-r^5\delta/4}$ for all $\delta \geq 4$. So the expected value of $T_{\mathcal{S}_r}$, the construction time, is $O(r^5) = O(\log^{5/3}(n))$ with an exponentially decaying tail probability. \end{proof}
\section{Introduction} Countless papers have suggested particles or fields that can lead to an inflating universe. Most have used ad hoc mechanisms without identifying a physical origin -- what is the inflaton? Such bottom-up descriptions, furthermore, rely on strong hidden assumptions on the theory of quantum gravity. More thorough proposals have identified the inflaton as part of a string theory construction in which the ultraviolet (UV) physics can be addressed. In this case, the inflaton arises in a theory that itself satisfies major consistency conditions and tests. The theory should also connect with the Standard Models of particle physics and cosmology. Ideally, its properties would uniquely determine the nature of the inflaton. In this work, we focus on M-theory compactified spontaneously on a manifold of $G_2$ holonomy. The resulting quantum theory is UV complete and describes gravity plus the Standard Model plus Higgs physics. When its hidden sector matter is included it has a de Sitter vacuum~\cite{Acharya:2007rc}. It stabilizes all the moduli, and is supersymmetric with supersymmetry softly broken via gluino condensation and gravity mediated~\cite{Acharya:2007rc}. It produces a hierarchy of scales, and has quarks and leptons interacting via Yang-Mills forces. It generically has radiative electroweak symmetry breaking, and correctly anticipated the ratio of the Higgs boson mass to the $Z$ mass~\cite{Kane:2011kj}. It also solves the strong CP problem~\cite{Acharya:2010zx}. In this theory, a particular linear combination of moduli, that which describes the volume of the compactified region, generates inflation. By means of K\"ahler geometry, we will prove that a tachyonic instability develops if the inflaton is not `volume modulus-like'. In contrast to related proposals in type II string theory~\cite{Linde:2007jn,Badziak:2008yg,Conlon:2008cj}, volume modulus inflation on $G_2$ does not rely on uplifting or higher order corrections to the K\"ahler potential.
This follows from the smaller curvature on the associated K\"ahler submanifold. Besides being intuitively a likely inflaton, the volume modulus also resolves a notorious problem of string inflation: the energy density injected by inflation can destabilize moduli fields and decompactify the extra dimensions. Prominent moduli stabilization schemes including KKLT~\cite{Kachru:2003aw}, the large volume scenario~\cite{Balasubramanian:2005zx} and K\"ahler uplifting~\cite{Balasubramanian:2004uy,Westphal:2006tn} share the property that the volume modulus participates in supersymmetry breaking. Its stability is threatened once the Hubble scale of inflation $H$ exceeds $m_{3/2}$~\cite{Buchmuller:2004xr,Kallosh:2004yh,Buchmuller:2015oma}. In contrast, the volume modulus of the compactified $G_2$ manifold drives inflation in the models we will discuss. Thereby, the inflationary energy density stabilizes the system and $H \gg m_{3/2}$ is realized. The supersymmetry breaking fields -- light moduli and mesons of a strong hidden sector gauge theory -- receive stabilizing Hubble mass terms on the inflationary trajectory. Inflation takes place close to an inflection point in the potential and lasts for 100--200 e-foldings. If we impose the observational constraints on the spectral index, we can predict the tensor-to-scalar ratio $r\sim 10^{-6}$. It is unlikely that other observables will directly probe the nature of the inflaton. However, inflation emerges as a piece of a theory which also implies low energy supersymmetry with a gravitino mass $m_{3/2} \lesssim 100\:\text{TeV}$ and a specific pattern of superpartner masses. Gauginos are at the TeV scale and observable at LHC. Furthermore, a matter dominated cosmological history is predicted. In a sense, all aspects and tests of the theory are also tests of the nature of its inflaton, although technically they may not be closely related. Less is known about $G_2$ manifolds than about Calabi-Yau manifolds.
This is being at least partially remedied via a 4-year, \$9 million study sponsored by the Simons Foundation, started in 2017 and focusing on $G_2$ manifolds. Remarkably, the above successes were achieved without detailed knowledge of the properties of the manifolds. \section{De Sitter Vacua in $G_2$ Compactifications}\label{sec:g2vacua} \subsection{The Moduli Sector} We study M-theory compactifications on a flux-free $G_2$-manifold. The size and the shape of the manifold are controlled by moduli $T_i$. In our convention, the imaginary parts of the $T_i$ are axion fields.\footnote{$T_i$ in this work corresponds to $i z_i$ defined in~\cite{Acharya:2007rc}.} A consistent set of K\"ahler potentials is of the form~\cite{Beasley:2002db,Acharya:2005ez} \begin{equation} K=-3\log\left(4\pi^{1/3}\mathcal{V}\right)\,, \end{equation} where $\mathcal{V}$ denotes the volume of the manifold in units of the eleven-dimensional Planck length. Since the volume must be a homogeneous function of the $\:\text{Re}\, T_i$ of degree 7/3, the following simple ansatz has been suggested~\cite{Acharya:2005ez} \begin{equation}\label{eq:MKahler} K=-\log\left[\frac{\pi}{2} \prod\limits_i (\overline{T}_i+T_i)^{a_i}\right]\,,\qquad \sum\limits_{i} a_i =7\,, \end{equation} which corresponds to $\mathcal{V}= \prod_i (\:\text{Re}\, T_i)^{a_i/3}$. We will drop the factor $\pi/2$ in the following since it merely leads to an overall $\mathcal{O}(1)$ factor in the potential not relevant for this discussion. A realistic vacuum structure with stabilized moduli is realized through hidden sector strong dynamics such as gaugino condensation. The resulting theory generically has massless quarks and leptons, and Yang-Mills forces~\cite{Acharya:2007rc}, and it has generic electroweak symmetry breaking, and no strong CP problem~\cite{Acharya:2010zx}. We consider one or several hidden sector $\SU{N}$ gauge theories.
These may include massless quark states $Q$, $\overline{Q}$ transforming in the $N$ and $\overline{N}$ representations. Each hidden sector induces a non-perturbative superpotential due to gaugino condensation~\cite{Seiberg:1993vc,Seiberg:1994bz} \begin{equation} W = A \,\det{\left(Q\overline{Q}\right)}^{-\frac{1}{N-N_f}} \, \exp\left({-\frac{2\pi\,f}{N-N_f}}\right)\,, \end{equation} where $N_f$ denotes the number of quark flavors. The coefficient $A$ is calculable, but depends on the RG-scheme as well as threshold corrections to the gauge coupling. The gauge kinetic function $f$ is a linear combination of the moduli~\cite{Lukas:2003dn}, \begin{equation} f= c_i T_i\,, \end{equation} with integer coefficients $c_i$. We now turn to the construction of de Sitter vacua with broken supersymmetry. \subsection{Constraints on de Sitter Vacua}\label{sec:ConstraintsdS} In this section we introduce some tools of K\"ahler geometry which can be used to derive generic constraints on de Sitter vacua in supergravity~\cite{GomezReino:2006dk}. The same framework also applies to inflationary solutions (see e.g.~\cite{Badziak:2008yg}) and will later be employed to identify the inflaton field. In order to fix our notation, we introduce the $F$-term part of the scalar potential in supergravity \begin{equation} V= e^G\left( G^i G_{i} - 3\right)\,, \end{equation} with the function $G=K + \log|W|^2$. The subscript $i$ indicates differentiation with respect to the complex scalar field $\phi_i$. Indices can be raised and lowered by the K\"ahler metric $K_{i\bar{j}}$ and its inverse $K^{\bar{i}j}$. Extrema of the potential satisfy the stationary conditions $V_i=0$ which can be expressed as \begin{equation} e^G (G_i + G^j \nabla_i G_j) + G_i V =0\,, \end{equation} where we introduced the K\"ahler covariant derivatives $\nabla_i$.
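For a single field, the stationarity condition can be verified symbolically by differentiating $V = e^G(G^iG_i - 3)$ directly; a SymPy sketch, treating $T$ and $\overline{T}$ as independent variables:

```python
import sympy as sp

# Treat T and Tb (= T-bar) as independent holomorphic/antiholomorphic variables.
T, Tb = sp.symbols('T Tb')
K = sp.Function('K')(T, Tb)
W = sp.Function('W')(T)
Wb = sp.Function('Wb')(Tb)

G = K + sp.log(W) + sp.log(Wb)            # G = K + log|W|^2
G_T, G_Tb = sp.diff(G, T), sp.diff(G, Tb)
Kinv = 1 / sp.diff(K, T, Tb)              # inverse Kahler metric K^{T Tbar}

V = sp.exp(G) * (Kinv * G_T * G_Tb - 3)   # F-term scalar potential

# Kahler covariant derivative: nabla_T G_T = G_TT - Gamma^T_{TT} G_T
Gamma = Kinv * sp.diff(K, T, T, Tb)
nabla_G_T = sp.diff(G, T, T) - Gamma * G_T
G_up = Kinv * G_Tb                        # G^T = K^{T Tbar} G_Tbar

# Stationarity identity: V_T = e^G (G_T + G^T nabla_T G_T) + G_T V
lhs = sp.diff(V, T)
rhs = sp.exp(G) * (G_T + G_up * nabla_G_T) + G_T * V
assert sp.simplify(sp.expand(lhs - rhs)) == 0
```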
The mass matrix at stationary points derives from the second derivatives of the potential~\cite{Covi:2008ea}, \begin{align} V_{i \bar j} &= e^{G} \left(G_{i \bar j} + \nabla_i G_k \nabla_{\bar j} G^k - R_{i \bar j m \bar n} \, G^m G^{\bar n} \right) + \left(G_{i \bar j} - G_i G_{\bar j}\right) V \,, \label{eq:Vibarj}\\ V_{i j} &= e^{G} \left(2 \nabla_i G_j+ G^k \nabla_i \! \nabla_j G_k \right) + \left(\nabla_i G_j - G_i G_j \right) V \,, \end{align} where $R_{i \bar j m \bar n}$ denotes the Riemann tensor of the K\"ahler manifold. (Meta)stable vacua are obtained if the mass matrix is positive semi-definite. A weaker necessary condition requires the submatrix $V_{i \bar j}$ to be positive semi-definite. All complex scalars orthogonal to the sgoldstino may acquire a large mass from the superpotential. In addition, the above mass matrix contains the standard soft terms relevant e.g.\ for the superfields of the visible sector. Stability constraints apply in particular to the sgoldstino direction which does not receive a supersymmetric mass. Via appropriate field redefinitions, we can set all derivatives of $G$ to zero, except for one which we choose to be $G_n$. The curvature scalar of the one-dimensional submanifold associated with the sgoldstino is defined as \begin{equation}\label{eq:curvaturescalar} R_n= \frac{K_{nn\bar{n}\bar{n}}}{K_{n\bar{n}}^2}-\frac{K_{nn\bar{n}}\,K_{n\bar{n}\bar{n}}}{K_{n\bar{n}}^3}\,. \end{equation} From the necessary condition, it follows that $V_{n\bar{n}}\geq 0$ and, hence, \begin{equation}\label{eq:curvaturecondition0} e^G\, (2- 3 R_n) - V R_n \geq 0\,. \end{equation} For a tiny positive vacuum energy as in the observed universe, the constraint essentially becomes~\cite{GomezReino:2006dk} \begin{equation}\label{eq:curvaturecondition} R_n < \frac{2}{3}\,. \end{equation} This condition restricts the K\"ahler potential of the field responsible for supersymmetry breakdown.
Indeed, it invalidates some early attempts to incorporate supersymmetry breaking in string theory. For the dilaton $S$ in heterotic string theory, one can e.g.\ derive the curvature scalar $R_S=2$ from its K\"ahler potential $K=-\log(\overline{S}+S)$. The scenario of dilaton-dominated supersymmetry breaking~\cite{Kaplunovsky:1993rd} is, hence, inconsistent with the presence of a de Sitter minimum~\cite{Brustein:2000mq,GomezReino:2006dk}. K\"ahler potentials of the no-scale type $K= -3\log (\overline{T}+T)$, with $T$ denoting an overall K\"ahler modulus, feature $R_T = 2/3$. In this case~\eqref{eq:curvaturecondition} is marginally violated. Corrections to the K\"ahler potential and/or subdominant $F$ or $D$-terms from other fields may then reconcile $T$-dominated supersymmetry breaking with the bound. Examples of this type include the large volume scenario~\cite{Balasubramanian:2005zx} as well as K\"ahler uplifting~\cite{Balasubramanian:2004uy,Westphal:2006tn}. A less constrained possibility to realize de Sitter vacua consists in supersymmetry breaking by a hidden sector matter field. Hidden sector matter is present in compactified M-theory. When it is included using the approach of Seiberg~\cite{Seiberg:1994bz}, it generically leads to a de Sitter vacuum. The identification of the goldstino with the meson of a hidden sector strong gauge group allows for a natural explanation of the smallness of the supersymmetry breaking scale (and correspondingly the weak scale) through dimensional transmutation. The simple canonical K\"ahler potential, for instance, yields a vanishing curvature scalar consistent with~\eqref{eq:curvaturecondition}. Matter supersymmetry breaking is also employed in KKLT modulus stabilization~\cite{Kachru:2003aw} with $F$-term uplifting~\cite{Lebedev:2006qq} and in heterotic string models~\cite{Lowen:2008fm}. We note, however, that in $G_2$ compactifications of M-theory, de Sitter vacua can arise even if the hidden sector matter decouples.
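The curvature scalars quoted here all follow from~\eqref{eq:curvaturescalar} applied to $K=-a\log(\overline{T}+T)$, which gives $R=2/a$; a short SymPy check:

```python
import sympy as sp

a, x = sp.symbols('a x', positive=True)   # x stands for T + Tbar

# For K = -a*log(T + Tbar), every holomorphic or antiholomorphic
# derivative acts as d/dx, so the mixed derivatives in the curvature
# formula reduce to ordinary derivatives of K(x).
K = -a * sp.log(x)
K2 = sp.diff(K, x, 2)   # K_{n nbar}
K3 = sp.diff(K, x, 3)   # K_{nn nbar} = K_{n nbar nbar}
K4 = sp.diff(K, x, 4)   # K_{nn nbar nbar}

R = sp.simplify(K4 / K2**2 - K3**2 / K2**3)
assert sp.simplify(R - 2 / a) == 0

# Dilaton (a = 1): R = 2 > 2/3; no-scale modulus (a = 3): R = 2/3;
# single G2 modulus with a = 7: R = 2/7 < 2/3.
assert [R.subs(a, n) for n in (1, 3, 7)] == [2, sp.Rational(2, 3), sp.Rational(2, 7)]
```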
As we show in section~\ref{sec:modularinflation}, the $G_2$ K\"ahler potential~\eqref{eq:MKahler} features linear combinations of moduli with curvature scalar as small as 2/7. In contrast to the previously mentioned string theory examples, condition~\eqref{eq:curvaturecondition} can hence be satisfied even in the absence of corrections to the K\"ahler potential. The modular inflation models we discuss in section~\ref{sec:modularinflation} are of this type. We will show that, by a small parameter deformation, the inflationary plateau can be turned into a metastable de Sitter minimum. Let us also briefly allude to the controversy on the existence of de Sitter vacua in string/M-theory~\cite{Obied:2018sgi}. It is known that de Sitter vacua do not arise in the classical limit of string/M-theory~\cite{Maldacena:2000mw}. This, however, leaves the possibility to realize de Sitter vacua at the quantum level. Indeed, in the $G_2$ compactification we describe, the scalar potential is generated by quantum effects. The quantum nature is at the heart of the proposal and tied to the origin of physical scales. \subsection{Minimal Example of Modulus Stabilization}\label{sec:singlemodulus} We describe the basic mechanism of modulus stabilization in $G_2$-compactifications following~\cite{Acharya:2007rc}.\footnote{Some differences occur since~\cite{Acharya:2007rc} mostly focused on the case of two hidden sector gauge groups with equal gauge kinetic functions, while we will consider more general cases.} Some key features are illustrated within a simple one-modulus example. Since the single-modulus case faces cosmological problems which can be resolved in a setup with two or more moduli, we will later introduce a two-moduli example and comment on the generalization to many moduli.
The minimal example\footnote{Due to the absence of a constant term in the superpotential, a single gaugino condensate would give rise to a runaway potential.} of modulus stabilization in $G_2$-compactifications invokes two hidden sector gauge groups $\SU{N_1+1}$, $\SU{N_2}$ with gauge kinetic functions \begin{equation} f_1=f_2=T\,. \end{equation} The $\SU{N_1+1}$ gauge theory shall contain one pair of massless quarks $Q$, $\overline{Q}$ transforming in the fundamental and anti-fundamental representation of $\SU{N_1+1}$. When the $\SU{N_1+1}$ gauge theory condenses, the quarks form an effective meson field $\phi=\sqrt{2Q\overline{Q}}$. Taking $\SU{N_2}$ to be matter-free, the superpotential and K\"ahler potential read \begin{align}\label{eq:onemod} W &= A_1 \,\phi^{-\frac{2}{N_1}} \, e^{-\frac{2\pi T}{N_1}} + A_2 \, e^{-\frac{2\pi T}{N_2}} \,,\nonumber\\ K &= -7 \log\left(\overline{T}+T\right) + \overline{\phi}\phi\,. \end{align} We neglected the volume dependence of the matter K\"ahler potential which does not qualitatively affect the modulus stabilization~\cite{Acharya:2008hi}. The scalar potential including the modulus and meson field is \begin{equation}\label{eq:potentialonemodulus} V= e^G\left( G^T G_T + G^{\phi} G_{\phi} - 3\right)\,. \end{equation} The scalar mass spectrum contains two CP even and two CP odd (axion) states which are linear combinations of $\:\text{Re}\, T$, $|\phi|$ and $\:\text{Im}\, T$, $\arg \phi$ respectively. We will denote the CP even and odd mass eigenstates by $s_{1,2}$ and $\varphi_{1,2}$ respectively. The scalar potential is invariant under the shift \begin{equation} T\rightarrow T + i \frac{N_2}{N_1-N_2}\,\Delta\,, \qquad\phi \rightarrow e^{i\pi\Delta} \phi\,. \end{equation} This can easily be seen from the fact that the superpotential merely picks up an overall phase under this transformation.
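The invariance of the superpotential (up to an overall phase) under this shift can be checked numerically; a small Python sketch with illustrative parameters and field values (any choice works):

```python
import cmath

# Illustrative parameters and field values (the specific numbers are arbitrary).
N1, N2 = 8, 12
A1, A2 = 1e-4, -4.8e-6

def W(T, phi):
    """Superpotential of the one-modulus example, eq. (onemod)."""
    return (A1 * phi ** (-2.0 / N1) * cmath.exp(-2 * cmath.pi * T / N1)
            + A2 * cmath.exp(-2 * cmath.pi * T / N2))

T, phi, delta = 13.0 + 0.7j, 0.9 + 0.2j, 0.37
W0 = W(T, phi)
W1 = W(T + 1j * N2 / (N1 - N2) * delta, cmath.exp(1j * cmath.pi * delta) * phi)

# Both terms of W pick up the same phase exp(-2*pi*i*delta/(N1 - N2)),
# so |W| and hence the scalar potential are invariant.
phase = cmath.exp(-2j * cmath.pi * delta / (N1 - N2))
assert abs(W1 - phase * W0) < 1e-18
```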
The light axion \begin{equation} \varphi_1 \propto N_2 \:\text{Im}\, T + \pi(N_1-N_2) \arg \phi \end{equation} is, hence, massless, which makes it a natural candidate for the QCD axion~\cite{Acharya:2010zx}. The remaining axionic degree of freedom receives a periodic potential which has an extremum at the origin of field space. Without loss of generality, we require $\text{sign}(A_1/A_2)=-1$ such that the extremum is a minimum.\footnote{If this condition is not satisfied, the relative sign of $A_1$ and $A_2$ can be inverted through field redefinition.} This allows us to set $\:\text{Im}\, T=\arg \phi=0$ when discussing the stabilization of the CP even scalars. We now want to prove that this setup allows for the presence of a (local) de Sitter minimum consistent with observation. For practical purposes, we can neglect the tiny cosmological constant and require the presence of a Minkowski minimum with broken supersymmetry. There is generically no supersymmetric minimum at finite field values. Since the negative sign of $A_1/A_2$ is required for axion stabilization, a solution to $G_T=0$ only exists if $N_2>N_1$. With this constraint imposed, there is no simultaneous solution to $G_{\phi}=0$ with positive $|\phi|$. However, a minimum $(T_0,\phi_0)$ with broken supersymmetry may occur close to the field value $T_{\text{susy}}$ at which $G_T$ vanishes. This is because the modulus mass term at $T_{\text{susy}}$ dominates over the linear term which drives it away from this point. Given a minimum with a small shift $\delta T = T_{\text{susy}}-T_0$, we can expand \begin{equation}\label{eq:GT} G_{T} = G_{\bar{T}} = - (G_{TT}+G_{T\bar{T}}) \,\delta T\,. \end{equation} Here and in the following, all terms are evaluated at the minimum if not stated otherwise. Since $T_0$, $\phi_0$, $\delta T$ are real, there is no need to distinguish between $G_T$ and $G_{\bar{T}}$.
In order to determine the shift, we insert~\eqref{eq:GT} into the minimization condition $V_T=0$ and keep terms up to linear order in $\delta T$. Notice that all derivatives of $G$ with respect to purely holomorphic or purely antiholomorphic variables are of zeroth order in $T_0^{-1}$. We find \begin{equation}\label{eq:shift} \delta T = \frac{G_{\phi T} G_{\bar{\phi}}}{G_{TT}K^{T\bar{T}}G_{\bar{T}\bar{T}}} + \mathcal{O}\left(T_0^{-4}\right)\,. \end{equation} The leading contribution to the shift is $\delta{T}=\mathcal{O}(T_0^{-2})$. This justifies our expansion in $\delta T$. In the next step, we want to determine the location of the minimum. As an additional constraint, we require a vanishing vacuum energy. In order to provide simple analytic results, we will perform a volume expansion which is equivalent to an expansion in $T_0^{-1}$. We include terms up to $\mathcal{O}(T_0^{-1})$. Notice that, at this order, the modulus minimum satisfies $T_0 = T_{\text{susy}}$. We, nevertheless, have to keep track of the shift carefully since it may appear in a product with the inverse K\"ahler metric which compensates its suppression. The conditions $V_T=V_\phi=V=0$ lead to the set of equations at order $T_0^{-1}$ \begin{equation}\label{eq:minimum} G_{T}=0\,,\qquad G_{\phi\phi}+1-\frac{G_{\phi T}^2}{G_{TT}}=0\,,\qquad G_{\phi}= \sqrt{3}\,. \end{equation} The solutions for the modulus and meson minimum read \begin{equation} \phi_0=\frac{\sqrt{3}}{2}\,,\qquad T_0=\frac{14}{\pi}\, \frac{N_2}{3(N_2-N_1)-8}\,. \end{equation} Notice that a minimum only exists for $N_2 \geq N_1 +3$. On the other hand $N_2-N_1\lesssim 10$ since the non-perturbative terms in the superpotential would otherwise exceed unity. The equations~\eqref{eq:minimum} fix one additional parameter which can be taken to be the ratio $A_1/A_2$. 
We find \begin{equation}\label{eq:parameter} \frac{A_1}{A_2} =-\frac{N_1}{N_2}\left(\frac{3}{4}\right)^{\frac{1}{N_1}} \exp\left[\frac{28}{N_1}\frac{N_2-N_1}{3\,(N_2-N_1)-8}\right]\,. \end{equation} A suppressed vacuum energy can be realized on those $G_2$ manifolds which fulfill the above constraint\footnote{More accurately, the exact version of the above approximate constraint.} with acceptable precision. We now turn to the details of supersymmetry breaking. The gravitino mass is defined as \begin{equation} m_{3/2}= |e^{G/2}|_{T_0,\phi_0}\,. \end{equation} Throughout this work, $m_{3/2}$ refers to the gravitino mass in the vacuum of the theory. We will later also introduce the gravitino mass during inflation, but will clearly indicate the latter by an additional superscript $I$. Within the analytic approximation, the gravitino mass determined from~\eqref{eq:minimum} and~\eqref{eq:parameter} is \begin{equation} m_{3/2} \simeq |A_1|\,\frac{e^{3/8}\pi^{7/2}}{48 N_1}\,\left(\frac{3 N_2- 3 N_1 -8}{7 N_2}\right)^{7/2}\,\exp\left[-\frac{N_2}{N_1}\,\frac{28}{ 3(N_2-N_1)-8}\right]\,. \end{equation} Up to the overall prefactor, the gravitino mass is fixed by the rank of the hidden sector gauge groups. A hierarchy between the Planck scale and the supersymmetry breaking scale naturally arises from the dimensional transmutation. If we require a gravitino mass close to the electroweak scale, this singles out the choice $N_2 = N_1 + 4$. While this particular result only holds for the single modulus case, similar relations between the gravitino mass and the hidden sector gauge theories can be established in realistic systems with many moduli~\cite{Acharya:2007rc}.\footnote{In realistic $G_2$ compactifications, the gauge kinetic function is set by a linear combination of many moduli. We can effectively account for this by modifying the gauge kinetic function to $f= \mathcal{O}(10-100) \,T$ in the one-modulus example. 
In this case, the preferred value of $N_2-N_1$ changes to 3 in agreement with~\cite{Acharya:2007rc}.} In order to determine the pattern of supersymmetry breaking we evaluate the $F$-terms which are defined in the usual way, \begin{equation} F^i=e^{G/2} K^{i\bar{j}} G_{\bar{j}}\,. \end{equation} From~\eqref{eq:GT} and~\eqref{eq:shift}, we derive \begin{equation} |F^T| \simeq \frac{2 N_2}{\pi(N_2-N_1)}\,m_{3/2} \,,\qquad |F^{\phi}| \simeq \sqrt{3} \,m_{3/2} \end{equation} at leading order. The meson provides the dominant source of supersymmetry breaking as can be seen by comparing the canonically normalized $F$-terms \begin{equation} \frac{\left|F^T \sqrt{K_{\bar{T}T}}\right|}{\left|F^{\phi}\right|} \simeq \frac{3 N_2-3 N_1-8}{2\sqrt{21}(N_2-N_1)}\,. \end{equation} This has important implications for the mediation of supersymmetry breaking to the visible sector. Since gravity-mediated gaugino masses only arise from moduli $F$-terms, they are suppressed against the gravitino and sfermion masses. We refer to~\cite{Acharya:2008zi} for details. As stated earlier, the modulus and the meson are subject to mixing. However, the mixing angle is suppressed by $T_0$, and the heavy CP even and odd mass eigenstates $s_2$ and $\varphi_2$ are modulus-like. Since their mass is dominated by the supersymmetric contribution $m_{\bar{T}T}$, they are nearly degenerate with \begin{equation} m_{s_2} \simeq m_{\varphi_2} \simeq e^{G/2}\,\sqrt{\frac{G_{TT}K^{\bar{T}T}G_{\bar{T}\bar{T}}}{K_{\bar{T}T}}} \simeq \frac{56}{N_1}\,\frac{3 N_2^2-3 N_1^2 -8 N_1}{(3 N_2 -3 N_1-8)^2}\,m_{3/2}\,. \end{equation} The meson-like axion $\varphi_1$ is massless due to the shift symmetry. Since the meson is the dominant source of supersymmetry breaking, the supertrace of masses in the meson multiplet must approximately cancel. This implies \begin{equation} m_{s_1} \simeq 2\, m_{3/2}\,. \end{equation} The scalar potential vanishes towards large modulus field values. 
Hence, the minimum ($T_0,\phi_0$) is only protected by a finite barrier. We first keep the meson fixed and estimate the barrier height in a leading order volume expansion.\footnote{We also assumed $N_{1,2}\gg N_2-N_1$ when estimating the barrier height.} Then we allow the meson to float in order to account for a decrease of the barrier in the mixed modulus-meson direction. Numerically, we find that the shifting meson generically reduces the barrier height by another factor $\sim T_0^{-1}$. Our final estimate thus reads \begin{equation}\label{eq:barrierpotential} V_{\text{barrier}} \simeq \frac{16\pi^2 T_0}{7 e^2 N_1^2}\,m_{3/2}^2\,. \end{equation} The prefactor in front of the gravitino mass is of order unity. Notice that the above expression is multiplied by two powers of the Planck mass which is set to unity in our convention. For illustration, we now turn to an explicit numerical example. We choose the following parameter set \begin{equation}\label{eq:benchmarkparameter} N_1=8\,,\qquad N_2=12\,,\qquad A_1=0.0001\,. \end{equation} The prefactor $A_2$ is fixed by requiring a vanishing vacuum energy. Numerically, we find \begin{equation} A_1/A_2=-20.9\,, \end{equation} in good agreement with the analytic approximation~\eqref{eq:parameter}. We list the resulting minimum, particle masses, supersymmetry breaking pattern and barrier height in table~\ref{tab:spectrum1}. The numerical results are compared with the analytic expressions provided in this section. The approximations are valid to within a few per cent precision. Only for $m_{3/2}$ is the error larger, due to its exponential dependence on the modulus minimum.
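The analytic estimates of this section can be cross-checked by direct evaluation. The following script is an illustrative sketch (not part of the derivation); it evaluates the closed-form expressions for the benchmark point~\eqref{eq:benchmarkparameter}, and the conversion of $m_{3/2}$ to physical units assumes the reduced Planck mass $M_P \simeq 2.435\times 10^{18}\,\text{GeV}$.

```python
import math

# Benchmark parameters, cf. eq. (benchmarkparameter)
N1, N2, A1 = 8, 12, 1e-4

# Modulus and meson minimum at leading order in the volume expansion
T0 = 14 / math.pi * N2 / (3 * (N2 - N1) - 8)          # ~ 13.4
phi0 = math.sqrt(3) / 2                               # ~ 0.87

# Ratio A1/A2 enforcing a vanishing vacuum energy, eq. (parameter)
ratio = -(N1 / N2) * (3 / 4) ** (1 / N1) \
        * math.exp(28 / N1 * (N2 - N1) / (3 * (N2 - N1) - 8))

# Gravitino mass in Planck units; converted to TeV below
m32 = (abs(A1) * math.exp(3 / 8) * math.pi ** 3.5 / (48 * N1)
       * ((3 * N2 - 3 * N1 - 8) / (7 * N2)) ** 3.5
       * math.exp(-(N2 / N1) * 28 / (3 * (N2 - N1) - 8)))
m32_TeV = m32 * 2.435e18 / 1e3                        # ~ 33 TeV

# F-terms, heavy scalar mass and barrier height in units of m_{3/2}
FT = 2 * N2 / (math.pi * (N2 - N1))                   # ~ 1.91
Fphi = math.sqrt(3)                                   # ~ 1.73
ms2 = 56 / N1 * (3 * N2**2 - 3 * N1**2 - 8 * N1) / (3 * N2 - 3 * N1 - 8)**2
V_barrier = 16 * math.pi**2 * T0 / (7 * math.e**2 * N1**2)
```

These values reproduce the analytic (lower) row of table~\ref{tab:spectrum1} to the quoted precision.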
\begin{table}[h] \begin{center} \begin{tabular}{|cc|cccccccc|} \hline &&&&&&&&&\\[-4mm] $T_0$ & $\phi_0$ & $m_{3/2}$ & $\!m_{\varphi_1}\!$ & $m_{\varphi_2}$ & $m_{s_1}$ & $m_{s_2}$& $F^T$& $F^\phi$ &$V_{\text{barrier}}$\\[1mm] \hline\hline $12.9$ & $0.85$ & $57\:\text{Te\eVdist V}$ & $0$ & $77.1\, m_{3/2}$ & $1.98\, m_{3/2}$ & $75.4\, m_{3/2}$& $1.98 \,m_{3/2}$& $1.72 \,m_{3/2}$ &$0.5\,m_{3/2}^2$\\ $13.4$ & $0.87$ & $33\:\text{Te\eVdist V}$ & $0$ & $77\, m_{3/2}$ & $2\, m_{3/2}$ & $77\, m_{3/2}$ & $1.91\, m_{3/2}$ & $1.73\, m_{3/2}$ &$0.6\,m_{3/2}^2$\\ \hline \end{tabular} \end{center} \caption{Location of the minimum, mass spectrum, $F$-terms and height of the potential barrier for the parameter choice~\eqref{eq:benchmarkparameter}. The upper and lower lines correspond to the exact numerical result and the analytic approximation, respectively.} \label{tab:spectrum1} \end{table} The scalar potential in the modulus-meson plane is depicted in figure~\ref{fig:modulus1}. Also shown is the potential along the `most shallow' mixed modulus-meson direction. The latter was determined by minimizing the potential in the meson direction for each value of $T$. \begin{figure}[t] \begin{center} \includegraphics[height=5.3cm]{potential3d.pdf}\hspace{6mm} \includegraphics[height=4.7cm]{potential2d.pdf} \end{center} \caption{The left panel shows the scalar potential (in Planck units) in modulus and meson direction rescaled by $m_{3/2}^2$. A local minimum with broken supersymmetry is located at $T_0=12.9$, $\phi_0=0.85$. The field direction with the shallowest potential barrier is indicated by the red line. In the right panel, the potential along this direction is shown.} \label{fig:modulus1} \end{figure} \subsection{Generalization to Several Moduli}\label{sec:severalmoduli} Realistic $G_2$ manifolds must contain the full MSSM spectrum with its $\mathcal{O}(100)$ couplings. They will generically feature a large number of moduli and non-perturbative terms in the superpotential.
The low energy phenomenology, however, mostly depends on the lightest modulus. In this sense, the mass spectrum derived in the previous section is realistic, once $T$ is identified with the lightest modulus. However, in the early universe, high energy scales are accessed. This implies that, for cosmology, the heavier moduli do actually matter. We will later see that inflation in M-theory relies on large mass hierarchies in the moduli sector. In order to motivate their existence, we now introduce an example with two moduli $T_{1,2}$. One linear combination of moduli $\ensuremath{T_{\scalebox{.7}{L}}}$ plays the role of the light modulus as in the previous section. It participates (subdominantly) in supersymmetry breaking and its mass is tied to the gravitino mass. The orthogonal linear combination $\ensuremath{T_{\scalebox{.7}{H}}}$ can, however, be decoupled through a large supersymmetric mass term in the superpotential. In order to be explicit, we will identify \begin{equation} \ensuremath{T_{\scalebox{.7}{H}}} = \frac{T_1 + T_2}{2}\,,\qquad \ensuremath{T_{\scalebox{.7}{L}}} = \frac{T_1 - T_2}{2}\,. \end{equation} The superpotential is assumed to be of the form \begin{equation} W = \mathcal{W}(\ensuremath{T_{\scalebox{.7}{H}}}) + w(\ensuremath{T_{\scalebox{.7}{H}}},\ensuremath{T_{\scalebox{.7}{L}}})\,. \end{equation} The part $\mathcal{W}$ only depends on $\ensuremath{T_{\scalebox{.7}{H}}}$ and provides the large supersymmetric mass for the heavy linear combination. The part $w$ is responsible for supersymmetry breaking and its magnitude is controlled by the (much smaller) gravitino mass. We require that $\ensuremath{T_{\scalebox{.7}{H}}}$ is stabilized supersymmetrically at a high mass scale. For this we impose that the high energy theory defined by $\mathcal{W}$ has a supersymmetric Minkowski minimum, i.e.
\begin{equation}\label{eq:globalsusy} \mathcal{W}=\ensuremath{\mathcal{W}_{\scalebox{.7}{H}}} = 0\,, \end{equation} where the subscript $\text{H}$ indicates differentiation with respect to $\ensuremath{T_{\scalebox{.7}{H}}}$. The above condition has to be fulfilled at the minimum which we denote by $\ensuremath{T_{\scalebox{.7}{H},0}}$. It ensures that $\ensuremath{T_{\scalebox{.7}{H}}}$ can be integrated out at the superfield level. The mass of the heavy modulus is given as \begin{equation}\label{eq:heavymodmass} m_{\ensuremath{T_{\scalebox{.7}{H}}}}\simeq \left| e^{K/2}\:\ensuremath{\mathcal{W}_{\scalebox{.7}{HH}}}\:\left(\frac{1}{4K_{1\bar{1}}}+\frac{1}{4K_{2\bar{2}}}\right)\right| \end{equation} with $K_{i\bar{i}}$ denoting the entries of the K\"ahler metric in the original field basis. Since $m_{\ensuremath{T_{\scalebox{.7}{H}}}}$ is unrelated to the gravitino mass, it can be parametrically enhanced against the light modulus mass. The construction of a Minkowski minimum for $\ensuremath{T_{\scalebox{.7}{L}}}$ with softly broken supersymmetry proceeds analogously to the one-modulus case. As an example we consider five hidden sector gauge groups $\SU{N_1+1}$ and $\SU{N_i}$ ($i=2,\dots 5$) with gauge kinetic functions \begin{equation} f_{1,2}= 2\,T_1 + T_2\,,\qquad f_{3,4,5}= T_1+T_2\,. \end{equation} The $\SU{N_1+1}$ shall again contain one pair of massless quarks $Q$, $\overline{Q}$ forming the meson $\phi=\sqrt{2Q\overline{Q}}$. The remaining gauge theories are taken to be matter-free. Super- and K\"ahler potential take the form \begin{align}\label{eq:twomod} W &= \underbrace{A_1 \,\phi^{-\frac{2}{N_1}} \, e^{-\frac{2\pi f_1}{N_1}} + A_2 \, e^{-\frac{2\pi f_2}{N_2}}}_{w}+ \underbrace{A_3 \, e^{-\frac{2\pi f_3}{N_3}}+ A_4 \, e^{-\frac{2\pi f_4}{N_4}}+ A_5 \, e^{-\frac{2\pi f_5}{N_5}}}_{\mathcal{W}} \,,\nonumber\\[2mm] K &= - \log\left(\overline{T}_1+T_1\right)-6 \log\left(\overline{T}_2+T_2\right) + \overline{\phi}\phi\,. 
\end{align} We have assumed \begin{equation}\label{eq:hierarchy} |A_1 \, e^{-\frac{2\pi f_1}{N_1}}|,\,|A_2 \, e^{-\frac{2\pi f_2}{N_2}}| \;\;\ll\;\; |A_3 \, e^{-\frac{2\pi f_3}{N_3}}|,\,|A_4 \, e^{-\frac{2\pi f_4}{N_4}}| ,\,|A_5 \, e^{-\frac{2\pi f_5}{N_5}}|\,, \end{equation} such that the first two gaugino condensates contribute to $w$, the last three to $\mathcal{W}$. In order to obtain a supersymmetric minimum with vanishing vacuum energy for the heavy modulus, we impose~\eqref{eq:globalsusy}, which fixes one of the coefficients, \begin{equation}\label{eq:A5condition} A_5= -A_3 \left(\frac{A_3}{A_4}\frac{\mathcal{N}_{53}}{\mathcal{N}_{45}}\right)^{\frac{\mathcal{N}_{53}}{\mathcal{N}_{34}}} - A_4 \left(\frac{A_3}{A_4}\frac{\mathcal{N}_{53}}{\mathcal{N}_{45}}\right)^{\frac{\mathcal{N}_{54}}{\mathcal{N}_{34}}}\quad\text{with}\;\;\mathcal{N}_{ij}=\frac{1}{N_i}-\frac{1}{N_j}\,. \end{equation} The location of the heavy modulus minimum is found to be \begin{equation}\label{eq:THmin} \ensuremath{T_{\scalebox{.7}{H},0}}= \frac{\log \left(\frac{A_3}{A_4}\frac{\mathcal{N}_{53}}{\mathcal{N}_{45}}\right)}{4\pi \mathcal{N}_{34} }\,. \end{equation} We can now integrate out $\ensuremath{T_{\scalebox{.7}{H}}}$ at the superfield level. In the limit where $\ensuremath{T_{\scalebox{.7}{H}}}$ becomes infinitely heavy, the low energy effective theory is defined by the superpotential $W_\text{eff}=w$ (evaluated at $\ensuremath{T_{\scalebox{.7}{H}}}=\ensuremath{T_{\scalebox{.7}{H},0}}$) and the K\"ahler potential \begin{equation} K_\text{eff} = - \log\left(2\ensuremath{T_{\scalebox{.7}{H},0}}+\ensuremath{\overline{T}_{\scalebox{.7}{L}}}+\ensuremath{T_{\scalebox{.7}{L}}}\right)-6 \log\left(2\ensuremath{T_{\scalebox{.7}{H},0}}-\ensuremath{\overline{T}_{\scalebox{.7}{L}}}-\ensuremath{T_{\scalebox{.7}{L}}}\right) + \overline{\phi}\phi\,. \end{equation} The effective theory resembles the one-modulus example of the previous section. 
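As a numerical cross-check (an illustrative sketch; the parameter values $A_3=1$, $A_4=-0.445$ and $N_{3,4,5}=11,13,15$ are those of the benchmark point~\eqref{eq:twomodulibench} introduced below), the conditions~\eqref{eq:A5condition} and~\eqref{eq:THmin} can be evaluated directly:

```python
import math

# Hidden sector data of the two-moduli benchmark
A3, A4 = 1.0, -0.445
N3, N4, N5 = 11, 13, 15

def Ncal(i, j):
    """N_ij = 1/N_i - 1/N_j, as defined in eq. (A5condition)."""
    return 1 / i - 1 / j

# Common building block (A3/A4) * N_53 / N_45
arg = (A3 / A4) * Ncal(N5, N3) / Ncal(N4, N5)

# A5 fixed by the supersymmetric Minkowski condition, eq. (A5condition)
A5 = (-A3 * arg ** (Ncal(N5, N3) / Ncal(N3, N4))
      - A4 * arg ** (Ncal(N5, N4) / Ncal(N3, N4)))    # ~ 0.0755

# Heavy modulus minimum, eq. (THmin)
TH0 = math.log(arg) / (4 * math.pi * Ncal(N3, N4))    # ~ 9.5
```

Both numbers agree with the values quoted in the numerical example of this section.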
At leading order in the volume expansion, the minimum with softly broken supersymmetry derives from the set of equations~\eqref{eq:minimum} with $T$ replaced by $\ensuremath{T_{\scalebox{.7}{L}}}$. We find \begin{equation}\label{eq:TLmin} \phi_0=\frac{\sqrt{3}}{2}\,,\qquad \ensuremath{T_{\scalebox{.7}{L},0}}=-\frac{4 \ensuremath{K_{\scalebox{.7}{L}}}\ensuremath{T_{\scalebox{.7}{L},0}}}{\pi}\, \frac{N_2}{3(N_2-N_1)-8}\,, \end{equation} where we wrote the equation for $\ensuremath{T_{\scalebox{.7}{L},0}}$ in implicit form. In contrast to the single modulus example, values $N_2<N_1+3$ may now be realized since the derivative of the K\"ahler potential $\ensuremath{K_{\scalebox{.7}{L}}}$ can take both signs. In order for the vacuum energy to vanish, the coefficients $A_{1,2}$ need to fulfill the relation \begin{equation}\label{eq:A1condition} \frac{A_1}{A_2} = -\frac{N_1}{N_2}\left(\frac{3}{4}\right)^{\frac{1}{N_1}} e^{2\pi(3T_{\text{H,0}}+T_{\text{L,0}})\mathcal{N}_{12}} \end{equation} with $\ensuremath{T_{\scalebox{.7}{H},0}}$ and $\ensuremath{T_{\scalebox{.7}{L},0}}$ taken from~\eqref{eq:THmin} and~\eqref{eq:TLmin}. Again, we neglected higher orders in the inverse volume. In analogy with section~\ref{sec:singlemodulus}, one can show that the meson provides the dominant source of supersymmetry breaking. 
The spectrum of scalar fields now contains three CP even states $s_{1,2,3}$ and three CP odd states $\varphi_{1,2,3}$, for which the following mass pattern occurs \begin{align}\label{eq:massestimate2} m_{s_3}&\simeq m_{\ensuremath{T_{\scalebox{.7}{H}}}}\,,\quad m_{s_2}\simeq m_{\ensuremath{T_{\scalebox{.7}{L}}}}= \mathcal{O}\left(\frac{m_{3/2}}{\ensuremath{K_{\scalebox{.7}{$\mathrm{L}\overline{\mathrm{L}}$}}}}\right)\,,\quad m_{s_1}=\mathcal{O}\left(m_{3/2}\right)\,,\nonumber\\ m_{\varphi_3}&\simeq m_{\ensuremath{T_{\scalebox{.7}{H}}}}\,,\quad m_{\varphi_2}\simeq m_{\ensuremath{T_{\scalebox{.7}{L}}}}= \mathcal{O}\left(\frac{m_{3/2}}{\ensuremath{K_{\scalebox{.7}{$\mathrm{L}\overline{\mathrm{L}}$}}}}\right)\,,\quad m_{\varphi_1}=\mathcal{O}\left(m_{3/2}\sqrt{\frac{m_{\ensuremath{T_{\scalebox{.7}{L}}}}}{m_{\ensuremath{T_{\scalebox{.7}{H}}}}}}\right)\,. \end{align} The heavy states $s_3,\;\varphi_3$ with their mass determined from~\eqref{eq:heavymodmass} are the two degrees of freedom contained in $\ensuremath{T_{\scalebox{.7}{H}}}$. The lighter states are composed of $\ensuremath{T_{\scalebox{.7}{L}}}$ and $\phi$. They exhibit a spectrum similar to that of the single modulus example (section~\ref{sec:singlemodulus}). However, once a finite $m_{\ensuremath{T_{\scalebox{.7}{H}}}}$ is considered, the effective super- and K\"ahler potential receive corrections which are suppressed by inverse powers of $m_{\ensuremath{T_{\scalebox{.7}{H}}}}$. These corrections break the axionic shift symmetry which was present in the one-modulus case. As a result, a non-vanishing mass of the light axion appears. The latter can no longer be identified with the QCD axion. An unbroken shift symmetry can, however, easily be reestablished once the framework is generalized to include several light moduli.
In order to provide a numerical example, we pick the following hidden sector gauge theories \begin{equation}\label{eq:twomodulibench} A_1=A_3 = 1\,,\;\; A_4=-0.445 \,,\;\;N_1=8 \,,\;\;N_2=10 \,,\;\;N_3=11 \,,\;\;N_4=13 \,,\;\;N_5=15\,. \end{equation} The (exact numerical version of the) conditions~\eqref{eq:A5condition} and~\eqref{eq:A1condition} then fix $A_2=-0.0306$, $A_5=0.0754$. One may wonder whether the two-moduli example introduces additional tuning compared to the one-modulus case, since two of the $A_i$ are now fixed in order to realize a vanishing cosmological constant. However, deviations from~\eqref{eq:A5condition} and~\eqref{eq:A1condition} can compensate each other without spoiling the moduli stabilization.\footnote{In the low energy theory, such deviations would manifest as a constant in the superpotential which is acceptable as long as the latter is suppressed against the other superpotential terms.} Effectively, there is still only a single condition which must be fulfilled to the precision to which the vacuum energy cancels. In table~\ref{tab:spectrum2} we provide the location of the minimum and the resulting mass spectrum for the choice~\eqref{eq:twomodulibench}. \begin{table}[h] \begin{center} \begin{tabular}{|ccc|ccccccc|} \hline &&&&&&&&&\\[-4mm] $\ensuremath{T_{\scalebox{.7}{H},0}}\!$ & $\!\ensuremath{T_{\scalebox{.7}{L},0}}$ & $\phi_0$ & $m_{3/2}$ & $\!m_{\varphi_1}\!$ & $m_{\varphi_2}$ & $m_{\varphi_3}$ & $m_{s_1}$ & $m_{s_2}$& $m_{s_3}$\\[1mm] \hline\hline $9.5$ & $\!\!-3.9$ & $0.78$ & $82$ & $2.4$ & $1.4\times 10^3$ & $3.3\times 10^6$& $148$& $1.2\times 10^3$ &$3.3\times 10^6$\\ \hline \end{tabular} \end{center} \caption{Minimum and mass spectrum for the parameter set~\eqref{eq:twomodulibench}. In the original basis, the minimum is located at $T_{1,0}=5.6$, $T_{2,0}=13.4$.
All masses are given in $\text{TeV}$.} \label{tab:spectrum2} \end{table} An important observation is that large mass hierarchies -- in this example a factor of $\mathcal{O}(10^3)$ -- can indeed be realized in the moduli sector. The origin of such hierarchies lies in the dimensional transmutation of the hidden sector gauge theories. A larger modulus mass is linked to a higher gaugino condensation scale, originating from a gauge group of higher rank or larger initial gauge coupling. In figure~\ref{fig:2modulus}, we depict the scalar potential along the light modulus direction. For each value of $\ensuremath{T_{\scalebox{.7}{L}}}$ we have minimized the potential along the orthogonal field directions. The Minkowski minimum is protected by a potential barrier, in this case against a deeper minimum with negative vacuum energy at $\ensuremath{T_{\scalebox{.7}{L}}}=4.6$. As in the single modulus example, the barrier height is controlled by the gravitino mass. Numerically, we find $V_{\text{barrier}}= 0.2\,m_{3/2}^2$. The potential rises steeply once $\ensuremath{T_{\scalebox{.7}{L}}}$ approaches the pole in the K\"ahler metric at $\ensuremath{T_{\scalebox{.7}{L}}}=\ensuremath{T_{\scalebox{.7}{H}}}$ (corresponding to $T_2=0$). The supergravity approximation breaks down close to the pole which is, however, located sufficiently far away from the Minkowski minimum we are interested in. Of course, we need to require that the cosmological history places the universe in the right vacuum. But once settled there, tunneling to the deeper vacuum does not occur on cosmological time scales as we verified with the formalism of~\cite{Coleman:1977py}. \begin{figure}[htp] \begin{center} \includegraphics[height=7cm]{potential_2mod.pdf} \end{center} \caption{Scalar potential along the $\ensuremath{T_{\protect\scalebox{.7}{L}}}$-direction. The remaining fields were set to their $\ensuremath{T_{\protect\scalebox{.7}{L}}}$-dependent minima (see text).
The Minkowski minimum with softly broken supersymmetry is located at $\ensuremath{T_{\protect\scalebox{.7}{L},0}}=-3.9$.} \label{fig:2modulus} \end{figure} The example of this section can straightforwardly be generalized to incorporate many moduli and hidden sector matter fields. A subset of fields may receive a supersymmetric mass term and decouple from the low energy effective theory. The remaining light degrees of freedom are stabilized by supersymmetry breaking in the same way as $\ensuremath{T_{\scalebox{.7}{L}}}$ and $\phi$. Indeed, it was shown in~\cite{Acharya:2007rc} that an arbitrary number of light moduli can be fixed through the sum of two gaugino condensates in complete analogy to the examples discussed in this work. \section{Modulus (De-)Stabilization During Inflation?} As was shown in the previous section, the lightest modulus is only protected by a barrier whose size is controlled by the gravitino mass. There is a danger that, during inflation, the large potential energy lifts the modulus over the barrier and destabilizes the extra dimensions. We will show that in the single modulus case, indeed, the bound $H<m_{3/2}$ on the Hubble scale during inflation arises. This constraint was previously pointed out in the context of KKLT modulus stabilization~\cite{Kallosh:2004yh} (the analogous constraint from temperature effects had been derived in~\cite{Buchmuller:2004xr}) and later generalized to the large volume scenario and the K\"ahler uplifting scheme~\cite{Buchmuller:2015oma}.
The constraint for the single modulus case would leave us with the undesirable choice of either coping with ultra-low scale inflation or giving up supersymmetry as a solution to the hierarchy problem.\footnote{Another option may consist in fine-tuning several gaugino condensates in order to increase the potential barrier as in models with strong moduli stabilization~\cite{Kallosh:2004yh,Dudas:2012wi}.} As another problematic consequence, supersymmetry breaking would then generically induce large soft terms into the inflation sector which tend to spoil the flatness of the inflaton potential. Fortunately, we will be able to demonstrate that the bound on $H$ does not apply to more realistic examples with several moduli. The crucial point is that in the multi-field case, the modulus which stabilizes the overall volume of the compactified manifold and the modulus participating in supersymmetry breaking in the vacuum are generically distinct fields. \subsection{Single Modulus Case} We will now augment the single modulus example with an inflation sector. The latter consists of further moduli or hidden sector matter fields which we denote by $\rho_\alpha$. In order to allow for an analytic discussion of modulus destabilization we shall make some simplifying assumptions. Specifically, we take superpotential and K\"ahler potential to be separable into modulus and inflaton parts, \begin{equation} W = w(T,\phi) + \mathscr{W}(\rho_\alpha) \,,\qquad K = k(\overline{T},T,\overline{\phi},\phi) + \mathscr{K}(\overline{\rho}_\alpha,\rho_\alpha)\,. \end{equation} The modulus superpotential $w$ and K\"ahler potential $k$ are defined as in~\eqref{eq:onemod}. As an example inflaton sector, we consider the class of models with a stabilizer field defined in~\cite{Kallosh:2010xz}.
These feature \begin{equation}\label{eq:simple_inflation} \mathscr{W}=\mathscr{K}= \mathscr{K}_\alpha=0 \end{equation} along the inflationary trajectory.\footnote{In this section, we neglect the backreaction of the modulus sector on the inflaton potential. This is justified since, for the moment, we are interested in the stabilization of the modulus during inflation and not in the distinct question of whether the backreaction spoils the flatness of the inflaton potential.} For now, we focus on modulus destabilization during inflation. Whether this particular inflation model can be realized in M-theory does not matter at this point. In fact, we merely impose the conditions~\eqref{eq:simple_inflation} for convenience since they lead to particularly simple analytic expressions. The important element, which appears universally, is the $e^K\propto (\overline{T}+T)^{-7}$ factor which multiplies all terms in the scalar potential. The latter reads \begin{equation} V = V_{\text{mod}} + \frac{e^{|\phi|^2}}{(\overline{T}+T)^7} W^{\alpha}W_{\alpha} \,, \end{equation} where $V_{\text{mod}}$ coincides with the scalar potential without the inflaton as defined in~\eqref{eq:potentialonemodulus}. The second term on the right hand side sets the energy scale of inflation. It displaces the modulus and the meson. Once the inflationary energy reaches the height of the potential barrier defined in~\eqref{eq:barrierpotential}, the minimum in modulus direction gets washed out and the system is destabilized. This is illustrated in figure~\ref{fig:modulus1inf}. The constraint can also be expressed in the form \begin{equation} H \lesssim m_{3/2}\,, \end{equation} where we employed $V=3\,H^2$.
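The bound can be made quantitative by equating $V=3H^2$ with the barrier height~\eqref{eq:barrierpotential}. The following sketch (using the benchmark values $N_1=8$, $T_0=42/\pi$ of the single modulus example) illustrates that the critical Hubble scale indeed lies somewhat below $m_{3/2}$:

```python
import math

# Single modulus benchmark: N1 = 8, N2 = 12 gives T0 = 42/pi
N1 = 8
T0 = 42 / math.pi

# Barrier height in units of m_{3/2}^2, eq. (barrierpotential)
V_barrier = 16 * math.pi**2 * T0 / (7 * math.e**2 * N1**2)  # ~ 0.64

# Destabilization sets in once V = 3 H^2 reaches the barrier height
H_max = math.sqrt(V_barrier / 3)   # ~ 0.46 in units of m_{3/2}
```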
The constraint remains qualitatively unchanged if we couple a different inflation sector to the modulus.\footnote{See~\cite{He:2010uk} for a possible exception.} \begin{figure}[h] \begin{center} \includegraphics[height=6cm]{inflation_1mod.pdf} \end{center} \caption{Scalar potential in modulus direction for different choices of the Hubble scale. For each value of $T$, the potential in the meson direction was minimized. Remaining parameters are chosen as in figure~\ref{fig:modulus1}.} \label{fig:modulus1inf} \end{figure} \subsection{Two or More Moduli}\label{sec:twoormore} In the previous example, the single modulus $T$ is apparently the field which sets the overall volume of the manifold. Destabilization of $T$, which occurs at $H\sim m_{3/2}$, triggers unacceptable decompactification of the extra dimensions. However, once we extend our consideration to multiple fields, the modulus participating in supersymmetry breaking and the modulus controlling the overall volume can generically be distinct. Consider a simple two-modulus example for which the volume is determined as \begin{equation}\label{eq:volume} \mathcal{V} = (\:\text{Re}\, T_1)^{a_1/3}\,(\:\text{Re}\, T_2)^{a_2/3}\,. \end{equation} The scalar potential (before including the inflaton sector) shall have a minimum at $(T_{1,0},\,T_{2,0})$. At the minimum, we may then define the overall volume modulus \begin{equation}\label{eq:volmod} T_{\mathcal{V}} = a_1 \frac{T_1}{T_{1,0}} + a_2 \frac{T_2}{T_{2,0}}\,, \end{equation} such that for an infinitesimal change of the volume $d\mathcal{V}\propto dT_{\mathcal{V}}$. Let us assume $T_{\mathcal{V}}$ receives a large supersymmetric mass and decouples from the low energy theory. The orthogonal linear combination shall be identified with the light modulus which is stabilized by supersymmetry breaking. It becomes clear immediately that in this setup the bound $H < m_{3/2}$ cannot hold. 
The overall volume remains fixed as long as the inflationary energy density does not exceed the stabilization scale of the heavy volume modulus. Since the latter does not relate to supersymmetry breaking, large hierarchies between $H$ and $m_{3/2}$ can in principle be realized.\footnote{The idea of trapping a light modulus through a heavy modulus during inflation has also been applied in~\cite{Kappl:2015pxa}.} In reality, the heavy modulus which protects the extra dimensions does not need to coincide with the volume modulus. One can easily show that $\mathcal{V}$ in~\eqref{eq:volume} remains finite given that an arbitrary linear combination $T_1 + \alpha T_2$ with $\alpha >0$ is fixed. If the heavy linear combination is misaligned with the volume modulus, the light modulus still remains protected, but receives a shift during inflation. In order to be more explicit, let us consider the two-modulus example of section~\ref{sec:severalmoduli}. We add the inflation sector again imposing~\eqref{eq:simple_inflation}. The scalar potential along the inflationary trajectory is \begin{equation}\label{eq:2modinf} V = V_{\text{mod}} + \frac{e^{|\phi|^2}\ }{(\overline{T}_1+T_1)(\overline{T}_2+T_2)^6}\, W^{\alpha}W_{\alpha}\,. \end{equation} Inflation tends to destabilize moduli since the potential energy is minimized at $T_{1,2}\rightarrow\infty$. However, the direction $\ensuremath{T_{\scalebox{.7}{H}}}=T_1+T_2$ is protected by the heavy modulus mass $m_{\ensuremath{T_{\scalebox{.7}{H}}}}$. As long as $H\ll m_{\ensuremath{T_{\scalebox{.7}{H}}}}$, the heavy modulus remains close to its vacuum expectation value. For fixed $\ensuremath{T_{\scalebox{.7}{H}}}$, the inflaton potential energy term (second term on the right-hand side of~\eqref{eq:2modinf}) is minimized at \begin{equation}\label{eqref:infminimum} \ensuremath{T_{\scalebox{.7}{L}}} = -\frac{5}{7}\, \ensuremath{T_{\scalebox{.7}{H}}}\,. 
\end{equation} Hence, $\ensuremath{T_{\scalebox{.7}{L}}}$ remains protected as long as $\ensuremath{T_{\scalebox{.7}{H}}}$ is stabilized. It, nevertheless, receives a shift during inflation since $\ensuremath{T_{\scalebox{.7}{H}}}$ is not exactly aligned with the volume modulus. In the left panel of figure~\ref{fig:mod2inf}, we depict the scalar potential in the light modulus direction for different choices of $H$. For each value of $\ensuremath{T_{\scalebox{.7}{L}}}$ and $H$, we have minimized the potential in meson and heavy modulus direction. \begin{figure}[htp] \begin{center} \includegraphics[height=6.8cm]{vtl.pdf}\hspace{7mm} \includegraphics[height=7cm]{vth.pdf} \end{center} \caption{Scalar potential during inflation in the light modulus (left panel) and heavy modulus direction (right panel). For each $\ensuremath{T_{\protect\scalebox{.7}{H,L}}}$ and $H$, the remaining fields have been set to their corresponding minima.} \label{fig:mod2inf} \end{figure} It can be seen that the light modulus remains stabilized even for $H> m_{3/2}$. With growing $H$ it becomes heavier due to the Hubble mass term induced by inflation. This holds as long as the heavy modulus is not pushed over its potential barrier. For our numerical example, destabilization of the heavy modulus occurs at $H\simeq 470\,m_{3/2}$ as can be seen in the right panel of the same figure. The minima of $\ensuremath{T_{\scalebox{.7}{H}}}$, $\ensuremath{T_{\scalebox{.7}{L}}}$, $\phi$ as a function of the Hubble scale are depicted in figure~\ref{fig:vevs} up to the destabilization point. It can be seen that $\ensuremath{T_{\scalebox{.7}{L}}}$ slowly shifts from $\ensuremath{T_{\scalebox{.7}{L},0}}$ to the field value maximizing the volume as given in~\eqref{eqref:infminimum}. 
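The location~\eqref{eqref:infminimum} can also be recovered numerically: at fixed $\ensuremath{T_{\scalebox{.7}{H}}}$, minimizing the inflaton energy term in~\eqref{eq:2modinf} amounts to maximizing $(\ensuremath{T_{\scalebox{.7}{H}}}+\ensuremath{T_{\scalebox{.7}{L}}})(\ensuremath{T_{\scalebox{.7}{H}}}-\ensuremath{T_{\scalebox{.7}{L}}})^6$. A brute-force sketch (the value $\ensuremath{T_{\scalebox{.7}{H}}}=9.5$ is taken from table~\ref{tab:spectrum2}):

```python
# Inflaton energy term ~ 1/[(2 T1)(2 T2)^6] with T1 = TH + TL, T2 = TH - TL;
# minimizing it over TL at fixed TH maximizes f(TL) = (TH + TL)(TH - TL)^6.
TH = 9.5  # heavy modulus vev of the numerical example

def f(TL):
    return (TH + TL) * (TH - TL) ** 6

# simple grid search over the physical range -TH < TL < TH
steps = 200000
best = max(range(1, steps), key=lambda k: f(-TH + 2 * TH * k / steps))
TL_star = -TH + 2 * TH * best / steps   # ~ -(5/7) * TH
```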
\begin{figure}[htp] \begin{center} \includegraphics[width=10cm]{inf_vev.pdf} \end{center} \caption{Minima of $\ensuremath{T_{\protect\scalebox{.7}{H}}}$, $\ensuremath{T_{\protect\scalebox{.7}{L}}}$, $\phi$ as a function of the Hubble scale. Moduli destabilization occurs at $H\simeq 470\,m_{3/2}$ as indicated by the stars.} \label{fig:vevs} \end{figure} Our findings can easily be generalized to systems with many moduli. In this case, an arbitrary number of light moduli remains stabilized during inflation, provided at least one heavy modulus ($m_{\ensuremath{T_{\scalebox{.7}{H}}}}\gg H$) bounds the overall volume. A particularly appealing possibility is that the modulus which protects the extra dimensions is itself the inflaton. In particular, it would seem very natural to identify the inflaton with the overall volume modulus. We will prove in the next section that this simple picture is also favored by the K\"ahler geometry of the $G_2$ manifold. Indeed, we will show that inflationary solutions only arise in moduli directions closely aligned with the overall volume modulus. \section{Modular Inflation in M-theory}\label{sec:modularinflation} So far we have discussed modulus stabilization during inflation without specifying the inflaton sector. In this section, we will select a modulus as the inflaton. The resulting scheme falls into the class of `inflection point inflation' which we will briefly review. We will then identify the overall volume modulus (or a closely aligned direction) as the inflaton by means of K\"ahler geometry, before finally introducing explicit realizations of inflation and moduli stabilization. \subsection{Inflection Point Inflation}\label{sec:inflectioninflation} Observations of the cosmic microwave background (CMB) suggest an epoch of slow roll inflation in the very early universe.
The nearly scale-invariant spectrum of density perturbations sets constraints on the first and second derivative of the inflaton potential \begin{equation} |V'|,\,|V''|\ll V\,. \end{equation} Unless the inflaton undergoes trans-Planckian excursions, the above conditions imply a nearly vanishing slope and curvature of the potential at the relevant field value. An obvious possibility to realize successful inflation invokes an inflection point with small slope, i.e.\ an approximate saddle point. Most features of this so-called inflection point inflation can be illustrated by choosing a simple polynomial potential \begin{equation}\label{eq:eff_inflation} V= V_0 \left[1 + \frac{\delta}{\rho_0}(\rho-\rho_0) + \frac{1}{6\rho_0^3}(\rho-\rho_0)^3 \right] \;\;+ \;\;\mathcal{O}\Big((\rho-\rho_0)^4\Big)\,, \end{equation} where $\rho$ is the inflaton, which is assumed to be canonically normalized, and $\rho_0$ is the location of the inflection point. The coefficient in front of $(\rho-\rho_0)^4$ can be chosen such that the potential has a minimum with vanishing vacuum energy at the origin. Since the quartic term does not play a role during inflation, it has not been specified explicitly. The height of the inflationary plateau is set by $V_0$. The potential slow roll parameters follow as \begin{equation}\label{eq:slowrollparameters} \epsilon_V = \frac{1}{2}\left(\frac{V^{\prime}}{V}\right)^2\,,\qquad \eta_V = \frac{V^{\prime\prime}}{V}\,. \end{equation} The number of e-folds $N$ corresponding to a certain field value can be approximated analytically, \begin{equation}\label{eq:efolds} N \simeq N_{\text{max}}\left(\frac{1}{2}+ \frac{1}{\pi}\arctan\left[\frac{N_{\text{max}}(\rho-\rho_0)}{2\pi \rho_0^3}\right]\right) \,,\qquad N_{\text{max}} =\frac{\sqrt{2}\pi\,\rho_0^2}{\sqrt{\delta}}\,, \end{equation} where $N_{\text{max}}$ denotes the maximal e-fold number.
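The arctan formula can be checked against a direct slow-roll integration $N=\int \mathrm{d}\rho\, V/V'$, using $V/V'\simeq \left[\delta/\rho_0+(\rho-\rho_0)^2/(2\rho_0^3)\right]^{-1}$ on the plateau. The sketch below uses the illustrative values $\rho_0=0.3$ and $\delta=10^{-4}$, which are not taken from the text:

```python
import math

# Illustrative inflection point parameters (not from the text)
rho0, delta = 0.3, 1e-4
Nmax = math.sqrt(2) * math.pi * rho0**2 / math.sqrt(delta)   # ~ 40

def N_analytic(rho):
    """e-fold formula, eq. (efolds)."""
    return Nmax * (0.5 + math.atan(
        Nmax * (rho - rho0) / (2 * math.pi * rho0**3)) / math.pi)

# Direct trapezoidal integration of V/V' over the plateau
lo, hi, n = rho0 - 0.1, rho0 + 0.1, 200000
h = (hi - lo) / n
VoverVp = lambda r: 1 / (delta / rho0 + (r - rho0) ** 2 / (2 * rho0**3))
N_numeric = h * (sum(VoverVp(lo + k * h) for k in range(1, n))
                 + 0.5 * (VoverVp(lo) + VoverVp(hi)))

dN_analytic = N_analytic(hi) - N_analytic(lo)   # both ~ 38.9
```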
Since we assume $\rho_0$ to be sub-Planckian, the slope parameter $\delta$ must be strongly suppressed for inflation to last 60 e-folds or longer. The CMB observables, namely the normalization of the scalar power spectrum $A_s$, the spectral index of scalar perturbations $n_s$ and the tensor-to-scalar ratio $r$ are determined by the standard expressions \begin{equation}\label{eq:cmbobservables} A_s \simeq \frac{V}{24\pi^2\epsilon_V}\,,\quad n_s \simeq 1 - 6\,\epsilon_V + 2\,\eta_V\,,\quad r\simeq 16\epsilon_V\,. \end{equation} For comparison with observation, these quantities must be evaluated at the field value for which the scales relevant to the CMB cross the horizon, i.e.\ at $N=50-60$ according to~\eqref{eq:efolds}. We can use the Planck measured values $n_s=0.96-0.97$, $A_s\simeq 2.1\times 10^{-9}$~\cite{Akrami:2018odb} to fix two parameters of the inflaton potential. This allows us to predict the tensor-to-scalar ratio \begin{equation}\label{eq:tensor} r \sim \left(\frac{\rho_0}{0.1}\right)^6\times 10^{-11}\,. \end{equation} Inflation models rather generically require some degree of fine-tuning. This is also the case for inflection point inflation and manifests in the (accidental) strong suppression of the slope at the inflection point. In addition, the slow roll analysis only holds for the range of initial conditions which enable the inflaton to dissipate (most of) its kinetic energy before the last 60 e-folds of inflation. While initial conditions cannot meaningfully be addressed in the effective description~\eqref{eq:eff_inflation}, we note that the problem gets ameliorated if the inflationary plateau spans a sizeable distance in field space. This favors large $\rho_0$ as is, indeed, expected for a modulus field. In this case, the typical distance between the minimum of the potential and an inflection point relates to the Planck scale (although $\rho_0 \lesssim 1$ to avoid uncontrollable corrections to the setup).
Setting $\rho_0$ to a few tenths of $M_P$, we expect $r\sim 10^{-8}-10^{-6}$ according to~\eqref{eq:tensor}. The maximal number of e-folds is $N_{\text{max}}=100-200$. While the modulus potential differs somewhat from~\eqref{eq:eff_inflation} (e.g.\ due to non-canonical kinetic terms), we will still find similar values of $r$ in the M-theory examples of the next sections. \subsection{Identifying the Inflaton} We now want to realize inflation with a modulus field as inflaton. Viable inflaton candidates shall be identified by means of K\"ahler geometry. This will allow us to derive some powerful constraints on the nature of the inflaton without restricting to any particular superpotential. Inflationary solutions feature nearly vanishing slope and curvature of the inflaton potential in some direction of field space. To very good approximation we can neglect the tiny slope and apply the supergravity formalism for stationary points (see section~\ref{sec:ConstraintsdS}). All field directions orthogonal to the inflaton must be stabilized. Hence, the modulus mass matrix during inflation should at most have one negative eigenvalue corresponding to the inflaton mass. The latter must, however, be strongly suppressed against $V$ due to the nearly scale invariant spectrum of scalar perturbations caused by inflation. We can hence neglect it against the last term in~\eqref{eq:Vibarj} and require the mass matrix to be positive semi-definite. This leads to the same necessary condition as for the realization of de Sitter vacua, namely that $V_{i\bar{j}}$ must be positive semi-definite. During inflation, we expect the potential energy to be dominated by $F^{\rho}$. The curvature scalar of the one-dimensional submanifold associated with the inflaton $\rho$ (cf.~\eqref{eq:curvaturescalar}) should, hence, fulfill condition~\eqref{eq:curvaturecondition0}.
The latter can be rewritten as \begin{equation}\label{eq:Kahlercurvature} R_\rho^{-1} > \frac{3}{2} +\frac{3}{2}\left(\frac{H}{m_{3/2}^I}\right)^2\,. \end{equation} Here we introduced the inflationary Hubble scale through the relation $H=\sqrt{V/3}$ and the `gravitino mass during inflation' $m_{3/2}^I= e^{G/2}$. Note that $m_{3/2}^I$ is evaluated close to the inflection point. It is generically different from the gravitino mass in the vacuum which we denoted by $m_{3/2}$. We notice that field directions with a small K\"ahler curvature scalar are most promising for realizing inflation. For a simple logarithmic K\"ahler potential $K = -a\log(\overline{\rho}+\rho)$, one finds $R_\rho= 2/a$. Condition~\eqref{eq:Kahlercurvature} then imposes at least $a>3$. However, more generically, we expect $\rho$ to be a linear combination of the moduli $T_i$ appearing in the $G_2$ K\"ahler potential~\eqref{eq:MKahler}. We perform the following field redefinition \begin{equation} \rho_i = \sum\limits_j O_{ij} \;\frac{\sqrt{a_j}}{2\,T^I_{j}} \;T_j\,. \end{equation} Here $T^I_{j}$ denotes the field value of $T_j$ during inflation (more precisely, at the quasi-stationary point). Without loss of generality, we assume that $T^I_{j}$ is real.\footnote{Imaginary parts of $T^I_{j}$ can be absorbed by shifting $T_{j}$ along the imaginary axis which leaves the K\"ahler potential invariant.} The matrix $O$ is an element of SO($M$), where $M$ denotes the number of moduli. The coefficients $a_i$ must again sum to $7$ for $G_2$. The above field redefinition leads to canonically normalized $\rho_i$ at the stationary point. We now choose $\rho_1\equiv \rho$ to be the inflaton and abbreviate $O_{1i}$ by $O_{i}$. The curvature scalar can then be expressed as \begin{equation}\label{eq:g2curvature} R_\rho= \sum\limits_i\frac{6\,O_{i}^4}{a_i} - \sum\limits_{i,j}\frac{4\,O_{i}^3\,O_{j}^3}{\sqrt{a_i a_j}}\,. 
\end{equation} Since successful inflation singles out field directions with small curvature scalar, it is instructive to identify the linear combination of moduli with minimal $R_\rho$. The latter is obtained by minimizing $R_\rho$ with respect to the $O_i$ which yields $O_i=\sqrt{a_i/7}$ and, \begin{equation}\label{eq:bestdirection} \rho\propto \sum\limits_i \frac{a_i}{T_i^I}\, T_i\,. \end{equation} By comparison with~\eqref{eq:volmod}, we can identify this combination as the overall volume modulus (defined at the field location of inflation). The corresponding minimal value is $R_\rho= 2/7$. Hence, inflation must take place in the direction of the overall volume modulus or a closely aligned field direction -- as was independently suggested by modulus stabilization during inflation (see section~\ref{sec:twoormore}). In order to be more explicit, we define $\theta$ as the angle\footnote{The angle $\theta$ is defined in the $M$-dimensional space spanned by the canonically normalized $T_i$. For two linear combinations of moduli $\rho_1 = \alpha_i \hat T_i$ and $\rho_2 = \beta_i \hat T_i$, it is obtained from the scalar product $\boldsymbol{\alpha}\boldsymbol{\beta}=|\boldsymbol{\alpha}| |\boldsymbol{\beta}| \cos\theta$. Here, $\hat T_i$ denote the canonically normalized moduli $\hat T_i = (\sqrt{a_i}/T^I_{i}) \,T_i /2$.} between $\rho$ and the volume modulus $T_{\mathcal{V}}$, \begin{equation}\label{eq:defangle} \cos\theta = O_i\sqrt{\frac{a_i}{7}}\,. \end{equation} In other words, $\cos^2\theta$ is the fraction of volume modulus contained in the inflaton. The constraint on the angle depends on the properties of the manifold. However, one can derive the bound \begin{equation} R_\rho^{-1} < \frac{7}{6} \left(1+2\cos^2\theta\right)\,, \end{equation} which holds for an arbitrary number of moduli and independently of the coefficients $a_i$ (only requiring that the $a_i$ sum up to 7).
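The minimization leading to $R_\rho=2/7$ is easy to verify numerically. The sketch below evaluates~\eqref{eq:g2curvature} for a hypothetical choice of $a_i$ summing to $7$ and compares the volume modulus direction $O_i=\sqrt{a_i/7}$ with randomly drawn unit vectors:

```python
import numpy as np

def R(O, a):
    """Curvature scalar of eq. (g2curvature) for a unit vector O."""
    O, a = np.asarray(O, float), np.asarray(a, float)
    return 6*np.sum(O**4/a) - 4*np.sum(O**3/np.sqrt(a))**2

a = np.array([0.5, 2.0, 4.5])          # hypothetical a_i, summing to 7
O_vol = np.sqrt(a/7)                   # volume modulus direction, |O_vol| = 1

R_min = R(O_vol, a)
print(R_min)                           # -> 2/7 ≈ 0.2857

# random unit vectors never yield a smaller curvature scalar
rng = np.random.default_rng(1)
samples = rng.normal(size=(2000, 3))
R_random = min(R(v/np.linalg.norm(v), a) for v in samples)
print(R_random >= R_min - 1e-6)        # -> True
```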
If we combine this constraint with~\eqref{eq:Kahlercurvature}, we find \begin{equation}\label{eq:cosmax} \cos^2\theta > \frac{1}{7} + \frac{9}{14} \left(\frac{H}{m_{3/2}^I}\right)^2\,. \end{equation} From this condition, it may seem sufficient to have a moderate volume modulus admixture in the inflaton. However, in the absence of fine-tuning, the second term on the right hand side is not expected to be much smaller than unity. Furthermore, for any concrete set of $a_i$, a stronger bound than~\eqref{eq:cosmax} may arise. Therefore, values of $\cos\theta$ close to unity -- corresponding to near alignment between the inflaton and volume modulus -- are preferred. Let us, finally, point out that the lower limit on the curvature scalar also implies the following bound on the Hubble scale \begin{equation}\label{eq:generalbound} H <\frac{2\, m_{3/2}^I}{\sqrt{3}}\,, \end{equation} which must hold for arbitrary superpotential. One may now worry that this constraint imposes either low scale inflation or high scale supersymmetry breaking. This is, however, not the case since $m_{3/2}^I$ can be much larger than the gravitino mass in the true vacuum. Indeed, if the inflaton is not identified with the lightest, but with a heavier modulus, it appears natural to have $m_{3/2}^I\gg m_{3/2}$. Nevertheless,~\eqref{eq:generalbound} imposes serious restrictions on the superpotential. In order for the potential energy during inflation to be positive, while satisfying~\eqref {eq:Kahlercurvature}, one must require\footnote{We assume that the inflaton dominantly breaks supersymmetry during inflation.} \begin{equation}\label{eq:Gconstraint} 3 < G^{\rho}G_{\rho} < 7\,. \end{equation} A single instanton term $W\supset e^{-S}$ in the superpotential would induce $G^{\rho}G_{\rho}\sim S^2$. Since perturbativity requires $S\gg 1$, one typically needs to invoke a (mild) cancellation between two or more instanton terms in order to satisfy~\eqref{eq:Gconstraint}. 
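The algebra behind~\eqref{eq:cosmax} and~\eqref{eq:generalbound} can be traced with exact rational arithmetic (Python standard library only); a minimal check:

```python
from fractions import Fraction as F

# Equate the geometric bound R^{-1} < 7/6 (1 + 2 cos^2θ) with the inflationary
# requirement R^{-1} > 3/2 + 3/2 (H/m)^2 and solve for cos^2θ.
def cos2_min(x2):                      # x2 = (H / m_{3/2}^I)^2
    return (F(3, 2) + F(3, 2) * x2 - F(7, 6)) * F(3, 7)

print(cos2_min(F(0)))                  # -> 1/7, the purely geometric bound
print(cos2_min(F(1)))                  # -> 11/14 = 1/7 + 9/14, matching eq. (cosmax)

# cos^2θ ≤ 1 then caps the Hubble rate: solve cos2_min(x2) = 1
x2_max = (F(1) - F(1, 7)) / F(9, 14)
print(x2_max)                          # -> 4/3, i.e. H < 2 m_{3/2}^I / sqrt(3)
```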
\subsection{An Inflation Model} We now turn to the construction of an explicit inflation model. For the moment, we ignore supersymmetry breaking and require inflation to end in a supersymmetric Minkowski minimum. Previous considerations suggested the overall volume modulus as inflaton candidate. The simplest scenario of just one overall modulus and a superpotential generated from gaugino condensation does, however, not give rise to an inflection point with the desired properties. The minimal working example, therefore, invokes two moduli $T_{1,2}$. One linear combination $\ensuremath{T_{\scalebox{.7}{H}}}$ is assumed to be stabilized supersymmetrically with a large mass $m_{\ensuremath{T_{\scalebox{.7}{H}}}}\gg H$ at $\ensuremath{T_{\scalebox{.7}{H},0}}$. This is achieved through the superpotential part $\mathcal{W}(\ensuremath{T_{\scalebox{.7}{H}}})$ which could e.g.\ be of the form described in section~\ref{sec:severalmoduli}. The orthogonal, lighter linear combination $\rho$ is the inflaton. It must contain a large admixture of the overall volume modulus. As an example, we take superpotential and K\"ahler potential to be of the form, \begin{align}\label{eq:twomodinf} W &= \mathcal{W}(T_1+T_2) + \sum\limits_i A_i e^{-2\pi T_1/N_i} \,,\nonumber\\ K &= - a_1 \log\left(\overline{T}_1+T_1\right)-a_2 \log\left(\overline{T}_2+T_2\right) \,. \end{align} The heavy modulus can be defined as $\ensuremath{T_{\scalebox{.7}{H}}}= (T_1 + T_2)/2$ in this case. In the limit where $\ensuremath{T_{\scalebox{.7}{H}}}$ becomes infinitely heavy, integrating out $\ensuremath{T_{\scalebox{.7}{H}}}$ at the superfield level is equivalent to replacing $\ensuremath{T_{\scalebox{.7}{H}}}$ by $\ensuremath{T_{\scalebox{.7}{H},0}}$ in the superpotential and K\"ahler potential, i.e.\ $T_1 \rightarrow \ensuremath{T_{\scalebox{.7}{H},0}} +\rho$ and $T_2 \rightarrow \ensuremath{T_{\scalebox{.7}{H},0}}-\rho$. We consider the case where inflation proceeds along the real axis.
The scalar potential features terms which decrease exponentially towards large $\rho$ which originate from $W$ and its derivatives. At the same time, the prefactor $e^K$ has positive slope if we choose $a_2>a_1$. For appropriate parameters, the interplay between the super- and K\"ahler potential terms leads to an inflection point suitable for inflation. \begin{table}[t] \begin{center} \begin{tabular}{|c|ccccccccccc|} \hline PS& $a_1$ & $a_2$ & $A_1$ & $A_2$ & $A_3$ & $A_4$ & $N_1$ & $N_2$ & $N_3$ & $N_4$ & $\ensuremath{T_{\scalebox{.7}{H},0}}$\\ \hline\hline 1& $1$ & $6$ & $1$ & $-1.18$ & $0.719766$ & $-0.178645$ & $11$ & $15$ & $19$ & $23$ & $7.8$ \\ 2& $2$ & $5$ & $-1.35$ & $2.16245$ & $-0.918729$ & $-$ & $15$ & $17$ & $19$ & $-$ & $8.2$ \\ \hline \end{tabular} \end{center} \caption{Input parameter sets PS$\,$1 and PS$\,$2 which give rise to the potential shown in figure~\ref{fig:infpot}. Two input parameters are specified with higher precision. This is required to (nearly) cancel the cosmological constant and to ensure that the spectral index matches precisely with observation.} \label{tab:ps} \end{table} We have previously shown model-independently that the inflaton must be volume modulus-like. But how do the constraints from K\"ahler geometry actually enter the concrete setup? For this, we have to look at the axion direction $\varphi$ orthogonal to the inflaton. In table~\ref{tab:ps} we provide two parameter choices (PS$\,$1 and PS$\,$2) which give rise to a similar scalar potential along the real axis (see left panel of figure~\ref{fig:infpot}). However, only PS$\,$1 leads to a viable inflationary scenario, while PS$\,$2 suffers from a tachyonic instability in the axion direction (at the inflationary plateau). This can be seen in the right panel of figure~\ref{fig:infpot}, where we depict the axion mass as a function of the inflaton field value. 
\begin{figure}[htp] \begin{center} \includegraphics[height=5.4cm]{inflaton_pot.pdf}\hspace{7mm} \includegraphics[height=5.4cm]{inflaton_mim.pdf} \end{center} \caption{In the left panel, the inflaton potential is shown for the two parameter sets of table~\ref{tab:ps}. The inflection point at $\rho-\rho_0=4$ is indicated by the thick gray dot. In the right panel, the squared mass of the axion direction is shown in units of $H^2$.} \label{fig:infpot} \end{figure} The reason for the tachyonic instability of PS$\,$2 becomes clear when we study the nature of the inflaton. We express the inflaton in terms of canonically normalized moduli, \begin{equation}\label{eq:defThat} \rho = O_1 \,\hat T_1 + O_2 \,\hat T_2\,,\qquad \hat T_i=\frac{\sqrt{a_i}}{2\,T^I_i} \,T_i\,. \end{equation} The coefficients $O_i$ determine the angle between inflaton and overall volume modulus (cf.~\eqref{eq:defangle}). In table~\ref{tab:psout} we provide the angle, the corresponding curvature scalar and the ratio $m_{3/2}^I/H$ for the two parameter sets. One can easily verify that, for PS$\,$1, the inflaton is sufficiently volume modulus-like to satisfy the constraint~\eqref{eq:cosmax} on the angle (analogously, the curvature scalar is small enough to satisfy~\eqref{eq:Kahlercurvature}). Successful inflation can, therefore, be realized. For PS$\,$2, the situation is different since the same condition is violated. The tachyonic instability which prevents inflation for PS$\,$2 is, hence, due to the misalignment between the (would-be-)inflaton and the volume modulus.
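These viability statements can be cross-checked directly against the numbers quoted in table~\ref{tab:psout}; a short numerical sketch evaluating conditions~\eqref{eq:Kahlercurvature} and~\eqref{eq:cosmax}:

```python
def viable(R_rho, cos2, m_over_H):
    """Evaluate the two necessary inflation conditions for a parameter set."""
    x2 = (1.0/m_over_H)**2                      # (H / m_{3/2}^I)^2
    curvature_ok = 1.0/R_rho > 1.5 + 1.5*x2     # condition on R_rho
    angle_ok = cos2 > 1.0/7 + 9.0/14*x2         # condition on cos^2(theta)
    return curvature_ok, angle_ok

# values quoted in the table of derived parameters
ps1 = viable(R_rho=0.34, cos2=0.76, m_over_H=1.7)
ps2 = viable(R_rho=0.47, cos2=0.42, m_over_H=1.1)
print(ps1)   # (True, True):   PS 1 passes both constraints
print(ps2)   # (False, False): PS 2 violates both -> tachyonic axion direction
```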
\begin{table} \begin{center} \begin{tabular}{|c|cccccc|} \hline PS& $O_1$ & $O_2$ & $\cos^2\theta$ & $ R_\rho$ & $H$ & $m_{3/2}^I$\\ \hline\hline 1 & $0.12$ & $-0.99$ & $0.76$ & $0.34$ & $5.9\times 10^{-8}$ & $1.7\,H$\\ 2 & $0.29$ & $-0.96$ & $0.42$ & $0.47$ & $5.9\times 10^{-8}$ & $1.1\,H$\\ \hline \end{tabular} \end{center} \caption{Derived parameters for the inputs PS$\,$1 and PS$\,$2 from table~\ref{tab:ps}.} \label{tab:psout} \end{table} For the parameter choice PS$\,$1, the inflationary observables can be determined from the slow roll expressions~\eqref{eq:cmbobservables}, where the normalization of the kinetic term has to be taken into account (the slow roll parameters are defined as derivatives with respect to the canonically normalized inflaton in~\eqref{eq:slowrollparameters}). The observables are consistent with present CMB bounds, specifically we find \begin{equation}\label{eq:nsrAs} n_s=0.96\,,\qquad r=3\times 10^{-7}\,,\qquad A_s = 2\times 10^{-9}\,. \end{equation} The tensor-to-scalar ratio falls in the expected range for inflection point inflation with a modulus (see section~\ref{sec:inflectioninflation}). From a theoretical point of view, it is interesting that the inflationary plateau can be turned into a de Sitter minimum through a small parameter deformation. If we, for example, increase the value of $\ensuremath{T_{\scalebox{.7}{H},0}}$ (or change one of the $A_i$) for PS$\,$1 slightly, the potential develops a minimum close to the inflection point. The consistency of de Sitter vacua in the moduli potential follows from the $G_2$ K\"ahler potential which has a curvature scalar as small as 2/7 on the submanifold associated with the volume modulus -- in contrast to other prominent string theory constructions (see section~\ref{sec:ConstraintsdS}). \subsection{Inflation and Supersymmetry Breaking} In the final step, we wish to construct a more realistic model which incorporates inflation and supersymmetry breaking simultaneously.
The plan is to augment the inflation sector of the previous section by the supersymmetry breaking sector comprised of the light modulus and the meson field (cf. section~\ref{sec:g2vacua}). The minimal example contains three moduli fields $T_{1,2,3}$ which form the linear combinations $\ensuremath{T_{\scalebox{.7}{H}}}$, $\rho$ and $\ensuremath{T_{\scalebox{.7}{L}}}$. The inflaton $\rho$ must be approximately aligned with the volume modulus. An orthogonal light modulus $\ensuremath{T_{\scalebox{.7}{L}}}$ participates in supersymmetry breaking. The third modulus direction $\ensuremath{T_{\scalebox{.7}{H}}}$ is stabilized supersymmetrically at a mass scale above the inflationary Hubble scale. While it does not play a dynamical role, its vacuum expectation value manifests in the K\"ahler potential of the lighter degrees of freedom. It assists in generating the plateau in the inflaton potential. The superpotential is chosen such that a mass hierarchy $m_{\ensuremath{T_{\scalebox{.7}{H}}}} \gg m_{\rho} \gg m_{\ensuremath{T_{\scalebox{.7}{L}}}}$ arises in the vacuum. This can be achieved via the form \begin{equation} W = \mathcal{W}(\ensuremath{T_{\scalebox{.7}{H}}}) + \mathscr{W}(\ensuremath{T_{\scalebox{.7}{H}}},\rho) + w(\ensuremath{T_{\scalebox{.7}{H}}},\ensuremath{T_{\scalebox{.7}{L}}})\,. \end{equation} All three superpotential parts originate from gaugino condensation. The desired mass pattern is realized through an appropriate hierarchy in the condensation scales in $\mathcal{W}$, $\mathscr{W}$ and $w$, respectively. 
For concreteness, we will make the following identification \begin{equation}\label{eq:Tdefinition} T_1 = \frac{\ensuremath{T_{\scalebox{.7}{H}}}}{3} + \frac{\rho}{6} + \frac{\ensuremath{T_{\scalebox{.7}{L}}}}{2}\,,\qquad T_2 = \frac{\ensuremath{T_{\scalebox{.7}{H}}}}{3} + \frac{\rho}{6} - \frac{\ensuremath{T_{\scalebox{.7}{L}}}}{2}\,,\qquad T_3 = \frac{\ensuremath{T_{\scalebox{.7}{H}}}}{3} - \frac{\rho}{3}\,, \end{equation} which is just one of many possibilities. Without specifying $\mathcal{W}$ explicitly, we assume $\mathcal{W}=\ensuremath{\mathcal{W}_{\scalebox{.7}{H}}} = 0$ at $\ensuremath{T_{\scalebox{.7}{H},0}}$. As shown previously, this can e.g.\ be achieved via three gaugino condensation terms (see section~\ref{sec:severalmoduli}). In the limit of very large mass $m_{\ensuremath{T_{\scalebox{.7}{H}}}}$, integrating out the heavy modulus then simply amounts to replacing $\ensuremath{T_{\scalebox{.7}{H}}}$ by $\ensuremath{T_{\scalebox{.7}{H},0}}$ in the superpotential and K\"ahler potential. In addition, we choose \begin{equation} w = A_1 \,\phi^{-\frac{2}{N_1}} \, e^{-\frac{2\pi f_1}{N_1}} + A_2 \, e^{-\frac{2\pi f_2}{N_2}}\,,\qquad \mathscr{W} = \sum\limits_{i=3}^6 A_i e^{-\frac{2\pi f_i}{N_i}}\,. \end{equation} The gauge kinetic functions are defined as \begin{equation} f_{1,2} = 2\,T_1 + T_3= \ensuremath{T_{\scalebox{.7}{H}}} + \ensuremath{T_{\scalebox{.7}{L}}} \,,\qquad f_{3,4,5,6} = T_1 + T_2= \frac{2}{3}\, \ensuremath{T_{\scalebox{.7}{H}}} + \frac{1}{3}\,\rho\,, \end{equation} such that $\mathscr{W}$ only depends on $\rho$, while $w$ only depends on $\ensuremath{T_{\scalebox{.7}{L}}}$ and $\phi$ (once $\ensuremath{T_{\scalebox{.7}{H}}}$ has been integrated out). The $G_2$ K\"ahler potential, \begin{equation} K = - \sum\limits_{i=1}^3 a_i \log\left(\overline{T}_i+T_i\right)\,, \end{equation} can be expressed in terms of $\rho$, $\ensuremath{T_{\scalebox{.7}{L}}}$ via~\eqref{eq:Tdefinition}. 
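As a sanity check of the sector decoupling, the gauge kinetic functions follow from~\eqref{eq:Tdefinition} by simple linear algebra; the sketch below tracks the coefficients of $(\ensuremath{T_{\scalebox{.7}{H}}},\rho,\ensuremath{T_{\scalebox{.7}{L}}})$ with exact fractions:

```python
from fractions import Fraction as F

# coefficients of (T_H, rho, T_L) in each modulus, following eq. (Tdefinition)
T1 = (F(1, 3), F(1, 6), F(1, 2))
T2 = (F(1, 3), F(1, 6), F(-1, 2))
T3 = (F(1, 3), F(-1, 3), F(0))

def lin(*terms):
    """Linear combination of coefficient tuples."""
    return tuple(sum(c*v[i] for c, v in terms) for i in range(3))

f12 = lin((2, T1), (1, T3))      # 2 T_1 + T_3
f3456 = lin((1, T1), (1, T2))    # T_1 + T_2

print([str(c) for c in f12])     # ['1', '0', '1']: T_H + T_L, no inflaton
print([str(c) for c in f3456])   # ['2/3', '1/3', '0']: 2T_H/3 + rho/3, no T_L
```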
For an exact numerical evaluation, we choose the parameter set of table~\ref{tab:inflationparameters}. \begin{table}[t] \begin{center} \begin{tabular}{|ccccccccccccccc|} \hline $a_1$ & $a_2$ & $a_3$ & $A_1$ & $A_2$ & $A_3$ & $A_4$ & $A_5$ & $N_1$ & $N_2$ & $N_3$ & $N_4$ & $N_5$ & $N_6$ & $\ensuremath{T_{\scalebox{.7}{H},0}}$\\[1mm] \hline\hline &&&&&&&&&&&&&&\\[-3mm] $\frac{1}{2}$ & $2$ & $\frac{9}{2}$ & $-7$ & $0.117$ & $-4.9$ & $22.52$ & $-20.52678$ & $8$ & $10$ & $24$ & $30$ & $32$ & $38$ & $21.7$\\ \hline \end{tabular} \end{center} \caption{Parameter choice giving rise to the inflaton potential shown in figure~\ref{fig:infpot2}. The parameter $A_5$ has been specified with higher precision in order to ensure that inflation with the correct spectral index arises. Cancellation of the cosmological constant fixes the remaining input parameter, $A_6=2.4213062895$.} \label{tab:inflationparameters} \end{table} \begin{figure}[t] \begin{center} \includegraphics[height=6.5cm]{spectrum.pdf} \end{center} \caption{Spectrum of scalar (+) and pseudoscalar (-) masses in the vacuum and during inflation. The dominant field components of the mass eigenstates are given in the plot legend (the orange lines e.g. refer to the meson-like mass eigenstates). Also shown are the gravitino mass and the Hubble parameter during inflation.} \label{fig:spectrum} \end{figure} The latter gives rise to a Minkowski minimum with broken supersymmetry at $\phi_0=0.78$, $\rho_0=-3.5$, $\ensuremath{T_{\scalebox{.7}{L},0}}=6.7$ (corresponding to $T_1=10$, $T_2=3.3$, $T_3=8.4$ in the original field basis). An additional AdS minimum appears outside the validity of the supergravity approximation ($T_2 <1$). In the Minkowski minimum, where we can trust our calculation, the mass spectrum shown in figure~\ref{fig:spectrum} arises. The light modulus and meson are responsible for supersymmetry breaking. Their masses cluster around the gravitino mass $m_{3/2}\sim 200\:\text{TeV}$.
A slight suppression of the meson-like axion mass arises due to an approximate shift symmetry (see section~\ref{sec:severalmoduli}). The inflaton is substantially heavier compared to the other fields since it decouples from supersymmetry breaking. \begin{figure}[t] \begin{center} \includegraphics[height=5cm]{potential_all.pdf} \end{center} \caption{Scalar potential in the inflaton direction with the other fields eliminated through their minimization condition.} \label{fig:infpot2} \end{figure} Inflation occurs along the real axis of $\rho$. The potential along this direction is shown in figure~\ref{fig:infpot2}, where the remaining fields have been set to their $\rho$-dependent minima. A (quasi-stationary) inflection point occurs at $\rho-\rho_0=15.5$, where we can still trust the supergravity approximation. Corrections to the moduli K\"ahler potential, which are expected at small compactification volume, are suppressed in this regime. Even if they slightly perturbed the inflaton potential, this could easily be compensated by adjusting the superpotential parameters. Inflation, hence, appears to be robust with respect to any higher order effects. For applying the constraints from K\"ahler geometry, we express the inflaton in terms of the (canonically normalized) original field basis \begin{equation} \rho \propto 0.09\,\hat{T}_1 + 0.07 \,\hat{T}_2 - 0.99 \,\hat{T}_3\,, \end{equation} where the $\hat{T}_i$ have been defined in~\eqref{eq:defThat}. As can be seen, the inflaton is dominantly $T_3$. The curvature scalar along the inflaton direction is $R_\rho=0.45$. The Hubble scale and the gravitino mass during inflation are depicted in figure~\ref{fig:spectrum}. One easily verifies that the curvature constraint~\eqref{eq:Kahlercurvature} is satisfied and viable inflation without tachyons can thus be achieved. This can be related to the fact that the inflaton is sufficiently aligned with the volume modulus. 
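The quoted curvature scalar can be reproduced from the decomposition above together with $a_i=(\tfrac{1}{2},2,\tfrac{9}{2})$ from table~\ref{tab:inflationparameters}; a short numerical check (the rounded coefficients are re-normalized to a unit vector):

```python
import numpy as np

a = np.array([0.5, 2.0, 4.5])          # a_1, a_2, a_3 of the parameter set
O = np.array([0.09, 0.07, -0.99])      # rounded inflaton decomposition
O = O / np.linalg.norm(O)              # restore unit normalization

# curvature scalar, eq. (g2curvature), and volume modulus fraction, eq. (defangle)
R_rho = 6*np.sum(O**4/a) - 4*np.sum(O**3/np.sqrt(a))**2
cos2 = np.sum(O*np.sqrt(a/7))**2

print(round(float(R_rho), 2), round(float(cos2), 2))   # -> 0.45 0.54
```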
The fraction of volume modulus contained in the inflaton is given by $\cos^2\theta=0.54$, in agreement with~\eqref{eq:cosmax}. In figure~\ref{fig:spectrum}, we also provide the scalar mass spectrum during inflation. The inflaton mass is not shown since its squared mass is negative as required by the constraints on the spectral index, specifically $m_{\rho}^2=-0.05\,H^2$ during inflation (corresponding to $\eta_V=-0.015$). The other scalars receive positive Hubble scale masses during inflation (as described in section~\ref{sec:twoormore}). Only the meson-like axion is about an order of magnitude lighter than $H$ due to the approximate shift symmetry. The resulting isocurvature perturbations in the light axion are not expected to be dangerous since they are transferred into adiabatic perturbations once the axion has decayed into radiation. For the parameter example, this decay occurs before primordial nucleosynthesis (BBN). In order to describe the dynamics of the multi-field system, the coupled equations of motion need to be solved. For non-canonical fields, the most general set of equations reads~\cite{Sasaki:1995aw} \begin{equation} \ddot{\psi}^\alpha+\Gamma^{\alpha}_{\beta\gamma}\dot{\psi}^{\beta}\dot{\psi}^{\gamma}+3H\dot{\psi}^{\alpha}+\mathcal{G}^{\alpha\beta}\frac{\partial V}{\partial\psi^{\beta}}=0\,. \end{equation} Here the fields $\psi^{\alpha}$ label the real and imaginary parts of $\rho$, $\ensuremath{T_{\scalebox{.7}{L}}}$, $\phi$. The field space metric $\mathcal{G}_{\alpha\beta}$ can be determined from the K\"ahler metric and $\Gamma^{\alpha}_{\beta\gamma}$ is the Christoffel symbol with respect to the field metric $\mathcal{G}_{\alpha\beta}$ and its inverse $\mathcal{G}^{\alpha\beta}$. The solution to the field equations is depicted in figure~\ref{fig:fields}. For a range of initial conditions, the fields approach the inflationary attractor solution. 
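To illustrate the structure of these equations, the following sketch integrates them for a toy two-field system with a simple non-trivial field-space metric and an illustrative quadratic potential -- not the $G_2$ model of this section, whose potential and K\"ahler metric are considerably more involved:

```python
import numpy as np

# Toy model: metric G = diag(1, e^{2 b psi0}) and quadratic potential, chosen
# only to exhibit the Christoffel and Hubble-friction terms of the EOM.
b, V0 = 0.05, 1.0e-10
V  = lambda p: V0 * (1 + 0.5*p[0]**2 + 0.5*p[1]**2)
dV = lambda p: V0 * np.array([p[0], p[1]])

def accel(p, v):
    G = np.array([1.0, np.exp(2*b*p[0])])            # diagonal field metric
    # nonzero Christoffels here: Gamma^0_{11} = -b e^{2b psi0}, Gamma^1_{01} = b
    gamma = np.array([-b * G[1] * v[1]**2, 2*b * v[0] * v[1]])
    H = np.sqrt((0.5*np.dot(G, v**2) + V(p)) / 3)    # Friedmann constraint
    return -(gamma + 3*H*v + dV(p)/G), H

p, v, dt = np.array([1.0, 1.0]), np.zeros(2), 7.0
for _ in range(300_000):                             # t ~ 2e6 in Planck units
    a_, H = accel(p, v)
    v += a_ * dt
    p += v * dt

print(np.max(np.abs(p)) < 1e-3)                      # -> True: fields damped
```

Hubble friction damps the oscillations toward the minimum, mirroring the attractor behavior described above.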
This means that $\ensuremath{T_{\scalebox{.7}{L}}}$, $\phi$ settle at finite field-values which do not depend on the initial condition after a few oscillations. Their minima during inflation, however, differ from their vacuum expectation values. The inflaton $\rho$ slowly rolls down its potential close to the inflection point. Inflation ends when it reaches the steeper part of the potential. Then, $\rho$ oscillates around its vacuum expectation value with the amplitude decreasing due to the Hubble friction. The inflationary observables can again be determined from a slow roll analysis. The parametric example was chosen to be consistent with observation. It has \begin{equation} n_s=0.97\,,\qquad r=5\times 10^{-7}\,,\qquad A_s = 2\times 10^{-9}\,. \end{equation} The field evolution shown in figure~\ref{fig:fields} spans five orders of magnitude in energy. All scalar fields remain stabilized over the full energy range. After inflation, the volume of the compactified manifold remains protected by the large inflaton mass. If the scalar potential features more than one minimum, the post-inflationary field evolution should ensure that the universe ends up in the desired vacuum.\footnote{In the parameter example, an additional AdS minimum occurs. It may, however, get lifted since it appears outside the range, where we can trust the supergravity calculation.} This might impose additional constraints on the moduli couplings including those to the visible sector. A comprehensive discussion of the reheating process is, however, beyond the scope of this work. Let us just note that the energy density stored in the light degrees of freedom redshifts slower than the thermal energy of the radiation bath and may dominate the energy content of the universe before they decay. We, therefore, expect a non-standard cosmology with late time entropy production to occur (see~\cite{Acharya:2008bk}). 
Notice that this scenario is consistent with the observed element abundances since all particles are sufficiently heavy to decay before BBN. \begin{figure}[htp] \begin{center} \includegraphics[height=7cm]{fieldquationr.pdf} \end{center} \caption{Solution to the coupled system of equations of motion for the fields $\rho$, $\ensuremath{T_{\protect\scalebox{.7}{L}}}$, $\phi$.} \label{fig:fields} \end{figure} \newpage \section{Conclusion} M-theory compactified on a manifold of $G_2$ holonomy successfully describes many microphysical features of our world. It has chiral fermions interacting via gauge forces and explains the hierarchy of scales. We have now identified the inflaton within this theory. The latter is essentially the overall volume modulus of the compactified region (or a closely aligned field direction). This statement is model-independent and derives from the K\"ahler geometry of the $G_2$ manifold. We provided concrete realizations of volume modulus inflation which satisfy all consistency conditions. Inflation occurs close to an inflection point in the scalar potential. In the relevant parameter regime, string theory corrections to the supergravity approximation are under full control. We solved the system of coupled field equations and proved that all moduli are stabilized during inflation. The scalar fields orthogonal to the inflaton receive Hubble mass terms such that inflation is effectively described as a single field slow roll model. However, several scalar fields are displaced from their vacuum expectation values during inflation. They are expected to undergo coherent oscillations when the Hubble scale drops below their mass. The energy stored in these degrees of freedom generically induces late time entropy production at their decay (which happens before BBN). The scale of inflation emerges from hidden sector strong dynamics. The Planck scale is the only dimensionful input to the theory. 
We predict $V^{1/4}\sim 10^{15}\:\text{GeV}$ and the corresponding tensor-to-scalar ratio $r\sim 10^{-6}$. Despite the large energy density of inflation, the theory is consistent with, and generically has, low energy supersymmetry. It has a de Sitter vacuum in which the (s)goldstino dominantly descends from a hidden sector meson field. Supersymmetry breaking is transmitted to the visible sector via gravity mediation. It generates a hierarchy with heavy sfermions and lighter gauginos. The gauginos are expected to reside at the TeV scale, close to the present LHC sensitivity. While experiments will not directly probe the inflaton of compactified M-theory, indirect evidence may be collected. This is because inflation sets the initial conditions for a non-thermal cosmology which affects many other phenomena including baryogenesis and dark matter. Further predictions of the compactified M-theory will soon be tested by laboratory experiments. \section*{Acknowledgments} We would like to thank Scott Watson for helpful comments on the manuscript. MW acknowledges support by the Vetenskapsr\r{a}det (Swedish Research Council) through contract No. 638-2013-8993, the Oskar Klein Centre for Cosmoparticle Physics, and the LCTP at the University of Michigan; both of us acknowledge support from DoE grant DE-SC0007859.
\section{Introduction} \lb{sec_intro} Galactic and cluster dynamics, cosmic structure, type Ia supernovae, the cosmic microwave background (CMB), and the primordial abundances of light elements provide solid evidence that dark sectors constitute a significant energy fraction of the universe at any accessible redshift $z \lesssim 10^{10}$. At all corresponding cosmological epochs the nature of abundant dark species, coupled to photons and baryons only by gravitation, is partly or entirely uncertain. The mainstream analyses of cosmological data usually assume the minimal neutrino sector, non-interacting cold dark matter (CDM), and dark energy represented by a canonical scalar field (quintessence). These assumptions are reasonable for interpreting the available data, yet none of them can be taken for granted. For example, new light weakly interacting particles commonly appear in high-energy models. In some models, even the standard neutrinos recouple to each other or to additional light fields \ct{Chacko:2003dt,Chacko:2004cz,Beacom:2004yd,Okui:2004xn,Grossman:2005ej} at redshifts at which the decoupled component of radiation gravitationally affects the CMB and cosmic structure \citep{HuSugSmall96,BS04,Hannestad:2004qu,Bell:2005dr}. Various alternatives to cold dark matter have been suggested as well. These include warm dark matter\ct{Blumenthal:1982mv,Olive:1981ak}, self-interacting dark matter\ct{Carlson_et_al_92,deLaix:1995vi,Spergel:1999mh}, or modified gravity\ct{Milgrom83,Bekenstein04,Skordis:2005xk,Dodelson:2006zt}. The viability of such scenarios remains an intriguing question. Quintessence models are convenient for quantitatively constraining dark energy parameters by data. Yet quintessence is not readily motivated by particle physics, where it is difficult to naturally achieve the required shallowness of the field potential.
On the other hand, many alternatives have been proposed whose inhomogeneous kinetics, and hence cosmological signatures, {\it cannot be mimicked by quintessence with any background\/} equation of state~$w(z)$. \citep[For a comprehensive review of dark energy models see, e.g.,][]{Copeland:2006wr}. Fortunately, cosmological observations themselves can test these assumptions by revealing not only the dark species' mean density and pressure but also the kinetics of their inhomogeneities. The goal of this paper is to map various inhomogeneous kinetic properties of the dark sectors (or deviations from Einstein gravity) to the observable characteristics of the CMB and cosmic structure. Dark species influence the visible matter by affecting both the background expansion and metric perturbations. Of the two mechanisms, the perturbations, albeit demanding better statistics for useful constraints, encode many more independent clues about the dark universe by offering {\it new information at every spatial scale\/}~$k$. The following three examples show the importance of this information, absent in the background equation of state~$w(z)$. \subsection{Examples of the value of dark perturbations} \subsubsection{Nature of dark energy} The first example is the most challenging problem in today's cosmology---the nature of dark energy. The constraints on the dark energy background equation of state $w\equiv p_{\rm de}/\rho_{\rm de}$ are tightening around the value $-1$, consistent with a cosmological constant. Analyses that combine the current data from the CMB, large scale structure (LSS), Lyman-$\alpha$ forest, and supernovae already constrain the deviation of $w$ from $-1$ for flat models to better than $10\%$ \citep[][and others]{WMAP3Spergel,SelSlosMcD06,TegmLRG06}. Whether or not future observations continue to converge on $w=-1$, the dynamics of perturbations will be crucial in elucidating the nature of cosmic acceleration.
Even if $w(z)\equiv -1$ at low redshifts, this does not necessarily imply a cosmological constant. Certain models \cite[e.g.][]{MaVaN2Fardon05} predict that $w\equiv -1$ at the present epoch, yet the dynamics of perturbations differs at high redshifts. Contrary to common belief, it is even conceivable that $w\approx -1$ yet low-redshift perturbations deviate from those in the $\Lambda$CDM model. On the other hand, if $w\not=-1$ then perturbations of dark energy in the inhomogeneous metric are unavoidable. (Even if dark energy appears unperturbed in one spacetime slicing, its perturbations for $w\not=-1$ are necessarily nonzero in any different slicing.) Exploring the perturbations' properties, specifically the properties considered in this paper, will then become pivotal to establishing the nature of dark energy. \subsubsection{Density of dark radiation} The current observations of CMB temperature and polarization, including the WMAP 3-year results\ct{WMAP3Hinshaw,WMAP3Page}, when combined with the SDSS galaxy power spectrum\ct{Tegmark:2003uf,Eisenstein:2005su}, with or without Ly-$\alpha$ forest data, prefer an enhanced neutrino density. \ctt{SelSlosMcD06} report $N_\nu=5.3^{+2.1}_{-1.7}$ at $2\,\sigma$, disfavoring the standard value $N_\nu=3.04$ at $2.4\,\sigma$. The WMAP team in its latest analysis\ct{WMAP3Spergel} likewise concludes from the WMAP3 and SDSS data that $N_\nu=7.1^{+4.1}_{-3.5}$. A similar preference for high $N_\nu$ in the WMAP3 and SDSS data is seen by\ctt{Cirelli:2006kt}, although not by\ctt{Hannestad:2006mi}. While neutrinos noticeably speed up the background expansion in the radiation era, by itself this leads to almost no observable cosmological signatures\ct{HuEisenTegmWhite98,BS04,B05Trieste}, given the freedom of a compensating adjustment of the matter density $\Omega_m h^2$\ct{Bowen02} and the primordial helium fraction~$Y_p$\ct{BS04}.
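As a quick numerical reference, the effective neutrino number $N_\nu$ enters the total radiation density after electron-positron annihilation through the standard relation $\rho_{\rm rad}=\rho_\gamma\,[1+\tfrac{7}{8}(\tfrac{4}{11})^{4/3}N_\nu]$ (a textbook result, not stated in the text). The sketch below evaluates the implied speed-up of the radiation-era expansion rate for the values of $N_\nu$ quoted above:

```python
# Standard relation between the effective neutrino number N_nu and the total
# radiation density after e+e- annihilation:
#   rho_rad = rho_gamma * [1 + (7/8) * (4/11)**(4/3) * N_nu]

def radiation_enhancement(n_nu):
    """Total radiation density in units of the photon density."""
    return 1.0 + (7.0 / 8.0) * (4.0 / 11.0) ** (4.0 / 3.0) * n_nu

standard = radiation_enhancement(3.04)  # standard-model value
enhanced = radiation_enhancement(5.3)   # central value reported by SelSlosMcD06
print(f"rho_rad/rho_gamma: {standard:.3f} (N_nu = 3.04) vs {enhanced:.3f} (N_nu = 5.3)")
# In the radiation era H is proportional to sqrt(rho_rad):
print(f"expansion-rate increase: {(enhanced / standard) ** 0.5 - 1:.1%}")
```

The resulting $\sim\!15\%$ shift of the expansion rate illustrates why, without the phase-shift signature discussed below, it can be absorbed by adjusting $\Omega_m h^2$ and $Y_p$.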
This degeneracy is broken by the differences in the evolution of streaming neutrino perturbations and of the acoustic waves in the photon-baryon fluid. It is also broken by independent constraints on the helium fraction. Various signatures of neutrinos that then appear remain partly degenerate with nuisance parameters, such as the dark energy equation of state or the normalization and tilt of the primordial power. Yet these signatures, and the constraints on $N_\nu$ that the correspondingly different experiments provide, have rather different degrees of robustness\ct{BS04}. The preference of the WMAP3+SDSS data for an enhanced neutrino density could be due to the physical excess of matter power over CMB power, expected for a higher density of freely streaming particles\ct{HuSugSmall96,BS04}. Yet other explanations, e.g., incomplete treatment of recombination or insufficient accuracy of bias modeling, cannot be definitively excluded either. In addition to increasing the ratio of matter to CMB power, freely streaming neutrinos shift the phase of CMB acoustic oscillations\ct{BS04}. The phase-shift signature is very robust to systematic uncertainties and will become the primary discriminative mechanism for the near-future tight CMB constraints on neutrino density\ct{Lopez:1998aq,Bowen02,BS04,Perotto:2006rj}, e.g.\ $\sigma(N_\nu)\sim 0.2$--$0.3$ expected from the Planck mission\footnote{http://www.rssd.esa.int/Planck}. It can provide decisive evidence for the excess of streaming relativistic particles or rule this possibility out. \subsubsection{Nature of dark radiation} If there is a true excess of energy density in the radiation era, at least three alternatives for non-standard dark radiation are possible: relic decoupled particles, self-interacting particles\ct{Chacko:2003dt,Chacko:2004cz,Beacom:2004yd,Hannestad:2004qu}, or a tracking classical field\ct{RatraPeebles88,FerreiraJoyce97,ZlatevStein_tracking98}.
The perturbations of dark radiation propagate differently in all of these scenarios. As follows from this work, this in principle allows their experimental discrimination. \subsection{Questions not answered by black-box computations} For a particular cosmological model it is generally straightforward to calculate linear power spectra and transfer functions with standard codes, if necessary, modified to include new dynamics. Despite this, numerical calculations have limited usefulness for exploring the signatures of new physics: First, for typical models, with close to ten unknown nuisance parameters, it is often difficult to establish numerically which of the observable signatures of the new physics cannot be compensated by parameter adjustments. While only such signatures are the true discriminators of the physics in question, they may be tiny and easily overlooked among large yet degenerate effects. Moreover, for every extended parameterization of the nuisance effects as well as for any additional constraining experiment, a new numerical analysis is required. Equally importantly, a numerical black-box computation does not reveal which aspects of a model are responsible for its observable signatures. This obscures the separation of physical facts that are backed by observations from those features of the models that are believed to be true yet remain untested. \subsection{Approach, outline, and conventions} The approach of this paper is to explicitly track the evolution of gravitationally coupled inhomogeneities of the visible and dark species. This allows us to identify which observables are affected by which of the various properties of the dark kinetics. Importantly, this approach also reveals the mechanisms behind the sensitivity of cosmological observables to various dark properties. Establishing the mechanism allows us to judge more intelligently the robustness of cosmological constraints and, in particular, to know when this mechanism may fail or produce a different outcome.
As we will see soon, there is an important subtlety in performing such identifications. To decipher the signatures of the dark kinetics, it is essential to address the gravitational interaction of perturbations during {\it horizon entry\/}. Then and only then are perturbations of all abundant dark species gravitationally imprinted on the visible species without suppression. The suppression of the impact of the species' energy overdensity $\delta\rho/\rho$ on the metric on subhorizon scales is evident from the Poisson equation \be \Phi\,=\,-k^{-2} 4\pi G a^2\,\delta\rho \sim (\mathcal H/k)^2\,\delta\rho/\rho, \lb{Pois_subhor} \ee where $\bm{k}$ is the (comoving) wavevector of a perturbation mode and $\mathcal H$ is the expansion rate (in conformal time). The impact of the species' velocities is suppressed even more [by $(\mathcal H/k)^3$, cf.\ eq.\rf{Poiss_my}]. At horizon entry, on the other hand, the factor $\mathcal H/k$ approaches unity and does not cause suppression. In particular, only during the horizon entry are the visible species influenced by the perturbations of dark radiation and dark energy, for which the Jeans length is expected to be close to the horizon size. Numerous authors \citep[e.g.,][]{PressVishniac80,Bard80,MaBert95} have pointed out considerable freedom in representing the inhomogeneous evolution on the scales of the order of and exceeding the Hubble scale. This freedom is due to, literally, an infinite number of possibilities for coordinate gauges, as well as for the variables (even the gauge-invariant ones) that can parameterize large-scale perturbations. In principle, this descriptional freedom could introduce ambiguity in relating observable features to specific physical mechanisms that operate on large scales. Moreover, such relations indeed differ among authors who use apparently dissimilar yet formally equivalent descriptions---and differ substantially for some features that are pronounced and important for constraints.
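The suppression factors in eq.\rf{Pois_subhor} can be made concrete with a short illustrative evaluation (the chosen values of $k/\mathcal H$ are arbitrary, picked only to span the horizon-entry and deep-subhorizon regimes):

```python
# Illustrative evaluation of the suppression factors in eq. (Pois_subhor):
# on subhorizon scales the metric response to a density perturbation is
# suppressed by (H/k)^2, and the response to peculiar velocities by (H/k)^3.

def suppression(k_over_H, power):
    """Suppression factor (H/k)^power for a mode with wavenumber k."""
    return k_over_H ** (-power)

for k_over_H in (1.0, 10.0, 100.0):
    print(f"k/H = {k_over_H:5.0f}: density impact x {suppression(k_over_H, 2):.0e}, "
          f"velocity impact x {suppression(k_over_H, 3):.0e}")
```

At $k/\mathcal H = 1$ (horizon entry) no suppression occurs, while two decades inside the horizon the velocity impact is already down by six orders of magnitude.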
We argue in Sec.~\ref{sec_uniq} that, as far as the observable impact of dark dynamics is concerned, there is little room for ambiguities. A well-defined distinction can be drawn between an apparent connection ``dynamical cause $\to$ observable effect'' that is a descriptional artifact and one that is an objective causal relation. We will also see that in certain formalisms, which exist and can be distinguished by simple criteria, the physical microscopic properties that characterize species at a particular time influence the apparent large-scale perturbations {\it at the same time\/}. Such formalisms markedly simplify the mapping of the characteristics of cosmological observables to the responsible microscopic dark kinetics. In the subsequent sections we perform such a mapping for the CMB temperature power spectrum and LSS transfer functions. In Sec.~\ref{sec_probes} we discuss the general features of the primary probes of inhomogeneous cosmological dynamics. We also give dynamical equations to be used further to quantify the CMB and matter response to dark parameters. In Sec.~\ref{sec_dynamics} we review various general parameterizations of potentially accessible information about the properties of dark sectors. We consider the parameterizations of the metric, of dark densities and stresses, and of internal dark dynamics. Sec.~\ref{sec_impacts} is central to our study. In this section we identify the characteristics of the CMB and cosmic structure that reveal such general properties of dark species as anisotropic stress, stiffness, clustering, and propagation of inhomogeneities. In Sec.~\ref{sec_ModGrav} we study the specifics of cosmologies with modified gravity and discuss how modified gravity can be distinguished observationally. Our main results are summarized in Sec.~\ref{concl} and its Table~\ref{tab_sum}. Throughout the paper, distances will be measured in comoving units.
Evolution will usually be described in conformal time $d\tau=dt/a$, where $a$~is the cosmological scale factor and $dt$ is proper background time. Overdots denote derivatives with respect to the conformal time, and $\mathcal H\equiv \dot a/a = Ha$ gives the Hubble expansion rate in conformal time. \section{Is mapping of large-scale dynamics to observables unambiguous?} \lb{sec_uniq} Except for Sec.~\ref{sec_ModGrav}, we will consider models in which dark and visible sectors couple by standard Einstein gravity. These models correspond to a local action $S=\int d^4x\,\sqrt{-g}\,\mathcal L$ with $\mathcal L=\mathcal L_{\rm dark}+\mathcal L_{\rm vis}+\mathcal L_{\rm grav}$, where the dark degrees of freedom are represented by the Lagrangian density $\mathcal L_{\rm dark}$, the visible ones by $\mathcal L_{\rm vis}$, and the gravitational ones by $\mathcal L_{\rm grav}=(16\pi G)^{-1} R$. In agreement with observations, we assume that the large-scale evolution can be presented as a mildly perturbed Friedmann-Robertson-Walker (FRW) expansion. For this section only, we clock the evolution by the number of $e$-foldings $N\equiv\ln a$, tractable through local matter density or CMB temperature. Let $[N_1, N_2]$ be an evolution interval in the past. Consider a class,~$M_{12}$, of all conceivable models with the same $\mathcal L_{\rm vis}$ that: \begin{itemize} \item[(i)] evolve identically for $N<N_1$; \item[(ii)] during the interval $[N_1, N_2]$ possibly differ in the microscopic laws of their dark, but not visible, dynamics; and \item[(iii)] by $N=N_2$ have identical distribution of the dark species (in some non-degenerate measure) among all~$M_{12}$ models.
\end{itemize} By $N=N_2$, the distributions of the visible species among these models will generally differ: The visible species would generally be gravitationally affected earlier by the model-specific evolution in the dark sectors.\footnote{ Even the dark species in the $M_{12}$ models may not be perturbed identically at $N_2$ in terms of every measure. On large scales the quantification of dark perturbations by a different measure can depend on the perturbations of visible species and the metric. We require only that the dark perturbations are the same in at least one non-degenerate measure. } Given a quantity $P$ that characterizes internal dark dynamics and an observable~$O$, the following may be true: Regardless of the value of $P$ during the interval $\Delta N_{12}\equiv [N_1, N_2]$, all the models in the above class~$M_{12}$ have the same observed value of~$O$. Then it is natural to say that the observable~$O$ is {\it not sensitive\/} to the considered dynamical property~$P$ in the interval $\Delta N_{12}$. In linear order, this applies to the effects of any dark property~$P$ on any observable~$O$ that depends only on the perturbations that were superhorizon during $\Delta N_{12}$. \ctt{zeta_a} showed that perturbations in several gravitationally coupled fluids can be described by variables which during superhorizon evolution (for non-decaying modes) are time-independent in each of the individual fluids.\footnote{ \ctt{LindeMukh_curvaton05} noted that under certain conditions, which may exist, e.g., during reheating, {\it gravitational\/} decays of species into another type of species may mix the superhorizon perturbations in different sectors. Such decays are, nevertheless, negligible for the post-BBN evolution studied in this paper. } This result can be extended beyond multifluid models to any sector that is perturbed (internally) adiabatically, i.e., is homogeneous in at least some coordinates\ct{BS04}. 
In other words, linear perturbations encode no information about the microscopic properties that characterized the dark universe when these perturbations were superhorizon. The abundances and kinetic properties that dark species have {\it since horizon entry\/} do influence the observed CMB or matter perturbations. However, because of the aforementioned freedom in representing the evolution of large-scale inhomogeneities, in many formalisms (including the most popular ones) the properties characterizing the dark species at horizon entry affect the apparent perturbations of the CMB or matter long before or after the entry. This can mislead one into viewing an observable feature as a probe of an entirely unrelated epoch and/or physical process. This source of existing and potential new misassignments can be eliminated systematically and naturally as follows. The impact of local dark dynamics on perturbations of visible species will appear concurrent with the underlying microscopic dark physics in any formalism which has the following two properties\ct{B06}: \begin{itemize} \item[I.] Perturbations are frozen on superhorizon scales. \item[II.] Perturbations evolve by the equations of the FRW metric whenever the geometry is unperturbed. \end{itemize} We imply that the description of gravity in these formalisms reduces to the Newtonian one in the weakly perturbed metric on subhorizon scales. This is easily achieved, e.g., by parameterizing metric inhomogeneities by the gravitational potentials of the Newtonian gauge \citep{Mukh_Rept92,MaBert95}. Then the apparent linear impact of dark species on observables will be found to be identical in any description with properties I~and~II: This impact is practically unambiguous for the evolution of perturbation modes after horizon entry; the apparent perturbations do not evolve at all prior to the entry, by condition~I. Therefore, these descriptions could differ only by the changes of perturbations during horizon entry.
It is straightforward to argue\ct{B06} that there are no such differences in linear theory among the formalisms that satisfy I~and~II. One particular, natural and simple, formulation of the full linearly perturbed Einstein-Boltzmann cosmological dynamics was described in\ctt{B06}. This formulation is based on {\it canonical\/} coordinate variables. The canonical variables for perturbations of phase-space distributions or radiation intensity were considered in the past \citep[e.g.,][]{Chandrasekhar_book60,Durrer_CMB01} but have not become mainstream in the present cosmology. In addition to the more direct connection, demonstrated above, of large-scale evolution to the microscopic kinetics of cosmological species, this formulation has several technical advantages over popular alternatives, most of which consider perturbations of proper quantities. It is used in the analysis that follows. \section{Probes} \label{sec_probes} Following the standard route, we separate the effects of dark sectors on the metric into the contributions to the background geometry and to metric inhomogeneities. The background geometry is fully described by the Hubble expansion rate as a function of redshift and by possible global curvature. It is constrained by a variety of ``geometrical'' probes, e.g., the luminosity-redshift relation for type Ia supernovae or the angular size of the acoustic features in the power spectra of the CMB and matter. In this paper we focus on extracting the potentially much richer information about the dark dynamics that is contained in the metric inhomogeneities. This information is to be inferred from the imprints of the metric inhomogeneities on various observables such as the CMB, galaxy distributions, lensing shear, quasar absorption spectra, etc.
The primary probes of dark perturbations can be broadly classified as either ``light'' or ``matter'': The ``light'' probes the metric along trajectories that are close to null geodesics ($ds^2\simeq 0$, e.g., CMB photons); the ``matter'' moves almost with the Hubble flow, with non-relativistic peculiar velocities ($|d\bm{x}/d\tau|\ll 1$, e.g., galaxies or Ly-$\alpha$ clouds). These two classes of probes, considered in Secs.~\ref{sec_CMB} and~\ref{sec_matter} respectively, provide information that is complementary in several respects, discussed in detail in subsequent sections. Linear theory adequately describes the inhomogeneous dynamics during horizon entry, when physical quantities are perturbed by about $0.001\%$, and long after the entry. Then the signatures of the dark kinetics are encoded in the linear transfer functions [alternatively, in Green's functions\ct{Magueijo92,Baccigalupi98,Baccigal2,BB_PRL,BB02}], which are the essential constituents of ``dynamical'' cosmological observables, such as power spectra, luminosity functions, etc. To be specific (though general within the linear regime), in this section we consider an individual perturbation mode with a comoving wavevector~${\bf k}$. \subsection{CMB} \label{sec_CMB} After horizon entry until photon decoupling around hydrogen recombination, a perturbation mode in the photon-baryon plasma undergoes acoustic oscillations. All the memory of the gravitational impact at the mode's entry is then retained only in its oscillation amplitude~$A({\bf k})$ and its temporal phase shift~$\varphi(k)$. These quantities map into the heights and phase of the acoustic peaks in the observable CMB angular power spectra~$C_l$, with a rough $\ell\leftrightarrow k$ correspondence $\ell\sim k\,r_{\rm dec}$, where $r_{\rm dec}$ is the angular diameter distance to the CMB decoupling surface. [In the standard $\Lambda$CDM model $r_{\rm dec}=\tau(z_{\rm dec}{\sim}1100)\simeq 14\,$Gpc.]
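The rough $\ell\leftrightarrow k$ correspondence quoted above can be evaluated directly; the back-of-the-envelope sketch below uses the quoted $r_{\rm dec}\simeq 14\,$Gpc, and the sample wavenumbers are arbitrary illustrative choices:

```python
# Rough l <-> k correspondence l ~ k * r_dec quoted in the text, with
# r_dec ~ 14 Gpc = 14000 Mpc (comoving) for the standard LCDM model.

R_DEC_MPC = 14000.0  # comoving distance to the CMB decoupling surface, in Mpc

def multipole(k_inv_mpc):
    """Approximate CMB multipole probed by a comoving wavenumber k [1/Mpc]."""
    return k_inv_mpc * R_DEC_MPC

for k in (0.005, 0.02, 0.1):
    print(f"k = {k:5.3f}/Mpc  ->  l ~ {multipole(k):.0f}")
```

Thus a mode with $k\sim 0.02\,{\rm Mpc}^{-1}$ projects onto multipoles near the first acoustic peak, while $k\sim 0.1\,{\rm Mpc}^{-1}$ probes the damping tail.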
Our goal is to establish how $A({\bf k})$ and $\varphi(k)$ are affected by the gravitational impact of perturbations in various species. As noted in the introduction, this task involves a subtlety that, without proper care, can lead to erroneous conclusions. Although the notion of density or temperature perturbation is uniquely defined in the FRW geometry, ambiguities arise on large scales in the perturbed metric. We argued in Sec.~\ref{sec_uniq} that this complication is uniquely resolved in linear theory, where switching off a microscopic effect of interest results in the same change of observables regardless of other local properties of the various species. We can then calculate the observables in the models with the effect ``on'' and ``off'' and identify the {\it signature of the effect\/} with the difference. Being interested in the kinetics of perturbations, we compare the models with identical background expansion $H(z)$. For the compared models, we assume identical initial conditions, i.e., identical dynamics during the inflationary epoch, when the superhorizon perturbations were likely generated. We describe the perturbative evolution by a formalism detailed in\ctt{B06}. Then the gravitational forcing of perturbations appears concurrent with the responsible local interactions and the above unambiguous causal relations are manifest. In this formalism the general-relativistic generalization of particle overdensity is $d\equiv \delta n_{\rm coo}/n_{\rm coo}$, a perturbation of particle number density per {\it coordinate\/} volume. When the CMB is tightly coupled to electrons by Thomson scattering, its (coordinate) overdensity evolves as \be \textstyle \ddot d_{\gamma} + \left(\fr{R_b}{1+R_b}\,\mathcal H + 2\tau_d k^2\right)\dot d_{\gamma} + c_s^2k^2(d_{\gamma}-D) = 0 \lb{dot_gk} \ee \citep{BS04}.
Here and below $R_b\equiv 3\rho_b/(4\rho_{\g})$, the diffusion time $\tau_d=[1-\fr{14}{15}(1+R_b)^{-1} +(1+R_b)^{-2}]/(6an_e\sigma_{\!\rm T})$\ct{Kaiser83} determines the Silk damping\ct{Silk:1967kq}, and $c_s^2=[3(1+R_b)]^{-1}$ gives the sound speed in the photon-baryon plasma. Only scalar perturbations are considered.\footnote{ The connection between dark dynamics and its observable signatures is relatively straightforward in the tensor sector, where gauge ambiguities are absent\ct{Bard80,KS84}. Vector perturbations, even if primordially generated, are not expected to survive superhorizon evolution\ct{Bard80,KS84}. If necessary, they can be analyzed similarly to the scalar perturbations. } The gravitational driving of the CMB modes in eq.\rf{dot_gk} is mediated through an instantaneous equilibrium value~$D$ of photon overdensity. In the Newtonian gauge \citep{Mukh_Rept92,MaBert95} \be ds^2=a^2\left[-(1+2\Phi)d\tau^2+(1-2\Psi)d\bm{x}^2\right], \lb{Newt_gauge} \ee the driving term equals \be D(\tau,k)= -3(\Phi+\Psi+R_b\Phi). \lb{D_def} \ee In the radiation era, when $R_b\ll 1$, the driving of the CMB by dark perturbations {\it on all scales\/} is controlled only by the sum $\Phi+\Psi$. This also applies to the CMB photons on all scales after their decoupling from baryons, cf.\ the equation for the evolution of CMB intensity\rf{dot_dig_C}. \ctt{Durrer_pert93} argued that this is natural for relativistic particles, whose dynamics is conformally invariant and therefore should be sensitive only to the Weyl part of the curvature tensor. Indeed, the scalar part of the Weyl tensor is fully specified by $\Phi+\Psi$\ct{Durrer_pert93}. In the decoupled case, the driving by $\Phi+\Psi$ is well known as the integrated Sachs-Wolfe effect (ISW)\ct{SachsWolfe67}.
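As an illustration of eq.\rf{dot_gk} in the limit just described ($R_b\to 0$, negligible diffusion) with a constant driving term~$D$, the following sketch integrates the reduced oscillator numerically and compares it with the expected harmonic solution about the equilibrium value~$D$; all parameter values are arbitrary illustrative choices:

```python
import math

# Toy integration of the tightly coupled CMB oscillator, eq. (dot_gk), in the
# radiation-era limit R_b -> 0 with negligible diffusion (tau_d -> 0) and a
# constant driving term D.  The equation then reduces to
#     d''_gamma + c_s^2 k^2 (d_gamma - D) = 0,   c_s^2 = 1/3,
# i.e., harmonic oscillation of d_gamma about D at the frequency c_s k.

CS2 = 1.0 / 3.0  # photon sound speed squared for R_b << 1

def integrate_oscillator(k, D, d0, tau_max, n_steps=20_000):
    """RK4 integration of d'' = -c_s^2 k^2 (d - D) with d(0) = d0, d'(0) = 0."""
    h = tau_max / n_steps
    d, v = d0, 0.0
    rhs = lambda d, v: (v, -CS2 * k * k * (d - D))
    for _ in range(n_steps):
        k1d, k1v = rhs(d, v)
        k2d, k2v = rhs(d + 0.5 * h * k1d, v + 0.5 * h * k1v)
        k3d, k3v = rhs(d + 0.5 * h * k2d, v + 0.5 * h * k2v)
        k4d, k4v = rhs(d + h * k3d, v + h * k3v)
        d += h * (k1d + 2 * k2d + 2 * k3d + k4d) / 6.0
        v += h * (k1v + 2 * k2v + 2 * k3v + k4v) / 6.0
    return d

k, D, d0, tau = 1.0, -3.0, 0.0, 5.0
numerical = integrate_oscillator(k, D, d0, tau)
analytic = D + (d0 - D) * math.cos(math.sqrt(CS2) * k * tau)
print(f"d_gamma(tau = {tau}): numerical {numerical:.6f}, analytic {analytic:.6f}")
```

The oscillation amplitude is set by the initial offset $d_0 - D$, which is the mechanism by which the driving term~$D$, and hence $\Phi+\Psi$, fixes $A({\bf k})$ at horizon entry.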
The full linear dynamics of partially polarized CMB photons and baryons \cite[e.g.,][]{MaBert95} is also straightforward to formulate so that the changes of the apparent inhomogeneities are concurrent with the responsible local dark properties\ct{B06}. In particular, the perturbation of the polarization-summed CMB intensity corresponds to its canonical variable $\iota(x^\mu,n_i)\equiv I_{\gamma}/\bar I_{\gamma}-1$ that evolves according to a transport equation \citep[][also presenting a fully nonlinear treatment]{Durrer:1990mk,SB_tensors05} \be \dot \iota + n_i\nabla_{\!i} \iota = -4 n_i\nabla_{\!i}(\Phi+\Psi) + C_T, \lb{dot_dig_C} \ee where $C_T$ is the Thomson collision term. \subsection{Matter} \label{sec_matter} The evolution of linear scalar perturbations of cold dark matter (CDM) on all scales and, after decoupling from the CMB, of baryons on large scales is governed by the conservation and Euler equations \be \dot d_c + \partial_i v^i_c = 0, \qquad \dot v^i_c + \mathcal H v^i_c = -\partial_i\Phi. \lb{dot_c} \ee Their solution for a Fourier mode with non-singular (inflationary) initial conditions is \be d_c = d_{c,\,\rm in} - k^2\int_0^\tau u_c\, d\tau, \qquad u_c = \fr1a\int_0^{\tau}(a\Phi)\,d\tau. \lb{d_c_sol} \ee The quantity $u_c$ that appears in these equations is physically the CDM velocity potential: $v^i_c=-\partial_i u_c$. Similarly to the CMB modes, the matter perturbations carry the memory of the metric inhomogeneities at the horizon entry in two independent functions: the {\it overdensity change\/} and {\it velocity boost\/} that were generated during the entry. After horizon entry, the excess CDM velocity, decaying as~$1/a$ yet not vanishing instantaneously, continues to enhance matter clumping. Hence, the implications of the velocity boost for the observed $\delta\rho_m/\rho_m$ are generally more important than the initial overdensity enhancement\ct{HuSugSmall96}.
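The closed-form solution\rf{d_c_sol} can be checked against a direct integration of eqs.\rf{dot_c} for a single Fourier mode; the scale factor $a(\tau)$ and potential $\Phi(\tau)$ in the sketch below are arbitrary toy choices used only to exercise the formulas, not a model of a real cosmology:

```python
# Numerical check of the closed-form solution, eq. (d_c_sol), of the CDM
# equations (dot_c) for a single Fourier mode.  With v^i_c = -d_i u_c the
# system reads  d'_c = -k^2 u_c,  u'_c + (a'/a) u_c = Phi.

k = 0.7
a = lambda tau: 1.0 + tau                 # toy scale factor (no singularity at tau=0)
phi = lambda tau: 1.0 / (1.0 + tau) ** 2  # toy decaying potential

n, tau_max = 100_000, 10.0
h = tau_max / n

# Closed form: u_c = (1/a) int_0^tau a*Phi dtau',  d_c = -k^2 int_0^tau u_c dtau'
# (initial overdensity d_in set to zero), accumulated with the trapezoid rule.
I_aphi, I_u, u_prev = 0.0, 0.0, 0.0
for i in range(1, n + 1):
    t0, t1 = (i - 1) * h, i * h
    I_aphi += 0.5 * (a(t0) * phi(t0) + a(t1) * phi(t1)) * h
    u_new = I_aphi / a(t1)
    I_u += 0.5 * (u_prev + u_new) * h
    u_prev = u_new
d_closed = -k**2 * I_u

# Direct Euler integration of the coupled ODEs for comparison.
u, d = 0.0, 0.0
for i in range(n):
    t = i * h
    d += -k**2 * u * h
    u += (phi(t) - u / a(t)) * h

print(f"d_c(tau = {tau_max}): closed form {d_closed:.5f}, direct ODE {d:.5f}")
```

The two results agree to the accuracy of the discretization, confirming that the quadratures in eq.\rf{d_c_sol} reproduce the dynamics of eqs.\rf{dot_c}.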
Eqs.\rf{d_c_sol} show that matter perturbations are driven by the potential~$\Phi$ alone {\it on all scales\/}\ct{BS04}. This also has a natural explanation. In the Newtonian gauge, point particles of mass $m$ are described by the Lagrangian $L=-m\sqrt{-ds^2}/d\tau=-ma\sqrt{[1+2\Phi({\bf x})]-[1-2\Psi({\bf x})]\,\dot{\bf x}^2}$. The corresponding contribution of $\Psi$ is negligible for nonrelativistic particles, when $L\simeq ma\,[\dot{\bf x}^2/2-\Phi({\bf x})]\,+\,$const. Moreover, the initial (superhorizon) perturbations of the particles' coordinates ${\bf x}$ determine the initial value of the {\it coordinate\/} overdensity $d_c$ also independently of $\Psi$. Since the CMB, on the other hand, is affected by the sum~$\Phi+\Psi$, this is one of the respects in which matter and the CMB provide complementary information about dark inhomogeneities. A combined analysis of both probes is particularly useful for testing a deviation of~$\Psi/\Phi$ from unity, produced by freely streaming species \cite[e.g.,][]{MaBert95} or generally expected in modified gravity\ct{Bert06,Dodelson:2006zt}. \section{Parameterizing dark dynamics} \label{sec_dynamics} In this section we review several general parameterizations of the potentially observable properties of the dark universe. We start from the spacetime metric, connected most directly to observational probes. Then, assuming validity of the Einstein equations, we match the metric characteristics to a general description of dark densities and stresses. Finally, we consider a parameterization of the potentially measurable properties of dark internal dynamics. The observational probes of these parameters will be studied in Sec.~\ref{sec_impacts}. The status of these parameterizations in cosmologies with modified gravity will be addressed in Sec.~\ref{sec_ModGrav}.
\subsection{Metric} \label{sec_dynamics_metric} Provided the visible matter couples covariantly and minimally to a certain metric tensor $g_{\mu\nu}$, the observable impact of dark sectors or modified gravity alike can be parameterized by a single function of redshift~$z$, to describe the FRW background, and by two independent functions of $z$ and~${\bf k}$, to specify scalar metric perturbations. For example, the background can be quantified by its expansion rate~$\mathcal H(z)$ (with its possible spatial curvature), and the metric perturbations by the Newtonian-gauge potentials $\Phi(z,{\bf k})$ and $\Psi(z,{\bf k})$. \subsection{Densities and stresses} For a more direct link to the internal properties of the dark sectors, it may be worthwhile to parameterize their contribution to the energy-momentum tensor $T^{\mu\nu}=\sum_a T^{\mu\nu}_a$. If gravity is described by general relativity then $T^{\mu\nu}$~determines the geometrical quantities $\{\mathcal H$, $\Phi$, $\Psi\}$ through the Einstein equations. In this view, the background may be specified by the average energy densities of the species $\rho_a(z)\equiv -\bar T^0_{0\,(a)}$. The scalar inhomogeneities of the metric are determined from the perturbations of the energy densities, sourcing the curvature potential~$\Psi$, and from anisotropic stress, generating the splitting $\Phi-\Psi$. The corresponding formulas are given next. After\ctt{Hu_GDM98}, we will use proper particle number overdensity $\delta^{(c)}_a(z,{\bf k}) \equiv \delta\rho_a^{(c)}/(\rho_a+p_a)$ in the {\it comoving\/} gauge\footnote{ The comoving gauge conditions for scalar perturbations are zero total momentum density ($T^0_i \equiv 0$) and vanishing shear of the spatial metric ($g_{ij} \propto \delta_{ij}$). }. In any other gauge \be \delta^{(c)}_a = \fr{\delta\rho_a}{\rho_a+p_a}+3\mathcal H u_a, \lb{delta^c_def} \ee where $v^i_a= -\partial_i u_a$. 
Specifically, $\delta^{(c)}_a$ can be expressed in terms of the dynamical coordinate overdensities~$d_a$, used in Sec.~\ref{sec_probes}, as \be \delta^{(c)}_a = d_a + 3\mathcal H u_a + 3\Psi. \lb{delta^c_Newt} \ee The comoving-gauge overdensity $\delta^{(c)}_a$ is convenient for two reasons. First, it directly sources the Newtonian potential~$\Psi$\ct{Bard80} \be \Nb^2\Psi = 4\pi Ga^2\sum_a(\rho_a+p_a)\delta^{(c)}_a, \lb{Poiss_tradit} \ee where $\Nb^2\equiv {}^{(3)}\!\bar g^{ij}\partial_i\partial_j$ is the spatial Laplacian (from now on the background is assumed flat). Second, the generalized pressure gradient in the Euler equation [the term $- \partial_i f_p$ in eq.\rf{dot_v}] in the comoving gauge takes its usual special-relativistic form [$f_p = \delta p^{(c)}/(\rho+p)$]. Nevertheless, we view $\delta^{(c)}_a$ only as a useful linear combination\rf{delta^c_Newt}, rather than an independent dynamical variable: The evolution of~$\delta^{(c)}_a$, unlike that of~$d_a$, is highly nonlocal. Beyond the horizon~$\delta^{(c)}_a$, unlike the frozen $d_a$, is affected by all species, in addition to the studied dark species~$a$. In terms of the dynamical variables $d_a$ and $u_a$ the Poisson equation\rf{Poiss_tradit} reads \be \left(-3+\fr{\Nb^2}{\gamma}\right)\Psi = \sum_a x_a\,(d_a + 3\mathcal H u_a), \lb{Poiss_my} \ee with \be \gamma \equiv 4\pi Ga^2(\rho+p) = \frac32\,(1+w)\mathcal H^2, ~~~{\rm and}~~~ x_a\equiv \frac{\rho_a+p_a}{\rho+p}. \lb{Poiss_my_notat} \ee Unlike eq.\rf{Poiss_tradit}, the generalized Poisson equation in the form\rf{Poiss_my} in the superhorizon limit is explicitly non-singular and its solution $\Psi \approx -\fr13(d + 3\mathcal H u)$ even becomes local. We describe the scalar component of the anisotropic stress $\Sigma^i_j\equiv T^i_j-\fr13\delta^i_jT^k_k$ by a potential~$\sigma(z,{\bf k})$ as \be (\partial_i\partial_j-\fr13\delta^i_j\Nb^2)\sigma_a \equiv {\Sigma^i_{j\,(a)}\over(\rho_a+p_a)}. 
\lb{sigma_def} \ee The corresponding gravitational equation is\ct{Bard80,KS84,MaBert95}: \be \Phi=\Psi-8\pi Ga^2\sum_a(\rho_a+p_a)\sigma_a. \lb{Psi-Phi} \ee Thus the densities and stresses of the background and scalar perturbations may be parameterized by a set $\{ \rho_a(z)$, $\delta^{(c)}_a(z,{\bf k})$, $\sigma_a(z,{\bf k})\}$ for each sector~$a$. Both $\delta^{(c)}_a$ and $\sigma_a$, as defined by eqs.\rf{delta^c_def} and\rf{sigma_def}, are gauge-invariant quantities. Neither of them, though, can be measured locally. \begin{figure*}[t] \centerline{\sc\footnotesize \qquad CMB modes, Radiation era} \centerline{\includegraphics[width=12cm]{fig_CMB_rad.eps}} \caption{The evolution of CMB overdensity~$d_{\gamma}(\tau,k)$ (oscillating, red curves) and the corresponding gravitational driving term~$D=-3(\Phi+\Psi)$ (monotonically falling, brown curves) in the radiation era for $R_b\ll 1$ and negligible photon diffusion. The gravitational driving of the CMB [eq.\rf{dot_gk}] is illustrated with a spring that connects the $d_\gamma$ and $D$ curves. The panels demonstrate the impact of perturbations in decoupled neutrinos (left) and quintessence (right). ---\,Left panel shows $d_\gamma$ and $D$ for: tightly-coupled photons only (dashed), the standard model with three neutrinos (wider solid), and the same model after switching off neutrino anisotropic stress as a source of $\Phi-\Psi$ (thinner solid). ---\,Right panel displays: the earlier photon-only model (dashed) and a fictitious model where, as in the standard model, $59\%$ of energy density is in photons but the remaining $41\%$ is now in a tracking quintessence~$\phi$, with $w_\phi=1/3$ (solid). } \label{fig_CMB_rad} \end{figure*} \subsection{Dynamical properties} \label{sec_param_dyn} As a step further, we can try to parameterize the {\it dynamics\/} of the dark densities and stresses. One such useful and popular parameterization has been suggested by \citet{Hu_GDM98}. 
He described the dynamics of the background by the conventional ``equation of state'' $w_a(z)\equiv p_a/\rho_a$, and the dynamics of perturbations by the effective sound speed \be c^2_{{\rm eff},\,a}(z,k)\equiv \delta p_a^{(c)}(k)/\delta\rho_a^{(c)}(k). \lb{c_eff_def} \ee We call this quantity ``stiffness'' throughout the paper in order to distinguish it from the physical velocity of perturbation propagation $c_p$, Sec.~\ref{sec_speed}, which may differ from $c_{\rm eff}$ and is probed by different observables. In addition, \citet{Hu_GDM98} considered a ``viscosity parameter'' $c^2_{{\rm vis},\,a}(z,k)$ (defined by the ratio of the dynamical change of anisotropic stress to the velocity gradient). However, realistically the evolution of anisotropic stress is determined not only by the velocity gradient but also by additional degrees of freedom, e.g., by the $\ell =3$ multipole or polarization. Therefore, in our study we will not use $c^2_{\rm vis}$ but will work directly with $\sigma_a(z,k)$. A dark sector that couples negligibly to the other species can be assigned a covariantly conserved energy-momentum tensor $T^{\mu\nu}_{(a)}$ through the standard procedure of metric variation \cite[e.g.,][]{LandLifshII}. Given the value of the sector's background energy density~$\rho_a$ at some moment (e.g.\ the present density, set by $\Omega_ah^2$), $w_a(z)$~determines the density at all times from \be \dot\rho=-3\mathcal H(1+w)\rho \lb{dot_rho} \ee (the subscript~$a$ is dropped from now on). The evolution of the density perturbation~$d$ follows by linearly perturbing the equation of energy conservation $T^{0\nu}{}_{;\nu}=0$: \be \dot d + \partial_i v^i = Q \lb{dot_d} \ee with\footnote{ $\dot p/\dot\rho$, appearing in eq.\rf{Q_def}, can be related to the background equation of state $w$ as $\dot p/\dot\rho=w-\frac{\dot w}{3(1+w)\mathcal H}$. } \be Q \equiv \fr{\dot\rho\delta p-\dot p\delta\rho}{(\rho+p)^2} = -3\mathcal H\left(c^2_{\rm eff}-\fr{\dot p}{\dot\rho}\right)\delta^{(c)}.
\lb{Q_def} \ee Linearization of $T^{i\nu}{}_{;\nu}=0$ yields the evolution of momentum-averaged velocity:\footnote{ Eqs.\rf{dot_v}\,--\,\rf{Fs_def} are simpler in terms of the velocity potential: for $v^i=-\partial_i u$, they read $$ \dot u + \mathcal H u = \Phi + c^2_{\rm eff}\,\delta^{(c)} - \fr23\,\Nb^2\sigma. $$ } \be \dot v^i + \mathcal H v^i = - \partial_i\Phi - \partial_i f_p + F^i_{\sigma} \lb{dot_v} \ee with \be f_p &\equiv& \fr{\delta p-\dot p u}{\rho+p} = c^2_{\rm eff}\,\delta^{(c)}, \lb{fp_def}\\ F^i_\sigma &\equiv& - \fr{\partial_j\Sigma^{ij}}{\rho+p} = \fr23\,\partial_i\Nb^2\sigma. \lb{Fs_def} \ee We remember that $\delta^{(c)}$ is given by eq.\rf{delta^c_Newt}, and the gravitational potentials are determined by the linearized Einstein equations\rf{Poiss_tradit} and\rf{Psi-Phi}. These equations are closed after $c^2_{{\rm eff}}$ and $\sigma$ are specified by the species' internal dynamics. The expansion of perturbations over Fourier modes splits the equations into uncoupled systems for each mode, with its own $c^2_{{\rm eff}}(k)$ and $\sigma(k)$. Similarly to $\sigma(k)$, the stiffness $c^2_{\rm eff}(k)$ is manifestly gauge-invariant but not measurable locally. \subsection{Other parameterizations} \label{sec_sum_param} Any of the above sets $\{\mathcal H$, $\Phi$, $\Psi\}$, $\{\rho$, $\delta^{(c)}$, $\sigma\}$, or $\{w$, $c^2_{\rm eff}$, $\sigma\}$ parameterizes all properties of arbitrary dark sectors that can be constrained by any probe of background and scalar perturbations. It may nevertheless be useful to explore other internal properties of the dark species. Certain properties, e.g.\ particle masses and cross-sections, that are not readily extractable from the above sets may be important for particle-physics models. Other properties may have more distinctive or less degenerate observable signatures. An example appears in Sec.~\ref{sec_speed}, discussing the dark characteristics that control the additive shift of the CMB peaks. 
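To make the structure of the dynamical parameterization of Sec.~\ref{sec_param_dyn} concrete, eqs.\rf{dot_d} and\rf{dot_v} can be integrated directly for a single Fourier mode. The minimal sketch below assumes a radiation-era background ($a\propto\tau$, $\mathcal H=1/\tau$), a perfect fluid with $w=c^2_{\rm eff}=1/3$ and $\sigma=0$, and artificially switched-off metric perturbations ($\Phi=\Psi=0$), in which case the Hubble terms cancel and the mode reduces exactly to a free acoustic oscillation $d\propto\cos[kc_s(\tau-\tau_0)]$. It illustrates only the structure of the equations, not the full gravitationally coupled computation behind the figures.

```python
# Toy integration of eqs. (dot_d) and (dot_v) for one Fourier mode of a single
# perfect fluid (w = c_eff^2 = 1/3, sigma = 0) in the radiation era, with the
# metric perturbations switched off (Phi = Psi = 0) purely for illustration.
# Variables: overdensity d and velocity potential u (v^i = -d_i u).
import numpy as np
from scipy.integrate import solve_ivp

k = 10.0           # comoving wavenumber, arbitrary units
c_eff2 = 1.0/3.0   # stiffness of a relativistic perfect fluid
d_in = 1.0         # superhorizon overdensity (adiabatically, d_in = 3 zeta_in)

def rhs(tau, y):
    d, u = y
    H = 1.0/tau                    # conformal Hubble rate in the radiation era
    delta_c = d + 3.0*H*u          # comoving overdensity, eq. (delta^c_Newt) with Psi = 0
    d_dot = -k**2 * u              # eq. (dot_d); Q = 0 since c_eff^2 = w = const
    u_dot = -H*u + c_eff2*delta_c  # Euler eq. (dot_v) with Phi = 0, sigma = 0
    return [d_dot, u_dot]

tau = np.linspace(1e-4, 3.0, 2000)
sol = solve_ivp(rhs, (tau[0], tau[-1]), [d_in, 0.0],
                t_eval=tau, rtol=1e-9, atol=1e-12)

# With Phi = Psi = 0 the Hubble terms cancel and d obeys d'' = -(k^2/3) d:
analytic = d_in * np.cos(k*(tau - tau[0])/np.sqrt(3.0))
err = float(np.max(np.abs(sol.y[0] - analytic)))
print(err)  # small: a pure acoustic oscillation at c_s = 1/sqrt(3)
```

Restoring the gravitational source terms requires solving the Einstein constraints\rf{Poiss_tradit} and\rf{Psi-Phi} alongside the fluid equations.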
Of course, yet other parameterizations of the internal dark dynamics may also be considered. For example,\ctt{Linder_gamma05} suggested quantifying the growth of cosmic structure approximately by a growth index parameter~$\gamma$. This approach may be useful for utilizing LSS to probe modified gravity or a non-standard long-range interaction. We do not pursue it here, being interested in more universal descriptions that are applicable to the CMB as well. \section{Mapping dark dynamics to observable features} \label{sec_impacts} In this main section we identify the observable signatures of the dark properties. Specifically, we consider the signatures of: anisotropic stress, the dark species' stiffness, the propagation speed of their inhomogeneities, and their clustering. The observational probes considered are the transfer functions for cosmic structure and the power spectra of the CMB. \subsection{Anisotropic stress} \label{sec_anis_stress} Already in the standard cosmological model, significant anisotropic stress is generated in the radiation era by freely streaming neutrinos. The anisotropic stress in our universe may differ from the standard three-neutrino expectation because of, for example, additional free-streaming relativistic species, which would enhance the stress. Or, it may differ because of non-minimal neutrino couplings\ct{Chacko:2003dt,Chacko:2004cz,Beacom:2004yd,Okui:2004xn, Hannestad:2004qu,Grossman:2005ej}, which would locally isotropize neutrino velocities and reduce or eliminate their anisotropic stress. Anisotropic stress~$\sigma$ changes $\Phi$ and $\Psi$\ct{MaBert95} and consequently the CMB gravitational driving term~$D$\rf{D_def} even on superhorizon scales. Thus the $\sigma$ of dark species affects the CMB modes from the earliest stages of horizon entry.
The left panel of Fig.~\ref{fig_CMB_rad} shows the evolution of gravitationally coupled photon and neutrino perturbations in the radiation era for various assumptions about neutrino abundance and properties. The plots are obtained by numerically integrating the corresponding equations from \cite{B06}. The oscillating, red curves give the CMB overdensity~$d_\gamma(\tau,k)$. The falling, brown curves show the driving term $D(\tau,k)$, equal to $-3(\Phi+\Psi)$ in the radiation era. As evident from this figure, the photon overdensity~$d_\gamma$ in the presence of freely streaming neutrinos (wider solid oscillating curve) is suppressed with respect to~$d_\gamma$ of a neutrinoless model (dashed oscillating curve). The suppression of the CMB perturbations by neutrinos can be related to the reduction of $D$ early during horizon entry, with the CMB modes then experiencing a smaller initial boost. The suppression of $D$ and, consequently, of the CMB oscillations in the Newtonian gauge is due to $\sigma_\nu$ being a direct source of gravitational potentials. It is not due to the damping of neutrino perturbations by the viscosity effect of $\sigma_\nu$, eqs.\rf{dot_v} and\rf{Fs_def}. We can verify this by repeating the calculations for the standard model with three neutrinos but with the following modification: We remove the gravitational effect of anisotropic stress in eq.\rf{Psi-Phi}, i.e., set $\Phi=\Psi$. We make no other changes in the integrated equations, i.e., preserve the Poisson equation\rf{Poiss_tradit} and use the standard dynamical equations for all species' perturbations. In particular, we retain the $\sigma_\nu$ viscosity term in the ${\dot v}_{\!\nu}$ equation [c.f.\ eqs.~(\ref{dot_v},\,\ref{Fs_def})]. The results of this calculation, with the ``gravity of $\sigma_\nu$ removed''\footnote{ The performed removal of $\sigma_\nu$ from the gravitational equation\rf{Psi-Phi} but not from the Euler equation\rf{dot_v} violates the covariance of the Einstein equations. 
Therefore, the separation of the effects of anisotropic stress into ``gravitational'' and ``dynamical'' should not be considered beyond the context of the Newtonian gauge. We could preserve the Einstein equations by removing $\sigma_\nu$ from both eqs.\rf{Psi-Phi} and\rf{dot_v}. The result would be the neutrinoless model, described on the left panel of Fig.~\ref{fig_CMB_rad} by the dashed curves; again, showing no suppression of the CMB amplitude. }, are plotted on the left panel of Fig.~\ref{fig_CMB_rad} with the thinner solid curves. Switching off the direct gravitational effect of $\sigma_\nu$ is seen to lift the neutrino suppression of the subhorizon CMB oscillations. On the scales that enter the horizon in the radiation era ($\ell\gg 200$) the suppression of the amplitude of the CMB oscillations, $A_\gamma$, can be calculated analytically in leading order in $R_\nu\equiv\rho_\nu/(\rho_\gamma+\rho_\nu)$\ct{BS04}. For all physical values $0\le R_\nu \le1$, this suppression is well fitted by\ct{HuSugSmall96} \be \frac{A_\gamma}{A_\gamma(R_\nu\to0)} \approx \left(1+\fr{4}{15}\,R_\nu\right)^{-1}, \lb{Agamma_suppr} \ee where $R_\nu$ is varied at unchanged fluctuations of the primordial curvature $\zeta$\ct{zeta_orig}. This variation implies fixed inflationary physics, although then the superhorizon values of both $\Phi$ and $\Psi$ change with $R_\nu$. The oscillations in the CMB power spectra of temperature, polarization, or their cross-correlation for $\ell\gg 200$ are suppressed by the square of\rf{Agamma_suppr}. To summarize, the suppression of the CMB oscillations by freely streaming neutrinos is due to neutrino anisotropic stress. In the Newtonian gauge the suppression may be attributed to the anisotropic stress acting as a source of metric perturbations, impacting CMB photons. The damping of bulk velocity and density perturbations of neutrinos by streaming plays almost no role in the suppression of CMB fluctuations. 
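Plugging in numbers: for the standard neutrino content the fit\rf{Agamma_suppr} implies roughly a $10\%$ suppression of the oscillation amplitude and nearly $20\%$ of the oscillation power. A trivial numerical sketch (the round value $R_\nu\approx0.41$ for three standard neutrino species is assumed here for illustration):

```python
# Evaluation of the fit (Agamma_suppr) for the amplitude suppression of
# small-scale CMB oscillations by free-streaming species with fractional
# density R_nu = rho_nu/(rho_gamma + rho_nu).
def amplitude_ratio(R_nu):
    """A_gamma / A_gamma(R_nu -> 0), eq. (Agamma_suppr)."""
    return 1.0 / (1.0 + (4.0/15.0) * R_nu)

R_nu_std = 0.41                    # assumed round value for three standard neutrinos
amp = amplitude_ratio(R_nu_std)    # ~0.90: ~10% amplitude suppression
power = amp**2                     # the C_l oscillations are suppressed by the square, ~0.81
print(amp, power)
```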
~ ~ \subsection{Stiffness $c^2_{\rm eff}$} \label{sec_stiffness} The quantity $c^2_{\rm eff}(z,k)\equiv \delta p^{(c)}/\delta\rho^{(c)}$ describes the stiffness of the dark medium to perturbations with a wavevector~$k$. (We avoid calling $c^2_{\rm eff}$ ``the sound speed,'' not to mix it with $c_p$ of Sec.~\ref{sec_speed}.) The stiffness $c^2_{\rm eff}$ affects directly the evolution of the dark species' overdensities and peculiar velocities. The direct impact of $c^2_{\rm eff}$ on the evolution of overdensity, eq.\rf{dot_d}, is described by the term $Q$\rf{Q_def}, proportional to the so called ``non-adiabatic'' pressure $\delta p-(\dot p/\dot\rho)\,\delta\rho$. The velocities are accelerated by pressure gradient, the term $-\partial_i f_p=-\partial_i(c^2_{\rm eff}\delta\rho_a^{(c)})$ in eq.\rf{dot_v}, directly proportional to $c^2_{\rm eff}$. For perfect fluids and $k\gg\mathcal H$, the quantity $c_{\rm eff}(k)$ gives the (phase) velocity of acoustic waves. In general, however, $c^2_{\rm eff}$ need not be related to the speed of perturbation propagation, which will be studied in Sec.~\ref{sec_speed}. The dependence of the dark perturbations' evolution on their stiffness $c^2_{\rm eff}$ is reflected in the perturbations' contribution to metric inhomogeneities. Through the latter, the dark stiffness affects the visible species \cite[e.g.,][]{Erickson:2001bq}. To be specific, we assume that superhorizon perturbations are adiabatic.\footnote{ The following arguments remain valid under a considerably milder condition: It is sufficient that the primordial perturbations in the probed dark species are {\it internally\/} adiabatic\ct{BS04}. In particular, this condition is satisfied automatically for any one-component fluid. } Then the comoving overdensity $\delta^{(c)}$, eq.\rf{delta^c_def}, vanishes in the superhorizon limit. 
Consequently, $\dot d_a$ [eqs.~(\ref{dot_d}-\ref{Q_def})] and $\dot v_a$ [eqs.~(\ref{dot_v}-\ref{fp_def})] depend on $c^2_{\rm eff}$ only at the order $O(k^2\tau^2)$. The sourced potentials $\Psi$ and $\Phi$ therefore become sensitive to $c^2_{\rm eff}$ only at a late period of the horizon entry, also only at the order $O(k^2\tau^2)$. Note that, as discussed in Sec.~\ref{sec_anis_stress}, anisotropic stress, in contrast, changes the potentials even in the superhorizon limit $k\tau\to 0$. Thus $c^2_{\rm eff}$ affects the observable species at a noticeably {\it later\/} stage of the horizon entry than~$\sigma$ does. The lateness of the impact of $c^2_{\rm eff}$ is clearly seen on the right panel of Fig.~\ref{fig_CMB_rad}, which shows the joint evolution of perturbations in a photon fluid and a classical scalar field~$\phi$ (quintessence). The quintessence background density is set to track radiation ($w_\phi=1/3$) but its stiffness exceeds that of the photon fluid ($c^2_{{\rm eff},\phi}=1>c^2_{{\rm eff},\gamma}=1/3$, e.g.,\ctt{Hu_GDM98}). Either the perturbations of the quintessence (right panel) or of freely streaming neutrinos (left panel and previous Sec.~\ref{sec_anis_stress}) decrease the CMB-driving sum $\Phi+\Psi$. With neutrinos, $\Phi+\Psi$ was decreased already on superhorizon scales by neutrino anisotropic stress ($\sigma_\nu\not=0=\sigma_{\gamma b}$). On the other hand, in agreement with the previous arguments, $\Phi+\Psi$ is seen to be affected by the stiff inhomogeneities of quintessence, whose anisotropic stress is zero, at a later time. There are several observable consequences of the lateness of the influence of dark stiffness on potentials. First, while the early reduction of $\Phi+\Psi$ by $\sigma_\nu$ suppressed the amplitude of the acoustic oscillations in~$d_\gamma$, the later reduction of $\Phi+\Psi$ by large $c^2_{{\rm eff},\phi}$ slightly {\it enhances\/} this amplitude. 
The right panel of Fig.~\ref{fig_CMB_rad} explains this paradox: The reduction of $\Phi+\Psi$ by quintessence perturbations increases the negative driving of photon overdensity when $\dot d_\gamma$ is on average negative. As a result, the acoustic oscillations gain energy and their amplitude increases. The lateness of the stiffness impact has another important consequence. Namely, the ratio of the matter response to the CMB response is considerably larger when probing~$c^2_{\rm eff}$ than when probing~$\sigma$. This is evident from comparing the left panel of Fig.~\ref{fig_CDM}, showing CDM overdensity (black, rising curves), and the earlier Fig.~\ref{fig_CMB_rad}, for the CMB overdensity, all in the radiation era. In Fig.~\ref{fig_CDM}, CDM overdensity $d_c$ is plotted for: the photon-only model, where $\sigma=0$ (dashed); the standard model with $59\%$ of energy density in coupled photons and $41\%$ in streaming neutrinos (wide solid); and a fictitious model with the same $59\%$ of energy density in photons but the remaining $41\%$ carried by quintessence (thin solid). We see that the enhanced stiffness of the dark component of the third model impacts the matter an order of magnitude more strongly than the anisotropic stress of the second model. On the other hand, the CMB perturbations, studied on the left and right panels of Fig.~\ref{fig_CMB_rad}, are affected by neutrinos and quintessence of the second and third models comparably. These results have a simple explanation. The early impact of~$\sigma$ on matter velocities is washed out over time by the Hubble friction [$\mathcal H v^i_c$ term in eq.\rf{dot_c}]. The Hubble friction, acting on massive CDM and baryons, damps the late, $c^2_{\rm eff}$-dependent impact of dark perturbations on matter much less. The overdensity of relativistic photons, on the other hand, is unchanged by Hubble friction [see eq.\rf{dot_gk}, where $R_b$ is negligible].
Hence the present CMB anisotropy responds comparably to either of the impacts. The relatively high ratio of the matter to the CMB responses to the dark stiffness should help distinguish an excess of relic relativistic particles in the radiation era from a subdominant tracking quintessence. Both the decoupled relativistic relics and quintessence have a similar nondegenerate phase-shift signature in the CMB power spectra \citep[][more in Sec.~\ref{sec_speed} below]{BS04}. With the enhanced angular resolution of upcoming CMB experiments, this signature will soon considerably tighten the constraints on neutrinos or an early quintessence. Yet it does not discriminate between the two scenarios if a nonstandard signal is observed. The discrimination can, however, be achieved by combining the CMB data with accurate measurements of matter power, together sensitive to the strong effect of quintessence on the ratio of the CMB and matter power. The discrimination should be facilitated by the fact that tracking quintessence and an excess of relativistic species change this ratio in opposite directions. \subsection{The speed of sound or streaming} \lb{sec_speed} Dark inhomogeneities were seen to affect the amplitude of the oscillations in the CMB temperature and polarization power spectra. In addition to the heights of the acoustic peaks, the CMB spectra are characterized by the peaks' positions~$\ell_n$. The period,~$\ell_A$, of the acoustic oscillations is well known to be controlled entirely by background geometry and the properties of the photon-baryon plasma. (Namely, $\ell_A=\pi r_{\rm dec}/S_{\rm dec}$, where $r_{\rm dec}$ is the distance to the surface of CMB decoupling and $S_{\rm dec}=\int_0^{\tau_{\rm dec}}c_{\!\gamma b}\,d\tau$ is the corresponding size of the acoustic horizon.)
On the other hand, the overall additive shift of the peaks' sequence, i.e., the parameter $\Delta \ell$ in an approximate relation $\ell_n\sim n\ell_A+\Delta \ell$, is a robust characteristic of dark perturbations.\footnote{ The positions of individual peaks receive additional corrections from the changes of the peaks' shape, such as due to the Silk damping and (for temperature) falling Doppler contribution. These corrections are fixed by the background properties and should not spoil the constraints on the overall shift~$\Delta \ell$. } The shift of the peaks in $\ell$ is determined by the shift $\Delta\varphi$ of the temporal phase of the subhorizon modes, $d_{\gamma}= A_{\gamma} \cos(kc_{\gamma b}\tau-\Delta\varphi)$, as $\Delta \ell=\ell_A\,\Delta\varphi/\pi$. This shift of phase is tightly connected to the locality of the inhomogeneous dynamics\ct{BS04,B05Trieste}. This becomes manifest in the real-space description of perturbation evolution of \citet{BB_PRL,BB02}, using plane-parallel Green's functions. The Green's function of $d_{\gamma}$ in the radiation era, when the modes forming the acoustic peaks enter the horizon, was found in \citet{BS04}. Its Fourier transform yields \be \Delta\varphi \simeq -\pi\sqrt3\ (\Phi+\Psi)_{|x|=c_s\tau} \ee (for $|\Delta\varphi|\ll 1$). Here, the initial conditions are assumed adiabatic, and $c_s\equiv c_{\gamma b}=1/\sqrt3$ is the speed of sound in the radiation era. The term $(\Phi+\Psi)_{|x|=c_s\tau}$ is the value of $\Phi+\Psi$ at the acoustic horizon of a perturbation that is initially localized in space to a thin plane: $d(\tau\to0,x)=\delta_D(x)$, where $\delta_D(x)$ is the Dirac delta function. For adiabatic initial conditions, $(\Phi+\Psi)_{|x|=c_s\tau}$ is fully determined by the perturbations that propagate beyond the acoustic horizon. In particular, $(\Phi+\Psi)_{|x|=c_s\tau}=0$ if none of the dark species support perturbations that propagate faster than the acoustic speed~$c_s$.
[For a proof, accounting for subtleties of inhomogeneous gravitational dynamics on large scales, see Appendix~B of \citet{BS04}]. \begin{figure*}[t] \centerline{\sc\footnotesize \qquad CDM modes} \centerline{\includegraphics[width=12cm]{fig_CDM.eps}} \caption{ Growth of CDM overdensity~$d_c(\tau,k)$ (rising black) during radiation domination (left) and matter domination (right). Brown curves show the value of $\tau\Phi$, responsible for the ultimate growth of $\delta\rho_c/\rho_c$ by a future time of its observation, eq.\rf{growth_grf}. ---\,Left: coupled photons only (dashed), photons plus 3 standard neutrinos (wide solid), and photons plus a tracking ($w_\phi=1/3$) scalar field~$\phi$ replacing the neutrino density of the previous model (thin solid). ---\,Right: pressureless matter only (solid), and a model with equal densities of matter and a scalar field that tracks it, $w_\phi=0$ (dashed). } \label{fig_CDM} \end{figure*} Two commonly considered types of cosmological species do support perturbations that propagate faster than~$c_s$. These are decoupled relativistic neutrinos and quintessence, in both of which perturbations propagate at the speed of light. The increase of the energy density in either of these species by one effective fermionic degree of freedom displaces the peaks of the CMB temperature and E-polarization spectra by $\Delta l_{+1\nu}\simeq -3.4$ for neutrinos\ct{BS04} and $\Delta l_{+1\varphi}\simeq -11$ for tracking quintessence\ct{B06}. This nondegenerate effect will lead to tight constraints on the abundance\ct{Lopez:1998aq,Bowen02,BS04,Perotto:2006rj} and interaction\ct{Chacko:2003dt} of the relic neutrinos from upcoming CMB experiments. These constraints notably improve with higher angular resolution in temperature and polarization as new data at higher $\ell$'s allow better identification of the oscillations' phase. [The polarization channel, where the acoustic peaks are more pronounced, is especially useful\ct{BS04}.] 
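The conversion between the peak displacement and the temporal phase shift, $\Delta \ell=\ell_A\,\Delta\varphi/\pi$, is simple arithmetic. In the sketch below the fiducial acoustic scale $\ell_A\approx300$ is an assumption of the example (not a value quoted above); with it, the quoted shifts correspond to phase shifts of only a few hundredths of a radian:

```python
# Converting the quoted peak displacements into temporal phase shifts via
# Delta_l = l_A * Delta_phi / pi. The acoustic scale l_A ~ 300 is an assumed
# fiducial value for this illustration, not a number quoted in the text.
import math

l_A = 300.0                        # assumed fiducial acoustic scale
dl_nu, dl_phi = -3.4, -11.0        # quoted shifts per extra fermionic dof

dphi_nu = math.pi * dl_nu / l_A    # ~ -0.036 rad for decoupled neutrinos
dphi_phi = math.pi * dl_phi / l_A  # ~ -0.12 rad for tracking quintessence
print(dphi_nu, dphi_phi)
```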
Thus a crucial characteristic that affects the acoustic phase is the velocity $c_p$ of the wavefront of a localized perturbation of dark species, shifting the phase if and only if $c_p>c_s\approx 1/\sqrt3$. The velocity $c_p$ of a perfect fluid or a classical field is determined by the stiffness~$c^2_{\rm eff}$. However, in general $c_p$ and $c_{\rm eff}$ are independent. As a notable example, free-streaming relativistic particles have $c_{\rm eff}(k)\equiv 1/\sqrt3$ but $c_p=1$. Unlike $c_{\rm eff}(k)$, $c_p$ is defined irrespective of gauge and is locally measurable. Finally, as discussed in Sec.~\ref{sec_stiffness} and the present section, the quantities $c_{\rm eff}$ and $c_p$ map to different observable signatures. Thus for gaining robust knowledge of the nature and kinetics of dark radiation, including neutrinos and possible early quintessence, it is important to constrain observationally both $c_{\rm eff}(k)$ and $c_p$. \subsection{$\Phi$ and $\Psi$; clustering in the dark sectors} \lb{sec_clustering} In the three previous subsections we analyzed the gravity-mediated signatures of the dark stresses. Now we shift our attention to the observational manifestations of the potentials $\Phi$ and $\Psi$ themselves, irrespective of their cause or even of the validity of the Einstein equations. As described quantitatively in this subsection, the impact of the gravitational potentials on large-scale structure (LSS) and the CMB differs in several essential ways. Consequently, the corresponding constraints will be rather complementary. Before considering the observable impact of the metric potentials, let us summarize the scenarios in which the potentials are expected to differ from the predictions of the standard cosmological model.
As compared to a typical model with CDM and quintessence dark energy (including $\Lambda$CDM model as a limiting case), different $\Phi$ and $\Psi$ in a given background $w(z)$ are expected: \begin{enumerate} \item[(a)] During horizon entry for practically any other dark energy model. \item[(b)] On any scales for models with modified gravity. \item[(c)] On subhorizon scales for models with non-canonical kinetic term for the field\ct{ArmendarizMukhanov_kEss00}, models with warm dark matter\ct{Blumenthal:1982mv,Olive:1981ak} or with interacting dark matter\ct{Carlson_et_al_92,deLaix:1995vi,Spergel:1999mh}. More mundane physics affecting the subhorizon potentials includes thermal, radiative, or magnetic pressure on baryons, and various astrophysical feedback mechanisms (supernovae, central black holes, etc.) \end{enumerate} \subsubsection{Matter response} Cosmic structure, whose growth in the matter era is driven by the potential~$\Phi$, is very sensitive to modifications of~$\Phi$ by new physics at low redshifts. As noted in Sec.~\ref{sec_stiffness}, the Hubble friction diminishes its sensitivity toward higher redshifts. The matter power spectrum and other characteristics of cosmic structure are therefore the natural probes of the scenarios in the above classes (b) and~(c). An example is shown on the right panel of Fig.~\ref{fig_CDM}. For both scenarios displayed on this panel the background is Einstein--de Sitter. The solid curves show the growing matter overdensity and gravitational potential in a pure CDM phase. [The plotted potential is weighted by $\tau$ according to eq.\rf{growth_grf}.] The dashed curves display the same quantities for a toy scenario in which CDM constitutes only $50\%$ of the energy density, while the other $50\%$ is in a classical scalar field with $w_\phi=w_{\rm CDM}\equiv0$.
Since for the field $\delta p_\phi^{(c)}/\delta\rho_\phi^{(c)} = c^2_{\rm eff}=1$, its clustering is suppressed and its perturbations do not contribute to the subhorizon potential. Correspondingly, the structure growth in the second scenario is seen to be significantly suppressed. Note that both scenarios have identical background expansion and the standard laws of gravity. Quantitatively, the impact of the potential $\Phi$ over time $(\tau,\tau+\Delta\tau)$ on matter overdensity $d_c$ observed at a later time $\nbrk{\tau'\gg\tau}$ scales roughly as $(\tau\Delta\tau)\Phi$. The prefactor~$\tau\Delta\tau$ quantifies the stronger suppression of an early impact by the higher Hubble friction, experienced by matter particles during the faster cosmological expansion. The derivation follows straightforwardly from eq.\rf{d_c_sol}: A nonzero potential $\Phi(\tau)$ over the time interval $(\tau,\tau+\Delta\tau)$ contributes to the matter overdensity at a later time $\tau'$ as\footnote{ The last estimate in eq.\rf{growth_grf} assumes that $w\le1/3$, so that $a(\tau)$ grows as $\tau$ or faster. The zero of conformal time $\tau$ is chosen, as usual, as $\tau\to0$ when $a\to0$. Logarithmic corrections that appear for $w=1/3$ (radiation domination) are ignored. } \be \Delta\!\left( \fr{\delta\rho_c}{\rho_c} \right)_{\!\!\tau'} = \Delta\, d_c(\tau') &=& - k^2\,a(\tau)\Phi(\tau)\,\Delta\tau \int_{\tau}^{\tau'} \frac{d\tau''}{a(\tau'')}\nonumber\\ &\sim& - k^2\Phi(\tau)\,\tau\,\Delta\tau. \quad \lb{growth_grf} \ee In the models with standard gravity, CDM, and {\it insignificant clustering of dark energy\/}, the dark energy perturbations can leave their mark on the potential only during horizon entry. By eq.\rf{growth_grf}, the corresponding early potential contributes to structure growth much less than the potential generated long after the entry by the clustered CDM and baryons.
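The late-time estimate in eq.\rf{growth_grf} can be verified symbolically for a power-law background $a(\tau)=\tau^n$. The sketch below (matter era, $n=2$; an illustrative check with {\tt sympy} only) confirms that the contribution of the interval $(\tau,\tau+\Delta\tau)$ tends to $-k^2\Phi\,\tau\,\Delta\tau$ as $\tau'\to\infty$:

```python
# Symbolic check of the late-time estimate in eq. (growth_grf) for a
# power-law background a(tau) = tau^n (n = 2, matter domination):
#   -k^2 a(tau) Phi dtau * Integral_tau^{tau'} dtau''/a(tau'')  ->  -k^2 Phi tau dtau
# as tau' -> infinity.
import sympy as sp

tau, taup, tpp = sp.symbols("tau tau_prime tau_pp", positive=True)
k, Phi, dtau = sp.symbols("k Phi dtau", positive=True)
n = 2  # matter era; n = 1 (radiation) gives the logarithmic correction noted in the footnote

a = tpp**n
integral = sp.integrate(1/a, (tpp, tau, taup))      # = 1/tau - 1/tau'
response = -k**2 * tau**n * Phi * dtau * integral   # contribution to d_c(tau')
late = sp.limit(response, taup, sp.oo)
print(sp.simplify(late))  # -> -k^2 Phi tau dtau (up to term ordering)
```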
Thus in the absence of the non-standard physics of the types (b)\ or~(c) above, it is justified to consider an approximate consistency relation between the background equation of state~$w(z)$ and the growth of cosmic structure\ct{KnoxSongTyson05,Ishak:2005zs,Chiba:2007rb}. As we see next, the situation is essentially {\it opposite\/} when the dark sectors are probed by the primary anisotropies of the CMB. \subsubsection{CMB response} Metric perturbations have a dramatic impact on CMB temperature anisotropies. In sharp contrast to large-scale structure, primary CMB anisotropies (generated by linear evolution) depend most strongly on the values and evolution of gravitational potentials {\it during horizon entry\/}. They are only mildly sensitive to the potentials on subhorizon scales. (This need not apply to secondary anisotropies.) The scalar perturbations of the CMB respond primarily to the sum of the Newtonian-gauge potentials $\Phi+\Psi$ on all scales and all times (Sec.~\ref{sec_CMB}). While the inertial drag of the CMB by baryons, affected by $\Phi$ alone, is important for the CMB sensitivity to the baryon density and is noticeable around decoupling\ct{HuSugSmall96}, it never dominates the evolution of CMB perturbations. For developing a general understanding of the CMB sensitivity to dark dynamics we will often ignore it. We stressed in Sec.~\ref{sec_uniq} that dark inhomogeneities at a (conformal) time~$\tau$ and the metric perturbations generated by them affect only the modes with $k\gtrsim \mathcal H\sim 1/\tau$. The corresponding dynamics of CMB perturbations differs qualitatively in a tightly coupled regime, prior to hydrogen recombination at $z_{\rm rec}\sim 1100$, and in a streaming regime, after recombination. We will consider the CMB response to the metric perturbation $\Phi+\Psi$ at these two epochs in turn. In the subsequent Sec.~\ref{sec_cl_resp_quant} we will quantify this response by simple analytical formulas.
We will see that despite numerous differences in the evolution of CMB modes before and after recombination, the CMB response to $\Phi+\Psi$ at the respective epochs is very similar. \begin{figure}[t] \centerline{\includegraphics[width=7cm]{cmb_view1.eps}} \caption{An accurate mechanical analogy for general-relativistic evolution\rf{dot_gk} of an acoustic CMB mode. The CMB overdensity~$d_{\gamma}$ equals the denoted distance to the tip of a pendulum with an internal frequency $\omega=kc_s$ and its suspension point driven as specified by the gravitational driving term~$D$, eq.\rf{D_def}. For adiabatic initial conditions the evolution starts with $d_{\rm in}=3\zeta_{\rm in}$, where $\zeta_{\rm in}$ is the superhorizon value of the Bardeen curvature. In the radiation era, initially, $D_{\rm in}=\frac43(1+\frac15R_\nu)/(1+\frac4{15}R_\nu)d_{\rm in}$ and in the matter era $D_{\rm in}=\frac65d_{\rm in}$\ct{BS04}, c.f.\ Figs.~\ref{fig_CMB_rad} and~\ref{fig_CMB_mat}. The advantages of this approach over considering $\delta T^{(\rm Newt)}\!/T$ or $\Theta_{\rm eff}=\delta T^{(\rm Newt)}\!/T+\Phi=\fr13d_{\gamma}+\Phi+\Psi$ as independent dynamical variables include: (A)~Independence of the gravitational force that drives $\ddot d_{\gamma}$ from $\ddot d_{\gamma}$ itself. (B)~Epoch- and scale-independence of the relation between the superhorizon perturbation~$d_{\rm in}$ and the inflation-generated conserved curvature~$\zeta_{\rm in}$ ($d_{\rm in}=3\zeta_{\rm in}$). (C)~Direct cause-effect connection between local physical dynamics and the apparent changes of the variable that describes CMB perturbations. } \label{fig_CMB_analogy} \end{figure} ~ \subsubsubsection{Coupled regime ($z\gtrsim z_{\rm rec} \sim 1100$)} Any CMB mode prior to recombination evolves according to eq.\rf{dot_gk} of a driven damped harmonic oscillator. Fig.~\ref{fig_CMB_analogy} presents an equivalent mechanical system---a pendulum whose evolution is described by the same equation. 
The pendulum's internal frequency is $kc_s$ and its pivot is moved by a distance \be D=-3(\Phi+\Psi+R_b\Phi). \ee The CMB overdensity $d_{\gamma}(\tau)$ then numerically equals the distance to the pendulum's tip, as marked on the figure. To account for the Hubble and Silk damping, the pendulum may be imagined submerged in a viscous fluid with appropriate time-dependent viscosity. All the influence of the metric perturbations on $d_{\gamma}$ is transferred through the (generally, time-dependent) position of the pendulum's pivot~$D(\tau)$. \begin{figure*}[t] \centerline{\sc\footnotesize CMB, Matter era \qquad} \centerline{\includegraphics[width=13cm]{fig_CMB_mat.eps}} \caption{ Gravitational suppression of CMB temperature anisotropy by growing cosmic structure in the matter era (5-fold for $\Delta T/T$, 25-fold for power $C_l$). ---\,Left top: Evolution of~$d_{\gamma}$ and the driving term~$D$ during matter domination with $R_b$ set negligible (solid). Evolution of~$d_{\gamma}$ from the same primordial perturbation if the metric becomes homogeneous before the horizon entry (dashed). ---\,Right: Suppression of the CMB temperature power~$C_l$ for $\ell\lesssim 100$ by CDM inhomogeneities. The solid curve shows~$C_l$ in the concordance $\Lambda$CDM model with adiabatic initial conditions. The dashed curve describes the same model with changed initial CDM perturbations: $d_{\rm CDM}$ is artificially set to zero on superhorizon scales (the superhorizon $d_{\gamma}$, $d\snu$, and $d_b$ are unchanged). Then CDM inhomogeneities and the associated potential in the matter era are reduced. The smoother metric suppresses the CMB power at $\ell\lesssim 100$ less, as the $C_l$ plots (obtained with {\sc CMBFAST}) clearly show. 
---\,Left bottom: The plots of $\Phi$ and $\Psi$ transfer functions confirm that the last model has smaller $\dot\Phi+\dot\Psi$, hence, the enhancement of its power at low $\ell$ cannot be attributed to the ISW effect.} \label{fig_CMB_mat} \end{figure*} This mechanical analogy of the acoustic CMB evolution makes intuitive the following important facts: Foremost, the CMB overdensity is influenced strongly by the {\it values\/} of the potentials at horizon entry. Also, as already observed in Sec.~\ref{sec_stiffness}, the sensitivity and even the sign of the CMB response to the subsequent changes of the potentials depend strongly on the {\it time\/} and {\it duration\/} of the change. When at horizon entry the driving term~$D$ is close to the superhorizon value of $d_{\gamma}$ (as it usually is for adiabatic perturbations) and when $D$ {\it does not decay quickly\/} during the entry, then the CMB temperature anisotropy is suppressed dramatically. For a specific example, roughly reflecting the evolution between radiation-matter equality and decoupling ($3\times 10^3 > z > 1\times 10^3$), let us ignore the baryon-photon ratio $R_b$ by taking $D\simeq -3(\Phi+\Psi)$. Let us evaluate the potentials in a CDM-dominated limit, in which $\Phi(\tau)=\Psi(\tau)={\rm const}=-\frac15d_{\rm in}$, where for adiabatic perturbations $d_{\rm in}\equiv d_a(\tau{\to}0)=3\zeta(\tau{\to}0)$ for all species $a$.\footnote{\lb{note_Psi15d} The CDM-domination result $\Phi=\Psi=-\frac15d_{\rm in}$ follows straightforwardly from eqs.~(\ref{d_c_sol},\,\ref{Poiss_my}-\ref{Poiss_my_notat}) and from $\Phi=\Psi$ by eq.\rf{Psi-Phi}. It is also easy to derive from the known relation between potentials and the traditional proper overdensity by remembering that for pressureless matter $d=\delta\rho/\rho-3\Psi$. } In these approximations, \be D(\tau)=\frac65\,d_{\rm in}, \ee being time-independent during and after the mode entry.
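As a cross-check of this limit, the undamped driven-oscillator equation of the pendulum analogy, $\ddot d_{\gamma}=-(kc_s)^2\,(d_{\gamma}-D)$, can be integrated numerically. A minimal sketch, assuming units with $d_{\rm in}=1$ and $kc_s=1$, adiabatic initial conditions $d_{\gamma}(0)=d_{\rm in}$, $\dot d_{\gamma}(0)=0$, and the variable $\Theta_{\rm eff}=\frac13 d_{\gamma}+\Phi+\Psi$ of Fig.~\ref{fig_CMB_analogy}:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Undamped driven oscillator of the pendulum analogy: d'' = -(k c_s)^2 (d - D).
# Units: d_in = 1 and k*c_s = 1; D and Phi+Psi are held constant through entry.
d_in = 1.0

def theta_eff_amplitude(D, phi_plus_psi):
    sol = solve_ivp(lambda t, y: [y[1], -(y[0] - D)], (0.0, 60.0),
                    [d_in, 0.0], rtol=1e-10, atol=1e-12, dense_output=True)
    t = np.linspace(20.0, 60.0, 8001)            # several periods after "entry"
    theta = sol.sol(t)[0] / 3.0 + phi_plus_psi   # Theta_eff = d/3 + Phi + Psi
    return 0.5 * (theta.max() - theta.min())

# CDM-dominated entry: Phi = Psi = -d_in/5, hence D = (6/5) d_in
a_grav = theta_eff_amplitude(1.2 * d_in, -0.4 * d_in)
# Fictitious entry through an unperturbed metric: D = Phi + Psi = 0
a_free = theta_eff_amplitude(0.0, 0.0)
print(a_free / a_grav)    # ratio of oscillation amplitudes, ~5
```

In this constant-$D$ limit the analytic solution is $d_{\gamma}=D+(d_{\rm in}-D)\cos(kc_s\tau)$, so the amplitude ratio is $(\frac13 d_{\rm in})/(\frac1{15}d_{\rm in})=5$, which the integration reproduces.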
Then the amplitude of acoustic CMB oscillations is {\it suppressed 5-fold\/}\ct{B06}, which is evident from Fig.~\ref{fig_CMB_analogy} or the left top panel of Fig.~\ref{fig_CMB_mat}. After the entry, a slow variation of~$D$ over a characteristic time $\delta\tau$ that exceeds the period of $d_{\gamma}$ oscillations ($\delta\tau\, k \gg1$) has a minor impact on the oscillation amplitude and phase. In particular, this impact vanishes in the adiabatic limit, $\delta\tau\, k \to \infty$. A typical temporal scale of linear variations of $\Phi$ and $\Psi$, and so of $D$, is $\delta\tau\sim\mathcal H^{-1}$. This applies even to subhorizon scales, where only the perturbations with negligible pressure, hence no rapid internal oscillations, contribute to the potentials. Thus deeply {\it subhorizon\/} CMB modes (for which $\delta\tau\, k\sim k/\mathcal H \gg1$) are {\it insensitive\/} to such variations. Conversely, the modes that are closer to horizon entry are affected more strongly. \subsubsubsection{Streaming regime ($z\lesssim z_{\rm rec} \sim 1100$)} Analogous results exist for the sensitivity of the CMB to gravitational potentials after recombination. When the universe is matter-dominated and baryons have decoupled from the CMB ($10^3 > z \gg 1$), $\Phi$ and $\Psi$ are equal and time-independent on all scales that are sufficiently large to evolve linearly and to be unaffected by the residual baryonic pressure. For time-independent $\Phi+\Psi$ and negligible collision term $C_T$ in eq.\rf{dot_dig_C}, an ``effective'' CMB intensity perturbation \be \iota_{\rm eff}\equiv \iota + 4(\Phi+\Psi)\qquad \lb{di_eff_def} \ee is constant along the line of sight \citep{KS86,HuSug_ISW94,HuSug_toward94}. Under a general evolution of the potentials, eq.\rf{dot_dig_C} gives \be \dot\iota_{\rm eff} + n_i\nabla_{\!i}\iota_{\rm eff} = 4(\dot\Phi+\dot\Psi) + C_T.
\lb{dot_dieff_C} \ee Prior to horizon entry, the intensity perturbation~$\iota$ is frozen and equals $\iota_{\rm in}=\fr43d_{\gamma,\,\rm in}$ \cite[see][]{B06}. As discussed earlier, for the modes that enter during the domination of pressureless matter, $\Phi(\tau)=\Psi(\tau)=-\frac15d_{\rm in}$. Then, for adiabatic perturbations for which $d_{\gamma,\,\rm in}=d_{\rm in}$, eq.\rf{di_eff_def} gives \be \iota_{\rm eff}=-\fr15\iota_{\rm in}. \ee Similarly to the CMB evolution in the coupled regime, for most of these modes the late ISW effect due to the {\it slow\/} decay of the potentials at $z\lesssim 1$ does not noticeably change $\iota_{\rm eff}$.\footnote{ The solution of the transport equation\rf{dot_dieff_C} for a perturbation mode $\iota_{\rm eff}=A(\tau)\,e^{i{\bf k}\cdot({\bf x}-{\bf n}\tau)}$ in a potential $\Phi+\Psi=F(\tau)\,e^{i{\bf k}\cdot{\bf x}}$ is \be A(\tau)=A_{\rm in}+4\int_0^{\tau}\dot F(\tau')\,e^{i{\bf k}\cdot{\bf n}\,\tau'}d\tau'. \ee Due to the oscillating factor $e^{i{\bf k}\cdot{\bf n}\,\tau'}$, any slow variation of the subhorizon potential over a temporal scale $\delta \tau$ does not affect any modes except for those few with $|{\bf k}\cdot{\bf n}|\,\delta\tau \lesssim 1$, i.e., those whose wavevector is nearly orthogonal to the direction of light propagation. } Thus, first, the presently observed temperature anisotropy in the considered modes is suppressed by the potentials of the clustering matter 5-fold\footnote{ This suppression, by a factor of 5, is larger than an apparent factor of 2 suppression that is usually inferred, erroneously, from the traditional description of the Sachs-Wolfe effect in terms of the proper Newtonian-gauge perturbations. (Indeed, for the latter $\left.\Delta T/T\right|_{\rm observed}\simeq -\frac12 (\delta T^{\rm (Newt)}/T)_{\rm decoupling}$, assuming matter domination at decoupling.
Yet this result is specific to the Newtonian gauge condition for the decoupling surface, on which the considered perturbations are superhorizon. It should not be given its traditional physical interpretation directly.) See\ctt{B06} for additional discussions. }---suppressed with respect to a fictitious scenario in which, in the matter era, $\Phi=\Psi=0$ and, consequently, $\iota_{\rm eff}=\iota_{\rm in}$. Second, after decoupling as well, the primary CMB anisotropies have little sensitivity to the potentials' evolution on subhorizon scales. The 5-fold suppression is a physical effect, independent of the choice of variables or gauge. It is absent in models where the matter-era potentials decay early during horizon entry or where the laws of gravity are modified so that matter inhomogeneities do not perturb the metric. Note that if all modes that contribute to the CMB temperature autocorrelation $C_\ell$ at some $\ell$ are suppressed 5-fold then the $C_\ell$ is suppressed by a factor of $5^2=25$. The right panel of Fig.~\ref{fig_CMB_mat} presents convincing evidence for the reality of the order-of-magnitude suppression of the CMB temperature power spectrum at low $\ell$'s by dark matter clustering. The plot shows the CMB temperature power spectra~$C_l$ obtained with a modified {\sc cmbfast} code\ct{CMBFAST96} for two models, both with matter content identical to that of the concordance $\Lambda$CDM model\ct{WMAP3Spergel,SelSlosMcD06,TegmLRG06}. The models differ only by the initial conditions for CDM perturbations. In one of the models (solid curve) all species are initially perturbed adiabatically. In the other model (dashed curve) the CDM density perturbation $d_c$ is artificially set to zero on superhorizon scales while the initial values for $d_{\gamma}$, $d\snu$, and $d_b$ are unchanged. As a consequence, in the second model the metric inhomogeneities in the matter era are reduced.
While this model has a smaller ISW effect (see the left bottom panel of Fig.~\ref{fig_CMB_mat}), due to the smoother metric, its CMB power for $\ell\lesssim 100$ is considerably larger. The suppression of the CMB anisotropy on large scales, which enter during matter domination and active growth of structure, should not be partly traded for the ``resonant self-gravitational driving'' of small-scale modes that enter in the radiation era. The acoustic modes entering during radiation domination are often said to be resonantly driven by a specially timed decay of their self-generated potential. In reality, the suppression of large relative to small scales is not very sensitive to the timing of the potential decay in the radiation era\ct{B06}. Moreover, the same (when accounting for neutrinos, even somewhat larger) suppression of large relative to small scales would be observed if in the radiation era the metric were unperturbed, hence, the small-scale modes objectively could not be driven gravitationally\ct{B06}. \subsubsection{Quantifying the response of the CMB power spectrum} \lb{sec_cl_resp_quant} The CMB sensitivity to gravitational potentials at various epochs can be quantified as follows. The temperature anisotropy of CMB radiation observed in a direction ${\bf n}$ is given by the following integral along the line of sight \citep{CMBFAST96}: \be \fr{\Delta T({\bf n})}{T}= \int_0^{\tau_0}d\tau\ S(\tau,{\bf n} r(\tau)).
\lb{los_int} \ee Here, $\tau_0$ is the present time, $r(\tau)=\tau_0-\tau$ (in flat models) is the radial distance along the line of sight, and the source equals \be S=\dot g \left(\fr13d_{\gamma}+\Phi+\Psi-v_b^i n_i+Q^{ij}n_in_j\right)+ g\left(\dot\Phi+\dot\Psi\right),~~~ \lb{los_source} \ee with $v_b^i$ being the baryon velocity, $Q^{ij}$ being determined by the radiation quadrupole and polarization, and a visibility function $g(\tau)=\exp\,(-\int^{\tau_0}_{\tau}d\tau/\tau_T)$ giving the probability for CMB photons to reach us unscattered. Regardless of the random realization of the primordial curvature perturbation~$\zeta_{\rm in}$, the physics behind the subsequently developed perturbations of potentials, densities, etc.\ in linear theory can be described by transfer functions \be T_\Phi(k,\tau)\equiv \Phi(k,\tau)/\zeta_{\rm in}(k), \ee with analogous definitions for other scalar perturbations: $\Psi$, $d_{\gamma}$, velocity potential $u_b$ (s.t.\ $v_b^i=-\partial_i u_b$), etc. The CMB constraints on the transfer functions are derived from observational estimates of CMB angular power spectra~$C_l$. In particular, the temperature power spectrum equals\ct{CMBFAST96} \be C_\ell=4\pi \int_0^{\infty}{dk\over k}\, \Delta^2_\zeta(k)\, \left|\int_0^{\tau_0}\,d\tau\,T_S(\tau,k)\,j_l(kr(\tau))\right|^2,~~~ \lb{C_l} \ee where $\Delta^2_\zeta(k)\equiv k^{3}P_\zeta(k)/(2\pi^2)$ is the dimensionless primordial power of $\zeta_{\rm in}(k)$, and $j_l$ is the spherical Bessel function. In eq.\rf{C_l}, the full transfer function $T_S$ of the source $S$, eq.\rf{los_source}, is a differential operator, with derivatives corresponding to the direction-dependent terms of~$S$. In the following discussion we will mostly be concerned with its scalar terms, as only they involve the gravitational potentials directly.
It is useful to keep in mind that the dominant contribution to $C_\ell$ at a given $\ell$ comes from the modes with $k\sim \ell/r$. Indeed, $j_l(kr)$ vanishes exponentially at smaller values of~$k$ and as $1/k$ at higher values. \subsubsubsection{Radiation era ($z>z_{\rm eq}\sim 3000$, probed by $\ell>200$)} By the arguments of Sec.~\ref{sec_uniq}, only the modes that enter the horizon before radiation-matter equality provide information about the metric and dark dynamics in the radiation era. Then after horizon entry $\delta\rho/\rho\approx \delta\rho_{\gamma}/\rho_{\gamma}$ oscillates with nearly constant amplitude. Consequently, the induced $\Phi+\Psi$ decays as $\tau^{-2}\propto a^{-2}$ [eq.\rf{Pois_subhor}]. Although the potential decay is eventually halted by the growth of structure since matter domination, $\Phi+\Psi$ for these modes remains a subdominant source of CMB anisotropy in eq.\rf{los_source}, exceeded by the intrinsic photon-baryon perturbations.\footnote{ Photon-baryon perturbations also decay on small scales due to Silk damping. Estimates\ct{HuSugSmall96,Weinberg:2002kg} show that the exponential Silk damping overcomes the quadratic decay of the potential only at $\ell\gtrsim 4000$, where secondary anisotropies are expected to dominate the considered primary signal. } Thus the impact of gravitational potentials in the radiation era is confined to horizon entry. It is fully encoded in the amplitude and phase of the subsequent acoustic oscillations. Specific cases of such an impact were considered in Secs.~\ref{sec_anis_stress} and~\ref{sec_stiffness}. \subsubsubsection{From equality to recombination ($z_{\rm rec}<z<z_{\rm eq}$, $100<\ell<200$)} The CMB modes that enter during and after radiation-matter equality ($z_{\rm eq}\sim 3000$) oscillate in a significant gravitational potential of growing matter inhomogeneities. Close to $z_{\rm eq}$, the driving potential $\Phi+\Psi$ still evolves appreciably. 
The potential continues to decay because of the residual radiation density. It decays until recombination ($z_{\rm rec}\approx 1100$) also due to coupling of the baryonic component of matter to the CMB, slowing the structure growth by excluding the baryons from it \citep[e.g.,][]{HuSugSmall96}. Altogether, these effects lead to complicated evolution of the modes that enter during $z_{\rm rec}<z<z_{\rm eq}$. Then both the intrinsic and gravitational terms in the source of $\Delta T/T$\rf{los_source} contribute noticeably. As highlighted by many studies, the term $\dot\Phi+\dot\Psi$ in eq.\rf{los_source} due to the continuing decay of potentials is then also large and boosts the height of the first acoustic peak (early ISW effect). \subsubsubsection{After recombination ($z<z_{\rm rec}\simeq 1100$, probed by $\ell<100$)} The evolution of linear perturbations becomes simple again when $z\ll z_{\rm rec}$. On the scales that enter at this epoch but before dark energy becomes dynamically relevant, we expect that during linear evolution $\Phi(\tau)=\Psi(\tau)=-\fr15d_{\rm in}={\rm const}$ (footnote~\ref{note_Psi15d}). The presently observed perturbation of CMB intensity for almost all these modes is suppressed 5-fold. Our next goals will be, first, to confirm that the contribution of these modes to $C_\ell$ is indeed gravitationally reduced by a factor of $5^2=25$. Second, to find the sensitivity of $C_\ell$ to different values of potentials due to non-standard physics. And third, to quantify the $C_\ell$ response to the evolution of potentials on subhorizon scales. We start by noting that not only do the modes of different~$k$'s contribute to the correlation function $C_\ell$\rf{C_l} incoherently, but coherence is also lost for same-$k$ contributions at widely separated times, $|\tau-\tau'|\,k\gg 1$. To see this explicitly, we rewrite eq.\rf{C_l} as \be C_\ell=4\pi\!\!\int\!{dk\over k}\, \Delta^2_\zeta(k) \int\!\!\! d\tau\, T_S(\tau,k) \int\!\!\!
d\tau'T_S^*(\tau'\!,k)\,\,p_\ell(kr,kr').~~~~~ \lb{C_l_expand} \ee The last factor $p_\ell(kr(\tau),kr(\tau'))$ describes the kernel of projecting the harmonic plane-wave modes on a spherical multipole~$\ell$ \be p_\ell(x,x')\equiv j_\ell(x)j_\ell(x'). \lb{pl_def} \ee This function for $\ell=10$, as an example, is shown in Fig.~\ref{fig_Cj1j2}. \begin{figure}[t] \centerline{\footnotesize $p_\ell(x_1,x_2)\equiv j_\ell(x_1)\,j_\ell(x_2)$} \centerline{\includegraphics[width=6cm]{j1j2_cont.eps}} \caption{The isocontours of the kernel $p_\ell(x_1,x_2)\equiv j_\ell(x_1)\,j_\ell(x_2)$ of projecting perturbation $k$-modes to $C_l$, eqs.~(\ref{C_l_expand},\,\ref{pl_def}). The figure is for $\ell=10$. The labels on the contours show the values of $p_{10}/10^{-3}$. Note that $p_\ell(x_1,x_2)\approx 0$ whenever either $x_1<\ell$ or $x_2<\ell$. Also note that $p_\ell\ge0$ along the line $x_1=x_2$ (diagonal dashed line), but $p_\ell$ oscillates through positive and negative values along any line $x_1=cx_2$ with $c\not=1$ (e.g., lower dashed line). } \label{fig_Cj1j2} \end{figure} In the integrand of eq.\rf{C_l_expand}, except for very low redshifts, $p_\ell(kr,kr')$ varies with $k$ much more rapidly than $T_S(\tau,k)$ and $T_S^*(\tau'\!,k)$.\footnote{ The characteristic scales of variation are $\Delta k \sim \min(r^{-1},r'^{-1})$ for $p_\ell$, versus $\Delta k \sim \tau^{-1}$ and $\tau'^{-1}$ for $T_S$ and $T_S^*$ respectively. } With $T_S$, $T_S^*$, and $\Delta^2_\zeta(k)$ almost unchanged over many $k$ periods of the $p_\ell$ oscillations, we can see from Fig.~\ref{fig_Cj1j2} that positive and negative contributions to $\int\!(dk/k)\,p_\ell(kr,kr')$ mutually cancel whenever \be |\tau-\tau'|= |r-r'|\gg 1/({\rm contributing}\ k). \ee Thus the contributions to $C_l$ from sources at this temporal separation are incoherent. 
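This loss of coherence is straightforward to verify numerically. A sketch for $\ell=10$, writing $x=kr$ and $c=r'/r$ so that the $k$-integral of the kernel becomes $I(c)=\int (dx/x)\,j_\ell(x)\,j_\ell(cx)$:

```python
import numpy as np
from scipy.special import spherical_jn
from scipy.integrate import trapezoid

# I(c) = int (dx/x) j_l(x) j_l(c x): the k-integral of the projection kernel
# p_l for two source times with r'/r = c (x = k r); truncated at x = 3000.
ell = 10
x = np.linspace(1e-6, 3000.0, 600001)
jl = spherical_jn(ell, x)

def I(c):
    return trapezoid(jl * spherical_jn(ell, c * x) / x, x)

I_equal = I(1.0)    # equal times: the standard value 1/(2 l (l+1)) = 1/220
I_far   = I(0.5)    # |r - r'| = r/2 >> 1/k: positive and negative lobes cancel
print(I_equal, I_far)
```

The equal-time integral reproduces $[2\ell(\ell+1)]^{-1}=1/220$, while the separated-time integral is strongly suppressed by the mutual cancellation of the oscillating lobes of $p_\ell$, illustrating the stated incoherence.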
In particular, we can ignore the coherence of, and study independently, the contributions to $C_l$ at the horizon entry ($\tau\lesssim 1/k$) and during subhorizon evolution ($\tau\gg 1/k$). First, we consider a time interval from $\tau=0$ to $\tau_{\rm ent}\sim 1/k$. For a mode that enters at any time after recombination, the source\rf{los_source} forms a total derivative:\footnote{ For such modes, $\dot g$ significantly deviates from zero only when the mode is superhorizon. Then the factor that multiplies $\dot g$ in the full source\rf{los_source} reduces to $(\fr13d_{\gamma,\,\rm in}+\Phi+\Psi)$, giving eq.\rf{los_source_late_ent}. } \be S\approx {\partial\over\partial\tau}\left[g \left(\fr13d_{\gamma,\,\rm in}+\Phi+\Psi\right)\right].~~~ \lb{los_source_late_ent} \ee Starting from eq.\rf{C_l}, neglecting the change of $j_l(kr(\tau))$ over the considered time interval $(0,\tau_{\rm ent})$, and trivially integrating the total derivatives\rf{los_source_late_ent} over $d\tau$ and $d\tau'$, we obtain \be \delta C_\ell^{\rm (postrec~entry)}\!\!\!\approx 4\pi\!\! \int\!{dk\over k}\, \Delta^2_\zeta(k)\,j_l^2(kr) \left|g \left(\!\fr13d_{\gamma,\,\rm in}+\Phi+\Psi\right)\right|^2_{\rm entry} \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!~~~~~~ \lb{Cl_late_ent} \ee where the variables $d_{\gamma,\,\rm in}$, $\Phi$, and $\Psi$ now stand for the corresponding transfer functions, normalized by the condition $\zeta(\tau\to0,\, k)\equiv1$. When the primordial power is nearly scale invariant, we can neglect the $k$-dependence of $\Delta^2_\zeta(k)$ over the range of $k\sim \ell/r$ that contribute to the integral\rf{Cl_late_ent}. We can also ignore the variation of the transfer functions in the last parentheses over this range.
Integration of the remaining $k$-dependent terms by the standard formula $\int{dx\over x}\,j_l^2(x)=[2\ell(\ell+1)]^{-1}$ gives \be \delta C_\ell^{\rm (postrec~entry)}\!\!\!\approx \fr{2\pi\,\Delta^2_\zeta(\ell/r_{\rm ent})}{\ell(\ell+1)}\, \left|g \left(\!\fr13d_{\gamma,\,\rm in}+\Phi+\Psi\right)\right|^2_{\rm entry} \!\!\!\!\!\!\!\!\!\!\!\!.~~~~~~ \lb{Cl_late_ent_inv} \ee Eqs.\rf{Cl_late_ent} and\rf{Cl_late_ent_inv} quantify the response of $C_l$ to the gravitational potentials at horizon entry after recombination. These equations confirm that for adiabatic perturbations that enter in the matter era the potentials $\Phi=\Psi=-\fr15 d_{\rm in}$ suppress the corresponding contribution to $C_l$ 25-fold. We now consider the response of $C_l$ to changes of potentials on subhorizon scales. Then the CMB sensitivity can be quantified in the Limber approximation\ct{Limber54}, applied to the CMB by \citet{Kaiser84,Kaiser92} and \citet{HuWhite_weak_coupling95}. With the details of the calculation given in Appendix~\ref{sec_appx}, the response of the power spectrum of CMB temperature to subhorizon changes of potentials is roughly \be \delta C_\ell^{\rm (postrec~subhor.)} \sim \fr{2\pi^2\Delta^2_\zeta(\ell/r)}{\ell^2} [\delta(\Phi+\Psi)]^2\ \fr{r}{\ell\delta\tau}.\quad \lb{Cl_resp_late} \ee Here $r$ is the comoving radial distance to the changing potentials and $\delta\tau$ is the time taken by a change $\delta(\Phi+\Psi)$, assumed gradual. Even for large changes of potentials ($|\delta(\Phi+\Psi)|\sim |\Phi+\Psi|$) the observed response $\delta C_\ell^{\rm (subhor.)}$\rf{Cl_resp_late} is suppressed relative to $C_\ell^{\rm (entry)}$\rf{Cl_late_ent_inv} by a factor $\fr{r}{\ell\delta\tau}$. We argued previously that under linear evolution generally $\delta\tau\sim \mathcal H^{-1}$. Then for subhorizon scales ($r/\ell\ll \mathcal H^{-1}$) the factor $\fr{r}{\ell\delta\tau}$ is much smaller than unity.
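Substituting the matter-era entry values into the last factor of eq.\rf{Cl_late_ent_inv} makes the 25-fold suppression explicit; a minimal arithmetic check, with transfer functions normalized to $d_{\rm in}=1$ and the visibility factor $g$ set to unity:

```python
# Matter-era horizon entry of an adiabatic mode, units with d_in = 1:
d_gamma_in = 1.0
phi = psi = -d_gamma_in / 5.0     # CDM-dominated potentials, Phi = Psi = -d_in/5

entry_term      = (d_gamma_in / 3.0 + phi + psi) ** 2   # |(1/3) d_in + Phi + Psi|^2
entry_term_flat = (d_gamma_in / 3.0) ** 2               # same mode with Phi = Psi = 0

print(entry_term_flat / entry_term)   # ~25: gravitational suppression of C_l
```

The bracket equals $\frac13-\frac25=-\frac1{15}$ with the potentials and $\frac13$ without them, so the power ratio is $5^2=25$.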
This suppression is easily understood as the cancellation of positive and negative contributions to $\Delta T/T$ from the peaks and troughs of all perturbation modes except for those whose ${\bf k}$ is almost orthogonal to the line of sight\ct{HuWhite_weak_coupling95}. In addition, dark dynamics influences the CMB indirectly through the gravitational impact on baryons, coupled to the CMB by Thomson scattering. After recombination, the baryon-mediated linear contribution on subhorizon scales is suppressed even more than the late ISW contribution\rf{Cl_resp_late}. Indeed, the direct contribution of baryon velocity $v_b^i n_i$ to the source of CMB temperature perturbation\rf{los_source} in the Limber limit vanishes\ct{HuWhite_weak_coupling95}: In this limit, only the modes with ${\bf k}$ orthogonal to ${\bf n}$ would contribute to $C_\ell$, yet for their scalar perturbations (with $v^i$ parallel to ${\bf k}$) $v_b^i n_i= 0$. The indirect effect of weakly coupled baryonic perturbations through affecting the photon overdensity or other photon multipoles in the source\rf{los_source} is also diminished relative to the already suppressed ISW term by an additional factor $\delta\tau/\tau_T$, where $\tau_T$ is the photon free-flight time. This factor is small for any temporal interval after CMB decoupling. The CMB is also affected through various non-linear effects, e.g., its lensing by cosmic structure or the Sunyaev-Zel'dovich effect. Unlike the primary anisotropies, these non-linear features may be useful probes of the dark dynamics on small scales whenever they can be separated from the dominant primary signal \citep[e.g.,][]{Seljak:1998nu,HuOkamoto_LensingReconstr01}.
\begin{figure}[t] \centerline{\includegraphics[width=7cm]{CMB_entry.eps}} \caption{While the acoustic pattern of CMB temperature anisotropy is formed on the surface of last scattering ($z_{\rm rec}\sim 1100$), the metric perturbations on this surface {\it do not play a major role\/} in the observed CMB spectra $C_\ell$. The potentials at last scattering do control the subdominant ``baryon loading'' effect. Primarily, however, the CMB multipoles are sensitive to the gravitational potentials and underlying dark dynamics at the {\it horizon entry\/} of the corresponding modes ($z\sim \ell^2/4$ for $z< z_{\rm eq}$, and $z\sim 20\,\ell$ for $z\gtrsim z_{\rm eq}$), covering the entire range from $z\sim 1$ for the lowest $\ell$ up to $z\sim 6\times 10^4$ for $\ell\sim 3000$.} \label{fig_hor_ent} \end{figure} \subsubsubsection{Overall} The above results can be summarized concisely as follows. At a given comoving scale $k$, probed by $\ell\sim kr \approx k\times 14$\,Gpc, the CMB anisotropy is most sensitive to the value and evolution of $\Phi+\Psi$ during the scale's horizon entry. This is illustrated by Fig.~\ref{fig_hor_ent}. Although most of the observed CMB photons scattered last during hydrogen recombination at $z_{\rm rec}\sim 1100$, the potentials during recombination are of relatively minor importance for the observed anisotropies. (Exceptions are the scales of the first peak that happened to enter during recombination, and the nondegenerate baryon-loading signature on smaller scales.) When the potentials do not decay quickly during the entry, they significantly suppress the CMB temperature spectrum~$C_\ell$ (by a factor of 25 in the Einstein-de Sitter scenario with adiabatic perturbations). The $C_\ell$ response to changes of the potentials after the entry is weak. After recombination, in particular, the $C_\ell$ response to the subhorizon changes of potentials is diminished by a factor $\sim\mathcal H/k$.
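The redshift-to-multipole correspondence quoted in Fig.~\ref{fig_hor_ent} can be reproduced by a short numerical estimate. A sketch, assuming a flat $\Lambda$CDM background with $\Omega_m h^2=0.13$ and $h=0.7$, a radiation density $\Omega_r h^2\approx 4.15\times10^{-5}$ (an assumed input for photons plus massless neutrinos), and the acoustic horizon approximated as $S\approx \tau/\sqrt3$:

```python
import numpy as np
from scipy.integrate import quad

# Flat LCDM background: Om h^2 = 0.13, h = 0.7; radiation density for
# photons + massless neutrinos (Or h^2 ~ 4.15e-5, an assumed value).
h = 0.7
Om = 0.13 / h**2
Orad = 4.15e-5 / h**2
OL = 1.0 - Om - Orad
c_H0 = 2998.0 / h                      # Hubble radius c/H0 in Mpc

def conformal_time(a):                 # tau(a) = int_0^a da'/(a'^2 H(a')) in Mpc
    E = lambda x: np.sqrt(Orad/x**4 + Om/x**3 + OL)
    return quad(lambda x: c_H0 / (x**2 * E(x)), 0.0, a, limit=200)[0]

tau0 = conformal_time(1.0)

def l_ent(z):                          # entry multipole: r/S with S ~ tau/sqrt(3)
    tau = conformal_time(1.0/(1.0 + z))
    return np.sqrt(3.0) * (tau0 - tau) / tau

print(l_ent(3110.0), l_ent(1090.0))    # roughly 200 and 90, as quoted in the text
```

The values obtained at equality and recombination are only as accurate as the $S\approx\tau/\sqrt3$ approximation, but they land close to the $\ell_{\rm ent}\approx 200$ and $90$ quoted in the text.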
In principle, $\Delta T/T$ of the CMB responds comparably well to the metric inhomogeneities during the horizon entry at either high or low redshifts. Nevertheless, the CMB is a considerably better probe of the horizon-scale potentials at high redshifts because of the reduced cosmic variance. Let us define the wavevector $k_{\rm ent}$ of the CMB modes that enter at a redshift~$z$ by a condition $k_{\rm ent}\,S(z)\equiv1$, where $S=\int c_s\,d\tau$ is the corresponding size of acoustic horizon. The modes that enter at a redshift $z$ match to the multipole $\ell_{\rm ent}=k_{\rm ent}\,r=r/S$. Taking $S\approx \tau/\sqrt3$ as a valid estimate both in the radiation era (when $R_b$ is negligible) and after recombination (when baryons decouple and formally $c^2_{\rm eff,\,\gamma}=1/3$), we obtain \be \ell_{\rm ent}(z)\equiv {r\over S}\approx \left\{\ba{l} z/18,~~~~~{\rm radiation\ era},\\ \sim 2\sqrt{z},~~{\rm matter\ era},\\ \ea\right. \lb{l_to_z} \ee estimated for a $\Lambda$CDM model with $\Omega_mh^2= 0.13$ and $h=0.7$. For this model the redshift of equality $z_{\rm eq}\approx 3110$ and recombination $z_{\rm rec}\approx 1090$ match to $\ell_{\rm ent}\approx 200$ and $90$ respectively. The 1-$\sigma$ uncertainty of probing a non-degenerate effect, quantified by a parameter $p$, can be evaluated as \cite[e.g.,][]{KendallStuart2} \be \Delta p = 1\left/\sqrt{\sum_l\left(\fr{\partial C_l/\partial p}{{\rm r.m.s.}\,C_l}\right)^2}\right.. \ee For a CMB experiment limited on the studied scales only by cosmic variance, for either temperature or polarization power spectra $({\rm r.m.s.}\,C_l)/C_l=[(\ell+\fr12)f_{\rm sky}]^{-1/2}$\ct{Knox:1995dq, Seljak:1996ti,Zaldarriaga:1997ch}, where $f_{\rm sky}\le1$ is the experimental sky coverage. Then \be \Delta p = 1\left/\sqrt{\sum_l(\ell+\fr12)f_{\rm sky}\left({\partial \ln\, C_l\over \partial p}\right)^2}\right..
\ee Thus for constraining an effect that affects the modes which enter the horizon at a redshift~$z$ over a span of redshifts~$\Delta z$, by eq.\rf{l_to_z}, we can expect accuracy \be \Delta p \sim f_{\rm sky}^{-1/2}(\partial \ln\, C_l/\partial p)^{-1}\times \left\{\ba{l} 18/\sqrt{z\,\Delta z},~~{\rm radiation\ era},~~~\\ ~~1/\sqrt{2\,\Delta z},~~{\rm matter\ era}. \ea\right. \lb{eps_cmb} \ee This accuracy improves with increased $\Delta z$ and, when probing the radiation era, with increased $z$. \section{Applications and caveats for modified gravity} \label{sec_ModGrav} We now briefly consider the possibility that general relativity (GR) breaks down on cosmological scales. Modified gravity (MG) offers rich phenomenology by invoking new degrees of freedom whose dynamics is substantially different and often involves more parameters than the standard cosmological model. It typically predicts nonstandard evolution of perturbations on horizon and subhorizon scales alike. While the detection of such phenomena as non-standard growth of cosmic structure or anomalous lensing may indicate MG, we will see that without any restrictions on the dark dynamics, the identical effects could always be generated by a non-minimal dark sector that influences the visible matter according to the standard Einstein equations.\footnote{ We stress that these features remain useful hints of MG. Since they are even more ubiquitous than the scenarios of modified gravity and may reveal other new physics, they are well worth searching for. } Further in this section we will argue that other features should yet allow one to discriminate MG observationally. We will assume that even if full Einstein gravity fails on cosmological scales, the Einstein principle of equivalence remains valid for the visible species. This assumption is common to many existing MG models. It is motivated by the relatively strong terrestrial and solar-system constraints on the equivalence principle.
Thus we suppose that the regular matter couples covariantly to a certain matter-frame ``physical'' metric~$g_{\mu\nu}$. However, we now neither take for granted that all dark fields also couple covariantly to the same metric~$g_{\mu\nu}$, nor assume that the dynamics of $g_{\mu\nu}$ itself is governed by the Einstein equations. Under the weaker assumption of the equivalence principle for only the visible matter, all observable signatures of new physics can still be quantified by {\it any of the three parameterization schemes\/} of Sec.~\ref{sec_dynamics}. Indeed, the $g_{\mu\nu}$ background can still be described by a single number for its present spatial curvature and by its uniform redshift-dependent expansion rate $\mathcal H(z)$. The potentials $\Phi$ and $\Psi$, defined by eq.\rf{Newt_gauge} to parameterize the inhomogeneities of the physical metric $g_{\mu\nu}$, will play their usual role in the evolution of light and baryons. Moreover, the (effective dark) energy and momentum densities assigned to the missing sources of curvature by the naive application of the Einstein equations will evolve in agreement with the usual local conservation laws, which is easily seen as follows. Let by definition \be T^{\mu\nu}_{\rm eff~dark}\equiv \fr1{8\pi G}\,G^{\mu\nu}-\sum_{{\rm known}~a}T^{\mu\nu}_a, \lb{T_eff_dark} \ee where the last sum is over the known regular particles. Their energy-momentum tensor [constructed unambiguously from the species' action $S_a$ as $T^{\mu\nu}_a=(2/\sqrt{-g})\,\,\delta S_a/\delta g_{\mu\nu}$] is covariantly conserved by the assumed covariance for the regular species. The matter-frame Einstein tensor $G_{\mu\nu}=R_{\mu\nu}-\fr12g_{\mu\nu}R$, where $R_{\mu\nu}$ is the Ricci tensor of the physical metric, is also covariantly conserved by the Bianchi identities. 
Thus the entire expression\rf{T_eff_dark} is covariantly conserved: \be T^{\mu\nu}_{\rm eff~dark\,;\nu}=0, \lb{eff_conserv} \ee with all covariant derivatives being taken relative to the physical metric. Since $T^{\mu\nu}_{\rm eff~dark}$ is covariantly conserved, the background and perturbations of the missing energy and momentum evolve according to equations\rf{dot_rho}\,--\rf{dot_v}, derived from the identical conservation law. Thus if all our probes of the invisible degrees of freedom are based solely on their gravitational impact on light, baryons, and other regular particles (neutrinos, WIMP's when probing dark energy, etc.) then phenomenologically {\it all observable signatures\/} of a considered MG model can be mimicked by an effective GR-coupled dark sector. Specifically, we can find the corresponding effective $w(z)$ to reproduce the missing energy background, and the effective anisotropic stress $\sigma(z,k)$ and stiffness $c^2_{\rm eff}(z,k)$ to describe scalar perturbations. For example, for a popular DGP gravity model\ct{DGP00} this was demonstrated explicitly by\ctt{KunzSapone06}. With this discouraging general conclusion, we may enquire whether cosmology can at all reveal definitive distinctions between MG and general relativity with a peculiar yet physically permissible dark sector. To establish such distinctions, let us summarize the conceptual differences of MG from general relativity: \begin{itemize} \item[A.] Some dark degrees of freedom may not couple covariantly to the matter metric $g_{\mu\nu}$. \item[B.] The gravitational action may not be given entirely by the Hilbert-Einstein term $S_{\rm grav}=(16\pi G)^{-1}\!\int d^4x\sqrt{-g}\,R$. \end{itemize} The distinctive observable consequences of these special properties of MG may include: \begin{itemize} \item[1.] The effective dark dynamics, which is observationally inferred by assuming the Einstein equations, violates the equivalence principle (EP). 
The EP violation can be seen, e.g., as \begin{itemize} \item[i.] The dependence of the inferred local dark dynamics on the distribution of visible matter, when it cannot be explained by non-gravitational dark--visible coupling allowed by particle experiments. \item[ii.] Superluminality of the inferred dark dynamics. \end{itemize} \item[2.] The dynamics of gravitational waves (tensor modes) deviates from the predictions of the Einstein equations, assuming that both the visible and inferred dark species contribute to the energy-momentum tensor in the simplest way. \end{itemize} Most of these signatures have already been utilized for falsifying MG models with existing or suggested observations\ct{Clowe06ApJL,Bradac:2006er} and\ct{KahyaWoodard07}; we comment on them further below. \subsection{EP violation for the inferred dark dynamics} \label{sec_MG_scal} The violation of property~A can be illustrated by an extreme toy theory in which the regular matter (``baryons,'' for short) constitutes all the independent degrees of freedom. Let the metric in this theory be specified by the baryon distribution via some deterministic relation (e.g., as $g^{\mu\nu}=\Lambda^{-1} T^{\mu\nu}_{\rm baryon}$, following from an action $S=S_{\rm baryon}-\int\!d^4x\,\sqrt{-g}\,\Lambda$ with $\Lambda$ being a constant). Even in such a contrived theory, by the above arguments, the effective missing energy and momentum densities\rf{T_eff_dark} would appear to evolve and gravitate in agreement with energy-momentum conservation and the Einstein equations. In this example, however, the effective dark density and stress are uniquely determined by the distribution of the visible matter. This does not occur for truly independent dark degrees of freedom. In more realistic MG theories we should not expect a deterministic relation between the visible and effective dark distributions. 
Still, if the dark and visible sectors interact other than by coupling covariantly to the common metric, then the inferred laws of the effective local dark dynamics would depend on the visible environment. Detection of such dependencies would be particularly feasible for the dark matter, for which there are plentiful observable regions with varying environment: varying in both visible matter density and in its ratio to dark density. In addition to baryons clumping strongly at low redshifts, the density of all known visible species varies with redshift due to the Hubble expansion. The ratio of visible to dark density is perturbed even on large scales due to dark matter being decoupled from the photon-baryon acoustic oscillations and clustering on its own until recombination. The segregation of dark matter and baryons is even more apparent at late times on the scales of clusters and smaller, becoming almost complete in the famous bullet cluster example\ct{Markevitch:2003at,Clowe06ApJL,Bradac:2006er}. In order to test that the standard CDM+GR model accounts for the signatures of dark matter at various redshifts and scales, it is crucial to observe and robustly model the dynamics on a wide range of scales, including the highly non-linear ones. It is also important to utilize complementary probes, such as baryonic matter (responding to $\Phi$), and the CMB or gravitational lensing (probing $\Phi+\Psi$). 
Various signatures of dark matter that allow comparison of dark matter parameters under different conditions include: the height alteration of the odd and even CMB acoustic peaks due to the CDM potential affecting the coupled baryons before recombination\ct{HuSugSmall96}, the early ISW enhancement of the first peak \citep[e.g.,][]{HuNature}, the significant suppression of the CMB temperature power $C_\ell$ below the first peak (Sec.~\ref{sec_cl_resp_quant} and Fig.~\ref{fig_CMB_mat}), the linear and nonlinear dynamics of cosmic structure, and the lensing of the CMB and background galaxies by the CDM potential at lower redshifts. Dark energy (DE) does not trace any visible species at low redshifts, when its density appears almost redshift-independent. DE may track the radiation or matter background at higher redshifts; then its perturbations are not expected to evolve similarly to any of the standard species. (For example, see the evolution of the perturbations of tracking quintessence in the radiation and matter eras on the right panels of Figs.~\ref{fig_CMB_rad} and~\ref{fig_CDM} respectively.) Constraints on the background equation of state $w$ can rule out specific DE or MG models but do not decisively differentiate between the DE and MG paradigms. Beyond~$w$, the perturbations of dark energy are likely to be stiff and thus can be constrained only during horizon entry. As discussed previously, the CMB can provide such constraints over a wide range of redshifts, from $z\sim 1$ up to $z\sim 10^5$, becoming much tighter toward higher redshifts (Sec.~\ref{sec_impacts} and Fig.~\ref{fig_hor_ent}). With the cosmological constant fitting the current data well, the search for {\it any manifestation\/} of non-trivial DE or MG dynamics may be the utmost priority for establishing the origin of cosmic acceleration. This includes falsifying the background equation of state $w(z)\equiv -1$ at low redshifts but by no means is limited to $w(z)$. 
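The $w(z)$-based test can be made concrete with a toy numerical sketch (my own illustration, not code from this work): given a measured expansion history $E^2(z)\equiv H^2/H_0^2$, the density missing from the Friedmann equation, $\rho_{\rm eff}\propto E^2-\Omega_m(1+z)^3$, determines the effective equation of state through the conservation law $d\rho/dz=3(1+w)\rho/(1+z)$. The value $\Omega_m=0.3$ and the trial expansion history are assumed.

```python
import numpy as np

# Toy reconstruction of the effective equation of state of the "missing"
# energy: assign rho_eff(z) = E^2(z) - Omega_m (1+z)^3 (in units of the
# present critical density) and read off w_eff(z) from the conservation law
#   d rho / dz = 3 (1 + w) rho / (1 + z).

def w_eff(z, E2, omega_m):
    """Effective equation of state inferred from the expansion history.
    z  : array of redshifts (increasing); E2 : H^2(z)/H0^2 sampled on z."""
    rho = E2 - omega_m * (1.0 + z) ** 3   # effective dark density
    drho = np.gradient(rho, z)            # numerical d rho / dz
    return -1.0 + (1.0 + z) * drho / (3.0 * rho)

# Sanity check: a LambdaCDM expansion history must return w_eff(z) = -1.
z = np.linspace(0.0, 2.0, 201)
omega_m = 0.3
E2_lcdm = omega_m * (1.0 + z) ** 3 + (1.0 - omega_m)
w = w_eff(z, E2_lcdm, omega_m)
```

Applied to real distance or expansion-rate data, such a reconstruction can only report an *effective* $w(z)$; as argued above, it cannot by itself distinguish a genuine dark fluid from modified gravity.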
Examples of other searches, which may end up being more fruitful, are: the search for a subdominant non-standard component in the radiation or early matter era (Secs.~\ref{sec_stiffness} and \ref{sec_clustering}), nonstandard growth of cosmic structure\ct{Linder_gamma05}, and nonstandard $\Phi/\Psi$\ct{Bert06}, probed by comparing the CMB or lensing signal with the galaxy and cluster distributions. \subsection{Superluminal dark flows} \lb{sec_MG_superlum} Another possible signature of MG is the superluminal propagation of dark perturbations. The superluminality may be spurious, from our describing MG by a parameterization that assumes general relativity. It may also be real, from the equivalence principle not applying to some dark degrees of freedom. While numerous observational tests for superluminality can be thought of, here we consider only one specific example. Spurious or real superluminal propagation of (effective) dark inhomogeneities can be constrained by the sensitivity of the CMB acoustic phase to the propagation velocity of dark perturbations (Sec.~\ref{sec_speed}). A straightforward Fisher-matrix forecast shows that TT, TE, and especially EE spectra from the future high-$\ell$ CMB experiments, Planck\footnote{http://www.rssd.esa.int/Planck} and ACT\footnote{http://www.physics.princeton.edu/act} in particular, will strongly restrict the abundance of any dark component that supports scalar perturbations with $c_p^2>1/3$, including $c_p>1$. If $B$-polarization of the CMB reveals the signal of relic gravitational waves\ct{SelZald_Bmode,Kamionk96_Bmobe}, a similar effect in the tensor sector will directly constrain the streaming dark species for which streaming velocity $c_p$ exceeds the speed of gravitational waves, $c_{s,\,\rm grav}$. [Indeed, $c_{p,\,\nu}>c_{s,\,\rm grav}$, albeit under somewhat different conditions, for the dark matter emulator of \ctt{KahyaWoodard07}, discussed below.] 
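A toy version of such a Fisher forecast, restricted to a single parameter and cosmic-variance-limited errors $\sigma_\ell^2=\fr{2}{2\ell+1}C_\ell^2$, illustrates why extending the spectra to higher $\ell$'s tightens the constraints: more statistically independent multipoles contribute. The spectrum shape, the assumed $1\%$ multiplicative response, and the $\ell$ ranges below are purely illustrative; a realistic TT/TE/EE forecast uses the full covariance of the spectra plus instrument noise.

```python
import numpy as np

# Toy single-parameter Fisher forecast with cosmic-variance errors only:
#   F = sum_ell (dC_ell/dtheta)^2 / sigma_ell^2,  sigma(theta) = 1/sqrt(F).

def sigma_theta(ell_max, response=0.01):
    ells = np.arange(2.0, ell_max + 1.0)
    C = 1.0 / (ells * (ells + 1.0))     # assumed toy spectrum shape
    dC = response * C                   # assumed 1% multiplicative response
    fisher = np.sum(dC**2 * (2.0 * ells + 1.0) / (2.0 * C**2))
    return 1.0 / np.sqrt(fisher)

# Doubling ell_max roughly halves the error for this scale-free response:
s1000 = sigma_theta(1000)
s2000 = sigma_theta(2000)
```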
The oscillation amplitude of the gravitational (tensor) modes after horizon entry is noticeably affected by neutrino perturbations\ct{Weinberg:2003ur}. However, the phase shift of the tensor oscillations is strictly forbidden\ct{SB_tensors05} when for all species $c_{p,\,\rm dark}\le c_{s,\,\rm grav}$, as required by GR. If, on the other hand, the velocity of neutrinos or other dark species with non-negligible abundance exceeds $c_{s,\,\rm grav}$, then the arguments of\ctt{SB_tensors05} show that the phase of the tensor BB signal in the CMB will necessarily be shifted. \subsection{Nonstandard phenomenology of gravitational waves} \label{sec_MG_tens} An interesting test of a broad class of MG alternatives to dark matter was recently suggested by\ctt{KahyaWoodard07}. They considered the propagation of a fundamental tensor field $\tilde g_{\mu\nu}$ [whose kinetics is governed by $S_{\rm grav}=(16\pi \tilde G)^{-1}\!\int d^4x\sqrt{-\tilde g}\,\tilde R$] in any of the models where $\tilde g_{\mu\nu}$ is sourced by the luminous matter alone via the Einstein law \be \tilde G^{\mu\nu}\approx 8\pi G T^{\mu\nu}_{\rm luminous}. \lb{tildeEeq} \ee The physical metric $g_{\mu\nu}$ was set to reproduce the observed gravitational potentials in the vicinity of our galaxy. The knowledge of the other specifics of the MG dynamics was therefore unnecessary. Then\ctt{KahyaWoodard07} showed that the arrival of the gravitational waves from a cosmological event, e.g.\ a supernova, is noticeably {\it delayed\/} with respect to the arrival of the associated neutrinos and photons. While the assumption\rf{tildeEeq}, implicit in\ctt{KahyaWoodard07}, incorporates many models, it is not generic. For example, it does not apply if \be \tilde G^{\mu\nu}\approx 8\pi \tilde G T^{\mu\nu}_{\rm luminous}, \ee where $\tilde G$ differs from the local value of Newton's gravitational constant $G$. 
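Whatever form the effective Einstein law takes, a mismatch between the propagation speeds of gravitational waves and photons accumulates into an arrival-time offset over cosmological distances. A back-of-the-envelope sketch (the distance and fractional speed deficit below are arbitrary illustrative numbers, not the prediction of\ctt{KahyaWoodard07}):

```python
# Arrival-time delay of a signal propagating at (1 - eps) * c relative to
# light, over a distance D. Purely illustrative numbers.

C_KM_S = 299792.458            # speed of light in km/s
MPC_KM = 3.0857e19             # one megaparsec in km

def delay_seconds(D_mpc, eps):
    travel_time = D_mpc * MPC_KM / C_KM_S    # photon travel time, seconds
    return travel_time * eps / (1.0 - eps)   # extra delay of slower signal

# Even a fractional speed deficit of 1e-15 accumulates to ~1 second
# over 10 Mpc, simply because the travel time itself is ~1e15 s:
dt = delay_seconds(10.0, 1e-15)
```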
Even in the TeVeS model\ct{Bekenstein04}, which motivates\rf{tildeEeq}, strictly $\tilde G\not=G$, although it is natural to assume that for TeVeS this difference is small\ct{Bekenstein04}. Yet more generally, $\tilde G$ may depend on the new MG degrees of freedom, e.g., on the scalar field $\phi$ of tensor-scalar or tensor-vector-scalar models. In any case, even if eq.\rf{tildeEeq} fails and the exact prediction for the gravitational-wave delay by\ctt{KahyaWoodard07} does not apply, the gravitational waves and neutrinos or photons would still generally propagate with different velocities. Thus future gravitational-wave astronomy can offer robust tests for discriminating GR-coupled dark matter from modified gravity. \section{Summary} \label{concl} \subsection{Approach} Constraints on the {\it inhomogeneous\/} dynamics of the dark sectors are essential to a full understanding of their nature. Varying with both redshift and {\it spatial scale\/}~$k$, the observable imprints of dark perturbations provide plentiful information about the dark sectors' local kinetic properties and (self) interactions. This information significantly complements the dark species' background equation of state $w(z)$. Reliable extraction of the information encoded in the dark perturbations is hindered by the indirectness of their, sometimes subtle, gravitational impact on observables. It is also obstructed by the numerous contributions to the observables from other complex multiscale cosmological and astrophysical phenomena. Moreover, controlled repeatable measurements of dark sectors cannot be afforded when the measuring device is the entire observable large-scale universe. Nevertheless, the unavoidable ``nuisance'' phenomena and contaminations can be counteracted by detailed understanding of the involved physics and by complementary probes of as many independent characteristics of the dark sectors as can be observationally accessed. 
For a tractable and systematic study of the dark sectors beyond the background equation of state, we map the local kinetic properties of inhomogeneous dark dynamics at a given redshift to the characteristics of the observed cosmological distributions that can reflect those dark properties. The correct mapping is more likely to be achieved with a description of evolution that is simple and that manifests the objective causal relations explicitly. The dynamics of gravitationally coupled perturbations of dark and visible species during horizon entry is pivotal to probing the dark inhomogeneities. During horizon entry, any dynamical species, including dark radiation and all dark energy candidates with $w\not\equiv-1$, are necessarily perturbed and leave imprints of their highly specific inhomogeneous kinetics on the observable probes. The contribution of subhorizon perturbations of a given magnitude $\delta\rho/\rho$ to gravitational potentials is suppressed, as $\Phi \sim (\mathcal H/k)^2\,\delta\rho/\rho$. On the scales of the horizon and beyond, the apparent gravitational impact of the dark species on the visible ones depends strongly on the description used. In most descriptions, the apparent inhomogeneities of the visible species are changed by the properties that dark species have long {\it before\/} and even long {\it after\/} the change. This misguides the identification of the observable features that reflect the internal dark properties at specific epochs. The apparent cause--effect mismatch is, nevertheless, not intrinsic to linearly perturbed cosmological evolution. Within Einstein gravity, the internal local properties of dark species at a past time~$\tau$ affect only the perturbations that had approached or entered the horizon by the time~$\tau$. 
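The subhorizon suppression $\Phi \sim (\mathcal H/k)^2\,\delta\rho/\rho$ quoted above can be put in numbers with a schematic sketch (order of magnitude only; the $O(1)$ factors of the Poisson equation are dropped):

```python
# Schematic potential sourced by a fractional overdensity delta on a
# comoving scale k, Phi ~ (cal H / k)^2 * delta. Order of magnitude only.

def potential(k_over_H, delta):
    return delta / k_over_H**2

# The same 1% overdensity gravitates 100x more weakly at k = 10 cal H
# than at the horizon scale k = cal H:
phi_hor = potential(1.0, 0.01)
phi_sub = potential(10.0, 0.01)
```

This is why a deep-subhorizon dark perturbation of fixed $\delta\rho/\rho$ leaves only a weak gravitational imprint, concentrating the observable sensitivity at horizon entry.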
The mapping of dark properties to evolutionary changes is unambiguous after horizon entry, when baryonic and photon perturbations are instantaneously impacted by the Newtonian potentials, then reflecting the instantaneous overdensity of visible and dark species. The remaining, confined to the horizon entry, gravitational impact of the dark sectors at a given~${\bf k}$ in linear theory is also unambiguous (Sec.~\ref{sec_uniq}). It is easy to describe the full perturbed linear cosmological dynamics (including that of partly polarized photons, baryons, realistic neutrinos, quintessence, and other particles or fields) by a formalism in which the changes of perturbations in the visible sectors are concurrent with the local dark properties responsible for these changes\ct{B06}. The resulting description reveals explicitly the objective causal dependencies and enables us to map observable features to local dark properties more reliably than with traditional formalisms, which lack this concurrence. The suggested formalism considers perturbations of canonical rather than proper distributions. It has additional useful technical benefits, for example, note (A) and (B) in the caption of Fig.~\ref{fig_CMB_analogy}, illustrating the corresponding description of the acoustic CMB modes. Using this formalism, we relate general properties of dark perturbations to observable features by tracking the gravitationally coupled evolution of dark and visible perturbations. The advantages of such an evolutionary study over black-box computation of observables are twofold: It isolates all observable signatures of the studied phenomena; it also reveals the mechanisms that generate these signatures and allows us to judge the mechanisms' robustness. \subsection{Sensitivity of probes} We categorize the primary probes, responding to the dark dynamics through linear evolution, as either ``light'' or ``matter'', Sec.~\ref{sec_probes}. 
Those of the first type (CMB spectra) probe the metric along trajectories close to null geodesics; those of the second type (matter transfer functions) respond to the metric along the time-like Hubble-flow worldlines. Primary CMB anisotropies are highly sensitive to the values and evolution of the potentials {\it during horizon entry\/}. They are only mildly affected by the gravitational potentials on subhorizon scales. (The situation may differ for secondary, nonlinear, CMB features.) For example, after recombination the response of the CMB angular power spectrum $C_\ell$ to a change of subhorizon potentials is easily quantifiable with eq.\rf{Cl_resp_late}, derived in the Limber approximation. This response is suppressed relative to the contribution\rf{Cl_late_ent_inv} from the horizon entry by a factor $\fr{r}{\ell\delta\tau}$. By the arguments of Sec.~\ref{sec_cl_resp_quant}, for linear changes of subhorizon potentials this factor is much smaller than unity ($\sim \mathcal H/k$). The CMB is an excellent probe of the potentials on the horizon scales at high redshifts. The dark inhomogeneities that enter the horizon at a redshift~$z$ most strongly affect the CMB multipoles with $\ell\sim z/20$ for the radiation epoch and $\ell\sim 2\sqrt{z}$ for the matter era, eq.\rf{l_to_z}. With sufficient angular resolution of the detector and reliable subtraction of foregrounds and secondaries, the larger number of statistically-independent multipoles at higher $\ell$'s improves the constraints on the parameters that describe the dynamics at higher redshifts $(z,\,z+\Delta z)$ as $\sqrt{z\,\Delta z}$ for radiation and $\sqrt{\Delta z}$ for matter domination, eq.\rf{eps_cmb}. In contrast, matter transfer functions are more sensitive to the potential $\Phi$ at low redshifts. The gravitational impact on massive matter at later times is erased less by the Hubble friction. 
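The multipole--redshift correspondence quoted above ($\ell\sim z/20$ in the radiation era, $\ell\sim 2\sqrt{z}$ in the matter era) can be sketched as follows; the equality redshift $z_{\rm eq}\approx 3400$ is an assumed switch-over point, and the mapping is only a rough order-of-magnitude guide:

```python
import math

# Rough mapping between the horizon-entry redshift of a perturbation and
# the CMB multipole it affects most: ell ~ z/20 (radiation era) and
# ell ~ 2 sqrt(z) (matter era); z_eq ~ 3400 is an assumed equality redshift.

Z_EQ = 3400.0

def ell_of_entry(z):
    if z > Z_EQ:                    # radiation domination
        return z / 20.0
    return 2.0 * math.sqrt(z)       # matter domination

# Entry at z ~ 1e5 imprints at ell ~ 5000, deep in the damping tail,
# while entry near recombination (z ~ 1100) maps to ell ~ 70:
ell_high = ell_of_entry(1.0e5)
ell_rec = ell_of_entry(1100.0)
```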
The response of matter overdensity to $\Phi(\tau)$ at any past epoch since the horizon entry can be quantified by a simple equation $\Delta (\delta\rho_m/\rho_m)_{\rm today}\sim - \,\tau\Delta\tau\,k^2\,\Phi(\tau)$, eq.\rf{growth_grf}. The reduction of the matter sensitivity to early-time dynamics is manifested in the prefactor~$\tau\Delta\tau$. \subsection{Mapping the dark properties} The potentially measurable properties of arbitrary dark sectors may be parameterized by a single function of $z$ for background and two (transfer) functions of $z$ and $k$ for scalar perturbations. The effects of dark sectors or modified gravity on the metric may be phenomenologically described in terms of $\{\mathcal H$, $\Phi$, $\Psi\}$. Assuming the Einstein equations, we may instead consider either dark species' densities and stresses $\{\rho$, $\delta^{(c)}$, $\sigma\}$ or their dynamical characteristics $\{w$, $c^2_{\rm eff}$, $\sigma\}$ (all reviewed in Sec.~\ref{sec_dynamics}). The correspondence between the inhomogeneous properties of the dark universe and the observable characteristics of the CMB and LSS is summarized by Table~\ref{tab_sum} and the text that follows. \subsubsection{$\sigma$ vs. $c_{\rm eff}$} The measurable dynamical characteristics of scalar dark {\it inhomogeneities\/} may be fully quantified by the anisotropic stress potential $\sigma$, eq.\rf{sigma_def}, and stiffness (``effective sound speed'') $c^2_{\rm eff}$, Sec.~\ref{sec_param_dyn}. For adiabatic initial conditions, the gravitational potentials $\Phi(\tau,k)$ and $\Psi(\tau,k)$ depend on $c^2_{\rm eff}$ only at the order $(k\tau)^2$, i.e., at a relatively late stage of horizon entry, Sec.~\ref{sec_stiffness}. On the contrary, anisotropic stress of streaming species affects the potentials earlier, already at the order $(k\tau)^0$, Sec.~\ref{sec_anis_stress}\ct{MaBert95}. 
This distinction may be given the following intuitive illustration: The stiffness affects the motion of the dark species, hence a certain time is required for the dark species to redistribute and start sourcing different potentials. On the other hand, anisotropic stress is generated without displacing the matter. Being itself a source of curvature in the Einstein equations, anisotropic stress changes the gravitational potentials earlier. There are several observable consequences of the early and late influence respectively of $\sigma$ and $c^2_{\rm eff}$: Both $\sigma$ of freely streaming relativistic species and $c^2_{\rm eff}$ of a component of dark radiation as stiff as quintessence ($c^2_{{\rm eff},\,\phi}=1$) reduce $\Phi+\Psi$. Yet the corresponding reductions occur at a different phase of the acoustic CMB oscillations. As a result, the CMB oscillations in the radiation era are somewhat suppressed by the gravitational effect of the anisotropic stress of streaming neutrinos, yet are slightly boosted by the gravitationally coupled perturbations of tracking quintessence, Fig.~\ref{fig_CMB_rad}.\footnote{ The example of tracking quintessence shows that the reduction of the magnitude of dark perturbations {\it does not necessarily imply\/} damping of the gravitationally coupled CMB fluctuations, as sometimes stated. The sign of the impact on the CMB anisotropy depends not only on the sign of the change of $\Phi+\Psi$, sourced by the dark perturbations, but also on the {\it time\/} of this change. } Another consequence of the lateness of the gravitational impact of~$c^2_{\rm eff}$ is a considerably larger ratio of the matter response over the CMB response to a change of~$c^2_{\rm eff,\,dark}$, as compared to this ratio for $\sigma_{\rm dark}$. These results, summarized in the upper half of Table~\ref{tab_sum}, should help distinguish an excess of relic relativistic particles from a subdominant tracking classical scalar field in the radiation era. 
\begin{table*}[t] \begin{center} \caption{\lb{tab_sum}} \vspace{-.1cm} {\renewcommand{\arraystretch}{0} \begin{tabular}{|c|clll|} \hline \rule{0pt}{2pt}& & & & \\ \hline \strut & Quantified & ~Important & ~~Effect on & ~~~~Effect on \\ \strut\raisebox{1.4ex}[0pt]{Property} & by & ~~~~~~for & ~~the CMB & ~~~~~~Matter \\ \hline \rule{0pt}{2pt}& & & & \\ \hline \strut & & Early & Amplitude & Minor on power \\ \strut Anisotropic & ~~$\sigma$,~ eq.\rf{sigma_def} \strut & stage of & (Suppressed & (Enhanced \\ \strut Stress &[$\Phi-\Psi$,~ eq.\rf{Psi-Phi}]~~ & horizon & ~by $\sigma$ from & ~by $\sigma$ from \\ \strut & & entry & ~streaming) & ~streaming) \\ \hline \strut & & Late & Amplitude & Medium on power \\ \strut & & stage of & (Enhanced~~ & (Suppressed \\ \strut\raisebox{1.4ex}[0pt]{Stiffness} & \strut\raisebox{1.4ex}[0pt]{$c^2_{\rm eff}$,~ eq.\rf{c_eff_def}} & horizon & ~by tracking & ~by tracking \\ \strut & & entry & ~quintessence) & ~quintessence) \\ \hline \strut Velocity of & & Features & Phase of & Phase of \\ \strut a perturbation & $c_p$,~ Sec.~\ref{sec_speed} & local in & the acoustic & baryonic \\ \strut front & & real space & peaks & oscillations \\ \hline \strut & & Horizon & Significant & Primary \\ \strut & $\Phi$, $\Phi+\Psi$ & entry (CMB) & suppression & driving of \\ \strut\raisebox{1.4ex}[0pt]{Self-clustering} & eq.\rf{Newt_gauge}& and subhorizon & of the & the structure \\ \strut & & evolution (LSS) & amplitude & growth \\ \hline \vspace{-1.2cm} \tablecomments{ Summary of the discussed properties of the dark sectors, the epochs of their observational relevance, and their effects on the CMB power spectra and on large-scale structure.} \end{tabular} } \end{center} \end{table*} \subsubsection{$c_{\rm eff}$ vs. 
$c_p$} \lb{sec_sum_cp} Two other potentially observable characteristics of the dark species that should be distinguished from each other are the species' $c_{\rm eff}(k)$, considered above, and the velocity $c_p$ of the wavefront of their localized perturbation. These quantities need not be functionally related.\footnote{ Since $c^2_{\rm eff}(z,k)$ and $\sigma(z,k)$ provide the most general parameterization of the observable dynamical properties of the dark perturbations, these functions jointly should encode all the observable signatures of $c_p$. Nevertheless, since $c_p$ has an important kinetic meaning and maps to a sharp non-degenerate signature in the CMB spectra, it is worthwhile to consider this quantity on its own, and to compare it with $c_{\rm eff}(k)$. } Together, they are powerful indicators of the nature of dark sectors. For example, interacting relativistic particles whose free-flight time is much smaller than the Hubble time have $c_{\rm eff}(k)=c_p=1/\sqrt3$; free-streaming relativistic particles have $c_{\rm eff}(k)=1/\sqrt3$ while $c_p=1$; quintessence is characterized by $c_{\rm eff}(k)=c_p=1$. Observationally, the phase of the CMB acoustic oscillations in both temperature and $E$-polarization power spectra or their cross-correlation is shifted if and only if $c_p$ of any dark component in the radiation era exceeds the acoustic sound speed $c_s\simeq 1/\sqrt3$, Sec.~\ref{sec_speed}\ct{BS04}. Any such dark component contributes to the phase shift by an easily calculable amount. Importantly, the additive shift of the acoustic peaks for adiabatic perturbations is nondegenerate with any of the standard cosmological parameters or with the shape of the primordial power spectrum\ct{BS04}. Thus for the robust knowledge of the nature and kinetics of the dark radiation (encompassing neutrinos, possibly other light particles, and early quintessence) both $c_{\rm eff}$ and $c_p$ should be targeted by experimental strategies and data analyses. 
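The nondegeneracy of the $c_p$ signature rests on its being an {\it additive\/} shift of the acoustic peaks, in contrast to a rescaling of the peak spacing. A toy sketch (the acoustic scale $\ell_a=300$ and the phase shift $\phi=0.05$ below are arbitrary illustrative values, not the calculated shift of any model):

```python
import numpy as np

# Extrema of an acoustic pattern cos(pi * ell / ell_a + phi): a phase shift
# phi displaces *every* extremum by the same Delta ell = phi * ell_a / pi
# (additive), whereas changing ell_a rescales the spacing (multiplicative).

def peak_positions(ell_a, phi, n_peaks=4):
    n = np.arange(1, n_peaks + 1)
    return (n * np.pi - phi) * ell_a / np.pi

unshifted = peak_positions(ell_a=300.0, phi=0.0)
shifted = peak_positions(ell_a=300.0, phi=0.05)
offsets = unshifted - shifted    # identical for every extremum
```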
In particular, the determination of $c_{\rm eff}$ appears most promising from the comparison of the {\it CMB and LSS\/} power. On the other hand, $c_p$ is probed increasingly well by the CMB spectra extended toward {\it higher\/} $\ell$'s, with the {\it polarization\/} autocorrelation (EE) being the most crucial\ct{BS04}. \subsubsection{$\Phi$ and $\Phi+\Psi$} We can probe the inhomogeneous properties of the invisible universe more model-independently by constraining metric perturbations directly. Such constraints have a greater range of validity than general relativity, which is implied in deducing the dark dynamical properties. They can be placed meaningfully under the weaker assumption that only the visible sectors obey the equivalence principle, which is well constrained for them by terrestrial and solar-system probes. In this perspective, the discussed probes of the scalar component of anisotropic stress~$\sigma$ can be viewed as the direct probes of the difference of the Newtonian-gauge potentials $\Phi-\Psi$, eq.\rf{Psi-Phi}. The primary anisotropies of the CMB are a sufficiently clean probe of the sum $\Phi+\Psi$, entirely responsible for the gravitational driving of the scalar CMB inhomogeneities on all scales and epochs except for a narrow band of redshifts around $z_{\rm dec}\sim 1100$. CDM, on the other hand, responds strictly to the Newtonian potential~$\Phi$ on all scales and at all times, and so does the baryonic matter on large scales after baryon decoupling from the CMB. Trivially [eq.\rf{d_c_sol}], the matter growth function is almost directly proportional to past $\Phi(\tau)$ (as previously noted, weighted by the conformal time~$\tau$). The CMB is also affected by the gravitational potentials. Delay of the decay of $\Phi+\Psi$ after the horizon entry suppresses the CMB power by over an order of magnitude. 
[In the CDM-dominated limit, by $5^2=25$ times\ct{B06} --- more than the factor $2^2=4$ which appears in the apparent description of the Sachs-Wolfe effect in terms of the proper Newtonian perturbations.] This suppression of the low CMB multipoles is physical: It is absent in models where the metric in the matter era is unperturbed. The suppression is diminished in the models in which the metric is perturbed less than in the $\Lambda$CDM scenario. The prominent suppression of the CMB power at $\ell\lesssim100$ by the CDM potential is one of the primary reasons that models without dark matter provide poor fits to the CMB data. The suppression of the CMB temperature anisotropies for $\ell<100$ must not be explained as a ``resonant self-gravitational driving'' of radiation perturbations at $\ell\gtrsim200$: This effect is caused by and probes the inhomogeneities of matter after equality and not the inhomogeneities of radiation before equality. The suppression of $C_\ell$ for $\ell\lesssim 100$ severely restricts the alternatives to CDM and the models of dark energy which reduce metric perturbations at any redshift in the matter era. Examples of such mechanisms are a contribution of quintessence to the density in the matter era, interaction or unification of dark matter and dark energy \citep[e.g.,][]{Wetterich88,PerrottaBaccig02, FarrarPeebles04,Catena:2004ba,Scherrer_kess}, or MOND-inspired alternatives to dark matter\ct{Milgrom83,Bekenstein04}. \subsection{Modified gravity} \lb{sec_sum_MG} Many authors have suggested that modification of general relativity on cosmological scales is the cause of the cosmic acceleration \citep[for recent reviews see][]{Copeland:2006wr,Nojiri:2006ri} or even of the apparent manifestations of dark matter\ct{Milgrom83,Bekenstein04}. In Sec.~\ref{sec_ModGrav} we consider the phenomenology of typical models of modified gravity (MG) that retain the equivalence principle for the visible sectors. 
We show that in these models all the gravitational impact of the hidden physics can be described within the same parameterization schemes of Sec.~\ref{sec_dynamics}, developed to quantify the observable properties of dark sectors that are coupled by general relativity (GR). Indeed, these schemes were restricted only by the covariance of the visible dynamics, the assumption of the Einstein equations, and the local conservation of the dark energy and momentum. However, for any covariant visible dynamics, the formal dark energy-momentum tensor\rf{T_eff_dark} that is missing in the Einstein equations is covariantly conserved automatically (Sec.~\ref{sec_ModGrav}). Thus all observable signatures of MG can be mimicked by effective dark energy and momentum that influence the visible species according to the Einstein equations and during evolution are conserved locally. In particular, the nonstandard structure growth or $\Phi/\Psi$ ratio predicted by {\it any\/} MG model of the considered broad class can in principle be reproduced without violation of the Einstein equations by sufficiently peculiar dark dynamics. Nevertheless, first, such signatures would still signal some non-minimal physics and therefore should be considered for experimental constraints whenever possible. Second, GR remains falsifiable by other effects; in particular, by the violation of the equivalence principle by apparent dark dynamics. This may be manifested in the strong dependence of the dark dynamics on the visible matter (Sec.~\ref{sec_MG_scal}), and in the signatures of effective or real superluminal dark flows (e.g., Sec.~\ref{sec_MG_superlum}). GR can also be falsified by nonstandard phenomenology of gravitational waves \citep[e.g.][and Secs.~\ref{sec_MG_superlum} and~\ref{sec_MG_tens}]{KahyaWoodard07}. \section*{Acknowledgments} I am grateful to Salman Habib and Katrin Heitmann for valuable discussions, suggestions, and comments on the manuscript. 
I thank Daniel Holz and Gerry Jungman for stimulating talks and useful suggestions. This work was supported by the US Department of Energy via the LDRD program of Los Alamos.
\section{Introduction} Many applications of machine learning involve learning with graph structured data such as bioinformatics \cite{sharan2006modeling}, social networks \cite{scott2011social}, chemoinformatics \cite{trinajstic2018chemical}, and so on. To deal with graph structured data, many graph kernels have been proposed in literature for measuring the similarity between graphs in a kernel-based framework. Most of them are based on $\mathcal{R}$-framework, which focuses on comparing graphs based on their substructures such as subtree \cite{shervashidze2011weisfeiler}, shortest path \cite{borgwardt2005shortest}, random walk \cite{kashima2003marginalized}, and so on. However, these methods have several limitations: 1) they do not consider feature and structure distributions of graphs, 2) they require to define substructures based on the domain knowledge, which might not be available in many practical applications. Optimal Transport (OT) \cite{villani2009wasserstein} has received much attention in the machine learning community and has been shown to be an effective tool for comparing probability measures in many applications. In recent years, several studies have attempted to use the OT distance for learning graph structured data by considering the problem of measuring the similarity of graphs as an instance of computing OT distance for graphs. Togninalli et al. \cite{togninalli2019wasserstein} introduced the Wasserstein distance to compare graphs based on their node embeddings obtained by Weisfeiler-Lehmann labeling framework \cite{shervashidze2011weisfeiler}. Titouan et al. \cite{titouan2019optimal} proposed fused Gromov-Wasserstein (\texttt{FGW}) which combines Wasserstein and Gromov-Wasserstein \cite{memoli2011gromov,peyre2016gromov} distances in order to jointly take into account features and structures of graphs. These OT-based distances have achieved great performance for graph classification. 
However, they have several limitations: 1) kernel matrices converted from the OT-based distances are generally not valid, so they are not ready to use in a kernel-based framework; 2) calculating the similarity between each pair of graphs is computationally expensive, so computing the kernel matrix of all pairwise similarities can be a burden for large-scale graph data sets. In order to overcome the aforementioned limitations, inspired by the linear optimal transport framework introduced by Wang et al. \cite{wang2013linear}, we propose an OT-based distance, named \texttt{linearFGW}, for learning with graph structured data. As the name suggests, our distance is a generalization of the linear optimal transport framework and the \texttt{FGW} distance. The basic idea is to embed the node features and topology of a graph into a linear tangent space through a fixed reference measure graph. Then the \texttt{linearFGW} distance between two graphs is defined as the Euclidean distance between their two embeddings, which approximates their \texttt{FGW} distance. Therefore, the \texttt{linearFGW} distance has the following advantages: 1) it can take into account node features and topologies of graphs in the OT problem in order to calculate the dissimilarity between graphs; 2) we can derive a valid graph kernel from the embeddings of graphs for downstream tasks such as graph classification and clustering; 3) by using the \texttt{linearFGW} as an approximation of the \texttt{FGW}, we avoid the expensive computation of pairwise \texttt{FGW} distances for large-scale graph data sets. Finally, we conduct experiments on graph data sets to show the effectiveness of the proposed distance in terms of classification and clustering accuracies. The remainder of the paper is organised as follows: in Section 2, we present related work.
In Section 3, we present the idea of our proposed distance for learning with graph structured data and its theoretical properties. In Section 4, experimental results on benchmark graph data sets are provided. Finally, we conclude by summarizing this work and discussing possible extensions in Section 5. \section{Related Work} \subsection{Kernels for Graphs} Graphs are a standard representation for relational data, which appear in various domains such as bioinformatics \cite{sharan2006modeling}, chemoinformatics \cite{trinajstic2018chemical}, and social network analysis \cite{scott2011social}. Making use of graph kernels is a popular approach to learning with graph structured data. Essentially, a graph kernel is a measure of the similarity between two graphs and must satisfy two fundamental requirements to be a valid kernel: 1) symmetry and 2) positive semi-definiteness (PSD). There are a number of kernels for graphs with discrete attributes, such as the random walk \cite{kashima2003marginalized}, shortest path \cite{borgwardt2005shortest}, and Weisfeiler-Lehman (WL) subtree \cite{shervashidze2011weisfeiler} kernels, just to name a few. There are also several kernels for graphs with continuous attributes, such as the GraphHopper \cite{feragen2013scalable} and Hash Graph \cite{morris2016faster} kernels. \subsection{Optimal Transport Frameworks for Graphs} Optimal Transport (OT) \cite{villani2009wasserstein} has received much attention from the machine learning community as it provides an effective way to measure the distance between two probability measures. Several OT-based graph kernels have been proposed and have achieved great performance in comparison with traditional graph kernels. The Wasserstein Weisfeiler-Lehman (WWL) kernel \cite{togninalli2019wasserstein} used OT for measuring the distance between two graphs based on their WL embeddings (discrete feature vectors of subtree patterns). Nguyen et al.
\cite{nguyen2021learning} extended WWL by proposing an efficient algorithm for learning subtree pattern importance, leading to higher classification accuracy on graph data sets. However, these are not valid kernels for graphs with continuous attributes. Following the work in \cite{memoli2011gromov}, Peyre et al. \cite{peyre2016gromov} proposed a Gromov-Wasserstein distance to compare pairwise similarity matrices from different spaces. Then, Titouan et al. \cite{titouan2019optimal} proposed the fused Gromov-Wasserstein distance, which combines the Wasserstein and Gromov-Wasserstein distances in order to jointly leverage the feature and structure information of graphs. To reduce computational complexity, OT-based distances are often computed using the Sinkhorn algorithm \cite{sinkhorn1967diagonal, cuturi2013sinkhorn}. Due to the nature of the optimal assignment problem, these OT-based graph kernels yield indefinite similarity matrices and are therefore invalid kernels, leading to the use of support vector machines (SVM) with indefinite kernels as introduced in \cite{luss2007support}. \subsection{Linear Optimal Transport} Wang et al. \cite{wang2013linear} proposed a simplified version of OT in the 2-Wasserstein space, called linear optimal transport. In the sense of geometry, the basic idea is to transfer probability measures from the geodesic 2-Wasserstein space to the tangent space with respect to some fixed base or reference measure. One advantage is that we can work in a linear tangent space instead of the complex 2-Wasserstein space, so that downstream tasks such as classification and clustering can be done in the linear space. Another advantage is the fast approximation of pairwise Wasserstein distances for large-scale data sets. In the context of graph learning, Kolouri et al. \cite{kolouri2020wasserstein} leveraged the above framework and introduced the concept of linear Wasserstein embedding for learning graph embeddings. In a concurrent work, Mialon et al.
\cite{mialon2020trainable} proposed a similar idea for learning sets of features. In this paper, we extend the idea of the linear optimal transport framework from the 2-Wasserstein distance to the fused Gromov-Wasserstein distance (\texttt{FGW}), and define a valid graph kernel for learning with graph structured data. Furthermore, we derive theoretical understandings of the proposed distance. \section{Proposed Distance for Graphs: Linear Fused Gromov-Wasserstein} We denote a measure graph as $\mathcal{G}(\mathbf{X}, \mathbf{A}, \mu)$, where $\mathbf{X}=\{\mathbf{x}_{i}\}_{i=1}^{m}\in \mathbb{R}^{m\times d}$ is the set of $m$ node features with dimensionality $d$, $\mathbf{A}=\left[a_{ij}\right]\in \mathbb{R}^{m \times m}$ is a square matrix encoding the topology of the given graph, such as the adjacency matrix or the matrix of pairwise distances between nodes, and $\mu=\left[\mu_{i} \right]\in \Delta^{m}$ (the probability simplex) is a Borel probability measure defined on the nodes (when no additional information is provided, all probability measures are set to be uniform). \subsection{\texttt{FGW}: A Distance for Matching Node Features and Structures} \label{subsection:fgw} In \cite{titouan2019optimal}, a graph distance, named Fused Gromov-Wasserstein (\texttt{FGW}), is proposed to incorporate both node feature and topology information into the OT problem for measuring the dissimilarity between two graphs.
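For concreteness, the measure-graph triple $\mathcal{G}(\mathbf{X}, \mathbf{A}, \mu)$ above can be mirrored by a small container in code. The following Python sketch uses hypothetical class and field names (it is not part of any released implementation) and defaults to the uniform measure when none is given, as stated above:

```python
import numpy as np
from dataclasses import dataclass


@dataclass
class MeasureGraph:
    """G(X, A, mu): node features X (m, d), structure matrix A (m, m), node measure mu (m,)."""
    X: np.ndarray
    A: np.ndarray
    mu: np.ndarray = None

    def __post_init__(self):
        m = self.X.shape[0]
        if self.mu is None:
            # Uniform Borel measure on the nodes when no information is provided
            self.mu = np.full(m, 1.0 / m)
        assert self.A.shape == (m, m), "structure matrix must be m x m"
        assert abs(self.mu.sum() - 1.0) < 1e-9, "mu must be a probability measure"
```

Here `A` may hold either the adjacency matrix or a matrix of pairwise node distances, mirroring the flexibility of $\mathbf{A}$ in the definition.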
Formally, given two graphs $\mathcal{G}_{1}(\mathbf{X}, \mathbf{A}, \mu)$ and $\mathcal{G}_{2}(\mathbf{Y}, \mathbf{B}, \nu)$, the \texttt{FGW} distance between $\mathcal{G}_{1}$ and $\mathcal{G}_{2}$ is defined for a trade-off parameter $\alpha\in \left[0,1 \right]$ as: \begin{align} \label{eqn:fgw} \texttt{FGW}_{q,\alpha}(\mathcal{G}_{1}, \mathcal{G}_{2})=\min_{\pi \in \Pi(\mu,\nu)}\sum_{i,j,k,l}\left((1-\alpha)\lVert \mathbf{x}_{i}-\mathbf{y}_{j}\rVert^{q}+\alpha |\mathbf{A}_{i,k}-\mathbf{B}_{j,l}|^{q}\right)\pi_{i,j}\pi_{k,l} \end{align} where $\Pi(\mu,\nu)=\{\pi\in \mathbb{R}_{+}^{m\times n} \text{ s.t. } \sum_{i=1}^{m}\pi_{i,j}=\nu_{j}\text{, } \sum_{j=1}^{n}\pi_{i,j}=\mu_{i} \}$ is the set of all admissible couplings between $\mu$ and $\nu$. The \texttt{FGW} distance generalizes the Wasserstein \cite{villani2009wasserstein} and Gromov-Wasserstein \cite{memoli2011gromov} distances, allowing the importance of matching the node features and topologies of two graphs to be balanced. However, similar to the existing OT-based graph distances, it is challenging to define a valid kernel from the \texttt{FGW} for graph-related prediction tasks, due to the nature of the optimal assignment problem. In the following, we restrict our attention to the OT problem with $q=2$ and, for ease of presentation, write $\texttt{FGW}_{\alpha}$ instead of $\texttt{FGW}_{q,\alpha}$. \subsection{\texttt{linearFGW}: A New Distance for Comparing Graphs} \label{subsection:linearFGW} In order to overcome the limitations of the \texttt{FGW} distance, we propose to approximate it by a linear optimal transport framework, which we call Linear Fused Gromov-Wasserstein (\texttt{linearFGW}). Computing the \texttt{linearFGW} distance requires a reference, which we choose to be a measure graph $\overline{\mathcal{G}}(\mathbf{Z}, \mathbf{C}, \sigma)$ as well. How the reference measure graph is chosen is described later in Subsection \ref{subsection:reference}.
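For concreteness, the objective of Equation (\ref{eqn:fgw}) with $q=2$ can be evaluated in closed form for a fixed admissible coupling, using the standard expansion of the quadratic structure term. The sketch below (with hypothetical function names, not the authors' released code) assumes the coupling has total mass one, as any admissible coupling does:

```python
import numpy as np


def fgw_cost(X, A, Y, B, pi, alpha):
    """Evaluate the FGW objective (q = 2) for a fixed admissible coupling pi.

    X: (m, d), A: (m, m) -- node features and structure matrix of G1
    Y: (n, d), B: (n, n) -- node features and structure matrix of G2
    pi: (m, n) coupling with total mass 1, alpha: trade-off in [0, 1]
    """
    # Feature term: sum_ij ||x_i - y_j||^2 pi_ij (the sum over pi_kl equals 1)
    D = np.sum((X[:, None, :] - Y[None, :, :]) ** 2, axis=2)
    feat = np.sum(D * pi)
    # Structure term sum_ijkl (A_ik - B_jl)^2 pi_ij pi_kl, expanded as
    # mu' A^2 mu + nu' B^2 nu - 2 <pi, A pi B'> with mu, nu the marginals of pi
    mu, nu = pi.sum(axis=1), pi.sum(axis=0)
    struct = mu @ (A ** 2) @ mu + nu @ (B ** 2) @ nu - 2.0 * np.sum(pi * (A @ pi @ B.T))
    return (1 - alpha) * feat + alpha * struct
```

The expansion of the structure term avoids materializing the four-index tensor and is the same trick that underlies the linearized cost used in the implementation details below.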
To precisely define the \texttt{linearFGW} distance, we first define the barycentric projections for node features and structures of graphs as follows: \begin{defn} (Barycentric projections for nodes and edges of graphs). Let $\overline{\mathcal{G}}(\mathbf{Z}, \mathbf{C}, \sigma)$ be a reference measure graph with $K$ nodes, $\mathcal{G}(\mathbf{X}, \mathbf{A}, \mu)$ a target measure graph, and $\pi =\sum_{k,i}\pi_{k,i}\delta_{(\mathbf{z}_k, \mathbf{x}_{i})}$ a transport plan between them. Then the barycentric projections for nodes and edges of the measure graph $\overline{\mathcal{G}}$ using the transport plan $\pi$ are defined as follows: \begin{equation} \label{eqn:barycentricprojections} T_{\text{n}, \pi}(\mathbf{z}_{k})=\frac{1}{\sigma_{k}}\sum_{i}\pi_{ki}\mathbf{x}_{i} \text{ and } T_{\text{e}, \pi}(\mathbf{C}_{k,l})=\frac{1}{\sigma_{k}\sigma_{l}}\sum_{i,j}\pi_{k,i}\pi_{l,j}\mathbf{A}_{ij} \text{, where } k,l=\overline{1,K} \end{equation} \label{def:barycentricprojections} \end{defn} The definitions of these projections extend those in \cite{wang2013linear,beier2021linear}. Furthermore, we derive their properties in the following lemma. \begin{lem} \label{lemma:optimalplan} Given two measure graphs $\mathcal{G}(\mathbf{X}, \mathbf{A}, \mu)$ and $\overline{\mathcal{G}}(\mathbf{Z}, \mathbf{C}, \sigma)$, we denote $\pi^{*}$ as the optimal transport plan from $\overline{\mathcal{G}}$ to $\mathcal{G}$ with respect to the \texttt{FGW} distance, and $\Tilde{\mathcal{G}}(\Tilde{\mathbf{Z}},\Tilde{\mathbf{C}},\sigma)$ as the probability measure graph obtained by applying the barycentric projections for nodes and edges $T_{\text{n}, \pi^{*}}(.)$ and $T_{\text{e}, \pi^{*}}(.)$, respectively (see Definition \ref{def:barycentricprojections}). Then, we have the following claims: \begin{enumerate} \item $\operatorname{diag}(\sigma)=\begin{bmatrix}\sigma_{1} & 0 & 0\\0 & \ddots & 0\\ 0 & 0 & \sigma_{K}\end{bmatrix}$ is the optimal transport plan from $\overline{\mathcal{G}}$ to $\Tilde{\mathcal{G}}$ in the sense of the \texttt{FGW} distance.
\item $\texttt{FGW}_{\alpha}(\overline{\mathcal{G}}, \Tilde{\mathcal{G}})\leq \texttt{FGW}_{\alpha}(\overline{\mathcal{G}}, \mathcal{G})$. \end{enumerate} \end{lem} The proof is given in the Appendix section. An important implication of the above lemma is that $\Tilde{\mathcal{G}}$ can be considered as a surrogate measure graph for $\mathcal{G}$ with respect to the reference $\overline{\mathcal{G}}$. Thus we propose to define the \texttt{linearFGW} distance between two measure graphs $\mathcal{G}_{1}$ and $\mathcal{G}_{2}$ with respect to the reference measure graph $\overline{\mathcal{G}}$ as follows: \begin{align} \label{eqn:linearfgw} \texttt{linearFGW}_{\alpha}(\mathcal{G}_{1}, \mathcal{G}_{2})=(1-\alpha) \sum_{k}\lVert T_{\text{n}, \pi_{1}}(\mathbf{z}_{k})-T_{\text{n}, \pi_{2}}(\mathbf{z}_{k})\rVert^{2} + \alpha \sum_{k,l}|T_{\text{e}, \pi_{1}}(\mathbf{C}_{k,l})-T_{\text{e}, \pi_{2}}(\mathbf{C}_{k,l})|^{2} \end{align} where $\pi_{1}$ and $\pi_{2}$ denote the optimal transport plans from $\overline{\mathcal{G}}$ to $\mathcal{G}_{1}$ and $\mathcal{G}_{2}$, respectively, in the sense of the \texttt{FGW} distance. We call this distance \texttt{linearFGW} as it acts as a generalization of linear optimal transport \cite{wang2013linear} and \texttt{FGW} \cite{titouan2019optimal}. Furthermore, the proposed distance induces an explicit Euclidean embedding of the measure graph $\mathcal{G}_{1}$ with respect to the reference measure graph $\overline{\mathcal{G}}$: $\Phi_{\overline{\mathcal{G}}, \alpha}(\mathcal{G}_{1})=\left(\sqrt{1-\alpha}T_{\text{n}, \pi_{1}}(\mathbf{z}_{1}),...,\sqrt{1-\alpha}T_{\text{n}, \pi_{1}}(\mathbf{z}_{K}),...,\sqrt{\alpha}T_{\text{e}, \pi_{1}}(\mathbf{C}_{k,l}),...\right)$ of dimension $Kd+K^{2}$ (for $d$-dimensional node features). So we can derive a valid kernel for graph-related prediction tasks. The computation of the \texttt{linearFGW} is illustrated in Figure \ref{fig:linearfgw}.
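The barycentric projections of Definition \ref{def:barycentricprojections} and the distance of Equation (\ref{eqn:linearfgw}) translate directly into a few lines of matrix arithmetic. The following Python sketch (illustrative only, with hypothetical names; here the node features and structure of the target graph are transported through the plan) also makes the explicit embedding $\Phi_{\overline{\mathcal{G}},\alpha}$ concrete:

```python
import numpy as np


def barycentric_projections(pi, X, A, sigma):
    """Map the reference nodes/edges through the plan pi (Definition 1).

    pi:    (K, m) transport plan from the reference graph to G(X, A, mu)
    X:     (m, d) node features of G,  A: (m, m) structure matrix of G
    sigma: (K,) node weights of the reference graph
    """
    Z_tilde = (pi @ X) / sigma[:, None]                  # T_n(z_k)
    C_tilde = (pi @ A @ pi.T) / np.outer(sigma, sigma)   # T_e(C_kl)
    return Z_tilde, C_tilde


def linear_fgw(proj1, proj2, alpha):
    """linearFGW distance from the barycentric projections of two graphs."""
    (Z1, C1), (Z2, C2) = proj1, proj2
    return (1 - alpha) * np.sum((Z1 - Z2) ** 2) + alpha * np.sum((C1 - C2) ** 2)


def embedding(proj, alpha):
    """Explicit Euclidean embedding Phi: linearFGW is the squared distance between embeddings."""
    Z, C = proj
    return np.concatenate([np.sqrt(1 - alpha) * Z.ravel(),
                           np.sqrt(alpha) * C.ravel()])
```

With the identity plan $\operatorname{diag}(\sigma)$ the projections recover the target graph exactly, in line with the first claim of Lemma \ref{lemma:optimalplan}.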
\begin{figure}[t] \centerline{\includegraphics[width=0.9\columnwidth]{linearfgw.png}} \caption{Illustration of the computation of the \texttt{linearFGW} distance between $\mathcal{G}_{1}(\mathbf{X},\mathbf{A},\mu)$ and $\mathcal{G}_{2}(\mathbf{Y},\mathbf{B},\nu)$, given the fixed reference measure graph $\overline{\mathcal{G}}(\mathbf{Z},\mathbf{C},\sigma)$. First, we find the optimal transport plans $\pi_{1}$ and $\pi_{2}$ from $\overline{\mathcal{G}}$ to $\mathcal{G}_{1}$ and $\mathcal{G}_{2}$, respectively, in the sense of the \texttt{FGW}. Then we transport $\overline{\mathcal{G}}$ with the barycentric projections for nodes and edges (see Definition \ref{def:barycentricprojections}) using the optimal plans $\pi_{1}$ and $\pi_{2}$ to get the surrogate measure graphs $\Tilde{\mathcal{G}}_{1}(\Tilde{\mathbf{Z}}^{(1)}, \Tilde{\mathbf{C}}^{(1)}, \sigma)$ and $\Tilde{\mathcal{G}}_{2}(\Tilde{\mathbf{Z}}^{(2)}, \Tilde{\mathbf{C}}^{(2)}, \sigma)$ for $\mathcal{G}_{1}$ and $\mathcal{G}_{2}$, respectively. Finally, the Euclidean distance between $\Tilde{\mathcal{G}}_{1}$ and $\Tilde{\mathcal{G}}_{2}$ can be directly calculated using Equation (\ref{eqn:linearfgw}).} \label{fig:linearfgw} \end{figure} \subsection{Selection of Reference Measure Graph} \label{subsection:reference} Selecting the reference measure graph in Subsection \ref{subsection:linearFGW} is important. We empirically observe that if the reference is randomly selected or distant from all measure graphs, the approximation error between \texttt{FGW} and \texttt{linearFGW} is likely to increase. In the lemma presented below, we show the relation between \texttt{FGW} and \texttt{linearFGW} with respect to the reference measure graph. 
\begin{lem} \label{Lemma2} We denote the mixing diameter of a graph $\mathcal{G}(\mathbf{X}, \mathbf{A}, \mu)$ by $\texttt{diam}_{\alpha}(\mathcal{G})=(1-\alpha) \max_{i,j}\lVert \mathbf{x}_{i}-\mathbf{x}_{j} \rVert^{2} + \alpha\max_{i,j,i^\prime, j^\prime}|\mathbf{A}_{i,j}-\mathbf{A}_{i^\prime, j^\prime}|^{2}$, where the feature and structure terms carry the same weights as in the \texttt{FGW} distance. Then, given a fixed reference measure graph $\overline{\mathcal{G}}(\mathbf{Z}, \mathbf{C}, \sigma)$, for two input measure graphs $\mathcal{G}_{1}(\mathbf{X}, \mathbf{A}, \mu)$ and $\mathcal{G}_{2}(\mathbf{Y}, \mathbf{B}, \nu)$, we have the following inequality: \begin{equation} \label{lemma:barycentric} |\texttt{FGW}_{\alpha}(\mathcal{G}_{1}, \mathcal{G}_{2})-\texttt{linearFGW}_{\alpha}(\mathcal{G}_{1}, \mathcal{G}_{2})|\leq 4\min\{\texttt{FGW}_{\alpha}(\mathcal{G}_{1}, \overline{\mathcal{G}}),\texttt{FGW}_{\alpha}(\mathcal{G}_{2}, \overline{\mathcal{G}})\} + 2\texttt{diam}_{\alpha}(\mathcal{G}_{1}) + 2\texttt{diam}_{\alpha}(\mathcal{G}_{2}) \end{equation} \end{lem} The proof is given in the Appendix section. A corollary of the above lemma suggests how to select a good reference measure graph $\overline{\mathcal{G}}$: given $N$ graphs $(\mathcal{G}_{1},...,\mathcal{G}_{N})$, the total approximation error is upper bounded by: \begin{equation} \sum_{i=1}^{N}\sum_{j=i+1}^{N}|\texttt{FGW}_{\alpha}(\mathcal{G}_{i}, \mathcal{G}_{j})-\texttt{linearFGW}_{\alpha}(\mathcal{G}_{i}, \mathcal{G}_{j})|\leq 4\sum_{i=1}^{N}\texttt{FGW}_{\alpha}(\mathcal{G}_{i}, \overline{\mathcal{G}}) + 4\sum_{i=1}^{N}\texttt{diam}_{\alpha}(\mathcal{G}_{i}) \end{equation} where the right-hand side has two terms: the first term is the objective of the fused Gromov-Wasserstein barycenter problem \cite{titouan2019optimal}, while the second term is constant with respect to the reference measure graph $\overline{\mathcal{G}}$, suggesting that we can use the fused Gromov-Wasserstein barycenter of the $N$ given measure graphs as the reference.
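Computing a full fused Gromov-Wasserstein barycenter can itself be costly. A cheap illustrative alternative (a simplification sketched here for intuition, not the procedure advocated above) is to pick the medoid of the data set, i.e., the graph minimizing the sum of distances to all others; this directly minimizes the first term of the bound when the candidate references are restricted to the data set itself:

```python
def select_medoid_reference(graphs, dist_fn):
    """Pick the index of the medoid: the element minimizing the sum of
    distances to all others.

    graphs:  list of graph objects (any representation)
    dist_fn: callable (g1, g2) -> float, e.g. an FGW distance solver
    """
    totals = [sum(dist_fn(g, h) for h in graphs) for g in graphs]
    return min(range(len(graphs)), key=totals.__getitem__)
```

The medoid needs $N(N-1)/2$ distance evaluations once, so it only pays off when the reference is reused across many downstream computations; the barycenter remains the principled choice suggested by the corollary.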
\subsection{Implementation Details} The \texttt{FGW} is the main component of our method. We use the proximal point algorithm (PPA) \cite{xu2020gromov} to implement the \texttt{FGW}. Specifically, given two graphs $\mathcal{G}_{1}$ and $\mathcal{G}_{2}$, we solve the problem (\ref{eqn:fgw}) iteratively (with at most $T$ iterations) as follows: \begin{align} \pi^{(t+1)}=\mathop{\rm arg~min}\limits_{\pi \in \Pi(\mu,\nu)}\langle (1-\alpha)\mathbf{D}_{12} + \alpha (\mathbf{C}_{12}-2 \mathbf{A}\pi^{(t)}\mathbf{B}), \pi \rangle + \eta \textbf{KL}(\pi|| \pi^{(t)}) \label{eqn:sinkhorniterations} \end{align} where $\langle\cdot,\cdot\rangle$ denotes the inner product of matrices, $\mathbf{D}_{12}=(\mathbf{X}\odot \mathbf{X})\mathbf{1}_{d}\mathbf{1}_{n}^\top+\mathbf{1}_{m}\mathbf{1}_{d}^\top(\mathbf{Y}\odot \mathbf{Y})^\top-2\mathbf{X}\mathbf{Y}^\top$ is the matrix of pairwise squared feature distances, $\mathbf{C}_{12}=(\mathbf{A}\odot \mathbf{A})\mu \mathbf{1}_{n}^\top+\mathbf{1}_{m}\nu^\top(\mathbf{B}\odot \mathbf{B})^\top$, and $\odot$ denotes the Hadamard product of matrices. $\textbf{KL}(\pi|| \pi^{(t)})$ is the Kullback-Leibler divergence between the transport plan and the previous estimate. We can approximately solve the above problem with Sinkhorn-Knopp updates (see \cite{xu2020gromov} for the algorithmic details). \begin{table}[] \centering \caption{Statistics of data sets used in experiments} \begin{tabular}{c|c c c c c} \hline\hline Dataset & \#graphs & \#classes & Avg. \#nodes & Avg. \#edges & \#attributes\\ \hline COX2 & 467 & 2 & 41.22 & 43.45 & 3\\ BZR & 405 & 2 & 35.75 & 38.36 & 3\\ ENZYMES & 600 & 6 & 32.63 & 62.14 & 18\\ PROTEINS & 1113 & 2 & 39.06 & 72.82 & 1\\ PROTEINS-F & 1113 & 2 & 39.06 & 72.82 & 29\\ AIDS & 2000 & 2 & 15.69 & 16.20 & 4\\ IMDB-B & 1000 & 2 & 19.77 & 96.53 & -\\ \hline\hline \end{tabular} \label{tab:datasets} \end{table} \section{Experimental Results} We now show the effectiveness of our proposed graph distance on real-world data sets in terms of graph classification and clustering.
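The KL-proximal subproblem of Equation (\ref{eqn:sinkhorniterations}) admits a Sinkhorn-Knopp solution with the Gibbs kernel $\pi^{(t)}\odot\exp(-G/\eta)$, where $G$ denotes the linearized cost $(1-\alpha)\mathbf{D}_{12}+\alpha(\mathbf{C}_{12}-2\mathbf{A}\pi^{(t)}\mathbf{B})$. The sketch below (hypothetical names, not the authors' released implementation) illustrates a single proximal step:

```python
import numpy as np


def kl_proximal_step(G, pi_prev, mu, nu, eta, n_iter=1000):
    """Solve argmin_{pi in Pi(mu, nu)} <G, pi> + eta * KL(pi || pi_prev).

    The minimizer has the form diag(u) K diag(v) with the Gibbs kernel
    K = pi_prev * exp(-G / eta); u and v follow from Sinkhorn-Knopp scaling.
    """
    K = pi_prev * np.exp(-G / eta)
    u = np.ones_like(mu)
    for _ in range(n_iter):
        v = nu / (K.T @ u)   # match column marginals
        u = mu / (K @ v)     # match row marginals
    return u[:, None] * K * v[None, :]
```

The full PPA loop would recompute $G$ from the current plan and call this step $T$ times, warm-starting each step from the previous plan.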
Our code can be accessed via the following link: \texttt{https://github.com/haidnguyen0909/linearFGW}. \subsection{Data sets} In this work, we focus on graph kernels/distances for graphs with continuous attributes. So we consider the following seven widely used benchmark data sets: BZR \cite{sutherland2003spline}, COX2 \cite{sutherland2003spline}, ENZYMES \cite{dobson2003distinguishing}, PROTEINS \cite{borgwardt2005protein}, PROTEINS-F \cite{borgwardt2005protein}, and AIDS \cite{riesen2008iam} contain graphs with continuous attributes, while IMDB-B \cite{yanardag2015deep} contains unlabeled graphs obtained from social networks. All these data sets can be downloaded from \texttt{https://ls11-www.cs.tu-dortmund.de/staff/morris/graphkerneldatasets}. The details of the used data sets are shown in Table \ref{tab:datasets}. \subsection{Experimental settings} To compute numerical features for the nodes of graphs, we consider two main settings: 1) we keep the original attributes of the nodes (denoted by the suffix \texttt{RAW}); 2) we use the Weisfeiler-Lehman (WL) mechanism, concatenating the numerical vectors of neighboring nodes (denoted by the suffix \texttt{WL}-$H$, where $H$ means we repeat the procedure $H$ times so that neighboring vertices within $H$ hops are taken into account; see \cite{shervashidze2011weisfeiler} for more detail). For the matrix $\mathbf{A}$, we restrict our attention to the adjacency matrices of the input graphs. For solving the optimization problem (\ref{eqn:sinkhorniterations}), we fix $\eta=0.1$ and the number of iterations $T=5$. We carry out our experiments on a 2.4 GHz 8-Core Intel Core i9 with 64GB RAM. For the classification task, we convert a distance into a kernel matrix through the exponential function, i.e., $K=\exp(-\gamma D)$ (Gaussian kernel).
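Because the \texttt{linearFGW} comes with an explicit Euclidean embedding, the Gaussian kernel $K=\exp(-\gamma D)$ built from the resulting squared distances is positive semi-definite. A minimal sketch (hypothetical function names) of this conversion:

```python
import numpy as np


def pairwise_sq_dists(E):
    """Squared Euclidean distances between rows of an embedding matrix E (N, p)."""
    sq = np.sum(E ** 2, axis=1)
    D = sq[:, None] + sq[None, :] - 2.0 * (E @ E.T)
    return np.maximum(D, 0.0)  # clip tiny negatives caused by round-off


def gaussian_kernel(E, gamma):
    """K = exp(-gamma * D): a valid PSD kernel since D comes from an explicit embedding."""
    return np.exp(-gamma * pairwise_sq_dists(E))
```

Note that applying the same formula to raw \texttt{FGW} distances gives no such PSD guarantee, which is exactly the limitation the embedding removes.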
We compare the classification accuracy with the following state-of-the-art graph kernels (or distances): GraphHopper kernel (GH, \cite{feragen2013scalable}), HGK-WL \cite{morris2016faster}, HGK-SP \cite{morris2016faster}, RBF-WL \cite{togninalli2019wasserstein}, Wasserstein Weisfeiler-Lehman kernel (WWL, \cite{togninalli2019wasserstein}), FGW \cite{titouan2019optimal}, and GWF \cite{xu2020gromov}. We divide them into two groups: OT-based graph kernels, including WWL, FGW, GWF, and \texttt{linearFGW} (ours), and non-OT graph kernels, including GH, HGK-WL, HGK-SP, and RBF-WL. Note that our proposed graph kernel converted from the \texttt{linearFGW} is the only (valid) positive definite kernel among the OT-based graph kernels. We perform 10-fold cross validation and report the average accuracy over 10 repetitions of the experiment. The accuracies of the other graph kernels are taken from the original papers. We use SVM for classification and cross-validate the parameters $C\in\{2^{-5},2^{-4},...,2^{10}\}$ and $\gamma\in\{10^{-2},10^{-1},...,10^{2}\}$. The WL parameter $H$ ranges over $\{1,2\}$. For our proposed \texttt{linearFGW}, $\alpha$ is cross-validated via a search in $\{0.0,0.3,0.5,0.7,0.9,1.0\}$. Note that the linear optimal transport \cite{wang2013linear} is a special case of the \texttt{linearFGW} with $\alpha=0$. We also compare the clustering accuracy with the OT-based graph distances \texttt{FGW}, GWB-KM, and GWF on four real-world data sets: AIDS, PROTEINS, PROTEINS-F, and IMDB-B. For a fair comparison, we use K-means and spectral clustering on the Euclidean embedding and the Gaussian kernel of the proposed \texttt{linearFGW} distance (denoted by \texttt{linearFGW-Kmeans} and \texttt{linearFGW-SC}, respectively). We fix the parameters $H=1$ and $\alpha=0.5$ for data sets of graphs with continuous attributes, and $\gamma=0.01$ for the Gaussian kernel. \begin{table}[] \centering \caption{Average classification accuracy on the graph data sets with vector attributes.
The best result for each column (data set) is highlighted in bold and the standard deviation is reported with the symbol $\pm$.} \begin{tabular}{l l l l l l l} \hline\hline & Kernels/Data sets & COX2 & BZR & ENZYMES & PROTEINS & IMDB-B\\ \hline \multirow{4}{*}{Non OT} & GH & 76.41$\pm$1.39 & 76.49$\pm$0.9 & 65.65$\pm$0.8 & 74.48$\pm$0.3 & -\\ & HGK-WL & 78.13$\pm$0.45 & 78.59$\pm$0.63 & 63.04$\pm$0.65 & 75.93$\pm$0.17 & -\\ & HGK-SP & 72.57$\pm$1.18 & 76.42$\pm$0.72 & 66.36$\pm$0.37 & 75.78$\pm$0.17 & -\\ & RBF-WL & 75.45$\pm$1.53 & 80.96$\pm$1.67 & 68.43$\pm$1.47 & 75.43$\pm$0.28 & -\\ \hline \multirow{6}{*}{OT-based} & WWL & 78.29$\pm$0.47 & 84.42$\pm$2.03 & 73.25$\pm$0.87 & 77.91$\pm$0.8 & -\\ & \texttt{FGW} & 77.2$\pm$4.7 & 84.1$\pm$4.1 & 71.0$\pm$6.7 & 75.1$\pm$2.9 & \textbf{64.2}$\pm$3.3\\ & \texttt{GWF} & - & - & - & 73.7$\pm$2.0 & 63.9$\pm$2.7\\ & \texttt{linearFGW}-RAW (Ours) & 79.74$\pm$1.99 & \textbf{86.07}$\pm$1.64 & 83.25$\pm$2.44 & 82.49$\pm$1.75 & 63.62$\pm$1.9\\ & \texttt{linearFGW}-WL1 (Ours) & \textbf{79.98}$\pm$3.21 & 84.80$\pm$2.95 & \textbf{85.28}$\pm$1.64 & 83.29$\pm$1.63 & -\\ & \texttt{linearFGW}-WL2 (Ours) & 79.50$\pm$3.29 & 84.37$\pm$2.75 & 83.13$\pm$1.56 & \textbf{83.95}$\pm$1.12 & -\\ \hline\hline \end{tabular} \label{tab:classificationresults} \end{table} \subsection{Results} \textbf{Classification:} The average classification accuracies shown in Table \ref{tab:classificationresults} indicate that the \texttt{linearFGW} achieves state-of-the-art performance for graph classification: it obtains the best result on 4 of the 5 data sets. In particular, on the two data sets ENZYMES and PROTEINS, the \texttt{linearFGW} outperformed all the other methods by large margins (around 12\% and 6\%, respectively) over the second best ones. On COX2 and BZR, the \texttt{linearFGW} achieved improvements of around 2\% and 1.5\%, respectively, over WWL, the second best method.
Note that the Gaussian kernel derived from WWL is not valid for the data sets of graphs with continuous attributes (see \cite{togninalli2019wasserstein}). On IMDB-B, the average accuracies of the compared methods are comparable. Interestingly, even though our \texttt{linearFGW} is an approximation of the \texttt{FGW} distance, the \texttt{linearFGW} consistently achieved significantly higher performance than \texttt{FGW}. This can be explained by the fact that the kernel derived from the \texttt{linearFGW} distance is valid. \textbf{Clustering:} The average clustering accuracies shown in Table \ref{tab:clusteringresults} also indicate that the \texttt{linearFGW} achieves high clustering performance. On PROTEINS and PROTEINS-F, the \texttt{linearFGW} achieved the highest accuracies, with margins of around 2\% and 3\%, respectively, over the second best method. On AIDS and IMDB-B, the \texttt{linearFGW} achieved performance comparable with GWF-PPA, the best performer. \begin{table}[] \centering \caption{Average clustering accuracy on the graph data sets with continuous attributes.
The best result for each column (data set) is highlighted in bold and the standard deviation is reported with the symbol $\pm$.} \begin{tabular}{l l l l l} \hline\hline Methods/Data sets & AIDS & PROTEINS & PROTEINS-F & IMDB-B \\ \hline FGW & 91.0$\pm$0.7 & 66.4$\pm$0.8 & 66.0$\pm$0.9 & 56.7$\pm$1.5\\ GWB-KM & 95.2$\pm$0.9 & 64.7$\pm$1.1 & 62.9$\pm$1.3 & 53.5$\pm$2.3\\ GWF-BADMM & 97.6$\pm$0.8 & 69.2$\pm$1.0 & 68.1$\pm$1.1 & 55.9$\pm$1.8\\ GWF-PPA & \textbf{99.5}$\pm$0.4 & 70.7$\pm$0.7 & 69.3$\pm$0.8 & \textbf{60.2}$\pm$1.6\\ \hline \texttt{linearFGW}-Kmeans (Ours) & 98.7$\pm$1.2 & 70.58$\pm$0.57 & 71.46$\pm$1.03 & 54.49$\pm$0.3\\ \texttt{linearFGW}-SC (Ours) & 98.2$\pm$0.83 & \textbf{72.70}$\pm$0.03 & \textbf{73.33}$\pm$0.82 & 58.3$\pm$0.8\\ \hline\hline \end{tabular} \label{tab:clusteringresults} \end{table} \textbf{Runtime Analysis:} By using the \texttt{linearFGW}, we reduce the computational complexity of calculating the pairwise \texttt{FGW} distances for a data set of $N$ graphs from quadratic in $N$ (i.e., $N(N-1)/2$ \texttt{FGW} computations) to linear in $N$ (i.e., $N$ computations of the \texttt{FGW} distance from each graph to the reference measure graph). We compare the running time of \texttt{linearFGW} and \texttt{FGW} with the same setting as in the classification task, with $\alpha$ fixed at 0.5 for the labeled graph data sets and 0.0 for IMDB-B (unlabeled). In Table \ref{tab:runningtimeanalysis}, we report the total running time (both training and inference) of the methods on the 5 data sets used for the classification experiments. It shows that the \texttt{linearFGW} is much faster than \texttt{FGW} on all considered data sets (roughly 7 times faster on COX2, BZR, ENZYMES, and PROTEINS, and 3 times faster on IMDB-B). These numbers confirm the computational efficiency of \texttt{linearFGW}, making it possible to analyze large-scale graph data sets.
\begin{table}[] \centering \caption{The total training and inference time (in seconds) averaged over the 10 folds of cross-validation (with fixed $\alpha$) for different data sets. The standard deviation is reported with the symbol $\pm$.} \begin{tabular}{l l l l l l} \hline\hline Methods/Data sets & COX2 & BZR & ENZYMES & PROTEINS & IMDB-B \\ \hline \texttt{FGW} & 520.21$\pm$21.15 & 347.78$\pm$5.21 & 817.31$\pm$7.49 & 3224.36$\pm$125.02 & 1235.33$\pm$83.28\\ \texttt{linearFGW} & 72.43$\pm$0.16 & 53.81$\pm$0.2 & 146.26$\pm$1.64 & 431.25$\pm$9.25 & 358.92$\pm$10.41\\ \hline\hline \end{tabular} \label{tab:runningtimeanalysis} \end{table} \section{CONCLUSION AND FUTURE WORK} We have developed an OT-based distance for learning with graph structured data. The key idea is to embed the node features and topology of a graph into a linear tangent space, where the Euclidean distance between the embeddings of two graphs approximates their \texttt{FGW} distance. The proposed distance is in fact a generalization of the linear optimal transport \cite{wang2013linear} and the \texttt{FGW} distance. Thus it has the following advantages: 1) like the \texttt{FGW} distance, it incorporates node features and topologies of graphs into the OT problem for computing the dissimilarity between two graphs; 2) we can derive a valid kernel for graphs from the proposed distance, while the existing OT-based graph kernels are invalid; and 3) it provides a fast approximation of the pairwise \texttt{FGW} distances, making it more efficient to deal with large-scale graph data sets. We conducted experiments on benchmark graph data sets for both classification and clustering tasks, demonstrating the effectiveness of the proposed distance. In this work, we suggested using the fused Gromov-Wasserstein barycenter \cite{titouan2019optimal} as the reference measure graph.
Thanks to the differentiablity of OT frameworks using techniques such as entropic regularization \cite{cuturi2013sinkhorn}, one possibility for future work is to learn the reference measure graph by updating the reference to minimize the supervision loss. The classification performance will be improved with the label information of graphs used in the training process. Another possibility would be to incorporate the \texttt{linearFGW} into graph-based deep learning models for learning with graph structured data. \bibliographystyle{abbrvnat} \section{Introduction} Many applications of machine learning involve learning with graph structured data such as bioinformatics \cite{sharan2006modeling}, social networks \cite{scott2011social}, chemoinformatics \cite{trinajstic2018chemical}, and so on. To deal with graph structured data, many graph kernels have been proposed in literature for measuring the similarity between graphs in a kernel-based framework. Most of them are based on $\mathcal{R}$-framework, which focuses on comparing graphs based on their substructures such as subtree \cite{shervashidze2011weisfeiler}, shortest path \cite{borgwardt2005shortest}, random walk \cite{kashima2003marginalized}, and so on. However, these methods have several limitations: 1) they do not consider feature and structure distributions of graphs, 2) they require to define substructures based on the domain knowledge, which might not be available in many practical applications. Optimal Transport (OT) \cite{villani2009wasserstein} has received much attention in the machine learning community and has been shown to be an effective tool for comparing probability measures in many applications. In recent years, several studies have attempted to use the OT distance for learning graph structured data by considering the problem of measuring the similarity of graphs as an instance of computing OT distance for graphs. Togninalli et al. 
\cite{togninalli2019wasserstein} introduced the Wasserstein distance to compare graphs based on their node embeddings obtained by Weisfeiler-Lehmann labeling framework \cite{shervashidze2011weisfeiler}. Titouan et al. \cite{titouan2019optimal} proposed fused Gromov-Wasserstein (\texttt{FGW}) which combines Wasserstein and Gromov-Wasserstein \cite{memoli2011gromov,peyre2016gromov} distances in order to jointly take into account features and structures of graphs. These OT-based distances have achieved great performance for graph classification. However, they have several limitations: 1) kernel matrices converted from the OT-based distances are generally not valid, so they are not ready to use for the kernel-based framework, 2) calculating the similarity between each pair of graphs is computationally expensive, so the need for computing the kernel matrix of all pairwise similarity can be a burden for dealing with large-scale graph data sets. In order to overcome the aforementioned limitations, inspired by the linear optimal transport framework introduced by Wang et al. \cite{wang2013linear}, we propose an OT-based distance, named \texttt{linearFGW}, for learning with graph structured data. As the name suggests, our distance is a generalization of the linear optimal transport framework and \texttt{FGW} distance. The basic idea is to embed the node features and topology of a graph into a linear tangent space through a fixed reference measure graph. Then the \texttt{linearFGW} distance between two graphs is defined as the Euclidean distance between their two embeddings, which approximates their \texttt{FGW} distance. 
Therefore, the \texttt{linearFGW} distance has the following advantages: 1) it takes into account both the node features and topologies of graphs in the OT problem used to calculate the dissimilarity between graphs, 2) we can derive a valid graph kernel from the embeddings of graphs for downstream tasks such as graph classification and clustering, and 3) by using the \texttt{linearFGW} as an approximation of the \texttt{FGW}, we avoid the expensive computation of pairwise \texttt{FGW} distances for large-scale graph data sets. Finally, we conduct experiments on graph data sets to show the effectiveness of the proposed distance in terms of classification and clustering accuracies. The remainder of the paper is organised as follows: in Section 2, we present related work. In Section 3, we present our proposed distance for learning with graph structured data and its theoretical properties. In Section 4, experimental results on benchmark graph data sets are provided. Finally, we conclude by summarizing this work and discussing possible extensions in Section 5. \section{Related Work} \subsection{Kernels for Graphs} Graphs are a standard representation for relational data, which appear in various domains such as bioinformatics \cite{sharan2006modeling}, chemoinformatics \cite{trinajstic2018chemical}, and social network analysis \cite{scott2011social}. Making use of graph kernels is a popular approach to learning with graph structured data. Essentially, a graph kernel is a measure of the similarity between two graphs and must satisfy two fundamental requirements to be valid: it must be 1) symmetric and 2) positive semi-definite (PSD). There are a number of kernels for graphs with discrete attributes, such as the random walk \cite{kashima2003marginalized}, shortest path \cite{borgwardt2005shortest}, and Weisfeiler-Lehman (WL) subtree \cite{shervashidze2011weisfeiler} kernels, to name a few.
There are also several kernels for graphs with continuous attributes, such as the GraphHopper \cite{feragen2013scalable} and Hash Graph \cite{morris2016faster} kernels. \subsection{Optimal Transport Frameworks for Graphs} Optimal Transport (OT) \cite{villani2009wasserstein} has received much attention from the machine learning community as it provides an effective way to measure the distance between two probability measures. Several OT-based graph kernels have been proposed and have achieved great performance in comparison with traditional graph kernels. The Wasserstein Weisfeiler-Lehman (WWL) kernel \cite{togninalli2019wasserstein} used OT for measuring the distance between two graphs based on their WL embeddings (discrete feature vectors of subtree patterns). Nguyen et al. \cite{nguyen2021learning} extended WWL by proposing an efficient algorithm for learning subtree pattern importance, leading to higher classification accuracy on graph data sets. However, these are not valid kernels for graphs with continuous attributes. Following the work in \cite{memoli2011gromov}, Peyre et al. \cite{peyre2016gromov} proposed the Gromov-Wasserstein distance to compare pairwise similarity matrices from different spaces. Then, Titouan et al. \cite{titouan2019optimal} proposed the fused Gromov-Wasserstein distance, which combines the Wasserstein and Gromov-Wasserstein distances in order to jointly leverage the feature and structure information of graphs. To reduce computational complexity, OT-based distances are often computed using the Sinkhorn algorithm \cite{sinkhorn1967diagonal, cuturi2013sinkhorn}. Due to the nature of the optimal assignment problem, the similarity matrices produced by these OT-based graph distances are generally indefinite, so they are not valid kernels; this leads to the use of support vector machines (SVM) with indefinite kernels, as introduced in \cite{luss2007support}. \subsection{Linear Optimal Transport} Wang et al. \cite{wang2013linear} proposed a simplified version of OT in 2-Wasserstein space, called linear optimal transport.
Geometrically, the basic idea is to transfer probability measures from the geodesic 2-Wasserstein space to the tangent space with respect to some fixed base or reference measure. One advantage is that we can work in a linear tangent space instead of the complex 2-Wasserstein space, so that downstream tasks such as classification and clustering can be done in the linear space. Another advantage is the fast approximation of pairwise Wasserstein distances for large-scale data sets. In the context of graph learning, Kolouri et al. \cite{kolouri2020wasserstein} leveraged the above framework and introduced the concept of linear Wasserstein embedding for learning graph embeddings. In a concurrent work, Mialon et al. \cite{mialon2020trainable} proposed a similar idea for learning with sets of features. In this paper, we extend the idea of the linear optimal transport framework from the 2-Wasserstein distance to the fused Gromov-Wasserstein distance (\texttt{FGW}), and define a valid graph kernel for learning with graph structured data. Furthermore, we derive theoretical understandings of the proposed distance. \section{Proposed Distance for Graphs: Linear Fused Gromov-Wasserstein} We denote a measure graph as $\mathcal{G}(\mathbf{X}, \mathbf{A}, \mu)$, where $\mathbf{X}=\{\mathbf{x}_{i}\}_{i=1}^{m}\in \mathbb{R}^{m\times d}$ is the set of $m$ node features with dimensionality $d$, $\mathbf{A}=\left[a_{ij}\right]\in \mathbb{R}^{m \times m}$ is a square matrix that encodes the topology of the given graph, such as the adjacency matrix or the matrix of pairwise distances between nodes, and $\mu=\left[\mu_{i} \right]\in \Delta^{m}$ (the probability simplex) is a Borel probability measure defined on the nodes (note that when no additional information is provided, all probability measures can be set as uniform).
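As a concrete illustration (our own sketch, not code from any released implementation), such a measure graph can be represented with plain numpy arrays; the uniform-measure default mirrors the convention above:

```python
import numpy as np

# A minimal container for a measure graph G(X, A, mu): node features X,
# a structure matrix A (here the adjacency matrix), and a probability
# measure mu over the nodes (uniform when nothing else is known).
def measure_graph(X, A, mu=None):
    X = np.asarray(X, dtype=float)
    A = np.asarray(A, dtype=float)
    m = X.shape[0]
    assert A.shape == (m, m), "structure matrix must be m x m"
    if mu is None:
        mu = np.full(m, 1.0 / m)  # uniform measure on the nodes
    return X, A, mu

# Example: a 3-node path graph with 2-dimensional node features.
X, A, mu = measure_graph(
    [[0.0, 1.0], [1.0, 0.0], [0.5, 0.5]],
    [[0, 1, 0], [1, 0, 1], [0, 1, 0]],
)
```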
\subsection{\texttt{FGW}: A Distance for Matching Node Features and Structures} \label{subsection:fgw} In \cite{titouan2019optimal}, a graph distance named fused Gromov-Wasserstein (\texttt{FGW}) is proposed, which incorporates both node feature and topology information into the OT problem for measuring the dissimilarity between two graphs. Formally, given two graphs $\mathcal{G}_{1}(\mathbf{X}, \mathbf{A}, \mu)$ and $\mathcal{G}_{2}(\mathbf{Y}, \mathbf{B}, \nu)$, the \texttt{FGW} distance between $\mathcal{G}_{1}$ and $\mathcal{G}_{2}$ is defined for a trade-off parameter $\alpha\in \left[0,1 \right]$ as: \begin{align} \label{eqn:fgw} \texttt{FGW}_{q,\alpha}(\mathcal{G}_{1}, \mathcal{G}_{2})=\min_{\pi \in \Pi(\mu,\nu)}\sum_{i,j,k,l}\left[(1-\alpha)\lVert \mathbf{x}_{i}-\mathbf{y}_{j}\rVert^{q}+\alpha |\mathbf{A}_{i,k}-\mathbf{B}_{j,l}|^{q}\right]\pi_{i,j}\pi_{k,l} \end{align} where $\Pi(\mu,\nu)=\{\pi\in R_{+}^{m\times n} \text{ s.t. }\sum_{i=1}^{m}\pi_{i,j}=\nu_{j}\text{, } \sum_{j=1}^{n}\pi_{i,j}=\mu_{i} \}$ is the set of all admissible couplings between $\mu$ and $\nu$. The \texttt{FGW} distance acts as a generalization of the Wasserstein \cite{villani2009wasserstein} and Gromov-Wasserstein \cite{memoli2011gromov} distances, which allows balancing the importance of matching the node features and the topologies of two graphs. However, as with the existing OT-based graph distances, it is challenging to define a valid kernel from the \texttt{FGW} for graph-related prediction tasks, due to the nature of the optimal assignment problem. In the following, we restrict our attention to OT with $q=2$ and, for ease of presentation, we use the notation $\texttt{FGW}_{\alpha}$ instead of $\texttt{FGW}_{q,\alpha}$.
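To make Equation (\ref{eqn:fgw}) concrete, the objective value for a \emph{fixed} coupling $\pi$ (with $q=2$) can be evaluated as in the following illustrative numpy sketch; solving for the optimal $\pi$ is the hard part and is treated later. The sketch expands $|a-b|^{2}=a^{2}+b^{2}-2ab$ so that no four-dimensional tensor needs to be materialized; function names are our own.

```python
import numpy as np

def fgw_cost(X, A, Y, B, pi, alpha):
    """Evaluate the FGW objective with q = 2 for a fixed coupling (sketch).

    X: (m, d) features and A: (m, m) structure of graph 1;
    Y: (n, d) features and B: (n, n) structure of graph 2;
    pi: (m, n) admissible coupling; alpha: trade-off in [0, 1].
    """
    # Feature term: sum_{i,j,k,l} ||x_i - y_j||^2 pi_ij pi_kl collapses
    # to sum_{i,j} ||x_i - y_j||^2 pi_ij, since the entries of pi sum to one.
    M = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    feature_term = (M * pi).sum()
    # Structure term via the expansion above; p and q are the marginals
    # of pi, and the cross term is <pi, A pi B^T>.
    p, q = pi.sum(1), pi.sum(0)
    structure_term = ((A ** 2) @ p) @ p + ((B ** 2) @ q) @ q \
        - 2.0 * (pi * (A @ pi @ B.T)).sum()
    return (1.0 - alpha) * feature_term + alpha * structure_term

# Sanity check: matching a graph with itself using the "identity"
# coupling diag(mu) yields zero cost.
X = np.array([[0.0], [1.0], [2.0]])
A = np.array([[0.0, 1.0, 0.0], [1.0, 0.0, 1.0], [0.0, 1.0, 0.0]])
mu = np.full(3, 1.0 / 3.0)
cost = fgw_cost(X, A, X, A, np.diag(mu), alpha=0.5)
```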
\subsection{\texttt{linearFGW}: A New Distance for Comparing Graphs} \label{subsection:linearFGW} In order to overcome the limitations of the \texttt{FGW} distance, we propose to approximate it by a linear optimal transport framework, which we call linear fused Gromov-Wasserstein (\texttt{linearFGW}). Computing the \texttt{linearFGW} distance requires a reference, which we choose to be itself a measure graph $\overline{\mathcal{G}}(\mathbf{Z}, \mathbf{C}, \sigma)$. How the reference measure graph is chosen is described later in Subsection \ref{subsection:reference}. To precisely define the \texttt{linearFGW} distance, we first define the barycentric projections for node features and structures of graphs as follows: \begin{defn} (Barycentric projections for nodes and edges of graphs). Let $\overline{\mathcal{G}}(\mathbf{Z}, \mathbf{C}, \sigma)$ be a reference measure graph with $K$ nodes, let $\mathcal{G}(\mathbf{X}, \mathbf{A}, \mu)$ be a measure graph, and let $\pi =\sum_{k,i}\pi_{k,i}\delta_{(\mathbf{z}_k, \mathbf{x}_{i})}$ be a transport plan between them. The barycentric projections for the nodes and edges of the reference measure graph $\overline{\mathcal{G}}$ using the transport plan $\pi$ are defined as follows: \begin{equation} \label{eqn:barycentricprojections} T_{\text{n}, \pi}(\mathbf{z}_{k})=\frac{1}{\sigma_{k}}\sum_{i}\pi_{k,i}\mathbf{x}_{i} \text{ and } T_{\text{e}, \pi}(\mathbf{C}_{k,l})=\frac{1}{\sigma_{k}\sigma_{l}}\sum_{i,j}\pi_{k,i}\pi_{l,j}\mathbf{A}_{i,j} \text{, where } k,l=\overline{1,K} \end{equation} \label{def:barycentricprojections} \end{defn} The definitions of these projections are extended from \cite{wang2013linear,beier2021linear}. Furthermore, we derive their properties in the following lemma.
\begin{lem} \label{lemma:optimalplan} Given two measure graphs $\mathcal{G}(\mathbf{X}, \mathbf{A}, \mu)$ and $\overline{\mathcal{G}}(\mathbf{Z}, \mathbf{C}, \sigma)$, we denote $\pi^{*}$ as the optimal transport plan from $\overline{\mathcal{G}}$ to $\mathcal{G}$ with respect to the \texttt{FGW} distance, and $\Tilde{\mathcal{G}}(\Tilde{\mathbf{Z}},\Tilde{\mathbf{C}},\sigma)$ as the probability measure graph obtained by applying barycentric projections for nodes and edges $T_{\text{n}, \pi^{*}}(.)$ and $T_{\text{e}, \pi^{*}}(.)$, respectively (see Definition \ref{def:barycentricprojections}). Then, we have the following claims: \begin{enumerate} \item $\operatorname{diag}(\sigma)=\begin{bmatrix}\sigma_{1} & 0 & 0\\0 & \ddots & 0\\ 0 & 0 & \sigma_{K}\end{bmatrix}$ is the optimal transport plan from $\overline{\mathcal{G}}$ to $\Tilde{\mathcal{G}}$ in the sense of the \texttt{FGW} distance. \item $\texttt{FGW}_{\alpha}(\overline{\mathcal{G}}, \Tilde{\mathcal{G}})\leq \texttt{FGW}_{\alpha}(\overline{\mathcal{G}}, \mathcal{G})$. \end{enumerate} \end{lem} The proof is given in the Appendix section. An important implication of the above lemma is that $\Tilde{\mathcal{G}}$ can be considered as a surrogate measure graph for $\mathcal{G}$ with respect to the reference $\overline{\mathcal{G}}$. 
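The barycentric projections of Definition \ref{def:barycentricprojections} translate directly into two matrix products. The following numpy sketch (illustrative only, with names of our own choosing) computes the node features and structure matrix of the surrogate graph $\Tilde{\mathcal{G}}$:

```python
import numpy as np

def barycentric_projections(pi, sigma, X, A):
    """Barycentric projections for nodes and edges (sketch).

    pi: (K, m) transport plan from the K-node reference to a graph G;
    sigma: (K,) node measure of the reference;
    X: (m, d) node features of G; A: (m, m) structure matrix of G.
    Returns the surrogate node features (K, d) and structure (K, K).
    """
    # Node projection: T_n(z_k) = (1 / sigma_k) * sum_i pi_{k,i} x_i
    Z_tilde = (pi @ X) / sigma[:, None]
    # Edge projection:
    # T_e(C_{k,l}) = (1 / (sigma_k sigma_l)) * sum_{i,j} pi_{k,i} pi_{l,j} A_{i,j}
    C_tilde = (pi @ A @ pi.T) / np.outer(sigma, sigma)
    return Z_tilde, C_tilde

# Sanity check: projecting a graph onto itself with the diagonal plan
# diag(sigma) (cf. Lemma 1) recovers the graph exactly.
sigma = np.full(3, 1.0 / 3.0)
X = np.array([[0.0, 1.0], [1.0, 0.0], [0.5, 0.5]])
A = np.array([[0.0, 1.0, 0.0], [1.0, 0.0, 1.0], [0.0, 1.0, 0.0]])
Z_tilde, C_tilde = barycentric_projections(np.diag(sigma), sigma, X, A)
```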
Thus we propose to define the \texttt{linearFGW} distance between two measure graphs $\mathcal{G}_{1}$ and $\mathcal{G}_{2}$ with respect to the reference measure graph $\overline{\mathcal{G}}$ as follows: \begin{align} \label{eqn:linearfgw} \texttt{linearFGW}_{\alpha}(\mathcal{G}_{1}, \mathcal{G}_{2})=(1-\alpha) \sum_{k}\lVert T_{\text{n}, \pi_{1}}(\mathbf{z}_{k})-T_{\text{n}, \pi_{2}}(\mathbf{z}_{k})\rVert^{2} + \alpha \sum_{k,l}|T_{\text{e}, \pi_{1}}(\mathbf{C}_{k,l})-T_{\text{e}, \pi_{2}}(\mathbf{C}_{k,l})|^{2} \end{align} where $\pi_{1}$ and $\pi_{2}$ denote the optimal transport plans from $\overline{\mathcal{G}}$ to $\mathcal{G}_{1}$ and $\mathcal{G}_{2}$, respectively, in the sense of the \texttt{FGW} distance. We call this distance \texttt{linearFGW} as it acts as a generalization of linear optimal transport \cite{wang2013linear} and \texttt{FGW} \cite{titouan2019optimal}. Furthermore, the proposed distance also suggests a Euclidean embedding of the measure graph $\mathcal{G}_{1}$ with respect to the reference measure graph $\overline{\mathcal{G}}$: $\Phi_{\overline{\mathcal{G}}, \alpha}(\mathcal{G}_{1})=\left(\sqrt{1-\alpha}T_{\text{n}, \pi_{1}}(\mathbf{z}_{1}),...,\sqrt{1-\alpha}T_{\text{n}, \pi_{1}}(\mathbf{z}_{K}),...,\sqrt{\alpha}T_{\text{e}, \pi_{1}}(\mathbf{C}_{k,l}),...\right)$ of dimension $Kd+K^{2}$, from which we can derive a valid kernel for graph-related prediction tasks. The computation of the \texttt{linearFGW} is illustrated in Figure \ref{fig:linearfgw}. \begin{figure}[t] \centerline{\includegraphics[width=0.9\columnwidth]{linearfgw.png}} \caption{Illustration of the computation of the \texttt{linearFGW} distance between $\mathcal{G}_{1}(\mathbf{X},\mathbf{A},\mu)$ and $\mathcal{G}_{2}(\mathbf{Y},\mathbf{B},\nu)$, given the fixed reference measure graph $\overline{\mathcal{G}}(\mathbf{Z},\mathbf{C},\sigma)$.
First, we find the optimal transport plans $\pi_{1}$ and $\pi_{2}$ from $\overline{\mathcal{G}}$ to $\mathcal{G}_{1}$ and $\mathcal{G}_{2}$, respectively, in the sense of the \texttt{FGW}. Then we transport $\overline{\mathcal{G}}$ with the barycentric projections for nodes and edges (see Definition \ref{def:barycentricprojections}) using the optimal plans $\pi_{1}$ and $\pi_{2}$ to obtain the surrogate measure graphs $\Tilde{\mathcal{G}}_{1}(\Tilde{\mathbf{Z}}^{(1)}, \Tilde{\mathbf{C}}^{(1)}, \sigma)$ and $\Tilde{\mathcal{G}}_{2}(\Tilde{\mathbf{Z}}^{(2)}, \Tilde{\mathbf{C}}^{(2)}, \sigma)$ for $\mathcal{G}_{1}$ and $\mathcal{G}_{2}$, respectively. Finally, the Euclidean distance between $\Tilde{\mathcal{G}}_{1}$ and $\Tilde{\mathcal{G}}_{2}$ can be directly calculated using Equation (\ref{eqn:linearfgw}).} \label{fig:linearfgw} \end{figure} \subsection{Selection of Reference Measure Graph} \label{subsection:reference} The choice of the reference measure graph in Subsection \ref{subsection:linearFGW} is important. We empirically observe that if the reference is randomly selected or distant from all measure graphs, the approximation error between \texttt{FGW} and \texttt{linearFGW} is likely to increase. In the lemma below, we show the relation between \texttt{FGW} and \texttt{linearFGW} with respect to the reference measure graph. \begin{lem} \label{Lemma2} We denote the mixing diameter of a graph $\mathcal{G}(\mathbf{X}, \mathbf{A}, \mu)$ by $\texttt{diam}_{\alpha}(\mathcal{G})=(1-\alpha) \max_{i,j}\lVert \mathbf{x}_{i}-\mathbf{x}_{j} \rVert^{2} + \alpha \max_{i,j,i^\prime, j^\prime}|\mathbf{A}_{i,j}-\mathbf{A}_{i^\prime, j^\prime}|^{2}$.
Then, given a fixed reference measure graph $\overline{\mathcal{G}}(\mathbf{Z}, \mathbf{C}, \sigma)$, for two input measure graphs $\mathcal{G}_{1}(\mathbf{X}, \mathbf{A}, \mu)$ and $\mathcal{G}_{2}(\mathbf{Y}, \mathbf{B}, \nu)$, we have the following inequality: \begin{equation} \label{lemma:barycentric} |\texttt{FGW}_{\alpha}(\mathcal{G}_{1}, \mathcal{G}_{2})-\texttt{linearFGW}_{\alpha}(\mathcal{G}_{1}, \mathcal{G}_{2})|\leq 4\min\{\texttt{FGW}_{\alpha}(\mathcal{G}_{1}, \overline{\mathcal{G}}),\texttt{FGW}_{\alpha}(\mathcal{G}_{2}, \overline{\mathcal{G}})\} + 2\texttt{diam}_{\alpha}(\mathcal{G}_{1}) + 2\texttt{diam}_{\alpha}(\mathcal{G}_{2}) \end{equation} \end{lem} The proof is given in the Appendix. A corollary of the above lemma suggests how to select a good reference measure graph $\overline{\mathcal{G}}$: given $N$ graphs $(\mathcal{G}_{1},...,\mathcal{G}_{N})$, the total approximation error is upper bounded by: \begin{equation} \sum_{i=1}^{N}\sum_{j=i+1}^{N}|\texttt{FGW}_{\alpha}(\mathcal{G}_{i}, \mathcal{G}_{j})-\texttt{linearFGW}_{\alpha}(\mathcal{G}_{i}, \mathcal{G}_{j})|\leq 4\sum_{i=1}^{N}\texttt{FGW}_{\alpha}(\mathcal{G}_{i}, \overline{\mathcal{G}}) + 4\sum_{i=1}^{N}\texttt{diam}_{\alpha}(\mathcal{G}_{i}) \end{equation} where the right-hand side has two terms: the first term is the objective of the fused Gromov-Wasserstein barycenter problem \cite{titouan2019optimal}, while the second term is constant with respect to the reference measure graph $\overline{\mathcal{G}}$. This suggests that we can use the fused Gromov-Wasserstein barycenter of the $N$ given measure graphs as the reference. \subsection{Implementation Details} The \texttt{FGW} is the main component of our method. We use the proximal point algorithm (PPA) \cite{xu2020gromov} to implement the \texttt{FGW}.
Specifically, given two graphs $\mathcal{G}_{1}$ and $\mathcal{G}_{2}$, we solve problem (\ref{eqn:fgw}) iteratively (with at most $T$ iterations) as follows: \begin{align} \pi^{(t+1)}=\mathop{\rm arg~min}\limits_{\pi \in \Pi(\mu,\nu)}\langle (1-\alpha)\mathbf{D}_{12} + \alpha (\mathbf{C}_{12}-2 \mathbf{A}\pi^{(t)}\mathbf{B}), \pi \rangle + \eta \textbf{KL}(\pi|| \pi^{(t)}) \label{eqn:sinkhorniterations} \end{align} where $\langle\cdot,\cdot\rangle$ denotes the inner product of matrices, $\mathbf{D}_{12}=(\mathbf{X}\odot \mathbf{X})\mathbf{1}_{d}\mathbf{1}_{n}^\top+\mathbf{1}_{m}\mathbf{1}_{d}^\top(\mathbf{Y}\odot \mathbf{Y})^\top-2\mathbf{X}\mathbf{Y}^\top$ is the matrix of pairwise squared feature distances, $\mathbf{C}_{12}=(\mathbf{A}\odot \mathbf{A})\mu \mathbf{1}_{n}^\top+\mathbf{1}_{m}\nu^\top(\mathbf{B}\odot \mathbf{B})^\top$, and $\odot$ denotes the Hadamard product of matrices. $\textbf{KL}(\pi|| \pi^{(t)})$ is the Kullback-Leibler divergence between the transport plan and the previous estimate. We can approximately solve the above problem by Sinkhorn-Knopp updates (see \cite{xu2020gromov} for the algorithmic details). \begin{table}[] \centering \caption{Statistics of data sets used in experiments} \begin{tabular}{c|c c c c c} \hline\hline Dataset & \#graphs & \#classes & Ave. \#nodes & Ave. \#edges & \#attributes\\ \hline COX2 & 467 & 2 & 41.22 & 43.45 & 3\\ BZR & 405 & 2 & 35.75 & 38.36 & 3\\ ENZYMES & 600 & 6 & 32.63 & 62.14 & 18\\ PROTEINS & 1113 & 2 & 39.06 & 72.82 & 1\\ PROTEINS-F & 1113 & 2 & 39.06 & 72.82 & 29\\ AIDS & 2000 & 2 & 15.69 & 16.20 & 4\\ IMDB-B & 1000 & 2 & 19.77 & 96.53 & -\\ \hline\hline \end{tabular} \label{tab:datasets} \end{table} \section{Experimental Results} We now show the effectiveness of our proposed graph distance on real-world data sets in terms of graph classification and clustering. Our code can be accessed via the following link: \texttt{https://github.com/haidnguyen0909/linearFGW}.
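For reference, one proximal iteration of the update in Equation (\ref{eqn:sinkhorniterations}) can be sketched in numpy as follows. This is a simplified illustration of the KL-proximal Sinkhorn step (assuming symmetric structure matrices and a strictly positive current plan), not the released implementation; function and variable names are our own.

```python
import numpy as np

def ppa_step(pi, X, Y, A, B, mu, nu, alpha, eta, n_sinkhorn=50):
    """One proximal point iteration for the FGW problem (sketch).

    Minimizes <G, pi'> + eta * KL(pi' || pi) over couplings pi', where
    G linearizes the FGW cost around the current plan pi. Assumes A, B
    symmetric and pi entrywise positive (e.g. initialized as outer(mu, nu)).
    """
    # Linearized cost: (1 - alpha) * squared feature distances plus
    # alpha * (C12 - 2 A pi B) from the Gromov term.
    M = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    C12 = np.outer((A ** 2) @ mu, np.ones_like(nu)) \
        + np.outer(np.ones_like(mu), (B ** 2) @ nu)
    G = (1.0 - alpha) * M + alpha * (C12 - 2.0 * A @ pi @ B)
    # The KL-proximal step is solved by Sinkhorn scaling of the
    # kernel pi * exp(-G / eta).
    K = pi * np.exp(-G / eta)
    u = np.ones_like(mu)
    for _ in range(n_sinkhorn):
        v = nu / (K.T @ u)
        u = mu / (K @ v)
    return u[:, None] * K * v[None, :]
```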
\subsection{Data sets} In this work, we focus on graph kernels/distances for graphs with continuous attributes. We therefore consider the following seven widely used benchmark data sets: BZR \cite{sutherland2003spline}, COX2 \cite{sutherland2003spline}, ENZYMES \cite{dobson2003distinguishing}, PROTEINS \cite{borgwardt2005protein}, PROTEINS-F \cite{borgwardt2005protein}, and AIDS \cite{riesen2008iam} contain graphs with continuous attributes, while IMDB-B \cite{yanardag2015deep} contains unlabeled graphs obtained from social networks. All these data sets can be downloaded from \texttt{https://ls11-www.cs.tu-dortmund.de/staff/morris/graphkerneldatasets}. The details of the data sets are shown in Table \ref{tab:datasets}. \subsection{Experimental settings} To compute numerical features for the nodes of graphs, we consider two main settings: 1) we keep the original attributes of nodes (denoted by the suffix \texttt{RAW}), and 2) we apply the Weisfeiler-Lehman (WL) mechanism, concatenating the numerical vectors of neighboring nodes (denoted by the suffix \texttt{WL}-$H$, where $H$ means that we repeat the procedure $H$ times so that neighboring vertices within $H$ hops are taken into account; see \cite{shervashidze2011weisfeiler} for more details). For the matrix $\textbf{A}$, we restrict our attention to the adjacency matrices of the input graphs. For solving the optimization problem (\ref{eqn:sinkhorniterations}), we fix $\eta$ as 0.1 and the number of iterations $T$ as 5. We carry out our experiments on a 2.4 GHz 8-Core Intel Core i9 with 64GB RAM. For the classification task, we convert a distance into a kernel matrix through the exponential function, i.e. $K=\exp(-\gamma D)$ (Gaussian kernel).
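Here $D$ collects the squared Euclidean distances between the \texttt{linearFGW} embeddings, so the resulting Gaussian kernel is positive semi-definite. A minimal numpy sketch of this conversion (names are our own) is:

```python
import numpy as np

def gaussian_kernel_from_embeddings(E, gamma):
    """Gaussian kernel K = exp(-gamma * D) from graph embeddings.

    E: (N, K + K^2 * ...) matrix whose rows are embeddings Phi(G_i);
    D holds squared Euclidean distances, so K is positive semi-definite.
    """
    sq_norms = (E ** 2).sum(1)
    D = sq_norms[:, None] + sq_norms[None, :] - 2.0 * E @ E.T
    D = np.maximum(D, 0.0)  # guard against negative round-off
    return np.exp(-gamma * D)

# Example: three embedded graphs in a 2-dimensional embedding space.
E = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])
K = gaussian_kernel_from_embeddings(E, gamma=0.5)
```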
We compare the classification accuracy with the following state-of-the-art graph kernels (or distances): the GraphHopper kernel (GH, \cite{feragen2013scalable}), HGK-WL \cite{morris2016faster}, HGK-SP \cite{morris2016faster}, RBF-WL \cite{togninalli2019wasserstein}, the Wasserstein Weisfeiler-Lehman kernel (WWL, \cite{togninalli2019wasserstein}), FGW \cite{titouan2019optimal}, and GWF \cite{xu2020gromov}. We divide them into two groups: OT-based graph kernels, including WWL, FGW, GWF, and \texttt{linearFGW} (ours), and non-OT graph kernels, including GH, HGK-WL, HGK-SP, and RBF-WL. Note that our proposed graph kernel converted from the \texttt{linearFGW} is the only valid (positive definite) kernel among the OT-based graph kernels. We perform 10-fold cross validation and report the average accuracy of the experiment repeated 10 times. The accuracies of the other graph kernels are taken from the original papers. We use SVM for classification and cross-validate the parameters $C\in\{2^{-5},2^{-4},...,2^{10}\}$ and $\gamma\in\{10^{-2},10^{-1},...,10^{2}\}$. The range of the WL parameter is $H\in\{1,2\}$. For our proposed \texttt{linearFGW}, $\alpha$ is cross-validated via a search in $\{0.0,0.3,0.5,0.7,0.9,1.0\}$. Note that linear optimal transport \cite{wang2013linear} is a special case of the \texttt{linearFGW} with $\alpha=0$. We also compare the clustering accuracy with the OT-based graph distances \texttt{FGW}, GWB-KM, and GWF on four real-world data sets: AIDS, PROTEINS, PROTEINS-F, and IMDB-B. For a fair comparison, we use K-means and spectral clustering on the Euclidean embedding and the Gaussian kernel of the proposed \texttt{linearFGW} distance (denoted by \texttt{linearFGW-Kmeans} and \texttt{linearFGW-SC}, respectively). We fix the parameters $H=1$ and $\alpha=0.5$ for data sets of graphs with continuous attributes, and $\gamma=0.01$ for the Gaussian kernel. \begin{table}[] \centering \caption{Average classification accuracy on the graph data sets with vector attributes.
The best result for each column (data set) is highlighted in bold and the standard deviation is reported with the symbol $\pm$.} \begin{tabular}{l l l l l l l} \hline\hline & Kernels/Data sets & COX2 & BZR & ENZYMES & PROTEINS & IMDB-B\\ \hline \multirow{4}{*}{Non OT} & GH & 76.41$\pm$ 1.39 & 76.49$\pm$ 0.9 & 65.65$\pm$ 0.8 & 74.48$\pm$ 0.3 & -\\ & HGK-WL & 78.13$\pm$ 0.45 & 78.59$\pm$ 0.63 & 63.04$\pm$ 0.65 & 75.93$\pm$ 0.17 & -\\ & HGK-SP & 72.57$\pm$ 1.18 & 76.42$\pm$ 0.72 & 66.36$\pm$ 0.37 & 75.78$\pm$ 0.17 & -\\ & RBF-WL & 75.45$\pm$ 1.53 & 80.96$\pm$ 1.67 & 68.43$\pm$ 1.47 & 75.43$\pm$ 0.28 & -\\ \hline \multirow{4}{*}{OT-based} & WWL & 78.29$\pm$0.47 & 84.42$\pm$ 2.03 & 73.25$\pm$ 0.87 & 77.91$\pm$ 0.8 & -\\ & \texttt{FGW} & 77.2$\pm$4.7 & 84.1 $\pm$4.1 & 71.0$\pm$ 6.7 & 75.1$\pm$ 2.9 & \textbf{64.2}$\pm$ 3.3\\ & \texttt{GWF} & - & - & - & 73.7$\pm$2.0 & 63.9$\pm$2.7\\ & \texttt{linearFGW}-RAW (Ours) & 79.74 $\pm$ 1.99 & \textbf{86.07}$\pm$1.64 & 83.25 $\pm$ 2.44 & 82.49$\pm$ 1.75 & 63.62$\pm$1.9\\ & \texttt{linearFGW}-WL1 (Ours) & \textbf{79.98} $\pm$ 3.21 & 84.80$\pm$2.95 & \textbf{85.28}$\pm$1.64 & 83.29 $\pm$ 1.63 & -\\ & \texttt{linearFGW}-WL2 (Ours) & 79.50$\pm$3.29 & 84.37$\pm$2.75 & 83.13$\pm$ 1.56 & \textbf{83.95} $\pm$ 1.12 & -\\ \hline\hline \end{tabular} \label{tab:classificationresults} \end{table} \subsection{Results} \textbf{Classification:} The average classification accuracies shown in Table \ref{tab:classificationresults} indicate that the \texttt{linearFGW} is a clear state-of-the-art method for graph classification. It achieved the best performances on 4 out of 6 data sets. In particular, on two data sets ENZYMES and PROTEINS, the \texttt{linearFGW} outperformed all the rest by large margins (around 12\% and 6\%, respectively) in comparison with the second best ones. On COX2 and BZR, the \texttt{linearFGW} achieved improvements of around 2\% and 1.5\%, respectively, over WWL which is the second best one. 
Note that the Gaussian kernel derived from WWL is not valid for data sets of graphs with continuous attributes (see \cite{togninalli2019wasserstein}). On IMDB-B, the average accuracies of the compared methods are comparable. Interestingly, although our \texttt{linearFGW} is an approximation of the \texttt{FGW} distance, the \texttt{linearFGW} consistently achieved significantly higher performance than \texttt{FGW}. This can be explained by the fact that the kernel derived from the \texttt{linearFGW} distance is valid. \textbf{Clustering:} The average clustering accuracies shown in Table \ref{tab:clusteringresults} also indicate that the \texttt{linearFGW} achieves high performance on clustering. On PROTEINS and PROTEINS-F, the \texttt{linearFGW} achieved the highest accuracies, by margins of around 2\% and 3\%, respectively, over the second best method. On AIDS and IMDB-B, the \texttt{linearFGW} achieved performance comparable with GWF-PPA, which is the best performer. \begin{table}[] \centering \caption{Average clustering accuracy on the graph data sets with continuous attributes.
The best result for each column (data set) is highlighted in bold and the standard deviation is reported with the symbol $\pm$.} \begin{tabular}{l l l l l} \hline\hline Methods/Data sets & AIDS & PROTEINS & PROTEINS-F & IMDB-B \\ \hline FGW & 91.0 $\pm$0.7 & 66.4$\pm$0.8 & 66.0$\pm$0.9 & 56.7$\pm$1.5\\ GWB-KM & 95.2$\pm$0.9 & 64.7$\pm$1.1 & 62.9$\pm$1.3 & 53.5$\pm$2.3\\ GWF-BADMM & 97.6$\pm$0.8 & 69.2$\pm$1.0 & 68.1$\pm$1.1 & 55.9$\pm$1.8\\ GWF-PPA & \textbf{99.5}$\pm$0.4 & 70.7$\pm$0.7 & 69.3$\pm$0.8 & \textbf{60.2}$\pm$1.6\\ \hline \texttt{linearFGW}-Kmeans (Ours) & 98.7$\pm$1.2 & 70.58$\pm$0.57 & 71.46$\pm$1.03 & 54.49$\pm$0.3\\ \texttt{linearFGW}-SC (Ours) & 98.2 $\pm$0.83 & \textbf{72.70}$\pm$0.03 & \textbf{73.33}$\pm$0.82 & 58.3$\pm$0.8\\ \hline\hline \end{tabular} \label{tab:clusteringresults} \end{table} \textbf{Runtime Analysis:} By using the \texttt{linearFGW}, we reduce the cost of calculating the pairwise \texttt{FGW} distances for a data set of $N$ graphs from quadratic in $N$ (i.e., $N(N-1)/2$ pairwise computations) to linear in $N$ (i.e., $N$ computations of the \texttt{FGW} distance from each graph to the reference measure graph). We compare the running time of \texttt{linearFGW} and \texttt{FGW} with the same setting as in the classification task, with $\alpha$ fixed to 0.5 and 0.0 for the labeled graph data sets and IMDB-B (unlabeled), respectively. In Table \ref{tab:runningtimeanalysis}, we report the total running time of the methods (both training time and inference time) on the 5 data sets used for the classification experiments. The \texttt{linearFGW} is much faster than \texttt{FGW} on all considered data sets (roughly 7 times faster on COX2, BZR, ENZYMES, and PROTEINS, and 3 times faster on IMDB-B). These numbers confirm the computational efficiency of \texttt{linearFGW}, making it possible to analyze large-scale graph data sets.
\begin{table}[] \centering \caption{The total training time and inference time (in seconds) averaged over the 10 folds of cross-validation (with fixed $\alpha$) for different data sets. The standard deviation is reported with the symbol $\pm$.} \begin{tabular}{l l l l l l} \hline\hline Methods/Data sets & COX2 & BZR & ENZYMES & PROTEINS & IMDB-B \\ \hline \texttt{FGW} & 520.21$\pm$21.15 & 347.78$\pm$5.21 & 817.31$\pm$7.49 & 3224.36$\pm$125.02 & 1235.33$\pm$83.28\\ \texttt{linearFGW} & 72.43 $\pm$ 0.16 & 53.81 $\pm$ 0.2 & 146.26 $\pm$1.64 & 431.25$\pm$9.25 & 358.92$\pm$10.41\\ \hline\hline \end{tabular} \label{tab:runningtimeanalysis} \end{table} \section{Conclusion and Future Work} We have developed an OT-based distance for learning with graph structured data. The key idea is to embed the node features and topology of a graph into a linear tangent space, where the Euclidean distance between the embeddings of two graphs approximates their \texttt{FGW} distance. In fact, the proposed distance is a generalization of linear optimal transport \cite{wang2013linear} and the \texttt{FGW} distance. It therefore has the following advantages: 1) like the \texttt{FGW} distance, the proposed distance takes node features and topologies of graphs into account in the OT problem for computing the dissimilarity between two graphs, 2) we can derive a valid kernel for graphs from the proposed distance, while the existing OT-based graph kernels are invalid, and 3) it provides a fast approximation of the pairwise \texttt{FGW} distances, making it more efficient to deal with large-scale graph data sets. We conducted experiments on benchmark graph data sets on both classification and clustering tasks, demonstrating the effectiveness of the proposed distance. In this work, we suggested using the fused Gromov-Wasserstein barycenter \cite{titouan2019optimal} as the reference measure graph.
Thanks to the differentiability of OT frameworks using techniques such as entropic regularization \cite{cuturi2013sinkhorn}, one possibility for future work is to learn the reference measure graph by updating the reference so as to minimize a supervised loss; classification performance could then be improved by exploiting the label information of graphs during training. Another possibility would be to incorporate the \texttt{linearFGW} into graph-based deep learning models for learning with graph structured data. \bibliographystyle{abbrvnat}
\section{Introduction} \label{intro} Quiescent solar filaments are clouds of cool and dense plasma suspended against gravity by forces thought to be of magnetic origin. They form along the polarity inversion line in or between the weak remnants of active regions. Early filament observations already revealed that their fine structure is apparently composed of many thin, horizontal dark threads \citep{Engvold98}. More recent high-resolution H$_\alpha$ observations obtained with the Swedish Solar Telescope (SST) in La Palma \citep{Lin05} and the Dutch Open Telescope (DOT) in Tenerife \citep{HA06} have made it possible to observe this fine structure in much greater detail (see \citealt{Lin10}, for a review). The measured average width of resolved thin threads is about $0.3$ arcsec ($\sim 210$ km), while their length is between $5$ and $40$ arcsec ($\sim 3500$--$28000$ km). The fine threads of solar filaments seem to be partially filled with cold plasma \citep{Lin05}, typically two orders of magnitude denser and cooler than the surrounding corona, and it is generally assumed that they outline their magnetic flux tubes \citep{Engvold98,Linthesis,Lin05,Engvold08,Martin08,Lin08}. This idea is strongly supported by observations which suggest that the threads are inclined with respect to the filament long axis in a similar way to what has been found for the magnetic field \citep{Leroy80,Bommier94,BL98}. In contrast, \cite{HA06} suggest that these dark horizontal filament fibrils are a projection effect. According to this view, many magnetic field dips of rather small vertical extent, but filled with cool plasma, would be aligned in the vertical direction, and their projection against the disk would produce the impression of a horizontal thread. Oscillations in prominences and filaments are a commonly observed phenomenon. They are usually classified, in terms of their amplitude, as small and large amplitude oscillations \citep{OB02}.
This paper is concerned with small amplitude oscillations; large amplitude oscillations have been recently reviewed by \cite{Tripathi09}. It is well established that these small amplitude periodic changes are of local nature. The detected peak velocity ranges from the noise level (down to 0.1 km~s$^{-1}$ in some cases) to 2--3~km~s$^{-1}$, although larger values have also been reported \citep{BashkirtsevMashnich84,Molowny-Horas99}. Two-dimensional observations of filaments \citep{YiEngvold91,Yi91} revealed that individual fibrils or groups of fibrils may oscillate independently with their own periods, which range between 3 and 20 minutes. More recently, \cite{Linthesis} reported spatially coherent oscillations over slices of a polar crown filament covering an area of $1.4 \times 54$ arcsec$^2$ with, among others, a significant periodicity at 26 minutes, strongly damped after 4 periods. Furthermore, \cite{Lin07} have shown evidence of traveling waves along a number of filament threads with an average phase velocity of $12$ km s$^{-1}$, a wavelength of $4''$ ($\sim 2800$ km), and oscillatory periods of the individual threads that vary from $3$ to $9$ minutes. Oscillatory events have been reported both from ground-based observations \citep{Terradas02,Linthesis} and from instruments onboard spacecraft, such as SoHO \citep{Blanco99,Regnier01,Pouget06} and Hinode \citep{Okamoto07,Ning09}. The observed periodic signals are mainly detected from Doppler velocity measurements and can therefore be associated with the transverse displacement of the fine structures \citep{Lin09}. Extensive reviews on small amplitude oscillations in prominences can be found in \cite{OB02,Engvold04,Wiehr04,Ballester06,banerjee07,Engvold08,Oliver09, Ballester10}, and \citet{Mck10}.
Small amplitude oscillations in quiescent filaments have been interpreted in terms of magnetohydrodynamic (MHD) waves \citep{OB02,Ballester06}, which has made it possible to develop theoretical models (see \citealt{Ballester05,Ballester06}, for recent reviews). Models of filament threads usually represent them as part of a larger magnetic flux tube, whose footpoints are anchored in the solar photosphere \citep{BP89,Rempel99}. Early works studying filament thread oscillations considered the MHD eigenmodes supported by a filament thread modeled as a Cartesian slab, partially filled with prominence plasma, and embedded in the corona \citep{JNR97,diaz01,diaz03}. These works were later extended to a more representative cylindrical geometry by \cite{diaz02}. These authors found that the fundamental transverse fast mode is always confined in the dense part of the flux tube; hence, an oscillating cylindrical filament thread should hardly induce oscillations in adjacent threads, unless they are very close. Groups of multithread structures and their collective oscillations have been modeled by \cite{diaz05,DR06} in Cartesian geometry and by \cite{Soler09noad} in cylindrical geometry. Time and spatial damping is a recurrently observed characteristic of small amplitude prominence oscillations. Observational evidence for the damping of small amplitude oscillations in prominences can be found in \cite{Landman77,Tsubaki86,Tsubaki88,Wiehr89,Molowny-Horas99,Terradas02}, and more recently in \cite{Linthesis,Berger08,Ning09,Lin09}. These observational studies have provided some characteristic spatial and time scales. Reliable values for the damping time have been derived, from different Doppler velocity time series, by \cite{Molowny-Horas99} and \cite{Terradas02} in prominences, and by \cite{Linthesis} in filaments.
The values thus obtained are usually between 1 and 4 times the corresponding period, and large regions of prominences/filaments display similar damping times. Several theoretical mechanisms have been proposed in order to explain the observed damping. \citet{Ballai03} estimated, through order-of-magnitude calculations, that several isotropic and anisotropic dissipative mechanisms, such as viscosity, magnetic diffusivity, radiative losses and thermal conduction, cannot in general explain the observed wave damping. Linear non-adiabatic MHD waves have been studied by \cite{Carbonell04,Terradas01,Terradas05,Soler07a,Soler08}. The overall conclusion from these studies is that thermal mechanisms can only account for the damping of slow waves in an efficient manner, while fast waves remain almost undamped. Since prominences are partially ionized plasmas, a possible mechanism to damp fast waves (as well as Alfv\'en waves) could come from ion-neutral collisions \citep{Forteza07,Forteza08}, although the ratio of the damping time to the period does not completely match the observations. Besides non-ideal mechanisms, another possibility to attenuate fast waves in thin filament threads comes from resonant wave damping (see e.g.\ \citealt{Goossens10}). This phenomenon is well studied for transverse kink waves in coronal loops (\citealt{GAA06,Goossens08}) and provides a plausible explanation for the quickly damped transverse loop oscillations first observed by TRACE (\citealt{Aschwanden99,Nakariakov99}). The time scales of the damping produced by these different mechanisms should be compared with those obtained from observations. The theoretical approach of many works studying the damping of prominence oscillations has been to first study a given damping mechanism in simplified uniform and unbounded media, and thereafter to introduce structuring and non-uniformity.
This has led to an increasing complexity of the theoretical models, in such a way that some of them now combine different damping mechanisms. This paper presents results from recent theoretical studies of small amplitude oscillations in prominences and their damping. This topic has been the theme of several recent reviews, see e.g. \citet{Oliver09} and \citet{Ballester10}, a fact that reflects the liveliness of the subject. Here we extend these works by reporting the latest studies, mainly focusing on the theory of damping mechanisms for oscillations in prominences and its application to prominence seismology, a discipline in a rapidly developing stage, as better observations and more accurate theoretical models become available. \section{Damping of oscillations by thermal mechanisms} \label{sec:2} In a seminal paper, \citet{Field1965} studied the thermal instability of a dilute gas in mechanical and thermal equilibrium. Using this approach, the time and spatial damping of magnetohydrodynamic waves has been studied in unbounded media \citep{Carbonell04, Carbonell09} and in bounded slabs \citep{Terradas01, Terradas05} with physical properties akin to those of quiescent solar prominences, as well as in slabs having prominence-corona physical properties \citep{Soler07a, Soler09trans}. Furthermore, the behavior of non-adiabatic waves in an isolated and flowing prominence fine structure has also been considered \citep{Soler08}. All these investigations have already been reviewed in \citet{Oliver09}, \citet{Ballester10} and \citet{Mck10}. For this reason, in the following we concentrate on a recent investigation of the time damping of linear non-adiabatic MHD waves in a multi-thread prominence when mass flows are also present.
\subsection{Time damping of non-adiabatic magnetoacoustic waves in filament threads with mass flows} \label{nonadiabatic} In an attempt to explain the observations of in-phase oscillations of large areas of a filament reported by \citet{Linthesis}, \citet{diaz05} studied the collective oscillations of a set of inhomogeneous threads in Cartesian geometry. The configuration was made of five unequally separated threads having different widths and densities, while the magnetic field strength is the same everywhere. When realistic separations between threads are considered, the system oscillates with the only non-leaky mode, and oscillations are in phase with similar amplitudes and with a frequency smaller than that of the densest thread. Although these results show some agreement with observations, the use of Cartesian geometry favors collective oscillations, since the transverse velocity perturbation has a very long tail which allows for an easy interaction between neighboring threads. However, \citet{Linthesis} also pointed out that, after four periods, the oscillations were strongly damped in time and that simultaneously flowing and oscillating structures were also present in filament threads. To model this situation, \citet{Soler09noad} have used the T-matrix theory of scattering to study the propagation of non-adiabatic magnetoacoustic waves in an arbitrary system of cylindrical threads when material flows inside them are present. In this study, the effects of radiative losses, thermal conduction and heating have been taken into account, mass flows along magnetic field lines have been allowed, and the general case $\beta \neq 0$ has been considered. The equilibrium system is made of two homogeneous and unlimited parallel cylinders, the prominence threads, embedded in a homogeneous and unbounded coronal medium (see Fig.~\ref{fig:model}). For the study, two different configurations have been chosen.
\subsubsection{Identical threads} \label{nonadiabatic_I} In this case the same typical values of prominence density and temperature have been used for both threads, as well as typical coronal values for the external density and temperature. Furthermore, the magnetic field strength is the same everywhere. In the absence of flow, four fast kink modes, already found by \citet{Luna2008} when studying the collective oscillations of two coronal loops, are present. These modes are labeled $S_x, A_x, S_y, A_y$, where $S$ and $A$ denote symmetry or antisymmetry of the velocity field inside the tubes with respect to the ($y$, $z$) plane and the subscripts refer to the main direction of polarization of the motions. In addition to the kink modes, two further fundamental collective wave modes, one symmetric ($S_z$) and one antisymmetric ($A_z$), mainly polarized along the $z$-direction and corresponding to slow modes, have been identified. \begin{figure} \centering \includegraphics[width=0.6\textwidth]{AB10f01.eps} \caption{Scheme in the $xy$-plane of the model considered in Sect.~\ref{nonadiabatic_I}. The $z$-axis is perpendicular to the plane of the figure and points towards the reader. Adapted from \citet{Soler09noad}.} \label{fig:model} \end{figure} Figure~\ref{fig:distkink}a displays the ratio of the real part of the frequency of the four kink solutions to the frequency of the individual kink mode \citep{Soler08}, $\omega_k$, as a function of the distance between the centers of the cylinders, $d$. It can be seen that the smaller the distance between centers, the larger the interaction between threads and the separation between frequencies. The frequency of the collective kink modes is almost identical to the individual kink frequency for a distance between threads larger than 6 or 7 radii. Therefore, we expect the collective behavior of the oscillations to be stronger when the threads are closer.
Conversely, for larger distances the interaction between threads is much weaker and individual oscillations are expected. Furthermore, Figure~\ref{fig:distkink}b shows the ratio of the damping time to the period, $\tau_D/P$, of the four kink modes as a function of $d$. This ratio is very large, suggesting that dissipation by non-adiabatic mechanisms is not efficient enough to damp the fast kink modes that could be responsible for the observed filament thread oscillations. \begin{figure*}[!t] \includegraphics[width=0.5\textwidth]{AB10f02a.eps} \includegraphics[width=0.5\textwidth]{AB10f02b.eps} \caption{$a)$ Ratio of the real part of frequency, $\omega_{\rm R}$, of the $S_x$ (solid line), $A_x$ (dotted line), $S_y$ (triangles), and $A_y$ (diamonds) wave modes to the frequency of the individual kink mode, $\omega_k$, as a function of the distance between centers. $b)$ Ratio of the damping time to the period versus the distance between centers. Linestyles are the same as in panel $a)$. Adapted from \citet{Soler09noad}.} \label{fig:distkink} \end{figure*} In the case of slow modes, Figure~\ref{fig:distslow}a displays the ratio of the real part of the frequency of the $S_z$ and $A_z$ solutions to the frequency of the individual slow mode, $\omega_s$. It can be seen that the frequencies of the $S_z$ and $A_z$ modes are almost identical to the individual slow mode frequency, and that the strength of the interaction is almost independent of the distance between cylinders. This is in agreement with the fact that transverse motions (responsible for the interaction between threads) are not significant for slow-like modes in comparison with their longitudinal motions. Therefore, the $S_z$ and $A_z$ modes essentially behave as individual slow modes, contrary to kink modes, which display a more significant collective behavior. Finally, Figure~\ref{fig:distslow}b shows $\tau_D / P$ corresponding to the $S_z$ and $A_z$ solutions versus $d$.
Slow modes are efficiently attenuated by non-adiabatic mechanisms, with $\tau_D / P \approx 5$, which is in agreement with previous studies \citep{Soler07a, Soler08} and consistent with observations. \begin{figure*}[!t] \includegraphics[width=0.5\textwidth]{AB10f03a.eps} \includegraphics[width=0.5\textwidth]{AB10f03b.eps} \caption{$a)$ Ratio of the real part of frequency, $\omega_{\rm R}$, of the $S_z$ (solid line) and $A_z$ (dotted line) wave modes to the frequency of the individual slow mode, $\omega_s$, as a function of the distance between centers. $b)$ Ratio of the damping time to the period versus the distance between centers. Linestyles are the same as in panel $a)$. Adapted from \citet{Soler09noad}.} \label{fig:distslow} \end{figure*} Next, the effect of flows on the behavior of the collective modes has been studied. Arbitrary flows have been assumed in both cylinders, while coronal flows have been neglected. First of all, we concentrate on transverse modes. We denote the flow in the first cylinder by $U_1$, setting its value to $20$~km~s$^{-1}$, and we study the behavior of the oscillatory frequency when the flow in the second cylinder, denoted by $U_2$, varies (see Fig.~\ref{fig:phase}). Since the frequencies are almost degenerate, we follow the notation of \citet{VD2008} and call the $S_x$ and $A_y$ solutions low-frequency modes, while the $A_x$ and $S_y$ solutions are referred to as high-frequency modes. We have restricted ourselves to parallel propagation, although the argument can easily be extended to anti-parallel waves. In order to understand the asymptotic behavior of the frequencies in Figure~\ref{fig:phase}, we define the following Doppler-shifted individual kink frequencies: \begin{equation} \Omega_{k 1} = \omega_k + U_1 k_z, \label{eq:wkleft} \end{equation} \begin{equation} \Omega_{k 2} = \omega_k + U_2 k_z, \label{eq:wkright} \end{equation} where the $+$ signs arise from the assumed Fourier dependence of the form $\exp(-ik_z z+i \omega t)$.
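To make Equations~(\ref{eq:wkleft}) and (\ref{eq:wkright}) concrete, a minimal numerical sketch of the three flow cases examined in the text follows; the kink frequency and axial wavenumber are assumed, order-of-magnitude values, not taken from \citet{Soler09noad}:

```python
import math

def doppler_kink(omega_k, U, k_z):
    """Doppler-shifted individual kink frequency: Omega_k = omega_k + U*k_z."""
    return omega_k + U * k_z

# Assumed, order-of-magnitude thread parameters (illustrative only):
k_z = 2.0 * math.pi / 2.8e6        # axial wavenumber for a 2800 km wavelength [1/m]
omega_k = 2.0 * math.pi / 240.0    # individual kink frequency, 4-minute period [rad/s]

U1 = 20e3                          # fixed flow in the first thread [m/s]
for U2 in (-10e3, 20e3, 27e3):     # the three flows examined in the text
    O1 = doppler_kink(omega_k, U1, k_z)
    O2 = doppler_kink(omega_k, U2, k_z)
    # The individual Doppler-shifted kink frequencies coincide (so the modes
    # can couple) only when the flows in the two identical threads are equal:
    print(U2 / 1e3, "coupled" if abs(O1 - O2) < 1e-9 else "decoupled")
```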
Since $U_1$ is fixed, $\Omega_{k 1}$ is a horizontal line in Figure~\ref{fig:phase}, whereas $\Omega_{k 2}$ is linear in $U_2$. \begin{figure}[!t] \centering \includegraphics[width=0.7\textwidth]{AB10f04.eps} \caption{Ratio of the real part of the frequency, $\omega_{\rm R}$, to the individual kink frequency, $\omega_k$, as a function of $U_2$ for $U_1=$~20~km~s$^{-1}$. The solid line corresponds to parallel low-frequency modes ($S_x$ and $A_y$) while the dashed line corresponds to parallel high-frequency solutions ($A_x$ and $S_y$). Dotted lines correspond to the Doppler-shifted individual kink frequencies of the threads, $\Omega_{k 1}$ and $\Omega_{k 2}$. The small letters next to the solid line refer to particular situations studied in the text. Adapted from \citet{Soler09noad}.} \label{fig:phase} \end{figure} Three interesting situations, denoted $a$, $b$, and $c$ in Figure~\ref{fig:phase}, are worth studying. $(a)$ When $U_2 = -10$~km~s$^{-1}$, which corresponds to a situation with counter-streaming flows, the solutions do not interact with each other, and the low-frequency (high-frequency) solutions are related to individual oscillations of the second (first) thread. For an external observer this situation would correspond to an individual thread oscillation. $(b)$ When $U_2 = 20$~km~s$^{-1}$, both flow velocities and their directions are equal in the two threads. In such a situation, there is a coupling between low- and high-frequency modes, an avoided crossing of the solid and dashed lines is seen in Figure~\ref{fig:phase}, and collective oscillations appear. $(c)$ When $U_2 = 27$~km~s$^{-1}$, the situation is just the opposite of $(a)$, and it corresponds again to an individual thread oscillation. Considering now slow modes, the behavior of the $S_z$ and $A_z$ solutions can only be considered collective when the flow velocity is the same in both threads because, in such a case, both modes couple.
When different flows are considered, the $S_z$ and $A_z$ slow modes behave like individual slow modes. The coupling between slow modes is very sensitive to the flow velocities, and the $S_z$ and $A_z$ solutions quickly decouple if $U_1$ and $U_2$ differ slightly. \subsubsection{Non-identical threads} \label{nonadiabatic_NI} Consider now a system of two non-identical threads, and focus first on kink modes. From the previous discussion, we expect collective kink motions to occur when the Doppler-shifted individual kink frequencies of both threads coincide. Following \citet{Soler09noad}, the relation between the flow velocities $U_1$ and $U_2$ for which the coupling takes place is \begin{equation} U_1 - U_2 \approx \pm \sqrt{2} \left( v_{{\rm A} 2} - v_{{\rm A} 1} \right), \label{eq:velrelation} \end{equation} where the $+$ sign is for parallel waves and the $-$ sign is for anti-parallel propagation. A similar analysis can be performed for slow modes to obtain \begin{equation} U_1 - U_2 \approx \pm \left( c_{s 2} - c_{s 1} \right), \label{eq:velrelationslow} \end{equation} which points out that, in general, the coupling between slow modes occurs at different flow velocities than the coupling between kink modes. Therefore, the simultaneous existence of collective slow and kink solutions in systems of non-identical threads is difficult. We can thus conclude that the relation between the individual Alfv\'en (sound) speeds of the threads determines the collective or individual behavior of the kink (slow) modes. In the absence of flows, and when the Alfv\'en speeds of the threads are similar, kink modes are of collective type; on the contrary, when the Alfv\'en speeds differ, each thread oscillates independently. The same happens for slow modes, but with the Alfv\'en speeds replaced by the sound speeds of the threads. In summary, when flows are present in the equilibrium, collective motions can be found even in systems of non-identical threads by considering appropriate flow velocities.
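A back-of-the-envelope illustration of the coupling conditions, Equations~(\ref{eq:velrelation}) and (\ref{eq:velrelationslow}); the thread Alfv\'en and sound speeds below are assumed, illustrative values, not taken from the paper:

```python
import math

def kink_coupling_flow_diff(vA1, vA2):
    """Flow-velocity difference U1 - U2 at which kink modes couple
    (parallel propagation, + sign): U1 - U2 ~ sqrt(2) * (vA2 - vA1)."""
    return math.sqrt(2.0) * (vA2 - vA1)

def slow_coupling_flow_diff(cs1, cs2):
    """Flow-velocity difference at which slow modes couple: U1 - U2 ~ cs2 - cs1."""
    return cs2 - cs1

# Assumed, illustrative thread speeds (km/s):
vA1, vA2 = 100.0, 120.0
cs1, cs2 = 15.0, 12.0
dU_kink = kink_coupling_flow_diff(vA1, vA2)   # ~28.3 km/s
dU_slow = slow_coupling_flow_diff(cs1, cs2)   # -3.0 km/s
# The two couplings generally require different flow differences, which is why
# simultaneous collective kink and slow modes are unlikely in non-identical threads:
print(dU_kink, dU_slow)
```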
These velocities are within the observed values if threads with not too different temperatures and densities are assumed. However, since the flow velocities required for collective oscillations must take very particular values, such a special situation may rarely occur in real prominences. This conclusion has important repercussions for future prominence seismological applications, in the sense that if collective oscillations are observed over large areas of a prominence \citep{Linthesis}, the threads in such regions should possess very similar temperatures, densities, and magnetic field strengths. This study has also confirmed that collective slow modes are efficiently damped by thermal mechanisms, with damping ratios similar to those reported in observations, $\tau_D / P \approx 5$, while collective fast waves are poorly damped. This is a key point: if the observed efficiently damped oscillations are transverse, other mechanisms could be at work (see Sects.~\ref{DIN} and \ref{arregui-alfven}), while if the observed oscillations correspond to slow modes, then thermal mechanisms could account for the observed damping. Therefore, it becomes crucial to be able to discriminate between the transverse and longitudinal character of the oscillations. However, we should also be aware that, because of the inhomogeneous nature of filament threads, coupling between modes could make a proper mode identification difficult. \section{Damping of oscillations by ion-neutral collisions} \label{DIN} Since the temperature of prominences is typically of the order of 10$^4$ K, the prominence plasma is only partially ionized. The exact ionization degree of prominences is unknown, and the reported ratio of electron density to neutral hydrogen density \citep[see e.g.][]{patsovial02} covers about two orders of magnitude (0.1--10).
Partial ionization implies the presence of neutrals, in addition to ions and electrons, so that collisions between the different species are possible. Because of the collisions of electrons with neutral atoms and ions and, more importantly, of ions with neutrals, Joule dissipation is enhanced in comparison with the fully ionized case. The main effects of partial ionization on the properties of MHD waves are manifested through a generalized Ohm's law, which adds some extra terms to the resistive magnetic induction equation in comparison with the fully ionized case. The induction equation can be cast as \citep{Soler09rapi} \begin{eqnarray} \frac{\partial {\mathit {\bf B}}_1}{\partial t} &=& \nabla \times \left( {\mathit {\bf v}}_1 \times {\mathit {\bf B}}_0\right) - \nabla \times \left( \eta \nabla \times {\mathit {\bf B}}_1 \right) + \nabla \times \left\{ \eta_{\rm A} \left[ \left( \nabla \times {\mathit {\bf B}}_1 \right) \times {\mathit {\bf B}}_0 \right] \times {\mathit {\bf B}}_0 \right\}\nonumber \\ &&- \nabla \times \left[ \eta_{\rm H} \left( \nabla \times {\mathit {\bf B}}_1 \right) \times {\mathit {\bf B}}_0 \right] , \label{eq:induction} \end{eqnarray} \noindent with ${\mathit {\bf B}}_1$ and ${\mathit {\bf v}}_1$ the perturbed magnetic field and velocity, respectively. The quantities $\eta$, $\eta_{\rm A}$, and $\eta_{\rm H}$ in Equation~(\ref{eq:induction}) are the coefficients of ohmic, ambipolar, and Hall's magnetic diffusion, and they govern the collisions between the different plasma species. Ohmic diffusion is mainly due to electron-ion collisions, ambipolar diffusion is mostly caused by ion-neutral collisions, and Hall's effect is enhanced by ion-neutral collisions, since they tend to decouple ions from the magnetic field while electrons remain able to drift with the magnetic field \citep{Pandey08}.
The ambipolar diffusivity can be expressed in terms of Cowling's coefficient, $\eta_{\rm C}$, as \begin{equation} \eta_{\rm A}=\frac{\eta_{\rm C}-\eta}{\|{\bf B}_0\|^2}. \end{equation} \noindent The quantities $\eta$ and $\eta_{\rm C}$ correspond to the magnetic diffusivities longitudinal and perpendicular to the magnetic field lines, respectively. For a fully ionized plasma, $\eta_{\rm C}=\eta$, there is no ambipolar diffusion, and magnetic diffusion is isotropic. Due to the presence of neutrals, $\eta_{\rm C} \gg \eta$, meaning that perpendicular magnetic diffusion is much more efficient than longitudinal magnetic diffusion in a partially ionized plasma. It is important to note that $\eta_{\rm C}\gg\eta$ even for a small relative density of neutrals. \subsection{Partial ionization effects in a homogeneous and unbounded prominence medium} \subsubsection{Time damping of magnetohydrodynamic waves} Several studies have considered the damping of MHD waves in partially ionized plasmas of the solar atmosphere \citep{depontieu01,James03,Khodachenko04,Leake05}. In the context of solar prominences, \cite{Forteza07} derived the full set of MHD equations for a partially ionized, one-fluid hydrogen plasma and applied them to the study of the time damping of linear, adiabatic fast and slow magnetoacoustic waves in an unbounded prominence medium. A partially ionized plasma can be represented as a single fluid in the strong coupling approximation, which is valid when the ion density in the plasma is small and the collision time between neutrals and ions is short compared with the other timescales of the problem. Using this approximation, we can describe the very low frequency and large-scale fluid-like behavior of plasmas \citep{Goossens03}. The study by \citet{Forteza07} was later extended to the non-adiabatic case, including thermal conduction by neutrals and electrons and radiative losses \citep{Forteza08}.
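The relation above between the ambipolar and Cowling diffusivities can be illustrated with a trivial numerical sketch; the diffusivity and field values below are assumed placeholders, not measured prominence quantities:

```python
def ambipolar_diffusivity(eta_C, eta, B0):
    """eta_A = (eta_C - eta) / |B0|^2, as in the equation above (SI units)."""
    return (eta_C - eta) / B0 ** 2

B0 = 1e-3                                   # 10 G expressed in tesla
# Fully ionized plasma: eta_C = eta, so ambipolar diffusion vanishes.
print(ambipolar_diffusivity(1e3, 1e3, B0))  # 0.0
# Partially ionized plasma (illustrative values): eta_C >> eta, so eta_A > 0
# and perpendicular (Cowling) diffusion dominates longitudinal diffusion.
print(ambipolar_diffusivity(1e8, 1e3, B0) > 0)
```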
The most important results obtained by \cite{Forteza07} have been summarized recently by \cite{Oliver09} and \citet{Ballester10}. \cite{Forteza07} consider a uniform and unbounded prominence plasma and find that ion-neutral collisions are more important for fast waves, for which the ratio of the damping time to the period is in the range 1 to 10$^5$, than for slow waves, for which values between 10$^4$ and 10$^8$ are obtained. Fast waves are efficiently damped for moderate values of the ionization fraction, while in a nearly fully ionized plasma the small amount of neutrals is insufficient to damp the perturbations. In the above studies a pure hydrogen plasma was considered; however, $90\%$ of the prominence chemical composition is hydrogen, while the remaining $10\%$ is helium. Therefore, it is of great interest to know the effect of the presence of helium on the behavior of magnetohydrodynamic waves in a partially ionized plasma with prominence physical properties. This study has been carried out by \citet{Soler09helium} in a medium like that considered by \citet{Forteza08}, but composed of hydrogen and helium. The species present in the medium are electrons, protons, neutral hydrogen, neutral helium (HeI), and singly ionized helium (HeII), while the presence of HeIII is negligible \citep{GL09}. Under such conditions, the basic MHD equations for a non-adiabatic, partially ionized, single-fluid plasma have been generalized. The hydrogen ionization degree is characterized by $\tilde{\mu}_{\rm H}$, which varies between $0.5$, for fully ionized hydrogen, and $1$, for fully neutral hydrogen. The helium ionization degree is characterized by $\delta_{\rm He} = \frac{\xi_{{\rm HeII}}}{\xi_{{\rm HeI}}}$, where $\xi_{{\rm HeII}}$ and $\xi_{{\rm HeI}}$ denote the relative densities of singly ionized and neutral helium, respectively.
Figure~\ref{fig:mhdwaves} displays $\tau_D/P$ as a function of $k$ for the Alfv\'en, fast, and slow waves, and the results corresponding to several helium abundances are compared for hydrogen and helium ionization degrees of $\tilde{\mu}_{\rm H} = 0.8$ and $\delta_{\rm He}=0.1$, respectively. We can observe that the presence of helium has a minor effect on the results. In the case of Alfv\'en and fast waves (Figs.~\ref{fig:mhdwaves}a,b), there is a critical wavenumber, $k_{\rm c}^{\rm a}$, at which the real part of the frequency becomes zero. In a partially ionized plasma, this critical wavenumber is given by \begin{equation} k_{\rm c}^{\rm a} = \frac{2 v_{\mathrm{A}}}{\cos \theta( \eta_C + \eta \tan^2 \theta)}, \end{equation} \noindent where $v_{\mathrm{A}}$ is the Alfv\'en speed and $\theta$ the angle between the wavevector and the equilibrium magnetic field. Cowling's diffusivity, $\eta_C$, depends on the fraction of neutrals and on the collisional frequencies between electrons, ions and neutrals. For a fully ionized plasma both diffusivities take the same numerical value, and in the ideal fully ionized case this value is zero, so the critical wavenumber goes to infinity. When partially ionized plasmas are considered, wavenumbers greater than the critical value only produce purely damped perturbations. Since Cowling's diffusivity is larger in the presence of helium because of additional collisions of neutral and singly ionized helium species, $k_{\rm c}^{\rm a}$ is shifted toward slightly lower values than when only hydrogen is considered, so the larger $\xi_{{\rm HeI}}$, the smaller $k_{\rm c}^{\rm a}$. In the case of the slow wave (Fig.~\ref{fig:mhdwaves}c), the maximum and the right-hand minimum of $\tau_D/P$ are also slightly shifted toward lower values of $k$.
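For orientation, the expression for $k_{\rm c}^{\rm a}$ above can be evaluated with assumed, illustrative numbers (they are not taken from \citealt{Soler09helium}); the only point is that a larger Cowling diffusivity, as produced by additional neutral species, lowers the critical wavenumber:

```python
import math

def critical_wavenumber(vA, eta_C, eta, theta):
    """k_c^a = 2*vA / (cos(theta) * (eta_C + eta*tan(theta)**2)), SI units."""
    return 2.0 * vA / (math.cos(theta) * (eta_C + eta * math.tan(theta) ** 2))

vA = 1.26e5            # m/s, Alfven speed for B0 = 10 G and rho0 = 5e-11 kg/m3
theta = math.pi / 4.0
# Illustrative Cowling diffusivities: adding helium (more neutral species)
# increases eta_C, which shifts k_c^a toward lower values:
k_low_etaC = critical_wavenumber(vA, 1e7, 1e3, theta)
k_high_etaC = critical_wavenumber(vA, 1e8, 1e3, theta)
print(k_high_etaC < k_low_etaC)   # True
```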
Previous results from \citet{Carbonell04} and \citet{Forteza08} suggest that thermal conduction is responsible for these maxima and minima of $\tau_D/P$. The additional contribution of neutral helium atoms to thermal conduction produces this displacement of the curve of $\tau_D/P$. In the case of Alfv\'en and fast waves, this effect is not very important. Although \citet{GL09} suggest that for the central prominence temperature a realistic ratio of $\xi_{{\rm HeII}}$ to $\xi_{{\rm HeI}}$ is $10\%$, in Figure~\ref{fig:mhdwaves}, for the sake of comparison, the results for $\xi_{{\rm HeI}} = 10\%$ and $\delta_{\rm He} = 0.5$ have also been plotted. \begin{figure*}[!t] \includegraphics[width=0.5\textwidth]{AB10f05a.eps} \includegraphics[width=0.5\textwidth]{AB10f05b.eps}\\ \includegraphics[width=0.5\textwidth]{AB10f05c.eps} \includegraphics[width=0.5\textwidth]{AB10f05d.eps} \caption{($a$)--($c$): Ratio of the damping time to the period, $\tau_D/P$, versus the wavenumber, $k$, corresponding to the Alfv\'en wave, fast wave, and slow wave, respectively. ($d$) Damping time, $\tau_{\mathrm{d}}$, of the thermal wave versus the wavenumber, $k$. The different linestyles represent: $\xi_{{\rm HeI}} = 0\%$ (solid line), $\xi_{{\rm HeI}} = 10\%$ (dotted line), and $\xi_{{\rm HeI}} = 20\%$ (dashed line). In all computations, $\tilde{\mu}_{\rm H} = 0.8$, $\delta_{\rm He} = 0.1$, and the angle between the wavevector and the equilibrium magnetic field is $\theta=\pi/4$. The results for $\xi_{{\rm HeI}} = 10\%$ and $\delta_{\rm He} = 0.5$ are plotted by means of symbols for comparison. The shaded regions correspond to the range of typically observed wavelengths of prominence oscillations. Adapted from \citet{Soler09helium}.} \label{fig:mhdwaves} \end{figure*} Finally, the thermal mode has been considered.
Since it is a purely damped, non-propagating disturbance ($\omega_{\rm R} = 0$), we only plot the damping time, $\tau_D$, as a function of $k$ for $\tilde{\mu}_{\rm H} = 0.8$ and $\delta_{\rm He}=0.1$ (Fig.~\ref{fig:mhdwaves}d). We observe that the effect of helium is different in two ranges of $k$. For $k > 10^{-4}$~m$^{-1}$, thermal conduction is the dominant damping mechanism, so the larger the amount of helium, the smaller $\tau_D$, because of the enhanced thermal conduction by neutral helium atoms. On the other hand, radiative losses are more relevant for $k < 10^{-4}$~m$^{-1}$. In this region, the thermal mode damping time grows as the helium abundance increases. Since these variations in the damping time are very small, we again conclude that the damping time obtained in the absence of helium does not significantly change when helium is taken into account. In summary, the study by \citet{Soler09helium} points out that the consideration of neutral or singly ionized helium in partially ionized prominence plasmas does not modify the behavior of linear, adiabatic or non-adiabatic MHD waves already found by \citet{Forteza07} and \citet{Forteza08}. \subsubsection{Spatial damping of magnetohydrodynamic waves} \citet{Terradas02} analyzed small amplitude oscillations in a polar crown prominence and reported the presence of a plane propagating wave as well as a standing wave. In the case of the propagating wave, which was interpreted as a slow MHD wave, the amplitude of the oscillations decreased substantially after a distance of $2$--$5 \times 10^4$ km from the location where the wave motion was being generated. This distance can be considered as a typical spatial damping length, $L_\mathrm{d}$, of the oscillations. On the other hand, a typical feature of prominence oscillations is the presence of flows, which are observed in H$_\alpha$, UV and EUV lines \citep{Labrosse10}.
In H$_\alpha$ quiescent filaments, the observed velocities range from $5$ to $20$ km s$^{-1}$ \citep{ZEM98, Lin03, Lin07} and, because of the physical conditions of the filament plasma, they seem to be field-aligned. Recently, observations made with Hinode/SOT by \citet{Okamoto07} reported the presence of synchronous vertical oscillatory motions in the threads of an active region prominence, together with the presence of flows along the same threads. However, in limb prominences different kinds of flows are observed; for instance, observations made by \citet{Berger08} with Hinode/SOT have revealed a complex dynamics with vertical downflows and upflows. The spatial damping of magnetohydrodynamic waves in a homogeneous and unbounded fully ionized plasma was studied by \citet{Carbonell06}. Recently, the spatial damping of linear non-adiabatic magnetohydrodynamic waves in a homogeneous, unbounded, magnetized and flowing partially ionized plasma has been studied by \citet{Carbonell10}. Since a medium with physical properties akin to those of a solar prominence is considered, the density is $\rho_\mathrm{0} = 5 \times 10^{-11}$ kg m$^{-3}$, the temperature $T_\mathrm{0} = 8000$ K, the magnetic field $\vert {\bf B}_\mathrm{0} \vert = 10$ G, and, in general, a field-aligned flow with $v_\mathrm{0} = 10$ km s$^{-1}$ has been considered. The dispersion relation for Alfv\'en waves with a background flow is given by: \begin{eqnarray} iv_\mathrm{0}(\eta_{\rm C} \cos^{2} \theta + \eta \sin^{2} \theta)k^{3} \nonumber \\ + \left[(v_\mathrm{0}^{2}-v_\mathrm{A}^{2}-i \eta_{\rm C} \omega)\cos \theta-i \eta \omega \sin \theta \tan \theta\right]k^{2} - 2 \omega v_\mathrm{0}k\nonumber{}\\ + \omega^{2} \sec \theta=0, \label{disp_alf5} \end{eqnarray} \noindent with $\theta$ the angle between the wavevector and the equilibrium magnetic field.
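Equation~(\ref{disp_alf5}) is a cubic in $k$ with complex coefficients, so for a given real $\omega$ its three spatial roots can be obtained numerically. A minimal, dimensionless sketch follows; the parameter values are purely illustrative (not those of \citealt{Carbonell10}), and the root finder is a generic Durand--Kerner iteration rather than whatever scheme the original study used:

```python
import math

def horner(coeffs, x):
    """Evaluate a polynomial (highest-degree coefficient first) at complex x."""
    r = 0j
    for c in coeffs:
        r = r * x + c
    return r

def durand_kerner(coeffs, iters=200):
    """All complex roots of a polynomial via the Durand-Kerner iteration."""
    n = len(coeffs) - 1
    monic = [c / coeffs[0] for c in coeffs]
    roots = [(0.4 + 0.9j) ** (i + 1) for i in range(n)]
    for _ in range(iters):
        new_roots = []
        for i, r in enumerate(roots):
            den = 1 + 0j
            for j, s in enumerate(roots):
                if j != i:
                    den *= r - s
            new_roots.append(r - horner(monic, r) / den)
        roots = new_roots
    return roots

# Dimensionless, purely illustrative parameters (speeds in units of vA,
# frequencies in units of omega):
v0, vA = 0.1, 1.0          # flow and Alfven speeds
eta, eta_C = 0.01, 0.5     # longitudinal and Cowling diffusivities
theta = math.pi / 4        # propagation angle
omega = 1.0                # real frequency (spatial-damping problem)

# Coefficients of Eq. (disp_alf5), highest power of k first:
a3 = 1j * v0 * (eta_C * math.cos(theta) ** 2 + eta * math.sin(theta) ** 2)
a2 = (v0 ** 2 - vA ** 2 - 1j * eta_C * omega) * math.cos(theta) \
     - 1j * eta * omega * math.sin(theta) * math.tan(theta)
a1 = -2.0 * omega * v0
a0 = omega ** 2 / math.cos(theta)

ks = durand_kerner([a3, a2, a1, a0])
for k in ks:  # three spatially damped Alfven waves; Im(k) gives 1/L_d
    print(k, abs(horner([a3, a2, a1, a0], k)))
```

The cubic (rather than quadratic) degree comes from the joint presence of flow and resistivity, which is why a third, strongly damped Alfv\'en root appears.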
The increase in the degree of the dispersion relation, with respect to the typical dispersion relation for Alfv\'en waves, is produced by the joint presence of flow and resistivities. In this case, we should obtain three propagating Alfv\'en waves, and Figure~\ref{f01} shows the numerical solution of dispersion relation (\ref{disp_alf5}). For the entire interval of periods considered, an additional, strongly damped Alfv\'en wave appears, whereas the other two Alfv\'en waves are efficiently damped only for periods below $1$ s. However, within the interval of periods typically observed in prominence oscillations, these waves are only efficiently attenuated when almost neutral plasmas are considered. \begin{figure}[!t] \includegraphics[width=0.5\textwidth]{AB10f06a.eps} \includegraphics[width=0.5\textwidth]{AB10f06b.eps}\\ \begin{minipage}{0.5\textwidth} \includegraphics[width=\textwidth]{AB10f06c.eps} \end{minipage} \hspace{0.5cm} \begin{minipage}{0.45\textwidth} \caption{Damping length, wavelength, and ratio of the damping length to the wavelength versus period for the three (solid, dashed, dotted) Alfv\'en waves in a partially ionized plasma with an ionization degree $\tilde{\mu}=0.8$ and with a background flow of $10$ km s$^{-1}$. In all the panels, the shaded region corresponds to the interval of observed periods in prominence oscillations.
\label{f01}} \end{minipage} \end{figure} The dispersion relation for thermal and magnetoacoustic waves in the presence of a background flow is given by \begin{eqnarray} (\Omega^{2} -k^{2} \Lambda^{2}) (ik^{2} \eta_\mathrm{C} \Omega-\Omega^{2})+k^{2} v_\mathrm{A}^{2}(\Omega^{2} -k_{x}^{2} \Lambda^{2} )+i k^{2} k_{z}^{2}v_\mathrm{A}^{2} \Lambda^{2} \Xi \rho_{0} \Omega= 0, \label{disp_mag} \end{eqnarray} where $\Lambda^{2}$ is the non-adiabatic sound speed squared \citep{Forteza08, Soler08} and is defined as \begin{eqnarray} \Lambda^{2} = \frac{\frac{T_{0}}{\rho_{0}} A - H +ic_\mathrm{s}^{2} \Omega}{\frac{T_{0}}{p_{0}}A+i \Omega}, \label{nass} \end{eqnarray} where $A$ and $H$, including optically thin radiative losses, thermal conduction by electrons and neutrals, and a constant heating per unit volume, were defined in \citet{Forteza08}. When Equation~(\ref{disp_mag}) is expanded, it becomes a seventh degree polynomial in the wavenumber $k$, whose solutions are three propagating fast waves, two slow waves and two thermal waves. Figure~\ref{f23} displays the behavior of the damping length, wavelength and ratio of damping length to wavelength for two of the three fast waves and the slow waves. The curves for the third fast wave, which owes its existence to the joint action of flow and resistivity, are similar to those corresponding to the strongly damped Alfv\'en wave in Figure~\ref{f01}. The most interesting results are those related to the ratio $L_\mathrm{d}/\lambda$. For fast waves, this ratio decreases with the period, becoming small for periods below $10^{-2}$ s, while for one of the slow waves the ratio becomes very small for periods typically observed in prominence oscillations. When the ionization is decreased, slight changes in the behavior described above occur, the most important being the displacement towards longer periods of the peak of most efficient damping corresponding to slow waves.
\begin{figure*} \includegraphics[width=0.5\textwidth]{AB10f07a.eps} \includegraphics[width=0.5\textwidth]{AB10f07b.eps} \caption{Damping length, wavelength, and ratio of the damping length to the wavelength versus period for the non-adiabatic fast (left panels) and slow (right panels) waves in a partially ionized plasma with an ionization degree $\tilde{\mu}=0.8$ (solid) and $\tilde{\mu}=0.95$ (dashed). The flow speed is $10$ km s$^{-1}$.} \label{f23} \end{figure*} In summary, Alfv\'en waves in a partially ionized plasma can be spatially damped. When the ionization decreases, the damping length of these waves also decreases and the efficiency of their spatial damping in the range of periods of interest is improved, although the most efficient damping is attained for periods below $1$ s. A new feature is that, when a flow is present, a third, strongly attenuated Alfv\'en wave appears. The presence of this wave depends on the joint action of flow and resistivities, and it could only be detected by an observer external to the flow. In the case of non-adiabatic magnetoacoustic waves, when partial ionization is present the behavior of fast, slow and thermal waves is strongly modified. In particular, the damping length of a fast wave in a partially ionized plasma is strongly diminished by thermal conduction by neutrals for periods between $0.01$ and $100$ s, and, at the same time, the radiative plateau present in fully ionized ideal plasmas almost disappears. The behavior of slow waves is not so strongly modified as that of fast waves, although thermal conduction by neutrals also diminishes the damping length for periods below $10$ s, and a short radiative plateau still remains for periods between $10$ and $1000$ s. Thermal waves are only slightly modified, although the effect of partial ionization is to increase the damping length of these waves, just the opposite of what happens with the other waves.
Next, when a background flow is included, a new third fast wave appears which, again, is due to the joint action of flow and resistivities. Also, in the presence of flow, wavelengths and damping lengths are modified, and since, for slow waves, the sound speed and the observed flow speeds are comparable, the changes in wavelength and damping length are significant, leading to an improvement in the efficiency of the damping. Moreover, the maximum of efficiency is displaced towards longer periods when the ionization decreases, and for ionization fractions from $0.8$ to $0.95$ it is clearly located within the range of periods typically observed in prominence oscillations, with a value of $L_\mathrm{d}/\lambda$ smaller than $1$. This means that for a typical period of $10^{3}$ s, the damping length would be between $10^{2}$ and $10^{3}$ km and the wavelength around $10^{3}$ km; as a consequence, in a distance smaller than a wavelength the slow wave would be strongly attenuated. In conclusion, the joint effect of non-adiabaticity, flows and partial ionization allows slow waves to be damped efficiently within the interval of periods typically observed in prominences. \subsection{Partial ionization effects in a cylindrical filament thread model} \label{robertopicyl} Recently, \cite{Soler09picyl} have applied the equations derived by \cite{Forteza07} to the study of MHD waves and their time damping in a partially ionized filament thread. The adopted thread model is the classic one-dimensional magnetic flux tube, with prominence conditions, embedded in an unbounded medium with coronal conditions. A uniform magnetic field, ${\bf B}_0$, pointing along the axis of the tube, is considered. As in \cite{Forteza07}, the one-fluid approximation for a hydrogen plasma is adopted. The internal and external media are characterized by their densities, temperatures, and the relative densities of neutrals, ions, and electrons. The contribution of the latter is neglected.
The ionization fraction is thus defined as $\tilde{\mu}=1/(1+\zeta_i)$, with $\zeta_i$ the relative density of ions. For a fully ionized plasma $\tilde{\mu}=0.5$ ($\zeta_i=1$), while for a neutral plasma $\tilde{\mu}=1$ ($\zeta_i=0$). The external coronal medium is considered as fully ionized, while the ionization fraction in the internal filament plasma is allowed to vary in this range. Note that any value of $\tilde{\mu}$ outside this range is physically meaningless. In their analysis \cite{Soler09picyl} neglect Hall's term, which is important only for frequencies larger than $\sim$ 10$^4$ Hz, much larger than the observed frequencies of prominence oscillations (for a dimensional analysis that further justifies this approximation see \citealt{Soler09rapi}). After Fourier analyzing the linear MHD wave equations by assuming perturbations of the form $\exp(i \omega t+i m \varphi -i k_z z)$, \cite{Soler09picyl} note that terms with $\eta_{\rm C}$ appear accompanied by longitudinal derivatives, while terms with $\eta$ correspond to radial and azimuthal derivatives. By defining $L_{\eta_{\rm C}}$ and $L_{\eta}$ as typical length-scales parallel and perpendicular to the magnetic field, the first is associated with the wavelength of perturbations in the longitudinal direction, $\lambda_z\sim 2\pi/k_z$, while the second is related to the radius of the structure, $a$. This allows us to define the corresponding Reynolds numbers in the parallel and perpendicular directions as $R_{m\parallel}=c_{sf} a/\eta$ and $R_{m\perp}=4\pi^2 c_{sf}/(\eta_{\rm C} k^2_z a)$, where the typical velocity scale has been associated with the sound speed in the filament, $c_{sf}$. The parallel Reynolds number is independent of the wavenumber, while the relative importance of Cowling's diffusion increases with $k_z$. A simple calculation reveals that Cowling diffusion is dominant for the cases of interest. Consider, for instance, $k_za=1$, $a=100$ km, and $\tilde{\mu}_f=0.8$.
Then $R_{m\parallel}=7\cdot 10^6$ and $R_{m\perp}=4\cdot 10^2$. The wavelength for which ohmic and Cowling's diffusion are of equal importance can be estimated to be $\lambda\sim[5\cdot 10^3-10^5]$ km, for $a\in[75-375]$ km. In the range of observed wavelengths ($k_za\sim[10^{-3}-10^{-1}]$) both Cowling's and ohmic diffusion could therefore be important. \cite{Soler09picyl} analyze separately the effects of partial ionization on Alfv\'en, fast kink, and slow waves. \begin{figure*} \includegraphics[width=\textwidth]{AB10f08top.eps}\\ \includegraphics[width=\textwidth]{AB10f08middle.eps}\\ \includegraphics[width=\textwidth]{AB10f08bottom.eps} \caption{Phase speed (left panels) in units of the Alfv\'en speed, kink speed and internal cusp speed from top to bottom, and ratio of the damping time to the period (right panels) as a function of $k_z a$ for Alfv\'en (top panels), kink (middle panels) and slow (bottom panels) waves. In all panels different linestyles represent different ionization degrees: $\tilde{\mu}_f=0.5$ (dotted), $\tilde{\mu}_f=0.6$ (dashed), $\tilde{\mu}_f=0.8$ (solid), and $\tilde{\mu}_f=0.95$ (dash-dotted). Symbols are the approximate solution given by Equation (36) in \citet{Soler09picyl} for $\tilde{\mu}_f=0.8$. The shaded zones correspond to the range of typically observed wavelengths of prominence oscillations. Adapted from \citet{Soler09picyl}. } \label{figpicyl1} \end{figure*} Alfv\'en waves are incompressible disturbances with velocity perturbations polarized in the azimuthal direction. In a resistive medium these velocity perturbations are not strictly confined to magnetic surfaces, but have a global nature \citep[see e.g.,][]{FerraroPlumpton61}.
\citet{Soler09picyl} show that, by considering solutions with $m=0$ (no azimuthal dependence) and defining a modified Alfv\'en speed squared as $\Gamma^2_A=v^2_A+\imath\omega\eta_{\rm C}$, with $v_A$ the ideal Alfv\'en speed, the azimuthal components of the momentum and induction equations can be combined to obtain a Bessel equation of order one for the perturbed magnetic field component in the azimuthal direction. By solving this equation, the phase speed and damping rate of Alfv\'en waves can be studied as a function of the wavelength of perturbations for different ionization degrees and values of the ohmic dissipation. It turns out that Alfv\'en wave propagation is constrained between two critical wavenumber values. These critical wavenumbers are, however, outside the range that corresponds to the observed wavelengths (Fig.~\ref{figpicyl1} top). The smaller critical wavenumber is found to be insensitive to the ionization fraction, while the larger critical wavenumber strongly depends on this parameter. The obtained values of the damping time over the period are independent of the ionization degree for small wavenumber values, while they are affected for large wavenumber values. They are found to be between 10 and 10$^2$ times the corresponding period, in the range of typically observed wavelengths. By considering solutions to the dispersion relation obtained by neglecting separately one of the two possible damping mechanisms, i.e., partial ionization and ohmic dissipation, \citet{Soler09picyl} observe that the presence of neutrals has an important effect on the damping time for large wavenumbers, while ohmic diffusion dominates for small wavenumbers (Fig.~\ref{figpicyl2}a).
\begin{figure}[!t] \includegraphics[width=0.5\textwidth]{AB10f09a.eps} \includegraphics[width=0.5\textwidth]{AB10f09b.eps}\\ \begin{minipage}{0.5\textwidth} \includegraphics[width=\textwidth]{AB10f09c.eps} \end{minipage} \hspace{0.5cm} \begin{minipage}{0.45\textwidth} \caption{Ratio of the damping time to the period as a function of $k_z a$ corresponding to ($a$) the Alfv\'en mode, ($b$), the kink mode, and ($c$) the slow mode, with $\tilde{\mu}_f=0.8$. Solid lines are the complete solution considering all terms in the induction equation. The dotted and dashed lines are the results obtained by neglecting ohmic diffusion ($\eta=0$) or ion-neutral collisions ($\eta_{\rm C}=\eta$), respectively. The vertical dot-dashed lines indicate the analytically estimated transitional wavenumber values between both regimes of dominance. Adapted from \citet{Soler09picyl}. \label{figpicyl2}} \end{minipage} \end{figure} The propagation of transverse kink waves is also found to be constrained by two critical wavenumbers (Fig.~\ref{figpicyl1} middle), a result that can be easily understood by noting that both in the short- and long-wavelength regimes the asymptotic approximations for the kink mode frequency directly depend on the Alfv\'en speed. Within the relevant range of observed wavelengths, the phase speed closely corresponds to the ideal counterpart, $c_k=\omega/k_z$, so non-ideal effects are irrelevant to wave propagation for the observed wavelengths. The behavior of the damping rate as a function of wavelength and ionization fraction is seen to closely resemble the result obtained for Alfv\'en waves, with values of $\tau_d/P$ above observed values, in the range of observed wavelengths. Therefore, neither ohmic diffusion nor ion-neutral collisions seem to provide damping times as short as those observed in transverse kink waves in filament threads. 
Only for an almost neutral plasma, with $\tilde{\mu}_f > 0.95$, and very short wavelengths are the obtained damping rates compatible with the observed time-scales. As for Alfv\'en waves, similar results are obtained for the kink mode when comparing the damping rates obtained by neglecting ohmic diffusion or ion-neutral collisions separately (Fig.~\ref{figpicyl2}b). Ohmic diffusion dominates for small wavenumbers, while ion-neutral collisions are the dominant damping mechanism for large values of $k_za$. Finally, \citet{Soler09picyl} analyze the propagation properties and the damping by ion-neutral collisions for slow waves. The analysis concentrates on the radially fundamental mode with $m=1$, since the behavior of the slow mode is weakly affected by the value of the azimuthal wavenumber. It turns out that slow wave propagation is constrained by one critical wavenumber, which strongly depends on the ionization fraction, in such a way that for $k_z<k_{z,\rm crit}$ the wave is totally damped. More importantly, for large enough values of $\tilde{\mu}$, the corresponding critical wavelength lies in the range of observed wavelengths of filament oscillations (Fig.~\ref{figpicyl1} bottom). As a consequence, the slow wave might not propagate in realistic thin filament threads. By computing the damping rate, it is found that ion-neutral collisions are a relevant damping mechanism for slow waves, since very short damping times are obtained, especially close to the critical wavenumber. By comparing the separate contributions of ohmic diffusion and ion-neutral collisions, the slow mode damping is seen to be completely dominated by ion-neutral collisions (Fig.~\ref{figpicyl2}c). Ohmic diffusion is found to be irrelevant, since the presence of the critical wavenumber prevents slow wave propagation for small wavenumbers, where ohmic diffusion would start to dominate.
\section{Resonant damping of filament thread oscillations}\label{resonantdamping} As discussed in the previous sections, non-adiabatic MHD waves and partial ionization seem able to explain the time damping of oscillations in the case of slow waves. The question remains as to what physical mechanism(s) could be responsible for the rapid time damping of transverse kink waves in thin filament threads, which seem to be rather insensitive to thermal effects. Partial ionization could be relevant, in view of the results obtained by \cite{Soler09picyl}, but the ratios of the damping time to the period obtained for a cylindrical filament thread are still one or two orders of magnitude larger than those observed. The phenomenon of resonant wave damping in non-uniform media has provided a plausible explanation in the context of quickly damped transverse coronal loop oscillations \citep{GAA02,Goossens08, Goossens10}. This mechanism relies on the coupling and energy transfer between oscillatory modes due to the spatial variation of physical properties, which in turn defines a non-uniform Alfv\'en and/or cusp speed. Because of the highly inhomogeneous nature of quiescent filament threads at their transverse spatial scales, it is natural to believe that resonant damping must be operative whenever transverse kink waves propagate along these structures. This idea was put forward by \cite{Arregui08thread}. The analysis by \cite{Arregui08thread} is restricted to the damping of kink oscillations due to the resonant coupling to Alfv\'en waves in a pressureless (zero plasma-$\beta$) plasma. It has been extended to the case in which both the Alfv\'en and the slow resonances are present by \cite{Soler09slowcont}. The main results from these two studies are summarized below.
\subsection{Resonant damping in the Alfv\'en continuum}\label{arregui-alfven} Given the relatively simple structure of filament threads, when compared to the full prominence/filament system, the magnetic and plasma configuration of an individual and isolated thread can be theoretically approximated using a rather simplified model. \cite{Arregui08thread} make use of a one-dimensional, straight flux tube model in a gravity-free environment. In a system of cylindrical coordinates ($r$, $\varphi$, $z$), with the $z$-axis coinciding with the axis of the tube, the uniform magnetic field is pointing in the $z$-direction, ${\bf B}=B\mbox{$\hat{{\bf e}}_{\rm z}$}$. As gas pressure is neglected, slow modes are absent, and the oscillatory properties of the remaining fast and Alfv\'en MHD waves and their mutual interaction are analyzed. In particular, the analysis concentrates on the fundamental kink mode. In such a straight field configuration the zero plasma-$\beta$ approximation implies that the field strength is uniform and that the density profile can be chosen arbitrarily. The non-uniform filament thread is then modeled as a density enhancement with a one-dimensional non-uniform transverse distribution of density, $\rho(r)$, across the structure. The internal filament plasma, with uniform density, $\rho_f$, occupies the full length of the tube and is connected to the coronal medium, with uniform density, $\rho_c$, by means of a non-uniform transitional layer of thickness $l$. If $a$ denotes the radius of the tube, the ratio $l/a$ provides a measure of the transverse inhomogeneity length-scale, which can vary between $l/a=0$ (homogeneous thread) and $l/a=2$ (fully non-uniform thread).
The explicit expression for the density profile used by \cite{Arregui08thread} is \begin{equation} \rho_{0}\left(r\right)=\left\{\begin{array}{clc} \rho_f,&{\rm if}&r\le a - l/2, \\ \rho_{\rm tr}\left(r\right),&{\rm if}&a-l/2<r<a+l/2,\\ \rho_c,&{\rm if}&r\geq a+l/2, \\ \end{array} \right. \label{rhor} \end{equation} with \begin{equation} \rho_{\rm tr}\left(r\right)=\frac{\rho_f}{2}\left\{\left(1+\frac{\rho_c}{\rho_f}\right) - \left( 1-\frac{\rho_c}{\rho_f}\right)\sin \left[\frac{\pi}{l}\left( r-a\right)\right]\right\}.\label{rhotrans} \end{equation} \noindent Typical values for the filament and coronal densities are $\rho_f=5\times10^{-11}~{\rm kg}~{\rm m}^{-3}$ and $\rho_c=2.5\times 10^{-13}~{\rm kg}~{\rm m}^{-3}$, the density contrast between the filament and coronal plasma being $\rho_f/\rho_c=200$. Observations of transverse oscillations in filament threads can be interpreted in terms of linear kink waves. When considering perturbations of the form $f(r)\exp\left[i \left(\omega t+m\varphi-k_z z\right)\right]$, with $m$ and $k_z$ the azimuthal and longitudinal wavenumbers and $\omega$ the oscillatory frequency, the fundamental kink mode has $m=1$ and produces the transverse displacement of the tube as it propagates along the density enhancement. This mode is therefore consistent with the detected Doppler velocity measurements and the associated transverse swaying motions of the threads \citep{Lin07,Lin09}. In the absence of a non-uniform transitional layer, i.e., $l/a=0$, the density is uniform in the internal and external regions and changes discontinuously at the tube radius $a$. Then, a well-known dispersion relation is obtained by imposing the continuity of the radial displacement, $\xi_r$, and the total pressure perturbation, $p_T$, at $r=a$.
This dispersion relation (Edwin \& Roberts 1983) is \begin{equation} D_m(\omega,k_z)=\frac{\xi_{r,e}}{P'_{T,e}}-\frac{\xi_{r,i}}{P'_{T,i}}=0, \end{equation} \noindent where the indices ``i'' and ``e'' refer to internal and external, respectively, and the prime denotes a derivative with respect to the radial direction. In the commonly used thin tube or long wavelength approximation ($k_za\ll 1$), the kink mode frequency can be calculated explicitly as \begin{equation}\label{kinkfrequency} \omega_k=k_z\sqrt{\frac{\rho_f v^2_{Af}+\rho_c v^2_{Ac}}{\rho_f+\rho_c}}, \end{equation} \noindent with $v_{Af,c}=B/\sqrt{\mu\rho_{f,c}}$ the filament and coronal Alfv\'en velocities. The period of kink oscillations with a wavelength $\lambda=2\pi/k_z$ can be written, in terms of the density contrast, as \begin{equation} P=\frac{\sqrt{2}}{2}\frac{\lambda}{V_{Af}} \left(\frac{1+c}{c}\right)^{1/2}.\label{period} \end{equation} \noindent Note that the factor containing the density contrast varies between $\sqrt{2}$ and $1$ when $c$ is allowed to vary between a value slightly larger than $1$ (extremely tenuous thread) and $c\rightarrow\infty$. This has consequences for the application of the model to prominence seismology (see Sect. 6). According to Equation~(\ref{kinkfrequency}), the eigenfrequency of the fundamental kink mode lies between the internal and external Alfv\'en frequencies, $\omega_{Af}<\omega_k<\omega_{Ac}$, with $\omega_{Af,c}=k_z v_{Af,c}$. This means that when the discontinuous jump in density is replaced by a continuous variation in a non-uniform layer of thickness $l$, going from $\rho_f$ to $\rho_c$, the fundamental kink mode has its eigenfrequency in the Alfv\'en continuum, and thus couples to an Alfv\'en continuum mode. This results in a transfer of wave energy from the transverse motion of global nature to azimuthal motions of localized nature, and hence in the time damping of the kink mode.
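As a numerical illustration of Equations~(\ref{kinkfrequency}) and (\ref{period}), the following Python sketch evaluates the kink frequency and period for the typical densities quoted above; the field strength of $5$ G and the wavelength of $3000$ km are assumed, illustrative values:

```python
import numpy as np

mu0 = 4e-7 * np.pi
rho_f, rho_c = 5e-11, 2.5e-13      # kg/m^3, density contrast c = 200
B = 5e-4                           # ASSUMED field strength, 5 G
vAf = B / np.sqrt(mu0 * rho_f)     # filament Alfven speed
vAc = B / np.sqrt(mu0 * rho_c)     # coronal Alfven speed

lam = 3e6                          # ASSUMED wavelength, 3000 km
kz = 2 * np.pi / lam

# Thin-tube kink frequency, Eq. (kinkfrequency)
omega_k = kz * np.sqrt((rho_f * vAf**2 + rho_c * vAc**2) / (rho_f + rho_c))

# Equivalent period formula, Eq. (period)
c = rho_f / rho_c
P = (np.sqrt(2) / 2) * (lam / vAf) * np.sqrt((1 + c) / c)

print(f"kink speed = {omega_k / kz / 1e3:.1f} km/s, period = {P:.1f} s")
```

Since the field is uniform, $\rho_f v^2_{Af}=\rho_c v^2_{Ac}=B^2/\mu$, so both expressions for the period agree exactly, and the density-contrast factor $[(1+c)/c]^{1/2}$ indeed stays between $1$ and $\sqrt{2}$.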
Asymptotic analytical expressions for the damping time, $\tau_d$, can be obtained under the assumption that the transverse inhomogeneity length-scale is small ($l/a \ll 1$). This is the so-called thin boundary approximation, which assumes that the thickness of the dissipative layer, $\delta_{\rm A}$, coincides with the width of the inhomogeneous transitional layer, a condition approximately verified for thin layers, which makes the approximation very accurate in that regime. The method was outlined by, e.g., \cite{sakurai91,goossens95,TIGO96} and makes use of jump conditions to obtain analytical expressions for the dispersion relation. For the purposes of this description it will suffice to write down these conditions. In a straight and constant magnetic field, the jump conditions at the Alfv\'en resonance are \begin{equation} [\xi_r]=-\imath\pi\frac{m^2/r^2_A}{\omega^2_A|\partial_r\rho_0|_A}p_T, \mbox{\hspace{1cm}} [p_T]=0, \mbox{\hspace{1cm}} \mbox{\rm at} \mbox{\hspace{1cm}} r=r_A, \label{jumpalfven} \end{equation} \noindent where $\omega_A$ and $|\partial_r\rho_0|_A$ are the Alfv\'en frequency and the modulus of the radial derivative of the density profile, evaluated at the resonant point. These jump conditions allow us to obtain a dispersion relation of the form \begin{equation}\label{disperalfven} D(\omega,k_z)=-\imath\pi\frac{m^2/r^2_A}{\omega^2_A}\frac{\rho_i\rho_e}{|\partial_r\rho_0|_A}. \end{equation} \noindent When the long wavelength and the thin boundary approximations are combined, the analytical expression for the damping time over the period for the kink mode ($m=1$) can be written as (see e.g.\ \citealt{HY88,sakurai91,goossens92,goossens95,RR02}) \begin{equation} \frac{\displaystyle \tau_{d}}{\displaystyle P} = F \;\; \frac {\displaystyle a} {\displaystyle l}\;\; \frac{\displaystyle c + 1}{\displaystyle c - 1}.
\label{dampingrate} \end{equation} \begin{figure*}[!t] \includegraphics[width=0.5\textwidth]{AB10f10a.ps} \includegraphics[width=0.5\textwidth]{AB10f10b.ps}\\ \includegraphics[width=0.5\textwidth]{AB10f10c.ps} \includegraphics[width=0.5\textwidth]{AB10f10d.ps}\\ \caption{{\em (a)--(c):} Damping time over period for fast kink waves in filament threads with $a=100$ km. In all plots solid lines correspond to analytical solutions given by equation~(\ref{dampingrate}), with $F=2/\pi$. {\em (a):} As a function of density contrast, with $l/a=0.2$ and for two wavelengths. {\em (b)}: As a function of wavelength, with $l/a=0.2$, and for two density contrasts. {\em (c)}: As a function of transverse inhomogeneity length-scale, for two combinations of wavelength and density contrast. {\em (d)} Percentage difference, $\Delta$, with respect to analytical formula (\ref{dampingrate}) for different combinations of wavelength, $\lambda=30a$ (dashed lines); $\lambda=200a$ (dash-dotted lines), and density contrast. Adapted from \citet{Arregui08thread}.} \label{alfven} \end{figure*} \noindent Here $F$ is a numerical factor that depends on the particular variation of the density in the non-uniform layer. For a linear variation $F=4/\pi^2$ \citep{HY88,goossens92}; for a sinusoidal variation $F=2/\pi$ \citep{RR02}. We can already anticipate that, for example, considering $c=200$ as a typical density contrast and $l/a=0.1$ Equation~(\ref{dampingrate}) predicts a damping time of $\sim$ 6 times the oscillatory period. Figure~\ref{alfven} shows analytical estimates computed by \cite{Arregui08thread} using Equation~(\ref{dampingrate}) (solid lines). The damping is affected by the density contrast in the low contrast regime and $\tau_d/P$ rapidly decreases for increasing thread density (Fig.~\ref{alfven}a). Interestingly, it stops being dependent on this parameter in the large contrast regime, typical of filament threads. 
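A quick numerical check of Equation~(\ref{dampingrate}) for a sinusoidal layer ($F=2/\pi$) reproduces the estimate of $\sim 6$ periods quoted above and shows the saturation of the damping rate at high density contrast (a Python sketch; the function name is ours):

```python
import numpy as np

def damping_ratio(c, l_over_a, F=2 / np.pi):
    """Thin-tube, thin-boundary damping time over period, Eq. (dampingrate)."""
    return F * (1 / l_over_a) * (c + 1) / (c - 1)

# c = 200 and l/a = 0.1 give tau_d/P ~ 6, as quoted in the text
print(f"{damping_ratio(200, 0.1):.2f}")    # prints 6.43

# the (c+1)/(c-1) factor -> 1 at high contrast: the ratio saturates
print(f"{damping_ratio(1e6, 0.1):.2f}")    # prints 6.37, i.e. ~ (2/pi)*10
```

The saturation of $\tau_d/P$ with density contrast is consistent with the flat behavior at large $c$ seen in Figure~\ref{alfven}a.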
The damping time over period is independent of the wavelength of perturbations (Fig.~\ref{alfven}b), but rapidly decreases with increasing inhomogeneity length-scale (Fig.~\ref{alfven}c). These results led \cite{Arregui08thread} to propose that resonant absorption in the Alfv\'en continuum is a very efficient mechanism for the attenuation of fast waves in filament threads, especially because large thread densities and transverse plasma inhomogeneities can be combined. The analysis by \cite{Arregui08thread} is completed by computing numerical approximations to the solutions outside the thin tube and thin boundary approximations used to derive Equations~(\ref{period}) and (\ref{dampingrate}), which may impose limitations on the applicability of the obtained results to filament thread oscillations. \cite{Arregui08thread} find that analytical and numerical solutions display the same qualitative behavior with density contrast and transverse inhomogeneity length-scale (Figs.~\ref{alfven}a, c). Now the damping time over period depends slightly on the wavelength of perturbations (Fig.~\ref{alfven}b): Equation (\ref{dampingrate}) underestimates this magnitude for short wavelengths and overestimates it for long ones. The differences are small, of the order of 3\% for $c=10$, and do not vary much with density contrast for long wavelengths ($\lambda=200a$), but increase to 6\% for short ones ($\lambda=30a$). The long wavelength approximation is responsible for the discrepancies obtained for thin non-uniform layers (Fig.~\ref{alfven}c). Figure~\ref{alfven}d shows how accurate Equation~(\ref{dampingrate}) is for different combinations of wavelength, density contrast, and inhomogeneity length-scale. For thin layers ($l/a=0.1$) the inaccuracy of the long wavelength approximation produces differences of up to $\sim$ 10\% for the combination of a short wavelength with a high contrast thread. For thick layers, differences of the order of 20\% are obtained.
Here, the combination of a large wavelength with a high contrast thread produces the largest discrepancy. Numerical results allow the computation of more accurate values, but do not change the overall conclusion regarding the efficiency and properties of resonant damping of transverse oscillations in filament threads. Resonant damping in the Alfv\'en continuum thus appears as a very efficient mechanism for the explanation of the observed damping time-scales. \subsection{Resonant damping in the slow continuum} Although the plasma $\beta$ in solar prominences is probably small, it is definitely nonzero. \cite{Soler09slowcont} have recently incorporated gas pressure into the cylindrical filament thread model of \cite{Arregui08thread}. This introduces the slow mode, in addition to the kink mode and the Alfv\'en continuum. In the context of coronal loops, which are presumably hotter and denser than the surrounding corona, the ordering of the sound, Alfv\'en and kink speeds does not allow for the simultaneous matching of the kink frequency with both the Alfv\'en and slow continua. Because of their relatively higher density and lower temperature, this becomes possible in the case of filament threads. Therefore, the kink mode phase speed also lies within the slow (or cusp) continuum, which extends between the internal and external sound speeds, in addition to the Alfv\'en continuum. This brings in an additional damping mechanism, whose contribution to the total resonant damping has been assessed by \cite{Soler09slowcont}. The study by \cite{Soler09slowcont} considers a filament thread model equivalent to that of \cite{Arregui08thread}, i.e., a straight cylinder with prominence-like conditions embedded in an unbounded corona, with a transverse inhomogeneous layer between both media. The same one-dimensional density profile given by Equation~(\ref{rhor}) is used.
The internal and external densities are set to $\rho_i=5\times10^{-11}$ kg m$^{-3}$ and $\rho_e=2.5\times10^{-13}$ kg m$^{-3}$, giving a density contrast of $\rho_{i}/\rho_e=200$. The plasma temperature is related to the density through the usual ideal gas equation, and values of $T_i=8000$ K and $T_e=10^{6}$ K are taken. The magnetic field, pointing in the axial direction, is taken to be uniform, with a value $B_0= 5$ G everywhere. Under these conditions, the plasma-$\beta\simeq0.04$. In order to compute the analytical contribution to the damping of the kink mode due to the slow resonance, a method similar to the one outlined for the Alfv\'en resonance is followed. Assuming that the inhomogeneous transition region over which the density varies is sufficiently thin, compared to the tube radius, jump conditions can be used to obtain analytical expressions for the dispersion relation. The jump conditions for the Alfv\'en resonance are given in Equation~(\ref{jumpalfven}). The corresponding jump conditions for the slow resonance at the point $r=r_S$ are \begin{equation} [\xi_r]=-\imath\pi\frac{k^2_z}{\omega^2_c|\partial_r\rho_0|_S}\left(\frac{c^2_s}{c^2_s+v^2_A}\right)^2\ p_T, \mbox{\hspace{1cm}} [p_T]=0, \mbox{\hspace{1cm}} \mbox{\rm at} \mbox{\hspace{1cm}} r=r_S, \end{equation} \noindent where $\omega_c$ and $|\partial_r\rho_0|_S$ are the cusp frequency and the modulus of the radial derivative of the density profile, evaluated at the slow resonance, and $c_s$ is the sound speed. Note that the factor $\left(\frac{c^2_s}{c^2_s+v^2_A}\right)$ is constant everywhere in the equilibrium. These jump conditions allow us to obtain a more general dispersion relation that now contains the contributions from both the Alfv\'en and slow resonances, \begin{equation} D(\omega,k_z)=-\imath\pi\frac{m^2/r^2_A}{\omega^2_A}\frac{\rho_i\rho_e}{|\partial_r\rho_0|_A} -\imath\pi\frac{k^2_z}{\omega^2_c}\left(\frac{c^2_s}{c^2_s+v^2_A}\right)^2\frac{\rho_i\rho_e}{|\partial_r\rho_0|_S}.
\end{equation} \noindent To obtain an analytic expression for the damping rate of the kink mode, the long-wavelength limit has to be considered. In terms of the physically relevant quantities, the damping time over the period can now be cast as \begin{equation} \frac{\tau_d}{P}=F\frac{1}{(l/a)}\left(\frac{\rho_i+\rho_e}{\rho_i-\rho_e}\right)\left[\frac{m}{\cos{\alpha_A}}+\frac{(k_za)^2}{m}\left(\frac{c^2_s}{c^2_s+v^2_A}\right)^2\frac{1}{\cos\alpha_S}\right]^{-1}.\label{dampingalfvenslow} \end{equation} \noindent Here $F$ is the same numerical factor as in Equation~(\ref{dampingrate}), $\alpha_A=\pi(r_A-a)/l$, and $\alpha_S=\pi(r_S-a)/l$. The term with $k_z$ corresponds to the contribution of the slow resonance. If this term is dropped and $m=1$ and $\cos\alpha_A=1$ are taken, Equation~(\ref{dampingalfvenslow}) becomes equivalent to Equation~(\ref{dampingrate}), which only takes into account the Alfv\'en resonance. Equation~(\ref{dampingalfvenslow}) can now directly be applied to measure the relative contribution of each resonance to the total damping. To do so, \cite{Soler09slowcont} assume $r_A\simeq r_S\simeq a$, for simplicity, so $\cos\alpha_A\simeq\cos\alpha_S\simeq 1$. The ratio of the two terms in Equation~(\ref{dampingalfvenslow}) is then \begin{equation}\label{ratioas} \frac{(\tau_d)_A}{(\tau_d)_S}\simeq\frac{(k_za)^2}{m^2}\left(\frac{c^2_s}{c^2_s+v^2_A}\right)^2. \end{equation} \noindent A simple calculation shows that, for the wavelengths of observed filament thread oscillations, typically in the range $10^{-3} < k_za < 10^{-1}$, the slow resonance is irrelevant for the kink mode damping. For instance, for $m=1$ and $k_z a=10^{-2}$ Equation~(\ref{ratioas}) gives $(\tau_d)_A/(\tau_d)_S\simeq 10^{-7}$.
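Equations~(\ref{dampingalfvenslow}) and (\ref{ratioas}) can be evaluated directly for the quoted parameters. A sketch, assuming $F=2/\pi$ (the value for a sinusoidal density profile) and using the plasma $\beta$ to fix $c^2_s/v^2_A=\gamma\beta/2$:

```python
# Numerical evaluation of the damping-time formula and of the
# Alfven-to-slow damping-time ratio for the parameters quoted in the
# text. Assumptions: F = 2/pi (sinusoidal profile) and
# cos(alpha_A) = cos(alpha_S) = 1, as in the text.
import numpy as np

F = 2.0 / np.pi
m, kza, la = 1, 1e-2, 0.2
contrast = 200.0                     # rho_i / rho_e
beta, gamma = 0.04, 5.0 / 3.0
cs2_over_vA2 = gamma * beta / 2.0    # cs^2/vA^2 from the plasma beta
slow_factor = (cs2_over_vA2 / (1.0 + cs2_over_vA2))**2

# tau_d/P with both resonant contributions included
tdP = F / la * (contrast + 1) / (contrast - 1) \
      / (m + kza**2 / m * slow_factor)
ratio_AS = (kza / m)**2 * slow_factor   # (tau_d)_A / (tau_d)_S
print(tdP, ratio_AS)
```

The ratio comes out $\simeq10^{-7}$, reproducing the estimate in the text, and the damping time over the period is $\simeq3.2$, essentially set by the Alfv\'en resonance alone.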
Even the largest ratio that can be obtained, by assuming a very large plasma $\beta$, i.e., $c^2_s\rightarrow\infty$, so that the factor $\left(\frac{c^2_s}{c^2_s+v^2_A}\right)\rightarrow 1$, gives a ratio of Alfv\'en to slow continuum damping times of $(\tau_d)_A/(\tau_d)_S\ll 1$. \begin{figure}[!t] \begin{minipage}{0.50\textwidth} \includegraphics[width=1.0\textwidth]{AB10f11.ps} \end{minipage} \hspace{0.1cm} \begin{minipage}{0.48\textwidth} \caption{Ratio of the damping time to the period, $\tau_d/P$, as a function of the dimensionless wavenumber, $k_za$, corresponding to the kink mode for $l/a=0.2$. The solid line is the full numerical solution. The symbols and the dashed line are the results of the thin boundary approximation for the Alfv\'en and slow resonances. The shaded region represents the range of typically observed values for the wavelengths in prominence oscillations. Adapted from \citet{Soler09slowcont}. \label{resonantslow}} \end{minipage} \end{figure} This analytical result is further confirmed by \cite{Soler09slowcont} by performing numerical computations outside the thin tube and thin boundary approximations, by solving the full resistive eigenvalue problem. Figure~\ref{resonantslow} displays the individual contribution of each resonance. The slow resonance is seen to be much less efficient than the Alfv\'en resonance. For the wavenumbers relevant to observed prominence oscillations, the value of $\tau_d/P$ due to the slow resonance is between 4 and 8 orders of magnitude larger than the same ratio obtained for the Alfv\'en resonance. On the other hand, the complete numerical solution (solid line) is close to the result for the Alfv\'en resonance. As seen in Sect.~\ref{arregui-alfven}, one can obtain values of $\tau_d/P\simeq 3$ in the relevant range of wavenumbers. Note also that for short wavelengths ($k_za \geq 10^0$) the value of $\tau_d/P$ increases and the efficiency of the Alfv\'en resonance as a damping mechanism decreases.
For the largest wavenumbers considered to produce Figure~\ref{resonantslow}, $k_za \simeq 10^2$ (i.e., the shortest wavelengths), both the slow and the Alfv\'en resonances produce similar (and inefficient) damping rates. The overall conclusion obtained by \cite{Soler09slowcont} is therefore that, although the plasma $\beta$ in solar prominences is definitely non-zero, the slow resonance is very inefficient in damping the kink mode for typical prominence conditions and in the observed wavelength range. The damping times obtained with this mechanism are comparable to those due to the thermal effects discussed previously. Therefore, the resonant damping of transverse thread oscillations is governed by the Alfv\'en resonance. \section{Resonant absorption in a partially ionized filament thread} From the results described so far, resonant absorption in the Alfv\'en continuum seems to be the most efficient damping mechanism for the kink mode and the only one that can produce the observed damping time-scales. On the other hand, the effects of partial ionization could also be relevant, at least for short wavelengths. The question arises of whether partial ionization affects the mechanism of resonant absorption. \citet{Soler09rapi} have integrated both mechanisms in a non-uniform cylindrical filament thread model in order to assess both analytically and numerically the combined effects of partial ionization and Alfv\'enic resonant absorption on the kink mode damping. Apart from the inherent relevance of this work in connection to the damping of prominence oscillations, this study is the first of its kind to consider the resonant absorption phenomenon in a partially ionized plasma. The filament thread model used by \citet{Soler09rapi} is an infinite straight cylinder with prominence-like conditions embedded in an unbounded coronal medium.
Resonant damping is included in the model by connecting the prominence and coronal densities by means of a transitional layer, with a characteristic inhomogeneity length-scale $l$ (see expressions [\ref{rhor}] and [\ref{rhotrans}]). The plasma properties are now also characterized by the ionization fraction, $\tilde{\mu}_0$. The coronal plasma is assumed to be fully ionized, but the prominence material is only partially ionized. As with the transverse density variation, the radial behavior of the ionization fraction in filament threads is unknown, but a one-dimensional transverse profile, similar to the one used to model the equilibrium density, can be assumed. The following profile for the ionization fraction, $\tilde{\mu}_0(r)$, is adopted \begin{equation} \tilde{\mu}_{0}\left(r\right)=\left\{\begin{array}{clc} \tilde{\mu}_f,&{\rm if}&r\le a - l/2, \\ \tilde{\mu}_{\rm tr}\left(r\right),&{\rm if}&a-l/2<r<a+l/2,\\ \tilde{\mu}_c,&{\rm if}&r\geq a+l/2, \\ \end{array} \right. \label{eq:profilemu} \end{equation} with \begin{equation} \tilde{\mu}_{\rm tr}\left(r\right)=\frac{\tilde{\mu}_f}{2}\left\{\left(1+\frac{\tilde{\mu}_c}{\tilde{\mu}_f}\right) - \left( 1-\frac{\tilde{\mu}_c}{\tilde{\mu}_f}\right)\sin \left[\frac{\pi}{l}\left( r-a\right)\right]\right\}, \end{equation} where the filament ionization fraction, $\tilde{\mu}_f$, is considered a free parameter and the corona is assumed to be fully ionized, so $\tilde{\mu}_c = 0.5$. The non-uniform transitional layer of length $l$ therefore connects plasma with densities between $\rho_f$ and $\rho_c$ and ionization degrees between $\tilde{\mu}_f$ and $\tilde{\mu}_c$. As before, the one-fluid approximation is used and, for simplicity, the $\beta=0$ limit, which excludes slow waves, is considered. The quantities $\eta$, $\eta_{\rm C}$, and $\eta_{\rm H}$ are now functions of the radial direction.
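The adopted ionization fraction profile (Eq.~[\ref{eq:profilemu}]) can be sketched directly; $\tilde{\mu}_f=0.8$ is an illustrative choice of the free parameter:

```python
# Sketch of the piecewise ionization-fraction profile adopted by
# Soler et al.: mu_f inside the thread, mu_c in the corona, and a
# sinusoidal transition of width l centered on the boundary r = a.
# mu_f = 0.8 is an illustrative value of the free parameter.
import numpy as np

def mu_profile(r, a=1.0, l=0.2, mu_f=0.8, mu_c=0.5):
    if r <= a - l / 2:
        return mu_f
    if r >= a + l / 2:
        return mu_c
    return 0.5 * mu_f * ((1 + mu_c / mu_f)
                         - (1 - mu_c / mu_f) * np.sin(np.pi * (r - a) / l))

# The sinusoidal transition matches the uniform values continuously
# at both edges of the layer
print(mu_profile(0.9), mu_profile(1.0), mu_profile(1.1))
```

Note that the transition joins the internal and external values continuously at $r=a\pm l/2$, and takes the mean value $(\tilde{\mu}_f+\tilde{\mu}_c)/2$ at $r=a$.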
\subsection{Analytical results} In order to obtain some analytical approximations, first, only Cowling's diffusion is considered. This basically introduces perpendicular magnetic diffusion and makes it possible to derive an analytical dispersion relation for transverse oscillations, since the induction equation can be written in a compact form as follows \begin{equation} \frac{\partial {\mathit {\bf B}}_1}{\partial t} = \frac{\Gamma^2_{\rm A} }{v_{\mathrm{A}}^2} \nabla \times \left( {\mathit {\bf v}}_1 \times {\mathit {\bf B}}_0\right), \label{eq:induction2} \end{equation} \noindent where $\Gamma^2_{\rm A} \equiv v_{\mathrm{A}}^2 - i \omega \eta_{\rm C}$ is the modified Alfv\'en speed squared \citep{Forteza08}. The dispersion relation for trapped waves is obtained by imposing that the solutions are regular at $r=0$, the continuity of the radial displacement, $\xi_r$, and the total pressure perturbation, $p_{\rm T}$, at the tube boundary, and that perturbations must vanish at infinity \citep[see, e.g.,][]{ER83}. The dispersion relation reads $D\left( \omega, k_z \right) =0$, with \begin{equation} D\left( \omega, k_z \right) = \frac{n_c}{\rho_c \left( \omega^2 - k_z^2 \Gamma_{{\rm A}c}^2 \right)} \frac{K'_m \left( n_c a \right)}{K_m \left( n_c a \right)} - \frac{m_f}{\rho_f \left( \omega^2 - k_z^2 \Gamma_{{\rm A}f}^2 \right)} \frac{J'_m \left( m_f a \right)}{J_m \left( m_f a \right)}, \label{eq:dispernocapa} \end{equation} \noindent where $J_m$ and $K_m$ are the Bessel function of the first kind and the modified Bessel function of the second kind, respectively, and the quantities $m_f$ and $n_c$ are given by \begin{equation} m_f^2 = \frac{\left(\omega^2 - k_z^2 \Gamma_{{\rm A}f}^2 \right)}{\Gamma_{{\rm A}f}^2}, \mbox{\hspace{0.5cm}} n_c^2 = \frac{\left(k_z^2 \Gamma_{{\rm A}c}^2 -\omega^2 \right)}{\Gamma_{{\rm A}c}^2}.
\end{equation} As shown in Section~\ref{resonantdamping}, an analytical dispersion relation in the case $l/a\neq0$ can be obtained in the thin boundary approximation, by making use of the jump conditions for the radial displacement and the total pressure perturbation at the Alfv\'en resonance (expressions [\ref{jumpalfven}]). This dispersion relation is \begin{equation} D\left( \omega, k_z \right) = -i \pi \frac{m^2 / r_{\rm A}^2}{\omega_k^2 \left| \partial_r \rho_0 \right|_{r_{\rm A}}}. \label{eq:dispercapa} \end{equation} \noindent Note that Equation~(\ref{eq:dispercapa}) is formally identical to Equation~(\ref{disperalfven}), but now $D \left( \omega, k_z \right)$ is defined in Equation~(\ref{eq:dispernocapa}). By combining the thin tube and thin boundary approximations, and neglecting terms proportional to $k^2_z$, \citet{Soler09rapi} arrive at the following short-hand expression for the damping time over the period \begin{equation} \frac{\tau_{\mathrm{d}}}{P} = \frac{2}{\pi} \left[ m \left( \frac{l}{a} \right) \left( \frac{\rho_f - \rho_c}{\rho_f + \rho_c } \right) + \frac{2 \left(\rho_f \tilde{\eta}_{{\rm C}f} + \rho_c \tilde{\eta}_{{\rm C}c} \right)k_z a}{\sqrt{2 \rho_f \left(\rho_f + \rho_c \right) }} \right]^{-1}, \label{eq:tdp} \end{equation} with $\tilde{\eta}_{{\rm C}c,f}=\eta_{\rm C}/(v_{{\rm A}c,f}\,a)$ the coronal and filament Cowling's diffusivities in dimensionless form. We see that the term related to the resonant damping is independent of the value of Cowling's diffusivity and, therefore, of the ionization degree, and takes the same form as in a fully ionized plasma, see, e.g., Equation~(77) in \citet{goossens92} and Equation~(56) in \citet{RR02}. On the other hand, the second term, related to the damping by Cowling's diffusion, is proportional to $k_z$, so its influence in the long-wavelength limit is expected to be small. To perform a simple calculation, consider the case $m=1$, $k_z a = 10^{-2}$, and $l/a = 0.2$.
This results in $\tau_{\mathrm{d}} / P \approx 3.18$ for a fully ionized thread ($\tilde{\mu}_f = 0.5$), and $\tau_{\mathrm{d}} / P \approx 3.16$ for an almost neutral thread ($\tilde{\mu}_f = 0.95$). The ratio $\tau_{\mathrm{d}} / P$ depends very slightly on the ionization degree, suggesting that resonant absorption dominates over Cowling's diffusion. The relative importance of the two mechanisms can be estimated by taking the ratio of the two terms on the right-hand side of Equation~(\ref{eq:tdp}) \begin{equation} \frac{\left( \tau_{\mathrm{d}} \right)_{\rm RA}}{\left( \tau_{\mathrm{d}} \right)_{\rm C}} = \sqrt{\frac{2 \left(\rho_f + \rho_c \right)}{\rho_f}} \left( \frac{\rho_f \tilde{\eta}_{{\rm C}f} + \rho_c \tilde{\eta}_{{\rm C}c}}{\rho_f - \rho_c} \right) \frac{k_z a}{m \left( l/a \right)}, \end{equation} \noindent with $\left( \tau_{\mathrm{d}} \right)_{\rm RA}$ and $\left( \tau_{\mathrm{d}} \right)_{\rm C}$ the damping times due to resonant absorption and Cowling's diffusion, respectively. This last expression can be further simplified by considering that in filament threads $\rho_f \gg \rho_c$ and $\tilde{\eta}_{{\rm C}f} \gg \tilde{\eta}_{{\rm C}c}$, so that \begin{equation} \frac{\left( \tau_{\mathrm{d}} \right)_{\rm RA}}{\left( \tau_{\mathrm{d}} \right)_{\rm C}} \approx \sqrt{2} \tilde{\eta}_{{\rm C}f} \frac{k_z a}{m \left( l/a \right)}. \label{eq:ratiotaus} \end{equation} The efficiency of Cowling's diffusion with respect to that of resonant absorption increases with $k_z a$ and $\tilde{\mu}_f$ (through $\tilde{\eta}_{{\rm C}f}$). Considering the same parameters as before, one obtains $\left( \tau_{\mathrm{d}} \right)_{\rm RA} / \left( \tau_{\mathrm{d}} \right)_{\rm C} \approx 2 \times 10^{-8}$ for $\tilde{\mu}_f = 0.5$, and $\left( \tau_{\mathrm{d}} \right)_{\rm RA} / \left( \tau_{\mathrm{d}} \right)_{\rm C} \approx 6 \times 10^{-3}$ for $\tilde{\mu}_f = 0.95$, meaning that resonant absorption is much more efficient than Cowling's diffusion. 
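Equation~(\ref{eq:tdp}) and the damping-time ratio can be evaluated numerically. The dimensionless Cowling's diffusivities are not quoted directly in this section; the value used below for $\tilde{\mu}_f=0.95$ is inferred from the quoted damping-time ratio ($\sim6\times10^{-3}$) and is purely indicative:

```python
# Evaluation of the thin-tube/thin-boundary damping formula and of the
# wavenumber at which Cowling's diffusion balances resonant absorption.
# eta_Cf is NOT quoted in the text: the value below for an almost
# neutral thread (mu_f = 0.95) is inferred from the quoted ratio ~6e-3.
import numpy as np

m, kza, la = 1, 1e-2, 0.2
rho_f, rho_c = 5e-11, 2.5e-13      # filament and coronal densities
eta_Cf, eta_Cc = 8.5e-2, 0.0       # assumed dimensionless diffusivities

ra_term = m * la * (rho_f - rho_c) / (rho_f + rho_c)
cowling_term = 2 * (rho_f * eta_Cf + rho_c * eta_Cc) * kza \
               / np.sqrt(2 * rho_f * (rho_f + rho_c))
tdP = (2 / np.pi) / (ra_term + cowling_term)

ratio = np.sqrt(2) * eta_Cf * kza / (m * la)   # (tau_d)_RA / (tau_d)_C
kza_cross = m * la / (np.sqrt(2) * eta_Cf)     # crossover wavenumber
print(tdP, ratio, kza_cross)
```

With this inferred diffusivity the damping-time ratio reproduces the quoted $\sim6\times10^{-3}$, and the two mechanisms would only balance near $k_za\approx1.7$, well outside the long-wavelength regime.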
From Equation~(\ref{eq:ratiotaus}) it is also possible to estimate the wavenumber for which Cowling's diffusion becomes more important than resonant absorption, by setting $\left( \tau_{\mathrm{d}} \right)_{\rm RA} / \left( \tau_{\mathrm{d}} \right)_{\rm C} \approx 1$, as \begin{equation} k_z a \approx \frac{m \left( l/a \right)}{\sqrt{2} \tilde{\eta}_{{\rm C}f}}. \label{eq:kzCRA} \end{equation} Considering again the same parameters, Equation~(\ref{eq:kzCRA}) gives $k_z a \approx 5 \times 10^5$ for $\tilde{\mu}_f = 0.5$, and $k_z a \approx 1.7$ for $\tilde{\mu}_f = 0.95$. Note however that Equation~(\ref{eq:ratiotaus}) is only valid for $k_z a \ll 1$, so resonant absorption is expected to be the dominant damping mechanism in the long-wavelength regime, even for an almost neutral filament plasma. \subsection{Numerical results} The analytical estimates described above are verified and extended by \citet{Soler09rapi} by numerically solving the full eigenvalue problem. Computations include now Hall's diffusion in addition to ohmic and Cowling's dissipation. \begin{figure*} \includegraphics[width=0.5\textwidth]{AB10f12a.ps} \includegraphics[width=0.5\textwidth]{AB10f12b.ps} \caption{Ratio of the damping time to the period of the kink mode as a function of $k_z a$ corresponding to a thread without transitional layer, i.e., $l/a=0$. ($a$) Results for $a = 100$~km considering different ionization degrees: $\tilde{\mu}_f = 0.5$ (dotted line), $\tilde{\mu}_f = 0.6$ (dashed line), $\tilde{\mu}_f = 0.8$ (solid line), and $\tilde{\mu}_f = 0.95$ (dash-dotted line). Symbols are the approximate solution given by solving Equation~(\ref{eq:dispernocapa}) for $\tilde{\mu}_f = 0.8$. ($b$) Results for $\tilde{\mu}_f = 0.8$ considering different thread widths: $a = 100$~km (solid line), $a = 50$~km (dotted line), and $a = 200$~km (dashed line). The shaded zone corresponds to the range of typically observed wavelengths of prominence oscillations. 
Adapted from \citet{Soler09rapi}.} \label{fig:nocapa} \end{figure*} First, a homogeneous filament thread without transitional layer ($l/a = 0$) is considered. This is the same case presented in Section~\ref{robertopicyl} with the addition of Hall's term in the induction equation. Figure~\ref{fig:nocapa}a displays the obtained damping rate for different ionization degrees. In agreement with the results displayed for the kink mode in Figure~\ref{figpicyl1}, $\tau_{\mathrm{d}} / P$ has a maximum which corresponds to the transition from the ohmic-dominated regime, which is almost independent of the ionization degree, to the region where Cowling's diffusion is more relevant and the ionization degree has a significant influence. The approximate solution obtained by solving Equation~(\ref{eq:dispernocapa}) for a given value of $\tilde{\mu}$ agrees very well with the numerical solution in the region where Cowling's diffusion dominates, while it significantly diverges from the numerical solution in the region where ohmic diffusion is relevant. Within the range of typically reported wavelengths, $\tau_{\mathrm{d}} / P$ is between 1 and 2 orders of magnitude larger than the measured values, so neither ohmic nor Cowling's diffusion can account for the observed damping times. Figure~\ref{fig:nocapa}b shows that the smaller the thread radius, the more rapidly the kink wave is attenuated. In addition, the critical wavenumbers are shifted when the thickness of the threads is varied, and the wavenumber range for which the kink wave propagates is wider for thick threads than for thin threads. The critical wavenumbers are far from the relevant values of $k_z a$ for the observed thread widths, and so they should not affect the kink wave propagation in realistic filament threads.
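The ideal, fully ionized limit of the dispersion relation~(\ref{eq:dispernocapa}) (i.e., $\Gamma_{\rm A}\rightarrow v_{\rm A}$ and $l/a=0$) can be cross-checked numerically. This is only a sketch under those assumptions; for a thin tube the trapped kink root should sit close to the thin-tube kink speed:

```python
# Sketch: ideal (non-dissipative, fully ionized) limit of the kink-mode
# dispersion relation for a uniform thread (l/a = 0), with
# Gamma_A -> v_A. Normalized units: a = 1, v_Af = 1, rho_f = 1;
# density contrast rho_f/rho_c = 200 as in the text.
import numpy as np
from scipy.special import jv, jvp, kv, kvp
from scipy.optimize import brentq

m = 1            # kink mode
kz = 0.1         # dimensionless axial wavenumber kz*a
rho_f, rho_c = 1.0, 1.0 / 200.0
vAf = 1.0
vAc = vAf * np.sqrt(rho_f / rho_c)   # pressure balance: rho*vA^2 = const

def D(w):
    mf = np.sqrt(w**2 - kz**2 * vAf**2) / vAf   # internal radial wavenumber
    nc = np.sqrt(kz**2 * vAc**2 - w**2) / vAc   # external (evanescent)
    term_c = nc / (rho_c * (w**2 - kz**2 * vAc**2)) * kvp(m, nc) / kv(m, nc)
    term_f = mf / (rho_f * (w**2 - kz**2 * vAf**2)) * jvp(m, mf) / jv(m, mf)
    return term_c - term_f

# The trapped kink mode lies between the internal and external
# Alfven frequencies
w = brentq(D, 1.001 * kz * vAf, 0.999 * kz * vAc)
ck_thin = np.sqrt(2 * rho_f / (rho_f + rho_c)) * vAf   # thin-tube kink speed
print(w / kz, ck_thin)
```

For $k_za=0.1$ the numerical phase speed agrees with the thin-tube kink speed to within a few percent, which is the expected behavior in the long-wavelength regime.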
\begin{figure*} \includegraphics[width=0.5\textwidth]{AB10f13a.ps} \includegraphics[width=0.5\textwidth]{AB10f13b.ps} \caption{Ratio of the damping time to the period of the kink mode as a function of $k_z a$ corresponding to a thread with an inhomogeneous transitional layer. ($a$) Results for $\tilde{\mu}_f = 0.8$ considering different transitional layer widths: $l/a = 0$ (dotted line), $l/a = 0.1$ (dashed line), $l/a = 0.2$ (solid line), and $l/a = 0.4$ (dash-dotted line). Symbols are the solution in the TB approximation given by solving Equation~(\ref{eq:dispercapa}) for $l/a = 0.2$. ($b$) Results for $l/a = 0.2$ considering different ionization degrees: $\tilde{\mu}_f = 0.5$ (dotted line), $\tilde{\mu}_f = 0.6$ (dashed line), $\tilde{\mu}_f = 0.8$ (solid line), and $\tilde{\mu}_f = 0.95$ (dash-dotted line). In both panels we have considered $a = 100$~km. Adapted from \citet{Soler09rapi}.} \label{fig:capa} \end{figure*} Next, the inhomogeneous case ($l/a\neq0$) is analyzed, with the inclusion of resonant wave coupling in the Alfv\'en continuum. Figure~\ref{fig:capa}a displays some relevant differences with respect to the homogeneous case ($l/a = 0$). First, the damping time is dramatically reduced for intermediate values of $k_z a$, including the region of typically observed wavelengths. In this region, the ratio $\tau_{\mathrm{d}} / P$ becomes smaller as $l/a$ is increased, a behavior consistent with damping by resonant absorption. For large $k_z a$, the ratio $\tau_{\mathrm{d}} / P$ is independent of the thickness of the layer and coincides with the solution in the homogeneous case. The cause of this behavior is that perturbations are essentially confined within the homogeneous part of the thread for large $k_z a$, and therefore the kink mode is mainly governed by the internal plasma conditions. On the other hand, the solution for small $k_z a$ is completely different when $l/a \neq 0$.
The inclusion of the inhomogeneous transitional layer removes the smaller critical wavenumber and, consequently, the kink mode exists for very small values of $k_z a$. Figure~\ref{fig:capa}a also shows a very good agreement between the numerical and the analytical solutions (obtained by solving the dispersion relation~[\ref{eq:dispercapa}]), for wavenumbers above $k_z a \sim 10^{-4}$, and a poor agreement in the range for which ohmic diffusion dominates, below $k_z a \sim 10^{-4}$. To understand this, one has to bear in mind that dispersion relation~(\ref{eq:dispercapa}) takes into account the effects of resonant absorption and Cowling's diffusion, but not the influence of ohmic diffusion. As Figure~\ref{fig:capa}b shows, the ionization degree is only relevant for large wavenumbers, where the damping rate significantly depends on the ionization fraction. \begin{figure} \begin{minipage}{0.50\textwidth} \includegraphics[width=1.0\textwidth]{AB10f14.ps} \end{minipage} \hspace{0.1cm} \begin{minipage}{0.48\textwidth} \caption{Ratio of the damping time to the period of the kink mode as a function of $k_z a$ corresponding to a thread with $a = 100$~km and $l/a = 0.2$. The different linestyles represent the results for: partially ionized thread with $\tilde{\mu}_f = 0.8$ considering all the terms in the induction equation (solid line), partially ionized thread with $\tilde{\mu}_f = 0.8$ neglecting Hall's term (symbols), and fully ionized thread (dotted line). Adapted from \citet{Soler09rapi}. \label{fig6soler} } \end{minipage} \end{figure} Finally, Figure~\ref{fig6soler} displays the ranges of $k_z a$ where Cowling's and Hall's diffusion dominate. As expected, Hall's diffusion is irrelevant in the whole studied range of $k_z a$, while Cowling's diffusion dominates the damping for large $k_z a$.
In the whole range of relevant wavelengths, resonant absorption is the most efficient damping mechanism, and the damping time is independent of the ionization degree, as predicted by the analytical result (Eq.~[\ref{eq:tdp}]). On the contrary, ohmic diffusion dominates for very small $k_z a$. In that region, the damping time related to Ohm's dissipation becomes even smaller than that due to resonant absorption, meaning that the kink wave is mainly damped by ohmic diffusion. \section{Application to prominence seismology} Solar atmospheric seismology aims to determine physical parameters in magnetic and plasma structures that are difficult to measure by direct means. It is a remote diagnostics method that combines observations of oscillations and waves in magnetic structures with theoretical results from the analysis of the oscillatory properties of models of those structures. The philosophy behind this discipline is akin to that of Earth seismology, the sounding of the Earth's interior using seismic waves, and helioseismology, the acoustic diagnostic of the solar interior. It was first suggested by \cite{uchida70} and \cite{REB84}, in the coronal context, and by \cite{TH95} in a prominence context. The increase in the number and quality of high-resolution observations in recent years has led to its rapid development. In the context of coronal loop oscillations, recent applications of this technique have allowed the estimation and/or restriction of parameters such as the magnetic field strength \citep{Nakariakov01}, the Alfv\'en speed in coronal loops \citep{temury03,Arregui07,GABW08}, the transverse density structuring \citep{verwichte06}, or the coronal density scale height \citep{AAG05,Verth08}. Its application to prominences is less developed. Some recent results of the MHD prominence seismology technique are presented next.
\subsection{Seismology using the period of filament thread oscillations} The first prominence seismology application using Hinode observations of flowing and transversely oscillating threads was presented by \cite{Terradas08hinode}, using observations obtained in an active region filament by \cite{Okamoto07}. The observations show a number of threads that flow following a path parallel to the photosphere while they are oscillating in the vertical direction. \cite{Terradas08hinode} interpret these oscillations in terms of the kink mode of a magnetic flux tube. By using previous theoretical results from a normal mode analysis in a two-dimensional piecewise filament thread model by \cite{diaz02} and \cite{DR05}, these authors find that, although it is not possible to uniquely determine the physical parameters of interest, a one-to-one relation between the thread Alfv\'en speed and the coronal Alfv\'en speed can be established. This relation comes in the form of a number of curves relating the two Alfv\'en speeds for different values of the length of the magnetic flux tube and the density contrast between the filament and coronal plasma. The obtained curves have an asymptotic behavior for large values of the density contrast, typical of filament to coronal plasmas, and hence a lower limit for the thread Alfv\'en speed can be obtained. Further details on this study can be found in \cite{Terradas08hinode} and \cite{Oliver09}. A recent application of the prominence seismology technique, using the period of observed filament thread transverse oscillations, can be found in \citet{Lin09}. These authors find observational evidence of swaying motions of individual filament threads from high resolution observations obtained with the Swedish 1-m Solar Telescope in La Palma. The presence of waves propagating along individual threads was already evident in, e.g., \citet{Lin07}.
However, the fact that line-of-sight oscillations are observed in prominences beyond the limb, as well as in filaments against the disk, suggests that the planes of oscillation may acquire various orientations relative to the local solar reference system. For this reason, \citet{Lin09} combine simultaneous recordings of motions in the line-of-sight and in the plane of the sky, which leads to information about the orientation of the oscillatory plane in each case. Periodic oscillatory signals are obtained in a number of threads, which are then fitted to sinusoidal curves, from which the period and the amplitude of the waves are derived. The presence of different cuts along the structures allows \cite{Lin09} to obtain the phase difference between the fitted curves, which can be used to measure the phase velocities of the waves. The overall periods and mean velocity amplitudes that are obtained correspond to short period, $P\sim 3.6$ min, and small amplitude, $\sim 2$ km s$^{-1}$, oscillations. The information obtained from these H$_\alpha$ filtergrams in the plane of the sky is combined with H$_\alpha$ Dopplergrams, which make it possible to detect oscillations in the line-of-sight direction. By combining the observed oscillations in the two orthogonal directions the full velocity vectors are derived, which show that the oscillatory planes are close to the vertical. \begin{figure*}[!t] \includegraphics[width=0.5\textwidth]{AB10f15a.ps} \includegraphics[width=0.5\textwidth]{AB10f15b.ps} \caption{{\em (a):} Ratio $c^2_k/v^2_{Ai}$ (solid line) as a function of the density contrast, $c$. The dotted line corresponds to the value of the ratio $c^2_k/v^2_{Ai}$ for $c\rightarrow\infty$. {\em (b):} Magnetic field strength as a function of the internal density, $\rho_i$, corresponding to some selected threads: thread 1 (solid line), thread 3 (dotted line), thread 5 (dashed line), and thread 7 (dash-dotted line).
Adapted from \citet{Lin09}.} \label{seismologylin09} \end{figure*} \cite{Lin09} interpret the observed swaying thread oscillations as kink MHD waves supported by the thread body. By assuming the classic one-dimensional, straight flux tube model, a comparison between the observed wave properties and the theoretical predictions is performed in order to obtain the physical parameters of interest, namely the Alfv\'en speed and the magnetic field strength in the studied threads. To this end, $c_k=\omega/k_z$ with $\omega$ defined by Equation~(\ref{kinkfrequency}) is used as an approximation to the kink speed, which can directly be associated with the measured phase velocity of the observed disturbances. Figure~\ref{seismologylin09}a shows that, for the large density contrasts expected for filament threads, the curve that relates the kink speed to the Alfv\'en speed becomes flat, and so the ratio is almost independent of the density contrast. This allows the kink speed to be simplified to $c_k\simeq\sqrt{2}v_{Ai}$ and, therefore, the thread Alfv\'en speed to be obtained through $v_{Ai}\simeq V_{\rm phase}/\sqrt{2}$. The obtained values for a set of 10 threads can be found in Table 2 in \cite{Lin09}. Once the Alfv\'en speed in each thread is determined, the magnetic field strength can be computed, if a given internal density is assumed. For a typical value of $\rho_i=5\times10^{-11}$ kg m$^{-3}$, magnetic field strengths between 0.9 and 3.5 G are obtained for the analyzed events. If the unknown density is allowed to vary in a range of plausible values, the derived magnetic field strength also changes accordingly, as can be seen in Figure~\ref{seismologylin09}b. The important conclusion that we extract from the analysis by \cite{Lin09} is that prominence seismology is possible and works well, provided high resolution observations are available.
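As an illustration of this inversion, a hypothetical measured phase speed (not one of the threads analyzed by \cite{Lin09}) can be propagated through $v_{Ai}\simeq V_{\rm phase}/\sqrt{2}$ and $B=v_{Ai}\sqrt{\mu_0\rho_i}$:

```python
# Sketch of the seismological inversion: a measured phase speed gives
# the thread Alfven speed via c_k ~ sqrt(2) v_Ai (large-contrast limit),
# and an assumed internal density then gives the field strength.
# The phase speed below is a hypothetical example value.
import numpy as np

mu0 = 4 * np.pi * 1e-7         # vacuum permeability [H/m]
V_phase = 20e3                 # hypothetical measured phase speed [m/s]
rho_i = 5e-11                  # assumed internal density [kg/m^3]

v_Ai = V_phase / np.sqrt(2)    # thread Alfven speed [m/s]
B = v_Ai * np.sqrt(mu0 * rho_i)   # field strength [T]
print(v_Ai / 1e3, B * 1e4)        # km/s and gauss
```

For this illustrative phase speed the inferred field strength is $\simeq1.1$ G, within the 0.9--3.5 G range quoted for the analyzed events.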
The derived plasma parameters show rather different values for different threads, which indicates that the plasma parameters may vary along individual threads belonging to the same filament. This, however, does not come as a surprise in view of the highly inhomogeneous nature of these objects. \begin{figure*} \includegraphics[width=0.5\textwidth]{AB10f16a.ps} \includegraphics[width=0.5\textwidth]{AB10f16b.ps} \caption{{\em (Left):} Analytic inversion of physical parameters in the ($c$, $l/a$, $v_{Af}$) space for a filament thread oscillation with $P=3$ min, $\tau_d=9$ min, and a wavelength of $\lambda=3000$ km (see e.g. Lin et al. 2007). {\em (Right):} Magnetic field strength as a function of the density contrast and transverse inhomogeneity length-scale, derived from the analytic inversion for a coronal density of $\rho_c=2.5\times10^{-13}$ kg m$^{-3}$.} \label{seismologyarregui} \end{figure*} \subsection{Seismology using the period and damping of filament thread oscillations} A feature clearly observed by \cite{Lin09} is that the amplitudes of the waves passing through two different cuts along a thread are notably different. Apparent changes can be due to damping of the waves, in addition to noise in the data. Among the different damping mechanisms that are described in this paper, resonant absorption in the Alfv\'en continuum seems a very plausible mechanism and is open to direct application in filament thread seismology, using the damping as an additional source of information. This additional information has been used by \cite{Arregui07,GABW08} in the context of transversely oscillating coronal loops and its application to filament threads is straightforward. The analytical and numerical inversion schemes by \citet{Arregui07} and \citet{GABW08} make use of the simple idea that it is the same magnetic structure, whose equilibrium conditions we are interested in assessing, that is oscillating with a given period and undergoing a given damping rate.
By computing the kink normal mode frequency and damping time as a function of the relevant equilibrium parameters for a one-dimensional model, the period and damping rate have the following dependencies \begin{eqnarray}\label{relations} P=P(k_z,c,l/a,\tau_{Ai}), \mbox{\hspace{1cm}} \frac{P}{\tau_d}=\frac{P}{\tau_d}(k_z,c,l/a), \end{eqnarray} \noindent where $\tau_{Ai}$ is the internal Alfv\'en travel time. In the case of coronal loop oscillations, an estimate for $k_z$ can be obtained directly from the length of the loop and the fact that the fundamental kink mode wavelength is twice this quantity. For filament threads, the wavelength of the oscillations needs to be measured. Relations~(\ref{relations}) indicate that, if no assumption is made on any of the physical parameters of interest, we have two observed quantities, period and damping time, and three unknowns, density contrast, transverse inhomogeneity length-scale, and Alfv\'en travel time (or, conversely, Alfv\'en speed through the relation $v_{Ai}=L/\tau_{Ai}$). There are, therefore, infinitely many equilibrium models that can equally well explain the observations. These valid equilibrium models are displayed in Figure~\ref{seismologyarregui}a, where the analytical algebraic expressions in the thin tube and thin boundary approximations by \cite{GABW08} have been used to invert the problem. It can be appreciated that, even if an infinite number of solutions is obtained, they define a rather constrained range of values for the thread Alfv\'en speed. Also, because of the insensitivity of the damping rate to the density contrast, for the typically large values of this parameter in prominence plasmas, the obtained solution curve displays an asymptotic behavior for large values of $c$. This allows us to obtain precise estimates for the filament thread Alfv\'en speed, $v_{Ai}\simeq 12$ km s$^{-1}$, and the transverse inhomogeneity length scale, $l/a\simeq 0.16$.
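A minimal sketch of the asymptotic ($c\rightarrow\infty$) inversion for the example quoted above, assuming $F=2/\pi$ (the value for a sinusoidal boundary profile) in the damping-rate formula; the exact inversion in the text yields $l/a\simeq0.16$, somewhat below this thin-boundary estimate:

```python
# Asymptotic (c -> infinity) inversion for the example in the text:
# P = 3 min, tau_d = 9 min, wavelength 3000 km. The kink speed then
# satisfies c_k ~ sqrt(2) v_Ai and tau_d/P ~ F/(l/a), with F = 2/pi
# assumed here (sinusoidal profile).
import numpy as np

P, tau_d = 3 * 60.0, 9 * 60.0   # [s]
wavelength = 3000e3             # [m]

V_phase = wavelength / P        # measured phase speed
v_Ai = V_phase / np.sqrt(2)     # asymptotic thread Alfven speed
la = (2 / np.pi) / (tau_d / P)  # transverse inhomogeneity length scale
print(v_Ai / 1e3, la)           # km/s and dimensionless l/a
```

The Alfv\'en speed estimate, $v_{Ai}\simeq12$ km s$^{-1}$, reproduces the value quoted in the text; the $l/a$ estimate depends on the assumed profile factor $F$ and should be read only as an order-of-magnitude cross-check.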
Note that these asymptotic values can directly be obtained by inverting Equations~(\ref{period}) and (\ref{dampingrate}) for the period and the damping rate in the limit $c\rightarrow\infty$. The computation of the magnetic field strength from the obtained seismological curve requires the assumption of a particular value for either the filament or the coronal density. The resulting curve for a typical coronal density is shown in Figure~\ref{seismologyarregui}b. As can be seen, precise values of the magnetic field strength cannot be obtained unless the density contrast is accurately known. \section{Open Issues} Solar prominences are probably the most complex structures in the solar corona. A full understanding of their formation, magnetic structure, and disappearance has not been reached yet, and many physical effects remain to be included in prominence models. For this reason, the theoretical models set up to interpret small-amplitude oscillations and their damping are still poor. High-resolution observations of filaments suggest that they are made of threads whose thickness is at the limit of the available spatial resolution. One may therefore wonder whether future improvements in spatial resolution will reveal thinner and thinner threads or whether, on the contrary, there is a lower limit to their thickness that we will be able to determine in the near future. The presence of these long, thin threads, together with the locations where they are anchored and the presence of flows along them, suggests that they are thin flux tubes filled with continuous or discontinuous cool material. This cool material is probably subject to cooling, heating, ionization, recombination, motions, etc., which, altogether, makes a theoretical treatment very difficult.
For instance, in the case of the considered thermal mechanisms, up to now only optically thin radiation has been considered, while the inclusion of optically thick effects would probably be more realistic; the prominence heating mechanisms usually taken into account are tentative and ``ad hoc'', while the true prominence heating processes remain largely unknown. An important step ahead would be to couple radiative transfer with magnetohydrodynamic waves as a means to establish a relationship between velocity, density, magnetic field, and temperature perturbations and the observed signatures of oscillations, such as spectral line shift, width, and intensity. Partial ionization is another topic of interest for prominence oscillations since, apart from influencing the behavior of magnetohydrodynamic waves, it poses an important problem for prominence equilibrium models when gravity is taken into account, namely: how is the non-ionized prominence material supported by the magnetic field? Another issue that still remains a mystery is the triggering mechanism of small-amplitude oscillations. In the case of large-amplitude oscillations, observations provide information about the exciting mechanism; however, the available observations of small-amplitude oscillations do not show any signature of the exciting mechanism. Are these oscillations of chromospheric or photospheric origin? Are they generated inside the prominence magnetic structure by small reconnection events? Are they produced by weak external disturbances coming from far away in the solar atmosphere? The changing physical conditions of prominence plasmas suggest that an in-depth theoretical study of prominence oscillations requires more complex models together with numerical simulations.
Therefore, as a step ahead, in the near future numerical studies of the time evolution of magnetohydrodynamic waves in partially ionized, flowing, inhomogeneous prominence plasmas, subject to different physical processes such as ionization, recombination, etc., should be undertaken. However, a full three-dimensional dynamical prominence model involving magnetic equilibrium, radiative transfer, etc., whose oscillatory behavior could be studied, still seems far off. \begin{acknowledgements} The authors acknowledge the financial support received from the Spanish MICINN, FEDER funds, and the Conselleria d'Economia, Hisenda i Innovaci\'o of the CAIB under Grants No. AYA2006-07637 and PCTIB-2005GC3-03. The authors also want to acknowledge the contributions to the research reported here made by M. Carbonell, A.J. D\'{\i}az, P. Forteza, R. Oliver, R. Soler, and J. Terradas. \end{acknowledgements}
\section{Introduction} \label{intro} The environmental dependence of galaxy properties (colour, star formation, mass) is well established in the local Universe. At present, many local studies have been carried out to analyse the influence of environment on colours, luminosities, morphologies, structural parameters, star formation, and stellar masses: all local relations can be considered as different faces of the morphology--density relation shown by \citet{Dressler1980}. At higher redshifts, this kind of study becomes very difficult, because a reliable estimate of the environment requires large spectroscopic samples of faint galaxies with a good sampling rate. Until now, therefore, most of the studies in high density environments have analysed galaxy clusters or groups, and the more general effect of the environment on field galaxy evolution remains poorly explored. The evolution of the galaxy stellar mass function (GSMF) as a function of the large-scale environment has been studied in the DEEP2 Galaxy Redshift Survey \citep{Bundy2006}, considering the redshift range $z=0.4 - 1.4$, which limits the connection between this study and those in the local Universe. Some remaining open questions are: what is the most important property driving the evolution of field galaxies? Is the fate of a galaxy decided once its mass is defined, or do some external players have a role? And, if the environment plays such a role, when does it start to affect galaxy evolution, and by means of which mechanism? The literature does not yet offer a consistent picture of galaxy evolution. Most low-redshift studies are based on SDSS data. We try to summarise the most relevant conclusions, without claiming to be exhaustive.
Some studies assert that mass is the most important parameter in galaxy evolution: from the colour bimodality, \citet{Balogh2004} propose that the properties of star-forming galaxies are mainly related to their mass and that, to preserve the bimodality without altering the colours modelled by two Gaussian distributions, the transformation from late- to early-type galaxies should be rapid in truncating the star formation and efficient for all luminosities and environments. Analogous studies reach similar conclusions: \citet{Hogg2003} find that blue galaxies show no correlation between their luminosity/mass and local density at a fixed colour; \citet{Baldry2006} affirm that the fraction of red galaxies depends on environment, but not their colour--mass relation. \citet{Thomas2010} find that correlations between properties of galaxies in the red sequence are driven only by galaxy mass. Furthermore, \citet{Bosch2008b}, investigating the efficiency of transformation processes in the SDSS groups catalogue, claim that both the colour and the concentration of a satellite galaxy are mostly determined by its stellar mass. On the other hand, many other studies based on the same SDSS dataset agree in assigning importance, at different levels, to both nature and nurture in the evolutionary paths of galaxies. In these studies, environment is not considered a secondary effect: it has an impact on one or more of the galaxy properties and their relations, such as colour, star formation rate and its spatial variation, structural parameters, morphology, the presence of active galactic nuclei (AGN), age, and the timescale of transformation of galaxies \citep[e.g.][]{Kauffmann2004,Tanaka2004,Bamford2009,Skibba2009,Welikala2009,Cooper2010a,Gavazzi2010,Clemens2006,Bernardi2006,Lee2010,Mateus2007,Mateus2008,Blanton2005,Gomez2003}.
In addition to the importance of the environment for galaxy evolution, the scale on which the environment is evaluated has been found to be of huge importance: for instance, another study of the colour bimodality by \citet{Wilman2010} finds that a correlation of the colour and the fraction of red galaxies with increasing densities is seen only on scales smaller than $\sim 1\,h^{-1}$\,Mpc, which is the characteristic scale on which galaxies are accreted onto more massive dark matter haloes, undergoing the truncation of their star formation. Other studies dealing with the group environment support a similar scenario in which central and satellite galaxies follow different evolutionary paths, with satellite galaxies falling into more massive haloes and experiencing a slow transformation because of the removal of gas by strangulation, resulting in the fading of star formation \citep{Rogers2010,Weinmann2009,Wel2009,Wel2010,Bosch2008}. Still at low redshifts, but using 2MASS and LCRS data, \citet{Balogh2001} distinguished between different environments such as field, groups, and clusters, finding that luminosity and mass functions depend on both galaxy type (with steeper functions for emission line galaxies) and environment (with more massive and brighter objects being more common in clusters), mainly as a consequence of the different contributions of passive galaxies. At higher redshifts, probing the effect of environment on galaxy evolution becomes more difficult, and this kind of study often uses projected estimators of local density and relies on photometric redshifts \citep[e.g. ][]{Scoville2007b,Wolf2009}. The main studies using spectroscopic redshifts analyse data from the two major surveys of the recent past, DEEP2 \citep{Davis2003} and VVDS \citep{LeFevre2003}.
\citet{Bundy2006}, using DEEP2 data at $0.4<z<1.4$ and $R_{\rm AB}<24.1$, estimated the effect of environment on GSMFs: they concluded that the quenching of star formation, and hence the transition between the blue cloud and the red sequence, is primarily internally driven and dependent on mass, although they detected a moderate acceleration of the downsizing phenomenon in overdense regions, where the rise of the quiescent population with cosmic time appears to be faster, as seen through the evolution of the transition and quenching masses, $\mathcal{M}_{\rm cross}$ and $\mathcal{M}_{\rm Q}$. Using the same dataset complemented by SDSS at low redshifts, \citet{Cooper2008} studied the connection between the star formation rate (SFR) and environment, finding hints of a reversal of that relation from $z\sim 0$, where the mean SFR decreases with local density, to $z\sim 1$, where a blue population causes an increase in the mean SFR in overdense regions; nonetheless, the decline of the global cosmic star formation history (SFH) since $z\sim 1$ seems to be caused by gradual gas consumption rather than by environment-dependent processes. A similar result on the reversal of the SFR--environment relation was found by \citet{Elbaz2007}, using GOODS data and SFRs derived from UV and $24\,\mu$m emission. Using spectroscopic data from the VVDS up to $z\sim 1.5$, \citet{Cucciati2006} found a steep colour--density relation at low-$z$, which appeared to fade at higher redshifts. In particular, they identified differences in the colour distributions in low and high density regimes at low redshifts, whereas at high redshifts the environment was not found to affect these distributions. In their proposed scenario, the processes of star formation and gas exhaustion are accelerated for more luminous objects and high density environments, leading to a shift with cosmic time of the star formation activity toward fainter galaxies and low density environments.
\citet{Scodeggio2009} studied the stellar mass and colour segregations in the VVDS at redshifts $z=0.2 - 1.4$, using a density field computed on scales of $\sim 8$\,Mpc; they found that the colour--density relation is a mirror of the stellar mass segregation, which in turn is a consequence of the dark matter halo mass segregation predicted by hierarchical models. The effects of environment on both local galaxy properties and their evolution are still uncertain, keeping the nature versus nurture debate open. From the aforementioned results, there seems to be some hint that the galaxy evolutionary path from the blue cloud to the red sequence depends on environment, but determining the mechanism behind this transformation, its probability of occurring, and its link to both the environment and intrinsic galaxy properties is a difficult task. The various physical processes of galaxy transformation differ in terms of timescales, efficiency, and observational repercussions, such as colour and morphology. The GSMF is a very suitable tool for investigating this problem and witnessing the buildup of galaxies and its dependence on environment. In this paper, we focus on the effect of environment on field galaxies using data from COSMOS (Cosmic Evolution Survey) and zCOSMOS; in this field, the most extreme overdense regions, such as cluster cores, are almost absent. Parallel and complementary analyses are presented in \citet{Pozzetti2010}, \citet{Zucca2009}, \citet{Iovino2010}, \citet{Cucciati2010}, \citet{Tasca2009}, \citet{Kovac2010b}, \citet{Vergani2010}, \citet{Moresco2010}, and \citet{Peng2010}. The plan of this paper is the following: in Sect.~\ref{data}, we describe the spectroscopic and photometric datasets and the derived properties we used to characterise different galaxy populations; in Sect.~\ref{mf} we derive the GSMFs, and in Sect.~\ref{mftype} we analyse the different contributions of galaxy types to the GSMF in different environments.
We compare our results with similar analyses in the literature and discuss the implications for the picture of galaxy evolution in Sect.~\ref{discuss}. Throughout the paper we adopt the cosmological parameters $\Omega_m=0.25$, $\Omega_\Lambda=0.75$, and $h_{70}=H_0/(70\,{\rm km\,s^{-1}\,Mpc^{-1}})$; magnitudes are given in the AB system, and stellar masses are computed assuming the Chabrier initial mass function \citep{Chabrier2003}. \section{Data} \label{data} The zCOSMOS survey \citep{Lilly2007} is a redshift survey intended to measure the distances of galaxies and AGNs over the COSMOS field \citep{Scoville2007}, the largest HST survey carried out to date with ACS \citep{Koekemoer2007}. The whole field of about $2$\,deg$^2$ was observed from radio to X-ray wavelengths by parallel projects, involving worldwide teams and observatories. The combination of multiwavelength observations, morphologies, and spectroscopic redshifts ensures that COSMOS provides a unique opportunity to study the evolution of galaxies in their large-scale structure context. \subsection{Spectroscopy} \label{spectro} The spectroscopic survey zCOSMOS is currently ongoing and is subdivided into two parts: the ``bright'' survey, which targets $\sim 20\,000$ galaxies with a pure flux-limited selection corresponding to $15 \le I_{\rm AB} \le 22.5$, and the ``deep'' survey, whose goal is the measurement of redshifts in the range $1.4\le z \le 3.0$ within the central $1$\,deg$^2$. The data used in this paper belong to the so-called 10k sample \citep{Lilly2009}, consisting of the first $10\,644$ observed objects of the ``bright'' survey, over an area of $1.402$\,deg$^2$ with a mean sampling rate of $\sim 33$\%. The final design of the survey aims to reach a sampling rate of $\sim 60-70$\%, achieved by means of an eight-pass strategy. The observations have been carried out with VIMOS@VLT with the red grism at medium resolution, $R\sim 600$.
The data have been reduced with VIPGI \citep{Scodeggio2005}, and spectroscopic redshifts have been determined visually after a first guess provided by EZ \citep{Garilli2010}\footnote{Both VIPGI and EZ are publicly available software packages, retrievable from {\tt http://cosmos.iasf-milano.inaf.it/pandora/}}. The confidence in the redshift measurements is represented by means of a flag ranging from 4, for redshifts assigned without doubts, to 0, for undetermined redshifts; a subsample of duplicated spectroscopic observations allowed us to estimate the confirmation rate of the redshift measurements, which ranges from 99.8\% to 70\% depending on the flag (see \citealt{Lilly2009} for details). All the redshifts have been checked by at least two astronomers. A decimal digit specifies whether the redshift is in agreement with the photometric redshifts \citep{Feldmann2006} computed from optical and near-infrared (NIR) photometry using the code ZEBRA \citep[Zurich Extragalactic Bayesian Redshift Analyzer,][]{Feldmann2008}. For some objects, the measurement was hampered by technical problems (for instance, a spectrum at the edge of the slit); in those cases, a flag $-99$ has been assigned. Different flags have been assigned to identify broad-line AGNs and targets observed by chance in slits. \subsection{Photometry} \label{photo} The photometry used in the following is part of the COSMOS observations and encompasses optical to NIR wavelengths: $u*$ and $K_s$ from CFHT, $B_J$, $V_J$, $g^+$, $r^+$, $i^+$, and $z^+$ from Subaru, and Spitzer IRAC magnitudes at $3.6$, $4.5$, $5.8\,\mu$m. Details of the photometric observations and data reduction are given in \citet{Capak2007} and \citet{McCracken2010}. The scarcity of standard stars in the photometric observations and the uncertain knowledge of the filter responses result in an uncertain calibration of the zero-points.
To avoid this inconvenience, we optimised the photometry by applying offsets to the observed magnitudes: we computed these photometric shifts for each band minimising the differences between observed magnitudes and reference ones computed from a set of spectral energy distributions (hereafter SEDs). We adopted an approach similar to \citet[see their Table 13]{Capak2007}, but considering the same set of SEDs we used to compute stellar masses detailed in Sect.~\ref{stellarmasses}, obtaining in general very similar offsets for all the filters. \subsection{Stellar masses} \label{stellarmasses} Stellar masses were evaluated by means of a SED fitting technique, using the code \emph{Hyperzmass}, a modified version of the photometric redshift code \emph{Hyperz} \citep{Bolzonella2000}. \citet{Marchesini2009} analysed the effect of random and systematic uncertainties in the stellar mass estimates on the GSMF, considering the influence of metallicity, extinction law, stellar population synthesis model, and initial mass function (IMF). On the other hand, \citet{Conroy2009} analysed the impact of the choice of the reference SEDs on the output parameters of the stellar population synthesis. Here we describe the approach and the tests we performed on our data. We used different libraries of SEDs, derived from different models of stellar population synthesis: (1)~the well-known \citet[hereafter BC03]{Bruzual2003} library, (2)~\citet[hereafter M05]{Maraston2005} and (3)~\citet[hereafter CB07]{Charlot2010}. The main difference between the three libraries is the treatment of thermally pulsing asymptotic giant branch (TP-AGB) stars. M05 models include the TP-AGB phase, calibrated with local stellar populations. This stellar phase is the dominant source of bolometric and NIR energy for a simple stellar population in the age range $0.2$ to $2$\,Gyr. 
Summing up the effects of both overshooting and TP-AGB, the M05 models are brighter and redder than the BC03 models for ages between $\sim 0.2$ and $\sim 2$\,Gyr \citep{Maraston2006}. The use of the M05 models leads to the derivation of lower ages and stellar masses for galaxies in which the TP-AGB stars contribute significantly to the observed SED (i.e., ages of the order of $\sim 1$\,Gyr). At older ages, the M05 models are instead bluer. CB07 is the first release of the new version of the Charlot \& Bruzual library, which is not yet public. CB07 models include the prescription of \citet{Marigo2007} for the TP-AGB evolution of low and intermediate-mass stars. As for the M05 models, this assumption produces significantly redder NIR colours, hence younger ages and lower masses for young and intermediate-age stellar populations. A brief description of the effect on GSMFs of different choices of template SEDs can be found in the companion paper by \citet{Pozzetti2010}. All the considered libraries provide a simple stellar population (SSP) and its evolution in many age steps for a fixed metallicity and a given IMF; from the SSP models it is possible to derive the composite stellar populations that can reproduce the different types of observed galaxies, by imposing a star formation history (SFH). We compiled 10 exponentially declining SFHs with $e$-folding times ranging from $0.1$ to $30$\,Gyr, plus a model with constant star formation. Smooth SFHs are a simplistic representation of the complex SFHs galaxies have experienced. In \citet{Pozzetti2007}, using VVDS data, we also computed stellar masses using SEDs with random secondary bursts superimposed on smooth SFHs, finding average differences well within the statistical uncertainties for most of the sample.
However, repeating the comparison with the zCOSMOS 10k sample, we estimated that about $15$\% of the sample has $\log {\mathcal M}_{\rm complex} / {\mathcal M}_{\rm smooth} \ga 0.35$\,dex \citep[see also][]{Pozzetti2010}. Most of these galaxies are characterised by a significant fraction of stellar mass ($\sim 5$ -- $15$\%) produced in a secondary burst in the past Gyr and an age of the underlying smoothly evolving population that is a few Gyr older than the age obtained by fitting SEDs with only smooth SFHs. We verified that these differences in the stellar mass estimate produce negligible effects on the final GSMF, and therefore the results are not affected by the choice of the SEDs. The IMF is another important parameter: different choices of the IMF produce different estimates of the stellar mass, but these differences can be accounted for statistically. The most widely used IMFs are those of Salpeter \citep{Salpeter1955}, Kroupa \citep{Kroupa2001}, and Chabrier \citep{Chabrier2003}. The statistical differences in stellar masses are given by $\log {\mathcal M}_{\rm Salp}\simeq \log {\mathcal M}_{\rm Chab} +0.23$ and $\log {\mathcal M}_{\rm Chab}\simeq \log{\mathcal M}_{\rm Krou} -0.04$. Using the zCOSMOS sample and a mock photometric catalogue, we checked how the other parameters of the SED fitting, i.e., the age and the amount of reddening, vary when the SEDs are compiled using the Chabrier and Salpeter IMFs: we found that these parameters are very similar for the two best-fit SEDs, with negligible offset and very small dispersion. In the following, stellar masses are computed assuming the Chabrier IMF. In stellar population synthesis models, the metallicity can either evolve with time or remain fixed. In BC03, the included software does not allow us to build SEDs with evolving metallicity, although $6$ different values of $Z$ are available.
To evaluate the effect of metallicity on stellar masses and GSMFs, we verified in simulated and real catalogues that the inclusion of different values of $Z$ does not introduce a significant bias, with differences in the best-fit stellar masses of $\la 0.1$\,dex. Allowing all the available values of $Z$ does not lead to a substantial improvement in the quality of the best fits, at the cost of introducing an additional free parameter. We therefore adopted a fixed, solar metallicity. Dust extinction was modelled using the Calzetti law \citep{Calzetti2000}, with values ranging from $0$ to $3$ magnitudes of extinction in the $V$ band. The $\chi^2$ minimisation comparing observed and template fluxes at a fixed redshift $z=z_{\rm spec}$ provides the best-fit SED, with which a number of physical parameters are associated, such as age, reddening, instantaneous star formation rate, and stellar mass. We note that, throughout this paper, stellar mass is not the time integral of the star formation rate, because from that value we would have to subtract the return fraction, i.e., the fraction of gas processed by stars and returned to the interstellar medium during their evolution. Tests on simulated catalogues considering the effect on stellar mass estimates of different choices of reddening law, SFHs, metallicities, and SED libraries show a typical dispersion of the order of $\sigma_{\log\mathcal{M}}\simeq 0.20$. Even a simpler technique, such as that used by \citet{Maier2009} and derived from Eq.~1 of \citet{Lin2007}, produces a scatter not larger than $\sim 0.16$\,dex, although with some slight trend as a function of stellar mass and redshift. These tests show that stellar mass is a rather stable parameter in SED fitting when dealing with a dataset spanning a wide wavelength range extending to the NIR.
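To make the fitting step concrete, the core of such a $\chi^2$ minimisation can be sketched as follows. This is an illustrative sketch only, not the actual \emph{Hyperzmass} implementation; the function and variable names are ours. For each template, the best-fit normalisation has an analytic solution, and the stellar mass of the galaxy is the template mass scaled by that normalisation:

```python
import numpy as np

def fit_sed(f_obs, sigma, templates, template_masses):
    """Chi-square fit of observed fluxes against a grid of template SEDs.

    For each template the best normalisation `a` has the analytic solution
    a = sum(f_obs * f_tmp / sigma^2) / sum(f_tmp^2 / sigma^2),
    and the stellar mass is the template mass scaled by `a`.
    Returns (minimum chi-square, stellar mass, best-fit template fluxes).
    """
    best = (np.inf, None, None)
    for f_tmp, m_tmp in zip(templates, template_masses):
        a = np.sum(f_obs * f_tmp / sigma**2) / np.sum(f_tmp**2 / sigma**2)
        chi2 = np.sum((f_obs - a * f_tmp)**2 / sigma**2)
        if chi2 < best[0]:
            best = (chi2, a * m_tmp, f_tmp)
    return best
```

In practice the template grid also spans age, extinction, and SFH, and the other physical parameters are read off the winning template.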
Since the fluxes provided by the available libraries at IR wavelengths are extrapolated, the choice of filters used in determining the best-fit solutions is limited to $2.5\,\mu$m rest-frame for the M05 models (at longer wavelengths, these models use the Rayleigh-Jeans tail extrapolation) and to $5\,\mu$m rest-frame for the BC03 and CB07 models, since at longer wavelengths the dust re-emission can contribute to the flux budget. A problem arising when dealing with a very large number of template SEDs is the occurrence of unphysical best fits. We applied two priors (the same as used in \citealp{Pozzetti2007}, and proposed by \citealp{Fontana2004} and \citealp{Kauffmann2003}) to avoid this problem. In particular, we excluded best-fit SEDs not fulfilling the following requirements: (1)~$A_V \le 0.6$ if age$/\tau \ge 4$ (i.e., old galaxies must have a moderate dust extinction); (2)~star formation must start at $z > 1$ if $\tau < 0.6$\,Gyr (to obtain a better estimate of the ages of early-type galaxies, typically fitted by these low-$\tau$ models). Moreover, we verified by means of simulations that imposing a minimum best-fit age of $0.09$\,Gyr reduces potential degeneracies and improves the reliability of the stellar mass estimate. The maximum allowed age is the age of the Universe at $z_{\rm spec}$. As mentioned in Sect.~\ref{photo}, a first SED-fitting run over the brightest galaxies with the most secure redshifts was performed to compute the photometric offsets. We checked that additional iterations of the SED fitting and offset estimation do not significantly improve the $\chi^2$ statistics. To ease the comparison with literature results, in the following we present GSMFs obtained adopting the BC03 stellar masses. However, the qualitative trends are the same for any choice of stellar population synthesis model.
\subsection{Environment} \label{env} The density field was derived for the 10k spectroscopic sample using different estimators combined with the ZADE \citep[Zurich Adaptive Density Estimator,][]{Kovac2010a} algorithm. Some of the existing studies rely heavily on photometric redshifts and projected densities computed in wide redshift slices, possibly diluting the signal from overdense regions. \citet{Cooper2005} found that photometric redshifts with accuracies of $\sigma_z \ga 0.02$ hamper the computation of the density field on small scales. An important added value of COSMOS is the availability of spectroscopic redshifts obtained with a good sampling rate, making feasible an accurate estimate of the environment, with high resolution also in the radial direction. To this aim, we used the spectroscopic redshifts to delineate a skeleton of galaxy structures, and we incorporated a statistical treatment of the likelihood functions of the photometric redshifts computed with ZEBRA. This approach allows us to probe a wide range of environments, thanks to the precision of the spectroscopic redshifts, and to reduce the Poisson noise, thanks to the inclusion of the fractional contributions of objects with photometric redshifts, estimated from their probability functions. The results have been extensively and carefully tested on mock catalogues from the Millennium simulation \citep{Kitzbichler2007}. The reconstruction of the overdensities $1+\delta$ has been explored using different tracer galaxies, different spatial filters, and different weights (e.g., luminosity or stellar mass) assigned to each galaxy. The density contrast $\delta$ is defined as $(\rho-\bar{\rho})/\bar{\rho}$, where $\rho$ is the density as a function of RA, Dec, and $z$, and $\bar{\rho}$ is the mean density measured at the same redshift. In principle, a fully realistic physical representation of the environment should involve the mass of the dark matter haloes in which the galaxies are embedded.
This mass is clearly not directly accessible to observations, hence an affordable surrogate is to weight the number density field by the stellar masses of the surrounding galaxies. This provides a proxy for the overall density field, since galaxies are biased tracers of the underlying matter distribution. The choice of a fixed selection band results in different populations being preferentially sampled at different redshifts; weighting with stellar mass should also mitigate this issue. As expected, mass-weighted overdensities have an increased dynamical range, in particular at the highest densities. As we show in Sect.~\ref{mfenv}, this procedure, although physically motivated, can introduce some spurious signal, mainly induced by the mass of the galaxy around which the overdensity is computed. Another estimate of the high density environments in which galaxies reside can be obtained by selecting optical groups, as described in \citet{Knobel2009}, or X-ray ones (\citealt{Finoguenov2007}; Finoguenov et al. \citeyear{Finoguenov2010}); low density environments can be tracked by isolated galaxies defined using their Voronoi volumes, as in \citet{Iovino2010}. The two determinations of the environment are in fairly good agreement, considering the differences in the scales involved, with most galaxies being members of groups residing in the most overdense regions (see also Sect.~\ref{defD1D4}). In the following, we use as reference the $5$th nearest neighbour estimator (hereafter 5NN) of the density field, which represents a good compromise between the smallest accessible scales and the reliability of the overdensity values.
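As a schematic illustration of a nearest-neighbour overdensity estimate, stripped of the ZADE treatment of photometric redshifts and of the details of the tracer selection described in the text, one could write (names are hypothetical; tracers are assumed to be pre-selected in a velocity slice around each target):

```python
import numpy as np

def overdensity_5nn(positions, mean_density, n=5):
    """Projected n-th nearest-neighbour overdensity for each galaxy.

    positions: (N, 2) array of comoving coordinates in the plane of the sky.
    The local surface density is n / (pi * r_n^2), with r_n the distance to
    the n-th nearest neighbour; delta = rho / rho_mean - 1.
    """
    pos = np.asarray(positions)
    delta = np.empty(len(pos))
    for i, p in enumerate(pos):
        d = np.sort(np.hypot(*(pos - p).T))    # d[0] = 0 (the galaxy itself)
        rho = n / (np.pi * d[n] ** 2)          # density set by the n-th neighbour
        delta[i] = rho / mean_density - 1.0
    return delta
```

The actual estimator additionally weights neighbours by the photometric-redshift probability fractions and, optionally, by stellar mass.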
In this approach, tracer galaxies, selected to be brighter than absolute magnitudes $M_B=-20.5-z$ or $M_B=-19.3-z$, are considered within an interval of $\pm 1000$\,km\,s$^{-1}$ centred on the central galaxy and counted, after distance sorting, until their number becomes larger than $5$, considering also the fractional contributions of objects with photometric redshifts. Photometric redshifts are not crucial to the estimate of the density field: they mainly help to reduce the Poisson noise and to improve the agreement with the ``true'' density field, as has been proven by testing the method on simulated samples. Overdensities are then computed at the position of each galaxy in the spectroscopic sample, considering also the contribution to the number or mass density of the galaxy itself. We checked that the same qualitative trends of the GSMFs analysed in the following are also present when considering other estimators. \subsection{Galaxy type classification} \label{class} Galaxy types can be classified in a multitude of ways, using their rest-frame colours, their SEDs, their spectroscopic features, their structural parameters, and their morphologies, all of them derivable with different methods. Different classifications map different physical properties. For instance, the rest-frame colour $U-B$ and the galaxy SED are used as proxies of the star formation activity and history, whereas the morphology is an indicator of the dynamical state, and the two are partially independent \citep{Mignoli2009}. Although COSMOS offers a wide range of methods to group galaxies, we chose to use only two types of classification: photometric and morphological. The photometric type is defined by SED fitting to the optical magnitudes, assuming as reference the same templates used by \citet{Ilbert2006}: the four locally observed CWW \citep{Coleman1980} SEDs and two starburst SEDs from \citet{Kinney1996}, extrapolated to UV and mid-IR wavelengths.
These six templates are then interpolated to obtain 62 SEDs and optimised with VVDS spectroscopic data. The SED fitting, a $\chi^2$ minimisation performed with the code ALF \citep{Ilbert2005,Zucca2006,Zucca2009}, provides as output the best-fit solution. Galaxies are then classified into two types, closely corresponding to colours of ellipticals up to early spirals (type 1, hereafter T1) and later types up to irregular and starburst galaxies (type 2, hereafter T2) to explore in a simple way the evolution of the early- and late-type bimodality. We adopted the morphological classification presented in \citet{Scarlata2007}: the availability of deep F814W-band HST ACS images over the whole COSMOS field \citep{Koekemoer2007} allows a good determination of the structural parameters on which the morphology derived with the software ZEST \citep[Zurich Estimator of Structural Types,][]{Scarlata2007} is based. The method is a principal component analysis (PCA) based on estimates of asymmetry, concentration, Gini coefficient, $M_{20}$ (the second order moment of the $20$\% brightest pixels), and ellipticity. The morphological classes are the following: early-type (type 1), disk (type 2, with an associated sub-classification ranging from $0$ to $3$ representing the ``bulgeness'', derived from the S\'ersic index $n$, \citealt{Sargent2007}), and irregular galaxies (type 3). Adopting the same line of reasoning used for the photometric types, we grouped morphologically classified galaxies into two broad classes, with early-type including classes $1$ and $2.0$, i.e., ellipticals and bulge-dominated galaxies. \section{Mass functions} \label{mf} \subsection{The sample} \label{sample} Not all the spectroscopic redshifts have the same level of reliability, as explained in Sect.~\ref{spectro}. The sample we used includes only the galaxies with flags corresponding to most secure redshifts, i.e., starting from flag $=1$ in case of agreement with photometric redshifts.
In detail, we excluded from our sample broad line AGNs ($\sim 1.8$\% of the statistical sample), stars ($\sim 5.9$\%), objects with fewer than five detected magnitudes available to compute the SED fitting ($\sim 1.7$\%) and objects for which the ground photometry can be affected by blending of multiple sources, as derived from the number of ACS sources brighter than $I = 22.5$ within $0.6''$ ($\sim 0.5$\%). The final sample contains $8450$ galaxies with redshifts between $0.01$ and $2$ and $7936$ in the redshift range where the following analysis is carried out, $z=0.1-1$. For this sample, the global reliability of spectroscopic redshifts is $96$\%, as estimated from the mix of flags and the associated verification rates reported in \citet{Lilly2009}. \subsection{Statistical weights} \label{weights} Because the observed galaxies are only a fraction of the total number of available targets with the same properties, we applied statistical weights to each observed object \citep{Zucca1994,Ilbert2005}. We computed the weight $w_i$ for each galaxy in our sample as the product of two factors connected to the target sampling rate (TSR) and to the spectroscopic success rate (SSR). Here we outline the basic principles on which the computation is based, referring the reader to \citet{Zucca2009} for further details. The TSR is the fraction of sources observed in the spectroscopic survey compared to the total number of objects in the parent photometric catalogue from which they are randomly extracted. In the case of zCOSMOS, the VMMPS tool for mask preparation \citep{Bottini2005} has been set in such a way that the objects have been randomly selected without any bias. A different treatment was reserved for compulsory targets, i.e., objects with forced slit positioning: they have a much higher TSR ($\sim 87$\%) than the ``random'' sample ($\sim 36$\%). The associated weight is $w_i^{\rm TSR}=1/{\rm TSR}$.
The SSR represents the fraction of observed sources with a successfully measured redshift: it is a function of apparent magnitude, being linked to the signal-to-noise ratio of the spectrum, and it ranges from $97.5$\% to $82$\% for the brightest and faintest galaxies, respectively. The weight derived from the SSR is $w_i^{\rm SSR}=1/{\rm SSR}$. The SSR is not only a function of magnitude, but also of redshift, since the spectral features on which the redshift measurement relies can enter or leave the observed wavelength window \citep{Lilly2007}. Therefore, the redshift distribution of the measured redshifts can be different from the real one; it is possible to take into account our lack of knowledge of the failed measurements by using photometric redshifts. Hence, we used the \citet{Ilbert2009} release of $z_{\rm phot}$ and computed the SSR in $\Delta z=0.2$ redshift bins. We also had to consider that the characteristic emission or absorption lines are different for different galaxy types, as shown in \citet{Lilly2009}. We computed the SSR in each redshift bin by separating red and blue galaxies, selected on the basis of their rest-frame $U-V$ colour. The so-called secondary targets, i.e., objects in the parent catalogue, imaged in the slit by chance, were considered separately: they are characterised by a lower SSR because they are often located at the spectrum edge or observed only at their outskirts. We computed and assigned the final weights $w_i=w_i^{\rm TSR}\times w_i^{\rm SSR}$ considering all the described dependencies. \subsection{Mass function methods} \label{mf_method} To compute the GSMFs, we adopted the usual non-parametric method $1/V_{\rm max}$ \citep{Avni1980}, from which we derived the best-fit Schechter function \citep{Schechter1976}. The observability limits inside each redshift bin, $z_{\rm min}$ and $z_{\rm max}$, were computed for each galaxy from its best-fit SED.
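The weighting and estimator just described can be sketched as follows (a minimal sketch; function names and inputs are ours, and the real computation bins TSR and SSR by magnitude, redshift, and colour as described above):

```python
import numpy as np

def statistical_weight(tsr, ssr):
    """w_i = w_i^TSR * w_i^SSR = 1/(TSR * SSR)."""
    return 1.0 / (tsr * ssr)

def vmax_gsmf(logm, vmax, weights, bins):
    """Weighted 1/Vmax estimate of the GSMF, per dex of log stellar mass.

    logm: log10 stellar masses; vmax: accessible comoving volumes;
    weights: statistical weights w_i; bins: mass-bin edges.
    """
    phi = np.zeros(len(bins) - 1)
    dlogm = np.diff(bins)
    for i in range(len(phi)):
        sel = (logm >= bins[i]) & (logm < bins[i + 1])
        phi[i] = np.sum(weights[sel] / vmax[sel]) / dlogm[i]
    return phi
```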
As in \citet{Pozzetti2010}, we estimated the parametric fit of the GSMFs with both a single Schechter function, as in most published results, and the sum of two Schechter functions, which appears to provide a more accurate fit to the data at least in the lowest redshift bins. We adopted the formalism introduced by \citet{Baldry2004,Baldry2006} using a single $\mathcal{M}^*$ to limit the number of free parameters \begin{eqnarray} \phi(\mathcal{M}){\rm d}\mathcal{M} & = & \phi^*_1\left(\frac{\mathcal{M}}{\mathcal{M}^*}\right)^{\alpha_1} \exp\left(-\frac{\mathcal{M}}{\mathcal{M}^*}\right) {\rm d}\frac{\mathcal{M}}{\mathcal{M}^*} + \nonumber \\ && + \,\phi^*_2\left(\frac{\mathcal{M}}{\mathcal{M}^*}\right)^{\alpha_2} \exp\left(-\frac{\mathcal{M}}{\mathcal{M}^*}\right) {\rm d}\frac{\mathcal{M}}{\mathcal{M}^*}\,. \nonumber \\ \end{eqnarray} To date, the need to model a faint-end upturn has been addressed mainly in luminosity function (LF) studies, both in the field \citep{Zucca1997,Blanton2005b} and in clusters and groups \citep{Trentham1998,Trentham2005,Popesso2007,Jenkins2007}. The departure of the GSMF from a single Schechter function at low stellar masses was noticed by \citet{Baldry2006,Baldry2008} and \citet{Panter2004} for SDSS data. At higher redshifts, an \emph{a posteriori} look at the published GSMFs often reveals such an upturn. We refer to $\mathcal{M}_{\rm min}$ as the lowest mass at which the GSMF can be considered reliable and unaffected by incompleteness in $\mathcal{M}/L$ \citep[see][]{Ilbert2004,Pozzetti2007}. A complete description of the procedure can be found in \citet{Pozzetti2010}. Our aim is to recover the stellar mass down to which all the galaxy types contributing significantly to the GSMF can be observed. We derived this value in small redshift slices by considering the $20$\% faintest galaxies, i.e., those contributing to the low-mass end of the GSMF.
For each galaxy of this subsample, we computed the ``limiting mass'', that is, the stellar mass that the object would have had at the limiting magnitude of the survey, $\log\mathcal{M}_{\rm lim} = \log\mathcal{M} + 0.4(I-22.5)$. For each redshift bin, we define the minimum mass as the $95$th percentile of the distribution of limiting masses, and we smooth the $\mathcal{M}_{\rm min}$ versus $z$ relation by means of an interpolation with a parabolic curve. The minimum stellar mass we adopt is the value down to which we can reliably compute the GSMF in each considered redshift bin, i.e., the $\mathcal{M}_{\rm min}$ at the lowest extreme of the interval, since the $1/V_{\rm max}$ method corrects for the residual volume incompleteness. We note that this limit substantially decreases the number of objects considered in each redshift bin to derive the GSMF. The redshift intervals $[0.10,0.35]$, $[0.35,0.50]$, $[0.50,0.70]$, and $[0.70,1.00]$ were chosen to contain a similar number of galaxies and the values we obtained for the limiting mass of the total sample are $\log\mathcal{M}_{\rm lim}/\mathcal{M}_\odot = 8.2, 9.4, 9.9, 10.5$ from the lowest to the highest redshift bin. When dealing with GSMFs divided into galaxy types, the minimum masses are obtained separately for each subsample. \subsection{The choice of the environment definition} \label{envchoice} As mentioned in Sect.~\ref{env}, the density field of the COSMOS field \citep[see][]{Kovac2010a} was reconstructed for different choices of filters (of fixed comoving aperture or adaptive with a fixed number of neighbours), tracers (from flux-limited or volume-limited subsamples), and weights (stellar mass, luminosity or no weight, i.e., considering only the number of galaxies). We tested the options that allow an unbiased comparison over the whole redshift range, from $z=0.1$ to $1.0$.
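A minimal sketch of the limiting-mass procedure above (without the parabolic smoothing in redshift; function name and inputs are ours):

```python
import numpy as np

def minimum_mass(logm, i_mag, i_limit=22.5, faint_fraction=0.20, percentile=95.0):
    """Sketch of the M_min estimate in a redshift slice.

    Take the faintest 20% of galaxies, compute the mass each would have
    at the survey limit, log M_lim = log M + 0.4 (I - I_limit), and use
    the 95th percentile of that distribution as the minimum mass.
    """
    order = np.argsort(i_mag)
    n_faint = max(1, int(round(faint_fraction * len(i_mag))))
    faint = order[-n_faint:]                          # faintest objects
    logm_lim = logm[faint] + 0.4 * (i_mag[faint] - i_limit)
    return np.percentile(logm_lim, percentile)
```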
In particular, we explored the 5NN estimator and the 5NN mass-weighted one (hereafter 5NNM), both of them computed using volume-limited tracers, with two choices of luminosity limits: $M_B \le -20.5-z$ (bright tracers) and $M_B \le -19.3-z$ (faint tracers), where $M_B$ is the absolute magnitude in the $B$ band computed with ZEBRA. The absolute magnitude cut was derived by considering the distribution of absolute magnitudes versus redshift, the so-called Spaenhauer diagram \citep{Spaenhauer1978}, and the evolution of the parameter $M_B^*$ of the LFs \citep{Zucca2009}. Two different limits are necessary because of the rarity of bright tracers at low redshift and the incompleteness of faint tracers at high redshift; for this reason, the two overdensity estimates cannot be computed over the whole redshift range, but only at $z=[0.1,0.7]$ and $z=[0.4,1.0]$ for faint and bright tracers, respectively. \subsubsection{The effect of environment tracers on GSMF} Two problems affect the study on the evolution of GSMFs as a function of environment, which must be solved: (1)~we have to understand whether the 5NNM estimator is a more robust tracer of the environment, as predicted theoretically; (2)~we have to be certain that the use of two different tracers, e.g., with a change at $z=0.7$, does not introduce a spurious signal that may be misinterpreted as an evolutionary trend. To answer both questions, we used as a test case the redshift interval $[0.4,0.7]$, where all the estimates are available, and we computed the quartiles of the $1+\delta$ distribution in this redshift bin considering only the objects with masses higher than the minimum mass. Henceforth, we refer to the lowest and highest quartiles of $1+\delta$ as D1 and D4, respectively. In the remainder of the paper, we focus our study on these two extremes.
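The D1/D4 selection just defined can be sketched as follows (function name and toy inputs are ours; as in Sect.~\ref{defD1D4}, the quartile boundaries are computed from galaxies above a mass cut and then applied to the sample):

```python
import numpy as np

def extreme_quartiles(one_plus_delta, logm, logm_min):
    """Sketch of the D1/D4 selection: quartile boundaries of 1+delta are
    computed only from galaxies above the minimum mass, then applied to
    the whole sample."""
    reference = one_plus_delta[logm >= logm_min]
    q25, q75 = np.percentile(reference, [25.0, 75.0])
    d1 = one_plus_delta <= q25   # lowest-density quartile
    d4 = one_plus_delta >= q75   # highest-density quartile
    return d1, d4, (q25, q75)
```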
In Fig.~\ref{fig:mf_test_5NN}, panel (a), we compare the GSMFs derived using a single Schechter function fit, for 5NN and 5NNM overdensity estimators, both using the faint volume-limited tracers. The separation of GSMFs between D1 and D4 environments is more prominent when considering the mass-weighted estimator, because of the larger dynamic range of the $1+\delta$ values studied. In particular, the main difference is in the massive part of the D1 GSMF: massive galaxies in low density environments using 5NN move to intermediate densities for the 5NNM estimator because of their high stellar masses. This decreases the number (and therefore the normalisation of the GSMF) of massive galaxies in low density environments when the 5NNM estimator is adopted. To test whether this enhancement of the difference between the D1 and D4 GSMFs is real, we performed the following test: we removed the mass--density relation as much as possible by shuffling the original catalogue and computing overdensities considering objects with their original coordinates, but assigning to each one the observed properties (magnitudes, stellar mass, weight) of the 25th following object after redshift sorting. Both 5NN and 5NNM overdensities and their quartiles were then recomputed, since the shuffling also changes the tracers. The choice of the 25-object jump is a compromise between the requirements of preserving a similar probability of being observed at the chosen redshift (i.e., avoiding unphysical galaxy properties if a large jump in redshift is allowed) and selecting objects possibly not in the same structure, where we know galaxies share similar properties (in this case the mass--density relation would not be removed). In this test, we expect the GSMFs derived in D1 and D4, regardless of the estimator of the density contrast used, to be approximately the same, since, after reshuffling, massive galaxies should no longer occupy preferentially high density environments.
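The shuffling scheme can be sketched as follows (a simplified sketch; the wrap-around at the end of the redshift-sorted list is our own choice to keep the sample size fixed):

```python
import numpy as np

def shuffle_properties(z, properties, jump=25):
    """Sketch of the shuffling test: each galaxy keeps its position but
    receives the observed properties (magnitudes, mass, weight) of the
    object `jump` places after it once the catalogue is sorted in
    redshift (wrapping around at the end of the sorted list)."""
    order = np.argsort(z)
    shuffled = np.empty_like(properties)
    shuffled[order] = properties[np.roll(order, -jump)]
    return shuffled
```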
Moreover, we also expect that the 5NN and 5NNM estimators of the density should produce similar results, since the 5 neighbours should have a random distribution of their stellar masses. The comparison between GSMFs with 5NN and 5NNM ``shuffled'' overdensities is shown in Fig.~\ref{fig:mf_test_5NN}, panel (b). For 5NN, we see that GSMFs in D1 and D4 are more similar than before, but not coincident; this may be due to an insufficient amount of shuffling to separate masses and environment in the biggest structures. Furthermore, the 5NN and 5NNM estimates are still quite different, mainly at high masses in D4. These results may be caused by the non-negligible influence of the stellar mass of the object itself in the case of the 5NNM estimator, possibly enhanced by a residual signal in the mass--density relation. In our last test to interpret this residual signal, we removed the central galaxy when computing $1+\delta$ from the original catalogue: the comparison of the resulting GSMFs is in Fig.~\ref{fig:mf_test_5NN} panel (c), which now shows fully consistent GSMFs at high and low densities as defined from 5NN and 5NNM estimators. These tests seem to indicate that the mass weighting scheme assigns too great an importance to stellar masses on scales of the order of the galaxy itself. Thus, to avoid any possible bias due to stellar mass over-weighting, despite its physically motivated link with the halo mass, we discarded the 5NNM estimator and performed our analysis using number-weighted overdensities. \begin{figure} \centering \includegraphics[width=\hsize]{bolzonella_zcosmos_fg1.ps} \caption{ (a) Comparison of GSMFs for environment estimates from 5NN and 5NNM volume limited with faint tracers: Black: D1 (underdense); Grey: D4 (overdense). Solid line and empty dots: 5NN. Dashed line and empty triangles: 5NNM. The vertical dashed line represents the value of $\mathcal{M}_{\rm min}$ at $z=0.4$.
(b) As in panel (a), but 5NN and 5NNM overdensities have been estimated after a random shuffling of galaxy properties to remove the mass--density relation. (c) As in panel (a), but 5NN and 5NNM overdensities have been estimated without considering the properties of the central galaxy. (d) GSMFs for bright ($M_B \le -20.5-z$, dashed lines and empty triangles) and faint ($M_B \le -19.3-z$, solid lines and empty dots) tracers using 5NN overdensities in the D1 (black) and D4 (grey) environments.} \label{fig:mf_test_5NN} \end{figure} To help resolve the second problem, we tested whether the change of the tracers at $z=0.7$ could introduce some change in the GSMF, which could be misinterpreted as evolution. We already know that the scales probed at the same $1+\delta$ are roughly twice as large for bright as for faint tracers \citep{Kovac2010a}, therefore it is not possible to use the same $1+\delta$ threshold for both faint and bright tracers. To overcome this problem, we determined the quartiles of $1+\delta$ separately for each redshift bin. The results of this test are shown in panel (d) of Fig.~\ref{fig:mf_test_5NN}. In the $z=0.4-0.7$ bin, where both tracers are available, the GSMFs obtained with the two tracers, with independently computed quartiles, are completely consistent with each other in the under- and overdense environments D1 and D4, and therefore we assume we can safely compare the results at redshifts $z<0.7$ computed with faint tracers to those computed at $z\ge 0.7$ with the bright ones. \subsubsection{Definition of overdensity quartiles} \label{defD1D4} As already mentioned, we traced the effect of extreme environments on the evolution of galaxies by considering the quartiles D1 and D4 of the $1+\delta$ distribution, using 5NN volume-limited overdensities.
The quartiles were computed in each redshift bin considering only the population of galaxies more massive than the minimum stellar mass considered for the GSMF (see Sect.~\ref{mf_method}) in the highest redshift bin, i.e., ${\log \mathcal M}_{\rm min}/\mathcal{M}_\odot \simeq 10.5$, to ensure that this definition is unaffected by the variation as a function of redshift in the observable mass range, populated by a different mix of galaxy types. The quartile definition used throughout this paper is shown in Fig.~\ref{fig:quartiles}. The median scales probed by the $5$th nearest neighbour range from $0.87\,{\rm Mpc}\,h_{70}^{-1}$ in the D4 environment at low redshift to $7.57\,{\rm Mpc}\,h_{70}^{-1}$ in the D1 quartile at the highest redshift bin, where we have to use bright tracers. \begin{figure} \centering \includegraphics[width=\hsize]{bolzonella_zcosmos_fg2.ps} \caption{ Definition of quartiles for the 5NN estimator using volume-limited tracers: grey points represent the full sample, black squares the galaxies with masses above the $\mathcal{M}_{\rm min}$ computed in the last redshift bin, horizontal segments show the values of the quartiles of $1+\delta$ computed from the distribution of these massive galaxies, and the dashed segments indicate the median.} \label{fig:quartiles} \end{figure} The trend toward higher values of overdensity at lower redshifts is in some measure expected from the growth of structures, which amplifies the dynamic range of overdensities, but this increase cannot be quantified using the linear approximation, which is invalid on the scales probed by our density estimates. The different values of the $1+\delta$ quartiles in the different redshift bins correspond to very similar scales when the same tracers are used. It is not easy to compare the values of density contrast in Fig.~\ref{fig:quartiles} with those of known objects, such as rich clusters or voids, because of the different definitions of environment and the different scales probed.
A possible comparison is instead feasible with the distribution of $1+\delta$ for the members of galaxy groups identified in the same COSMOS field. This comparison is shown in Fig.~22 of \citet{Kovac2010a}, where it is possible to see that galaxy members of optical groups with $\ge 2$ members have a distribution of overdensities that peaks at $1+\delta \sim 6$, whereas richer groups and X-ray candidate clusters typically have $1+\delta \sim 20$. Although the different classifications of the environment are obviously related, they are not perfectly coincident, with $\sim 59$\,\% of the objects in the group catalogue used by \citet{Kovac2010b} belonging to D4 (and only $6$\% to D1) and $\sim 73$\,\% of the objects classified as ``isolated'' by \citet{Iovino2010} being in D1 (and only $0.2$\% in D4). \subsection{Mass functions in different environments} \label{mfenv} \begin{figure} \centering \includegraphics[width=\hsize]{bolzonella_zcosmos_fg3.ps} \caption{ The MFs in the extreme quartiles D1 and D4 of the 5NN volume-limited overdensities. Black: total GSMF, with $1/V_{\rm max}$ dots and their Poissonian error bars and Schechter function fit (double Schechter function in the first two redshift ranges and a single one at higher redshifts). Blue: lowest $1+\delta$ quartile. Red: highest density quartile. } \label{fig:mfenv} \end{figure} The GSMFs in the two extreme environments are shown in Fig.~\ref{fig:mfenv}: the bimodality is clearly visible in the global GSMFs \citep[][see also the points and lines in Fig.~\ref{fig:mfenv}]{Pozzetti2010}, with an upturn at the low-mass end around $\mathcal{M}\sim 10^{9.5}\,\mathcal{M}_{\odot}$, which is more pronounced in the high density regions, at least in the two lowest redshift bins. We used the double Schechter function fit only up to $z\sim 0.5$, where the dip in the GSMFs falls at stellar masses higher than $\mathcal{M}_{\rm min}$. 
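For reference, the Schechter parameterisations used for these fits can be evaluated per dex of stellar mass as follows (a sketch; the per-dex form and function name are ours, and setting $\phi^*_2=0$ recovers the single Schechter function used at higher redshifts):

```python
import numpy as np

def double_schechter(logm, logm_star, alpha1, phi1, alpha2, phi2):
    """Double Schechter function with a single M* (Baldry et al. form),
    expressed per dex of log stellar mass:
      phi(logM) = ln(10) e^{-x} [phi1 x^{1+alpha1} + phi2 x^{1+alpha2}],
    with x = M / M*.  phi2 = 0 recovers a single Schechter function.
    """
    x = 10.0 ** (np.asarray(logm) - logm_star)
    return np.log(10.0) * np.exp(-x) * (
        phi1 * x ** (1.0 + alpha1) + phi2 * x ** (1.0 + alpha2))
```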
Because of our choice of environment definition, the normalisation of D1 and D4 GSMFs does not have a clear physical meaning, since the volumes occupied by each galaxy refer to the total volume of the survey and the number of galaxies in each environment is not $1/4$ of the total sample. To obtain a more meaningful definition of the normalisation, we should compute the volume occupied by the structures with the considered ranges of $1+\delta$; here we compare only the GSMF shapes, and defer a more in-depth study of the normalisation to a future analysis. A striking difference in GSMF shapes is evident, with massive galaxies preferentially residing in high density environments, characterised on average by a higher $\mathcal{M}^*$, and with a steeper slope than the D1 GSMFs at $z\ge 0.35$. The different shapes and the strong bimodality in the D4 GSMF can be interpreted in a similar way to the global one \citep{Pozzetti2010} by the different contribution of different galaxy types, as we see in the next section. The parameters of the Schechter fits to the GSMFs are listed in Table~\ref{tab:mfenv}. \begin{table} \caption{Parameters of the GSMF in the low- and high-density environments.} \begin{tabular}{lccccc} \hline\hline & $z$ & $\alpha_1$ & $\alpha_2$ & $\log\mathcal{M}^*/\mathcal{M}_\odot$ & $\phi^*_1/\phi^*_2$\\ \hline D1 & $0.10-0.35$ & -1.35 & +0.14 & 10.53 & 1.61 \\ & $0.35-0.50$ & -1.25 & +0.82 & 10.52 & 0.79 \\ & $0.50-0.70$ & -1.13 & ... & 10.82 & ... \\ & $0.70-1.00$ & -1.12 & ... & 10.80 & ... \\ \hline D4 & $0.10-0.35$ & -1.80 & -0.33 & 10.76 & 0.01 \\ & $0.35-0.50$ & -1.28 & +0.95 & 10.52 & 0.50 \\ & $0.50-0.70$ & -0.70 & ... & 10.92 & ... \\ & $0.70-1.00$ & -0.90 & ... & 10.98 & ...
\\ \hline \end{tabular} \label{tab:mfenv} \end{table} \section{Evolution of galaxy types in different environments} \label{mftype} \begin{figure*} \centering \includegraphics[width=0.48\hsize]{bolzonella_zcosmos_fg4a.ps} \includegraphics[width=0.48\hsize]{bolzonella_zcosmos_fg4b.ps} \caption{ Left: quartile D1 (low density environment). Right: quartile D4 (high density). Grey: total GSMF. Black: MF relative to the considered quartile. Red triangles and dotted lines: photometric early-type galaxies. Blue squares and dashed lines: photometric late-type galaxies. At high masses, the upper limit points show the $2\sigma$ confidence limits for $0$ detections following \citet{Gehrels1986}.} \label{fig:mf_delta_type} \end{figure*} The need to use a double Schechter function to fit the global and environment-selected GSMFs at least up to $z\sim 0.5$ may be linked to the contribution of different galaxy populations. Galaxies with the same luminosity may be characterised by very different $\mathcal{M}/L$, which can explain why it is difficult to identify the bimodal shape of LFs, even though this bimodality was first detected in LFs. To study the contribution of galaxies with different photometric types and morphologies in the extreme environments, we computed the GSMFs of D1 and D4, defined as in Sect.~\ref{mfenv}, by dividing each sub-sample into galaxy classes. The values of $\mathcal{M}_{\rm min}$ were computed separately for early/elliptical/bulge-dominated and late/spiral/disc-dominated galaxies. These values differ significantly, especially at low redshift, confirming the very different distributions of $\mathcal{M}/L$. The results for the contribution of different photometric types to D1 and D4 GSMFs are presented in Fig.~\ref{fig:mf_delta_type}, and the best-fit parameters of the single Schechter function fits are given in Table~\ref{tab:mfenvt}. Dividing the sample into the two broad morphological classes results in qualitatively similar GSMFs. 
Looking at the plots in Fig.~\ref{fig:mf_delta_type}, it is clear that the stronger bimodality in the first two redshift bins in the D4 GSMF is primarily due to the larger contribution of early-type galaxies. As for the global GSMF, in both of the considered environments early-type galaxies are dominant at high masses ($\log\mathcal{M/M}_\odot \ga 10.7$), while their contribution rapidly decreases at intermediate masses. On the other hand, late-type galaxies, which have much steeper GSMFs, start to dominate at intermediate and low masses ($\log\mathcal{M/M}_\odot \sim 10$). In addition to assessing the relative contributions of different galaxy types in D1 and D4, it is sensible to ask whether the shape of the GSMFs of galaxies of the same type is the same in different environments, i.e., whether a ``universal'' mass function of early/late-type galaxies exists. In Fig.~\ref{fig:mf_type_d14}, we compare early- and late-type GSMFs in the two environments, renormalised in each redshift bin by the number density computed for masses $\ge \mathcal{M}_{\rm min}$. The shapes of the GSMFs differ only slightly, with a marginally higher density of massive galaxies in overdense regions; however, the similarity of the GSMFs in all the redshift bins, and in particular for late-type galaxies, is remarkable and somewhat unexpected. If the shape of the GSMF of galaxies of the same type is similar in different environments, any difference seen in the total GSMFs in under- and overdense regions at low redshift should be due to the different evolution of their normalisations. \begin{figure*} \centering \includegraphics[width=0.48\hsize]{bolzonella_zcosmos_fg5a.ps} \includegraphics[width=0.48\hsize]{bolzonella_zcosmos_fg5b.ps} \caption{ Left: GSMFs of photometric early-type galaxies in D1 and D4 environments, renormalised to number density $=1$ for stellar masses $>\mathcal{M}_{\rm min}$. Right: the same for photometric late-type galaxies.
Dotted lines, circles and dark shaded regions represent the GSMFs in underdense regions, D1. Dashed lines, squares and light shaded regions illustrate D4 GSMFs.} \label{fig:mf_type_d14} \end{figure*} \begin{figure} \centering \includegraphics[width=\hsize]{bolzonella_zcosmos_fg6.ps} \caption{ Evolution of the fractional contribution of the photometric early-type galaxies to the global MFs (the late-type fractional contribution is complementary to the one shown in this plot) in the two extreme environments. Blue lines and circles refer to the low density environment D1 (displaced by 0.02 in the abscissa to avoid overlapping), red lines and squares to the high density sample D4. Dotted lines and empty symbols represent the highest redshift bin $z=[0.7,1.0]$, solid lines and filled points the lowest one, $z=[0.1,0.35]$. The vertical dashed line indicates $\mathcal{M}_{\rm min}$ in the high redshift bin (the value at low redshift is outside the plot). Error bars have been computed as the $16$--$84$\% range of the distribution of Monte Carlo simulations.} \label{fig:frac_mf_delta_type} \end{figure} \begin{table} \caption{Parameters of the GSMF for the two photometric types T1 (early-type galaxies) and T2 (late-type galaxies) in the low- and high-density environments. When the parameter $\alpha$ is undetermined, we fixed it to the best-fit value in the previous bin of the same environment.
Error bars are at the $1\sigma$ confidence level.} \begin{tabular}{lccc} \hline\hline & $z$ & $\alpha$ & $\log\mathcal{M}^*/\mathcal{M}_\odot$ \\ \hline D1T1 & $0.10-0.35$ & $-0.33^{+0.46}_{-0.37}$ & $10.60^{+0.15}_{-0.11}$\\ & $0.35-0.50$ & $-0.17^{+0.71}_{-0.55}$ & $10.72^{+0.32}_{-0.21}$\\ & $0.50-0.70$ & $-0.90^{+0.85}_{-0.60}$ & $10.93^{+0.26}_{-0.25}$\\ & $0.70-1.00$ & $[-0.90]$ & $10.88^{+0.10}_{-0.10}$\\ \hline D1T2 & $0.10-0.35$ & $-1.41^{+0.11}_{-0.07}$ & $10.71^{+0.18}_{-0.23}$\\ & $0.35-0.50$ & $-1.51^{+0.32}_{-0.25}$ & $10.81^{+0.51}_{-0.36}$\\ & $0.50-0.70$ & $-1.45^{+0.52}_{-0.36}$ & $10.70^{+0.28}_{-0.26}$\\ & $0.70-1.00$ & $[-1.45]$ & $10.59^{+0.06}_{-0.08}$\\ \hline D4T1 & $0.10-0.35$ & $-0.03^{+0.46}_{-0.32}$ & $10.68^{+0.18}_{-0.21}$\\ & $0.35-0.50$ & $-0.23^{+0.59}_{-0.45}$ & $10.82^{+0.22}_{-0.20}$\\ & $0.50-0.70$ & $-0.28^{+0.65}_{-0.48}$ & $10.87^{+0.15}_{-0.18}$\\ & $0.70-1.00$ & $[-0.28]$ & $10.97^{+0.06}_{-0.06}$\\ \hline D4T2 & $0.10-0.35$ & $-1.39^{+0.13}_{-0.09}$ & $10.92^{+0.32}_{-0.38}$\\ & $0.35-0.50$ & $-1.43^{+0.19}_{-0.14}$ & $11.02^{+0.22}_{-0.45}$\\ & $0.50-0.70$ & $[-1.43]$ & $10.81^{+0.13}_{-0.15}$\\ & $0.70-1.00$ & $[-1.43]$ & $10.75^{+0.07}_{-0.07}$\\ \hline \end{tabular} \label{tab:mfenvt} \end{table} To examine the differential contribution of various galaxy types in different environments, we can compute the evolution of the ratio of the GSMF of a given galaxy class to the global GSMFs in each environment. In Fig.~\ref{fig:frac_mf_delta_type}, we show the ratio of the $1/V_{\rm max}$ estimates of the early-type GSMF in over- and underdense regions for the two extreme redshift bins. The trend for late-type galaxies is the opposite of that shown in the figure. The error bars were computed using a Monte Carlo simulation, assuming a Gaussian distribution of errors with rms derived from the Poissonian error bars of the $1/V_{\rm max}$ method.
The $16$th and $84$th percentiles of the ratio distribution over the $100\,000$ iterations are reported in the plot as error bars. The vertical dashed line shows the value of $\mathcal{M}_{\rm min}$ for early-type galaxies in the redshift bin $z=0.7-1.0$. Despite the large error bars, Fig.~\ref{fig:frac_mf_delta_type} illustrates that in the high redshift bin the fractional contributions of photometric early-types to the GSMF in different environments are more or less the same for D1 and D4 at all the masses we can safely study. On the other hand, the fractional contribution is significantly different at low redshift, mainly at intermediate stellar masses ($\log\mathcal{M/M}_\odot \la 10.5$). This trend appears to imply a more rapid growth with time of the fractional contribution of early-type galaxies in high density environments. At intermediate masses, the differences between the two extreme environments are larger: high stellar masses ($\log\mathcal{M/M}_\odot \ga 10.7$) are populated mainly by passive red galaxies in both environments, while at lower masses ($\log\mathcal{M/M}_\odot \la 10$, in the low redshift bins, where it is possible to probe them) the population of late-type/star-forming galaxies dominates in all the environments. In a scenario that is consistent with these data, which indicate there is an increase in early-type galaxies with cosmic time, blue intermediate-mass galaxies are being transformed into more massive red galaxies, after quenching their star formation more efficiently in overdense than in underdense regions. A possible way to quantify this difference in evolutionary speed is by analysing the evolution with redshift of ${\mathcal M}_{\rm cross}$, which represents the mass above which the GSMF is dominated by early-type galaxies. We show this quantity computed from $1/V_{\rm max}$ points in Fig.~\ref{fig:mcrosst} for different photometric types.
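The Monte Carlo error estimate on such fractional contributions can be sketched as follows (a simplified sketch; function name and inputs are ours, and the correlation between the class and the total GSMF is neglected here):

```python
import numpy as np

def fraction_confidence(phi_early, sig_early, phi_total, sig_total,
                        n_iter=100000, seed=0):
    """Sketch of the Monte Carlo error on an early-type fraction:
    perturb the 1/Vmax estimates with Gaussian errors of the quoted rms
    and report the 16th and 84th percentiles of the resulting ratio.
    """
    rng = np.random.default_rng(seed)
    early = phi_early + sig_early * rng.standard_normal(n_iter)
    total = phi_total + sig_total * rng.standard_normal(n_iter)
    return np.percentile(early / total, [16.0, 84.0])
```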
We can see that, while at $z\sim 1$ the $\mathcal{M}_{\rm cross}$ values in low and high density environments were similar, the subsequent evolution produces a significant difference between the two $\mathcal{M}_{\rm cross}$ values. The ratio of $\mathcal{M}_{\rm cross}$ in the highest to lowest redshift bins implies an evolution of a factor $\sim 2$ in low density and $\sim 4.5$ in high density regions. From a different point of view, the plot in Fig.~\ref{fig:mcrosst} indicates that the environment begins to affect the evolution of galaxies at $z\sim 1$, causing, in the lowest redshift bin, a delay of $\sim 2$\,Gyr in underdense relative to overdense regions before the same mix of galaxy types is observed. \begin{figure} \centering \includegraphics[width=\hsize]{bolzonella_zcosmos_fg7.ps} \caption{ ${\mathcal M}_{\rm cross}$ of photometric types in the extreme quartiles D1 and D4. Blue: low-density environments. Red: high-density. The points are located at the median redshift of the early plus late samples and error bars represent the width of the redshift bin and the error in the GSMF ratio from $1/V_{\rm max}$ method. A linear fit to the points is also shown.} \label{fig:mcrosst} \end{figure} \section{Discussion} \label{discuss} \subsection{Comparison with literature data} \label{literature} As mentioned in Sect.~\ref{intro}, a similar analysis of the influence of environment on the evolution of the GSMF of red and blue galaxies was carried out by \citet{Bundy2006} using DEEP2 data. They considered a sample in the redshift range $0.4<z<1.4$, partially overlapping with ours, and definitions of galaxy types and environment that slightly differ from ours; their galaxy types are defined on the basis of the rest-frame colour $U-B$ and their under- and overdense environments are defined with respect to the average local density for the majority of their analysis.
Since, as the authors also state, most of the galaxies belong to regions around the average density, we do not expect to find that the environment has a significant influence on the redshift evolution of galaxies. However, they also considered the extremes of the density field in their Fig.~11, where they present the evolution with redshift of the fractional contribution of red and blue galaxies. We compare our results, obtained using our definitions of environment and galaxy types, with those of \citet{Bundy2006} in Fig.~\ref{fig:frac_bundy}. At low redshift, we plot for reference the results of \citet{Baldry2006}, who used SDSS data divided into density bins and galaxy types separated by means of the colour bimodality. The lines in the plot are derived from their Eq.~10, adopting their highest and lowest density values. The results from the two high-$z$ surveys are in reasonably good agreement. The largest difference is in low density environments in our redshift bin $z=[0.70,1.00]$, but the results are marginally consistent with each other. When we study the evolution of the mass function fractions derived from the three surveys, the main visible trend is the continuous increase with time of the fractional contribution of red/early-type galaxies in all environments, which is an alternative way of observing the build-up of the red sequence and its increasing population at lower stellar masses. The differences between low and high density environments seem to increase towards low redshift, whereas at high redshift the quite large error bars prevent us from drawing robust conclusions, which may also depend on the particular definitions of the samples. \begin{figure} \centering \includegraphics[width=\hsize]{bolzonella_zcosmos_fg8.ps} \caption{ Evolution of the fractional contribution of the early-type/red galaxies to the global MFs in low and high density environments from the surveys SDSS, zCOSMOS, and DEEP2. 
In the low redshift bin, red and blue lines are computed from Eq.~10 of \citet{Baldry2006}, representing the fraction of red galaxies in the highest and lowest environmental densities in their SDSS analysis. In the other redshift bins, red solid lines and filled squares represent the zCOSMOS high-density sample D4, and blue long-dashed lines and filled circles the low-density sample D1. Orange and cyan lines and empty symbols represent the values of the analogous fractions taken from \citet{Bundy2006}. The vertical dashed lines mark $\mathcal{M}_{\rm min}$ in zCOSMOS, and vertical dotted lines represent the $K_s$-band completeness limits in \citet{Bundy2006}. Redshift ranges in brackets refer to the DEEP2 binning. } \label{fig:frac_bundy} \end{figure} \citet{Cooper2010} analysed the colour--density relation in the DEEP2 sample and claimed that the environmental dependence is still present at $z=1$. In contrast to our analysis, they considered the top $10$\% of the high-density sample, using the density field computed at the distance of the 3rd-nearest neighbour in the total flux-limited sample. With this choice, they explored a smaller-scale environment than the one used in the present paper. For instance, they state that the typical distance involved in the computation of their top $5$\% overdensities is about $35\arcsec$ at $z\sim 0.9$, corresponding to a comoving scale of $\sim 0.37\,h^{-1}$\,Mpc. The average scale of our top $5$\% overdensities in our highest redshift bin is $\sim 1.1\,h^{-1}$\,Mpc. Therefore, the results of the two surveys do not necessarily disagree if the environmental mechanism modifying galaxy properties at $z \ga 1$ is mainly effective on small scales. Other studies of the evolution of the GSMFs of galaxies of different types and morphologies are presented in \citet{Pozzetti2010}, \citet{Ilbert2010}, \citet{Bundy2010}, and \citet{Drory2009}, though without directly incorporating the environmental effects. 
They all find that the global GSMF has a bimodal shape, requiring the sum of two Schechter functions, a description that possibly extends to the GSMFs of the single galaxy types, as found by \citet{Drory2009}. These authors interpret the presence of a plateau at $\sim 10^{10}\,\mathcal{M}_\odot$ in blue galaxies as a signature of either a change in star formation efficiency, which is more dramatic at lower masses, or an increase in the galaxy assembly rate at higher masses. At low redshift, the dip appears to move from blue to red galaxies, because blue massive galaxies become red and satellite galaxies undergo environmental quenching. \citet{Bundy2010}, \citet{Ilbert2010}, and \citet{Pozzetti2010} compare results obtained for galaxies classified from rest-frame colours and morphology, finding that the transformation from blue to red colours and from disk-dominated to bulge-dominated morphologies may be due to two or more processes, which are either environmentally driven (strangulation, major or minor merging with varying amounts of gas) or internal (instabilities, gas consumption, morphological quenching, AGN feedback) \citep{Bundy2010}. Any scenario should account for the non-negligible fraction of quiescent disk-dominated galaxies at low masses, and involve processes with different timescales for the shutdown of the star formation and the morphological transformation \citep[e.g.][]{Pozzetti2010}, whereas for massive galaxies the correspondence of red colours and elliptical morphologies should be explained by a single dominant mechanism, probably associated with secular evolution \citep{Oesch2010}. We explore in more detail the differences between morphological and colour transformation in different environments in Sect.~\ref{timescale}. 
\citet{Scodeggio2009} study the rest-frame colours of VVDS galaxies at $0.2<z<1.4$ in environments based on the density contrast on scales of $\sim 8$\,Mpc, and conclude that the segregation of galaxy properties is ultimately the result of the large scale environment, via the mass of the dark matter halo. This conclusion agrees with our findings: from Fig.~\ref{fig:mfenv}, we infer that the large-scale environment sets up the stellar mass distribution and its evolution, which are in turn linked to the mass of the hosting haloes. At low redshift, the bimodality of the GSMF has also been detected: for instance, from the SDSS dataset, \citet{Baldry2006} and \citet{Baldry2008} detect a significant upturn at low stellar masses with respect to the single Schechter function in the global and environment dependent GSMFs. Considering the alternative definition of environment, i.e., galaxy clusters and groups, we also find in the literature signs of an excess of low mass systems, obtained for instance by converting the composite LF of RASS-SDSS clusters by \citet{Popesso2006} to GSMFs under an assumption about the mass-to-light ratio, as done in \citet{Baldry2008}. A steep low mass end is seen for clusters, steeper than the upturn noticed in the field from the SDSS and also, to a lesser extent, than our $\alpha_1$ value in D4 in the low redshift bin. The mechanisms responsible for the bimodal nature of the GSMFs should therefore operate in both the field and high density environments, but in the densest regions they must be able to produce the steepest low mass end. For instance, the same bimodality in the LF can be seen in \citet{Rudnick2009} for SDSS clusters at low redshifts, and in \citet{Banados2010} for member galaxies of Abell 1689 at $z=0.183$. 
Analyses of high redshift clusters \citep[e.g.][]{Poggianti1999,Poggianti2009,Desai2007,SanchezB2009,Simard2009,Patel2009b,Patel2009a,Just2010,Wolf2009,Gallazzi2009,Balogh2007,Balogh2009,Wilman2009,Treu2003} are mainly focused on the buildup of the red sequence and the evolution of the fraction of morphological types, in particular S0 galaxies, linked especially to the peculiar mechanisms acting in these densest environments. In these works, a complex picture, broadly consistent within the uncertainties, is emerging for the evolutionary paths of galaxies, with many mechanisms playing a role, whose relative importance is a function of the mass, environment, and past history of each system. \subsection{The mechanism and timescale of galaxy transformation} \label{timescale} Figures~\ref{fig:frac_mf_delta_type} and \ref{fig:mcrosst} provide some clues about the timescale and mechanism responsible for galaxy quenching in different environments. We have found that the evolution in the high density regions is more rapid than in low density ones, i.e., the rate of transformation into photometric early-types is higher from $z=1$ to low redshifts in overdense regions than in underdense ones. Therefore, some of the mechanisms responsible for quenching the star formation, and thus transforming blue galaxies into passive ones, must be environment dependent. The physical processes operating on galaxies and transforming their colours and/or morphologies can be internally or externally driven and gravitationally or hydrodynamically induced \citep[for reviews see][]{Boselli2006,Treu2003}. Since only a small fraction of the galaxies studied are probably located in rich clusters, we have not considered processes that occur primarily in such very high density environments. 
Unlikely processes are ram pressure stripping, in which the gas of a galaxy moving through a dense inter-galactic medium is stripped and its star formation abruptly truncated, and harassment, i.e., gravitational interactions in high velocity encounters of galaxies, causing morphological transformations and bursts of star formation. Given the typical galaxy velocities and inter-galactic medium densities involved in these processes, they cannot have a significant impact on the results presented in this paper. Post-starburst galaxies have been found in a wide range of environments in DEEP2 \citep{Yan2009} and zCOSMOS \citep{Vergani2010}, indicating that the formation mechanism behind this class of objects, i.e. the shutdown of their star formation, is not a peculiarity of clusters. Viable mechanisms in the field are galaxy--galaxy merging and starvation. Major merging processes can trigger AGN activity and quench the star formation: the fraction of pairs, related to the rate of merging, may depend on environment. Merging of galaxies in the densest regions is impeded by the high relative velocities, but at high redshift, presumably $z \sim 1$, this process was more common, and thus the merging rate was higher \citep{deRavel2009}. In this context, at high redshift merging processes produced a shift in the GSMF towards higher masses, because of the depletion at low masses and the consequent increase in early-type galaxies at high masses. At later times, the decrease in the merging rate ensures that the high mass end remains almost constant, while the acquisition of new galaxies from the field, by means of the hierarchical growth of the structures, can produce the observed shape of the D4 GSMF at low redshift in Figs.~\ref{fig:mfenv} and \ref{fig:mf_delta_type}, the dip at intermediate masses, and the high contribution of massive early-type galaxies. 
To explain the evolution in the density of massive elliptical galaxies, \citet{Ilbert2010} concluded that the rate of wet mergers should steeply decline at $z<1$. Limits on the contribution of major merging as a primary mechanism can be drawn from the evolution of the pair fraction \citep[][who found that 20\% of the stellar mass in present day galaxies with $\log\mathcal{M}/\mathcal{M}_\odot>9.5$ has been accreted by major merging events since $z\sim 1$]{deRavel2009} and from the GSMF \citep[][who derived an average number of total mergers $\sim 0.16$\,gal$^{-1}$\,Gyr$^{-1}$ since $z\sim 1$ for the global population, derived from the GSMF evolved according to the mass growth due to star formation]{Pozzetti2010}. In addition, strangulation (also referred to as starvation or suffocation), consisting of halo-gas stripping, can play a role: when the diffuse warm and hot gas reservoir in the galaxy corona is stripped because of gravitational interaction with low-mass group-size haloes or with cluster haloes at large distances from the core, the gas can no longer be accreted and the galaxy exhausts its remaining cold gas through star formation, on a timescale that can be nearly instantaneous or slow, i.e., up to a few Gyr, depending on the mass of the galaxy \citep{Wolf2009}. The result is the suppression of the star formation, not immediately followed by a morphological transformation, explaining the possible presence of red spirals, even if the fading of the disc can lead to an earlier-type morphological classification. 
This mechanism alone is not able to reproduce the shape of the D4 GSMF and the contribution of the different galaxy types, since it predicts a large number of red galaxies at low masses \citep[for the difficulties of the starvation scenario see also][]{Bundy2010}, as demonstrated by comparing observed data with simulations in Sect.~\ref{mock}; nonetheless, this mechanism may be effective in the group environment, where galaxies are undergoing morphological transformations and suppression of their star formation \citep[e.g.][]{Wilman2009}. \begin{figure} \centering \includegraphics[width=\hsize]{bolzonella_zcosmos_fg9.ps} \caption{ Like Fig.~\ref{fig:mcrosst}, with ${\mathcal M}_{\rm cross}$ computed for morphological types.} \label{fig:mcrossm} \end{figure} To help identify the most likely transformation mechanisms, we also computed GSMFs for samples divided following the morphological classification by \citet{Scarlata2007}, as defined in Sect.~\ref{class}. In Fig.~\ref{fig:mcrossm}, we show the values of ${\mathcal M}_{\rm cross}$ in the four considered redshift bins. This plot appears to differ from the analogous plot obtained for samples divided on the basis of photometric types: the values of ${\mathcal M}_{\rm cross}$ are higher and their evolution seems insensitive to the environment from $z\sim 1$ to $z\sim 0.4$. The higher values of ${\mathcal M}_{\rm cross}$ for the morphological classification suggest that the dynamical transformation into elliptical galaxies follows the quenching of their star formation. It is possible that the transformations of morphology occur on longer timescales than those of colour \citep[e.g.][]{Capak2007b,Smith2005,Bamford2009,Wolf2009}, as inferred also from the study of post-starburst galaxies selected in the same zCOSMOS sample (Vergani et al. \citeyear{Vergani2010}) or by considering different evolutionary paths \citep{Skibba2009}. 
A more comprehensive study should be performed to investigate this point, since the larger number of photometric early-types than morphological ones may also be caused by a relatively large fraction of dust-reddened spiral galaxies. To evaluate the uncertainties related to this comparison of photometric and morphological types, we altered the threshold between elliptical galaxies and morphological late-types: we divided the morphological class $2.1$, which should still represent bulge-dominated galaxies, according to the observed $B-z$ colour: the evolutionary track of the $B-z$ colour of an Sab galaxy \citep{Coleman1980} provides a criterion to separate quiescent and star-forming galaxies in good agreement with the spectral classification, as shown in \citet{Mignoli2009}. With this separation, the values of the morphological ${\mathcal M}_{\rm cross}$ become consistent with the photometric values, both in terms of the absolute value and the trend with redshift. Both mechanisms, gas stripping and interactions, likely operate to produce the suppression of the star formation and the morphological transformation. These processes act on different timescales and have different efficiencies as a function of galaxy mass and environment, but it is still difficult to draw firm conclusions, because of the uncertainties associated with the galaxy classification. \subsection{Comparison with mock catalogues} \label{mock} \begin{figure} \centering \includegraphics[width=\hsize]{bolzonella_zcosmos_fg10.ps} \caption{ GSMFs derived with the $1/V_{\rm max}$ method in mock catalogues (D1 environment: blue dotted lines, D4: red dashed lines, both representing the average obtained from $12$ mocks) compared to the observed ones (points) in the D1 and D4 environments (blue circles and red squares, respectively). 
The functions are rescaled to arbitrary units, so as to maintain the same integral of the GSMFs in the overdense regions at masses larger than $10^{10.5}\,\mathcal{M}_\odot$ in the observed and mock samples.} \label{fig:mf_mock_env} \end{figure} \begin{figure*} \centering \includegraphics[width=0.48\hsize]{bolzonella_zcosmos_fg11a.ps} \includegraphics[width=0.48\hsize]{bolzonella_zcosmos_fg11b.ps} \caption{ Left: quartile D1 (low density environment). Right: quartile D4 (high density). Points refer to the observed quantities, lines to the GSMFs derived from the mock catalogues. Black points and solid lines: GSMFs relative to the considered density quartile, renormalised to the same integral at $\log \mathcal{M}/\mathcal{M}_\odot>10.5$. Red triangles and dotted lines: galaxies with $B-I>1.15$. Blue squares and dashed lines: galaxies with $B-I\le1.15$. } \label{fig:mf_mock_env_typ} \end{figure*} We used $12$ COSMOS mock lightcones \citep{Kitzbichler2007} based on the Millennium N-body simulation \citep{Springel2005}. The galaxy population of the lightcones was generated by means of semi-analytical recipes \citep{Croton2006,DeLucia2007}. The final catalogues are the same as those described in \citet{Knobel2009}, who used them to test the group finder algorithm. We used the 5NN flux-limited $1+\delta$ estimate of the environment and the rest-frame colour $B-I$ to differentiate early- from late-type galaxies, so as to be able to compare the same quantities in observations and mocks. Even though at the lowest stellar masses the mock catalogues may be affected by colour incompleteness, this does not affect our analysis, since we limit our comparison to the higher masses probed in zCOSMOS. In Fig.~\ref{fig:mf_mock_env}, we compare the high and low-density GSMFs in both the observed sample and the $12$ averaged mock catalogues. 
To avoid normalisation uncertainties caused by cosmic variance \citep{Meneux2009}, we decided to renormalise the GSMFs in such a way that the observed and mock GSMFs of the overdense regions have the same integral value at masses higher than $10^{10.5}\,\mathcal{M}_\odot$ in all the redshift bins. The most evident characteristic of the observed GSMFs, namely the bimodality of the GSMFs in overdense regions at low redshift, is not reproduced by semi-analytical models. To explore the reason for this failure of semi-analytical models (SAMs) in reproducing the observations, we separated red and blue galaxies adopting the threshold $B-I=1.15$, which corresponds to the location of the dip of the colour bimodality, obtaining the GSMFs shown in Fig.~\ref{fig:mf_mock_env_typ}. For the low density environments, SAMs produce too many blue galaxies at intermediate and especially at high masses in all the redshift bins, and consequently too low a density of red galaxies, in particular at $10^{10}-10^{11}\,\mathcal{M}_\odot$. This can be ascribed to an inefficient suppression of the star formation in the absence of external drivers, as in the case of sparse environments. \citet{Weinmann2006} also find too high a blue fraction of central galaxies: they explain this discrepancy by an improper modelling of dust extinction, which is very likely underestimated for starburst galaxies, and of AGN feedback, which may be more effective above a given halo mass. A threshold halo mass above which the star formation is naturally shut down, as proposed by \citet{Cattaneo2008}, may also alleviate the discrepancy. In the high density regions, the most visible difference is the excess of low and intermediate mass red galaxies ($<10^{10}\,\mathcal{M}_\odot$) in SAMs with respect to the observed fractions in the lowest redshift bin, where the probed mass range is wider. 
This last comparison reflects the problem of the overquenching of satellites in the SAMs we used, which produces too many small red galaxies: an overly efficient strangulation causes an instantaneous shutdown of the star formation when a galaxy enters a halo \citep[see][for a description of the problem and some attempts to solve it]{Weinmann2006,Weinmann2010,Font2008,Kang2008,Kimm2009,Fontanot2009}. \section{Conclusions} We have computed GSMFs in different environments and studied the relative contributions of different galaxy types to these GSMFs, as well as their evolution. Our main results are: \begin{enumerate} \item The bimodality seen in the global GSMF \citep{Pozzetti2010} up to $z\sim 0.5$ is considerably more pronounced in high density environments; a sum of two Schechter functions is thus required to reproduce the observed non-parametric estimates of the GSMF. \item The bimodality is due to the different relative contributions of early- and late-type galaxies in different environments, each contribution being reasonably well represented by a single Schechter function. \item The shapes of the GSMFs of different galaxy types in different environments and their evolution with time are very similar, i.e., the differences in the global GSMFs may be ascribed to the evolution in the normalisation of the GSMFs of different galaxy types in the extreme environments we have considered. \item The evolution with time of the fractional contributions of different galaxy types to the environmental GSMF appears to be a function of the overdensity in which the galaxies live, and is consistent with a higher rate of downsizing with time in overdense regions. 
\item The evolution of the crossover mass for photometric late- and early-type galaxies suggests a faster transition rate in overdense regions, with galaxies in low-density regions experiencing the same evolutionary path as the analogous galaxies in overdense environments with a delay of $\sim 2$\,Gyr accumulated between $z\sim 1$ and $z\sim 0.2$. \item The environment starts to play a significant role in the evolution of galaxies at $z\la 1$. \item The timescales for the quenching of star formation and for the morphological transformation differ in different environments; tentatively, the crossover mass computed for the morphological classification suggests that the morphological transformation is slower than the colour change. \item SAMs fail in different ways as a function of the environment: GSMFs computed from mock catalogues underestimate the number of red massive galaxies in low density environments, probably because of an inefficient internal mechanism suppressing the star formation at relatively high masses; in high density regimes the overquenching problem of satellites in SAMs causes an excess of red galaxies at intermediate and low masses. \end{enumerate} As a consequence of the remarkable difference in the shape of the GSMFs in under- and overdense regions, we can infer that all the galaxy properties depending on mass will also depend on environment by virtue of the GSMF environmental dependence, as shown in the case of the colour--density and morphology--density relations \citep{Cucciati2010,Tasca2009} and of the AGN fraction \citep{Silverman2009}. The nature versus nurture debate is unresolvable, because the mass of a galaxy, often thought to be its nature, is a strong function of the environment. A more relevant issue is the understanding of the mechanisms producing the observed evolution of galaxies and their transition from late- to early-type in different environments. 
Future investigations will also concern the impact of merging in different environments (\citealp{deRavel2010}; Kampczyk et al. \citeyear{Kampczyk2010}) and the role of the dark-matter halo mass functions in different environments \citep[e.g.][]{Abbas2007} in determining the galaxy formation efficiency. \begin{acknowledgements} MB wishes to thank Preethi Nair, Alexis Finoguenov, and Ramin Skibba for useful discussions, comments, and suggestions. We thank the anonymous referee for the constructive first report, which helped improve the paper. MB is grateful to the editor, Fran\c{c}oise Combes, for her kind support. This work was partly supported by an INAF contract PRIN/2007/1.06.10.08 and an ASI grant ASI/COFIS/WP3110 I/026/07/0. \end{acknowledgements} \bibliographystyle{aa}
\section{Introduction} \label{s1} The classical trajectories $x(t)$ of the family of complex $\mathcal P\mathcal T$-symmetric Hamiltonians \begin{equation} H=p^2+x^2(ix)^\epsilon\qquad(\epsilon\geq0) \label{e1} \end{equation} have been examined in detail \cite{r1}. One can plot graphs of these trajectories by solving numerically the system of Hamilton's differential equations \cite{dargweb} \begin{equation} {\dot x}=\frac{\partial H}{\partial p}=2p,\quad {\dot p}=-\frac{\partial H}{\partial x}=-(2+\epsilon)x(ix)^\epsilon \label{e2} \end{equation} for a given set of initial conditions $x(0)$, $p(0)$. Since $x(0)$ and $p(0)$ are not necessarily real numbers and the differential equations (\ref{e2}) are complex, the classical trajectories are curves in the complex-$x$ plane. It is known \cite{r1} that for $\epsilon\geq0$ nearly all trajectories are closed curves. (When $\epsilon$ is a positive integer, it is possible for trajectories that originate at some of the turning points to run off to infinity. However, we are not interested here in these isolated singular cases. When $\epsilon<0$, all trajectories are open curves.) If $\epsilon$ is noninteger, there is a branch cut in the complex-$x$ plane, and we take this cut to run from $0$ to $\infty$ along the positive-imaginary axis. Thus, it is possible for a closed classical trajectory to visit many sheets of the Riemann surface before returning to its starting point. The non-Hermitian Hamiltonians (\ref{e1}) are remarkable because when they are quantized, their spectra are entirely real and positive \cite{r2,r3}. Moreover, these Hamiltonians specify a unitary time evolution \cite{r4} of the vectors in the associated Hilbert space. Thus, it is important to understand the nature of the complex classical systems underlying these quantum systems. 
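As an illustration, the system (\ref{e2}) can be integrated directly in the complex-$x$ plane. The sketch below is a minimal fixed-step RK4 integrator (the function name and step sizes are our own choices, not taken from Ref.~\cite{r1}); it assumes the principal branch of the complex power and therefore ignores the branch-cut bookkeeping required for noninteger $\epsilon$, which suffices for the $\epsilon=0$ orbits, closed ellipses of period $\pi$:

```python
import numpy as np

def trajectory(x0, eps, dt, n_steps, sign=+1):
    """Integrate Hamilton's equations (2), xdot = 2p and
    pdot = -(2+eps) x (ix)^eps, with a fixed-step RK4 in the
    complex-x plane.  The initial momentum is placed on the
    energy surface H = p^2 + x^2 (ix)^eps = 1; the principal
    branch of the complex power is assumed throughout."""
    def f(y):
        x, p = y
        return np.array([2.0 * p, -(2.0 + eps) * x * (1j * x) ** eps])

    p0 = sign * np.sqrt(1.0 - x0 ** 2 * (1j * x0) ** eps + 0j)
    y = np.array([x0 + 0j, p0])
    xs = [y[0]]
    for _ in range(n_steps):
        k1 = f(y)
        k2 = f(y + 0.5 * dt * k1)
        k3 = f(y + 0.5 * dt * k2)
        k4 = f(y + dt * k3)
        y = y + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
        xs.append(y[0])
    return np.array(xs)

# eps = 0 (harmonic oscillator): starting outside the turning points,
# at x(0) = 2, gives a closed ellipse traversed with period pi.
orbit = trajectory(2.0, 0.0, dt=np.pi / 4000, n_steps=4000)
```

For this $\epsilon=0$ orbit the computed points close on themselves after time $\pi$ and lie on an ellipse whose foci are the turning points $x=\pm1$; for $\epsilon>0$ a production code would additionally have to track which sheet of the Riemann surface the trajectory is on.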
Several studies \cite{r5,r6} of the classical trajectories $x(t)$ of complex Hamiltonians were done prior to the work in Ref.~\cite{r1} and from all these studies many features of the complex trajectories of $\mathcal P\mathcal T$-symmetric Hamiltonians (\ref{e1}) are known. However, some of the conclusions of the earlier work are wrong. For example, when $\epsilon\geq0$, the $\mathcal P\mathcal T$ symmetry of the quantum-mechanical theory is unbroken \cite{r4}, and based on these studies it was believed that all classical orbits are $\mathcal P\mathcal T$ symmetric. [We say that an orbit is $\mathcal P\mathcal T$ {\it symmetric} if the orbit remains unchanged upon replacing $x(t)$ by $-x^*(-t)$. Such an orbit has mirror symmetry under reflection about the imaginary axis on the principal sheet of the Riemann surface.] While the equations of motion (\ref{e2}) exhibit $\mathcal P\mathcal T$ symmetry, it is not required that the solutions to these equations also exhibit $\mathcal P \mathcal T$ symmetry, but in all previous numerical studies only $\mathcal P\mathcal T$-symmetric orbits were found. We will show in this paper that there are also rare trajectories that are {\it not} $\mathcal P\mathcal T$ symmetric, and we will argue that these new kinds of orbits explain the strange fine-structure behavior of the periods of the orbits that was first reported in Ref.~\cite{r1}. This paper is organized as follows: In Sec.~\ref{s2} we review briefly the earlier work on classical trajectories. Then, in Sec.~\ref{s3} we present new findings that help us to grasp the underlying reasons for the appearance of the elaborate and intricate structures of the classical trajectories that were described in Ref.~\cite{r1}. In Sec.~\ref{s4} we make some concluding remarks. \section{Brief summary of previous numerical studies} \label{s2} To construct the classical trajectories, we note that the Hamiltonian in (\ref{e1}) is a constant of the motion. 
This constant (the energy $E$) may be chosen to be 1 because if $E$ were not 1, we could rescale $x$ and $t$ to make $E=1$. Because $p(t)$ is the time derivative of $\textstyle{\frac{1}{2}} x(t)$ [see (\ref{e2})], the trajectory $x(t)$ satisfies a first-order differential equation whose solution is determined by the initial condition $x(0)$ and the sign of $\dot x(0)$. The simplest version of the Hamiltonian (\ref{e1}) is the harmonic oscillator, which is obtained by setting $\epsilon=0$. For the harmonic oscillator the turning points, which are the solutions to $x^2=1$, lie at $x=\pm1$. If we choose $x(0)$ to lie between these turning points, then the classical trajectory oscillates between the turning points with period $\pi$. However, while the harmonic-oscillator Hamiltonian is real, it still has complex classical trajectories. To generate one of these trajectories, we choose a value for $x(0)$ that does not lie between $\pm1$ and find that the resulting trajectories are ellipses in the complex plane \cite{r1}. The foci of these ellipses are the turning points at $x=\pm1$ \cite{r1}. The period for all of these closed orbits is $\pi$. The constancy of the period is due to the Cauchy integral theorem applied to the contour integral that represents the period. The (closed) contour of integration encircles the square-root branch cut that joins the turning points. As $\epsilon$ increases from 0, the turning points at $x=1$ (and at $x=-1$) rotate downward and clockwise (anticlockwise) into the complex-$x$ plane. These turning points are solutions to the equation $1+(ix)^{2+\epsilon}=0$. When $\epsilon\neq2$, this equation has many solutions that all lie on the unit circle and have the form \begin{equation} x=\exp\left(i\pi\frac{4N-\epsilon}{4+2\epsilon}\right)\quad(N~{\rm integer}). \label{e3} \end{equation} (This notation differs slightly from that used in Ref.~\cite{r1}.) 
These turning points occur in $\mathcal P\mathcal T$-symmetric pairs (pairs that are symmetric when reflected through the imaginary axis) corresponding to the $N$ values $(N=-1,~N=0)$, $(N=-2,~N=1)$, $(N=-3,~N=2)$, $(N=-4,~N=3)$, and so on. We label these pairs by the integer $K$ ($K=0,~1,~2,~3,~\ldots$) so that the $K$th pair corresponds to $(N=-K-1,~N=K)$. The pair of turning points on the real-$x$ axis for $\epsilon=0$ deforms continuously into the $K=0$ pair of turning points when $\epsilon>0$. When $\epsilon$ is rational, there are a finite number of turning points in the complex-$x$ Riemann surface. For example, when $\epsilon=\frac{12}{5}$, there are 5 sheets in the Riemann surface and 11 pairs of turning points. The $K=0$ pair of turning points are labeled $N=-1$ and $N=0$, the $K=1$ pair are labeled $N=-2$ and $N=1$, and so on. The last ($K=10$) pair of turning points are labeled $N=-11$ and $N=10$. These turning points are shown in Fig.~\ref{fig1}. \begin{figure*}[t!] \vspace{5.0in} \special{psfile=fig1.ps angle=0 hoffset=88 voffset=-12 hscale=59 vscale=59} \caption{Locations of the turning points for $\epsilon=\frac{12}{5}$. There are 11 $\mathcal P\mathcal T$-symmetric pairs of turning points, with each pair being mirror images under reflection through the imaginary-$x$ axis on the principal sheet. All 22 turning points lie on the unit circle on a five-sheeted Riemann surface, where the sheets are joined by cuts on the positive-imaginary axis.} \label{fig1} \end{figure*} As $\epsilon$ increases from 0, the elliptical complex trajectories of the harmonic oscillator begin to distort. However, the trajectories remain closed and periodic except for special singular trajectories that run off to complex infinity. These singular trajectories only occur when $\epsilon$ is an integer. 
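The properties of the turning points (\ref{e3}) are easy to check numerically. The short sketch below, for $\epsilon=\frac{12}{5}$, confirms that they lie on the unit circle and form $\mathcal P\mathcal T$-symmetric pairs; because the numerical power uses the principal branch, the defining equation $1+(ix)^{2+\epsilon}=0$ can be verified directly only for the principal-sheet $K=0$ pair:

```python
import cmath

def turning_point(N, eps):
    """Turning point of Eq. (3): x = exp(i pi (4N - eps) / (4 + 2 eps))."""
    return cmath.exp(1j * cmath.pi * (4 * N - eps) / (4 + 2 * eps))

eps = 12 / 5

# All 22 turning points (N = -11, ..., 10) lie on the unit circle ...
points = [turning_point(N, eps) for N in range(-11, 11)]

# ... and the K = 0 pair (N = 0 and N = -1) lies on the principal sheet,
# where the principal-branch power satisfies 1 + (ix)^(2+eps) = 0.
residuals = [abs(1 + (1j * turning_point(N, eps)) ** (2 + eps))
             for N in (0, -1)]
```

One can also verify the $\mathcal P\mathcal T$ pairing directly: $x_{N=0}=-x_{N=-1}^*$, i.e., the two principal-sheet turning points are mirror images through the imaginary axis.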
All of the orbits discussed in Ref.~\cite{r1} are $\mathcal P\mathcal T$ symmetric, and it was firmly believed that all closed periodic orbits are $\mathcal P\mathcal T$ symmetric. (We will see that this is not so, and that non-$\mathcal P\mathcal T$-symmetric orbits are crucial in understanding the observed rapid variation in the periods of the complex orbits as $\epsilon$ varies slowly.) In Ref.~\cite{r1} many complex trajectories $x(t)$ were examined, some having a rich topological structure. Some of these trajectories visit many sheets of the Riemann surface. The classical orbits exhibit fine structure that is exquisitely sensitive to the value of $\epsilon$. Small variations in $\epsilon$ can cause huge changes in the topology and in the periods of the closed orbits. Depending on the value of $\epsilon$, there are orbits having short periods as well as orbits having long and possibly arbitrarily long periods (see Fig.~\ref{fig2}). \begin{figure*}[t!] \vspace{5.25in} \special{psfile=fig2.ps angle=0 hoffset=-4 voffset=-12 hscale=54 vscale=54} \caption{Period of a classical trajectory beginning at the $N=1$ turning point in the complex-$x$ plane. The period is plotted as a function of $\epsilon$. The period decreases smoothly for $0\leq\epsilon<1$ (Region I). However, when $1\leq \epsilon\leq4$ (Region II), the period becomes a rapidly varying and noisy function of $\epsilon$. For $\epsilon>4$ (Region III) the period is once again a smoothly decaying function of $\epsilon$. Region II contains short subintervals where the period is a small and smoothly varying function of $\epsilon$. At the edges of these subintervals the period suddenly becomes extremely long. Detailed numerical analysis shows that the edges of the subintervals lie at special rational values of $\epsilon$. Some of these special rational values of $\epsilon$ are indicated by vertical line segments that cross the horizontal axis. 
At these rational values the orbit does not reach the $N=-2$ turning point and the $\mathcal P\mathcal T$ symmetry of the classical orbit is spontaneously broken.} \label{fig2} \end{figure*} Figure~\ref{fig2} delineates three regions of $\epsilon$ for which an orbit that begins at the $N=1$ turning point exhibits a specific kind of behavior. When $0 \leq\epsilon\leq1$ (Region I), the period is a smooth decreasing function of $\epsilon$; when $1<\epsilon\leq4$ (Region II), the period is a rapidly varying and choppy function of $\epsilon$; when $4<\epsilon$ (Region III), the period is once again a smooth and decreasing function of $\epsilon$. For some values of $\epsilon$ in Region II the period is extremely long. Thus, it is difficult to see the behavior of the period in Regions I and III in Fig.~\ref{fig2}. We have therefore plotted in Fig.~\ref{fig3} the period for these slowly varying regions. \begin{figure*}[t!] \vspace{2.85in} \special{psfile=fig3.ps angle=0 hoffset=-2 voffset=-11 hscale=46 vscale=46} \caption{Period of a classical trajectory joining the $K=1$ pair of turning points for $\epsilon$ in Regions I and III. (See Fig.~\ref{fig2}.) The orbits in these regions are all $\mathcal P\mathcal T$ symmetric.} \label{fig3} \end{figure*} For a trajectory beginning at the $N=2$ turning point, the period as a function of $\epsilon$ again exhibits these three types of behaviors. The period decreases smoothly for $0\leq\epsilon<\textstyle{\frac{1}{2}}$ (Region I). When $\textstyle{\frac{1}{2}}\leq\epsilon \leq8$ (Region II), the period becomes a rapidly varying and noisy function of $\epsilon$. When $\epsilon>8$ (Region III) the period is once again a smoothly decaying function of $\epsilon$. These behaviors are shown in Fig.~\ref{fig4}. Again, because it is difficult to see the dependence of the period in Regions I and III when Region II is included, we display the period for $K=2$ for $\epsilon$ in Regions I and III in Fig.~\ref{fig5}. 
The trajectory terminates at the $N=-3$ turning point except when $\mathcal P\mathcal T$ symmetry is spontaneously broken. Broken-symmetry orbits occur only at isolated points in Region II. \begin{figure*}[t!] \vspace{5.25in} \special{psfile=fig4.ps angle=0 hoffset=-3 voffset=-13 hscale=54 vscale=54} \caption{Period of a classical trajectory joining (except when $\mathcal P\mathcal T$ symmetry is broken) the $K=2$ pair of turning points. The period is plotted as a function of $\epsilon$. As in the $K=1$ case shown in Fig.~\ref{fig2}, there are three regions. When $0\leq\epsilon\leq\textstyle{\frac{1}{2}}$ (Region I), the period is a smooth decreasing function of $\epsilon$; when $\textstyle{\frac{1}{2}}<\epsilon\leq8$ (Region II), the period is a rapidly varying and choppy function of $\epsilon$; when $8<\epsilon$ (Region III), the period is again a smooth and decreasing function of $\epsilon$.} \label{fig4} \end{figure*} \begin{figure*}[t!] \vspace{2.85in} \special{psfile=fig5.ps angle=0 hoffset=-2 voffset=-11 hscale=46 vscale=46} \caption{Period of a classical trajectory joining the $K=2$ pair of turning points in the complex-$x$ plane for $\epsilon$ in Regions I and III. (See Fig.~\ref{fig4}.)} \label{fig5} \end{figure*} Figures \ref{fig2} -- \ref{fig5} illustrate a general pattern that holds for all $K$. For classical orbits that oscillate between the $K$th pair of turning points, there are always three regions. The domain of Region I is $0\leq\epsilon \leq\frac{1}{K}$, the domain of Region II is $\frac{1}{K}<\epsilon<4K$, and the domain of Region III is $4K<\epsilon$. As $\epsilon$ varies, the turning points move in a characteristic fashion for each of these three regions (see Fig.~\ref{fig6}). When $\epsilon=0$, the turning points lie on the real axis. As $\epsilon$ increases, the turning points rotate into the complex-$x$ plane. 
Just as $\epsilon$ reaches the upper edge of Region I, the turning points rotate through an angle of $\frac{\pi}{2}$ and now lie on the imaginary axis. As $\epsilon$ continues to increase, the turning points continue to rotate around $x=0$, and may encircle the origin many times. Just as $\epsilon$ reaches the upper boundary of Region II, the turning points again lie on the real axis. Finally, as $\epsilon$ moves from the lower edge to the upper edge of Region III ($\epsilon=\infty$), the turning points again rotate through an angle of $\frac{\pi}{2}$ and lie on the negative imaginary-$x$ axis. \begin{figure*}[t!] \vspace{4.85in} \special{psfile=fig6.ps angle=0 hoffset=-3 voffset=-11 hscale=56 vscale=56} \caption{Locations of the turning points in the complex-$x$ plane as $\epsilon$ increases from $0$ to $\infty$. At $\epsilon=0$ the turning points lie on the real-$x$ axis. As $\epsilon$ increases through Region I the turning points rotate through an angle of $\frac{\pi}{2}$ and end up on the imaginary-$x$ axis. As $\epsilon$ increases through Region II, the turning points rotate around the origin and wind up once again on the real axis. Finally, as $\epsilon$ passes through Region III, the turning points again rotate through an angle of $\frac{\pi}{2}$ and finish up on the negative imaginary-$x$ axis.} \label{fig6} \end{figure*} In general, the period of any classical orbit depends on the specific pairs of turning points that are enclosed by the orbit and on the number of times that the orbit encircles each pair. As explained in Ref.~\cite{r1}, any orbit can be deformed to a much simpler orbit of exactly the same period. This simpler orbit connects two turning points and oscillates between them rather than encircling them. 
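The rotation pattern of the $K$th pair through the three regions can be checked against the same assumed angle formula $\theta_K(\epsilon)=\pi(4K-\epsilon)/(4+2\epsilon)$ [our reading of (\ref{e3})]: in projection, an integer multiple of $\pi$ means the real axis and an odd multiple of $\frac{\pi}{2}$ means the imaginary axis:

```python
from fractions import Fraction

def proj_angle(K, eps):
    """Projected angle (units of pi, reduced mod 2) of the turning point
    labeled N = K, using the assumed formula
    theta_N = pi (4N - eps) / (4 + 2 eps)."""
    return ((Fraction(4 * K) - eps) / (4 + 2 * eps)) % 2

def axis(t):
    """A projected angle lies on the real axis if it is an integer multiple
    of pi, and on the imaginary axis if it is an odd multiple of pi/2."""
    return {1: "real", 2: "imaginary"}.get(t.denominator, "generic")

for K in range(1, 6):
    assert axis(proj_angle(K, Fraction(0))) == "real"          # eps = 0
    assert axis(proj_angle(K, Fraction(1, K))) == "imaginary"  # end of Region I
    assert axis(proj_angle(K, Fraction(4 * K))) == "real"      # end of Region II
    # Through Region III the pair heads for the negative imaginary axis:
    t = (Fraction(4 * K) - 10**9) / (4 + 2 * 10**9)
    assert abs(t + Fraction(1, 2)) < Fraction(1, 10**6)
```

The exact arithmetic ($\theta_K(1/K)/\pi=(2K-1)/2$ and $\theta_K(4K)=0$) reproduces the region boundaries $\epsilon=\frac{1}{K}$ and $\epsilon=4K$ quoted above.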
For the elementary case of orbits that enclose only the $K=0$ pair of turning points, the formula for the period of the closed orbit is \begin{eqnarray} T(\epsilon)=2\sqrt{\pi}\frac{\Gamma\left(\frac{3+\epsilon}{2+\epsilon}\right)}{\Gamma\left(\frac{4+\epsilon}{4+2\epsilon}\right)}\cos\left(\frac{\epsilon\pi}{4+2\epsilon}\right). \label{e4} \end{eqnarray} The derivation of (\ref{e4}) is straightforward. The period $T$ is given by a closed contour integral along the trajectory in the complex-$x$ plane. This trajectory encloses the square-root branch cut that joins the $K=0$ pair of turning points. This contour can be deformed into a pair of rays that run from one turning point to the origin and then from the origin to the other turning point. The integral along each ray is easily evaluated as a beta function, which is then written in terms of gamma functions. Equation (\ref{e4}) is valid for all $\epsilon\geq0$. When the classical orbit encloses more than just the $K=0$ pair of turning points, the formula for the period of the orbit becomes more complicated \cite{r1}. In general, there are contributions to the period integral from many enclosed pairs of turning points. We label each such pair by the integer $j$. The general formula for the period of the topological class of classical orbits whose central orbit terminates on the $K$th pair of turning points is \begin{eqnarray} T_K(\epsilon)=2\sqrt{\pi}\frac{\Gamma\left(\frac{3+\epsilon}{2+\epsilon}\right)}{\Gamma\left(\frac{4+\epsilon}{4+2\epsilon}\right)}\sum_{j=0}^{\infty}a_j(K,\epsilon)\left|\cos\left(\frac{(2j+1)\epsilon\pi}{4+2\epsilon}\right)\right|. \label{e6} \end{eqnarray} In this formula the cosines originate from the angular positions of the turning points in (\ref{e3}). The coefficients $a_j(K,\epsilon)$ are all nonnegative integers. The $j$th coefficient is nonzero only if the classical path encloses the $j$th pair of turning points.
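A quick numerical check of (\ref{e4}): at $\epsilon=0$ the Hamiltonian is $H=p^2+x^2$, whose angular frequency is 2 (since $\dot x=2p$, $\dot p=-2x$), so the period must equal $\pi$; and the formula should decrease smoothly for all $\epsilon\geq0$. A sketch using standard-library gamma functions:

```python
from math import gamma, cos, pi, sqrt

def period_K0(eps):
    """Period of the closed orbit enclosing the K = 0 pair of turning
    points, evaluated from the closed-form gamma-function expression."""
    return (2 * sqrt(pi) * gamma((3 + eps) / (2 + eps))
            / gamma((4 + eps) / (4 + 2 * eps))
            * cos(eps * pi / (4 + 2 * eps)))

# At eps = 0 the expression reduces to 2 sqrt(pi) Gamma(3/2) / Gamma(1),
# which is exactly pi -- the harmonic-oscillator period:
assert abs(period_K0(0.0) - pi) < 1e-12

# The K = 0 period decreases smoothly for all eps >= 0 (no Region-II noise):
samples = [period_K0(0.1 * n) for n in range(101)]
assert all(a > b for a, b in zip(samples, samples[1:]))
```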
Each coefficient is an {\it even\/} integer except for the $j=K$ coefficient, which is an odd integer. The coefficients $a_j(K,\epsilon)$ satisfy \begin{eqnarray} \sum_{j=0}^{\infty}a_j(K,\epsilon)=k, \label{e7} \end{eqnarray} where $k$ is the number of times that the central classical path crosses the imaginary axis. Equation (\ref{e7}) truncates the summation in (\ref{e6}) so that it contains a finite number of terms. As we can see in Figs.~\ref{fig2} and \ref{fig4}, for an orbit that oscillates between the $K$th pair of turning points ($K>0$) the classical behavior undergoes abrupt transitions as $\epsilon$ is varied smoothly in Region II. In Region II there are narrow patches in which the period of the orbit varies rapidly, sandwiched between small regions of quiet stability. At the boundaries of the slowly varying and rapidly varying regions there are transitions in the topologies and periods of the classical orbits. We can understand from (\ref{e6}) how there can be rapid variations in the period of the orbit. The summation in (\ref{e6}) can vary rapidly as a function of $\epsilon$ because small changes in $\epsilon$ can cause fluctuations in the topology of the orbit. If the orbit suddenly encloses many more pairs of turning points, the value of the period may fluctuate wildly. [Note that abrupt changes in the periods of the orbits cannot occur for the trajectories joining the $K=0$ pair of turning points because $T(\epsilon)$ in (\ref{e4}) is a smoothly decreasing function for all $\epsilon\geq0$.] \section{Classical Orbits Having Broken $\mathcal P\mathcal T$ Symmetry} \label{s3} We now demonstrate that the abrupt changes in the topology and the periods of the orbits that we observe for $\epsilon$ in Region II are associated with the appearance of orbits having spontaneously broken $\mathcal P\mathcal T$ symmetry. In Region II there are short patches where the period is relatively small and is a slowly varying function of $\epsilon$.
These patches are bounded by special values of $\epsilon$ for which the period of the orbit suddenly becomes extremely long. From our numerical studies of the orbits connecting the $K$th pair of turning points, we believe that there are only a finite number of these special values of $\epsilon$ and that these values of $\epsilon$ are always {\it rational}. Furthermore, we have discovered that at these special rational values of $\epsilon$, the closed orbits are {\it not} $\mathcal P\mathcal T$-symmetric and we say that such orbits exhibit {\it spontaneously broken} $\mathcal P\mathcal T$ symmetry. Some special values of $\epsilon$ at which spontaneously broken $\mathcal P\mathcal T$-symmetric orbits occur are indicated in Figs.~\ref{fig2} and \ref{fig4} by short vertical lines below the horizontal axis. These special values of $\epsilon$ always have the form $\frac{p}{q}$, where $p$ is a multiple of 4 and $q$ is odd. Figure~\ref{fig7} displays an orbit having spontaneously broken $\mathcal P\mathcal T$ symmetry. This orbit occurs when $\epsilon=\frac{4}{5}$. The orbit starts at the $N=2$ turning point, but it never reaches the $\mathcal P\mathcal T$-symmetric turning point $N=-3$. Rather, the orbit terminates when it runs into and is reflected back from the complex conjugate $N=4$ turning point [see (\ref{e3})]. Thus, a broken-$\mathcal P\mathcal T$-symmetric orbit is a failed $\mathcal P\mathcal T$-symmetric orbit. The period of the orbit is short ($T=4.63$). This orbit is not $\mathcal P\mathcal T$ (left-right) symmetric but it does possess complex-conjugate (up-down) symmetry. In general, for a non-$\mathcal P\mathcal T$-symmetric orbit to exist, it must join or encircle a pair of complex-conjugate turning points. \begin{figure*}[t!] \vspace{2.60in} \special{psfile=fig7.ps angle=0 hoffset=109 voffset=-11 hscale=42 vscale=42} \caption{A horseshoe-shaped non-$\mathcal P\mathcal T$-symmetric orbit. 
This orbit is not symmetric with respect to the imaginary axis but it is symmetric with respect to the real axis. The orbit terminates at a complex-conjugate pair of turning points. For this orbit, $\epsilon=\frac{4}{5}$.} \label{fig7} \end{figure*} If we change $\epsilon$ slightly, $\mathcal P\mathcal T$ symmetry is restored and one can only find orbits that are $\mathcal P\mathcal T$ symmetric. For example, if we take $\epsilon=0.805$, we obtain the complicated orbit in Fig.~\ref{fig8}. The period of this orbit is large ($T=173.36$). \begin{figure*}[ht!] \vspace{2.95in} \special{psfile=fig8.ps angle=0 hoffset=109 voffset=-11 hscale=45 vscale=45} \caption{$\mathcal P\mathcal T$-symmetric orbit for $\epsilon=0.805$. This orbit connects the $K=2$ pair of turning points.} \label{fig8} \end{figure*} It is possible to have more than one kind of broken-$\mathcal P\mathcal T$-symmetric orbit for a given rational value of $\epsilon$. For example, in Figs.~\ref{fig9} and \ref{fig10} we display two different non-$\mathcal P\mathcal T$-symmetric closed orbits for $\epsilon=\frac{12}{5}$. In Fig.~\ref{fig9} the horseshoe-shaped orbit terminates on the $N=3$ and $N=7$ turning points. The period of this orbit is $T=5.04$. The more complicated orbit shown in Fig.~\ref{fig10} terminates on the $N=2$ and $N=8$ turning points. The period of this orbit is $T=12.90$. All of these turning points are shown in Fig.~\ref{fig1}\footnote{In Figs.~\ref{fig9} and \ref{fig10} the $N=5$ turning point is shown. We have found that for all orbits having a broken $\mathcal P\mathcal T$ symmetry there exists a special turning point on the real-$x$ axis. This turning point is symmetric under complex conjugation. A classical particle that is released from this turning point falls into the origin, where it stops when it encounters the branch point.}. \begin{figure*}[t!]
\vspace{2.65in} \special{psfile=fig9.ps angle=0 hoffset=109 voffset=-11 hscale=45 vscale=45} \caption{A non-$\mathcal P\mathcal T$-symmetric horseshoe-shaped orbit for $\epsilon=\frac{12} {5}$. This orbit begins at the $N=3$ turning point and terminates at the $N=7$ turning point, so that it fails to reach the $\mathcal P\mathcal T$-symmetric turning point at $N=-4$. These turning points are shown in Fig.~\ref{fig1}. The period of this orbit is $T=5.04$.} \label{fig9} \end{figure*} \begin{figure*}[t!] \vspace{2.75in} \special{psfile=fig10.ps angle=0 hoffset=109 voffset=-10 hscale=45 vscale=45} \caption{A non-$\mathcal P\mathcal T$-symmetric orbit for $\epsilon=\frac{12}{5}$. This orbit, which is more complicated than that shown in Fig.~\ref{fig9}, begins at the $N= 2$ turning point and ends at the $N=8$ turning point before it can reach the $\mathcal P\mathcal T$-symmetric turning point at $N=-3$. These turning points are shown in Fig.~\ref{fig1}. The period of this orbit is $T=12.90$.} \label{fig10} \end{figure*} A non-$\mathcal P\mathcal T$-symmetric orbit that is even more complicated than that in Fig.~\ref{fig10} is shown in Fig.~\ref{fig11}. This orbit connects the $N=2$ and $N=48$ turning points for $\epsilon=\frac{76}{13}$. \begin{figure*}[t!] \vspace{3.30in} \special{psfile=fig11.ps angle=0 hoffset=80 voffset=-9 hscale=30 vscale=30} \caption{A broken-$\mathcal P\mathcal T$-symmetric orbit connecting the $N=2$ and $N=48$ turning points for $\epsilon=\frac{76}{13}$. The period of this orbit is $T= 78.36$.} \label{fig11} \end{figure*} Non-$\mathcal P\mathcal T$-symmetric orbits can encircle a complex-conjugate pair of turning points as well as terminating at them. In Fig.~\ref{fig12} two non-$\mathcal P \mathcal T$-symmetric orbits are shown for $\epsilon=\frac{8}{5}$. One orbit joins the $N=1$ and $N=7$ complex-conjugate pair of turning points. The other orbit encircles these turning points. Both orbits have the same period of $T=9.07$. \begin{figure*}[t!] 
\vspace{3.25in} \special{psfile=fig12.ps angle=0 hoffset=89 voffset=-10 hscale=32 vscale=32} \caption{Two non-$\mathcal P\mathcal T$-symmetric orbits of the same period $T=9.07$. The value of $\epsilon$ is $\frac{8}{5}$. The solid-line orbit terminates at the $N=1$ and $N=7$ turning points, while the dotted-line orbit encircles these turning points.} \label{fig12} \end{figure*} Broken-$\mathcal P\mathcal T$-symmetric orbits can have an elaborate topology. For example, at $\epsilon=\frac{16}{9}$ we find a non-$\mathcal P\mathcal T$-symmetric orbit whose topology is even more complicated than that of the orbit shown in Fig.~\ref{fig11}. This orbit, which is shown in Fig.~\ref{fig13}, is a failed $K=3$ $\mathcal P\mathcal T$-symmetric orbit. It originates at the $N=-4$ turning point, but it never reaches the $\mathcal P\mathcal T$-symmetric $N=3$ turning point. This is because it is reflected back by the complex-conjugate $N=-14$ turning point. Figure~\ref{fig14} shows the $\mathcal P\mathcal T$-symmetric companion of the orbit in Fig.~\ref{fig13}. This orbit begins at the $N=3$ turning point, but is reflected back by the $N=13$ turning point, which is the $\mathcal P\mathcal T$ counterpart of the $N=-14$ turning point. \begin{figure*}[t!] \vspace{3.65in} \special{psfile=fig13.eps angle=0 hoffset=95 voffset=-9 hscale=59 vscale=59} \caption{Non-$\mathcal P\mathcal T$-symmetric orbit for $\epsilon=\frac{16}{9}$. This topologically complicated orbit originates at the $N=-4$ turning point but does not reach the $\mathcal P\mathcal T$-symmetric $N=3$ turning point. Instead, it is reflected back at the complex-conjugate $N=-14$ turning point. The period of this orbit is $T=186.14$.} \label{fig13} \end{figure*} \begin{figure*}[t!] \vspace{3.7in} \special{psfile=fig14.eps angle=0 hoffset=95 voffset=-9 hscale=59 vscale=59} \caption{$\mathcal P\mathcal T$-symmetric reflection of the orbit in Fig.~\ref{fig13}.
This orbit originates at the $N=3$ turning point but is reflected back by the $N=13$ turning point.} \label{fig14} \end{figure*} To show that the orbits in Figs.~\ref{fig13} and \ref{fig14} are $\mathcal P\mathcal T$ reflections, we have plotted both orbits in Fig.~\ref{fig15}. The left-right symmetry is manifest. A definitive demonstration of the symmetry can be given by plotting the complex argument of $x(t)$ as a function of time $t$ for each orbit. This is done in Fig.~\ref{fig16}. \begin{figure*}[t!] \vspace{3.6in} \special{psfile=fig15.eps angle=0 hoffset=100 voffset=-10 hscale=59 vscale=59} \caption{Superposition of the orbits in Figs.~\ref{fig13} and \ref{fig14}. The $\mathcal P\mathcal T$ (left-right) symmetry is exact.} \label{fig15} \end{figure*} \begin{figure*}[t!] \vspace{4.7in} \special{psfile=fig16.eps angle=0 hoffset=24 voffset=-9 hscale=48 vscale=48} \caption{The complex argument of $x(t)$ plotted as a function of $t$ for the non-$\mathcal P\mathcal T$-symmetric orbits in Figs.~\ref{fig13} and \ref{fig14}. The two plots verify that the two orbits are $\mathcal P\mathcal T$-symmetric reflections of one another.} \label{fig16} \end{figure*} Finally, we display in Fig.~\ref{fig17} a spontaneously broken-$\mathcal P \mathcal T$-symmetric orbit that is vastly more complicated than those shown in Figs.~\ref{fig13} and \ref{fig14}. This orbit begins at the $N=1$ turning point for $\epsilon=\frac{16}{15}$. It terminates at the complex conjugate $N=21$ turning point rather than at the $\mathcal P\mathcal T$-symmetric $N=-2$ turning point. The period of this orbit is $T=3393.64$. \begin{figure*}[t!] \vspace{6.25in} \special{psfile=fig17.ps angle=0 hoffset=-5 voffset=-13 hscale=59 vscale=59} \caption{A non-$\mathcal P\mathcal T$-symmetric orbit having an extremely elaborate topology. The value of $\epsilon$ for this orbit is $\frac{16}{15}$. The orbit originates at the $N=1$ turning point and terminates at the complex-conjugate $N=21$ turning point. 
This orbit visits eight of the fifteen sheets of the Riemann surface and travels a great distance from the origin. Only the $-2<{\rm Re}\,x<2$, $-2<{\rm Im}\,x<2$ portion of the complex-$x$ plane is shown.} \label{fig17} \end{figure*} \section{Concluding Remarks} \label{s4} The work in Ref.~\cite{r1} provides a heuristic explanation of how very long-period orbits arise. In order for a classical trajectory to travel a great distance in the complex plane, its path must slip through a forest of turning points. When the trajectory comes under the influence of a turning point, it usually executes a huge number of nested U-turns and eventually returns to its starting point. However, for some values of $\epsilon$ the complex trajectory may evade many turning points before it eventually encounters a turning point that takes control of the particle and flings it back to its starting point. We speculated in Ref.~\cite{r1} that it may be possible to find special values of $\epsilon$ for which the classical path manages to avoid and sneak past all turning points. Such a path would have an infinitely long period. (We still do not know if such infinite-period orbits exist.) However, in Ref.~\cite{r1} we could not provide an explanation of why the period of a closed trajectory, as a function of $\epsilon$, is such a wildly fluctuating function. We have shown here that for special rational values of $\epsilon$ the trajectory bumps directly into a turning point that is located at a point that is the complex conjugate of the point from which the trajectory was launched. This turning point reflects the trajectory back to its starting point and prevents the trajectory from being $\mathcal P\mathcal T$ symmetric. Trajectories for values of $\epsilon$ near these special rational values have extremely different topologies and thus have periods that tend to be relatively long. This explains the noisy plots in Figs.~\ref{fig2} and \ref{fig4}.
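As a small consistency check, the special rational values quoted in this paper ($\epsilon=\frac{4}{5}$, $\frac{12}{5}$, $\frac{8}{5}$, $\frac{16}{9}$, $\frac{16}{15}$, and $\frac{76}{13}$) can be tested against the rule of Sec.~\ref{s3} that broken-symmetry values have the form $\epsilon=\frac{p}{q}$ with $p$ a multiple of 4 and $q$ odd, whereas a nearby unbroken value such as $\epsilon=0.805$ does not:

```python
from fractions import Fraction

# eps values at which broken-PT orbits are reported in this paper:
special = [Fraction(4, 5), Fraction(12, 5), Fraction(8, 5),
           Fraction(16, 9), Fraction(16, 15), Fraction(76, 13)]
for eps in special:                 # each is p/q with 4 | p and q odd
    assert eps.numerator % 4 == 0 and eps.denominator % 2 == 1

# The nearby PT-symmetric case eps = 0.805 reduces to 161/200, which is
# *not* of this form (161 is not a multiple of 4, and 200 is even):
assert Fraction(805, 1000).numerator % 4 != 0
assert Fraction(805, 1000).denominator % 2 == 0
```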
Figures~\ref{fig7} and \ref{fig8} illustrate this phenomenon. The value of $\epsilon$ for Fig.~\ref{fig7} differs from that in Fig.~\ref{fig8} by only $0.005$. Nevertheless, the orbits in these two figures exhibit very different topologies. This is because the trajectory in Fig.~\ref{fig7} is a failed $\mathcal P\mathcal T$-symmetric orbit, where the trajectory that starts at the $N=2$ turning point is reflected almost immediately by the complex-conjugate $N=4$ turning point. When $\epsilon$ is changed by the tiniest amount, the turning points are slightly displaced. As a result, the trajectory in Fig.~\ref{fig8} manages to sneak past the $N=4$ turning point and travels a great distance in the complex plane before bouncing back from the $\mathcal P\mathcal T$-symmetric $N=-3$ turning point. We do not know whether for each turning point there are a finite or an infinite number of special rational values of $\epsilon$ for which the classical orbit has a broken $\mathcal P\mathcal T$ symmetry. It is worth noting that the data used to produce Figs.~\ref{fig2} and \ref{fig4} are far from exhaustive and that the bars underneath the horizontal axes are only the {\em known} examples of broken $\mathcal P\mathcal T$ symmetry. There are surely many more such special rational values of $\epsilon=\frac{p}{q}$. In this paper we concluded our study at $q=13$, and this work took an immense amount of computer time! The study of complex trajectories for classical dynamical systems is a rich new area in mathematical physics that deserves extensive analytical and numerical exploration. Already, there has been work done on the complex orbits of the simple pendulum \cite{r7}, the complex extension of the Korteweg-de Vries equation \cite{r8,r9}, complex solutions of the Euler equations for rigid-body rotation \cite{r10}, and the complex version of the kicked rotor \cite{r11}. We expect that complex analysis will provide a deep insight into the behavior of dynamical systems.
\begin{acknowledgments} We thank D.~Hook and S.~McLenahan for programming assistance. CMB is grateful to the Theoretical Physics Group at Imperial College, London, for its hospitality. As an Ulam Scholar, CMB receives financial support from the Center for Nonlinear Studies at the Los Alamos National Laboratory and he is supported in part by a grant from the U.S. Department of Energy. \end{acknowledgments}
\section{INTRODUCTION} The chemical composition of very metal-poor (VMP; [Fe/H] $< -2.0$) and extremely metal-poor (EMP; [Fe/H] $< -3.0$) stars provides a fossil record of the star formation and nucleosynthesis history of the early Galaxy. Carbon (and often N and O) enhancement appears to be common for stars with [Fe/H] $\lesssim -2.5$ \citep{BeCh05,Carol11}, but the overall abundance ratios of elements up to the iron peak are well established by [Fe/H] $\simeq -2.5$, with very small scatter \citep{LP5,Arn05}. The full range of {\it r}-process elements was also in place at this stage in the chemical evolution of the Galaxy \citep[see the review by][]{sneden08}. Other neutron-capture processes, such as the {\it s}-process, began to contribute significant amounts of heavy elements from [Fe/H] $\lesssim -2.5$ \citep[e.g.,][]{sim04}\footnote{Note that \citet{roederer10} have argued that the {\it majority} of $s$-process element enhancement may have been delayed until [Fe/H] $\sim -1.4$. See also the discussion in \citet{Bisterzo11}.}. Hence, the processes by which the chemical elements were produced and recycled into the Galactic halo system are, at least in a broad-brush sense, well understood. However, in a small fraction of stars in the abundance range $-3.2 <$ [Fe/H] $< -2.6$, the uniform abundance pattern of the {\it r}-process elements as a group is enhanced by factors up to $\sim80$ relative to that of the iron-peak and lighter elements. Spectroscopic analyses of such stars with 8m-class telescopes have provided precise and detailed abundances of many {\it r}-process elements as a key to understanding their origin \citep{sneden08, cowan11}. Yet, the likely production site(s) of the {\it r}-process elements, as well as the mechanisms by which their abundances relative to the ``standard'' halo composition could vary so strongly from star to star in the early Galaxy, remain essentially unknown.
Explanations of this diversity fall into two main classes: Inhomogeneous enrichment and incomplete mixing of the ISM by the first generation(s) of stars, or later, local pollution of neutron-capture elements by a binary companion of the presently observed star \citep{QW01}. In the first case, the very uniform abundance pattern of the $\alpha$- and iron-peak elements is difficult to reconcile with the predictions of models of stochastic star formation and enrichment in the early Galaxy \citep{Arn05}. This conflict would be resolved in the second case, but unlike the {\it s}-process-element enhanced Ba and CH giants, which are known to {\it all} be long-period binaries \citep{McCW90, Joris98}, this conjecture is so far without observational foundation. Here we report the first results from four years of precise radial-velocity monitoring, performed in order to assess the binary frequency of a sample of 17 {\it r}-process-element enhanced VMP and EMP giants. Our results provide strong new constraints on the nature of the {\it r}-process production site(s) and on the use of these stars as tracers of the star formation and/or merger history of the early Galaxy. \section{Sample Definition and Observations} \label{sect_obs} Our program stars were drawn from the HERES search for {\it r}-process-element enhanced stars \citep{Chri04,Barklem05}, supplemented by earlier and later discoveries as summarised by \citet{Hayek09}. Only stars north of declination $\sim -25\degr$ and brighter than $V \sim 16.0$ are accessible for study with the Nordic Optical Telescope (NOT) on La Palma, resulting in a total sample of 17 stars. Eight of these are in the r-I class ($+0.3 <$ [Eu/Fe] $<+1.0$), as defined by \citet{BeCh05}, and 9 are in the r-II class ([Eu/Fe] $>+1.0$). 
Table \ref{tbl-1} lists the program stars, in order of increasing [$r$/Fe] ratio, to highlight the {\it continuum} of {\it r}-process enhancements that exists; the division into r-I and r-II classes appears to be merely one of convenience\footnote{Note that the r-II and r-I stars do occupy different, but overlapping, regions of metallicity space. The r-II stars are found {\it only} at very low metallicity, while the r-I stars extend over the range $-3.0 \leq$ [Fe/H] $\leq -0.5$; see the bottom panel of Fig. 3 in \citet{aoki10}.}. Stars with measured U abundances, and those found here to be spectroscopic binaries, are indicated in the Table. As the chemical compositions of the targets are already known -- some in great detail -- our observations were designed to just yield precise radial velocities as efficiently as possible, over a time span of several years. To this end, we obtained high-resolution spectra ($R \sim 45,000$) with a S/N ratio $\sim 10$, using the bench-mounted, fibre-fed \'{e}chelle spectrograph FIES at the NOT \citep{ADJA10}, which is installed in a separate, temperature-controlled underground enclosure. The useful wavelength range covered by these spectra is $4000-7000$ \AA. Our goal was to reach an accuracy of $100-200$ m s$^{-1}$ per observation, except perhaps for the faintest program stars. The radial-velocity zero point was checked with standard stars on every night of observation, and found to be reproducible to better than 45 m s$^{-1}$ per observation over a five-year period. Thus, the accuracy of the radial velocities of the program stars is not limited by the instrument. Our initial assumption, by analogy with the Ba and CH giants \citep{Joris98}, was that any spectroscopic binaries in the sample would likely have orbits of long period, low eccentricity, and small amplitude -- i.e., small, slow velocity variations.
Thus, our strategy was to observe these stars at roughly monthly intervals, weather permitting, and then adapt the frequency of the observations to follow any objects with radial-velocity variations detected in the initial data. Observations began in April 2007, and were continued on 51 nights through September 2011, for a total of $\sim$ 234 spectra, an average of 14 per star. Faint stars at far southern declinations require ideal conditions, and were observed less frequently than average. \begin{deluxetable}{lrrrrrrrl} \tabletypesize{\scriptsize} \tablecaption{Stars Monitored for Radial Velocity Variation \label{tbl-1}} \tablewidth{0pt} \tablehead{ \colhead{Star} & \colhead{RA (J2000)} & \colhead{Dec (J2000)} & \colhead{$V$} & \colhead{$B-V$} & \colhead{[Fe/H]} & \colhead{[r/Fe]} & \colhead{Nobs} & \colhead{Remarks} } \startdata HE 0524-2055 & 05:27:04 & $-$20:52:42 & 14.01 & 0.87 & $-$2.58 & $+$0.49 & 8 & r-I \\ HE 0442-1234 & 04:44:52 & $-$12:28:46 & 12.91 & 1.07 & $-$2.41 & $+$0.52 & 23 & r-I, SB \\ HE 1430+0053 & 14:33:17 & $+$00:40:49 & 13.69 & 0.58 & $-$3.03 & $+$0.72 & 19 & r-I \\ CS 30315-029 & 23:34:27 & $-$26:42:19 & 13.66 & 0.91 & $-$3.33 & $+$0.72 & 9 & r-I \\ HD 20 & 00:05:15 & $-$27:16:18 & 9.07 & 0.54 & $-$1.58 & $+$0.80 & 9 & r-I \\ HD 221170 & 23:29:29 & $+$30:25:57 & 7.71 & 1.02 & $-$2.14 & $+$0.85 & 23 & r-I \\ HE 1044-2509 & 10:47:16 & $-$25:25:17 & 14.34 & 0.66 & $-$2.89 & $+$0.94 & 13 & r-I, SB \\ HE 2244-1503 & 22:47:26 & $-$14:47:30 & 15.30 & 0.60 & $-$2.88 & $+$0.95 & 12 & r-I \\ HE 2224+0143 & 22:27:23 & $+$01:58:33 & 13.68 & 0.71 & $-$2.58 & $+$1.05 & 18 & r-II \\ HE 1127-1143 & 11:29:51 & $-$12:00:13 & 15.88 & \dots & $-$2.73 & $+$1.08 & 12 & r-II \\ HE 0432-0923 & 04:34:26 & $-$09:16:50 & 15.16 & 0.73 & $-$3.19 & $+$1.25 & 14 & r-II \\ HE 1219-0312 & 12:21:34 & $-$03:28:40 & 15.94 & 0.64 & $-$2.81 & $+$1.41 & 6 & r-II \\ CS 22892-052 & 22:17:01 & $-$16:39:26 & 13.21 & 0.80 & $-$2.95 & $+$1.54 & 17 & r-II \\ CS 29497-004 & 00:28:07 & 
$-$26:03:03 & 14.03 & 0.70 & $-$2.81 & $+$1.62 & 8 & r-II \\ CS 31082-001 & 01:29:31 & $-$16:00:48 & 11.66 & 0.76 & $-$2.78 & $+$1.66 & 17 & r-II, U \\ HE 1523-0901 & 15:26:01 & $-$09:11:38 & 11.10 & 1.10 & $-$2.95 & $+$1.80 & 17 & r-II, U, SB \\ HE 1105+0027 & 11:07:49 & $+$00:11:38 & 15.64 & 0.39 & $-$2.42 & $+$1.81 & 9 & r-II \\ \enddata \tablecomments{Remarks: U indicates that uranium has been detected; SB indicates a confirmed spectroscopic binary.} \end{deluxetable} \section{Data Reduction and Analysis}\label{sect_data} The entire set of reductions of the raw spectra (bias subtraction, division by a flat-field exposure, cosmic ray removal, 2-D order extraction, and wavelength calibration) was performed with a program developed and extensively tested on exoplanet hosts by \citet{LarsB10}. For the fainter stars, it was found preferable to divide the long exposures into three pieces, and remove cosmic ray hits by median filtering. Radial velocities were then derived from the reduced spectra by a multi-order cross-correlation procedure. This operation is the most difficult step in the analysis because {\it (i)} these stars are extremely metal-poor and chemically peculiar; and {\it (ii)} the individual spectra have low S/N ratios. Thus, selecting an optimum template spectrum for each star is no trivial task. Noting that the primary objective of the analysis is to measure small velocity {\it variations} rather than absolute values, we have experimented with three types of template spectra: {\it (a) } the highest S/N spectrum of each star; {\it (b)} the velocity-shifted and co-added {\it mean} spectrum of each star, and {\it (c)} a synthetic spectrum consisting of $\delta$ functions at the (solar) wavelengths of the strongest visible lines. The choice of template for each star was then guided by the consistency of the resulting velocities. 
Templates {\it (b)} and {\it (c)} were generally found to give the most consistent results, the latter also yielding velocities on a reliable absolute scale. In summary, our final procedure yielded radial velocities with a standard deviation of $\lesssim 100$ m s$^{-1}$ for the brighter and more metal-rich stars, rising to $\sim 300-1000$ m s$^{-1}$ for the faintest stars with the weakest spectral features. \section{Binary Detection and Orbit Determination} Fourteen of our stars exhibit no significant variation in radial velocity over the period covered by our observations, including any earlier velocities reported from HERES \citep{Barklem05}. Linear and parabolic fits of the run of velocities vs. time were made to check for any long-term trends, but they were generally negligible within the uncertainties; a few of the stars will be kept under continued surveillance. The star HE~0442-1234 was shown by \citet{PB10} to be a probable long-period spectroscopic binary. Our new data enabled us to complete the orbit of this star, as well as for the newly-discovered binaries HE~1044-2509 and HE~1523-0901. The orbital elements for these stars are listed in Table \ref{tbl-orb}, and the observed and computed velocity curves are shown in Fig. \ref{fig-orb}. \begin{figure} \plotone{fig1.eps} \caption{Observed and computed spectroscopic orbits for HE~0442-1234 (top), HE~1044-2509 (middle) and HE~1523-0901 (bottom). Orbital elements are listed in Table \ref{tbl-orb}. {\it Blue dots:} FIES velocities; {\it red dots:} Earlier data.\label{fig-orb}} \end{figure} In general, the individual component masses cannot be derived directly, but assuming a standard value of 0.8 $M_{\sun}$ for the mass of a halo giant allows us to estimate a minimum mass for the unseen companion. For HE~0442-1234, this is 0.67 $M_{\sun}$ if the inclination $i \sim90\deg$, and it cannot be much larger for the secondary star to remain invisible in the spectrum. 
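The minimum companion mass quoted above follows from the spectroscopic mass function, $f(m) = (M_2 \sin i)^3 / (M_1 + M_2)^2$. A minimal numerical sketch (pure Python, assuming $M_1 = 0.8\,M_{\sun}$ and $i = 90\deg$ as above; the bisection solver is illustrative, not the procedure actually used):

```python
import math

def mass_function(m1, m2, inc_deg=90.0):
    """Spectroscopic mass function f(m) = (m2 sin i)^3 / (m1 + m2)^2,
    in solar masses."""
    sini = math.sin(math.radians(inc_deg))
    return (m2 * sini) ** 3 / (m1 + m2) ** 2

def min_companion_mass(f_m, m1=0.8, lo=1e-6, hi=10.0):
    """Smallest m2 consistent with an observed f(m), i.e. for i = 90 deg.
    f(m) is monotonically increasing in m2, so bisection converges."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mass_function(m1, mid) < f_m:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# f(m) = 0.1396 Msun for HE 0442-1234 (Table 2)
m2_min = min_companion_mass(0.1396)   # ~0.67 Msun
```

For lower inclinations $f(m)$ scales as $\sin^3 i$, so the implied $M_2$ grows rapidly as the orbit approaches face-on, which is how the inclination estimates in Table \ref{tbl-orb} are obtained once $M_2$ is assumed.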
In the other two systems, a secondary star on the main sequence could be of similar or lower mass. Assuming $M_2 = 0.6 M_{\sun}$ leads to the estimates of $i$ given in Table \ref{tbl-orb}; note that the orbit of HE~1523-0901 is seen nearly face-on. These, in turn, lead to estimates of the size (volume equivalent radius) of the Roche lobes of the unseen stars, which are not very sensitive to the adopted geometry. \label{sect_bin} \begin{deluxetable}{lccc} \tabletypesize{\scriptsize} \tablecaption{Orbital Elements for the Detected Binary Stars\label{tbl-orb}} \tablewidth{0pt} \tablehead{ \colhead{Element} & \colhead{HE 0442-1234} & \colhead{HE 1523-0901} & \colhead{HE 1044-2509} } \startdata P (d) & 2513.38$\pm$4.46 & 302.78$\pm$0.78 & 36.57$\pm$0.20 \\ K (km s$^{-1}$) & 12.50$\pm$0.20 & 0.399$\pm$0.008 & 27.48$\pm$0.22 \\ $e$ & 0.760$\pm$0.001 & 0.23$\pm$0.129 & 0.000$\pm$0.000 \\ $\gamma$ (km s$^{-1}$) & 236.16$\pm$0.20 & 162.50$\pm$0.10 & 359.28$\pm$0.55 \\ $f(m)$ ($M_{\sun}$) & 0.1396$\pm$0.0012 & 1.59$\times10^{-6}\pm$0.54$\times10^{-6}$ & 0.0823$\pm$0.0027 \\ $a \sin i$ ($R_{\sun}$) & 403.2$\pm$1.2 & 2.21$\pm$0.17 & 20.24$\pm$0.24 \\ $i$ (for $M_2 \sim 0.6 M_{\sun}$) & 88 & 1.5: & 65 \\ $R_{Roche}$ ($R_{\sun}$) & 65\tablenotemark{a} & 48 & 13 \\ $\sigma$ (1 obs., km s$^{-1}$) & 0.28 & 0.11 & 0.98 \\ \enddata \tablenotetext{a}{Size at periastron.} \end{deluxetable} \section{Discussion} \label{sect_disc} Our results enable us to address several issues of importance for understanding the likely astrophysical site(s) of the $r$-process, and also shed some light on the early Galactic production of carbon, as described below. \subsection{Binary Frequency} With three binaries detected in a sample of 17 stars, the binary frequency of the {\it r}-process-enhanced stars is $18$\%. The star HE~2327-5642, an r-II star showing a possibly variable velocity \citep{Mash10}, is below the southern limit of our sample, but is another candidate.
This is fully consistent with the $\sim$20\% binary frequency determined for normal cluster giants by \citet{Merm96}. An additional 1-2 future discoveries in a larger sample might boost the frequency to perhaps as much as 25\%, but there is clearly no evidence that {\it all} {\it r}-process-element enhanced stars are binaries, as speculated by \citet{QW01} and others. Within the limitations imposed by the small sample, there also seems to be no difference in the binary population of the r-I and r-II classes. Similarly, of the two stars with measured U abundances, CS~31082-001 exhibits the so-called ``actinide boost" (an over-abundance of Th and U relative to third-peak {\it r}-process elements such as Eu) and is a single star, while HE~1523-0901 is a binary and shows no actinide boost. Remarkably, the C-enhanced prototypical r-II star CS~22892-052 also seems {\it not} to be a binary, despite the earlier suggestion by \citet{PreSne01}, indicating that its C content was not produced in a former AGB companion. Thus, membership in a binary system appears to be decoupled from details associated with these particular abundance variations. \subsection{Orbital Properties} The periods and eccentricities of our three confirmed binaries are entirely consistent with those of the sample of chemically normal Population I giant binaries by \citet{Merm96}: The longest-period orbit is highly eccentric, and the shortest-period orbit is well below the limit of $\sim 100$d for tidal circularization. Note that the old, metal-poor CH stars typically have circular orbits for periods up to $\sim 1000$d. Moreover, the secondary Roche lobes are too small to accommodate typical AGB stars of $\sim$ 200 $R_{\sun}$, if the secondary stars passed through this phase of their evolution. This is again consistent with the lack of {\it s}-process-element enhancements in these stars. 
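Roche-lobe sizes of the kind discussed above are commonly estimated with the Eggleton (1983) approximation for the volume-equivalent Roche-lobe radius; the sketch below is illustrative only (the mass ratio is the assumed $M_2 = 0.6\,M_{\sun}$, $M_1 = 0.8\,M_{\sun}$ case) and does not claim to reproduce our exact procedure.

```python
import math

def roche_lobe_fraction(q):
    """Eggleton (1983) approximation to the volume-equivalent Roche-lobe
    radius of a star with mass ratio q = M_star / M_companion, expressed
    as a fraction of the orbital separation (accurate to ~1% for all q)."""
    q23 = q ** (2.0 / 3.0)
    return 0.49 * q23 / (0.6 * q23 + math.log(1.0 + q ** (1.0 / 3.0)))

# Illustrative: Roche lobe of the unseen secondary for M2/M1 = 0.6/0.8
q = 0.6 / 0.8
r_frac = roche_lobe_fraction(q)   # fraction of the (periastron) separation
```

Multiplying this fraction by the relevant separation (at periastron for the eccentric orbits) yields Roche-lobe radii of tens of $R_{\sun}$, far smaller than the $\sim 200\,R_{\sun}$ of a typical AGB star.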
\subsection{The Astrophysical Site of the {\it r}-Process} Demonstrating that the {\it r}-process-element production did not occur in binaries provides no direct evidence about the nuclear physics of the production site beyond that afforded by the detailed abundance analyses of these stars. Models attempting to explain the observed {\it r}-process abundance patterns fall into two classes, core-collapse supernovae (SNe) and merging binary neutron stars \citep[see, e.g.,][]{argast04,Gor11}, also discussed comprehensively by \citet{sneden08}, and in the context of CS~31082-001 by \citet{LP15}. The key feature to be highlighted here is that the newly-formed {\it r}-process elements were added in variable, but {\it internally consistent} proportions, in the otherwise constant chemical composition of the next generations of EMP and VMP stars. That the {\it r}-process-element abundances varied so strongly in the clouds from which these stars formed indicates that the {\it r}-process elements were not simply uniformly dispersed, together with all lighter elements, in the supernova explosions that are a common feature of all astrophysical models for the {\it r}-process. Ejection in a jet or beam directly from the nascent neutron star(s) seems the most natural scenario for achieving this. The chemical composition of the progenitor, and the varying distance and direction of the jet from the next cloud, would then explain, in a natural way, the continuously varying proportions of {\it r}-process elements to the bulk composition of the following generation of EMP stars.
\subsection{{\it r}-Process-Enhanced Stars as Chemical Tags of the Early Galactic ISM} In this scenario, most stars would receive a ``standard'' dose of {\it r}-process elements; stars with {\it r}-process-element abundances exceeding the average by a factor of $\gtrsim2-3$ would be seen as {\it r}-process rich, while those below the average \citep[exemplified by HD 122563,][]{westin00, Honda06} would have been bypassed by the {\it r}-process ejection and appear as {\it r}-process {\it poor}. This latter group may be as numerous as the former, but without spectacularly strong spectral lines to call attention to them. The recent discovery by \citet{aoki10} of a cool, EMP main-sequence dwarf with highly {\it r}-process-enhanced elemental abundance ratios, consistent with classification as an r-II star, would obviate any model to explain {\it r}-process enhancements as only due to some atmospheric anomaly confined to red giants. The existence of {\it r}-process-element enhanced (or depleted) stars in a narrow range of metallicity near [Fe/H] $\sim -3$ would imply that such anisotropic SN explosions only appeared at a certain ``chemical time'', and that the ISM was quickly fully mixed soon thereafter. The spectacular abundance anomalies of these stars can thus be used as extreme examples of ``chemical tags'' of the sites and times of their formation. The r-I/r-II classification is just a coarse tool to indicate the degree of enhancement. However, as r-I stars are also found at higher metallicity than the r-II stars, they may have formed from clouds that were further enriched by material of ``normal'' halo composition.
The different processes responsible for the light and heavy {\it r}-process elements \citep[see, e.g.,][]{LP8,Montes07}, as well as the existence of stars with and without an actinide boost, remain to be explained by further modeling, but the binary properties of EMP stars apparently played no role in this context: Binaries formed with similar properties as in chemically normal stars \citep[e.g.,][]{GonHer08}. Finally, it is remarkable that the prototypical r-II star CS~22892-052 is single {\it and} significantly enhanced in carbon, which has been assumed to originate in long-period binary stars together with the {\it s}-process elements, which are {\it not} observed in CS~22892-052. This casts doubt on the accepted explanation for the synthesis of C in the early Galaxy as due primarily to pollution by former AGB binary companions, and suggests the synthesis of C, N, and O in earlier, rapidly rotating massive stars as one attractive alternative \citep[see, e.g., ][]{Meynet06,Meynet10}. \section{Conclusions} \label{sect_concl} Eighty percent of our program r-I and r-II stars exhibit no detectable radial-velocity variations, while three stars are binaries with well-determined orbits (Table \ref{tbl-orb}), typical of systems with giant primaries, but no AGB secondary stars. Thus, the binary population among these stars is normal, and binary stars play no special role in producing the {\it r}-process elements and injecting them into the early ISM. The case of CS~22892-052 suggests that this may be true for the early synthesis of carbon as well. We conclude that whatever progenitors produced the {\it r}-process elements (and carbon) were {\it extrinsic} to the EMP and VMP stars we observe today. These elements were likely ejected in a collimated manner, and make these stars archetypical chemical indicators of their formation sites in the early Galaxy. 
\acknowledgments This paper is based on observations made with the Nordic Optical Telescope, operated on the island of La Palma jointly by Denmark, Finland, Iceland, Norway, and Sweden, in the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrofisica de Canarias. We thank Drs. Piercarlo Bonifacio, Luca Sbordone, Lorenzo Monaco, and Jonay Gonzalez Hernandez for alerting us to the binary nature of HE~0442-1234 and for allowing us to include their velocities in our orbital solution, Dr. G. Torres for computing the orbit of HE~1523-0901, and Dr. Paul Barklem for providing the observing dates of the HERES spectra. We also thank several NOT students for obtaining most of the NOT observations for us in service mode. J.A. and B.N. acknowledge support from the Danish Natural Science Research Council, and L.A.B. from the Carlsberg Foundation. T.C.B. acknowledges partial funding of this work from grants PHY 02-16783 and PHY 08-22648: Physics Frontier Center/Joint Institute for Nuclear Astrophysics (JINA), awarded by the U.S. National Science Foundation.
\section{Introduction} \label{sec:Intro} The bulk of the $3033$ sources in the \textit{Fermi} Gamma-ray Space Telescope - Large Area Telescope (\textit{Fermi}-LAT) 3FGL catalog fall into one of two classes: extragalactic blazars or nearby pulsars \citep{Acero2015}. A small number of other identified sources include supernova remnants, X-ray binaries, and starburst galaxies \citep{Ferrara2015}. However, $1010$ 3FGL sources are ``unassociated", without a confident astrophysical identification with a known pinpointed source, or with several competing astronomical explanations within the gamma-ray detection area. Based on the dominance of blazars and pulsars among identified 3FGL sources, many of these unassociated sources are likely blazars and pulsars. Capturing these unassociated and heretofore unclassified sources would create a more complete all-sky sample of various classes of objects, particularly blazars. Categorizing the blazars and pulsars in the 3FGL unassociated list is an important step towards confident population studies of both classes. The \textit{Fermi}-LAT unassociated sources might include blazars that are lower luminosity or higher redshift than their more easily detected and identified cousins in the established 3FGL blazar catalog \citep{Ferrara2015}. Therefore, pursuing the classification of the 3FGL unassociated sources will help build a more complete population study, which will aid in verification and analysis of the blazar sequence \citep[e.g.,][]{Fossati1998,Ghisellini2017} as a theoretical unifying scheme for blazars. In a similar way, identification and classification of 3FGL unassociated sources can also lead to new pulsar candidates, a population which has included new accreting pulsars in the 3FGL catalog \citep[e.g.][]{Wu2018,KwanLok2018}. Furthermore, in-depth analysis might show that an object suits neither blazar nor pulsar classification. 
Such sources might represent new gamma-ray binaries, or possibly more exotic objects \citep{SazParkinson2016}. The 3FGL unassociated list could contain several such objects, but classification of the blazars and pulsars that probably make up most of the unassociated sample is the first step in identifying any unique sources. While gamma-ray observations from \textit{Fermi}-LAT are the foundation for the 3FGL catalog, matching gamma-ray sources with X-ray counterparts and extending analysis to lower energy photons is a vital step in deeper analysis of the 3FGL sources. Recent work \citep{Kaur2019} developed a machine learning (ML) approach to sort 217 high-S/N unassociated 3FGL sources into blazars and pulsars. The 217 unassociated sources were analyzed by combining the gamma-ray flux and photon index of each object with a coarse estimate for the X-ray flux using probable counterpart X-ray excesses from the Neil Gehrels \textit{Swift} Observatory (hereafter \textit{Swift}) \citep{Gehrels2004}. For simplicity, the X-ray fluxes used in that work assumed an X-ray photon index of $\Gamma_{X} = 2$ rather than conducting full spectral fits. Training an ML routine with known pulsar and blazar samples, the authors identified $173$ likely blazars with $P_{bzr} >90 \%$ ($134$ with $P_{bzr} >99 \%$) and $13$ likely pulsars with $P_{bzr} <10 \%$ ($7$ with $P_{bzr} < 1 \%$). Thirty-one sources from the 3FGL unassociated list defied categorization and were labeled ``ambiguous''. The unassociated sources examined by \citet{Kaur2019} each had only one high-S/N X-ray excess in their gamma-ray confidence ellipse, and the majority of gamma-ray sources are expected to have counterpart X-ray sources. This is especially true of blazars, which make up the majority of the catalog.
In \cite{Kaur2019} and this work we only examine gamma-ray sources with a single high-S/N (S/N $>$ 4) X-ray excess in the confidence region, and for the purposes of classification, we make the initial assumption that this source is the counterpart. There may be some rare cases in which this X-ray source and the unassociated gamma-ray source do not correspond to one another, but since there is no other strong X-ray source in the \textit{Fermi} error ellipse, this X-ray source is the most likely counterpart. In this work, we expand upon ML investigations by conducting detailed X-ray spectral analysis of 184 possible X-ray counterparts of the unassociated 3FGL sources. We obtain fully fitted X-ray fluxes and power-law photon indices using an absorbed power-law model. Our training and validation sample was drawn from known lists of \textit{Fermi}-LAT blazars and pulsars which had data for all six gamma- and X-ray parameters used in our ML process \citep{Ackermann2015,Abdo2013}. \citet{Kaur2019} used a list of 217 unassociated sources for the test sample. Fifty-six of those 217 objects were later associated with an astronomical object, so those were removed from consideration in this paper. Since then, new observations have added 26 solitary X-ray excesses with high $S/N > 4$, leading to our initial list of 187 unassociated sources with one possible X-ray counterpart within the $95\%$ confidence region of the \textit{Fermi}-LAT unassociated source. We found three members of the 187 that were spurious contaminants from optically bright coincident stars creating optical loading in the \textit{Swift}-XRT detector, and we excluded these three excesses from all analysis. The goal of this work is to conduct X-ray spectral analysis for some possible counterparts to \textit{Fermi}-LAT unassociated gamma-ray sources and to begin to build a multiwavelength classification routine.
To this end we update the machine learning approach in \cite{Kaur2019} to include full X-ray spectral fits for high-S/N X-ray excesses spatially coincident with the 3FGL unassociated sources. In section \ref{sec:ObsAn}, we discuss observations, spectral fitting, and ML processes used in our analysis, plus we describe the training, test, and research samples. Next, in section \ref{sec:Results} we tabulate and plot parameters for the various samples and we describe our fitting and classification results. In section \ref{sec:DisCon} we discuss the classifications in comparison to previous works and summarize our findings. Tables of spectral fits and ML classification results are also included. \section{Observations and Analysis} \label{sec:ObsAn} \subsection{\textit{Swift}-XRT Observations of 3FGL Unassociated Sources} Our sample is based on a collection that initially contained 187 \textit{Swift} X-ray counterparts to \textit{Fermi}-LAT 3FGL sources with high detection Signal-to-Noise (S/N) ratio ($S/N \ge4$) and only a single X-ray excess in the $95\%$ confidence region of the 3FGL source. As three of these counterparts (3FGL J0858.0$-$4843, J1050.6$-$6112, and J1801.5$-$7825) were near bright ($m_V<8$) stars, the X-rays from those excesses are almost certainly spurious products of optical loading\footnote{\url{https://www.swift.ac.uk/analysis/xrt/optical_loading.php}} in the \textit{Swift}-XRT (X-ray Telescope) \citep{Burrows2005} detector. We excluded these false detections from all further analysis. Particularly bright stars in the field of view cause optical loading in the \textit{Swift}-XRT as optical photons contaminate the detector and introduce spurious signals. Optical loading is less likely for dimmer coincident stars. We used the NASA HEASARC keyword interface to download all \textit{Swift}-XRT observations within 8' of the centroid position of each 3FGL counterpart. 
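The positional match against the gamma-ray centroids amounts to a simple cone-search cut; a minimal sketch follows (pure Python, with synthetic coordinates; this is not the actual HEASARC interface, and the helper names are illustrative):

```python
import math

def angular_sep_deg(ra1, dec1, ra2, dec2):
    """Great-circle separation in degrees (Vincenty formula, which is
    numerically stable at small separations); inputs in degrees."""
    r1, d1, r2, d2 = map(math.radians, (ra1, dec1, ra2, dec2))
    dr = r2 - r1
    num = math.hypot(math.cos(d2) * math.sin(dr),
                     math.cos(d1) * math.sin(d2)
                     - math.sin(d1) * math.cos(d2) * math.cos(dr))
    den = (math.sin(d1) * math.sin(d2)
           + math.cos(d1) * math.cos(d2) * math.cos(dr))
    return math.degrees(math.atan2(num, den))

def within_radius(center, pointings, radius_arcmin=8.0):
    """Keep the (ra, dec) pointings within radius_arcmin of center."""
    ra0, dec0 = center
    return [p for p in pointings
            if angular_sep_deg(ra0, dec0, *p) * 60.0 <= radius_arcmin]
```

For example, with a centroid at (10.0, 0.0) degrees, a pointing offset by 0.1 degrees in RA on the equator (6 arcminutes) passes the 8-arcminute cut, while one offset by 0.2 degrees (12 arcminutes) does not.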
Two more sources (3FGL J1216.6-0557 and 3FGL J0535.7-0617c) were matched with \textit{Swift}-XRT observations only upon expanding the HEASARC search radius to 10'. These sources are positioned close to the edge of the \textit{Swift}-XRT field of view in their respective observations. In total we collected over $500$ individual \textit{Swift}-XRT observations, capturing all $184$ members of the unassociated list. All observations used the photon counting (PC) mode of the \textit{Swift}-XRT, enabling two-dimensional imaging across the 23 arcminute XRT field-of-view. Usable \textit{Swift}-XRT exposure times for most objects were around 4 ks, but ranged from 1 ks to over 60 ks. We cleaned and processed each level 1 event file using \verb|xrtpipeline| v.0.13.5 from the HEASOFT software\footnote{\url{https://heasarc.gsfc.nasa.gov/docs/software.html}}, then merged it with other observations of that particular object using \verb|xselect| v.2.4g and \verb|ximage| v.4.5.1 to create a single summed event list for each source plus a summed exposure map and ancillary response file using \verb|xrtmkarf|. For each unassociated 3FGL object, we produced spectra for source and background regions using \verb|xselect|. The source region was circular with radius 20 arcseconds, and the background region was annular with inner and outer radii of 50 and 150 arcseconds. Both regions were centered on the coordinates of the examined X-ray excess. If the count rate in the source region of any excess exceeded 0.5 counts per second, we would draw a new annular source region with an inner radius depending on the count rate to avoid photon pile-up and saturation on the detector. The possible X-ray counterparts to 3FGL unassociated sources are faint enough that none caused pile-up severe enough to warrant annular source regions.
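Before subtraction, the counts in the background annulus must be rescaled by the ratio of the two region areas; a minimal sketch under the stated radii (20 arcsecond source circle, 50--150 arcsecond background annulus; the helper names are illustrative):

```python
import math

SRC_RADIUS = 20.0                     # arcsec, circular source region
BKG_INNER, BKG_OUTER = 50.0, 150.0    # arcsec, background annulus

def area_circle(r):
    return math.pi * r ** 2

def area_annulus(r_in, r_out):
    return math.pi * (r_out ** 2 - r_in ** 2)

# Geometric factor rescaling background counts to the source aperture
backscal = area_circle(SRC_RADIUS) / area_annulus(BKG_INNER, BKG_OUTER)
# = 20^2 / (150^2 - 50^2) = 400 / 20000 = 0.02

def net_counts(src_counts, bkg_counts):
    """Background-subtracted counts in the source aperture."""
    return src_counts - backscal * bkg_counts
```

In practice this bookkeeping is handled by the BACKSCAL keywords in the spectrum files, so the fitting software applies the same geometric ratio automatically.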
Adding X-ray variability to our spectral and photometric analysis of the X-ray excesses would certainly be an interesting probe into variability timescales of blazars and pulsars, but unfortunately most of the sources examined here do not have X-ray observations spanning a wide enough time range to maintain a large training, test, or research sample. Finally, we examined the total number of counts in the source region to determine whether $\chi^2$ statistics could be used for the initial spectral fitting. The Cash statistic is a useful fitting statistic for spectra with few counts, particularly in cases for which there are not sufficient counts to group for a $\chi^2$ fit \citep{Cash1976}. When an excess had enough photons in its source region, we binned our data with 20 counts per bin to enable an initial $\chi^2$ fit and also prepared a spectrum file for eventual Cash fitting. For an excess with only a few dozen detected X-ray photons, this approach would result in only one or two bins, and $\chi^2$ fitting would produce unreliable fits with non-Gaussian error distributions. The Cash statistic does not require any such binning, and therefore is often used in fitting faint X-ray spectra with few counts. While the Cash statistic cannot be directly considered as a measure of goodness of fit like $\chi^2$, a Cash statistic similar to the degrees of freedom is a rough indicator of a reasonable fit. \subsection{Detailed X-ray Spectral Fitting} While the machine learning analysis of the X-ray counterparts to 3FGL unassociated sources in \cite{Kaur2019} assumed an X-ray photon index $\Gamma_{X} = 2$ to calculate X-ray flux, our archival \textit{Swift}-XRT observations facilitated a full spectral fitting. We used \verb|Xspec| v.12.10.1f \citep{Arnaud1996} to fit each spectrum. The fitting model $tbabs \times cflux \times powerlaw$ included three nested functions: \verb|tbabs|, \verb|cflux|, and \verb|powerlaw|.
\verb|cflux| calculated the total unabsorbed flux between 0.3 and 10 keV and \verb|tbabs| modeled line-of-sight hydrogen absorption using galactic values from the \verb|nH| lookup function described in \cite{Wilms2000}. The galactic line-of-sight absorption is fixed at the catalog value for each spectrum analyzed. \verb|powerlaw| is a simple power law. Uncertainties on the fitted photon index and X-ray flux were jointly measured using the iterative \verb|steppar| routine; this routine occasionally encountered numerical errors finding the error of photon indices close to zero. For these spectra we report the symmetric error generated by \verb|fit|. Spectra with high photon counts were initially fit using $\chi^2$ as the optimization statistic to create first guesses for Cash statistic fitting. Comparing the new fitted fluxes to fluxes calculated assuming $\Gamma_{X} = 2$ provides a useful sanity check for the X-ray spectral fitting routine. Objects with fitted $\Gamma_X$ close to $2$ should show fitted X-ray fluxes close to the previous X-ray fluxes that assumed $\Gamma_{X} = 2$. Large revisions in X-ray flux should be reserved for objects with $\Gamma_{X} \ne 2$. Figure \ref{fig:indexcheck} shows this reassuring close correspondence for spectra with fitted photon indices near $\Gamma_{X} = 2$, while spectra with fitted indices departing from $2$ show significantly corrected fluxes. All of the successful spectral fits are displayed in Table \ref{tab:fitresults}. Of the eleven X-ray excesses whose fully fitted X-ray fluxes differed by more than an order of magnitude from the X-ray flux assuming $\Gamma_X = 2$, seven corresponded to the individually analyzed unusual sources described in Table \ref{tab:trouble}. Of the remaining four, one with a fitted index of $\Gamma_X = 4.4$ saw a drop in X-ray flux by a factor of ten compared to $\Gamma_X = 2$ fits.
This gamma-ray source (3FGL J1837.3-2403) has \textit{Swift} exposures of such duration that at least four X-ray excesses are expected to spuriously appear in the gamma-ray confidence region, suggesting that this X-ray source is background. The remaining three X-ray excesses with radically different fitted fluxes are three of the five excesses where spectral fitting failed to converge, described above. Figure \ref{fig:indexcheck} excludes only the five excesses with unconverged X-ray fitting. After fitting with a power-law model, seven excesses showed unusually high or low photon indices compared to the photon index of $\Gamma_X \sim 2$ expected for pulsars and blazars. We conducted further analysis on these excesses listed in Table \ref{tab:trouble}, and they were excluded from the main ML process since they warranted individual investigation to check for coincident stars or catalog objects that might explain their spectra. Six of these seven excesses are located within a few arcseconds of dim catalogued stars in the galactic plane, possible targets for future observations to verify whether the X-ray radiation is linked to the star. A quantitative estimate of optical loading for \textit{Swift}-XRT showed that none of the seven spectra can have count rates heavily impacted by optical loading from the nearby star, although one of the seven could have a minor contribution. \begin{figure*} \centering \includegraphics[width=0.8\textwidth]{FluxComparer.pdf} \caption{Fully fitted X-ray flux vs X-ray flux assuming $\Gamma_{X} = 2$. Points are colored based on fitted X-ray photon index. Black triangles are excesses identified as having extreme X-ray photon indices in table \ref{tab:trouble}. 
This figure excludes the five excesses for which full X-ray fitting failed to converge.} \label{fig:indexcheck} \end{figure*} \begin{deluxetable*}{crl} \tablecaption{Seven of the possible X-ray counterparts to unassociated 3FGL sources with extreme X-ray photon indices, $\Gamma_X < -1$ or $\Gamma_X > 5$, were excluded from the main ML classification effort. We describe the spectrum of the excess and list any notable objects close to the X-ray excess via SIMBAD and NASA Extragalactic Database coordinate searches. Possible stellar counterparts include spectral type and apparent magnitude from SIMBAD if available.} \label{tab:trouble} \tablewidth{0pt} \tablehead{ \colhead{3FGL Source} & \colhead{$\Gamma_{X}$} & \colhead{Notes}} \startdata 3FGL J0748.8$-$2208 & 6.29 & very few counts, 2" from TYC 5993-3722-1 ($m_V = 12.4$) \\ 3FGL J0905.6$-$4917 & 7.84 & diffuse in XRT image, listed as `confused' in 4FGL, 3" from 2MASS J09053033-4918382 (M4, $m_J = 9.5$) \\ 3FGL J1329.8$-$6109 & 6.22 & peaked spectrum, 10" from HD117110 (G0V, $m_V = 9.2$) \\ 3FGL J1624.1$-$4700 & 7.42 & peaked spectrum, 1" from CD-46 10711 (K1IV rotationally variable star, $m_V = 11.0$)\\ 3FGL J1710.6$-$4317 & 6.32 & peaked spectrum at 0.9 keV \\ 3FGL J1921.6+1934 & 6.19 & flat spectrum, 2" from HD231222 ($m_V = 10.8$) \\ 3FGL J2035.8+4902 & 5.82 & peaked spectrum at 0.8 keV, 5" from V* V2552 Cyg (Eclipsing binary, $m_V = 10.8$) \\ \enddata \end{deluxetable*} \subsection{Machine Learning} While multi-wavelength spectral analysis enables comprehensive study of individual unassociated gamma-ray sources, the observations and interpretation of hundreds of such objects would pose an onerous time burden on human scientists. Fortunately, recent developments in machine learning (ML) techniques have resulted in numerous applications of ML classification schemes to \textit{Fermi}-LAT unassociated source catalogs \citep{Hassan2012, SazParkinson2016, McFadden2017}. 
These developments are part of a wave of ML techniques propagating through survey analysis. In this work, we use a random forest (RF) classifier, an aggregate of many individual decision tree (DT) realizations, to classify sources into blazars and pulsars, following a procedure described in \cite{Breiman2001}. Our approach here is nearly identical to that in \cite{Kaur2019}, which achieved $\approx 95 \%$ accuracy with a decision tree method and $\sim 99 \%$ accuracy with a random forest approach. A detailed description of the statistics and theory of decision trees and random forests can be found in \cite{Breiman2001}. In brief, decision tree classifiers are non-parametric, supervised machine learning methods. DT classifiers discriminate between classes by splitting the sample at successive decision nodes, each node judging a single parameter of an object via an inequality. A tree is optimized using the Gini impurity index, representing the probability of a randomly selected source from the dataset being incorrectly labeled at one decision node. The random forest approach compounds the DT method described above, generating a forest of decision trees and classifying test objects based on the average of multiple decision trees \citep{Breiman2001}. An RF algorithm constructs numerous decision trees by randomly creating subsamples of the training dataset. The overall forest also returns the relative importance of the parameters of the training dataset. Once the forest is fully trained, a new observation is assigned a classification probability based on the average of the classifications of each tree in the forest. Overall, the use of many decision trees in the RF routine creates a more robust analysis of test objects and prevents overfitting in a single tree from biasing results. The DT and RF methods used in this paper utilize the \verb|sklearn| package available in Python.
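The mechanics of bagging and probability voting can be illustrated with a toy, pure-Python forest of depth-1 ``stump'' trees with Gini splits. This sketch is purely schematic; the actual analysis used the \verb|sklearn| random forest with much deeper trees, as described below.

```python
import random

def gini(labels):
    """Gini impurity of a set of 0/1 labels."""
    if not labels:
        return 0.0
    p = sum(labels) / len(labels)
    return 2.0 * p * (1.0 - p)

def majority(labels):
    """Majority class of a set of 0/1 labels (0 if empty)."""
    if not labels:
        return 0
    return int(sum(labels) * 2 >= len(labels))

def train_stump(data):
    """Fit a depth-1 decision tree: the single-feature threshold split
    that minimizes the total Gini impurity of its two leaves."""
    best = None
    n_features = len(data[0][0])
    for f in range(n_features):
        for x, _ in data:
            thr = x[f]
            left = [y for xx, y in data if xx[f] <= thr]
            right = [y for xx, y in data if xx[f] > thr]
            score = gini(left) * len(left) + gini(right) * len(right)
            if best is None or score < best[0]:
                best = (score, f, thr, majority(left), majority(right))
    return best[1:]

def forest_fit(data, n_trees=25, seed=0):
    """Bagging: train each stump on a bootstrap resample of the data."""
    rng = random.Random(seed)
    return [train_stump([rng.choice(data) for _ in data])
            for _ in range(n_trees)]

def forest_proba(trees, x):
    """Fraction of trees voting class 1 -- the analogue of a reported
    class probability."""
    votes = sum(maj_l if x[f] <= thr else maj_r
                for f, thr, maj_l, maj_r in trees)
    return votes / len(trees)

# Toy demonstration: two classes separated in the first feature
data = ([([i / 10.0, 0.0], 0) for i in range(10)]
        + [([1.5 + i / 10.0, 0.0], 1) for i in range(10)])
trees = forest_fit(data)
p_lo = forest_proba(trees, [0.2, 0.0])   # near 0 (class 0 region)
p_hi = forest_proba(trees, [2.0, 0.0])   # near 1 (class 1 region)
```

Because each tree sees a different bootstrap resample, the vote fraction behaves like a classification probability rather than a hard label, which is exactly how the blazar probabilities reported here should be read.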
\subsection{Training and Test Samples} To train the RF classifier, we gathered a sample of 831 known sources, including 772 known blazars and 59 known pulsars. The sample was derived from the 3FGL catalog, which provided the gamma-ray properties, and the second Swift X-ray Point Source Catalog \citep[2SXPS,][]{Evans2020}, which provided the X-ray parameters. In addition, the X-ray properties of known pulsars were obtained from a literature search of various studies \citep[e.g.,][]{Marelli2012,SazParkinson2016, Wu2018, Zyuzin2018}. The parameters for each source included\footnote{In this work, $\log{x}$ always refers to the logarithm in base 10 of $x$.}: \begin{itemize} \item X-ray photon index $\Gamma_X$ \item Gamma-ray photon index $\Gamma_\gamma$ \item The logarithm of gamma-ray flux $\log{F_\gamma}$ \item The logarithm of X-ray to gamma-ray flux ratio $\log{F_X/F_\gamma}$ \item The significance of the curvature in the gamma-ray spectrum (henceforth simply \textit{curvature}) \item The gamma-ray variability index \end{itemize} All the X-ray parameters were determined through our analysis as explained earlier, while the gamma-ray parameters were extracted from the 3FGL catalog. The complete details of the obtained gamma-ray data and the methods are provided in \citet{Kaur2019}. Because there are many more known blazars than known pulsars in the training and test samples, we used the Synthetic Minority Over-sampling Technique \citep[SMOTE,][]{Chawla2002} to generate synthetic members of the underrepresented class (pulsars) with a k-nearest neighbors approach. Previous classification efforts have shown that seriously unbalanced training datasets can lead to trained RF classifiers that are biased against the underrepresented class \citep{Last2017}.
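The interpolation step at the heart of SMOTE can be sketched as follows; this is a schematic re-implementation for illustration (the analysis itself used an established SMOTE implementation, and the toy feature values below are arbitrary):

```python
import math
import random

def smote(minority, n_new, k=5, seed=42):
    """Generate n_new synthetic minority-class samples by interpolating
    each chosen sample toward one of its k nearest minority-class
    neighbors -- the core of SMOTE (Chawla et al. 2002)."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        x = rng.choice(minority)
        # k nearest minority neighbors of x (excluding x itself)
        neighbours = sorted((p for p in minority if p is not x),
                            key=lambda p: math.dist(x, p))[:k]
        nb = rng.choice(neighbours)
        gap = rng.random()                       # fraction along the line
        synthetic.append([xi + gap * (ni - xi) for xi, ni in zip(x, nb)])
    return synthetic

# Toy demonstration: four minority points at the corners of a unit square
minority = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
synthetic = smote(minority, n_new=10, k=2)
```

Because each synthetic point lies on the line segment between a real minority sample and one of its neighbors, the new points share the parameter-space distribution of the real ones rather than being simple duplicates.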
The result of the SMOTE expansion is a catalog of known blazars and pulsars, plus artificial pulsars generated with the same distribution in parameter space as the real pulsars, producing a catalog with a balanced number of 772 blazars and 772 pulsars. Expansion of the pulsar catalog via SMOTE is executed before the catalog is split into training and validation samples. To optimize RF parameters such as the number of individual trees and the maximum tree depth, we utilized \texttt{GridSearchCV} in \texttt{sklearn v.0.20.3}. We found that 1000 decision trees splitting to a maximum depth of 15 nodes with at least one source in each leaf were required to effectively train the classifier. The reported blazar probability for each source is the fraction of the 1000 trees in which the object was classified as a blazar. In this paper, we utilized a cross-validation method, \texttt{cross\_val\_predict} from \texttt{sklearn v.0.20.3}, by dividing our total sample into 10 folds and then using each fold (one at a time) as a test sample. For the validation step, an RF classifier is generated using a training subsample. Members of the corresponding test subsample are then classified as a pulsar or a blazar and the generated classification is compared to the actual class label. In this way, an accuracy score for the entire RF tree is generated. This process is repeated ten times, so ten random forest classifiers are trained and each is validated in turn; the overall accuracy of the RF classifier is the average of the validation accuracy of the ten folded iterations. The overall RF accuracy obtained in this way was 98.5\%. In \citet{Kaur2019}, the authors separated 100 sources from the complete set of blazars and pulsars as a test sample to calculate the accuracy of the classifier trained on the rest of the data set. This single test sample leads to an accuracy estimate based on only that one realization.
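For reference, the ten-fold procedure used in this work can be sketched with the \texttt{sklearn} cross-validation utilities as follows (synthetic stand-in data; each source is scored by a forest trained on the other nine folds):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict, cross_val_score

# Synthetic stand-in for the balanced blazar/pulsar catalog.
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0.0, 1.0, (200, 6)),   # mock blazars
               rng.normal(2.0, 1.0, (200, 6))])  # mock pulsars
y = np.array([0] * 200 + [1] * 200)

forest = RandomForestClassifier(n_estimators=200, max_depth=15,
                                min_samples_leaf=1, random_state=0)
# Out-of-fold class probabilities: each source is predicted by a
# classifier that never saw it during training.
proba = cross_val_predict(forest, X, y, cv=10, method="predict_proba")
blazar_prob = proba[:, 0]
# Overall accuracy = mean validation accuracy over the ten folds.
accuracy = cross_val_score(forest, X, y, cv=10).mean()
```

The forest size is reduced here purely to keep the sketch fast; the analysis in the text uses 1000 trees.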
In this way, the reported RF validation accuracy in that paper measures the same reliability as in this paper, but with a more restricted approach to selecting a test sample. \begin{deluxetable*}{cccccc} \tablecaption{RF feature importance} \label{tab:importan} \tablewidth{0pt} \tablehead{ \colhead{$\Gamma_X$} & \colhead{$\Gamma_\gamma$} & \colhead{Curvature} & \colhead{Variability index} & \colhead{$\log{F_\gamma}$} & \colhead{$\log{F_X/F_\gamma}$} } \startdata 0.033 & 0.097 & 0.41 & 0.13 & 0.08 & 0.25 \\ \enddata \end{deluxetable*} \subsection{Unassociated Sample} We conducted an initial investigation of our sample by combining gamma-ray properties from the 3FGL catalog for the 184 unassociated sources with the X-ray properties derived from the new spectral fits. A comparison of the photometric and spectroscopic properties of the unassociated sample with those of the known pulsars and blazars shows that the unassociated sources tend to have lower gamma-ray fluxes than both the known blazars and pulsars. The mean X-ray fluxes of the unassociated counterparts fall between the mean fluxes of the known blazars and known pulsars. The histograms in Figure \ref{fig:Pairs} show that the unassociated sources most readily overlap with the known blazar sample, suggesting that the majority of the unassociated sources should be blazars, consistent with the membership of known 3FGL sources. Interestingly, the histogram for X-ray photon index (the plot in the upper-left corner of Figure \ref{fig:Pairs}) shows two distinct peaks in the known 3FGL blazar distribution, with the unassociated source distribution overlapping primarily with the higher/softer peak. \begin{figure*} \centering \includegraphics[width=1.0\textwidth]{Seaborn.pdf} \caption{A full pairs plot of the known blazars (red), known pulsars (blue) and the unassociated sources (green). Histograms in the diagonal plots are smoothed and normalized to different scales for each class.
The six parameters are the X-ray photon index, the gamma-ray photon index, the gamma-ray curvature index, the logarithm of the variability index, the logarithm of the gamma-ray flux in $\rm{erg/s/cm^2}$, and the logarithm of the ratio of X-ray to gamma-ray flux.} \label{fig:Pairs} \end{figure*} Our model incorporating hydrogen-absorbed power-law spectra returned unusually high or low $\Gamma_X$ for seven sources, as described above and listed in Table \ref{tab:trouble}. Given that some of these excesses also coincided with catalogued stars, we view these sources as dubious and do not include these seven sources in the ML classification. \section{Results} \label{sec:Results} Our X-ray spectral fits and RF classification results for the entire 3FGL unassociated catalog are available at CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via \href{http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/AJ}{http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/AJ}. The vast majority of examined X-ray excesses obtained well-defined spectral fits, reported in Table \ref{tab:fitresults}. Most of the fits to the X-ray spectra have photon indices between $0$ and $4$, a similar range to the lists of known pulsars and blazars used to train the RF routine (as in Figure \ref{fig:Pairs}). This supports our first assumption that most of the unassociated sources are pulsars or blazars. These fits represent a large collection of X-ray parameters for likely counterparts to previously unassociated 3FGL gamma-ray sources. Five excesses had very few X-ray photons after summing the \textit{Swift}-XRT observations. With so few photons, the \verb|xspec| fitting routine could not return useful spectral fits. For these spectra, we assumed $\Gamma_{X} = 2$ to calculate the flux. The importances of the different parameters in the RF classifier indicate the features of the gamma- and X-ray spectra that are the strongest predictors of blazar or pulsar identification.
The two most important features are gamma-ray spectral curvature and $\log{F_X/F_\gamma}$, shown in Table \ref{tab:importan}. Figure \ref{fig:indexcheck} shows that full X-ray fitting with both flux and photon index as free parameters did not alter X-ray flux by more than an order of magnitude for the vast majority of spectra. Some excesses did see a change in X-ray flux by around half an order of magnitude; these spectra also showed the largest alterations in fitted photon index from $\Gamma_X \sim 2$. By this measure, fully fitting X-ray spectra instead of assuming $\Gamma_X = 2$ can correct reported X-ray flux by up to an order of magnitude and obtain photon index as a fitted parameter. Applying the optimized RF classifier to the 177 fitted unassociated sources and X-ray excesses (ignoring the seven troublesome spectra discussed above), we use cross-validated blazar probabilities to categorize the unassociated sources. In this way, we use each of the ten subfolds used to validate the RF accuracy and average the blazar probability of each source from each fold. We identified 5 likely pulsars ($P_{bzr} \le 10 \%$) and 126 likely blazars ($P_{bzr} \ge 90 \%$), with 46 sources remaining ambiguous. The results from this classification are reported in Table \ref{tab:MLresults}. Figure \ref{fig:OldNew} compares the blazar probability for the RF classification in this work to the same in \cite{Kaur2019} for the $161$ sources analyzed in both works, color-coding the points by the fully fitted X-ray photon index. While the validation accuracy of the new RF classifier is not significantly different from the approach in \cite{Kaur2019}, we have refined the blazar and pulsar catalogs and introduced new spectral information to all sources by fully fitting for photon index and X-ray flux. These alterations suggest that the new $P_{bzr}$ values are more reliable than previous versions, facilitating the direct comparison in Figure \ref{fig:OldNew}.
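The probability cuts used above reduce to a simple thresholding step, sketched below (thresholds as defined in the text; the function name is ours):

```python
import numpy as np

def categorize(p_bzr, lo=0.10, hi=0.90):
    """Label sources by cross-validated blazar probability P_bzr:
    likely pulsar for P_bzr <= 10%, likely blazar for P_bzr >= 90%,
    and ambiguous otherwise."""
    p_bzr = np.asarray(p_bzr, dtype=float)
    labels = np.full(p_bzr.shape, "ambiguous", dtype=object)
    labels[p_bzr <= lo] = "likely pulsar"
    labels[p_bzr >= hi] = "likely blazar"
    return labels
```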
In general, the addition of fully-fitted X-ray indices and fluxes to the ML training and test data sets did not alter the RF classifications for most of the 3FGL unassociated sources, and there is no pattern linking severe alterations in flux or photon index to previous estimates with drastically changed blazar probabilities. The most significant changes in classification were 13 sources previously classified as likely blazars and 3 previously classified as likely pulsars that were here labeled as ambiguous. Additionally, 7 ambiguous classifications in that previous work were here labeled as likely blazars. 138 of the 161 shared sources were sorted into the same category in both approaches. We did not see any systematic relation that could be attributed as arising from a one-to-one relationship between altered X-ray photon index and changes in $P_{bzr}$. While many sources were classified with a similar blazar probability in this work and in \citet{Kaur2019}, some were classified as blazars or pulsars with greater confidence with more comprehensive spectral fits. Figure \ref{fig:OldNew} shows that changes in blazar probability from \citet{Kaur2019} to this work occur independently of divergence from coarse estimated spectral parameters. However, there are three overarching trends diverging from a one-to-one correspondence in the blazar probabilities shown in Figure \ref{fig:OldNew}. Some likely blazars became ambiguous (or vice versa), and there is a general shift among previously more ambiguous sources towards higher blazar probabilities. That only ambiguous sources saw large systematic shifts towards higher blazar probabilities is reassuring evidence that the addition of more comprehensive X-ray spectral fits adds valuable information for discriminating pulsars and blazars. The locations of the X-ray counterparts in galactic coordinates are shown in Figure \ref{fig:Coords}.
While the counterparts classified as likely blazars and ambiguous are scattered roughly uniformly across the sky, the excesses with photon indices very different from those expected for pulsars or blazars are almost entirely restricted to the galactic plane, suggesting that the X-rays from these excesses may be of galactic origin. If all the unusual excesses originate in similar astronomical objects, a catalog of gamma- and X-ray parameters of such objects could be used to add an additional class to an RF scheme. As we do not know of a unified explanation for those excesses, and as there is no previous catalog of known similar objects, it is not feasible to include sources of this unusual character as a class in the RF classifier method. \begin{figure*} \centering \includegraphics[width=0.6\textwidth]{OldvNew2.pdf} \caption{Blazar probability in this work vs in \cite{Kaur2019} for excesses analyzed in both works. Dotted red and blue vertical and horizontal lines show $<10\%$ (likely pulsar) and $>90\%$ (likely blazar) categorization bounds respectively. Points are color-coded by fully fitted X-ray photon indices with the same scale as Figure \ref{fig:indexcheck}.} \label{fig:OldNew} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.9\textwidth]{GalacticLoc2.pdf} \caption{Galactic coordinates for the 184 X-ray counterparts examined in this work. Points include likely blazars, likely pulsars, and ambiguous counterparts, as well as the seven sources with unusual spectral fits. The galactic plane is approximately within the purple boundaries, and the galactic center is at $(l,b) = (0,0)$.} \label{fig:Coords} \end{figure*} \section{Discussion and Conclusions} \label{sec:DisCon} In this work, we conducted full X-ray spectral analysis of counterparts to 3FGL unassociated sources, obtaining fluxes and photon indices for most of the X-ray sources examined.
The vast majority of the X-ray sources linked to the 3FGL catalog unassociated sources were well fit by our model, and represent a significant survey of dim excesses in the X-ray sky. Comprehensive X-ray spectral fits and the exclusion of unsuitable spectra together increase confidence in the output of the ML classification. After training the RF classifier, the feature importances shown in Table \ref{tab:importan} indicate that $\log{F_X/F_\gamma}$ was heavily weighted compared to most other parameters in the ML process, indicating that the ratio is an important discriminator for discerning blazars from pulsars. With previous estimates for X-ray flux differing from fully-fit fluxes by a factor of two or more, full X-ray spectral analysis is an important contribution to ML classification of pulsars and blazars and to cataloging unassociated source X-ray parameters. Finally, we compared our classification results to those obtained in previous work and found that introducing fully fit X-ray parameters achieves the same RF classification accuracy ($98.5 \%$ in this work) as in \cite{Kaur2019} ($99\%$), where the X-ray flux was obtained by assuming $\Gamma_{X} = 2$. The new approach with full X-ray fitting essentially matches the validation accuracy of the fixed photon index approach in \cite{Kaur2019}, fulfilling our goal of matching the previous validation accuracy. 138 of the 161 sources examined in both investigations were classified similarly. Including full X-ray spectral parameters shifts some sources into or out of the ``likely blazar'' category and shifts previously ambiguous sources to higher blazar probability. Figure \ref{fig:OldNew} shows a general increase in blazar probability for previously ambiguous sources, while likely pulsars did not see an increase in blazar probability.
Given that many of the ambiguous sources are probably blazars, this trend suggests that the addition of X-ray spectral fits adds valuable information for discerning pulsars from blazars even if the changes are not dramatic enough to shift sources to 100\% blazar probability. In conducting full X-ray fitting, we discovered several clearly spurious X-ray sources. The elimination of these excesses from consideration is an important step in ensuring a reliable classification method, as those excesses would otherwise remain in consideration for classification as blazars or pulsars. By identifying possible stellar or instrumental origins for seven sources with X-ray photon indices $\Gamma_X<-1$ or $\Gamma_X>5$, we showed that X-ray fitting can also sift out excesses that should not be immediately classified into blazars or pulsars via ML, further increasing the reliability of our results. Besides a few sources identified as optical loading contaminants due to nearby bright stars, we selected seven spectra with X-ray photon indices significantly divergent from typical theoretical predictions of pulsars and blazars for further investigation. Because these sources are largely contained within the galactic plane, it is likely that they originate within the Milky Way. That six of the seven have a star within a few arcseconds suggests that some of these excesses may be linked to stellar phenomena and may be interesting targets for future investigations. Alternatively, the nearby stars in these seven cases may have been simple coincidences that are unrelated to the X-ray or gamma-ray source. In one of the seven cases, the stars may be contributing some optical loading in the \textit{Swift}-XRT CCD, but for this $m_V=9.2$ star, the optical loading contribution is expected to be a small fraction relative to the detected X-ray count rate. 
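The screening of unusual photon indices described above amounts to a simple cut on the fitted $\Gamma_X$ (bounds as in the text; the function name is ours):

```python
import numpy as np

def unusual_index(gamma_x, lo=-1.0, hi=5.0):
    """Flag X-ray photon indices outside the range typical of
    pulsars and blazars (Gamma_X < -1 or Gamma_X > 5)."""
    gamma_x = np.asarray(gamma_x, dtype=float)
    return (gamma_x < lo) | (gamma_x > hi)
```

Sources flagged in this way are set aside for individual inspection rather than passed to the RF classifier.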
Though the requirements for X-ray fits create more stringent observational requirements in a multiwavelength ML routine compared to using only gamma-ray observations, incorporating diverse observations increases the comprehensive capabilities of the ML routine. Assuming $\Gamma_{X} = 2$ to obtain the X-ray flux admitted into the ML analysis spectra that are now revealed to be poorly described by a blazar or pulsar model. These sources might not actually be pulsars or blazars, but under an assumed X-ray photon index they might have been incorrectly categorized. These unusual sources can be examined individually instead of analyzed automatically, allowing for more specialized future studies. Even more comprehensive classification could be achieved by including X-ray variability measurements or expanding analysis into the ultraviolet bands using the UVOT telescope onboard \textit{Swift} \citep{Roming2005}. Analysis of lower-energy photons together with X-ray and gamma-ray photons could also constrain the location of the synchrotron peak in each spectrum, greatly increasing the detail in the characterization of the source. It will be particularly interesting to apply a growing understanding of pulsar and blazar classification to the results of surveys using new and upcoming space telescopes such as eROSITA. Whether the characteristics of bright pulsars and blazars are similar to those of dimmer pulsars and blazars should become evident as wide-field surveys extend to dimmer magnitudes. Eventually, after the easier to find blazars and pulsars have been identified, it may become prudent to expand ML classification beyond a binary choice of pulsars and blazars. To that end, switching from an RF routine to a more detailed approach such as guided clustering would dramatically increase the flexibility of the automated system at the cost of more strenuous supervision requirements.
\section*{Acknowledgements} We are grateful to the reviewer for their insightful and detailed commentary, which was vital in improving and focusing this work. This research has made use of data and/or software provided by the High Energy Astrophysics Science Archive Research Center (HEASARC), which is a service of the Astrophysics Science Division at NASA/GSFC. The authors gratefully acknowledge the support of NASA grants 80NSSC17K0752 and 80NSSC18K1730. Michael C. Stroh is partially supported by the Heising-Simons Foundation under grant 2018-0911. \section*{Catalog} The machine-readable tables corresponding to Tables \ref{tab:fitresults} and \ref{tab:MLresults} are available at CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via \href{http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/AJ}{http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/AJ}. Additional thanks is due to the CDS and VizieR teams for facilitating this web service.
\section{Laminar flow} \label{sec:laminar} For other active turbulent drag reduction techniques the analytical solutions for the corresponding laminar flows induced by wall motion have proven useful for accurately estimating important averaged turbulent quantities, such as the wall spanwise shear \citep{choi-xu-sung-2002}, the power spent for the wall forcing \citep{ricco-quadrio-2008}, and the thickness of the generalized Stokes layer generated by the wall waves \citep{skote-2011}. The laminar solution has also been employed to determine a scaling parameter which relates uniquely to drag reduction under specified wall forcing conditions \citep{quadrio-ricco-2004,cimarelli-etal-2013}. Through the laminar solution of the flow induced by a steadily rotating infinite disc, RH13 obtained an estimate of the time-averaged power spent to move the discs, which showed very good agreement with the power spent computed via DNS. Inspired by previous works, the laminar flow above an infinite oscillating disc is therefore computed to calculate the power spent to activate the disc and to identify areas over the disc surface where the fluid performs work onto the discs, thus aiding the rotation. This is a form of the regenerative braking effect, studied by RH13 for steady disc rotation. These estimates are then compared with the turbulent quantities in \S\ref{sec:turbulent-pspent}. \subsection{Laminar flow over an infinite oscillating disc} \label{sec:lamsolver} The laminar oscillating-disc flow was studied for the first time by \citet{rosenblat-1959} (refer to figure \ref{gap-geom} for the flow geometry). 
The velocity components are \begin{align} \{u_r^*,u_\theta^*\}=\frac{2r^* W^*}{D^*} \left\{F'\left(\eta,\breve{t}\right), G\left(\eta,\breve{t}\right)\right\}, \qquad u_y^*=-\frac{4 W^*}{D^*}\sqrt{\frac{\nu^* T^*}{\pi}}F\left(\eta,\breve{t}\right), \label{velocity-relations} \end{align} where the prime denotes differentiation with respect to $\eta=y^*\sqrt{\pi/(\nu^* T^*)}$, the scaled wall-normal coordinate, $\breve{t}= 2\pi t^*/T^*$ is the scaled time, and $u_r^*$, $u_\theta^*$ and $u_y^*$ are the radial, azimuthal, and axial velocity components, respectively. The following boundary conditions are satisfied \begin{align*} y^*=0:&\qquad u^*_r=0,\quad u^*_\theta=(2r^*W^*/D^*)\cos\breve{t}, \quad u^*_y=0, \quad p^*=0\text{.}\\ y^*\rightarrow\infty:&\qquad u^*_r=0, \quad u^*_\theta=0. \end{align*} Expressions \eqref{velocity-relations} are substituted into the cylindrical Navier-Stokes equations to obtain the equations of motion for $F'$ and $G$ under the boundary layer approximation, \begin{align} \begin{split} \dot{F'}& = \frac{1}{2}F''' + \gamma(G^2+2FF''-F'^2)\text{,} \\ \dot{G}& = \frac{1}{2}G'' + 2\gamma(FG'-F'G), \end{split} \label{eq:rosenblat-equations} \end{align} with boundary conditions \begin{equation} \begin{array}{llll} \eta=0: & F=F'=0, & G=\cos \breve{t}, \\ \eta\rightarrow\infty: & F'=G=0, \label{fgh-bc-2} \end{array} \end{equation} where the dot denotes differentiation with respect to $\breve{t}$ and $\gamma=T^*W^*/(\pi D^*)$. The latter parameter represents the ratio between the oscillation period $T^*$ and the period of rotation $\pi D^*/W^*$ which would occur if the disc rotated steadily with tip velocity $W^*$. The value $\gamma=\pi$ is relevant because it denotes the special case of maximum disc tip displacement equal to the circumference of the disc, i.e. each point at the disc tip covers a distance equal to $\pi D^*$ during a half period of oscillation. 
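As a minimal numerical sketch, consider the $\gamma=0$ limit of \eqref{eq:rosenblat-equations}, in which the nonlinear terms drop and the azimuthal equation decouples to $\dot{G}=G''/2$; the first-order-in-time, second-order-in-$\eta$ marching below (with an exponential start-up ramp for the wall value) reproduces the classical Stokes-layer solution $G=e^{-\eta}\cos(\breve{t}-\eta)$. The grid sizes and domain truncation are illustrative choices.

```python
import numpy as np

def march(G, g0_values, d_eta, dt):
    """Explicit Euler (first order in time) for dG/dt = G''/2, with
    second-order central differences in eta and Dirichlet boundaries."""
    for g0 in g0_values:
        G[1:-1] += 0.5 * dt * (G[2:] - 2.0 * G[1:-1] + G[:-2]) / d_eta**2
        G[0], G[-1] = g0, 0.0
    return G

def rosenblat_gamma0(eta_max=10.0, n_eta=201, dt=1e-3, n_periods=20):
    eta = np.linspace(0.0, eta_max, n_eta)
    d_eta = eta[1] - eta[0]
    G = np.zeros(n_eta)
    # start-up: ramp the wall value as G(0,t) = 1 - exp(-t)
    t_ramp = np.arange(1, int(2.0 * np.pi / dt) + 1) * dt
    G = march(G, 1.0 - np.exp(-t_ramp), d_eta, dt)
    # oscillation: G(0,t) = cos(t), run several periods past the transient
    t_osc = np.arange(1, int(2.0 * np.pi * n_periods / dt) + 1) * dt
    G = march(G, np.cos(t_osc), d_eta, dt)
    return eta, G, t_osc[-1]
```

With these defaults the explicit scheme respects the diffusive stability bound $\Delta\breve{t}\le\Delta\eta^2$, and after the initial transient decays the computed profile matches the analytical Stokes layer.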
The system \eqref{eq:rosenblat-equations}-\eqref{fgh-bc-2} was discretized using a first-order finite difference scheme for $\breve{t}$ and a second-order central finite difference scheme for $\eta$. The equations were first solved in time by starting from null initial profiles. The boundary condition for $G$ was altered as $G\left(0,\breve{t}\right)=1-e^{-\breve{t}}$ until $G$ was sufficiently close to unity. The system was then integrated with the boundary condition $G\left(0,\breve{t}\right)=\cos\breve{t}$. Figure \ref{G-profile} (left) shows the wall-normal profiles of $F'$ and $G$ at different oscillation phases. \subsection{Laminar power spent} \label{sec:laminar-pspent} \begin{figure} \centering \includegraphics[width=\textwidth]{figs/figure4} \caption{Left: Wall-normal profiles of $F'$ and $G$ at different oscillation phases for $\gamma=1$ (thick lines) and $\gamma=0$ (thin lines). The latter is given by \eqref{first-order-utheta} and coincides with the classical Stokes layer solution. Right: Numerically computed values of $\mathcal{G}(\gamma)$ (solid lines) and asymptotic solutions, \eqref{eq:GGp-asymptotic} for $\gamma \ll 1$ (dashed line in main plot), and \eqref{eq:G-large-gamma} for $\gamma \gg 1$ (dashed line in inset).} \label{G-profile} \end{figure} The laminar power spent $\mathcal{P}_{sp,l}^*$ is calculated using \eqref{hinze-psp}, where only ${\bf u_d}$ is retained in the laminar case as there is no mean streamwise flow above the disc and the turbulent fluctuations are null (${\bf u_m}={\bf u_t}=0$). 
Substituting $u_d=u_\theta\cos\theta$ and $w_d=u_\theta\sin\theta$ into \eqref{hinze-psp}, using \eqref{velocity-relations} and averaging over $\theta$, $r$, and time leads to \begin{equation} \mathcal{P}_{sp,l}^*= \frac{\mathcal{G}(\gamma) W^{*\hspace{0.1mm}2}}{2}\sqrt{\frac{\pi\nu^{*\hspace{0.1mm}}}{T^{*\hspace{0.1mm}}}}\text{,} \label{pspent-lam} \end{equation} where \begin{equation} \mathcal{G}(\gamma) = \frac{1}{2\pi}\int^{2 \pi}_0 G\left(0,\breve{t}\right)G'\left(0,\breve{t}\right) \mathrm{d}\breve{t} \label{fancy-g} \end{equation} is shown in figure \ref{G-profile} (right). To express $\mathcal{P}_{sp,l}^*$ as percentage of the power spent to drive the fluid along the streamwise direction, \eqref{pspent-lam} is divided by \eqref{p-x} to obtain \begin{equation} \mathcal{P}_{sp,l}(\%)=\frac{50\mathcal{G}(\gamma)W^2 R_p^{3/2}}{U_b R_\tau^2}\sqrt{\frac{\pi}{T}}\text{.} \label{p-lam} \end{equation} \subsubsection{Asymptotic limit for $\gamma \ll 1$: the Stokes-layer regime} \label{sec:gamma-small} To obtain an analytical approximation to $\mathcal{G}$ for $\gamma \ll 1$, the expanded form of $G$ in powers of $\gamma$ can be used, \begin{equation} G_{\gamma \ll 1}(\eta,\breve{t},\gamma)=G_0(\eta,\breve{t})+\gamma^2G_2(\eta,\breve{t})+\mathcal{O}(\gamma^3)\text{,} \label{eq:G-expanded} \end{equation} where $G_0$ and $G_2$ are given in equations (17) and (45) of \citet{rosenblat-1959}. Upon differentiation of \eqref{eq:G-expanded} with respect to $\eta$, the asymptotic form of $\mathcal{G}(\gamma)$ is \begin{equation} \label{eq:GGp-asymptotic} \mathcal{G}_{\gamma \ll 1}(\gamma) = \frac{1}{2\pi} \int_0^{2\pi} G_0(0,\breve{t}) \left[G'_0(0,\breve{t})+\gamma^2G'_2(0,\breve{t})\right] \mathrm{d}\breve{t} =-\frac{1}{2}+\frac{\gamma^2}{160}\left(15\sqrt{2}-26\right) + \mathcal{O}(\gamma^3)\text{,} \end{equation} which is shown in figure \ref{G-profile} (right). The asymptotic solution predicts the numerical solution well for $\gamma<2$. 
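As a check of \eqref{fancy-g} and \eqref{eq:GGp-asymptotic}, note that at $\gamma=0$ the wall values reduce to those of the classical Stokes layer, $G(0,\breve{t})=\cos\breve{t}$ and $G'(0,\breve{t})=\sin\breve{t}-\cos\breve{t}$, so the period average can be evaluated numerically; the small-$\gamma$ polynomial is also encoded below (our sketch):

```python
import numpy as np

# gamma = 0: Stokes-layer wall values G(0,t) = cos t, G'(0,t) = sin t - cos t.
# The period average (1/2pi) * integral of G*G' equals the mean of the
# integrand over one period on a uniform grid.
t = np.linspace(0.0, 2.0 * np.pi, 4096, endpoint=False)
G_of_0 = float(np.mean(np.cos(t) * (np.sin(t) - np.cos(t))))  # expect -1/2

def G_small_gamma(gamma):
    """Small-gamma asymptotic expansion of the integral G(gamma)."""
    return -0.5 + gamma**2 * (15.0 * np.sqrt(2.0) - 26.0) / 160.0
```

The $\cos\breve{t}\sin\breve{t}$ term averages to zero over a period, leaving $-\overline{\cos^2\breve{t}}=-1/2$, in agreement with $\mathcal{G}(0)=-1/2$.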
In the limit $\gamma \ll 1$, \citet{rosenblat-1959} obtained a first-order solution \begin{equation} u_\theta^*= \frac{2 r^*W^*}{D^*}e^{-\sqrt{\pi/(\nu^* T^*)}y^*} \cos\left(\frac{2 \pi t^*}{T^*}-\sqrt{\frac{\pi}{\nu^* T^*}}y^*\right)\text{,} \label{first-order-utheta} \end{equation} which is in the same form as the classical Stokes solution \citep{batchelor-1967}. Substituting \eqref{first-order-utheta} into \eqref{hinze-psp}, the first-order approximation is found, $\mathcal{P}_{sp,l}^*=0.25W^*\sqrt{\pi\nu^*/T^*}$, which is expressed as percentage of \eqref{p-x} to obtain \begin{equation} \mathcal{P}_{sp,l,\gamma \ll 1}=\frac{-25W^{2}R_p^{3/2}}{U_b R_\tau^2}\sqrt{\frac{\pi}{T}}\text{.} \label{p-lam-gamma-small} \end{equation} This is also found directly from \eqref{p-lam} by setting $\mathcal{G}(0) =-0.5$. \subsubsection{Asymptotic limit for $\gamma \gg 1$: the quasi-steady regime} \label{sec:gamma-large} As suggested by \citet{benney-1964}, in the limit $\gamma \gg 1$ it is more appropriate to rescale the wall-normal coordinate by the Ekman layer thickness $\delta_e^*=\sqrt{\nu^* D^*/(2W^*)}$. The rescaled equations (2.19) and (2.20) of \citet{benney-1964} were then solved using the same numerical method described in \S\ref{sec:lamsolver}. The von K\'arm\'an equations describing the flow over a steadily rotating disc are recovered in the limit $\gamma \rightarrow \infty$. The asymptotic limit of $\mathcal{G}$ for $\gamma \gg 1$ is found by first rescaling $G'(0,\breve t)$ in \eqref{fancy-g} through $\delta_e^*$ and by noting that the time modulation of the disc motion enters the problem only parametrically, \begin{equation} \label{g-prime-asy} G'_{\gamma \gg 1}(0,\breve t) = \sqrt{2 \gamma} G_s \cos{\breve t}, \end{equation} where $G_s=-0.61592$ \citep{rogers-lance-1960}. 
By substituting \eqref{g-prime-asy} into \eqref{fancy-g} and by use of \eqref{fgh-bc-2}, one finds \begin{equation} \label{eq:G-large-gamma} \mathcal{G}_{\gamma \gg 1}(\gamma) = G_s \sqrt{\frac{\gamma}{2}} \text{.} \end{equation} As shown in figure \ref{G-profile} (right, inset), the asymptotic expression \eqref{eq:G-large-gamma} matches the numerical values well. By substituting \eqref{eq:G-large-gamma} into \eqref{p-lam}, the asymptotic form of the power spent is obtained \begin{equation} \mathcal{P}_{sp,l,\gamma \gg 1}=\frac{50 G_s W^{3/2} R_p^{3/2}}{\sqrt{2 D} U_b R_\tau^2}\text{.} \label{p-lam-gamma-large} \end{equation} By coincidence, the power spent when $\gamma=0$, i.e. \eqref{p-lam-gamma-small}, is half of the oscillating-wall case at the same $W^*$ and $T^*$ \citep{ricco-quadrio-2008}, and the power spent when $\gamma \gg 1$, i.e. \eqref{p-lam-gamma-large}, is half of the steady-rotation case at the same $W^*$ and $D^*$ (RH13). The oscillating-disc power spent is expected to be smaller than in these two cases, but for different reasons. The oscillating-wall case requires more power because the motion involves the entire wall surface, while the steady-rotation case consumes more power because the motion is uniform in time. \subsection{Laminar regenerative braking effect} \label{app-lampower} The laminar phase- and time-averaged power spent $\mathcal{W}_l$ to oscillate the discs beneath a uniform streamwise flow is computed by following RH13. As the purpose of this analysis is to obtain a simple estimate of the turbulent case, the streamwise shear flow is superimposed on the Rosenblat flow without considering their nonlinear interaction. A rigorous study of this flow would be the extension of the work by \citet{wang-1989} with oscillatory wall boundary conditions. 
Starting from \eqref{hinze-psp}, using \eqref{decomp}, and setting ${\bf u_t}=0$, one finds \begin{equation} \mathcal{W}_l(x,0,z,\breve{t}) = \frac{1}{R_p}\left[ u_d(x,0,z,\breve{t})\left( u_m'(0) + \left.\frac{\p u_d}{\p y}\right|_{y=0} \right) + w_d(x,0,z,\breve{t}) \left.\frac{\p w_d}{\p y}\right|_{y=0}\right]\text{.} \label{plam-space-1} \end{equation} Using \eqref{velocity-relations}, \eqref{plam-space-1} becomes \begin{equation*} \mathcal{W}_l(r,\breve{t}) = \frac{2rWG(0,\breve{t},\gamma)}{D R_p} \left( u_m'(0)\cos\theta + \frac{2Wr}{D}\sqrt{\frac{\pi R_p}{T}}G'(0,\breve{t},\gamma) \right). \label{plam-space-2} \end{equation*} By rearranging to obtain an inequality in $r$, the region where the streamwise flow exerts work on the disc (regenerative braking effect) is found, \begin{equation} r<-\frac{u'_m(0) D\cos\theta}{2WG'(0,\breve{t},\gamma)}\sqrt{\frac{T}{\pi R_p}}\text{.} \label{plam-space-3} \end{equation} In \S\ref{sec:turbulent-pspent}, the region of regenerative braking effect is computed for the turbulent case and compared with the laminar prediction \eqref{plam-space-3}. \section{Flow definition and numerical procedures} \label{sec:numerics} \subsection{Numerical solver, geometry and scaling} The simulated pressure-driven turbulent channel flow at constant mass flow rate is confined between two infinite parallel flat walls separated by a distance $L_y^*=2h^*$, where the symbol $^*$ henceforth denotes a dimensional quantity. The streamwise pressure gradient is indicated by $\Pi^*$. The direct numerical simulation (DNS) code solves the incompressible Navier-Stokes equations in the channel flow geometry using Fourier series expansions along the streamwise ($\tilde x^*$) and spanwise ($\tilde z^*$) directions, and Chebyshev polynomials along the wall-normal direction $y^*$. The time-stepping scheme is based on a third-order semi-implicit backward differentiation scheme (SBDF3), treating the nonlinear terms explicitly and the linear terms implicitly. 
The discretized equations are solved using the Kleiser-Schumann algorithm \citep{kleiser-schumann-1980}, outlined in \citet{canuto-etal-2007}. Dealiasing is performed at each time step by setting to zero the upper third of the Fourier coefficients along the streamwise and spanwise directions. The simulations were carried out using an OpenMP parallel implementation of the code on the N8 HPC Polaris cluster. The code was also used by RH13 and it is a developed version of the original open-source code available on the Internet \citep{gibson-2006}. Lengths are scaled with $h^*$ and velocities are scaled with $U_p^*$, the centreline velocity of the laminar Poiseuille flow at the same mass flow rate. The time is scaled by $h^*/U_p^*$ and the pressure by $\rho ^*U_p^{*^2}$, where $\rho^*$ is the density of the fluid. The Reynolds number is $R_p=U_p^* h^*/\nu^*=4200$, where $\nu^*$ is the kinematic viscosity of the fluid. The friction Reynolds number is $Re_\tau=u_\tau^* h^*/\nu^*=180$, where $u_\tau^*=\sqrt{\tau_w^*/\rho^*}$ is the friction velocity in the stationary wall case, and $\tau_w^*$ is the space- and time-averaged wall-shear stress. Quantities non-dimensionalized using outer units are not marked by any symbol. Unless otherwise stated, the superscript $+$ indicates scaling by native viscous units, a terminology first defined by \citet{trujillo-bogard-ball-1997}, based on $u_\tau^*$ of the case under investigation. \begin{figure} \centering \includegraphics[width=0.9\textwidth]{figs/figure1} \caption{Schematic of the flow domain showing the location and sense of rotation of the discs when $\widetilde{W}=W$.} \label{discs-channel} \end{figure} The channel walls are covered by flush-mounted rigid discs, as shown schematically in figure \ref{discs-channel}. The discs have diameter $D$ and oscillate in time as the disc tip velocity is \begin{equation} \label{eq:wall-bc} \widetilde{W}=W\cos \left(\frac{2\pi t}{T}\right). 
\end{equation} Neighbouring discs in the streamwise direction have opposing sense of rotation, whilst neighbouring discs in the spanwise direction have the same sense of rotation. A parametric study was undertaken on $D$, $W$ and $T$, with the parameter range selected to focus on the portion of the $D$-$W$ parameter space which RH13 found to lead to high drag reduction. The region of drag increase found by RH13 was not considered. For disc diameters $D=1.78,3.38$, a computational box size of dimensions $L_x=6.79\pi$ and $L_z=2.26\pi$ was utilized, where $L_x$ and $L_z$ are the box lengths along the streamwise and spanwise directions, respectively. For $D=5.07$, $L_x=6.8\pi$ and $L_z=3.4\pi$, and for $D=6.76$, $L_x=9.05\pi$ and $L_z=2.26\pi$. The grid sizes were $\Delta x^+=10$, $\Delta z^+=5$ in all cases, and the time step was within the range $0.008\leq \Delta t\leq0.08$ (scaled in reference outer units). The initial transient period during which the flow adjusts to the new oscillating-disc regime was discarded following the procedure outlined in \citet{quadrio-ricco-2004}. Flow fields were saved over an integer number of periods at intervals of $T/8$. After the transient was discarded, the total integration time was $t^+=6000$ for $T^+=100$, $t^+=7500$ for $T^+=250, 500$, $t^+=15000$ for $T^+=1000$ and $t^+=30000$. \subsection{Model of disc annular gap} To simulate the disc flow as realistically as possible, a thin annular region of width $c$ was included around each disc, as shown in figure \ref{gap-geom}. As explained in RH13, there are two reasons for this choice. First, the clearance flow between each disc and the stationary portion of the wall is simulated to mimic as closely as possible an experimental disc-flow set-up, where such a gap would inevitably be present. Secondly, the velocity profile between the disc tip and stationary wall does not present discontinuities.
This strongly suppresses the Gibbs-type artificial oscillations that would occur if the velocity were discontinuous. Ideally, the gap flow would be more realistically simulated by treating the turbulent channel flow and gap flow as coupled systems, but this lies outside the scope of the present study. As a first approximation, the gap velocity profile is assumed to be symmetric about the disc axis and to change linearly from a maximum velocity at the disc tip to zero at the outer edge of the gap. The tangential velocity $u_\theta$ in this region is a function only of $r$, the radial displacement from the centre of the disc, and time, $t$. The disc velocity profile is \begin{equation*} u_\theta(r,t) = \left\{ \begin{array}{l l} 2Wr\cos(2\pi t/T)/D, & \quad r \leq r_1\text{,} \\ W(c-r+D/2) \cos(2\pi t/T)/c, & \quad r_1 \leq r \leq r_2\text{,}\\ \end{array} \right. \end{equation*} where $r_1=D/2$ and $r_2=D/2 +c$. As a more advanced approximation, the clearance flow is modelled as a thin layer of fluid confined between concentric cylinders. Similarly to the laminar flow between moving flat plates, the flow contained within this annular gap is characterized by the Womersley number, $N_w=c^*\sqrt{2\pi/(\nu^*T^*)}$ \citep{pozrikidis-2009}. When $N_w\ll1$, the linear velocity profile accurately describes the flow. However, for $N_w=\mathcal{O}(1)$ the oscillating flow surrounding each disc is confined to a boundary layer which is attached to the oscillating disc and is much thinner than $c$. The bulk of the annular gap is quasi-stationary. In our simulations the minimum $N_w=0.51$ occurs for the case with the thinnest gap and the largest oscillation period, i.e. for $D=1.78$, $T=130$. The maximum $N_w=6.42$ occurs for $D=7.1$, $T=13$. It is therefore sensible to model the gap flow via this oscillating-layer solution, as $N_w$ attains finite values.
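As a quick check, the Womersley numbers quoted above can be recomputed: in outer units (lengths scaled by $h^*$, times by $h^*/U_p^*$) the definition reduces to $N_w=c\sqrt{2\pi R_p/T}$. A minimal sketch follows; note that the quoted extreme values appear to correspond to a gap width of $c=0.02D$, one of the tested gap sizes.

```python
import math

R_P = 4200.0  # Reynolds number U_p* h* / nu* of the simulations

def womersley(D, T, c_over_D=0.02):
    """Womersley number N_w = c*sqrt(2*pi/(nu* T*)) of the annular gap,
    written in outer units as N_w = c*sqrt(2*pi*R_p/T)."""
    return c_over_D * D * math.sqrt(2.0 * math.pi * R_P / T)
```

With this choice of gap width, `womersley(1.78, 130.0)` returns approximately 0.51 and `womersley(7.1, 13.0)` approximately 6.4, matching the minimum and maximum values given in the text.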
Following the analysis of \citet{carmi-tustaniwskyj-1981}, the $u_\theta(r,t)$ velocity profile in the gap is described by the azimuthal momentum equation, \begin{figure} \centering \includegraphics[width=0.6\textwidth]{figs/figure2} \caption{Schematic of disc and annular gap flow.} \label{gap-geom} \end{figure} \begin{equation} \frac{\p u_\theta}{\p t} = \frac{1}{R_p}\left(\frac{\p^2u_\theta}{\p r^2} + \frac{1}{r}\frac{\p u_\theta}{\p r} - \frac{u_\theta}{r^2}\right)\text{.} \label{gap-NS-inner} \end{equation} Assuming a solution to \eqref{gap-NS-inner} of the form $u_\theta=\mathbb{R}\left[ \mathring{u}_\theta(r)e^{i2\pi \hat{t}/T} \right]$, where $\mathbb{R}$ denotes the real part and $\hat{t}$ is the rescaled time, $\hat{t}=t/R_p$, the following ordinary differential equation of the Bessel type is obtained \begin{equation} \mathring{u}''_\theta + \frac{\mathring{u}'_\theta}{r} - \left(\frac{2\pi i}{T}+\frac{1}{r^2} \right)\mathring{u}_\theta=0, \label{gap-NS-bess} \end{equation} where the prime denotes differentiation with respect to $r$. Equation \eqref{gap-NS-bess} is subject to $\mathring{u}_\theta(r_1)= W$, $\mathring{u}_\theta(r_2)= 0$. The velocity in the annular gap is \begin{equation} u_\theta(r,\hat{t}) = W \cdot\mathbb{R}\left[ \frac{\mathcal{K}(\xi r_2)\mathcal{I}(\xi r)-\mathcal{I}(\xi r_2)\mathcal{K}(\xi r)}{\mathcal{I}(\xi r_1)\mathcal{K}(\xi r_2)-\mathcal{I}(\xi r_2)\mathcal{K}(\xi r_1)} e^{i2\pi \hat{t}/T} \right], \label{utheta-gap} \end{equation} where $\mathcal{I}(\cdot)$ and $\mathcal{K}(\cdot)$ are the first-order modified Bessel functions of the first and second kind \citep{abramowitz-stegun-1964} and $\xi=\sqrt{i2\pi /T}$. Velocity profiles are shown in figure \ref{fig:gap-profile-oscill}. The Bessel layer was included in the code by reading in a map of the wall complex velocity at $t=0$. To advance in time, the components within this map were multiplied by $e^{2\pi i {\hat t}/T}$ and the real parts were extracted.
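The analytical profile \eqref{utheta-gap} can be evaluated directly with standard special-function libraries. A sketch assuming numpy/scipy follows, with illustrative default parameters taken from one of the simulated cases; the boundary conditions $\mathring{u}_\theta(r_1)=W$ and $\mathring{u}_\theta(r_2)=0$ serve as a built-in check.

```python
import numpy as np
from scipy.special import iv, kv   # modified Bessel functions I_nu, K_nu

def gap_velocity(r, t_hat, W=0.51, T=13.0, D=7.1, c_over_D=0.05):
    """Azimuthal velocity u_theta(r, t_hat) in the annular gap,
    eq. (utheta-gap): a first-order modified-Bessel oscillating layer
    attached to the disc tip, vanishing at the outer gap edge."""
    r1, r2 = D / 2.0, D / 2.0 + c_over_D * D
    xi = np.sqrt(2j * np.pi / T)
    num = kv(1, xi * r2) * iv(1, xi * r) - iv(1, xi * r2) * kv(1, xi * r)
    den = iv(1, xi * r1) * kv(1, xi * r2) - iv(1, xi * r2) * kv(1, xi * r1)
    return W * np.real(num / den * np.exp(2j * np.pi * t_hat / T))
```

At $\hat{t}=0$ the function returns the disc tip velocity $W$ at $r=r_1$ and zero at $r=r_2$, as required by \eqref{gap-NS-bess}.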
As the boundary conditions are implemented in spectral space, it was necessary to Fourier transform the time-updated map of the velocity components at each time step, before passing the Fourier components as boundary conditions. The differences between the values of drag reduction and power spent against the viscous forces computed by use of the two annular-gap models for $c=0,0.02D$, and $0.05D$ were within the uncertainty range estimated via numerical resolution checks based on variation of the mesh sizes, time step advancement, and size of the computational box (refer to RH13 for further details on the numerical resolution tests). For this reason, and because of the higher computational cost incurred by the Bessel profile due to the additional spectral transformations, the linear velocity profile model was used. In order to choose the appropriate gap size for the simulations, the dimensional gap values were examined for typical experimental scenarios, presented in table 6 of RH13 for the steady disc flow case. The largest tested gap size, $c=0.05D$, was implemented as it corresponds to a value that would be achievable in the laboratory conditions detailed in this table. \begin{figure} \centering \includegraphics[width=\textwidth]{figs/figure3} \caption{Velocity profiles within the annular gap over a half period of the oscillation, computed through \eqref{utheta-gap}. Left: $D=7.1$, $W=0.51$, $T=130$, $N_w=2.03$. Right: $D=7.1$, $W=0.51$, $T=13$, $N_w=6.42$.} \label{fig:gap-profile-oscill} \end{figure} \subsection{Flow decomposition} \label{sec:disc-decomp} The averaging operators used to decompose the flow are defined in the following.
The space- and time-ensemble average is defined as \begin{equation} \label{ensemble-average} \overline{f} (x,y,z,\tau) =\frac{1}{N_x N_z N} \sum_{n_x=0}^{N_x-1} \sum_{n_z=0}^{N_z-1} \sum_{n_t=0}^{N-1} f(\tilde{x}+2n_xD,y,\tilde{z}+n_zD,n_t T+\tau), \end{equation} where $2N_x$ and $N_z$ are the number of discs within the computational domain along $\tilde{x}$ and $\tilde{z}$, respectively, $\tau$ is the window time of the oscillation, and $N$ is the number of oscillation periods. The time average and the spatial average along the homogeneous directions are defined respectively as \begin{equation} \langle f \rangle (x,y,z)=\frac{1}{T}\int^{T}_0 \overline{f} (x,y,z,\tau) \mathrm{d}\tau, \quad \hat{f}(y)=\frac{1}{L_xL_z}\int_0^{L_x}\int_0^{L_z} \langle f \rangle (x,y,z) \mathrm{d}z\mathrm{d}x\text{.} \label{time-space-average} \end{equation} A global variable is defined as \begin{equation*} \left[f\right]_{g}=\int_0^{1}\hat{f}(y)\mathrm{d}y\text{.} \end{equation*} The size of all statistical samples is doubled by averaging over the two halves of the channel, taking into account the existing symmetries. The channel flow field is expressed by the sum \begin{equation} {\bf u}(x,y,z,t)={\bf u_m}(y)+{\bf u_d}(x,y,z,\tau)+{\bf u_t}(x,y,z,t), \label{decomp} \end{equation} where ${\bf u_m}(y)=\{u_m,0,0\}=\hat{{\bf u}}$ is the mean flow, ${\bf u_d}(x,y,z,\tau)=\{u_d,v_d,w_d\}=\overline{{\bf u}} - {\bf u_m}$ is the disc flow, and ${\bf u_t}$ is the fluctuating turbulent component. \subsection{Performance quantities} \label{sec:perfquants} This section introduces the main quantities used to describe the oscillating-disc flow, i.e. the turbulent drag reduction, the power spent to activate the discs against the viscous resistance of the fluid, and the net power saved, which is their algebraic sum. 
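The triple decomposition \eqref{decomp} and the averaging operators defined above can be illustrated on synthetic data. A minimal single-point sketch follows, with all signal parameters illustrative: the ensemble (phase) average over many periods recovers the phase-locked disc component, and the residual plays the role of ${\bf u_t}$. The spatial averaging over disc-periodic shifts in \eqref{ensemble-average} is omitted here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic signal at a single spatial point: mean + phase-locked
# 'disc' component + random fluctuations standing in for turbulence.
T, n_phase, n_periods = 130.0, 64, 400
U_M, A = 0.8, 0.2                      # illustrative mean and amplitude
tau = np.linspace(0.0, T, n_phase, endpoint=False)
u = (U_M + A * np.cos(2.0 * np.pi * tau / T)
     + 0.05 * rng.standard_normal((n_periods, n_phase)))

u_bar = u.mean(axis=0)                 # ensemble (phase) average over periods
u_m = u_bar.mean()                     # time average   -> mean component
u_d = u_bar - u_m                      # phase-locked   -> disc component
u_t = u - u_bar                        # residual       -> turbulent component
```

By construction the recovered mean is close to 0.8, the disc component peaks near the imposed amplitude 0.2, and the turbulent residual has zero ensemble average.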
\subsubsection{Turbulent drag reduction} The skin-friction coefficient $C_f$ is first defined as $C_f=2\tau_w^*/\left(\rho^*U_b^{* 2}\right)$, where $U_b^*=[u^*]_{g}/h^*$ is the bulk velocity. The latter is constant because the simulations are performed under conditions of constant mass flow rate. The drag reduction $\mathcal{R}$ is defined as the percentage change of the skin-friction coefficient with respect to the stationary wall value \citep{quadrio-ricco-2004}: \begin{equation} \mathcal{R}(\%)=100 \frac{C_{f,s}-C_f}{C_{f,s}}\text{,} \label{DR-1} \end{equation} where the subscript $s$ refers to the stationary wall case. Using $\tau_w^*=\mu^* u_m^{* \hspace{0.1mm} \prime}(0)$, where the prime denotes differentiation with respect to $y$, \eqref{DR-1} becomes $\mathcal{R}(\%)=100\cdot\left( 1-u_m^{\prime}(0)/u^{\prime}_{m,s}(0) \right)$. \subsubsection{Power spent} \label{sec:pspent} As the oscillating disc flow is an active drag reduction technique, power is supplied to the system to move the discs against the viscous resistance of the fluid. To calculate the power spent, term III of the instantaneous energy equation (1-108) in \citet{hinze-1975} is first considered. Its volume-average is the work done by the viscous stresses per unit time, \begin{equation} \mathcal{P}^*_{sp,t}=\frac{\nu^*}{L_x^*L_y^*L_z^*}\int^{L_x^*}_0\int_0^{L_y^*}\int^{L_z^*}_0\frac{\p}{\p x_i^*}\left[u_j^*\left(\frac{\p u_i^*}{\p x_j^*}+\frac{\p u_j^*}{\p x_i^*}\right)\right]\mathrm{d}\tilde{z}^*\mathrm{d}y^*\mathrm{d}\tilde{x}^*, \label{hinze-psp} \end{equation} where $i,j$ are the indices indicating the spatial coordinates $\tilde{x}$, $y$, $\tilde{z}$ and the corresponding velocity components (Einstein summation over repeated indices is used). By substituting \eqref{decomp} into \eqref{hinze-psp} and by use of \eqref{ensemble-average} and \eqref{time-space-average}, one finds \begin{equation} \mathcal{P}^*_{sp,t} = \frac{\nu^*}{h^*}\left(\left.
\widehat{u^*_d\frac{\p u^*_d}{\p y^*}}\right|_{y^*=0} + \left.\widehat{w^*_d\frac{\p w^*_d}{\p y^*}}\right|_{y^*=0}\right)\text{.} \label{p-osc} \end{equation} The power spent \eqref{p-osc} is expressed as percentage of the power employed to drive the fluid in the streamwise direction, $\mathcal{P}^*_x$. By volume-, ensemble- and time-averaging the first term on the right-hand side of (1-108) in \cite{hinze-1975}, one obtains \begin{equation} \mathcal{P}^*_x=\frac{U^*_b \Pi^*}{\rho^*}. \label{p-x} \end{equation} By dividing \eqref{p-osc} by \eqref{p-x}, the percentage power employed to oscillate the discs with respect to the power spent to drive the fluid along the streamwise direction is obtained, \begin{equation} \mathcal{P}_{sp,t}(\%)= -\frac{100R_p}{Re^2_\tau U_b}\left(\left.\widehat{u_d\frac{\p u_d}{\p y}}\hspace{-0.0mm}\right|_{y=0} + \left.\widehat{w_d\frac{\p w_d}{\p y}}\hspace{-0.0mm}\right|_{y=0}\right). \label{pspent} \end{equation} \subsubsection{Net power saved} \label{sec:pnet} The net power saved, $\mathcal{P}_{net}$, the difference between the power saved due to the disc forcing (which coincides with $\mathcal{R}$ for constant mass flow rate conditions) and the power spent $\mathcal{P}_{sp,t}$, is defined as \begin{equation} \mathcal{P}_{net}(\%)=\mathcal{R}(\%)-\mathcal{P}_{sp,t}(\%). \label{eq-pnet} \end{equation} \section*{Acknowledgements} We would like to thank the Department of Mechanical Engineering at the University of Sheffield for funding this research. This work would not have been possible without the use of the computing facilities of N8 HPC, funded by the N8 consortium and EPSRC (Grant EP/K000225/1). The Centre is coordinated by the Universities of Leeds and Manchester. We also acknowledge the help of Dr Chris Davies from the University of Cardiff and Mr Harry Day from the University of Sheffield on the numerical computation of the laminar oscillating-disc flow.
We are also indebted to Professors Shuisheng He, Ning Qin, and Yang Zhang, Dr Bryn Jones, and Misses Elena Marensi and Claudia Alvarenga at the University of Sheffield, Dr Ati Sharma at the University of Southampton, and Dr Hasegawa at the University of Tokyo for providing insightful comments on a preliminary version of the manuscript. Part of this work was presented at the 66$^{\mbox{th}}$ Annual Meeting of the APS Division of Fluid Dynamics, Pittsburgh, Pennsylvania, in November 2013. \section{Outlook for the future} \label{sec:outlook} In line with the analysis by RH13 for the steady disc-flow technique, it is instructive to render the scaled disc forcing parameters dimensional to guide laboratory experiments and to estimate the characteristic length and time scales of the wall forcing for flows of technological relevance. Table \ref{tab:outlook} displays estimated data for three flows of industrial interest and two flows of experimental interest with $D=6.76$, $W=0.39$, and $T=130$, which lead to $\mathcal{R}=16\%$ and $\mathcal{P}_{net}=5.5\%$. This table may be compared with the analogous table 6 in RH13 for the steady rotation case, although it should be noted that $f^*$ indicates the oscillation frequency in the present case ($f^*=1/T^*$) and the rotational frequency in RH13's case ($f^*=\omega^*/2\pi$, where $\omega^*$ is the angular velocity). Experimental realisation of the disc-flow technique is possible with $D^*=4-8\hspace{1mm}\mbox{cm}$, $W^*=0.2\hspace{1mm}\mbox{m/s}$ in a water channel and $4.6\hspace{1mm}\mbox{m/s}$ in a wind tunnel. The frequencies are $f^*=0.37\hspace{1mm}\mbox{Hz}$ and $16\hspace{1mm}\mbox{Hz}$, respectively. The dimensional parameters in flight are $D^*=5.8\hspace{1mm}\mbox{mm}$, $W^*=70.7\hspace{1mm}\mbox{m/s}$, and $f^*=1752\hspace{1mm}\mbox{Hz}$.
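The conversion behind table \ref{tab:outlook} can be sketched as follows, assuming that the forcing is matched in the reference viscous units of the simulations; with the rounded tabulated inputs for the flight case, the function below reproduces the quoted $D^*$, $W^*$ and $f^*$ to within a few per cent.

```python
R_P, RE_TAU_REF = 4200.0, 180.0        # reference simulation parameters

def dimensional_forcing(D, W, T, re_tau, nu, h):
    """Dimensional disc parameters for a target flow with friction
    Reynolds number re_tau, kinematic viscosity nu (m^2/s) and
    boundary-layer thickness / channel half-height h (m), obtained by
    matching the viscous-scaled forcing of the simulations, with
    (D, W, T) given in outer units."""
    u_tau = re_tau * nu / h                               # friction velocity
    D_star = (D * RE_TAU_REF) * nu / u_tau                # from D+
    W_star = (W * R_P / RE_TAU_REF) * u_tau               # from W+
    T_star = (T * RE_TAU_REF**2 / R_P) * nu / u_tau**2    # from T+
    return D_star, W_star, T_star, 1.0 / T_star           # f* = 1/T*
```

For the flight case ($Re_\tau=4970$, $\nu^*=35.3\cdot10^{-6}$ m$^2$/s, $h^*=22$ mm) this gives roughly $D^*\approx5.4$ mm, $W^*\approx73$ m/s and $f^*\approx1800$ Hz, close to the tabulated values.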
Commercially available electromagnetic motors ($D^*=2\hspace{1mm}\mbox{mm}$, $f^*=\mathcal{O}(10^{3})\hspace{1mm}\mbox{Hz}$), adapted for oscillatory motion, could deliver these time and length scales of forcing \citep{chenliu-etal-2010}. The optimal frequency in flight is approximately half of the optimal one for steady rotation: $f^*=1752\hspace{1mm}\mbox{Hz}$ for the oscillating discs compared to $f^*=3718\hspace{1mm}\mbox{Hz}$ for the steadily rotating discs. Figure~\ref{scales} shows characteristic time and length scales of the oscillating-disc technique and of other drag reduction methods. The typical length scale of the oscillating-disc technique is larger than that of the steadily rotating discs and the standing wave forcing, whilst being two orders of magnitude greater than both riblets and the feedback control systems studied by \citet{yoshino-suzuki-kasagi-2008}. The typical time scale of the oscillating disc flow is one order of magnitude larger than that of the oscillating wall forcing. It is also worth pointing out that these are optimal values for the tested parameter range and that our results in \S\ref{sec:turbulent-dependence} hint at the possibility of obtaining comparable drag-reduction values for even larger oscillation periods and diameters, which are denoted by the dashed lines in figure \ref{scales}. The notable limitation of our analysis is the low Reynolds number of the simulations. It is therefore paramount to investigate the disc-flow properties at higher Reynolds number to assess whether and how the maximum drag reduction values and the optimal forcing conditions vary. We close our discussion by mentioning another advantage of the oscillating-disc flow when compared to the steady-disc flow by RH13. As shown in figure \ref{D-hr-outer} (d), it is possible to achieve $\mathcal{R}=13\%$ with $\gamma=\pi/8$, $T=12$, $W=0.51$, i.e. the disc tip undertakes a maximum displacement of only $1/8$ of the disc circumference.
Therefore, for this case the disc-flow technique could be realized in a laboratory by use of a thin elastic seal between the disc and the stationary wall. This design would eliminate any clearance around the discs, which would not be possible for the case of steady rotation. \renewcommand{\arraystretch}{1.2} \begin{table} \centering \begin{tabular}{l||c|c|c||c|c} Parameter & Flight (BL) & Ship (BL) & Train (BL) & WT (BL) & WC (CF) \\ \hline \hline $U^*\hspace{2mm}\mbox{(m/s)}$ & $225$ & $10$ & $83$ & $11.6$ & $0.4$ \\ $\nu^*\cdot10^6\hspace{2mm}\mbox{(m$^{2}$/s)}$ & $35.3$ & $1.5$ & $15.7$ & $15.7$ & $1.1$ \\ $x^*\hspace{2mm}\mbox{(m)}$ & $1.5$ & $1.5$ & $1.8$ & $1.0$ & - \\ $h^*\hspace{2mm}\mbox{(mm)}$ & $22$ & $22$ & $27$ & $25$ & $10$ \\ $u_\tau^*\hspace{2mm}\mbox{(m/s)}$ & $7.9$ & $0.4$ & $2.9$ & $0.5$ & $0.02$ \\ $Re_\tau$ & $4970$ & $4970$ & $4970$ & $800$ & $180$ \\ $C_{f}\cdot10^{3}$ & $2.4$ & $2.4$ & $2.4$ & $3.8$ & $8.1$ \\ \hline $D^*\hspace{2mm}\mbox{(mm)}$ & $5.7$ & $5.6$ & $6.9$ & $39.6$ & $70.9$ \\ $W^*\hspace{2mm}\mbox{(m/s)}$ & $70.7$ & $3.1$ & $26.1$ & $4.6$ & $0.2$ \\ $T^*\hspace{2mm}\mbox{(ms)}$ & $0.6$ & $12.5$ & $1.9$ & $61$ & $2700$ \\ $f^*\hspace{2mm}\mbox{(Hz)}$ & $1752$ & $80$ & $536$ & $16$ & $0.4$ \end{tabular} \caption{Dimensional quantities for the optimum $\mathcal{P}_{net}$ case for three flows of industrial and two of experimental interest ($D=6.76$, $W=0.39$ and $T=130$). In the headings (BL) indicates a turbulent boundary layer with no pressure gradient, and (CF) indicates a pressure-driven channel flow. WT and WC stand for wind tunnel and water channel respectively. For headings marked BL, $U^*$ represents the free-stream mean velocity, $x^*$ is the downstream location and $h^*$ the boundary layer thickness; whilst for the CF case $U^*$ represents the bulk velocity and $h^*$ the channel half-height. 
The relations used are $h^*=0.37x^*(x^*U^*/\nu^*)^{-0.2}$ and $C_{f}=0.37\left[ \log_{10}(x^*U^*/\nu^*) \right]^{-2.584}$ for BL, and $C_{f}=0.0336Re_\tau^{-0.273}$ for CF \citep{pope-2000}.} \label{tab:outlook} \end{table} \begin{figure} \centering \includegraphics[width=0.8\textwidth]{figs/figure16} \caption{Characteristic optimal time and length scales, $\mathcal{T}^{+}$, $\mathcal{L}^{+}$, for a range of drag reduction methods are shown for comparison with the oscillating disc technique. From left to right the time scales are given as follows: time between successive flow field measurements \citep{kang-choi-2000}, period of transverse travelling wave forcing \citep{du-symeonidis-karniadakis-2002}, period of spanwise wall oscillations \citep{quadrio-ricco-2004}, period of rotation of steady disc forcing \citep{ricco-hahn-2013}, and period of disc oscillation. From left to right the length scales are given as follows: maximum displacement of wall-normal wall motions \citep{kang-choi-2000}, spacing of sensors for feedback control of wall deformation \citep{yoshino-suzuki-kasagi-2008}, riblet spacing \citep{walsh-1990}, maximum displacement of temporally oscillating wall \citep{quadrio-ricco-2004}, wavelength of streamwise-sinusoidal wall transpiration \citep{quadrio-floryan-luchini-2007}, wavelength of standing wave forcing \citep{viotti-quadrio-luchini-2009}, wavelength of transverse travelling wave forcing \citep{du-symeonidis-karniadakis-2002}, diameter of steady discs \citep{ricco-hahn-2013}, and diameter of oscillating discs.} \label{scales} \end{figure} \nocite{kang-choi-2000,du-symeonidis-karniadakis-2002,quadrio-floryan-luchini-2007,walsh-1990} \section{Turbulent flow} \label{sec:turbulent} The turbulent flow results are presented in this section.
\S\ref{sec:turbulent-time}, \S\ref{sec:turbulent-dependence}, \S\ref{sec:turbulent-FIK} and \S\ref{sec:turbulent-scaling-s} focus on the drag reduction, \S\ref{sec:turbulent-discflow} presents disc-flow visualisations and statistics, and \S\ref{sec:turbulent-pspent} describes the power spent to move the discs and the comparison with the laminar prediction, studied in \S\ref{sec:laminar-pspent}. \subsection{Time evolution} \label{sec:turbulent-time} The temporal evolution of the space-averaged wall-shear stress is displayed in figure~\ref{D640-hist} (left). The transient time occurring between the start-up of the disc forcing and the fully established disc-altered regime increases with $\mathcal{R}$. This agrees with the findings for the oscillating wall and for RH13's steady discs, but the duration of the transient for the discs is shorter than for the oscillating wall case. The time modulation of the wall-shear stress is notable for the high $\mathcal{R}$ cases, with the amplitude of the signal increasing with $T$. The significant time modulation and the shorter transient compared with the oscillating wall technique could be due to the discs forcing the wall turbulence in the streamwise direction. The streamwise wall-shear stress is therefore affected directly, whereas in the oscillating-wall case the streamwise shear flow is modified indirectly as the motion is along the spanwise direction only. The space- and phase-averaged wall-shear stress modulation, shown by the dashed line in figure \ref{D640-hist} (right), has a period equal to half that of the wall velocity. This is expected because of the symmetry of the unsteady forcing with respect to the streamwise direction. The wall-shear stress reaches its minimum value approximately $T/8$ after the disc velocity is maximum, i.e. at $\phi=5\pi/8, 13\pi/8$. The wall-shear stress peaks approximately $T/8$ after the disc velocity is null, i.e. at $\phi=\pi/8, 9\pi/8$.
\begin{figure} \centering \includegraphics[width=\textwidth]{figs/figure5} \caption{Left: Space-averaged streamwise wall-shear stress vs. time for cases at $D=3.38$. The disc forcing is initiated at $t^+=770$. Only a fraction of the total integration time is shown. The space-averaging operator here does not include time averaging. Right: Ensemble- and space-averaged streamwise wall-shear stress vs. $\tau^+$ for $D^+=554$, $W^+=9.9$, $T^+=833$ (dashed line). The disc velocity is shown by the solid line. The phase $\phi$ is given in the figure.} \label{D640-hist} \end{figure} \subsection{Dependence of drag reduction on $D$, $W$, $T$} \label{sec:turbulent-dependence} Figure \ref{D-hr-outer} depicts maps of $\mathcal{R}(T,W)(\%)$ for disc sizes $D=1.78$, $3.38$, $5.07$, and $6.76$. The $\gamma$ values are shown as hyperbolae in these planes. For cases with $\gamma>\pi$, the maximum displacement is larger than the disc circumference. Figure \ref{D-hr} shows the same drag-reduction data, scaled in viscous units. The boxed values represent the net power saved $\mathcal{P}_{net}(\%)$ defined in \eqref{eq-pnet}. Only positive $\mathcal{P}_{net}$ values are shown and the bold boxes highlight the maximum $\mathcal{P}_{net}$ values. For $D=1.78$ and 3.38 and fixed $W$, drag reduction increases up to an optimum $T$ beyond which it decays. This optimum $T$ increases with the disc diameter. For $D=1.78,3.38$ the optimal periods are in the ranges $T^+=200-400$ and $T^+=400-800$, respectively. For $D=5.07$ and $6.76$ the optimum period is not reached within the computed range, as $\mathcal{R}$ increases monotonically with $T$ for fixed $W$ and $D$. Cases with larger $T$ are not investigated due to the increased simulation time required for the averaging procedure. For $D=1.78$ and fixed $T$, drag reduction increases up to an optimum wall velocity, $W \approx 0.26$ ($W^+ \approx 6$), above which drag reduction decreases.
This behaviour also occurs in the steady-disc case studied by RH13. For larger $D$ the optimum $W$ is not reached, as the drag reduction increases monotonically with $W$ for fixed $D$ and $T$. For $T \gg 1$, the wall forcing is quasi-steady and it is therefore worth comparing the $\mathcal{R}$ values with the ones obtained by steady disc rotation, computed by RH13. RH13's values are however not expected to be recovered in this limit. A primary reason for this is that the power spent in the oscillating-disc case is smaller than in the steady rotation case, as verified in \S\ref{sec:turbulent-pspent} (in \S\ref{sec:gamma-large}, it is predicted to be half of the steady case by use of the laminar solution when the oscillation period is large). RH13's values are displayed in figure \ref{D-hr} by the dark grey circles on the right-hand side of each map. In most of the cases where the optimal $T^+$ is detected, i.e. for $W^+>3$, $D=1.78$, and for $W^+>9$, $D=3.38$ and 5.07, our $\mathcal{R}$ values can exceed RH13's for the same $W^+$. For $D=6.76$, all our computed $\mathcal{R}$ are lower than RH13's. Figure \ref{D-hr} also shows that a positive $\mathcal{P}_{net}$ occurs only for $W^+\leq9$. This confirms the finding by RH13 for steady rotation and is expected because the power spent grows rapidly with $W$, as also suggested by the laminar result \eqref{p-lam}. The largest positive $\mathcal{P}_{net}$ in the parameter range is $6\pm1\%$, and is obtained for $D^+=855$, $W^+=6.4$, $T^+=880$, and $D^+=568$, $W^+=6.4$, $T^+=874$. \begin{figure} \centering \includegraphics[width=\textwidth]{figs/figure6} \caption{Plots of $\mathcal{R}(T,W)(\%)$ for different $D$. The circle size is proportional to the drag reduction value. The hyperbolae are constant-$\gamma$ lines.} \label{D-hr-outer} \end{figure} \begin{figure} \centering \includegraphics[width=\textwidth]{figs/figure7} \caption{Plots of $\mathcal{R}(T^+,W^+)(\%)$.
Scaling is performed using $u_\tau^*$ from the native case. The dark grey circles indicate RH13's data and the boxed values denote positive $\mathcal{P}_{net}$ values.} \label{D-hr} \end{figure} \subsection{The Fukagata-Iwamoto-Kasagi identity} \label{sec:turbulent-FIK} The Fukagata-Iwamoto-Kasagi (FIK) identity relates the skin-friction coefficient of a wall-bounded flow to the Reynolds stresses \citep{fukagata-iwamoto-kasagi-2002}. It is extended here to take into account the oscillating-disc flow effects (the reader should refer to Appendix A of RH13 for a slightly more detailed derivation for the steady disc flow case). By non-dimensionalizing the streamwise momentum equation in outer units, decomposing the velocity field as discussed in \S\ref{sec:disc-decomp} and averaging in time, along the homogeneous $x$ and $z$ directions, and over both halves of the channel, the following is obtained \begin{equation} \Pi R_p= \left( u_m^\prime - R_p\widehat{u_dv_d} - R_p\widehat{u_t v_t} \right)^\prime\text{,} \label{FIK-anal-1} \end{equation} where the prime indicates differentiation with respect to $y$. By following the same procedure outlined in \citet{fukagata-iwamoto-kasagi-2002} and noting that the Reynolds stresses term $\widehat{u_t v_t}$ in equation (1) in \citet{fukagata-iwamoto-kasagi-2002} is replaced with the sum $\widehat{u_t v_t} + \widehat{u_d v_d}$, the relationship between $C_f$ and the Reynolds stresses for the disc flow case can be written as \begin{equation} C_f=\frac{6}{U_bR_p}-\frac{6}{U_b^2}\left[\left(1-y\right)\left( \widehat{u_t v_t} + \widehat{u_d v_d} \right) \right]_g\text{,} \label{FIK-outer} \end{equation} which has the same form as in the steady case of RH13. The drag reduction computed through the Reynolds stresses via \eqref{FIK-outer} is $\mathcal{R}=16.9\%$ for $D=3.38$, $W^+=13.2$ and $T^+=411$, which agrees with $\mathcal{R}=17.1\%$, calculated via the wall-shear stress.
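The identity \eqref{FIK-outer} is straightforward to verify numerically. A sketch assuming numpy follows: the laminar Poiseuille check (vanishing stresses, $U_b=2/3$ in outer units) confirms that the identity returns the wall-shear value $C_f=9/R_p$, and the second call uses a purely illustrative stress profile to show that negative shear stresses increase the friction, with the $(1-y)$ weighting emphasizing stresses close to the wall.

```python
import numpy as np

R_P = 4200.0
y = np.linspace(0.0, 1.0, 2001)        # wall (y=0) to centreline (y=1)

def cf_fik(uv, U_b):
    """Skin-friction coefficient from the FIK identity, eq. (FIK-outer):
    C_f = 6/(U_b*R_p) - (6/U_b**2) * int_0^1 (1-y)*uv dy,
    where uv is the sum of turbulent and disc shear stresses."""
    integrand = (1.0 - y) * uv
    integral = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(y))
    return 6.0 / (U_b * R_P) - 6.0 / U_b**2 * integral

U_b = 2.0 / 3.0                        # bulk velocity of laminar Poiseuille
cf_lam = cf_fik(np.zeros_like(y), U_b)                 # no stresses: 9/R_p
cf_forced = cf_fik(-2.5e-3 * np.sin(np.pi * y), U_b)   # illustrative stresses
```

The stress profile and its amplitude are hypothetical and serve only to exercise the identity, not to reproduce any simulated case.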
Using \eqref{FIK-outer}, it is also possible to separate the total drag reduction into the change of the turbulent Reynolds stresses $\widehat{u_tv_t}-\langle\widehat{u_{t,s}v_{t,s}}\rangle$ and the contribution of the time-averaged disc Reynolds stresses $\widehat{u_dv_d}$, i.e. $\mathcal{R}(\%)=\mathcal{R}_t(\%)+\mathcal{R}_d(\%)$, where \begin{align} \label{rt} \mathcal{R}_t(\%)&=100\frac{R_p\left[\left(1-y\right)\left(\widehat{u_t v_t}-\langle\widehat{u_{t,s}v_{t,s}}\rangle\right) \right]_g}{U_b-R_p\left[\left( 1-y\right)\langle\widehat{u_{t,s}v_{t,s}}\rangle\right]_g}\text{,} \\ \quad \label{rd} \mathcal{R}_d(\%)&=100\frac{R_p\left[\left(1-y\right)\widehat{u_dv_d}\right]_g}{U_b-R_p\left[\left(1-y\right)\langle\widehat{u_{t,s}v_{t,s}}\rangle\right]_g}\text{.} \end{align} The subscript $s$ again refers to the stationary wall case. This decomposition is used in \S\ref{sec:turbulent-scaling-s} to study the drag-reduction physics. \subsection{Disc flow visualisations and statistics} \label{sec:turbulent-discflow} The disc flow for $D^+=552$, $W^+=13.2$ and $T^+=411$ ($\mathcal{R}=17\%$) is visualized at different phases in figure \ref{discflow-vis}. Isosurfaces of $q^+=\sqrt{u_d^{+2}+w_d^{+2}}=2.1$ are displayed. Similarly to the steady case by RH13, streamwise-elongated tubular structures, which extend vertically up to almost one quarter of the channel height, appear between discs. They occur where there is high tangential shear, i.e. where the disc tips are next to each other and rotate in opposite directions, but also over sections of stationary wall. They persist almost undisturbed across the entire period of oscillation, their intensity and shape being only weakly modulated in time. The thin circular patterns on top of the discs instead show a strong modulation in time. This is expected as the patterns are directly related to the disc wall motion.
Although at $\phi=0$ the disc velocity is null, the circular patterns are still observed as the rotational motion has diffused upward from the wall by viscous effects. Instantaneous isosurfaces of low-speed streaks in the proximity of the wall (not shown) reveal that the intensity of these structures is weakened significantly, similarly to the steady disc-flow case. \begin{figure} \centering \includegraphics[width=\textwidth]{figs/figure8} \vspace{0.25cm} \caption{Disc-flow visualisations of $q^+(x,y,z)=\sqrt{u_d^{+\hspace{0.1mm}2}+w_d^{+\hspace{0.1mm}2}}=2.1$ at phases $\phi=0,\pi/4,\pi/2,3\pi/4$. The disc tip velocity at each phase is shown in figure \ref{D640-hist} (right). In this figure and in figures \ref{ubands}, \ref{cf-contours}, \ref{discflow}, and \ref{pspent-space}, $D^+=552$, $W^+=13.2$, $T^+=411$.} \label{discflow-vis} \end{figure} Contour plots of $u_d$ in $x-z$ planes are shown in figure \ref{ubands}. The first column on the left shows the contour at the wall. At $y^+=4$ and $y^+=8$, the disc outlines can still be observed, the clarity decreasing with distance from the wall. At these heights the contour lines are no longer straight, but show a wavy modulation. The circular patterns created by the disc motion are displaced in the streamwise direction by the mean flow. The magnitude of the shift increases with distance from the wall and at $y^+=8$ it is about $100\nu^*/u_\tau^*$. At $y^+=27$ the disc outlines are no longer visible and the structures occurring between discs in figure \ref{discflow-vis} here appear as streamwise-parallel bands of $u_d$ which do not modulate in time and are slower than the mean flow. They also appear at higher wall-normal locations up to the channel half-plane, with their width increasing with height.
\begin{figure} \centering \includegraphics[width=\textwidth]{figs/figure9} \vspace{0.25cm} \caption{Contour plot of $u_d^+(x,y,z)$ as a function of phase in the $x-z$ plane at $y^+=0$, $y^+=4$, $y^+=8$ and $y^+=27$ (from left to right).} \label{ubands} \end{figure} The contour plots in figure \ref{cf-contours} show the ensemble- and time-averaged wall-shear stress. At phases $\phi=0$ and $\pi$, when the angular velocity of the discs is zero, the wall-shear stress is almost uniform over the disc surface. During the other phases of the cycle, the lines of constant stress are inclined with respect to the streamwise direction and the maximum values are found near the disc tip. The lines show a maximum inclination of about $45^\circ$ at phases $\phi=3\pi/4,7\pi/4$, when the deceleration of the discs is maximum. \begin{figure} \includegraphics[width=1.0\textwidth]{figs/figure10}% \caption{Contour plot of phase-averaged streamwise wall friction, $2\langle\left.\p u^+/\p y^+\right|_0\rangle/U_b^{+\hspace{0.1mm}2}$. The skin-friction coefficient is $C_f=6.79\cdot10^{-3}$.} \label{cf-contours} \end{figure} Figure \ref{discflow} (left) shows contours of the time-averaged $\langle u_dv_d\rangle$ observed from the $y-z$ plane at different streamwise locations. These contours overlap with the elongated structures in figures \ref{discflow-vis} and \ref{ubands}, which are therefore recognized as primarily responsible for these additional Reynolds stresses. It is clear that the structures are only slowly varying along the streamwise direction. The flow over the disc surface does not contribute to $\langle u_dv_d\rangle$ because, although $u_d$ is significant, $v_d$ is negligible. Only the contribution to $\langle u_dv_d\rangle$ from both negative $u_d$ and $v_d$ is included in figure \ref{discflow} (left) as $u_d$ and $v_d$ with other combinations of signs only negligibly add to the total stress.
The structures are therefore jets oriented toward the wall and backward with respect to the mean flow. \begin{figure}% \centering \includegraphics[width=\textwidth]{figs/figure11} \vspace{0.25cm} \caption{Left: Isosurfaces of $\langle u_d^+v_d^+\rangle$ observed from the $y-z$ plane at $x^+=0$, $x^+=160$, $x^+=320$ (from left to right). The plot shows only $\langle u_d^+v_d^+\rangle$ for $u_d,v_d<0$ as within the contour range the contributions from other combinations of $u_d$ and $v_d$ are negligible. Right: Wall-normal profiles of $u_{d,rms}^+$ (solid lines) and $\widehat{u_d^+ v_d^+}$ (dashed lines). Profiles are shown for phases from the first half of the disc oscillation.} \label{discflow} \end{figure} Figure~\ref{discflow} (right) shows the time modulation of the root-mean-square (r.m.s.) of the disc streamwise velocity component, defined as $u_{d, rms}(y,\tau)=\sqrt{\widehat{u_d^2}}$, and of the Reynolds stresses $\widehat{u_d^+ v_d^+}$ (where here the spatial average $\widehat{\cdot}$ does not include the time average as in \eqref{time-space-average}). Four profiles are shown for each quantity, for phases from the first half period of the oscillation. Data from the second half are not shown as the profiles coincide at opposite oscillation phases. The disc flow penetrates into the channel up to $y^+\approx15$. When the disc tip velocity is close to its maximum, the profiles of $u_{d,rms}$ and $w_{d,rms}$ (the latter not shown) decay from their wall value and follow each other closely up to $y^+\approx10$. At higher locations, the magnitude of $u_{d,rms}^+$ is larger than that of the wall-normal and spanwise velocity profiles. In the bulk of the channel, for $y^+>50$, the profiles modulate only slightly in time. This therefore further confirms that the intense temporal modulation of the disc flow is confined to the viscous sublayer and buffer region. $u_{d,rms}^+$ decays to $\approx0.7$ as the channel centreline is approached.
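The phase-locked statistics above can be sketched numerically as follows. This is a minimal illustration only: the array layout and the restriction of the hat operator to an average over repeated periods (the additional spatial average over the disc plane is omitted here for brevity) are our assumptions, not the actual post-processing of the solver.

```python
import numpy as np

def phase_rms(u):
    """Phase-locked r.m.s. of a disc-flow velocity component.

    u has shape (n_periods, n_phases, ny): repeated oscillation periods,
    sampled phases within a period, wall-normal points. The ensemble
    average runs over the periods only, so one profile per phase is
    retained, mimicking u_rms(y, tau) = sqrt(<u_d^2>).
    """
    return np.sqrt(np.mean(u**2, axis=0))

def phase_stress(u, v):
    """Phase-locked Reynolds stress <u_d v_d>(phase, y): the hat operator
    without the additional time average over the oscillation cycle."""
    return np.mean(u * v, axis=0)
```

A sanity check: for a signal that is constant over the periods, the phase-locked r.m.s. simply returns the magnitude of that signal at each phase.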
As expected, the Reynolds stresses $\widehat{u_d^+ v_d^+}$ show a slow time modulation and are always positive, proving that the streamwise-elongated structures favourably contribute to the drag reduction through $\mathcal{R}_d$ in \eqref{rd}. Neither $u_{d,rms}^+$ nor $\widehat{u_d^+ v_d^+}$ modulates in time for $y^+>120$. \subsection{Power spent} \label{sec:turbulent-pspent} \subsubsection{Comparison with laminar power spent} \label{sec:comparison} Figure \ref{pspent-grouped} (left) shows the comparison between the power spent $\mathcal{P}_{sp,t}$ to impose the disc motion, computed via \eqref{pspent} with DNS data, and the laminar power spent, calculated via \eqref{p-lam}. The values match satisfactorily for low $\mathcal{P}_{sp,t}$, and the disagreement grows for larger $\mathcal{P}_{sp,t}$. This is due to the larger values of $W$, which intensify the nonlinear interactions between the disc flow and the streamwise turbulent mean flow, and promote the interference between neighbouring discs. As the laminar calculations are performed by not accounting for the disc interference through the assumption of infinite disc size and by neglecting the streamwise mean flow, the agreement is expected to worsen for large $W$. Figure \ref{pspent-grouped} (left) also shows that the power spent for cases with positive $\mathcal{P}_{net}$ is predicted more accurately by the laminar solution than for cases with negative $\mathcal{P}_{net}$, a result also found by RH13. Figure~\ref{pspent-grouped} (right) presents the same data as the left plot, with the symbols grouped according to $T$. The agreement is best for the largest oscillation periods, $T=130$, and it worsens as $T$ decreases. The trend for $T=130$ closely resembles that of the steadily rotating discs by RH13, which is consistent with the wall forcing becoming quasi-steady at large periods.
For $T=130$, the highest value of $\mathcal{P}_{sp,t}=37\%$, occurring for $D=1.78$, $W=0.51$, differs from $\mathcal{P}_{sp,l}$ by 17\%, while a disagreement of 15\% is found by RH13 for the same $\mathcal{P}_{sp,l}$ value. \begin{figure} \centering \includegraphics[width=\textwidth]{figs/figure12} \caption{Left: $\mathcal{P}_{sp,t}(\%)$, computed through DNS via \eqref{pspent}, vs. $\mathcal{P}_{sp,l}(\%)$, computed through \eqref{p-lam}, the power spent by an infinite disc oscillating beneath a still fluid. Data are coloured according to $\mathcal{P}_{net}$. Right: $\mathcal{P}_{sp,t}(\%)$ vs. $\mathcal{P}_{sp,l}(\%)$, with symbols grouped according to $T$. } \label{pspent-grouped} \end{figure} \subsubsection{Turbulent regenerative braking effect} \label{sec:turbulent-regenerative} For the majority of the oscillation cycle, power is spent by the discs to overcome the frictional resistance of the fluid. However, for part of the oscillation and over a portion of the disc surface, work is performed by the fluid on the disc. This is a form of regenerative braking effect and it also occurs in time for the case of uniform spanwise wall oscillations and in space for the steadily rotating disc case (RH13). Contour plots of the localized power spent $\mathcal{W}_t$, defined as \begin{equation} \mathcal{W}_t(x,z,\tau)(\%)=\frac{100R_p}{R_\tau^2 U_b} \left(\hspace{0.2mm} \left.u_d\frac{\p u_d}{\p y}\right|_{y=0} + \left.w_d\frac{\p w_d}{\p y}\right|_{y=0} \hspace{0.2mm}\right) \text{,} \label{psp-tau} \end{equation} are shown in figure \ref{pspent-space} for $\phi=\pi/4,3\pi/4$. The white regions over the disc surface correspond to the regenerative braking effect, where $\mathcal{W}_t \geq 0$, i.e. the fluid performs work on the discs. The dashed lines represent the regions of $\mathcal{W}_l(r,\tau)>0,$ predicted through the laminar solution by \eqref{plam-space-3}.
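The sign convention in \eqref{psp-tau} can be made concrete with a short numerical sketch; the function and sample values below are ours, for illustration only.

```python
def local_power_spent(ud0, wd0, dud_dy0, dwd_dy0, Rp, Rtau, Ub):
    """Localized power spent W_t(x,z,tau) in %, following eq. (psp-tau):
    wall velocity of each disc-flow component times its wall-normal shear,
    both evaluated at y = 0.

    W_t < 0 : the disc performs work on the fluid (power is spent);
    W_t >= 0: the fluid performs work on the disc (regenerative braking).
    """
    return 100.0 * Rp / (Rtau**2 * Ub) * (ud0 * dud_dy0 + wd0 * dwd_dy0)
```

With a positive wall velocity and a negative wall-normal gradient of the same component, $\mathcal{W}_t<0$ and the disc spends power; when velocity and gradient share the same sign, $\mathcal{W}_t>0$ and the braking effect occurs.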
Although the regenerative braking areas computed via DNS are slightly shifted upstream when compared with those predicted through the laminar solution, the overall agreement is very good and better than in RH13's case. \begin{figure} \centering \includegraphics[width=\textwidth]{figs/figure13} \caption{Spatial variation of $\mathcal{W}_t$, computed via \eqref{psp-tau}, for $\phi=\pi/4$ (left) and $\phi=3\pi/4$ (right). The white areas over the disc surfaces for which $\mathcal{W}_t>0$ denote locations where the fluid is performing work on the disc. The areas of regenerative braking predicted by the laminar solution, i.e. where $\mathcal{W}_l>0$ and \eqref{plam-space-3} applies, are enclosed by the dashed lines.} \label{pspent-space} \end{figure} \subsection{A discussion on drag reduction physics and scaling} \label{sec:turbulent-scaling-s} \begin{figure} \centering \includegraphics[width=0.75\textwidth,trim=0 6cm 0 0,clip=true]{figs/figure14} \caption{Schematic of the two mechanisms responsible for drag reduction induced by oscillating discs. One mechanism is linked to the attenuation of the turbulent Reynolds stresses and is quantified by $\mathcal{R}_t$ in \eqref{rt}. The degrading effect of the oscillation angle $\theta$ \citep{zhou-ball-2006} is represented by the shading. The second mechanism is due to the structures between discs and is quantified by $\mathcal{R}_d$ in \eqref{rd}. The radial streaming induced by the Rosenblat pump is denoted by the open arrows.} \label{schematic} \end{figure} The results in the preceding sections prove that the oscillating discs effectively modify the flow in two distinct ways, which are discussed in the following and illustrated in figure \ref{schematic}. \begin{itemize} \item {\em Role of disc boundary layer} The circular pattern which forms over a disc as a direct consequence of the disc rotation (shown in figure \ref{discflow-vis}) is a thin region of high-shear flow.
The laminar analysis suggests that this oscillatory boundary layer resembles the oscillating-wall Stokes layer (of thickness $\delta^*_s=\sqrt{\nu^* T^*}$) at high frequency (refer to \S\ref{sec:gamma-small} when $\gamma \ll 1$), and the Ekman layer of the von K\'arm\'an viscous pump (of thickness $\delta^*_e=\sqrt{\nu^* D^*/(2W^*)}$) at high periods (refer to \S\ref{sec:gamma-large} when $\gamma \gg 1$). It is therefore reasonable to expect that the wall turbulence over the disc surface is modified similarly to the oscillating-wall case at high frequency and to the steady-rotation case studied by RH13 at high periods. The parameter $\gamma$, written as $\gamma=(2/\pi)\left(\delta^*_s/\delta^*_e\right)^2$, can be interpreted as the threshold that distinguishes these two limiting regimes. The thinner boundary layer between these two limits dictates the way the turbulence is altered. When $\gamma=\mathcal{O}(1)$, an intermediate oscillating-disc forcing regime is identified, for which viscous effects diffuse from the wall due to both unsteady oscillatory effects and to large-scale rotational motion. When $\gamma \ll 1$, the drag-reduction mechanism is analogous to the one advanced by \cite{ricco-etal-2012} for the oscillating-wall flow, namely that the near-wall periodic shear acts to increase the turbulent enstrophy and to attenuate the Reynolds stresses. Important differences from the oscillating-wall case are i) the wallward motion of high-speed fluid, entrained by the disc oscillation from the interior of the channel, ii) the radial-flow effects due to centrifugal forces, which are proportional to the nonlinear term $F'^2$ (refer to \eqref{eq:rosenblat-equations} for the laminar case) and produce additional spanwise forcing in planes perpendicular to the streamwise direction, iii) the radial dependence of the forcing amplitude, and iv) the degrading effect on drag reduction due to wall oscillations which are not spanwise oriented. 
The latter effect was first documented by \citet{zhou-ball-2006}, who proved that spanwise wall oscillations produce the largest drag reduction, while streamwise wall oscillations lead to approximately a third of the spanwise-oscillation value. The shading on the disc surface in figure \ref{schematic} illustrates the effectiveness of the wall oscillations at different orientation angles. \item {\em Role of quasi-steady inter-disc structures} The second contribution is from the tubular inter-disc structures, which are streamwise-elongated and quasi-steady as they persist throughout the disc oscillation. They are primarily synthetic jets, an indirect byproduct of the disc rotation (as in RH13) or disc oscillation. As discussed in \S\ref{sec:turbulent-discflow}, these jets are directed wallward and backward with respect to the mean flow $u_m$. The time-averaged flow between discs is therefore retarded with respect to the mean flow. Further insight into the generation of these structures could suggest other actuation methods with a similar drag reduction benefit. Although the structures appear directly above the regions of high shear created by neighbouring discs in the spanwise direction, they are largely unaffected by the time-modulation of the shear. These structures could be a product of the interaction between the radial streaming flows of neighbouring discs, which have a non-zero mean (refer to figure \ref{G-profile} (left)). \end{itemize} The FIK identity is useful because the role of the disc boundary layer on drag reduction is distilled into $\mathcal{R}_t$, which sums up the decrease of turbulent Reynolds stresses, while the role of the structures is given by $\mathcal{R}_d$, which is solely due to the additional disc-flow Reynolds stresses. $\mathcal{R}_t$ and $\mathcal{R}_d$ quantify mathematically the two drag-reduction effects.
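The splitting of the total drag reduction into $\mathcal{R}_t$ and $\mathcal{R}_d$, defined in \eqref{rt} and \eqref{rd}, can be illustrated numerically. The sketch below assumes, purely for illustration, that the global operator $[\cdot]_g$ is a wall-normal integral over $0\le y\le 1$ evaluated by the trapezoidal rule, and uses synthetic stress profiles rather than DNS data.

```python
import numpy as np

def bracket_g(y, f):
    # [.]_g interpreted here as a wall-normal integral over 0 <= y <= 1
    # (an assumption for this sketch), via the trapezoidal rule.
    dy = np.diff(y)
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * dy))

def fik_split(y, uvt, uvt_s, uvd, Rp, Ub):
    """R_t and R_d (in %) following eqs. (rt)-(rd): the change of the
    turbulent Reynolds stresses and the disc-stress contribution share
    the same denominator, so they sum to the total drag reduction."""
    denom = Ub - Rp * bracket_g(y, (1.0 - y) * uvt_s)
    Rt = 100.0 * Rp * bracket_g(y, (1.0 - y) * (uvt - uvt_s)) / denom
    Rd = 100.0 * Rp * bracket_g(y, (1.0 - y) * uvd) / denom
    return Rt, Rd
```

By linearity of $[\cdot]_g$, the sum $\mathcal{R}_t+\mathcal{R}_d$ reproduces the total obtained by lumping the two stress changes together, which is exactly the decomposition $\mathcal{R}=\mathcal{R}_t+\mathcal{R}_d$.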
It has been shown that drag reduction scales linearly with the penetration depth of the laminar layer for different spanwise wall forcing conditions, such as spatially uniform spanwise oscillation, travelling and steady wall waves \citep{ricco-etal-2012,cimarelli-etal-2013}. An analogous scaling is obtained in the following. The definition of the oscillating-wall penetration depth advanced by \cite{choi-xu-sung-2002} is modified to account for the viscous diffusion effects induced by the disc oscillation. \cite{choi-xu-sung-2002}'s definition is employed because it takes into account the influence of the wall forcing amplitude, which was not necessary in \cite{quadrio-ricco-2011} because the wave amplitude was constant. Following the discussion on the role of the disc boundary layer on drag reduction, the crucial point is that only $\mathcal{R}_t$, i.e. the portion of drag reduction related to the attenuation of the turbulent Reynolds stresses, is scaled with the penetration thickness. The scaling is carried out for the case with the largest diameter, $D=6.76$, for which the infinite-disc laminar flow solution best represents the disc boundary layer flow because of the limited interference between discs. From the envelope of the Stokes layer velocity profile engendered by an oscillating wall \begin{equation*} W_e^+=W_m^+\exp\left( -\sqrt{\pi/T^+}y^+\right)\text{,} \end{equation*} \citet{choi-xu-sung-2002} defined the penetration depth as \begin{equation*} y_d^+=\sqrt{T^+/\pi}\ln\left(W_m^+/W_{th}^+\right)\text{,} \end{equation*} where $W_m^+$ is the maximum wall velocity and $W_{th}^+$ is a threshold value below which the induced spanwise oscillations have little effect on the channel flow. 
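In the oscillating-wall limit, this penetration depth is a one-line computation; the sketch below simply inverts the Stokes-layer envelope quoted above (the numerical values in the check are placeholders, not data from this study).

```python
import math

def stokes_penetration_depth(T_plus, Wm_plus, Wth_plus):
    # y_d^+ = sqrt(T^+/pi) * ln(W_m^+ / W_th^+): the wall-normal location
    # where the Stokes-layer envelope W_m^+ exp(-sqrt(pi/T^+) y^+)
    # has decayed to the threshold velocity W_th^+.
    return math.sqrt(T_plus / math.pi) * math.log(Wm_plus / Wth_plus)
```

Evaluating the envelope at the returned $y_d^+$ recovers the threshold $W_{th}^+$ exactly, which is the defining property of the depth.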
For the oscillating disc case, the enveloping function for the laminar azimuthal disc velocity, $W_e^+=W^+G_e(\eta,\gamma)$, where \begin{equation*} G_e(\eta,\gamma)=\max_{\breve{t}}G(\eta,\breve{t},\gamma)\text{,} \end{equation*} plays a role analogous to the exponential envelope for the classical Stokes layer. Defining the inverse of $G_e$, $\mathsf{L}=G_e^{-1}$, the penetration depth of the oscillating-disc layer is obtained as \begin{equation} \label{eq:delta} \delta^+=\sqrt{T^+/\pi}\hspace{1mm}\mathsf{L}\left(W^+/W_{th}^+\right)\text{.} \end{equation} Note that in the limit of $\gamma\rightarrow0$ one finds \begin{equation*} \lim_{\gamma\rightarrow0}\mathsf{L}\left(W^+/W_{th}^+\right)=\ln\left(W^+/W_{th}^+\right)\text{.} \end{equation*} The Stokes layer penetration depth is therefore obtained as a special case. In figure \ref{scaling} (left), the drag-reduction contributor $\mathcal{R}_t$ shows a satisfactory linear scaling with the penetration depth, computed via \eqref{eq:delta} with $W^+_{th}=2.25$. In order to find a scaling for $\mathcal{R}_d$, the portion of drag reduction only due to the inter-disc structures, the FIK identity and the laminar solution discussed in \S\ref{sec:laminar} are employed. From \eqref{rd}, it is evident that $\mathcal{R}_d$ is proportional to $\widehat{u_dv_d}$. Through the definitions of the laminar velocity components \eqref{velocity-relations}, $u_d\sim W$ and $v_d\sim W\sqrt{T}$. It then follows that a reasonable estimate could be $u_dv_d\sim W^2\sqrt{T}$ at the edge of the discs where the structures appear. It is then logical to look for a scaling of $\mathcal{R}_d$ in the form $W^mT^n$. An excellent linear fit for the drag reduction data is found for $(m,n)=(2,0.3)$, as shown in figure \ref{scaling} (right). Outer-unit scaling for $W$ and $T$ applies, which means that the structures are not influenced by the change in $u^*_{\tau}$. The exponent of $W$ is as predicted by the laminar solution.
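A fit of the form $\mathcal{R}_d\sim W^mT^n$ reduces to linear least squares in logarithmic variables; the sketch below demonstrates this on synthetic data generated with the quoted exponents $(m,n)=(2,0.3)$ (the prefactor and sample points are arbitrary).

```python
import numpy as np

def fit_power_law(W, T, Rd):
    """Least-squares fit of Rd = c * W**m * T**n in log space.
    Returns the fitted exponents (m, n)."""
    A = np.column_stack([np.ones_like(W), np.log(W), np.log(T)])
    coef, *_ = np.linalg.lstsq(A, np.log(Rd), rcond=None)
    return coef[1], coef[2]
```

When the data follow the power law exactly, the log-linear regression recovers the exponents to machine precision; with DNS data the quality of the fit is judged from the scatter about the regression line, as in figure \ref{scaling} (right).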
The deviation of the coefficient $n$ from that predicted by the laminar analysis (i.e. $n=0.5$) can be attributed to factors not included in the laminar analysis, such as the disc-flow interaction with the streamwise turbulent flow and between neighbouring discs. \begin{figure} \centering \includegraphics[width=\textwidth]{figs/figure15} \vspace{0.0cm} \caption{ Left: $\mathcal{R}_t$, the contribution to drag reduction due to turbulent Reynolds stress attenuation, vs. $\delta^+$, the penetration depth, defined in \eqref{eq:delta}. Right: $\mathcal{R}_d$, the contribution to drag reduction due to the disc-flow Reynolds stresses, vs. $W^2T^{0.3}$. The diameter is $D=6.76$. White circles: $W^+=3$, light grey: $W^+=6$, black: $W^+=9$. } \label{scaling} \end{figure} \section{Introduction} Significant effort in the fluid mechanics research community is currently directed towards turbulent drag reduction, motivated by the possibility of huge economic savings in many industrial scenarios. The necessity for improved environmental sustainability has spurred vast academic and industrial interest in the development of novel drag-reduction techniques and in understanding the underlying physical mechanisms. Although to date there exist many control strategies for drag reduction, notably MEMS-based closed-loop feedback control \citep{kasagi-suzuki-fukagata-2009} and open-loop large-scale wall-forcing control \citep{jung-mangiavacchi-akhavan-1992,berger-etal-2000,quadrio-sibilla-2000}, none have been implemented in industrial systems. Amongst the open-loop active drag reduction methods, for which energy is fed into the system in a pre-determined manner, particular attention has been devoted to those which employ in-plane wall motion. A recent review is found in \cite{quadrio-2011} and a brief discussion is presented in the following.
\subsection{The oscillating wall} The direct numerical simulations by \citet{jung-mangiavacchi-akhavan-1992} and the experimental campaign by \citet{laadhari-skandaji-morel-1994} of turbulent wall-bounded flows subjected to sinusoidal spanwise wall oscillations produced a rich vein of work in this area. Their findings first revealed the ability of the actuated wall to suppress the frequency and intensity of near-wall turbulent bursts and to yield a maximum sustained wall friction reduction of about $45\%$. The existence of an optimal oscillation period for fixed maximum wall velocity, $T^+\approx120$ (where $+$ indicates scaling in viscous units with respect to the uncontrolled case) has been widely documented \citep{quadrio-ricco-2004}. It was recognized by \citet{choi-xu-sung-2002} that the space-averaged turbulent spanwise flow agrees closely with the laminar solution to the Stokes second problem for oscillation periods smaller than or comparable with the optimum one, which led to the use of a scaling parameter for the drag reduction. \citet{quadrio-ricco-2004} found a linear relation between this parameter - a measure of the penetration depth and acceleration of the Stokes layer - and the drag reduction, noted to be valid only for $T^+\leq150$. \citet{quadrio-ricco-2004} were also the first to explain the existence of the optimum period by comparing it with the characteristic Lagrangian survival time of the near-wall turbulent structures. More recently, \citet{ricco-etal-2012} endowed the scaling parameter with a more direct physical meaning, showing it to be proportional to the maximum streamwise vorticity created by the Stokes layer at constant maximum velocity. Through an analysis of the turbulent enstrophy balance, \citet{ricco-etal-2012} were also able to identify the key production term in the turbulent enstrophy equation, which is balanced by the change in turbulent dissipation near the wall.
More importantly, by studying the transient evolution from the start-up of the wall motion, they showed that the turbulent kinetic energy and the skin-friction coefficient decrease because of the short-time transient increase of turbulent enstrophy. This is the latest effort aimed at elucidating the drag reduction mechanism, after research works based on the disruption of the near-wall coherent structures \citep{baron-quadrio-1996}, the cyclic inclination of the low-speed streaks \citep{bandyopadhyay-2006}, the weakening of the low-speed streaks \citep{dicicca-etal-2002,iuso-etal-2003}, and simplified models of the turbulence-producing cycle \citep{dhanak-si-1999,moarref-jovanovic-2012,duque-etal-2012}. \subsection{The wall waves} The unsteady oscillating-wall forcing was converted by \citet{viotti-quadrio-luchini-2009} to a steady streamwise-dependent spanwise motion of the wall in the form $\widetilde{W}=W\cos\left(2\pi x/\lambda_x \right)$. Via direct numerical simulations they found an optimal forcing wavelength $\lambda_{opt}^+\approx1250$, which is related to $T_{opt}$, the optimum oscillating-wall period, through $\mathcal{U}_w$, the near-wall convection velocity, as $\lambda_{opt}=\mathcal{U}_wT_{opt}$. \citet{skote-2013} employed Viotti {\em et al.}'s forcing to alter a free-stream turbulent boundary layer and found good agreement between the analytic solution to the spatial Stokes layer flow and the time-averaged spanwise flow. \citet{skote-2013} also showed that the damping of the turbulent Reynolds stresses depends on the penetration depth of the spatial Stokes layer. The oscillating-wall and the steady-wave techniques were generalized by \citet{quadrio-ricco-viotti-2009} by considering wall turbulence forced by wall waves of spanwise velocity of the form $\widetilde{W}=W\cos\left[2\pi (x/\lambda_x - t/T)\right]$. A maximum drag reduction of 47\% and a maximum net energy saving of 26\% were computed. 
For wall waves travelling at a phase speed comparable with the near-wall turbulent convection velocity, drag increase was also found. Despite the widespread interest in turbulent drag reduction by active wall forcing, the implementation of these techniques in industrial settings appears to be an insurmountable challenge. Progress is nonetheless being made to improve this scenario. Prominent amongst recent efforts is the experimental work by \citet{gouder-potter-morrison-2013} on in-plane forcing of wall turbulence through a flexible wall made of electroactive polymers. The main reasons which render the technological applications of active techniques an involved engineering task are \textit{i)} the extremely small typical time scale of the wall forcing (the optimal period for the oscillating-wall technique translates to a frequency of 15,000 Hz in commercial aircraft flight conditions), and \textit{ii)} the requirement that a large portion of the surface be in uniform motion. Therefore, drag reduction methods which operate on a large time scale and rely on finite-size wall actuation are preferable in view of future applications. \subsection{The rotating discs} The novel actuation strategy based on flush-mounted discs rotating upon detection of the bursting process, first proposed by \citet{keefe-1998}, undoubtedly belongs to a group of interesting control methods which employ finite-size actuators. However, Keefe did not follow up on his innovative idea and neither experimental nor numerical results appeared in the subsequent 15 years. \citet{ricco-hahn-2013} (denoted by RH13 hereafter) revived interest in this flow and investigated an open-loop variant of Keefe's technique whereby the discs rotate with a pre-determined constant angular velocity. A numerical parametric investigation on $D$, the disc diameter, and $W$, the disc tip velocity, yielded maximum values for drag reduction and net power saved of 23\% and 10\%, respectively.
RH13 also showed that drag increase occurs for small diameter and small rotational periods, that the disc-flow boundary layer must be thicker than a threshold to obtain drag reduction, and that the power spent to activate the discs can be calculated accurately through the von K\'arm\'an laminar viscous pump solution \citep{panton-1995} under specified conditions. The Fukagata-Iwamoto-Kasagi (FIK) identity \citep{fukagata-iwamoto-kasagi-2002} was modified for the disc flow to show that the near-wall streamwise-elongated jets appearing between discs provide a favourable contribution to drag reduction. Promisingly, the optimal spatial and temporal scales were $\mathcal{L}^+=\mathcal{O}(1000)$ and $\mathcal{T}^+=\mathcal{O}(500)$. This is a significant result when these scales are compared with those of other localized actuation strategies, such as the feedback control based on wall transpiration \citep{yoshino-suzuki-kasagi-2008}, which are thought to operate optimally at spatio-temporal scales $\mathcal{L}^+=\mathcal{O}(30)$ and $\mathcal{T}^+=\mathcal{O}(100)$. It is our hope that the results of RH13 will therefore offer fertile ground for new avenues of future research on active turbulent drag reduction. \subsection{Objectives and structure of the paper} Prompted by RH13's recent results, the objective of the present work is to study a variant of RH13's disc technique by introducing sinusoidal oscillations, i.e. the disc tip moves according to $\widetilde{W}=W\cos\left(2\pi t/T\right)$. The effect of the additional parameter $T$, the oscillation period, on a turbulent channel flow is investigated through direct numerical simulations, with specific focus on the skin-friction drag reduction and the global power budget, computed by taking into account the power spent to activate the discs. 
The laminar solution for the flow over an oscillating disc proves useful to estimate the power spent to activate the discs, to predict the occurrence of regenerative braking effect, and to define scaling parameters for drag reduction. An analogy is also drawn to the oscillating wall technique to discuss the drag reduction mechanism at work in the oscillating-disc flow. The numerical procedures, flow field decompositions and performance quantities are described in \S\ref{sec:numerics}. The solution of the laminar flow is presented in \S\ref{sec:laminar}, where it is used to compute the power spent to move the discs and to predict the regenerative braking effect. The turbulent flow results are presented in \S\ref{sec:turbulent}. The dependence of drag reduction on the disc parameters is discussed in \S\ref{sec:turbulent-time} and \S\ref{sec:turbulent-dependence}. In \S\ref{sec:turbulent-FIK} the FIK identity is modified to account for the disc flow effects, while \S\ref{sec:turbulent-discflow} presents visualisations and statistics of the disc flow. Section~\ref{sec:turbulent-pspent} includes a comparison between the turbulent power spent and the corresponding laminar prediction. A discussion on the drag reduction physics and scaling is found in \S\ref{sec:turbulent-scaling-s}. Finally, \S\ref{sec:outlook} presents an evaluation of the applicability of the technique to flows of technological interest, provides a guidance for future experimental studies, and offers a comparison with other drag reduction techniques, with particular focus on the typical length and time scales.
\section{Introduction} \label{sec:intro} Let $G=(V,E)$ be a finite undirected graph with $V=[n]\coloneqq \{1,\dots,n\}$. The chromatic quasisymmetric function $X_G$ introduced by Shareshian--Wachs~\cite{ShWa16} is a generalization of Stanley's chromatic symmetric function~\cite{St95}, which in turn is a generalization of Birkhoff's chromatic polynomial. Given the remarkable circle of ideas relating these functions to the cohomology of Hessenberg varieties~\cite[Section 10]{ShWa16} and the Stanley--Stembridge conjecture~\cite{StanleyStembridge}, these functions have garnered substantial attention in the last decade; see for instance~\cite{AbreuNigro, Ale,Alepan,Ath15,BC18,CMP,GP-hopf,HaradaPrecup,HuhNamYoo}. The Stanley--Stembridge conjecture states that $X_G$ is \emph{$e$-positive} when $G$ is the incomparability graph of a naturally-labeled unit interval order. Such $G$ can be interpreted as Dyck paths $D$ and we refer to them as Dyck graphs, writing $X_D$ in place of $X_G$ when there is no scope for confusion. While the aforementioned conjecture is still wide open, there are known partial cases, most notably the \emph{abelian case}~\cite{AbreuNigro,CMP,HaradaPrecup}. Numerous lines of attack to this conjecture involve the \emph{modular law}~\cite{GP-modular, OS14}. This is a simple linear relation between certain $X_G$ which itself has been a subject of much investigation; see~\cite{PS22} for a deep geometric perspective. Motivated by this law, Guay-Paquet~\cite{GP-notes} in unpublished work introduced the algebra $\pathalg$ as the \emph{noncommutative} algebra over $\mathbb{C}(q)$ generated by $\sfn$ and $\sfe$ with the \emph{modular} relations: \begin{align} \label{eq:mod_relations_path_algebra} (1+q)\msf{ene}&=q\msf{een}+\msf{nee}\\ \label{eq:mod_relations_path_algebra_1} (1+q) \msf{nen}&=q\msf{enn}+\msf{nne}. \end{align} As we will see below, this algebra is in fact known as a down-up algebra. 
Working in $\pathalg$, Guay-Paquet~\cite[Theorem 1]{GP-notes} established a particularly elegant result which we now state. For undefined jargon in this context, we refer the reader to Sections~\ref{sec:path_alg_kly} and \ref{sec:abelian}. \begin{theorem}[Guay-Paquet] \label{th:intro_1} Let $D=UVW$ be a Dyck path where $V$ is an abelian subpath with $m$ north steps \textup{(}denoted by $\msf{n}$\textup{)} and $n$ east steps \textup{(}denoted by $\msf{e}$ \textup{)}, with $m\geq n$. In particular, $V$ may be identified with a partition $\lambda$ in an $m\times n$ box. Then \[ X_{UVW}=\sum_{0\leq k\leq n}\frac{H_k^{m,n}(\lambda)}{\qint{m}\qint{m-1}\cdots \qint{m-n+1}}\, X_{U\msf{e}^k\msf{n}^m\msf{e}^{n-k}W}. \] Here $H_k^{m,n}(\lambda)$ denotes the \emph{$q$-hit number} of Garsia--Remmel~\cite{GarsiaRemmel}, and $\qint{j}\coloneqq 1+q+\cdots+q^{j-1}$ is the $q$-analogue of $j$ for $j\geq 0$. \end{theorem} Informally put, abelian subpaths of Dyck paths may be replaced by special rectangular paths along with coefficients given by $q$-hit numbers. Hence it suffices to study $X_G$ of the sort that arise on the right-hand side, thereby restricting attention to a much smaller family of Dyck graphs. Recently, Colmenarejo--Morales--Panova~\cite{CMP} gave an independent proof of Theorem~\ref{th:intro_1} relying heavily on intricate rook-theoretic identities. Yet another proof was given independently by Lee and Soh~\cite{LeeSoh22}. \subsection{Discussion of results} Our primary aim is to give a short algebraic proof of Theorem~\ref{th:intro_1} by relating the algebra $\pathalg$ to the \emph{$q$-Klyachko algebra} $\mc{K}$. This \emph{commutative} algebra is generated by $(u_i)_{i\in \mathbb{Z}}$ subject to quadratic relations in~\eqref{eq:relation_q_Klyachko}, and its name reflects the fact that these relations are a deformation of Klyachko's presentation~\cite{Kly85} for the $S_n$-invariant part of the cohomology ring of the permutahedral variety. 
As the authors demonstrated in~\cite{NT21}, $\mc{K}$ has intimate links with various subareas within algebraic combinatorics. This link to chromatic symmetric functions furthers our case. $\mc{K}$ possesses a basis $\mc{B}$ of square-free monomials and the statement in Theorem~\ref{th:intro_1} is equivalent to the expansion of the monomial $u_{1}^{c_1}\cdots u_k^{c_k}$ where $c_i>0$ for $i=1,\dots,k$ in terms of $\mc{B}$. The resulting coefficients are connected remixed Eulerian numbers~\cite{NT21,NTRemixed}. The previous links unearth other interesting properties of $\pathalg$. We briefly describe them postponing explicit statements. It is the case that $\pathalg$ is a down-up algebra introduced by Benkart--Roby~\cite{BenRob} (see also \cite[Definition 4.14]{NCSFIV} which implies that $\pathalg$ is the $n=2$ case of the \emph{quantum pseudoplactic algebra}). As such it possesses a so-called PBW basis of staircase monomials $\staircase$, which was independently noticed by Guay-Paquet~\cite{GP-notes}. It is then natural to inquire about the expansion of any monomial $w$ in the basis $\staircase$. Colmenarejo--Morales--Panova conjectured~\cite[Conjecture 6.6]{CMP} that the resulting coefficients, up to sign, are Laurent polynomials with nonnegative integer coefficients. We resolve this conjecture by giving a simple combinatorial rule in Section~\ref{sec:arbitrary_into_staircase}. Yet another basis comes up as follows: one can identify the diagonal subalgebra $\pathalg^{\mathrm{diag}}\coloneqq \oplus_{i\geq 0}\pathalg_{i,i}$ with the polynomial subalgebra in $\mc{K}$ generated by $u_0$ and $u_1$, these two generators corresponding to the words $\msf{en}$ and $\msf{ne}$ in $\pathalg^{\mathrm{diag}}$. The monomials in $\msf{en}$ and $\msf{ne}$ thus form a linear basis of $\pathalg^{\mathrm{diag}}$, which can be extended to a third basis for the space $\pathalg_{i,j}$. We refer to this as the \emph{zigzag} basis; see Section~\ref{sub:zigzag} for the precise description. 
We give an explicit description for the expansion of any word in the alphabet $\{\msf{n},\msf{e}\}$ in this basis; see Theorem~\ref{th:zigzag_coefficient}. In Section~\ref{sub:guay-paquet-rectangular}, we recall how the modular law implies that relations in $\pathalg$ translate to relations amongst chromatic symmetric functions. This leads immediately to the proof of Theorem~\ref{th:intro_1}, which is directly related to the abelian case of the Stanley--Stembridge conjecture. In Section~\ref{sub:abelian_compendium} we revisit that case, and attempt an understanding of how the two new formulae\textemdash{} those of Abreu--Nigro, Harada--Precup \textemdash{} can be related bijectively to the original work of Stanley \cite{St95}. \section{Graded down-up algebra} \label{sec:pathalg} \subsection{Some generalities} \label{subsec:generalities} A \emph{path} $P$ is any word $w\coloneqq w(P)$ in the alphabet $\{\msf{n},\msf{e}\}$. Pictorially we depict it by reading $w$ left to right and translating every instance of $\msf{n}$ (respectively $\msf{e}$) as a unit north step (respectively east step) beginning at the origin. We denote the number of $\msf{n}$'s (resp. $\msf{e}$'s) by $|w|_{\msf{n}}$ (resp. $|w|_{\msf{e}}$). We let $\lambda\coloneqq \lambda(P)$ be the partition (in English notation) naturally determined by $P$ in the top left corner of the $|w|_{\msf{n}} \times |w|_{\msf{e}}$ box. Alternatively, given any $\lambda\subset m\times n$, we may reverse this association to get a path $P\coloneqq P(\lambda)$ starting from $(0,0)$ to $(n,m)$, which in turn determines a word $w(\lambda)$ in $\{\msf{n},\msf{e}\}$. Thus we have the following objects naturally in bijection: \begin{align*} \{\lambda\subseteq m\times n\} \leftrightarrow \{\text{paths $P$ from $(0,0)$ to $(n,m)$}\} \leftrightarrow \{w\in \{\msf{n},\msf{e}\}^{m+n}\text{ with } |w|_{\msf{n}}=m\}. \end{align*} Thus we can, and will, interchangeably use $P$, $w$, or $\lambda$ if it is clear from context. 
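Since we pass between these encodings repeatedly, it is convenient to have the dictionary in executable form. The following minimal Python sketch is our own illustration (the helper names are not from the text); it uses the fact that, under the top-left-corner convention, the $i$-th north step of $P$ is preceded by exactly $\lambda_{m+1-i}$ east steps.

```python
def lam_of_word(w):
    """Partition lambda(P) in the top-left corner of the box: the i-th north
    step (read left to right) is preceded by lambda_{m+1-i} east steps."""
    parts, east = [], 0
    for step in w:
        if step == 'e':
            east += 1
        else:                      # step == 'n'
            parts.append(east)
    return parts[::-1]             # weakly decreasing: (lambda_1, ..., lambda_m)

def word_of_lam(lam, n):
    """Inverse map: the word of the path of lam inside the m x n box,
    where m = len(lam) and n >= lam[0] is the number of east steps."""
    out, prev = [], 0
    for part in reversed(lam):     # lambda_m, ..., lambda_1
        out.append('e' * (part - prev) + 'n')
        prev = part
    out.append('e' * (n - prev))   # trailing east steps
    return ''.join(out)
```

For example, lam_of_word("nenne") returns [1, 1, 0], matching the convention that a larger partition corresponds to a lower path.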
\subsection[Basic properties]{Basic properties of $\pathalg$} \label{subsec:basic_prop} Recall that $\pathalg$ is the $\mathbb{C}(q)$-algebra generated by $\msf{n}$ and $\msf{e}$ subject to the modular relations~\eqref{eq:mod_relations_path_algebra} and \eqref{eq:mod_relations_path_algebra_1}. It turns out that these modular relations imply that $\pathalg$ is an instance of a well-studied class of algebras called \emph{down-up algebras}. These were introduced by Benkart--Roby~\cite[Section 2]{BenRob} inspired by Stanley's work on differential posets~\cite{St88}. In the notation of \emph{loc. cit.}, $\pathalg$ is the down-up algebra $A(1+q,-q,0)$. At $q=1$, it specializes to the universal enveloping algebra of the Heisenberg Lie algebra: the relations become $[\msf{e},[\msf{e},\msf{n}]]=[\msf{n},[\msf{n},\msf{e}]]=0$. While the algebraic properties of down-up algebras have been thoroughly studied, the fact that this algebra encodes the modular law has hitherto not been noted, to the best of our knowledge. By the \emph{PBW theorem} for down-up algebras~\cite[Theorem 3.1]{BenRob}, the set \[ \staircase=\{\msf{e}^a(\msf{ne})^b\msf{n}^c\;|\; a,b,c\in \mathbb{Z}_{\geq 0}\} \] is a basis for $\pathalg$. We refer to its elements as \emph{staircase} monomials, and to $\staircase$ as the \emph{staircase basis}. In Section~\ref{sec:arbitrary_into_staircase}, we explain how to expand an arbitrary element of $\pathalg$ in this basis. Observe that the modular relations preserve the number of $\msf{n}$'s and $\msf{e}$'s. We can use this information to endow $\pathalg$ with a $\mathbb{Z}_{\geq 0}\times \mathbb{Z}_{\geq 0}$-grading: \begin{align} \pathalg=\bigoplus_{m,n\in \mathbb{Z}_{\geq 0}} \pathalg_{m,n}, \end{align} where $\pathalg_{m,n}$ is spanned by words $w$ satisfying $|w|_{\msf{n}}=m $ and $|w|_{\msf{e}}=n$. A particular graded piece that is relevant for us is $\pathalg^{\mathrm{diag}}$ defined by \begin{align} \pathalg^{\mathrm{diag}}=\bigoplus_{m\in\mathbb{Z}_{\geq 0}} \pathalg_{m,m}. \end{align} \noindent\emph{Involution $\eta$.} Benkart--Roby~\cite[p.
329]{BenRob} consider the map $\eta$ swapping $\msf{n}$ and $\msf{e}$, and extend it to an algebra \emph{anti}automorphism of the free associative algebra generated by $\msf{n}$ and $\msf{e}$. Since the modular relations are preserved under this antiautomorphism, we get an involution $\eta$ on $\pathalg$ which is combinatorially natural. The notion of transposing a partition $\lambda \subseteq m\times n$ to obtain $\lambda^t$ corresponds to reversing the word $w(\lambda)$ and switching $\msf{n}$'s for $\msf{e}$'s, and vice versa. The resulting word is precisely $\eta(w(\lambda))$, so that $w(\lambda^t)=\eta(w(\lambda))$. The map $\eta$ sends $\pathalg_{m,n}$ to $\pathalg_{n,m}$. If $\mathscr{B}_{m,n}$ is any basis for $\pathalg_{m,n}$, then applying $\eta$ to each basis element gives a basis for $\pathalg_{n,m}$. This will allow us to work under the assumption that $m\leq n$ (or $n\leq m$) whenever convenient. Notice also that the staircase basis $\staircase$ is stable under $\eta$. \section[Basis expansions]{Basis expansions in the algebra $\pathalg$} \label{sec:path_alg_kly} We consider expansions of elements of $\pathalg$ in three different bases. The first one is the rectangular basis considered by Guay-Paquet~\cite{GP-notes} for which our main result is Theorem~\ref{th:GP}. Its proof makes use of the $q$-Klyachko algebra introduced by the authors~\cite{NT21,NTRemixed}. In Section~\ref{sec:abelian} we will obtain Theorem~\ref{th:intro_1} as a corollary. We give two other expansions: first, in the staircase basis $\staircase$, thus proving a conjecture of Colmenarejo, Morales and Panova \cite{CMP}, and then in what we call the zigzag basis. \subsection[Rectangular]{Expansion into the rectangular basis} \label{sub:rectangular} Given nonnegative integers $m\geq n$, define the set of \emph{rectangular monomials} as follows: \begin{align} \rectangular_{m,n}&=\{\msf{e}^k\msf{n}^{m}\msf{e}^{n-k}\;|\; 0\leq k\leq n\}. \end{align} For $m<n$, we obtain $\rectangular_{m,n}$ using $\eta$.
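One concrete way to compute in $\pathalg$ is to orient the modular relations, rewriting the factors $\msf{nne}$ and $\msf{nee}$ via $\msf{nne}=(1+q)\msf{nen}-q\,\msf{enn}$ and $\msf{nee}=(1+q)\msf{ene}-q\,\msf{een}$; east steps then move strictly to the left, so the process terminates, and one checks that the words admitting no such factor are exactly the staircase monomials $\msf{e}^a(\msf{ne})^b\msf{n}^c$. The Python sketch below is our own illustration (not an algorithm from the text): it computes staircase expansions this way, with exact rational arithmetic at a fixed rational value of $q$; since all coefficients are polynomial in $q$, this gives a meaningful spot check of identities in $\pathalg$.

```python
from fractions import Fraction

Q = Fraction(2)                       # a fixed rational value of q

# (1+q) ene = q een + nee  and  (1+q) nen = q enn + nne, oriented as
# nee -> (1+q) ene - q een   and   nne -> (1+q) nen - q enn
RULES = {"nee": ("ene", "een"), "nne": ("nen", "enn")}

def staircase_expansion(word, leftmost=True):
    """Coefficients of `word` in the staircase basis, as {word: Fraction}.
    Normal forms are the words with no factor 'nne' or 'nee', i.e. the
    staircase monomials e^a (ne)^b n^c."""
    spots = [i for i in range(len(word) - 2) if word[i:i + 3] in RULES]
    if not spots:
        return {word: Fraction(1)}
    i = spots[0] if leftmost else spots[-1]
    hi, lo = RULES[word[i:i + 3]]
    result = {}
    for sub, coeff in ((hi, 1 + Q), (lo, -Q)):
        for mono, c in staircase_expansion(word[:i] + sub + word[i + 3:], leftmost).items():
            result[mono] = result.get(mono, Fraction(0)) + coeff * c
    return {mono: c for mono, c in result.items() if c != 0}
```

Reducing the leftmost or the rightmost admissible factor yields the same expansion, as forced by the PBW theorem. A convenient sanity check: the linear functional sending every word to $1$ respects both relations, so the coefficients in the expansion of any single word sum to $1$.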
Our aim in this section is to expand any word $w$ in terms of monomials in $\rectangular_{m,n}$. It is not a priori clear that this can be done, but it will become transparent in due course. We first explain how $q$-hit polynomials show up in another context. \subsubsection[Klyachko algebra]{The $q$-Klyachko algebra} \label{subsec:qkly} We give a brisk introduction to the $q$-Klyachko algebra covering the bare essentials and refer the reader to~\cite{NT21} for more details. The \emph{$q$-Klyachko algebra} $\mc{K}$ is the commutative, graded $\mathbb{C}(q)$-algebra with generators $(u_i)_{i\in\mathbb{Z}}$ and quadratic relations \begin{align} \label{eq:relation_q_Klyachko} (1+q)u_i^2=qu_iu_{i-1}+u_iu_{i+1} \end{align} for all $i\in\mathbb{Z}$. As the authors demonstrated in~\cite{NT21}, $\mc{K}$ has intimate links with various subareas within algebraic combinatorics. The link to chromatic (quasi)symmetric functions in this article adds to these various connections. If $c=(c_i)_{i\in\mathbb{Z}}$ is a sequence of nonnegative integers with finite support,\footnote{The support of $c$ is the set of indices $i$ such that $c_i>0$.} let $u^c\coloneqq \prod_{i\in\mathbb{Z}} u_i^{c_i}$. In the particular case where the entries of $c$ are $0$s or $1$s, we may identify $c$ with its support $I\subset \mathbb{Z}$, and then let $u_I\coloneqq \prod_{i\in I}u_i$. We let $\mc{B}$ denote the entire collection of such squarefree monomials $u_I$. By~\cite[Proposition 3.9]{NT21}, $\mc{B}$ is a basis for $\mc{K}$. We may thus decompose \[u^c=\sum_Ip_c(I)u_I.\] Let $m=|c|\coloneqq \sum_{i}c_i$. By homogeneity $p_c(I)=0$ unless $|I|=m$. We define \[ A_c(q)=(m)_q! \times p_c(\{1,\dots,m\}). \] It is zero if the support of $c$ is not contained in $\{1,\dots,m\}$; so we can consider $c=(c_1,\dots,c_m)$, and in this case $A_c(q)$ is a nonzero polynomial with nonnegative integer coefficients.
These polynomials were introduced by the authors~\cite[Section 4.3]{NT21} under the name \emph{remixed Eulerian numbers}. Indeed they recover Postnikov's mixed Eulerian numbers~\cite[Section 16]{Pos09} at $q=1$; see~\cite{NTRemixed} for a deeper combinatorial study of these polynomials. We will only need them for some special $c$, as we describe next. We say that $c=(c_1,\dots,c_m)$ with $|c|=m$ is \emph{connected} if its support is an interval $I$. We can encode a family of $A_c(q)$ for connected $c$ via a generating function: For $\alpha=(\alpha_1,\ldots,\alpha_k)\vDash m$ a \emph{strong} composition, we have the identity~\cite[Proposition 5.6]{NT21} \begin{equation} \label{eq:gf_connected} \sum_{j\geq 0}\prod_{i=1}^k\qint{j+i}^{\alpha_i}\, t^j=\frac{\sum_{i=0}^{m-k}A_{0^i\alpha 0^{m-k-i}}(q) \, t^i}{(t;q)_{m+1}}. \end{equation} Here $(t;q)_{m+1}=\prod_{1\leq i\leq m+1}(1-tq^{i-1})$ stands for the \emph{$q$-Pochhammer symbol}. \begin{remark} It is in fact the case that $\frac{A_{0^i\alpha 0^{m-k-i}}(q)}{\qfact{k}}$ is a polynomial with nonnegative integer coefficients; see the proof of~\cite[Proposition 5.4]{NT21} for the general picture. \end{remark} We next record another generating function identity that is suspiciously similar to \eqref{eq:gf_connected}. \subsubsection{Hit numbers and connected remixed Eulerians} \label{subsec:hit_remixed} Consider a partition $\lambda$ inside an $m\times m$ square. Following~\cite{GarsiaRemmel}, up to the $q$-exponent variation discussed in~\cite{CMP}, the \emph{$q$-hit numbers} $H_j^m(\lambda)$ can be defined by: \begin{align} \label{eq:qhit_gf} \sum_{k\geq 0}t^k\prod_{1\leq i\leq m}\qint{i-\lambda_{m+1-i}+k}=\frac{\sum_{j=0}^m H_j^m(\lambda)\, t^j}{(t;q)_{m+1}}, \end{align} which ought to be compared to \eqref{eq:gf_connected}. For the sake of completeness we give a quick combinatorial description for the $q$-hit number $H_k^{m,n}(\lambda)$ where $\lambda\subseteq m\times n$.
The $q$-hit numbers in \eqref{eq:qhit_gf} correspond to the case $m=n$. Let $R(m,n,\lambda,k)$ denote the set of maximal nonattacking rook placements on an $m\times n$ board such that there are exactly $k$ rooks in the Ferrers board corresponding to $\lambda$. Given $p\in R(m,n,\lambda,k)$ we let $\mathrm{stat}(p)$ denote the number of \emph{unattacked} cells in the $m\times n$ board. Unattacked cells are certain cells that do not contain rooks and are defined as follows. A cell in $\lambda$ is unattacked if it does not lie below a rook, or to the right of a rook, or to the left of a rook outside $\lambda$. A cell outside $\lambda$ is unattacked if it does not lie below a rook or to the right of a rook outside $\lambda$. This given, we have \begin{align*} H_k^{m,n}(\lambda)=\sum_{p\in R(m,n,\lambda,k)}q^{\mathrm{stat}(p)}. \end{align*} If $m=n$, we write $H_k^{m}(\lambda)$. It is straightforward to check that, assuming $m\geq n$, \begin{align} \qfact{m-n}\, \times H_k^{m,n}(\lambda)=H_k^{m}(\lambda). \end{align} See Figure~\ref{fig:qhit} for a maximal nonattacking rook placement $p$ where $m=4$, $n=5$ and $\lambda=(3,3,1,0)$. The six unattacked cells tell us that $q^{\mathrm{stat}(p)}=q^6$, which is the contribution of $p$ to $H_{2}^{4,5}(\lambda)$. \begin{figure} \includegraphics[scale=0.75]{qhit_example.pdf} \caption{Unattacked cells in a maximal nonattacking rook placement on a $4\times 5$ board} \label{fig:qhit} \end{figure} \smallskip To relate~\eqref{eq:gf_connected} and~\eqref{eq:qhit_gf} we need some notation. Assume $m=n$ and $\lambda\subset m\times m$. Define the \emph{area sequence} $\aseq{\lambda}\coloneqq (a_1,\dots,a_m)$ by setting \[ a_i=i-\lambda_{m+1-i}. \] As $i$ goes from $1$ to $m$, the $a_i$ go from $1-\lambda_m\leq 1$ to $m-\lambda_1\geq 0$ with `increments' in $\{1,0,-1,\dots\}$. It follows that the set of entries underlying $\aseq{\lambda}$ is an interval containing $0$ or $1$.
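The generating function~\eqref{eq:qhit_gf} is also convenient computationally: the coefficient of $t^j$ in the product of the (truncated) series with $(t;q)_{m+1}$ is $H_j^m(\lambda)$. The Python sketch below is our own illustration (the helper names are not from the text); it carries this out with exact arithmetic at a fixed rational value of $q$, which suffices to spot-check small cases and identities such as $\sum_j H_j^m(\lambda)=\qfact{m}$.

```python
from fractions import Fraction

Q = Fraction(2)                                 # a fixed rational value of q

def qint(j):
    """[j]_q = 1 + q + ... + q^(j-1) for j >= 1, and 0 for j <= 0."""
    return sum(Q ** i for i in range(j)) if j > 0 else Fraction(0)

def q_hit_numbers(lam, m):
    """H_0^m(lam), ..., H_m^m(lam) for lam inside the m x m square, read off
    by multiplying the truncated series by the q-Pochhammer (t;q)_{m+1}."""
    lam = list(lam) + [0] * (m - len(lam))
    # series coefficients L_k = prod_i [i - lam_{m+1-i} + k]_q for k = 0..m
    L = []
    for k in range(m + 1):
        term = Fraction(1)
        for i in range(1, m + 1):
            term *= qint(i - lam[m - i] + k)
        L.append(term)
    # coefficients of (t;q)_{m+1} = prod_{i=0}^{m} (1 - q^i t)
    poch = [Fraction(1)]
    for i in range(m + 1):
        poch = [a - Q ** i * b for a, b in zip(poch + [Fraction(0)], [Fraction(0)] + poch)]
    return [sum(L[r] * poch[j - r] for r in range(j + 1)) for j in range(m + 1)]
```

For the empty shape every rook of a maximal placement misses the Ferrers board, and one recovers $H_0^m(\varnothing)=\qfact{m}$ with all other $H_j^m(\varnothing)$ vanishing.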
Note further that the multisets underlying $\aseq{\lambda}$ and $\aseq{\lambda^t}$ are equal \textemdash{} as may be seen by a standard pairing of north and east steps at the same height for instance. Now consider the monomial $u(\lambda)$ in $\mc{K}$ defined as follows: \begin{equation} \label{eq:ulambda} u(\lambda)\coloneqq \prod_{1\leq i\leq m} u_{a_i}. \end{equation} Clearly, $u(\lambda)$ depends solely on the multiset underlying $\aseq{\lambda}$. \begin{example} Consider $\lambda=(5,5,3,3,3,0)\subset 6\times 6$ as shown in Figure~\ref{fig:area_seq}. We have $\aseq{\lambda}=(1,-1,0,1,0,1)$ and $\aseq{\lambda^t}=(1,0,1,-1,0,1)$. Additionally, $u(\lambda)=u_{-1}u_0^2u_1^3$. \begin{figure}[!htbp] \includegraphics[scale=0.4]{Hit_to_Remixed} \caption{$\lambda=(5,5,3,3,3,0)$ inside a $6\times 6$ board with $a(\lambda)=(1,-1,0,1,0,1)$.} \label{fig:area_seq} \end{figure} \end{example} It turns out that the coefficients when one expresses $u(\lambda)$ in the basis $\mc{B}$ are relevant to us. The fact that $u(\lambda)$ has degree $m$, and that the set underlying $\aseq{\lambda}$ is an interval in $\mathbb{Z}$ containing $0$ or $1$, implies an expansion in the basis $\mc{B}$ as follows: \begin{equation} \label{eq:uM_expansion} u(\lambda)=\sum_{k=0}^m c_k u_{[1,m]\downarrow k}. \end{equation} Here $[1,m]\downarrow k\coloneqq \{1-k,2-k,\dots,m-k\}$. As established in~\cite[\S 4.2]{NTRemixed}, we have that \begin{equation} \label{eq:c_k_equals_q_hit} c_k=\frac{H_k^m(\lambda)}{(m)_q!}. \end{equation} The next result is essentially in~\cite{GP-notes}, though not stated as such. The reader should compare this statement to Theorem~\ref{th:intro_1}: as we will see in Section~\ref{sub:guay-paquet-rectangular}, it will in fact imply it. \begin{theorem} \label{th:GP} Fix nonnegative integers $m\geq n$. Let $\lambda\subset m\times n$, and consider the corresponding path $w=w(\lambda)$.
Then in $\pathalg_{m,n}$ we have \[ w(\lambda)=\sum_{k=0}^n\frac{H_{k}^{m,n}(\lambda)}{\qint{m}\qint{m-1}\cdots \qint{m-n+1}} \msf{e}^{k}\msf{n}^{m}\msf{e}^{n-k}. \] \end{theorem} \begin{proof} We first consider the case $m=n$. Consider the map $\psi:\pathalg_{m,m}\to \mathcal{K}$ sending \begin{align} w(\lambda) &\mapsto u(\lambda), \end{align} extended by linearity. We need to show that $\psi$ is well defined. To this end, we must verify that the result is unchanged when the modular relations~\eqref{eq:mod_relations_path_algebra} and \eqref{eq:mod_relations_path_algebra_1} are applied to $w$. Applying the relation \eqref{eq:mod_relations_path_algebra} by changing $(1+q)\msf{ene}$ in $w$ to $q\, \msf{een}+\msf{nee}$ corresponds to changing an $a_i$ in the sequence $a(\lambda)$ to either $a_{i}+1$ or $a_i-1$. We can conclude with the Klyachko relation $(1+q)u_{a_i}^2=qu_{a_i}u_{a_{i}-1}+u_{a_i}u_{a_{i}+1}$, if we can find a $j\neq i$ such that $a_j=a_i$. If $a_i\leq 0$ we are guaranteed that $a_{i+1}\leq a_i$. If it is equal then we are done. Otherwise $a_{i+1}<a_i$. Since $a_m\geq 0$ and increments in $a(\lambda)$ are bounded above by $1$, we must have a $j>i+1$ such that $a_j=a_i$. If $a_i \geq 1$ we apply this argument to $\mathrm{reverse}(w)$. Reversal preserves instances of $\msf{ene}$ and changes $a(\lambda)=(a_1,\dots,a_m)$ to $(1-a_m,\dots,1-a_1)$. Thus an instance of $a_i\geq 1$ translates to $1-a_{m+1-i}\leq 0$ and we are back in the former setting. The case of the relation~\eqref{eq:mod_relations_path_algebra_1}, namely $(1+q)\msf{nen}=q\, \msf{enn}+\msf{nne}$, can be dealt with similarly: it is simpler, since the occurrence of $\msf{nen}$ implies that we have the needed $u_{a_i}^2$ in the image already. Thus we have proved that $\psi$ is well defined. \smallskip Now note that $\pathalg_{m,m}$ has dimension $m+1$~\cite[Theorem 3.1]{BenRob}. Indeed the staircase monomials $\delta_i\coloneqq \msf{e}^{m-i}(\msf{ne})^i\msf{n}^{m-i}$ for $0\leq i\leq m$ give a basis.
Consider the $m+1$ \emph{rectangular} monomials $\rect_i=\msf{e}^{i}\msf{n}^m\msf{e}^{m-i}$. Since $\psi(\rect_i)=u_{[1,m]\downarrow i}$, we get that the $\rect_i$ are independent as their images are independent in $\mc{K}$. We thus deduce that the $\rect_i$ for $0\leq i\leq m$ give another basis of $\pathalg_{m,m}$. It follows that the coefficients $c_{\lambda,i}$ in the expansion \[ w=\sum_{0\leq i\leq m}c_{\lambda,i}\,\rect_i \] are those in the expansion \[ \psi(w)=u(\lambda)=\sum_{0\leq i\leq m}c_{\lambda,i}\, u_{[1,m]\downarrow i}. \] Comparison with \eqref{eq:uM_expansion} implies the claim for $m=n$. \smallskip Assume now that $m>n$. We append $\msf{e}^{m-n}$ to $w$ at the end. This defines $w'=w(\lambda')$, where $\lambda'$ has the same shape as $\lambda$ but sits inside an $m\times m$ square. We can compute $a(\lambda')$ inside this square as before. This forces the interval $[1,m-n]$ to be included in the set underlying $a(\lambda')$, so we can a priori restrict \eqref{eq:uM_expansion} to a smaller set of $n+1$ intervals: \begin{equation} \label{eq:uM_expansion_rewtricted} u(\lambda')=\sum_{k=0}^{n} \frac{H_k^m(\lambda)}{(m)_q!} \, u_{[1,m]\downarrow k}=\sum_{k=0}^{n}\frac{H_k^{m,n}(\lambda)}{\qint{m}\qint{m-1}\cdots \qint{m-n+1}} \, u_{[1,m]\downarrow k}. \end{equation} Then the rest of the proof follows the same path as the square case. We define $\psi$ on $\pathalg_{m,n}$ by appending $\msf{e}^{m-n}$ to any element and then applying the map defined in the case $m=n$. The $n+1$ staircase monomials $\delta_{i,m,n}= \msf{e}^{n-i}(\msf{ne})^i\msf{n}^{m-i}$ are a basis of $\pathalg_{m,n}$, so the $n+1$ rectangular monomials $\rect_{i,m,n}=\msf{e}^i\msf{n}^m\msf{e}^{n-i}$ also form one since their images $\psi(\rect_{i,m,n}\msf{e}^{m-n})$ are independent in $\mc{K}$. We conclude that the coefficient of $\rect_{k,m,n}$ in the expansion of $w$ is given by $\frac{H_k^{m,n}(\lambda)}{\qint{m}\qint{m-1}\cdots \qint{m-n+1}}$.
\end{proof} We consider an example next and return to the consequences of Theorem~\ref{th:GP} to chromatic symmetric functions in Section~\ref{sec:abelian}. \begin{example} Let $w=\msf{nenne}\in \pathalg_{3,2}$ and let $\lambda\coloneqq \lambda(w)=(1,1,0)$. Consider the six non-attacking rook placements on the $3\times 2$ board in Figure~\ref{fig:hit_rook}. \begin{figure} \includegraphics[scale=0.75]{hit_rook.pdf} \caption{Nonattacking rook placements on $3\times 2$ board} \label{fig:hit_rook} \end{figure} The leftmost two rook placements contribute to $H_0^{3,2}(\lambda)$ and the remaining to $H_1^{3,2}(\lambda)$. We thus get \begin{align*} H_0^{3,2}(\lambda)&=1+q\\ H_1^{3,2}(\lambda)&=q+2q^2+q^3. \end{align*} Theorem~\ref{th:GP} then says \begin{align*} \msf{nenne}=\frac{1+q}{\qint{3}\qint{2}}\, \msf{nnnee}+\frac{q+2q^2+q^3}{\qint{3}\qint{2}}\, \msf{ennne}. \end{align*} \end{example} \subsection{The staircase basis} \label{sec:arbitrary_into_staircase} Fix positive integers $m$ and $n$. Define $\staircase_{m,n}\coloneqq \staircase\cap \pathalg_{m,n}$. We know that $\staircase_{m,n}$ is a basis for $\pathalg_{m,n}$. In this section we give an expansion for any monomial $w\in \pathalg_{m,n}$ in the basis $\staircase_{m,n}$. Like before, we let $\delta_k\coloneqq\msf{e}^a(\msf{ne})^k\msf{n}^{b}$ where $a,b$ are such that $\delta_k\in \staircase_{m,n}$. We begin by stating our claim. Let $m_w\geq 0$ be the largest integer such that the path $P(\delta_{m_w})$ lies weakly below the path $P(w)$. \begin{theorem} \label{th:conj_cmp} In $\pathalg$, consider the basis expansion \begin{equation} \label{eq:staircase_expansion} w=\sum_{k\geq 0}(-1)^{m_w-k}c_{w,k}(q)\,\delta_k. \end{equation} Then $c_{w,k}\in \mathbb{Z}_{\geq 0}[q]$ and vanishes unless $k\leq m_w$.
\end{theorem} While there are in general many ways to employ the modular relation to express an arbitrary monomial $w$ in terms of staircase monomials, we are guided by the aim that $\msf{e}$'s and $\msf{n}$'s move to the left and right respectively, and in doing so, force a string of $\msf{ne}$'s in between. At the same time, we want the signs to behave nicely in a predictable manner. We will need solely the two relations: \begin{align} \label{eq:relation_eu} \msf{n}^i\msf{e}&=\qint{i}\textcolor{blue}{\msf{ne}}\, \msf{n}^{i-1}-q\qint{i-1}\, \textcolor{blue}{\msf{en}}\,\msf{n}^{i-1}\\ \label{eq:relation_nu} \msf{n}\msf{e}^i&=\qint{i}\msf{e}^{i-1}\, \textcolor{blue}{\msf{ne}}-q\qint{i-1}\,\msf{e}^{i-1}\, \textcolor{blue}{\msf{en}}. \end{align} These relations follow from the modular relations easily. The second one follows from the first by applying the transpose $\eta$. Additionally, and crucially, observe that the coefficients involved are, up to a sign, polynomials in $\mathbb{Z}_{\geq 0}[q]$. \medskip We state next our crucial definition that governs how the aforementioned relations apply in the course of our procedure. \begin{definition} Consider a factor $w'$ in $w$ where $w'=\msf{n}^i\msf{e}$ or $w'=\msf{ne}^i$ with $i\geq 2$ maximal. We say that $w'$ is \emph{critical} if $P(w)$ shares an edge with the path $P(\delta_{m_w})$ at one of the letters in $w'$. \end{definition} Note that by definition of $m_w$, the letter in the critical factor that corresponds to $P(\delta_{m_w})$ is necessarily the starting $\msf{n}$ if $w'=\msf{n}^i\msf{e}$, and the last $\msf{e}$ if $w'=\msf{ne}^i$. \begin{lemma} \label{lemma:critical} Fix $w$ a word in $\{\msf{n},\msf{e}\}$. The following are equivalent. \begin{enumerate} \item $w$ does not possess a critical factor. \item $w$ corresponds to a staircase monomial. \end{enumerate} \end{lemma} \begin{proof} It is immediate that monomials in $\staircase_{m,n}$ do not contain critical factors.
Hence assume $w\notin \staircase_{m,n}$ and consider the path $P(\delta_{m_w})$. It agrees, i.e. shares an edge, with $P(w)$ at various junctures. At one of the two extremes (or both) of any maximal factor of agreement, there must be a critical factor for $w$. At the right extreme, this will be a factor of the form $\msf{n}^i\msf{e}$. At the left extreme this will be the transposed version, i.e. $\msf{n}\msf{e}^i$. \end{proof} \begin{figure} \includegraphics[scale=0.7]{example_critical.pdf} \caption{A path $P(w)$ (in red) and the associated $P(\delta_{m_w})$ (in blue). The subpaths where the two touch are highlighted.} \label{fig:critical} \end{figure} \noindent\textbf{The rewriting procedure:} We now describe a rewriting procedure that takes as input any linear combination of words $C=\sum_wf_ww$. \begin{enumerate} \item Pick $w$ such that $f_w\neq 0$ and $w$ possesses a critical factor. If no such $w$ exists, the procedure terminates and outputs $C$. \item Pick any critical factor $v$ in $w$. Modify $C$ by replacing the critical factor $v$ in $w$ according to the relations \eqref{eq:relation_eu},\eqref{eq:relation_nu} applied from left to right. Go back to the first step. \end{enumerate} In the second step of the procedure, let $w_V,w_H$ be the two words that are obtained from a word $w$ after applying the relations~\eqref{eq:relation_eu},\eqref{eq:relation_nu}. Here $w_V$ comes with a positive weight $\qint{i}$, while $w_H$ comes with a negative weight $-q\qint{i-1}$. Figure~\ref{fig:example_algo} shows an execution of this algorithm for $w=\msf{nneeenen}$, representing naturally the rewriting procedure as a binary tree. We omitted the weights on the edges to keep the picture legible. \begin{figure} \includegraphics[scale=0.65]{algo_example_2.pdf} \caption{Rewriting algorithm applied to $w=\msf{nneeenen}$.
Horizontal (respectively vertical) arrows represent $w\to w_H$ (respectively $w\to w_V$).} \label{fig:example_algo} \end{figure} \begin{proof}[Proof of Theorem~\ref{th:conj_cmp}] First note that the rewriting procedure will necessarily end, as the shapes corresponding to the words are strictly increasing after each step of the procedure. It follows that the final output will be a linear combination of words with no critical factors, which represents the same element in $\pathalg$ as the starting linear combination since we only apply relations that are valid in $\pathalg$. By Lemma~\ref{lemma:critical}, this will indeed be the expansion into staircase monomials as desired. Now a key remark is that for any word $w$, and any of its critical factors, we have $m_{w_V}=m_w$ while $m_{w_H}=m_w-1$, where $w_H,w_V$ are defined above. Since $m_{\delta_k}=k$, any sequence of rewritings that goes from $w$ to $\delta_k$ will then necessarily involve $m_w-k$ sign switches, as $w_H$ comes with a negative weight while $w_V$ has a positive weight. It follows that the global sign of the coefficient of $\delta_k$ is $(-1)^{m_w-k}$, and thus that $c_{w,k}\in \mathbb{Z}_{\geq 0}[q]$. It is also immediate from the procedure that $c_{w,k}=0$ if $k>m_w$. \end{proof} \begin{example} Consider $w=\msf{nneeenen}$ as in Figure~\ref{fig:example_algo}. There are exactly two paths from root to a leaf representing $\delta_2=\msf{eenenenn}$, both of which involve a single horizontal edge. By considering the weights for each path we conclude that the coefficient of $\delta_2$ in $w$ is \[ -2q\qint{2}^2\qint{3}=-2q(1+q)^2(1+q+q^2).
\] \end{example} \begin{remark}(Proof of~\cite[Conjecture 6.6]{CMP}) Theorem~\ref{th:conj_cmp} easily implies~\cite[Conjecture 6.6]{CMP}.\footnote{Their conjecture is stated in terms of chromatic symmetric functions, but we explain in Section~\ref{sub:guay-paquet-rectangular} why this can be expressed in the algebra $\pathalg$.} The staircase basis in~\cite[Section 6]{CMP} corresponds to staircase paths in the top left corner. To expand into this basis, one needs to use the ``reverse'' rewriting rules, which are obtained from~\eqref{eq:relation_eu},\eqref{eq:relation_nu} by reversing the words and changing $q$ to $q^{-1}$: \begin{align} \label{eq:relation_eu_cmp} \msf{e}^i\msf{n}&=q^{1-i}\qint{i}\textcolor{blue}{\msf{en}}\, \msf{e}^{i-1}-q^{1-i}\qint{i-1}\, \textcolor{blue}{\msf{ne}}\,\msf{e}^{i-1},\\ \label{eq:relation_nu_cmp} \msf{e}\msf{n}^i&=q^{1-i}\qint{i}\msf{n}^{i-1}\, \textcolor{blue}{\msf{en}}-q^{1-i}\qint{i-1}\,\msf{n}^{i-1}\, \textcolor{blue}{\msf{ne}}. \end{align} This results in polynomials in $q^{-1}$ for the coefficients, instead of the polynomials in $q$ that we obtain in Theorem~\ref{th:conj_cmp}. \end{remark} The proof in fact tells us a little bit more \textemdash{} we must have all $c_{w,k}\neq 0$ for $0\leq k\leq m_w$. Also, since the only way to hit the staircase monomial $\delta_{m_w}$ is by applying moves $w\to w_V$ at all stages, we get an explicit description for $c_{w,m_w}$ as a product of $q$-integers. For instance, for $w$ in Figure~\ref{fig:critical}, we have \[ c_{w,m_w}=\qint{3}\hspace{2mm} \qint{2}\qint{3}\qint{2}\qint{3}\qint{3}\qint{4}\qint{5} \hspace{2mm} \qint{6}\qint{5}. \] It is easy to give a characterization of this product in terms of $w$. More generally, it would be interesting to find a combinatorial interpretation for all the coefficients $c_{w,k}$. \subsection{The zigzag basis} \label{sub:zigzag} For the purposes of this section, we set $\msf{s}\coloneqq \msf{en}$ and $\msf{t}\coloneqq \msf{ne}$.
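Before proceeding, note that $\msf{s}$ and $\msf{t}$ already commute in $\pathalg$: in one step,
\[
\msf{s}\msf{t}=\msf{enne}=(1+q)\,\msf{enen}-q\,\msf{eenn}=\msf{neen}=\msf{t}\msf{s},
\]
where the middle expression is obtained by applying~\eqref{eq:mod_relations_path_algebra_1} to the factor $\msf{nne}$ of $\msf{enne}$, and~\eqref{eq:mod_relations_path_algebra} to the factor $\msf{nee}$ of $\msf{neen}$. This is the first nontrivial instance of the commutativity appearing in Proposition~\ref{prop:Pdiag_polynomial} below.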
We return to the map $\psi$ defined in the proof of Theorem~\ref{th:GP}, except this time we take its domain as $\pathalg^{\mathrm{diag}}$. Then $\psi$ is an algebra homomorphism into $\mc{K}$ since $u(\lambda\cdot\mu)=u(\lambda)u(\mu)$. As it sends a basis to independent vectors, it is injective, and its image is the subalgebra of $\mathcal{K}$ with basis given by the $u_I$ with $I$ an interval containing $0$ or $1$. Equivalently, it is the subalgebra of $\mc{K}$ generated by $u_0$ and $u_1$, which is free on these two generators. In turn, this implies the following: \begin{proposition} \label{prop:Pdiag_polynomial} $\pathalg^{\mathrm{diag}}$ is the (commutative) polynomial algebra $\mathbb{C}(q)[\msf{s},\msf{t}]$. \end{proposition} \begin{remark} Benkart--Roby~\cite[Proposition 3.5]{BenRob} establish that for a general down-up algebra $A(\alpha,\beta,\gamma)$, the subalgebra $A_0$ (i.e. the analogue of $\pathalg^{\mathrm{diag}}$) is always a commutative subalgebra. The proof in \emph{loc. cit.} is elementary albeit involved.\footnote{The reader should note that the grading employed in \cite{BenRob} is not our bigrading, but a weaker one that can be defined for any down-up algebra.} Kirkman--Musson--Passman~\cite{KMP99} show that the subalgebra generated by $ud$ and $du$ in a general down-up algebra $A(\alpha,\beta,\gamma)$ over a field $K$ is a polynomial algebra in those two generators provided that $\beta\neq 0$. Recalling that $\pathalg$ is $A(1+q,-q,0)$, it is possible to apply their result in our context and obtain another proof of Proposition~\ref{prop:Pdiag_polynomial}. \end{remark} We are thus naturally led to the question of expanding a monomial $w\in\pathalg_{m,m}$ in terms of $\msf{s}$ and $\msf{t}$. We consider a more general rectangular version. Fix nonnegative integers $m\geq n$.
Consider the set of \emph{zigzag} monomials defined as follows: \begin{align} \zigzag_{m,n}&=\{\msf{s}^a\msf{t}^{n-a}\msf{n}^{m-n}\;|\; 0\leq a\leq n\}. \end{align} For $m<n$, define $\zigzag_{m,n}$ by employing $\eta$. These zigzag monomials show up in~\cite[Section 2.1]{KMP99}. Fix a word $w\in \pathalg_{m,n}$ with associated path and partition being $P$ and $\lambda$ respectively. Define the sequence $b(\lambda)=(b_1,\dots,b_n)$ as follows: \begin{align*} b_i=\left\lbrace \begin{array}{cc} m+1-i-\lambda'_i & \lambda_{m+1-i}<i\\ i-\lambda_{m+1-i} &\lambda_{m+1-i}\geq i. \end{array}\right. \end{align*} Informally, the sequence $b(\lambda)$ measures the distance from the diagonal in the same vein as the area sequence $a(\lambda)$ from before. Figure~\ref{fig:b_sequence} gives an example where $m=11$ and $n=6$. We either take the heights of the green shaded rectangles or the lengths of the red shaded rectangles. These capture the two cases that occur in the definition, and we get $b(\lambda)=(2,1,-1,0,4,5)$. \begin{figure} \includegraphics[scale=0.7]{b_sequence.pdf} \caption{The sequence $b(\lambda)$ read off from the shaded rectangles, for $m=11$ and $n=6$.} \label{fig:b_sequence} \end{figure} For $i\in \mathbb{Z}$ define $\mathrm{wt}_i\in \pathalg_{1,1}$ as follows: \begin{align*} \mathrm{wt}_i=\left\lbrace \begin{array}{ll} \qint{i}\, \msf{t}-q\qint{i-1}\, \msf{s} & i\geq 1\\ q^{i}\left(\qint{1-i}\, \msf{s}-\qint{-i}\, \msf{t}\right) & i\leq 0. \end{array}\right. \end{align*} This choice will become transparent during the course of the following proof. Note that $\mathrm{wt}_0=\msf{s}$ and $\mathrm{wt}_1=\msf{t}$. \begin{proposition} \label{prop:ne_en_factorization} Fix a monomial $w\in \pathalg_{m,n}$ where $m\geq n$. We have \begin{align*} w=\mathrm{wt}_{b_1}\cdots \mathrm{wt}_{b_n}\cdot \msf{n}^{m-n}. \end{align*} \end{proposition} \begin{proof} If $n=0$, there is nothing to show as $w$ must necessarily equal $\msf{n}^{m-n}$. So we assume $n\geq 1$ and consider two cases. Suppose $w=\msf{n}^i\msf{e}w'$ where $i\geq 1$.
Then $i$ must necessarily equal $m-\lambda'_1$, which is $b_1$. We thus have \begin{align} \label{eq:blah} w=\msf{n}^{b_1}\msf{e}w'&= \left((1-\qint{b_1})\msf{en}\cdot\msf{n}^{b_1-1}+\qint{b_1}\msf{ne}\cdot\msf{n}^{b_1-1}\right) w'=\mathrm{wt}_{b_1}\cdot \msf{n}^{b_1-1}w'. \end{align} Now $\msf{n}^{b_1-1}w'$ is a word representing a path in a smaller bounding box, and we can proceed by induction. On the other hand, if $w=\msf{e}^i\msf{n}w'$ with $i\geq 1$, we must have $i=\lambda_{m}=1-b_1$. Now mimicking the above argument we get \begin{align} \label{eq:blahblah} w=\msf{e}^{1-b_1}\msf{n}w'&= \left((1-\qint{1-b_1}_{q^{-1}})\msf{ne}+\qint{1-b_1}_{q^{-1}}\msf{en}\right)\msf{e}^{-b_1}w'\nonumber\\ &=q^{b_1}(\qint{1-b_1}\msf{en}-\qint{-b_1}\msf{ne})\msf{e}^{-b_1}w'=\mathrm{wt}_{b_1}\cdot \msf{e}^{-b_1}w'. \end{align} Again $\msf{e}^{-b_1}w'$ is a word representing a path in a smaller bounding box and we may apply induction. \end{proof} We note that the $\mathrm{wt}_{b_i}$ all commute, so the product can be written in various ways. \begin{example} Referring to Figure~\ref{fig:b_sequence}, we have $w=\msf{n}^2\msf{e}^4\msf{n}^6\msf{e}\msf{n}^2\msf{en}$. Noting that $b(\lambda)=(2,1,-1,0,4,5)$, we get that \[ w=(\qint{2}\,\msf{s}-q\,\msf{t})\cdot \msf{s}\cdot q^{-1}(\qint{2}\,\msf{t}-\msf{s})\cdot \msf{t} \cdot (\qint{4}\,\msf{s}-q\qint{3}\,\msf{t})\cdot (\qint{5}\,\msf{s}-q\qint{4}\,\msf{t})\cdot \msf{n}^{11-6}. \] \end{example} \begin{theorem} \label{th:zigzag_coefficient} Consider the basis expansion in $\pathalg$ \begin{align*} w=\sum_{0\leq r\leq n}c_{w,r}\, \msf{s}^r\msf{t}^{n-r}\msf{n}^{m-n}. \end{align*} Then $c_{w,r}$ is a \emph{globally signed} Laurent polynomial. \end{theorem} \begin{proof} We extract the coefficient of $\msf{s}^r\msf{t}^{n-r}$ in $\mathrm{wt}_{b_1}\cdots \mathrm{wt}_{b_n}$. Let $S\in \binom{[n]}{r}$.
Define $\mathrm{wt}_S$ as \begin{align*} \prod_{\substack{i\in S\\ b_i\geq 1}}\left(-q\qint{b_i-1}\right)\prod_{\substack{i\in S\\ b_i\leq 0}}\left(q^{b_i}\qint{1-b_i}\right)\prod_{\substack{i\notin S\\ b_i\geq 1}}\qint{b_i}\prod_{\substack{i\notin S\\ b_i\leq 0}}\left(-q^{b_i}\qint{-b_i}\right) \end{align*} Now, $\mathrm{wt}_S$ is the coefficient that appears as one scans $\mathrm{wt}_{b_1}\cdots \mathrm{wt}_{b_n}$ left to right and picks up the coefficient of $\msf{s}$ if $i\in S$, and that of $\msf{t}$ if $i\notin S$. It follows from Proposition~\ref{prop:ne_en_factorization} that \begin{align*} c_{w,r}= \sum_{S\in \binom{[n]}{r}} \mathrm{wt}_S. \end{align*} For an $S$ to contribute to this expression, we must necessarily have all $i$ for which $b_i=0$ belong to $S$, and all $i$ for which $b_i=1$ belong to $[n]\setminus S$. If these constraints are not satisfied, then $\mathrm{wt}_S=0$. Assuming these constraints are met, the sign of $\mathrm{wt}_S$ only depends on $|S|$ and $w$. Indeed, the exponent of $-1$ is the number of $i\in S$ with $b_i> 1$ plus the number of $i\notin S$ with $b_i< 0$. We leave it to the reader to verify that this quantity has the same parity as $|S|$ plus the number of $i$ with $b_i\leq 0$. The claim follows. \end{proof} \begin{remark} If $\lambda \subseteq m\times m$, then one sees that $b(\lambda)$ is a rearrangement of the area sequence $a(\lambda)$ introduced in Section~\ref{subsec:hit_remixed}. So the form of Theorem~\ref{th:zigzag_coefficient} simplifies in the square case. \end{remark} \section{The abelian case of the Stanley--Stembridge conjecture} \label{sec:abelian} We relate here the algebra $\pathalg$ to chromatic symmetric functions, following Guay-Paquet~\cite{GP-notes}. To keep our exposition brief, we refer the reader to~\cite[Chapter 7]{St99} for any undefined notions pertaining to the ring $\ensuremath{\operatorname{QSym}}$ of quasisymmetric functions, and its distinguished subring $\sym$ of symmetric functions.
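As a computational aside before proceeding, the formula for $\mathrm{wt}_S$ and the global-sign phenomenon of Theorem~\ref{th:zigzag_coefficient} are easy to probe mechanically. The Python sketch below is our own illustration: it encodes Laurent polynomials in $q$ as dictionaries from exponents to integer coefficients, computes $c_{w,r}=\sum_S \mathrm{wt}_S$ for the $b$-sequence of the running example, and checks that each coefficient is globally signed:

```python
from itertools import combinations

def qint_poly(i):
    """q-integer [i] as a Laurent polynomial {exponent: coefficient}; zero for i <= 0."""
    return {j: 1 for j in range(i)}

def pmul(p, r):
    out = {}
    for a, c in p.items():
        for b, d in r.items():
            out[a + b] = out.get(a + b, 0) + c * d
    return {k: v for k, v in out.items() if v != 0}

def padd(p, r):
    out = dict(p)
    for k, v in r.items():
        out[k] = out.get(k, 0) + v
    return {k: v for k, v in out.items() if v != 0}

def shift(c, e, p):
    """Multiply the Laurent polynomial p by c * q^e."""
    return {k + e: c * v for k, v in p.items() if c * v != 0}

def wt_of_S(b, S):
    """The product wt_S for a subset S of (0-based) indices into the sequence b."""
    out = {0: 1}
    for i, bi in enumerate(b):
        if i in S:    # pick the coefficient of s in wt_{b_i}
            factor = shift(-1, 1, qint_poly(bi - 1)) if bi >= 1 else shift(1, bi, qint_poly(1 - bi))
        else:         # pick the coefficient of t in wt_{b_i}
            factor = qint_poly(bi) if bi >= 1 else shift(-1, bi, qint_poly(-bi))
        out = pmul(out, factor)
    return out

def coefficient(b, r):
    """c_{w,r} = sum of wt_S over all r-subsets S of the index set."""
    total = {}
    for S in combinations(range(len(b)), r):
        total = padd(total, wt_of_S(b, set(S)))
    return total

def globally_signed(p):
    return len({v > 0 for v in p.values()}) <= 1

b = (2, 1, -1, 0, 4, 5)   # the b-sequence of the running example
assert all(globally_signed(coefficient(b, r)) for r in range(len(b) + 1))
```

The same check can be run on any integer sequence $b$; the parity argument in the proof above does not use any special feature of sequences arising from partitions.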
Given a strong composition $\alpha$, we let $M_{\alpha}$ and $F_{\alpha}$ denote the corresponding \emph{monomial} and \emph{fundamental} quasisymmetric functions respectively. \subsection{Chromatic quasisymmetric functions} Consider a graph $G=([n],E)$. A \emph{coloring $\kappa$} of $G$ is an assignment of a \emph{color} in $\mathbb{Z}_+=\{1,2,\ldots\}$ to each vertex of $G$; it is \emph{proper} if $\kappa(i)\neq \kappa(j)$ whenever $\{i,j\}\in E$. An {\em ascent \textup{(}respectively descent\textup{)}} of a coloring $\kappa$ is an edge $\{i<j\}\in E$ such that $\kappa(i)<\kappa(j)$ (respectively $\kappa(i)>\kappa(j)$). Denote the number of ascents (respectively descents) by $\asc(\kappa)$ (respectively $\dsc(\kappa)$). The \emph{chromatic quasisymmetric function} of $G$~\cite{ShWa16} is the generating function of proper colorings weighted by ascents: \begin{align} \label{eq:def Xg} X_G(x,q)=\sum_{\kappa:V\to \mathbb{Z}_+ \text{proper}}q^{\asc(\kappa)}x_{\kappa(1)}x_{\kappa(2)}\dots x_{\kappa(n)}. \end{align} It is clearly in $\ensuremath{\operatorname{QSym}}$, homogeneous of degree $n$. The chromatic symmetric function is $X_G(x,1)$ and was originally defined by Stanley~\cite{St95}. Letting $\rho$ be the linear involution on $\ensuremath{\operatorname{QSym}}$ defined by sending $M_{\alpha_1,\ldots,\alpha_k}$ to $ M_{\alpha_k,\ldots,\alpha_1}$ one has \[\sum_{\kappa:V\to \mathbb{Z}_+ \text{proper}}q^{\dsc(\kappa)}x_{\kappa(1)}x_{\kappa(2)}\dots x_{\kappa(n)}=q^{|E|}X_G(x,q^{-1})=\rho(X_G). \] Since $\rho$ leaves $\sym$ stable, we can use descents or ascents indifferently in the definition of $X_G$ when it happens to be symmetric, which is precisely the case we will be interested in. As mentioned in the introduction, a particular class of graphs of interest to us are \emph{Dyck graphs}. \begin{definition} A simple graph $G=([n],E)$ is a Dyck graph if for any $\{i<j\}\in E$, then $\{i'<j'\}\in E$ for all $i\leq i'<j'\leq j$.
\end{definition} Dyck graphs arise as incomparability graphs of natural unit interval orders; we will have no need for this description. A Dyck path $D$ uniquely determines a Dyck graph; given all the ways to index Dyck paths, we inherit various ways to index Dyck graphs, which we will employ. \begin{proposition}[\cite{ShWa16}] For $G$ a Dyck graph, $X_G(x,q)$ is a symmetric function. \end{proposition} \subsection{Guay-Paquet's rectangular formula} \label{sub:guay-paquet-rectangular} Let $G$ be a Dyck graph on $[n]$ corresponding to Dyck path $D$. Let $I=\{i-a+1,\dots,i\}$, $J=\{j,j+1,\dots,j+b-1\}$ with $i<j$ be subsets of $[n]$ such that $(i-a+1,j-1)\in E$ and $(i+1,j+b-1)\in E$. This forms an ``abelian rectangle'' $[i-a+1,i]\times [j,j+b-1]$. In terms of paths, this abelian rectangle corresponds to a certain ``abelian'' subpath of $D$ with $a$ north steps and $b$ east steps. Figure~\ref{fig:abelian_subpath} depicts a Dyck path $D$. The labeled squares along the diagonal give the vertex set of the associated Dyck graph. Edges are given by squares below the path and above the diagonal squares. In this example, we have $I=\{2,3,4\}$ and $J=\{7,8\}$, and the resulting abelian rectangle $I\times J$ in gray. The subpath of $D$ in this shaded region gives the abelian subpath. \begin{figure} \includegraphics[scale=0.6]{abelian_subpath.pdf} \caption{A Dyck path with an abelian subpath in bold and the abelian rectangle highlighted.} \label{fig:abelian_subpath} \end{figure} The modular law~\cite{GP-modular} says that if a subpath $\msf{ene}$ or $\msf{nen}$ is part of an abelian subpath, then \begin{align} \label{eq:XG_modular} (1+q)X_{U\msf{ene}V}=qX_{U\msf{een}V}+X_{U\msf{nee}V},\\ (1+q)X_{U\msf{nen}V}=qX_{U\msf{enn}V}+X_{U\msf{nne}V}. \end{align} Fix $U$ and $W$, and consider the set of all Dyck paths $UvW$ where $v$ is the abelian subpath. Let us assume that $v$ has $a$ north steps and $b$ east steps, so that the abelian rectangle has dimensions $a\times b$.
We denote the $\mathbb{C}(q)$-linear span of the chromatic symmetric functions $X_{UvW}$ by $\mc{X}_{a,b}$. Consider the map on the $\mathbb{C}(q)$-linear span of words with $a$ $\msf{n}$'s and $b$ $\msf{e}$'s, with image in $\mc{X}_{a,b}$, defined by \begin{align} v\mapsto X_{UvW} \end{align} and extended linearly. Comparing the relations \eqref{eq:mod_relations_path_algebra},\eqref{eq:mod_relations_path_algebra_1} of $\pathalg$ and the modular laws \eqref{eq:XG_modular}, we have in fact a map defined on $\pathalg_{a,b}$. We can thus apply this map to the relation in Theorem~\ref{th:GP}, and this gives precisely Theorem~\ref{th:intro_1}. \subsection{The Stanley--Stembridge conjecture} \label{sub:abelian_compendium} The Stanley--Stembridge conjecture~\cite{StanleyStembridge,St95} asserts that, if $G$ is a Dyck graph, then the chromatic symmetric function ${X_G}_{|_{q=1}}$ has a positive expansion in the $e_\lambda$ basis. Shareshian--Wachs~\cite{ShWa16} then extended it by conjecturing that the $e$-expansion of $X_G$ has coefficients that are polynomials in $q$ with nonnegative coefficients. Writing \begin{equation} \label{eq:expansion_elambda} X_G=\sum_\lambda c_\lambda^G e_\lambda, \end{equation} the conjecture posits: \begin{conjecture} \label{conj:stanley_stembridge} For any Dyck graph $G$ and any partition $\lambda$, the coefficient $c_\lambda^{G}$ is in $\mathbb{N}[q]$. \end{conjecture} Recall that an \emph{acyclic orientation} $A$ of a graph $G$ is an orientation of its edges such that the resulting directed graph has no directed cycles. Assuming $V(G)=[n]$, an \emph{ascent} of $A$ is an edge $i\to j$ with $1\leq i<j\leq n$. If $G$ is nonempty, then $A$ has at least one \emph{source}, i.e. a vertex with no incoming edge. The following theorem was proved in~\cite[Theorem 5.3]{ShWa16}, the case $q=1$ being already known to Stanley~\cite[Theorem 3.3]{St95}.
\begin{theorem} \label{th:sum_clambda} For any Dyck graph $G$ and any $k\geq 1$, the sum of $c^G_\lambda$ over all partitions with $k$ parts is the number of acyclic orientations of $G$ with $k$ sources, counted with weight $q^{\#\text{ ascents of }A}$. \end{theorem} In particular the sum over all $\lambda$ of $c^G_\lambda$ enumerates the acyclic orientations of $G$. \subsection{The abelian case} One case, called the \emph{abelian case}, has been particularly studied and proved in different ways. In the language of Section~\ref{sub:guay-paquet-rectangular}, this is when the Dyck graph $G$ on $n$ vertices has an associated abelian rectangle of maximal size $a\times b$ with $a+b=n$. In terms of Dyck paths, it means that the number of initial $\msf{n}$'s plus the number of final $\msf{e}$'s is $\geq n$; equivalently, the associated shape $\lambda=\lambda(G)$ satisfies $\lambda_1+\ell(\lambda)\leq n$. \smallskip We will now record and comment on two known $e$-expansions of $X_G$ when $G$ is abelian. The \emph{source sequence} $\sourceseq(A)=(m_1,\ldots,m_k)$ of $A$ is defined recursively as follows: if $S_1$ is the set of sources of $A$, then $m_1=|S_1|$ and $(m_2,\ldots,m_k)$ is the source sequence of the acyclic orientation obtained by restricting $A$ to $G\setminus S_1$. Let $G$ be an abelian Dyck graph with $\lambda=\lambda(G)$. Let $(a_1,\ldots,a_n)$ be its ascent sequence. We also assume $\lambda_1\geq \ell=\ell(\lambda)$ without loss of generality. Since the vertices of $G$ can be partitioned into two cliques, acyclic orientations can have at most two sources. The expansion of $X_G$ thus only involves partitions with at most two parts. \subsubsection{The formulas of Stanley and Harada and Precup} \label{subsub:SHP} Harada and Precup~\cite[Theorem 1.1]{HaradaPrecup} gave a proof of Conjecture~\ref{conj:stanley_stembridge} in the abelian case.
They used the celebrated work of Brosnan and Chow~\cite{BC18} that showed the connection of $X_G$ for any Dyck graph $G$ with the study of Hessenberg varieties. Their result can be readily formulated as follows: \begin{equation} \label{eq:abelian_harada_precup} X_G=|\mathrm{Acy}^q_1(G)|\,e_{n}+\sum_{\{i<j\}\notin E}q^{a_i+a_j}X_{G\setminus\{i,j\}}^{+(1,1)}, \end{equation} where $\mathrm{Acy}^q_1(G)$ is the set of acyclic orientations of $G$ with one source, counted according to ascents; and for a symmetric function $f=\sum c_\mu e_\mu$, then $f^{+(a,b,\dots)}\coloneqq\sum_{\mu}c_\mu e_{\mu_1+a,\mu_2+b,\dots}$. Now by iterating the previous equation one obtains easily: \begin{equation} \label{eq:abelian_qstanley} X_G=\sum_A q^{\#\text{ ascents of }A}\,e_{n-\initial(A),\initial(A)}, \end{equation} where $\initial(A)$ is the length of the run of $2$'s at the beginning of $\sourceseq(A)$. The case $q=1$ is due to Stanley in his original paper~\cite[Theorem 3.4 and Corollary 3.6]{St95}. In fact, Stanley's proof can be extended to include $q$ and thus prove~\eqref{eq:abelian_qstanley}, which thus gives an independent proof of the result of Harada and Precup. \subsubsection{The formula of Abreu and Nigro} A second proof was given by Abreu and Nigro~\cite[Theorem 1.3]{AbreuNigro}. Their result can be stated as follows: \begin{equation} \label{eq:abreu_nigro} X_G=\sum_{j=0}^{\ell}q^j\qfact{j}\qint{n-2j}H^{n-j-1}_{j}(\lambda)\,e_{n-j,j}. \end{equation} Note that we slightly simplified their formula: the coefficient of $e_{n-\ell,\ell}$ in~\eqref{eq:abreu_nigro} is given in~\cite{AbreuNigro} as $\qfact{\ell}H^{n-\ell}_{\ell}(\lambda)$. Let us explain why they coincide, which after simplifying by $\qfact{\ell}$ reduces to the identity \begin{equation} \label{eq:simple} H^{n-\ell}_{\ell}(\lambda)=q^{\ell}\qint{n-2\ell} H^{n-\ell-1}_{\ell}(\lambda). \end{equation} \begin{proof}[Sketch of the proof of~\eqref{eq:simple}] Write $N=n-\ell$. 
Fix a maximal rook configuration $C$ in $R(N-1,N-1,\lambda,\ell)$. Note that since $\ell=\ell(\lambda)$, all rooks in the top $\ell$ rows are necessarily inside $\lambda$, say in columns $J=\{j_1,\ldots,j_\ell\}$. One can extend $C$ to a configuration $C'$ in $R(N,N,\lambda,\ell)$ by inserting a rook in the bottom row in one of the $N-\ell$ columns $([N-1]\setminus J)\cup\{N\}$. Tracking the new unattacked cells gives us the coefficient $q^{\ell}\qint{N-\ell}$: there are $\ell$ new unattacked cells in the top $\ell$ positions of the last column of $C'$, while $\qint{N-\ell}$ comes from the inversions created by the insertion in the last row. \end{proof} \subsubsection{Comparison} It is certainly interesting to connect directly \eqref{eq:abreu_nigro} to \eqref{eq:abelian_harada_precup},\eqref{eq:abelian_qstanley}. More precisely, equating the two implies the following result: \begin{proposition} Let $G$ be an abelian Dyck graph. Then the number of acyclic orientations $A$ with $\initial(A)=j$, counted with weight $q^{\#\text{ ascents of }A}$, is given by $q^j\qfact{j}\qint{n-2j}H^{n-j-1}_{j}(\lambda)$. \end{proposition} Let us sketch a direct bijective proof for $q=1$: Let $A$ be an acyclic orientation with $\initial(A)=j$. Let $S=(\{u_1<v_1\},\{u_2<v_2\},\ldots,\{u_j<v_j\})$ be the first $j$ sets in the source sequence decomposition of $A$. Denote by $V$ the set containing these $2j$ vertices. The orientation $A$ is then entirely characterized by $S$ together with an acyclic orientation $A_1$ of $G\setminus V$ that has a unique source by the definition of $\initial(A)$. Recall that the cells of $\lambda=\lambda(G)$ are in bijection with the non-edges of $G$. From this it follows that the vertices of $V$ can be represented by $j$ non-attacking rooks in the shape $\lambda$, and they can be ordered in $j!$ ways. Let $\lambda'\subset (n-2j)\times(n-2j)$ be the shape corresponding to $G\setminus V$: it is obtained by removing the columns and rows occupied by the rooks in $\lambda$.
Now the number of acyclic orientations of $G\setminus V$ with a unique source is given by $(n-2j)$ times the number $H^{n-2j-1}_0(\lambda')$, and this can be proved bijectively~\cite[\S 9.1]{Alepan}. Putting things together, we get a $1$-to-$j!(n-2j)$ map between acyclic orientations of $G$ with $\initial(A)=j$, and pairs of rook placements in $R(j,\ell,\lambda,j)\times R(n-2j-1,n-2j-1,\lambda',0)$ with $\lambda'$ as above. These two rook placements can be naturally combined to give a rook placement in $R(n-j-1,n-j-1,\lambda,j)$, which completes the bijective proof. \section*{Acknowledgements} We are extremely grateful to Mathieu Guay-Paquet for generously sharing his unpublished work. Thanks also to Ira Gessel for helpful correspondence in the context of $q$-hit numbers. Finally we would like to thank Laura Colmenarejo, Alejandro Morales, and Greta Panova for sharing an early version of their article. \bibliographystyle{hplain}
\section{Introduction} \label{section.introduction} Epistemic logic investigates knowledge and belief, and change of knowledge and belief, in multi-agent systems. A foundational study is \cite{hintikka:1962}. Knowledge change was extensively modelled in temporal epistemic logics~\cite{Alur2002,halpernmoses:1990,Pnueli77,dixonetal.handbook:2015} and more recently in dynamic epistemic logics~\cite{baltagetal:1998,hvdetal.del:2007,moss.handbook:2015} including semantics based on histories of epistemic actions, both synchronously \cite{jfaketal.JPL:2009} and asynchronously \cite{degremontetal:2011}. Combinatorial topology has been used in distributed computing to model concurrency and asynchrony since \cite{BiranMZ90,FischerLP85,luoietal:1987}, with higher-dimensional topological properties entering the scene in \cite{HS99,herlihyetal:2013}. The basic structure in combinatorial topology is the \emph{simplicial complex}, a collection of subsets called \emph{simplices} of a set of \emph{vertices}, closed under containment. Geometric manipulations such as subdivision have natural combinatorial counterparts. An epistemic logic interpreted on simplicial complexes has been proposed in~\cite{goubaultetal:2018,ledent:2019,goubaultetal_postdali:2021}, including exact correspondence between simplicial complexes and certain multi-agent Kripke models where all relations are equivalence relations. Also, in those works and in e.g.\ \cite{diego:2021} the action models of~\cite{baltagetal:1998} are used to model distributed computing tasks and algorithms, with asynchronous histories treated as in \cite{degremontetal:2011}. Action models also reappear in their combinatorial topological incarnations as simplicial complexes \cite{goubaultetal:2018}. \begin{example} \label{figure.ssss} Below are some simplicial complexes and corresponding Kripke models. These simplicial complexes are for three agents. The vertices of a simplex are required to be labelled with different agents. 
A maximum-size simplex, called a facet, therefore consists of three vertices. This is called dimension $2$. These are the triangles in the figure. For $2$ agents we get lines/edges, for $4$ agents we get tetrahedra, etc. Such a triangle corresponds to a state in a Kripke model. A label like $0_a$ on a vertex represents that it is a vertex for agent $a$ and that agent $a$'s local state has value $0$, etcetera. We can see this as the Boolean value of a local proposition, where $0$ means false and $1$ means true. Together these labels determine the valuation in a corresponding Kripke model, for example in states labelled $0_a1_b1_c$ agent $a$'s value is $0$, $b$'s is $1$, and $c$'s is $1$. The single triangle corresponds to the singleton $\mathcal S5$ model below it. (We assume reflexivity and symmetry of accessibility relations.) With two triangles, if they only intersect in $a$ it means that agent $a$ cannot distinguish these states, so that $a$ is uncertain about the value of $b$; whereas if they intersect in $a$ and $c$, both $a$ and $c$ are uncertain about the value of $b$.
\begin{figure}[ht] \center \scalebox{1}{ \begin{tabular}{ccc} \begin{tikzpicture}[round/.style={circle,fill=white,inner sep=1}] \fill[fill=gray!25!white] (2,0) -- (4,0) -- (3,1.71) -- cycle; \fill[fill=gray!25!white] (0,0) -- (2,0) -- (1,1.71) -- cycle; \node[round] (b1) at (0,0) {$1_b$}; \node[round] (b0) at (4,0) {$0_b$}; \node[round] (c1) at (3,1.71) {$1_c$}; \node[round] (lc1) at (1,1.71) {$1_c$}; \node[round] (a0) at (2,0) {$0_a$}; \draw[-] (b1) -- (a0); \draw[-] (b1) -- (lc1); \draw[-] (a0) -- (lc1); \draw[-] (a0) -- (b0); \draw[-] (b0) -- (c1); \draw[-] (a0) -- (c1); \end{tikzpicture} & \begin{tikzpicture}[round/.style={circle,fill=white,inner sep=1}] \fill[fill=gray!25!white] (2,0) -- (2,2) -- (3.71,1) -- cycle; \fill[fill=gray!25!white] (0.29,1) -- (2,0) -- (2,2) -- cycle; \node[round] (b1) at (.29,1) {$1_b$}; \node[round] (b0) at (3.71,1) {$0_b$}; \node[round] (c1) at (2,2) {$1_c$}; \node[round] (a0) at (2,0) {$0_a$}; \draw[-] (b1) -- (a0); \draw[-] (b1) -- (c1); \draw[-] (a0) -- (b0); \draw[-] (b0) -- (c1); \draw[-] (a0) -- (c1); \end{tikzpicture} & \begin{tikzpicture}[round/.style={circle,fill=white,inner sep=1}] \fill[fill=gray!25!white] (2,0) -- (4,0) -- (3,1.71) -- cycle; \node[round] (b0) at (4,0) {$1_b$}; \node[round] (c1) at (3,1.71) {$1_c$}; \node[round] (a0) at (2,0) {$0_a$}; \draw[-] (a0) -- (b0); \draw[-] (b0) -- (c1); \draw[-] (a0) -- (c1); \end{tikzpicture} \\ && \\ \begin{tikzpicture} \node (010) at (.5,0) {$0_a1_b1_c$}; \node (001) at (3.5,0) {$0_a0_b1_c$}; \draw[-] (010) -- node[above] {$a$} (001); \end{tikzpicture} & \begin{tikzpicture} \node (010) at (.5,0) {$0_a1_b1_c$}; \node (001) at (3.5,0) {$0_a0_b1_c$}; \draw[-] (010) -- node[above] {$ac$} (001); \end{tikzpicture} & \begin{tikzpicture} \node (010) at (.5,0) {$0_a1_b1_c$}; \end{tikzpicture} \end{tabular} } \end{figure} The current state of the distributed system is represented by a distinguished facet of the simplicial complex, just as we need a distinguished or actual state in a 
Kripke model in order to evaluate propositions. For example, in the leftmost triangle, as well as in the leftmost state/world, $a$ is uncertain whether the value of $b$ is $0$ or $1$, whereas $b$ knows that its value is $1$, and all three agents know that the value of $c$ is $1$. \end{example} The so-called {\em impure simplicial complexes} were beyond the scope of the epistemic logic of \cite{goubaultetal:2018}. Impure simplicial complexes encode uncertainty over which processes are still active/alive. They model information states in synchronous message passing with crash failures (processes `dying'). \begin{example} \label{figure.sergio} The impure complex below is found in \cite[Section 13.5.2]{herlihyetal:2013}, here decorated with local values in order to illustrate epistemic features. Some vertices have been named. \begin{figure}[ht] \center \scalebox{1}{ \begin{tikzpicture}[round/.style={circle,fill=white,inner sep=1}] \fill[fill=gray!25!white] (2,0) -- (4,0) -- (3,1.71) -- cycle; \node[round] (b1) at (2,-2) {$0_b$}; \node (u) at (1.6,-2.2) {$u$}; \node[round] (a1) at (4,-2) {$0_a$}; \node[round] (b0) at (4,0) {$0_b$}; \node[round] (c1) at (3,1.71) {$1_c$}; \node[round] (x) at (2.6,1.51) {$x$}; \node[round] (a0) at (2,0) {$0_a$}; \node (v) at (1.6,-.2) {$v$}; \node (v) at (4.4,-.2) {$z$}; \node[round] (lc1) at (1.3,2.71) {$0_a$}; \node[round] (la0) at (.3,1) {$1_c$}; \node (w) at (-.1,.8) {$w$}; \node[round] (rb0) at (5.7,1) {$1_c$}; \node[round] (rc1) at (4.7,2.71) {$0_b$}; \draw[-] (b1) -- (a0); \draw[-] (a0) -- (b0); \draw[-] (b0) -- (c1); \draw[-] (a0) -- (c1); \draw[-] (la0) -- (lc1); \draw[-] (a0) -- (la0); \draw[-] (lc1) -- (c1); \draw[-] (a1) -- (b1); \draw[-] (a1) -- (b0); \draw[-] (b0) -- (rb0); \draw[-] (rc1) -- (c1); \draw[-] (rb0) -- (rc1); \end{tikzpicture} } \end{figure} This simplicial complex represents the result of possibly failed message passing between $a,b,c$. 
In the ($2$-dimensional) triangle in the middle the messages have all been received, whereas in the ($1$-dimensional) edges on the side a process has crashed: on the left, $b$ is dead, on the right, $a$ is dead, and below, $c$ is dead. We propose to interpret this complex in terms of knowledge and uncertainty. Consider triangle $vxz$ (that is $\{v,x,z\}$) and agent $a$, who `colours' (labels) vertex $v$. Vertex $v$ is the intersection of triangle $vxz$, edge $wv$, and edge $vu$. This represents $a$'s uncertainty about the actual state of information: either all three agents are alive, or $b$ is dead, or $c$ is dead. First, we now say, as before, that in triangle $vxz$ the value of $c$ is $1$, as in vertex $x$ the value of $c$ is $1$. Second, we now also say, which is novel, that in triangle $vxz$ agent $a$ knows that $c$'s value is $1$. This is justified because in all simplices intersecting in the $a$ vertex $v$, whenever $c$ is alive, its value is $1$. It is sufficient to consider maximal simplices (facets): in edge $wv$ agent $c$'s value is $1$, in triangle $vxz$ agent $c$'s value is $1$, and in edge $vu$ agent $c$ is dead, so we are excused from considering its value. Third, the semantics we will propose also justifies that in triangle $vxz$ agent $a$ knows that $b$ knows that the value of $c$ is $1$: whenever proposition `$b$ knows that the value of $c$ is $1$' denoted $\phi$ is defined (can be interpreted), it is true: $\phi$ is undefined in $wv$ because $b$ is dead, $\phi$ is undefined in $vu$ because $b$ knows that $c$ is dead, but $\phi$ is defined in $vxz$ and (just as for $a$) true.\footnote{In fact, in the triangle $vxz$ the three agents have common knowledge ($a$ knows that $b$ knows that $c$ knows \dots) of their respective values.} The point of evaluation can be a facet such as $vxz$ or $wv$ but it may be any simplex, such as vertex $v$. Also in vertex $v$, agent $a$ knows that $c$'s value is $1$. 
Such knowledge is unstable: an update of information (such as a model restriction) may remove $a$'s uncertainty and make her learn that the real information state is edge $vu$ wherein $c$ is dead. Then, it is no longer true that $a$ knows that the value of $c$ is $1$. However, a different update could have confirmed that the real information state is $vxz$, or $wv$, which would bear out $a$'s knowledge. \end{example} \paragraph*{Our results.} Inspired by the epistemic logic for pure complexes of \cite{goubaultetal:2018}, the knowledge semantics for certain Kripke models involving alive and dead agents in \cite{diego:2019}, and impure complexes modelling synchronous message passing in \cite{herlihyetal:2013}, we propose an epistemic logic for impure complexes. We define a three-valued modal logical semantics for an epistemic logical language, interpreted on arbitrary simplices of simplicial complexes that are decorated with agents and with local propositional variables. The issue of the definability of formulas has to be handled with care: standard notions such as validity, equivalence, and the interdefinability of dual modalities and non-primitive propositional connectives have to be properly addressed. This involves detailed proofs. As we interpret formulas in arbitrary simplices, results are shown for upwards and downwards monotony of truth that also depend on definability. For example, agent $c$ may have value $1$ in a simplex with a $c$ vertex, but the same proposition is undefined in a face of that simplex without that vertex. We propose an axiomatization {\bf S5}$^\top$ of our logic for which we show soundness: apart from some usual {\bf S5} axioms and rules it also contains some of those in versions adapted to definability. 
For example, {\bf MP} (modus ponens) is invalid, but instead we have a derivation rule {\bf MP}$^\top$ stating ``from $\phi$ and $\phi\rightarrow\psi$ infer $\phi^\top\rightarrow\psi$'', where the last validity means that ``whenever $\phi$ and $\psi$ are both defined, $\psi$ is true''. We then show that the epistemic semantics of pure complexes of \cite{goubaultetal:2018} is a special case of our semantics, and also how their version of the logic {\bf S5} is then recovered. Finally we show how impure simplicial complexes correspond to certain Kripke models with special conditions on the relations and valuations. Such Kripke models have a surplus of information that is absent in their simplicial correspondents. In Kripke models all propositional variables `must' have a value, even those of agents that are dead. But not in complexes. Some issues are left for further research, notably: the completeness of the axiomatization, the proper notion of bisimulation for impure complexes, the explicit representation of life and death in the language, and the extension of the logical language with dynamic modalities representing distributed algorithms and other updates. \paragraph*{Relevance for distributed computing.} We hope that our research is relevant for modelling knowledge both in synchronous and asynchronous distributed computing. Indeed, simplicial complexes are a very versatile alternative for Kripke models where all agents know their own state, as they make the powerful theorems of combinatorial topology available for epistemic analysis \cite{herlihyetal:2013}. Whereas this was already known for pure complexes \cite{HS99}, our work reveals that it is also true for impure complexes. Indeed, crashed processes can alternatively be modelled as non-responding processes in asynchronous message passing, on (larger) pure simplicial complexes. 
Dually, non-responding processes can be modelled as dead processes in a synchronous setting, for example after a time-out, on (smaller) impure complexes. Impure simplicial complexes therefore appear as a useful abstraction even for modelling asynchronous computations. \paragraph*{Survey of related works.} In a later section we will compare our work in technical detail to various related works. Uncertainty over which agents are alive and dead relates to epistemic logics of \emph{awareness} (of other agents) \cite{faginetal:1988,halpernR13,AgotnesA14a,hvdetal.jolli:2014}. Our modal semantics with the three values true, false and undefined relates to other \emph{multi-valued epistemic logics} \cite{Morikawa89,Fitting:1992,odintsovetal:2010,RivieccioJJ17}. Our epistemic notion that comes `just short of knowledge' is different from various notions of {\em belief} \cite{hintikka:1962,stalnaker:2005} including such notions for distributed systems \cite{MosesS93,HalpernSS09a}, and in particular from notions of \emph{knowledge} recently proposed in \cite{diego:2019} and in \cite{goubaultetal:2021}: these are also interpreted on impure simplicial complexes and therefore of particular interest. \paragraph*{Outline of our contribution.} Section~\ref{section.preliminaries} gives an introduction to simplicial complexes. Section~\ref{section.el} presents the epistemic semantics for impure complexes. Section~\ref{section.validities} presents the axiomatization {\bf S5}$^\top$ for our epistemic semantics. Section~\ref{section.pure} shows that the special case of pure complexes returns the prior semantics and logic of \cite{goubaultetal:2018}. Section \ref{section.correspondence} transforms impure complexes into a certain kind of Kripke models and vice versa. Section \ref{section.further} compares our results to the literature. Section~\ref{section.fff} describes the above topics for further research in detail.
\section{Technical preliminaries: simplicial complexes} \label{section.preliminaries} Given are a finite set $A$ of \emph{agents} (or \emph{processes}, or \emph{colours}) $a_0,a_1,\dots$ (or $a,b,\dots$) and a countable set $P = \bigcup_{a \in A} P_a$ of (\emph{propositional}) \emph{variables}, where all $P_a$ are mutually disjoint. The elements of $P_a$ for some $a \in A$ are the {\em local variables for agent $a$} and are denoted $p_a, q_a, p'_a, q'_a, \dots$ Arbitrary elements of $P$ are also denoted $p,q,p',q',\dots$ As usual in combinatorial topology, the number $|A|$ of agents is taken to be $n+1$ for some $n \in \mathbb N$, so that the dimension of a simplicial complex (to be defined below), that is one less than the number of agents, is $n$. \begin{definition}[Language] The {\em language of epistemic logic} $\mathcal L_{K}(A,P)$ is defined as follows, where $a \in A$ and $p_a \in P_a$. \[ \phi ::= p_a \mid \neg\phi \mid (\phi\wedge\phi) \mid \widehat{K}_a \phi \] \end{definition} Parentheses will be omitted unless confusion results. We will write $\mathcal L_{K}(P)$ if $A$ is clear from the context, and $\mathcal L_{K}$ if $A$ and $P$ are clear from the context. Connectives $\rightarrow$, $\leftrightarrow$, and $\vee$ are defined by abbreviation as usual, as well as $K_a\phi := \neg \widehat{K}_a \neg \phi$. In the semantics and inductive proofs of our work, $\widehat{K}_a\phi$ is a more suitable linguistic primitive than the more common $K_a\phi$. We emphasize, however, that the choice of whether $K_a$ or $\widehat{K}_a$ is the syntactic primitive is merely syntactic sugar. For $\emptyset \neq B\subseteq A$, we write $\mathcal L_{K}|B$ for $\mathcal L_K(B,\bigcup_{a \in B} P_a)$, where $\mathcal L_K|b$ means $\mathcal L_K|\{b\}$. For $\neg p$ we may write $\overline{p}$.
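For readers who like to experiment, the grammar of $\mathcal L_K$ transcribes directly into a recursive datatype. The following Python sketch is our own illustration (all names are ours, not from the text); the derived connective $K_a$ is obtained from $\widehat{K}_a$ exactly as in the abbreviation above:

```python
from dataclasses import dataclass
from typing import Union

@dataclass(frozen=True)
class Var:            # local variable p_a of agent a
    name: str
    agent: str

@dataclass(frozen=True)
class Neg:            # negation
    sub: 'Formula'

@dataclass(frozen=True)
class And:            # conjunction
    left: 'Formula'
    right: 'Formula'

@dataclass(frozen=True)
class Hat:            # \widehat{K}_a phi: agent a considers phi possible
    agent: str
    sub: 'Formula'

Formula = Union[Var, Neg, And, Hat]

def K(agent, phi):
    """K_a phi := ~ \\widehat{K}_a ~ phi (the abbreviation from the text)."""
    return Neg(Hat(agent, Neg(phi)))

def Or(phi, psi):     # phi v psi := ~(~phi & ~psi)
    return Neg(And(Neg(phi), Neg(psi)))

def Implies(phi, psi):
    return Or(Neg(phi), psi)
```

Frozen dataclasses give structural equality for free, so syntactic identity of formulas can be tested with `==`, e.g. `K('a', Var('p', 'a')) == Neg(Hat('a', Neg(Var('p', 'a'))))`.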
Expression $K_a \phi$ stands for `agent $a$ knows (that) $\phi$' and $\widehat{K}_a \phi$ stands for `agent $a$ considers it possible that $\phi$' that we will abbreviate in English as `agent $a$ considers (that) $\phi$.' The fragment without inductive construct $\widehat{K}_a\phi$ is the {\em language of propositional logic} (or {\em Booleans}) denoted $\mathcal L_\emptyset(A,P)$. \begin{definition}[Simplicial complex] Given a non-empty set of \emph{vertices} $V$ (or {\em local states}; singular form {\em vertex}), a \emph{(simplicial) complex} $C$ is a set of non-empty finite subsets of $V$, called \emph{simplices}\footnote{Privileged plural forms are: vertices, simplices, and complexes, where the English plural is privileged as complex is a common modern term, see \cite{herlihyetal:2013}.}, that is closed under non-empty subsets (such that for all $X \in C$, $\emptyset \neq Y \subseteq X$ implies $Y \in C$), and that contains all singleton subsets of $V$. \end{definition} If $Y \subseteq X$ we say that $Y$ is a \emph{face} of $X$. A maximal simplex in $C$ is a \emph{facet}. The facets of a complex $C$ are denoted as $\mathcal F(C)$, and the vertices of a complex $C$ are denoted as $\mathcal V(C)$. The \emph{star} of $X$, denoted $\mathsf{star}(X)$, is defined as $\{ Y \in C \mid X \subseteq Y\}$, where for $\mathsf{star}(\{v\})$ we write $\mathsf{star}(v)$. The dimension of a simplex $X$ is $|X|-1$, e.g., vertices are of dimension $0$, while edges are of dimension $1$. The dimension of a complex is the maximal dimension of its facets. A simplicial complex is \emph{pure} if all facets have the same dimension. Otherwise it is \emph{impure}. Complex $D$ is a \emph{subcomplex} of complex $C$ if $D \subseteq C$. Given processes $B \subseteq A$, a subcomplex of interest is the \emph{$m$-skeleton} $D$ of an $n$-dimensional complex $C$, that is, the maximal subcomplex $D$ of $C$ of dimension $m < n$, that can be defined as $D := \{ X \in C \mid |X| \leq m+1\}$. 
We will use this term for pure and impure complexes, where we typically consider a pure $m$-skeleton of an impure $n$-dimensional complex. We decorate the vertices of simplicial complexes with agents' names, which we often refer to as \emph{colours}. A \emph{chromatic map} $\chi: \mathcal V(C) \rightarrow A$ assigns colours to vertices such that different vertices of the same simplex are assigned different colours. Thus, $\chi(v)=a$ denotes that the local state or vertex $v$ belongs to agent $a$. Conversely, the vertex of a simplex~$X$ coloured with $a$ is denoted $X_a$. Expression $\chi(X)$, for $X \in C$, denotes $\{\chi(v) \mid v \in X\}$. A pair $(C,\chi)$ consisting of a simplicial complex $C$ and a chromatic map $\chi$ is a \emph{chromatic simplicial complex}. From now on, all simplicial complexes will be chromatic simplicial complexes. We extend the usage of the term `skeleton' as follows to chromatic simplicial complexes. Given processes $B \subseteq A$, the $B$-skeleton of a chromatic complex $(C,\chi)$, denoted $(C,\chi)|B$, is defined as $\{ X \in C \mid \chi(X) \subseteq B \}$ (it is required to be non-empty).
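As a small worked illustration of these notions (ours, for illustration only; the vertex names are arbitrary):

```latex
% A tiny worked example of skeletons (ours, for illustration only).
Let $V = \{u,v,w\}$ and let $C$ consist of all non-empty subsets of $V$,
so that $\mathcal F(C) = \{\{u,v,w\}\}$ and $C$ has dimension $2$.
The $1$-skeleton of $C$ is
\[ \{\, \{u\}, \{v\}, \{w\}, \{u,v\}, \{u,w\}, \{v,w\} \,\}, \]
a pure complex of dimension $1$. Now let $\chi(u)=a$, $\chi(v)=b$, and
$\chi(w)=c$. Then the $\{a,b\}$-skeleton of $(C,\chi)$ is
$\{\, \{u\}, \{v\}, \{u,v\} \,\}$.
```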
\weg{ \begin{figure} \center \scalebox{.7}{ \raisebox{3.5em}{ \begin{tikzpicture}[every node/.style={circle,fill=white,inner sep=1}, font=\large] \fill[fill=gray!25!white] (0,0) -- (2,0) -- (1,1.71) -- cycle; \node (a0) at (0,0) {$a$}; \node (b0) at (2,0) {$b$}; \node (c0) at (1,1.71) {$c$}; \draw[-] (a0) -- (c0); \draw[-] (a0) -- (b0); \draw[-] (b0) -- (c0); \end{tikzpicture} } \qquad\qquad \begin{tikzpicture}[every node/.style={circle,fill=white,inner sep=1}, font=\large] \fill[fill=gray!25!white] (0,0) -- (6,0) -- (3,5.13) -- cycle; \node (a0) at (0,0) {$a$}; \node (b0) at (2,0) {$b$}; \node (a1) at (4,0) {$a$}; \node (b1) at (6,0) {$b$}; \node (c0) at (1,1.71) {$c$}; \node (b3) at (2.5,2.2) {$b$}; \node (a3) at (3.5,2.2) {$a$}; \node (c3) at (3,1.24) {$c$}; \node (c1) at (5,1.71) {$c$}; \node (a2) at (2,3.42) {$a$}; \node (b2) at (4,3.42) {$b$}; \node (c2) at (3,5.13) {$c$}; \draw[-] (a0) -- (b0); \draw[-] (b0) -- (a1); \draw[-] (a1) -- (b1); \draw[-] (c0) -- (b3); \draw[-] (c1) -- (a3); \draw[-] (c3) -- (b3); \draw[-] (a3) -- (b3); \draw[-] (a3) -- (c3); \draw[-] (c2) -- (b3); \draw[-] (c2) -- (a3); \draw[-] (a0) -- (c0); \draw[-] (c0) -- (a2); \draw[-] (a2) -- (c2); \draw[-] (c2) -- (b2); \draw[-] (b2) -- (c1); \draw[-] (c1) -- (b1); \draw[-] (a0) -- (b3); \draw[-] (a0) -- (c3); \draw[-] (a2) -- (b3); \draw[-] (c3) -- (a1); \draw[-] (b0) -- (c3); \draw[-] (a3) -- (b2); \draw[-] (b1) -- (c3); \draw[-] (b1) -- (a3); \end{tikzpicture} } \caption{A two-dimensional simplicial complex and its chromatic subdivision} \label{fig.thousand} \end{figure} A \emph{subdivision} of a simplicial complex is a rather topological concept of which we only need the combinatorial counterpart called chromatic subdivision. 
Given $(C,\chi)$ of dimension $n$, the \emph{chromatic subdivision} is the chromatic complex $(C',\chi')$ where $\mathcal V(C') = \{(v,X)\mid v \in \mathcal V(C), X \in {\mathit{star}}(v)\}$, such that $C'$ is the set of simplices $X' = \{ (v_0,X_0), \dots, (v_k,X_k) \}$ for which $X_0 \subseteq \dots \subseteq X_k$ and such that for all $i,j \leq k$, if $(v_i,X_i)$ and $(v_j,X_j)$ are in $X'$ and $v_i \in X_j$, then $X_i \subseteq X_j$, and such that $\chi'(v,X) = \chi(v)$ for all vertices $(v,X)$ in $C'$. It is easy to show that the subdivision is again a chromatic simplicial complex of the same dimension. We can then see the vertices $(v,\{v\})$ of $C'$ as the `original' vertices $v$ of $C$ and the other vertices of $C'$ as the ones `created' by the subdivision. See \cite{herlihyetal:2013} for details. And see Figure~\ref{fig.thousand} that is worth a thousand words. } \paragraph*{Simplicial models.} We decorate the vertices of simplicial complexes with local variables $p_a \in P_a$ for $a \in A$, where we recall that $\bigcup_{a \in A} P_a = P$. \emph{Valuations} (valuation functions) assigning sets of local variables for agents $a$ to vertices coloured $a$ are denoted $\ell, \ell', \dots$ Given a vertex $v$ coloured $a$, the set $\ell(v) \subseteq P_a$ consists of $a$'s local variables that are true at $v$, where those in $P_a\setminus\ell(v)$ are the variables that are false at $v$. For any $X \in C$, $\ell(X)$ stands for $\bigcup_{v \in X} \ell(v)$. \begin{definition}[Simplicial model] A \emph{simplicial model} $\mathcal C$ is a triple $(C,\chi,\ell)$ where $(C,\chi)$ is a chromatic simplicial complex and $\ell$ a valuation function, and a {\em pointed simplicial model} is a pair $(\mathcal C,X)$ where $X \in C$. \end{definition} We slightly abuse the language by allowing terminology for simplicial complexes to apply to simplicial models, for example, $\mathcal C$ is impure if $C$ is impure, $\mathcal C|B$ is a $B$-skeleton if $(C,\chi)|B$ is a $B$-skeleton, etcetera.
A pointed simplicial model is also called a simplicial model. \weg{ For $\mathcal C'=(C',\chi',\ell')$ to be a subdivision of $\mathcal C=(C,\chi,\ell)$, we additionally require that for all $(v,X) \in \mathcal V(C')$ with $\chi(v)=a$, $p_a \in\ell'(v,X)$ iff $p_a\in \ell(v)$.\footnote{For an epistemic logician, the chromatic subdivision is the result of an epistemic action wherein the agents inform each other of the value of their local variables. There is a corresponding action model \cite{goubaultetal:2018,ledent:2019}.} } \paragraph*{Simplicial maps.} A \emph{simplicial map} ({\em simplicial function}) between simplicial complexes $C$ and $C'$ is a simplex preserving function $f$ between its vertices, i.e., $f: \mathcal V(C) \rightarrow \mathcal V(C')$ such that for all $X \in C$, $f(X) \in C'$, where $f(X) := \{ f(v) \mid v \in X \}$. We let $f(C)$ stand for $\{ f(X) \mid X \in C \}$. A simplicial map is {\em rigid} if it is dimension preserving, i.e., if for all $X \in C$, $|f(X)| = |X|$. We will abuse the language and also call $f$ a simplicial map between chromatic simplicial complexes $(C,\chi)$ and $(C',\chi')$, and between simplicial models $(C,\chi,\ell)$ and $(C',\chi',\ell')$. A \emph{chromatic simplicial map} is a {\em colour preserving} simplicial map between chromatic simplicial complexes, i.e., for all $v \in \mathcal V(C)$, $\chi'(f(v)) = \chi(v)$. We note that it is therefore also rigid. A simplicial map $f$ between simplicial models is {\em value preserving} if for all $v \in \mathcal V(C)$, $\ell'(f(v)) = \ell(v)$. If $f$ is not only colour preserving but also value preserving, and its inverse $f^{-1}$ as well, then~$\mathcal C$ and~$\mathcal C'$ are \emph{isomorphic}, notation ${\mathcal C \simeq \mathcal C'}$. 
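To illustrate chromatic simplicial maps, a small example (ours, with arbitrary vertex names):

```latex
% A tiny worked example of a chromatic simplicial map (ours).
Let $C$ have facets $\{u,v,w\}$ and $\{u',v,w\}$, with
$\chi(u) = \chi(u') = a$, $\chi(v) = b$, and $\chi(w) = c$, and let $C'$ be
the complex generated by the single facet $\{u',v,w\}$. The map $f$ with
$f(u) = u'$ that is the identity on $u'$, $v$, and $w$ is a chromatic
simplicial map from $(C,\chi)$ onto $(C',\chi)$: it is simplex preserving,
e.g., $f(\{u,v,w\}) = \{u',v,w\} \in C'$, and colour preserving, hence
also rigid.
```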
\paragraph*{Epistemic logic on pure simplicial models.} In \cite{ledent:2019,goubaultetal:2018} an epistemic semantics is given on pure simplicial models with the crucial clause that $K_a \phi$ ($\widehat{K}_a\phi$) is true in a facet $X$ of a pure simplicial model $\mathcal C = (C,\chi,\ell)$ of dimension $|A|-1$, if $\phi$ is true in all facets (some facet) $Y$ of $\mathcal C$ such that $a \in \chi(X \cap Y)$. They then proceed by showing that {\bf S5} augmented with `locality' axioms $K_a p_a \vee K_a \neg p_a$ (formalizing that all agents know their local variables) is the epistemic logic of simplicial complexes. In \cite{ledent:2019,goubaultetal_postdali:2021,hvdetal.simpl:2021} bisimulation for simplicial complexes is proposed and the required correspondence (on finite models) is shown. The subsection entitled `A semantics for simplices including facets' of \cite{hvdetal.simpl:2021} proposes an alternative semantics for pure complexes wherein formulas can be interpreted in any face of a facet and not merely in facets. Somewhat surprisingly, this proposal applies to impure complexes as well, with minor adjustments. We will proceed with such an epistemic semantics based on impure complexes. \weg{ \begin{example} Reconsider the simplicial complexes of Example~\ref{figure.ssss}, again depicted below. Let $1_b$ stand for `$p_b$ is true', $0_b$ for `$p_b$ is false', etcetera. Then, according to \cite{goubaultetal:2018}, for example: $K_a (\neg p_a\wedge p_b \wedge p_c)$ is true in facet $X''$ of simplicial model $\mathcal C''$ (and $b$ and $c$ also know this, and this is even common knowledge), and $K_a (K_b p_b \vee K_b \neg p_b)$ is true in facet $X'$ of simplicial model $\mathcal C'$ ($a$ knows that $b$ knows whether $p_b$) although $K_a p_b \vee K_a \neg p_b$ is false in facet $X'$ of simplicial model $\mathcal C'$ ($a$ does not know whether $p_b$, and similarly for $c$). In $\mathcal C$, on the other hand, agents $b$ and $c$ have full knowledge but not $a$.
\begin{figure}[ht] \scalebox{0.8}{ \begin{tikzpicture}[round/.style={circle,fill=white,inner sep=1}] \fill[fill=gray!25!white] (2,0) -- (4,0) -- (3,1.71) -- cycle; \fill[fill=gray!25!white] (0,0) -- (2,0) -- (1,1.71) -- cycle; \node[round] (b1) at (0,0) {$1_b$}; \node[round] (b0) at (4,0) {$0_b$}; \node[round] (c1) at (3,1.71) {$1_c$}; \node[round] (lc1) at (1,1.71) {$1_c$}; \node[round] (a0) at (2,0) {$0_a$}; \node (f1) at (3,.65) {$Y$}; \node (f1) at (1,.65) {$X$}; \node(c) at (-1,0) {$\mathcal C:$}; \draw[-] (b1) -- (a0); \draw[-] (b1) -- (lc1); \draw[-] (a0) -- (lc1); \draw[-] (a0) -- (b0); \draw[-] (b0) -- (c1); \draw[-] (a0) -- (c1); \end{tikzpicture} \qquad \begin{tikzpicture}[round/.style={circle,fill=white,inner sep=1}] \fill[fill=gray!25!white] (2,0) -- (2,2) -- (3.71,1) -- cycle; \fill[fill=gray!25!white] (0.29,1) -- (2,0) -- (2,2) -- cycle; \node[round] (b1) at (.29,1) {$1_b$}; \node[round] (b0) at (3.71,1) {$0_b$}; \node[round] (c1) at (2,2) {$1_c$}; \node[round] (a0) at (2,0) {$0_a$}; \node (f1) at (2.6,1) {$Y'$}; \node (f1) at (1.4,1) {$X'$}; \node(cp) at (-.5,0) {$\mathcal C':$}; \draw[-] (b1) -- (a0); \draw[-] (b1) -- (c1); \draw[-] (a0) -- (b0); \draw[-] (b0) -- (c1); \draw[-] (a0) -- (c1); \end{tikzpicture} \qquad \begin{tikzpicture}[round/.style={circle,fill=white,inner sep=1}] \fill[fill=gray!25!white] (2,0) -- (4,0) -- (3,1.71) -- cycle; \node[round] (b0) at (4,0) {$1_b$}; \node[round] (c1) at (3,1.71) {$1_c$}; \node[round] (a0) at (2,0) {$0_a$}; \node (f1) at (3,.65) {$X''$}; \node(cpp) at (1,0) {$\mathcal C'':$}; \draw[-] (a0) -- (b0); \draw[-] (b0) -- (c1); \draw[-] (a0) -- (c1); \end{tikzpicture} } \end{figure} \end{example} } \section{Epistemic logic on impure simplicial models} \label{section.el} \subsection{A logical semantics relating definability and truth} Given agents $A$ and variables $P = \bigcup_{a \in A} P_a$ as before, let $\mathcal C = (C,\chi,\ell)$ be a simplicial model, $X \in C$, and $\phi \in \mathcal L_K(A,P)$. 
Informally, we now wish to define a satisfaction relation $\models$ between {\bf some} but not all pairs $(\mathcal C,X)$ and formulas $\phi$. Not all, because if agent $a$ does not occur in $X$ (if $a \notin \chi(X)$), we do not wish to interpret certain formulas involving $a$, such as $p_a$ and formulas of shape $K_a \phi$. This is because, if $X$ were a facet (a maximal simplex), the absence of $a$ would mean that the process is absent/dead, and dead processes do not have local values or know anything. The relation should therefore be partial. However, this relation is fairly complex, because we may wish to interpret formulas $K_b \phi$ in $(\mathcal C,X)$, with $b \in\chi(X)$, where agent $a$ nevertheless `occurs' in $\phi$, for example expressing that a `live' process $b$ is uncertain whether process $a$ is `dead'. Formally, we therefore proceed slightly differently. We first define an auxiliary relation $\bowtie$ between (all) pairs $(\mathcal C,X)$ and formulas $\phi$, where $\mathcal C,X \bowtie \phi$ informally means that (the interpretation of) $\phi$ {\em is defined in} $(\mathcal C,X)$. The parentheses in $(\mathcal C,X)$ are omitted for notational brevity. For ``not $\mathcal C,X\bowtie \phi$'' we write $\mathcal C,X \not\bowtie\phi$, for ``$\phi$ is undefined in $(\mathcal C,X)$''. Subsequently we formally define the relation $\models$ between (after all) all pairs $(\mathcal C,X)$ and formulas $\phi$, where, as usual, $\mathcal C,X \models \phi$ means that $\phi$ {\em is true in} $(\mathcal C,X)$, and $\mathcal C,X \models \neg\phi$ means that $\phi$ {\em is false in} $(\mathcal C,X)$. Again, we omit the parentheses in $(\mathcal C,X)$ for notational brevity. For ``not $\mathcal C,X \models \phi$'' we write $\mathcal C,X \not\models \phi$.
Unusually, $\mathcal C,X \not\models \phi$ does not mean that $\phi$ is false in $(\mathcal C,X)$ but only means that $\phi$ is not true in $(\mathcal C,X)$, in which case it can be either false in $(\mathcal C,X)$ or undefined in $(\mathcal C,X)$. \begin{definition}[Definability and satisfaction relation] \label{def.defsat} We define the definability relation $\bowtie$ and subsequently the satisfaction relation $\models$ by induction on $\phi\in \mathcal L_K$. \[ \begin{array}{lcl} \mathcal C, X \bowtie p_a & \text{iff} & a \in \chi(X) \\ \mathcal C, X \bowtie \phi\wedge\psi & \text{iff} & \mathcal C, X \bowtie \phi \ \text{and} \ \mathcal C, X \bowtie \psi \\ \mathcal C, X \bowtie \neg \phi & \text{iff} & \mathcal C, X \bowtie \phi \\ \mathcal C,X \bowtie \widehat{K}_a\phi & \text{iff} & \mathcal C,Y \bowtie \phi \ \text{for some} \ Y \in C \ \text{with} \ a \in \chi(X \cap Y) \end{array}\] \[ \begin{array}{lcl} \mathcal C, X \models p_a & \text{iff} & a \in \chi(X) \ \text{and} \ p_a \in \ell(X) \\ \mathcal C, X \models \phi\wedge\psi & \text{iff} & \mathcal C, X \models \phi \ \text{and} \ \mathcal C, X \models \psi \\ \mathcal C, X \models \neg \phi & \text{iff} & \mathcal C, X \bowtie \phi \ \text{and} \ \mathcal C, X \not\models \phi \\ \mathcal C,X \models \widehat{K}_a\phi & \text{iff} & \mathcal C,Y \models \phi \ \text{for some} \ Y \in C \ \text{with} \ a \in \chi(X \cap Y) \end{array}\] Given $\phi,\psi\in\mathcal L_K$, $\phi$ is {\em equivalent} to $\psi$ if for all $(\mathcal C,X)$: \[\begin{array}{lll} \mathcal C,X \models \phi &\text{ iff }& \mathcal C,X \models \psi; \\ \mathcal C,X \models \neg\phi &\text{ iff }& \mathcal C,X \models \neg\psi; \\ \mathcal C,X \not\bowtie\phi &\text{ iff }& \mathcal C,X\not\bowtie\psi. \end{array}\] A formula $\phi\in\mathcal L_K$ is {\em valid} if for all $(\mathcal C,X)$: $\mathcal C,X \bowtie \phi$ implies $\mathcal C,X \models \phi$. 
\end{definition} The definition of equivalence is the obvious one for a three-valued logic with values true, false, and undefined. Concerning validity, it should be clear that it could not have been defined as ``$\phi\in\mathcal L_K$ is valid iff for all $(\mathcal C,X)$: $\mathcal C,X \models \phi$'' because then there would be no validities at all (if there is more than one agent), as even very simple formulas like $p_a \vee \neg p_a$ are undefined when interpreted in a vertex for an agent $b \neq a$. An equivalent formulation of the semantics for the epistemic modality for vertices is: \[ \begin{array}{lcl} \mathcal C,v \models \widehat{K}_a\phi & \text{iff} & \mathcal C,Y \models \phi \ \text{for some} \ Y \in \mathsf{star}(v). \\ \end{array}\] In this three-valued semantics $\mathcal C,X \not \models \phi$ is {\bf not} equivalent to $\mathcal C,X \models \neg \phi$. In particular, if $a$ is an agent not colouring a vertex in $X$ ($a \notin \chi(X)$), then: \[ \begin{array}{lllll} \text{for all} \ p_a \in P_a: \mathcal C,X \not \models p_a \ \text{and} \ \mathcal C,X \not \models \neg p_a; \\ \text{for all} \ \psi \in \mathcal L_K: \mathcal C,X \not \models \widehat{K}_a \psi \ \text{and} \ \mathcal C,X \not \models \neg \widehat{K}_a \psi. \end{array} \] On the other hand, an agent may consider it possible that some proposition is true even when this proposition cannot be evaluated. That is, we may have (see Example~\ref{example.xxx}): \[ \begin{array}{ll} \mathcal C,X \models \widehat{K}_a \phi & \text{even when} \\ \mathcal C,X \not\models \phi & \text{and} \\ \mathcal C,X \not\models \neg\phi \end{array} \] \begin{example} \label{example.xxx} Consider the following impure simplicial model $\mathcal C$ for three agents $a,b,c$ with local variables respectively $p_a,p_b,p_c$. A vertex $v$ is labelled $0_a$ if $\chi(v)=a$ and $p_a \notin \ell(v)$, $1_b$ if $\chi(v)=b$ and $p_b \in \ell(v)$, etc. Some simplices have been named.
\begin{center} \begin{tikzpicture}[round/.style={circle,fill=white,inner sep=1}] \fill[fill=gray!25!white] (2,0) -- (4,0) -- (3,1.71) -- cycle; \node[round] (b1) at (0,0) {$1_b$}; \node[round] (b0) at (4,0) {$0_b$}; \node[round] (c1) at (3,1.71) {$1_c$}; \node[round] (a0) at (2,0) {$0_a$}; \node (f1) at (3,.65) {$Y$}; \draw[-] (b1) -- node[above] {$X$} (a0); \draw[-] (a0) -- (b0); \draw[-] (b0) -- (c1); \draw[-] (a0) -- (c1); \end{tikzpicture} \end{center} As expected, $\mathcal C,X \models p_b \wedge \neg p_a$, where the conjunct $\mathcal C,X \models \neg p_a$ is justified by $\mathcal C,X \bowtie p_a$ and $\mathcal C,X \not\models p_a$. We also have $\mathcal C,X \models \widehat{K}_a p_b$, because $a \in \chi(X\cap X) = \chi(X)$ and $\mathcal C,X \models p_b$. Illustrating the novel aspects of the semantics, $\mathcal C,X \not\models p_c$, because $c \notin \chi(X)$ so that $\mathcal C,X \not\bowtie p_c$. Similarly $\mathcal C,X \not\models \neg p_c$. Also, $\mathcal C,X \not\models \widehat{K}_c \neg p_a$ and similarly $\mathcal C,X \not\models \neg \widehat{K}_c \neg p_a$, again because $c \notin\chi(X)$. Still, $\neg p_a$ is true throughout the model: $\mathcal C,X \models \neg p_a$ and $\mathcal C,Y \models \neg p_a$. Although $\mathcal C,X\not\bowtie p_c$, after all $\mathcal C,X \models \widehat{K}_a p_c$, because $a \in \chi(X \cap Y)$ and $\mathcal C,Y \models p_c$. Statement $\widehat{K}_a p_c$ says that agent $a$ considers it possible that atom $p_c$ is true. For this to be true agent $c$ does not have to be alive in facet $X$. It is sufficient that agent $a$ considers it possible that agent $c$ is alive. We also have $\mathcal C,Y \models K_b p_c$. This is easier to see after we have introduced the (derived) semantics for knowledge directly. We then explain in Example~\ref{example.zzz} why even $\mathcal C,X \models K_a p_c$ (not a typo). 
\end{example} In this three-valued modal-logical setting we need to prove many intuitively expected results anew over a fair number of lemmas. We finally obtain a version of the logic {\bf S5}. \begin{lemma} \label{lemma.modelsbowtie} If $\mathcal C,X \models \phi$ then $\mathcal C,X \bowtie \phi$. \end{lemma} \begin{proof} This is shown by induction on $\phi$. Let $(\mathcal C,X)$ be given, where $\mathcal C = (C,\chi,\ell)$. \begin{itemize} \item Let $\mathcal C,X \models p_a$. Then $a \in \chi(X)$. Therefore $\mathcal C,X \bowtie p_a$. \item $\mathcal C,X \models \neg\phi$ implies $\mathcal C,X \bowtie \phi$ by definition of the semantics. As $\mathcal C,X \bowtie \phi$ iff $\mathcal C,X \bowtie \neg\phi$ by definition of $\bowtie$, it follows that $\mathcal C,X \bowtie \neg\phi$. \item $\mathcal C,X \models \phi\wedge\psi$, iff $\mathcal C,X \models \phi$ and $\mathcal C,X \models \psi$. Therefore, by induction for $\phi$ and for $\psi$, $\mathcal C,X \bowtie \phi$ and $\mathcal C,X \bowtie \psi$, which is equivalent by definition to $\mathcal C,X \bowtie \phi\wedge\psi$. \item $\mathcal C,X \models \widehat{K}_a\phi$ implies $\mathcal C,Y \models \phi$ for some $Y \in C$ with $a \in \chi(X \cap Y)$, which implies (by induction) $\mathcal C,Y \bowtie \phi$ for some $Y \in C$ with $a \in \chi(X \cap Y)$, iff (by definition) $\mathcal C,X \bowtie \widehat{K}_a\phi$. \end{itemize} \vspace{-.8cm} \end{proof} In this semantics, $\mathcal C,X \bowtie \phi$ does not imply that all agents occurring in $\phi$ also occur in $X$. We recall Example~\ref{example.xxx} wherein $\mathcal C,X \models \widehat{K}_a p_c$ even though $c \notin\chi(X)$. On the other hand, if all agents occurring in $\phi$ also occur in $X$, then $\phi$ can be interpreted (is defined) in $X$. 
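The clauses of Definition~\ref{def.defsat} can be executed mechanically. The following sketch (ours, not part of the paper; the encoding of models and formulas is ad hoc) implements the relations $\bowtie$ and $\models$ and re-checks some claims of Example~\ref{example.xxx}:

```python
from itertools import chain, combinations

# The model of Example "example.xxx": agents a, b, c; vertex names record
# colour and local value (b1 is the vertex labelled 1_b, and so on).
chi = {"b1": "b", "a0": "a", "b0": "b", "c1": "c"}              # colouring
val = {"b1": {"p_b"}, "a0": set(), "b0": set(), "c1": {"p_c"}}  # true variables

facets = [frozenset({"b1", "a0"}), frozenset({"a0", "b0", "c1"})]

def faces(simplex):
    """All non-empty subsets of a simplex (downward closure)."""
    vs = sorted(simplex)
    return {frozenset(c) for r in range(1, len(vs) + 1)
            for c in combinations(vs, r)}

C = set(chain.from_iterable(faces(F) for F in facets))  # the complex

def colours(X):
    return {chi[v] for v in X}

# Formulas as tuples: ("p", a), ("not", f), ("and", f, g), ("Khat", a, f).
def defined(X, f):  # the definability relation:  C,X |><| f
    if f[0] == "p":
        return f[1] in colours(X)
    if f[0] == "not":
        return defined(X, f[1])
    if f[0] == "and":
        return defined(X, f[1]) and defined(X, f[2])
    if f[0] == "Khat":
        return any(f[1] in colours(X & Y) and defined(Y, f[2]) for Y in C)

def true(X, f):     # the satisfaction relation:  C,X |= f
    if f[0] == "p":
        return f[1] in colours(X) and any("p_" + f[1] in val[v] for v in X)
    if f[0] == "not":
        return defined(X, f[1]) and not true(X, f[1])
    if f[0] == "and":
        return true(X, f[1]) and true(X, f[2])
    if f[0] == "Khat":
        return any(f[1] in colours(X & Y) and true(Y, f[2]) for Y in C)

X = frozenset({"b1", "a0"})
p_a, p_b, p_c = ("p", "a"), ("p", "b"), ("p", "c")
```

In particular, `true(X, ("Khat", "a", p_c))` comes out `True` even though `defined(X, p_c)` is `False`: agent $a$ considers $p_c$ possible in $X$ although $p_c$ is undefined there, matching the example.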
Let $A: \mathcal L_K \rightarrow \mathcal{P}(A)$ map each formula to the subset of agents occurring in the formula: $A(p_a) := \{a\}$, $A(\phi\wedge\psi) := A(\phi)\cup A(\psi)$, $A(\neg \phi) := A(\phi)$, and $A(\widehat{K}_a\phi) := A(\phi) \cup \{a\}$; $A(\phi)$ is denoted as $A_\phi$. \begin{lemma} \label{lemma.aphibowtie} Let $\phi\in\mathcal L_K$ and $(\mathcal C,X)$ be given. Then $A_\phi \subseteq \chi(X)$ implies $\mathcal C,X \bowtie \phi$. \end{lemma} \begin{proof} By induction on $\phi\in\mathcal L_K$. If $\phi = \widehat{K}_a\psi$, also $A_\psi \subseteq \chi(X)$, so by induction we get $\mathcal C,X \bowtie \psi$, and with $a \in \chi(X \cap X)$ it follows that $\mathcal C,X \bowtie \widehat{K}_a\psi$. All other cases are trivial. \end{proof} The following may be obvious, but is worth emphasizing. \begin{lemma} Let $\phi\in\mathcal L_K$, $a \in A$, and $(\mathcal C,X)$ be given. Then $\mathcal C,X\bowtie \widehat{K}_a \phi$ iff $\mathcal C,X \bowtie K_a \phi$. \end{lemma} \begin{proof} We use the definition by abbreviation of $K_a\phi$: $\mathcal C,X \bowtie K_a \phi$, iff $\mathcal C,X \bowtie \neg \widehat{K}_a \neg \phi$, iff $\mathcal C,X \bowtie \widehat{K}_a \neg \phi$, iff $\mathcal C,Y \bowtie \neg \phi$ for some $Y$ with $a \in \chi(X \cap Y)$, iff $\mathcal C,Y \bowtie \phi$ for some $Y$ with $a \in \chi(X \cap Y)$, iff $\mathcal C,X\bowtie \widehat{K}_a \phi$. \end{proof} \begin{lemma} \label{lemma.defidual} Let $\mathcal C,X\bowtie\phi$. Then $\mathcal C,X\not\models\phi$ iff $\mathcal C,X\models\neg\phi$. \end{lemma} \begin{proof} From right to left follows from the semantics of negation. From left to right we proceed by contraposition. Assume $\mathcal C,X\not\models\neg\phi$. Then not ($\mathcal C,X\bowtie\phi$ and $\mathcal C,X \not\models\phi$), that is, $\mathcal C,X\not\bowtie\phi$ or $\mathcal C,X \models\phi$. As $\mathcal C,X\bowtie\phi$, it follows that $\mathcal C,X\models\phi$.
\end{proof} \begin{corollary} \label{corollary.bowtiemodels} $\mathcal C,X \bowtie \phi$, iff $\mathcal C,X\models\phi$ or $\mathcal C,X\models\neg\phi$. \end{corollary} In other words, if $\phi$ is defined in $(\mathcal C,X)$ then it has a truth value there, and vice versa. This may help to justify the definition of semantic equivalence of formulas. \begin{lemma} Let $\phi,\psi\in\mathcal L_K$. Equivalent formulations of `$\phi$ is equivalent to $\psi$' are: \begin{enumerate} \item For all $(\mathcal C,X)$: ($\mathcal C,X \models \phi$ iff $\mathcal C,X \models \psi$), ($\mathcal C,X \models \neg\phi$ iff $\mathcal C,X \models \neg\psi$), and ($\mathcal C,X \not\bowtie\phi$ iff $\mathcal C,X\not\bowtie\psi$). \item For all $(\mathcal C,X)$: ($\mathcal C,X \bowtie\phi$ iff $\mathcal C,X\bowtie\psi$), and ($\mathcal C,X \bowtie\phi$ and $\mathcal C,X\bowtie\psi$) imply ($\mathcal C,X \models \phi$ iff $\mathcal C,X \models \psi$). \end{enumerate}\vspace{-.8cm} \end{lemma} \begin{proof} This easily follows from propositional reasoning and the observation that $\mathcal C,X\models\phi$, $\mathcal C,X\models\neg\phi$, and $\mathcal C,X\not\bowtie\phi$ are mutually exclusive (see also Lemma~\ref{lemma.defidual} and Corollary~\ref{corollary.bowtiemodels}.) \end{proof} We will sometimes use the latter formulation of equivalence instead of the former. Neither is equivalent to ``For all $(\mathcal C,X)$: $\mathcal C,X \models \phi$ iff $\mathcal C,X \models \psi$.'' For example, $\phi = p_a \wedge \neg p_a$ and $\psi = p_b \wedge \neg p_b$ would be `equivalent' in that sense, as they are never true. \begin{lemma} \label{lemma.notneg} Any formula $\phi$ is equivalent to $\neg\neg\phi$. \end{lemma} \begin{proof} Let $(\mathcal C,X)$ be such that $\mathcal C,X \bowtie \phi$. Then also $\mathcal C,X \bowtie \neg\phi$ as well as $\mathcal C,X \bowtie \neg\neg\phi$. 
Using Lemma~\ref{lemma.defidual} twice, we obtain that: $\mathcal C,X \models \neg\neg\phi$, iff $\mathcal C,X \not\models \neg\phi$, iff $\mathcal C,X \models \phi$. \end{proof} Let $\xi[p/\phi]$ be uniform substitution in $\xi$ of $p$ by $\phi$ (replace all occurrences in $\xi$ of $p$ with $\phi$). \begin{lemma} \label{anotherone} Let $(\mathcal C,X)$ be given. If $\phi$ is equivalent to $\psi$, then $\mathcal C,X \bowtie \xi[p/\phi]$ iff $\mathcal C,X \bowtie \xi[p/\psi]$. \end{lemma} \begin{proof} All cases of the proof by induction on $\xi$ are obvious (where the base case holds because the assumption entails that $\mathcal C,X \bowtie \phi$ iff $\mathcal C,X\bowtie \psi$) except the one for knowledge. \bigskip \noindent $\mathcal C,X \bowtie (\widehat{K}_a\xi)[p/\phi]$ \\ iff \\ $\mathcal C,X \bowtie \widehat{K}_a(\xi[p/\phi])$ \\ iff \\ $\mathcal C,Y \bowtie \xi[p/\phi]$ for some $Y$ with $a \in \chi(X \cap Y)$ \\ iff \hfill induction \\ $\mathcal C,Y \bowtie \xi[p/\psi]$ for some $Y$ with $a \in \chi(X \cap Y)$ \\ iff \\ $\mathcal C,X \bowtie \widehat{K}_a(\xi[p/\psi])$ \\ iff \\ $\mathcal C,X \bowtie (\widehat{K}_a\xi)[p/\psi]$ \end{proof} \begin{lemma} \label{lemma.seven} If $\phi$ is equivalent to $\psi$, then $\xi[p/\phi]$ is equivalent to $\xi[p/\psi]$. \end{lemma} \begin{proof} This is shown by induction on $\xi$. We show the entire proof, but only the epistemic case is of interest. The cases for propositional variables follow directly. \begin{itemize} \item $p[p/\phi] = \phi$ is equivalent to $p[p/\psi] = \psi$. \item for $q\neq p$, $q[p/\phi] = q$ is equivalent to $q[p/\psi] = q$. \end{itemize} For the remaining cases, let $(\mathcal C,X)$ be given. From the assumption and Lemma~\ref{anotherone} we obtain that $\mathcal C,X \bowtie \xi[p/\phi]$ iff $\mathcal C,X \bowtie \xi[p/\psi]$. Let us therefore assume $\mathcal C,X \bowtie \xi[p/\phi]$ and $\mathcal C,X \bowtie \xi[p/\psi]$. 
It remains to prove that $\mathcal C,X \models \xi[p/\phi]$ iff $\mathcal C,X \models \xi[p/\psi]$. \begin{itemize} \item On the definability assumption we have that (Lemma~\ref{lemma.defidual}) $\mathcal C,X\models\neg\xi[p/\phi]$ iff $\mathcal C,X\not\models\xi[p/\phi]$, and also $\mathcal C,X\models\neg\xi[p/\psi]$ iff $\mathcal C,X\not\models\xi[p/\psi]$. Therefore: \bigskip \noindent $\mathcal C,X\models\neg\xi[p/\phi]$ iff $\mathcal C,X\models\neg\xi[p/\psi]$, \\ iff \\ $\mathcal C,X\not\models\xi[p/\phi]$ iff $\mathcal C,X\not\models\xi[p/\psi]$, \\ iff \\ $\mathcal C,X\models\xi[p/\phi]$ iff $\mathcal C,X\models\xi[p/\psi]$. \bigskip \noindent The last holds by inductive assumption. \item $\mathcal C,X\models (\xi\wedge\xi')[p/\phi]$, iff $\mathcal C,X\models \xi[p/\phi]\wedge\xi'[p/\phi]$, iff $\mathcal C,X\models \xi[p/\phi]$ and $\mathcal C,X\models\xi'[p/\phi]$, iff (by induction) $\mathcal C,X\models \xi[p/\psi]$ and $\mathcal C,X\models\xi'[p/\psi]$, (...) iff $\mathcal C,X\models (\xi\wedge\xi')[p/\psi]$. \item $\mathcal C,X\models (\widehat{K}_a\xi)[p/\phi]$, iff $\mathcal C,X\models \widehat{K}_a(\xi[p/\phi])$, iff $\mathcal C,Y\models \xi[p/\phi]$ for some $Y \in C$ with $a \in \chi(X \cap Y)$. We now use the inductive hypothesis that $\xi[p/\phi]$ is equivalent to $\xi[p/\psi]$. Using Lemma~\ref{anotherone} again but now for $(\mathcal C,Y)$ we obtain that $\mathcal C,Y \bowtie \xi[p/\phi]$ iff $\mathcal C,Y \bowtie \xi[p/\psi]$. Also, from $\mathcal C,Y\models \xi[p/\phi]$ and Lemma~\ref{lemma.modelsbowtie} it follows that $\mathcal C,Y\bowtie \xi[p/\phi]$. Therefore assuming $\mathcal C,Y \bowtie \xi[p/\phi]$ and $\mathcal C,Y \bowtie \xi[p/\psi]$ we may conclude that $\mathcal C,Y \models \xi[p/\phi]$ iff $\mathcal C,Y \models \xi[p/\psi]$. 
So that, continuing the proof: $\mathcal C,Y\models \xi[p/\phi]$ for some $Y \in C$ with $a \in \chi(X \cap Y)$, iff $\mathcal C,Y\models \xi[p/\psi]$ for some $Y \in C$ with $a \in \chi(X \cap Y)$, iff (...) $\mathcal C,X\models (\widehat{K}_a\xi)[p/\psi]$. \end{itemize} \vspace{-.8cm} \end{proof} Because other logical connectives are defined by notational abbreviation, we have to prove that the implied truth conditions for those other connectives still correspond to our intuitions. This is indeed the case, however, on the strict condition that both constituents of binary connectives are defined. \begin{lemma} \label{lemma.diamond} Let $\mathcal C = (C, \chi,\ell)$, $X \in C$, and $\phi,\psi \in \mathcal L_K$ be given. Then: \[ \begin{array}{llll} \mathcal C,X \models \phi \vee \psi & \text{iff} & \mathcal C,X \bowtie \phi, \mathcal C,X \bowtie \psi, \ \text{and } \mathcal C,X \models \phi \ \text{or} \ \mathcal C,X \models \psi \\ \mathcal C,X \models \phi \rightarrow \psi & \text{iff} & \mathcal C,X \bowtie \phi, \mathcal C,X \bowtie \psi, \ \text{and } \mathcal C,X \models \phi \ \text{implies} \ \mathcal C,X \models \psi \\ \mathcal C,X \models \phi \leftrightarrow \psi& \text{iff} &\mathcal C,X \bowtie \phi, \mathcal C,X \bowtie \psi, \text{ and } \mathcal C,X \models \phi \ \text{iff} \ \mathcal C,X \models \psi \\ \mathcal C,X \models K_a\phi & \text{iff} & \mathcal C,X \bowtie K_a\phi, \ \text{and} \\ && \mathcal C,Y \bowtie \phi \text{ implies } \mathcal C,Y \models \phi \ \text{for all} \ Y \in C \ \text{with} \ a \in \chi(X \cap Y) \end{array}\] \end{lemma} \begin{proof} The proofs are straightforward, but necessary, where instead of the notational abbreviations we use their definitions. \begin{itemize} \item Firstly, $\mathcal C,X \models \neg(\neg\phi \wedge \neg\psi)$, iff $\mathcal C,X \bowtie \neg\phi \wedge \neg\psi$ and $\mathcal C,X \not\models \neg\phi \wedge \neg\psi$.
Then, $\mathcal C,X \bowtie \neg\phi \wedge \neg\psi$, iff, respectively, $\mathcal C,X \bowtie \neg\phi$ and $\mathcal C,X \bowtie \neg\psi$, iff $\mathcal C,X \bowtie \phi$ and $\mathcal C,X \bowtie \psi$. Also, $\mathcal C,X \not\models \neg\phi \wedge \neg\psi$, iff $\mathcal C,X \not\models \neg\phi$ or $\mathcal C,X \not\models \neg\psi$. Now using that $\mathcal C,X \bowtie \phi$ and $\mathcal C,X \bowtie \psi$, and Lemma~\ref{lemma.defidual}, $\mathcal C,X \not\models \neg\phi$ or $\mathcal C,X \not\models \neg\psi$, iff $\mathcal C,X \models \phi$ or $\mathcal C,X \models \psi$. \item Similar to the first item. \item The third item can be obtained by seeing $\phi \leftrightarrow \psi$ as the abbreviation of $(\phi\rightarrow\psi)\wedge(\psi\rightarrow\phi)$, and then proceeding as before. \item By definition of the semantics for negation, $\mathcal C,X \models \neg \widehat{K}_a \neg \phi$ iff $\mathcal C,X \bowtie \widehat{K}_a \neg \phi$ and $\mathcal C,X \not \models \widehat{K}_a \neg\phi$. Then, $\mathcal C,X \bowtie \widehat{K}_a \neg \phi$, iff $\mathcal C,X \bowtie \neg \widehat{K}_a \neg \phi$, iff (by definition of the abbreviation) $\mathcal C,X \bowtie K_a \phi$, which establishes half of the proof obligation. Now: \medskip \noindent $\mathcal C,X \not \models \widehat{K}_a \neg\phi \\ \text{iff} \\ \text{not}[ \ \mathcal C,Y \models \neg\phi \text{ for some } Y \text{ with } a \in \chi(X \cap Y) \ ] \\ \text{iff} \\ \mathcal C,Y \not\models \neg\phi \text{ for all } Y \text{ with } a \in \chi(X \cap Y) \\ \text{iff} \\ \text{not}[ \ \mathcal C,Y \bowtie \phi \text{ and } \mathcal C,Y \not\models\phi \ ] \text{ for all } Y \text{ with } a \in \chi(X \cap Y) \\ \text{iff} \\ \mathcal C,Y \bowtie \phi \text{ implies } \mathcal C,Y \models\phi \text{ for all } Y \text{ with } a \in \chi(X \cap Y)$ \medskip \noindent That establishes the other half of the proof obligation. 
\end{itemize} \vspace{-.8cm} \end{proof} The direct semantics for other propositional and modal connectives of Lemma~\ref{lemma.diamond} is useful in the formulation of other results and in the proofs in the continuation, but also demonstrates our preference for the chosen linguistic primitives. In the semantics of conjunction we need not be explicit about definability. But in the semantics for the other binary connectives we need to be explicit about definability. A formula $\phi\leftrightarrow\psi$ that is an equivalence may well be true, and even valid, even when $\phi$ is not equivalent to $\psi$. For example, $(p_a \vee \neg p_a) \leftrightarrow (p_b \vee \neg p_b)$ is valid (whenever a simplex contains vertices for $a$ and for $b$, both are true) but $p_a \vee \neg p_a$ is not equivalent to $p_b \vee \neg p_b$ (given a simplex containing a vertex for $a$ but not for $b$, the first is defined but not the second). It is easy to see that for all $\phi \in \mathcal L_K$, $\phi\vee\neg\phi$ is valid. We only need to consider $(\mathcal C,X)$ such that $\mathcal C,X\bowtie \phi$, so that $\phi$ has a value in $(\mathcal C,X)$: either $\mathcal C,X\models\phi$ or $\mathcal C,X\models\neg\phi$, and therefore, with Lemma~\ref{lemma.diamond}, $\mathcal C,X \models \phi\vee\neg\phi$. Still, $\mathcal C,X\not\models \phi\vee\neg\phi$ if $\mathcal C,X\not\bowtie \phi$ (see Example~\ref{example.zzz}). We find the semantics of $\widehat{K}_a\phi$ more elegant than those of $K_a\phi$. In the semantics of $K_a\phi$ we can replace the part $\mathcal C,X \bowtie K_a \phi$ (also repeated below) by either of the following without changing the meaning of the definition.
\[\begin{array}{lll} (i) & \mathcal C,X \bowtie K_a \phi & \quad\quad\text{as in Lemma~\ref{lemma.diamond}} \\ (ii) & \mathcal C,Y \bowtie \phi \text{ for some } Y \text{ with } a \in \chi(X \cap Y) \\ (iii) \quad & \mathcal C,Y \models \phi \text{ for some } Y \text{ with } a \in \chi(X \cap Y) \end{array}\] In proofs we often use variation $(iii)$. A consequence of the knowledge semantics is that the agent may know a proposition even if that proposition is not defined in all possible simplices. In particular, the proposition may not be true in the actual simplex (although it cannot be false there): $K_a \phi$ means ``as far as I know, $\phi$'' rather than ``I know $\phi$.'' See Example~\ref{example.zzz} below. A good way to understand knowledge on impure complexes is dynamically: even if $K_a \phi$ is true, an update (such as a model restriction, or a subdivision) may be possible that makes agent $a$ learn that before the update $\phi$ was not true, namely when $\phi$ was not defined. Even when $K_a\phi$ is true, agent $a$ may be uncertain whether an agent involved in $\phi$ is alive, and the update can confirm that the agent was already dead. \begin{example}\label{example.zzz} We recall Example~\ref{example.xxx}, about the simplicial model $\mathcal C$ here depicted again. \begin{center} \begin{tikzpicture}[round/.style={circle,fill=white,inner sep=1}] \fill[fill=gray!25!white] (2,0) -- (4,0) -- (3,1.71) -- cycle; \node[round] (b1) at (0,0) {$1_b$}; \node[round] (b0) at (4,0) {$0_b$}; \node[round] (c1) at (3,1.71) {$1_c$}; \node[round] (a0) at (2,0) {$0_a$}; \node (f1) at (3,.65) {$Y$}; \draw[-] (b1) -- node[above] {$X$} (a0); \draw[-] (a0) -- (b0); \draw[-] (b0) -- (c1); \draw[-] (a0) -- (c1); \end{tikzpicture} \end{center} Observe that: \begin{itemize} \item $\mathcal C,X \models p_b\vee\neg p_b$ (because $b \in \chi(X)$) but $\mathcal C,X\not\models p_c\vee\neg p_c$ (because $c \notin \chi(X)$).
\item $\mathcal C,X \models \widehat{K}_a p_b \rightarrow \widehat{K}_a p_c$ but $\mathcal C,X \not\models p_b \rightarrow p_c$. \item $\mathcal C,X\models K_a p_c$ but $\mathcal C,X\not\models p_c$. For $\mathcal C,X\models K_a p_c$ it suffices that $\mathcal C,Y \models p_c$, as $\mathcal C,X \not\bowtie p_c$. \item $\mathcal C,X\not\models K_a p_c \rightarrow p_c$, because $\mathcal C,X\not\bowtie K_a p_c \rightarrow p_c$. \end{itemize} Formula $K_a p_c$ is true `by default', because given the two facets $X$ and $Y$ that agent $a$ considers possible, as far as $a$ knows, $p_c$ is true. This knowledge is defeasible because $a$ may learn that the actual facet is $X$ and not $Y$, in which case $a$ has no knowledge about agent $c$. However, $a$ considered it possible that she would have learnt that it was $Y$, in which case her knowledge was justified. Although $\mathcal C,X\not\models K_a p_c \rightarrow p_c$, we still have that $\models K_a p_c \rightarrow p_c$, as for that we only need to consider $(\mathcal C',X')$ such that both $K_a p_c$ and $p_c$ are defined in $X'$. The validity of $K_a p_c \rightarrow p_c$ means that there is no $(\mathcal C',X')$ such that $\mathcal C',X' \models \neg p_c \wedge K_a p_c$: knowledge cannot be verifiably incorrect. \end{example} \begin{example} \label{example.stackedkn} We provide an example of stacked knowledge operators. Consider the following impure simplicial model $\mathcal C$.
\begin{center} \begin{tikzpicture}[round/.style={circle,fill=white,inner sep=1}] \fill[fill=gray!25!white] (2,0) -- (4,0) -- (3,1.71) -- cycle; \node[round] (b0) at (4,0) {$1_b$}; \node[round] (c1) at (3,1.71) {$1_c$}; \node[round] (a0) at (2,0) {$1_a$}; \node (f1) at (3,.65) {$X$}; \draw[-] (a0) -- (b0); \draw[-] (b0) -- (c1); \draw[-] (a0) -- (c1); \fill[fill=gray!25!white] (6,0) -- (8,0) -- (7,1.71) -- cycle; \node[round] (b0r) at (8,0) {$0_b$}; \node[round] (c1r) at (7,1.71) {$1_c$}; \node[round] (a0r) at (6,0) {$1_a$}; \node (f1r) at (7,.65) {$W$}; \draw[-] (a0r) -- (b0r); \draw[-] (b0r) -- (c1r); \draw[-] (a0r) -- (c1r); \node[round] (b0m) at (5,1.71) {$0_b$}; \node[round] (b0mb) at (5,1.3) {$v$}; \draw[-] (c1) -- node[below] {$Y$} (b0m); \draw[-] (c1r) -- node[below] {$Z$} (b0m); \end{tikzpicture} \end{center} It holds that $\mathcal C,v \models K_b K_c p_a$. This is because $v \in Y$ and $v \in Z$ and $\mathcal C,Y \models K_c p_a$ as well as $\mathcal C,Z \models K_c p_a$. In all facets agent $b$ considers possible and wherein $K_c p_a$ is defined (and it is defined in both), it is true that agent $c$ knows that the value of $a$'s local variable is $1$. But in all facets agent $b$ considers possible, agent $a$ is dead, so $b$ `knows' that $a$ is dead (we cannot say this in the logical language --- see Section~\ref{section.further} for a discussion). Also, $\mathcal C,v \not\models K_c p_a$ because $c \notin \chi(v) = \{b\}$, and $\mathcal C,v \not\models p_a$ because $a \notin \{b\}$. Similarly, $\mathcal C,v \not\models K_b p_a$. Summing up, $K_b K_c p_a$ is true whereas none of $K_c p_a$, $K_b p_a$, and $p_a$ is true. \end{example} \begin{lemma} \label{lemma.upbowtie} If $\mathcal C,X \bowtie \phi$ and $Y \in C$ with $X \subseteq Y$, then $\mathcal C,Y \bowtie \phi$. \end{lemma} \begin{proof} By induction on formula structure. Let $Y \in C$ with $X \subseteq Y$.
\begin{itemize} \item $\mathcal C,X \bowtie p_a$, iff $a \in \chi(X)$, which implies $a \in \chi(Y)$, iff $\mathcal C,Y \bowtie p_a$. \item $\mathcal C,X \bowtie \neg\phi$, iff $\mathcal C,X \bowtie \phi$, which implies (by induction) $\mathcal C,Y \bowtie \phi$, iff $\mathcal C,Y \bowtie \neg\phi$. \item $\mathcal C,X \bowtie \phi\wedge\psi$, iff $\mathcal C,X \bowtie \phi$ and $\mathcal C,X \bowtie \psi$, which implies (by induction) $\mathcal C,Y \bowtie \phi$ and $\mathcal C,Y \bowtie \psi$, iff $\mathcal C,Y \bowtie \phi\wedge\psi$. \item $\mathcal C,X \bowtie \widehat{K}_a\phi$, iff $\mathcal C,Z \bowtie \phi$ for some $Z \in C$ with $a \in \chi(X \cap Z)$, which implies (as $X \subseteq Y$) $\mathcal C,Z \bowtie \phi$ for some $Z \in C$ with $a \in \chi(Y \cap Z)$, iff $\mathcal C,Y \bowtie \widehat{K}_a\phi$. \end{itemize} \vspace{-.8cm} \end{proof} \begin{proposition}\label{prop.star} If $\mathcal C,X \models \phi$ and $Y \in C$ such that $X \subseteq Y$, then $\mathcal C,Y \models \phi$. \end{proposition} \begin{proof} The proof is by induction on the structure of formulas $\phi$. In order to make the proof work for the case of negation we prove a stronger statement (by mutual induction). \begin{quote} {\em Let $(\mathcal C,X)$ with $\mathcal C = (C,\chi,\ell)$ be given. For all $\phi\in\mathcal L_K$ and for all $X,Y \in C$ with $X \subseteq Y$: $\mathcal C,X \models \phi$ implies $\mathcal C,Y \models \phi$, and $\mathcal C,X \models \neg\phi$ implies $\mathcal C,Y \models \neg\phi$.} \end{quote} Interestingly, the case $\widehat{K}_a\phi$ below does not require the use of the induction hypothesis. \begin{itemize} \item $\mathcal C,X \models p_a$, iff $a \in \chi(X)$ and $p_a \in \ell(X)$, iff $a \in \chi(X)$ and $p_a \in \ell(X_a)$. As $X \subseteq Y$, also $a \in \chi(Y)$, and $X_a=Y_a$. Thus, $a \in \chi(Y)$ and $p_a \in \ell(Y_a) \subseteq \ell(Y)$ which is by definition $\mathcal C,Y \models p_a$. 
\item $\mathcal C,X \models \neg p_a$, iff $\mathcal C,X \bowtie p_a$ and $\mathcal C,X \not\models p_a$, iff $a \in \chi(X)$ and $p_a \notin \ell(X)$, iff $a \in \chi(X)$ and $p_a \notin \ell(X_a)$. As $X \subseteq Y$, again we obtain that $a \in \chi(Y)$ and $X_a=Y_a$, so $p_a \notin \ell(Y_a)$. Thus, $a \in \chi(Y)$ and $p_a \notin \ell(Y)$ and therefore $\mathcal C,Y \models \neg p_a$. \item $\mathcal C, X\models \neg \phi$ implies $\mathcal C, Y \models \neg \phi$ by the mutual part of the induction for~$\phi$. \item $\mathcal C, X\models \neg \neg \phi$, iff (by Lemma~\ref{lemma.notneg}) $\mathcal C, X \models \phi$, which implies by induction $\mathcal C, Y \models \phi$, iff (by once more Lemma~\ref{lemma.notneg}) $\mathcal C, Y\models \neg \neg \phi$. \item $\mathcal C,X \models \phi\wedge\psi$, iff $\mathcal C,X \models \phi$ and $\mathcal C,X \models \psi$, which implies (by induction) $\mathcal C,Y \models \phi$ and $\mathcal C,Y \models \psi$, iff $\mathcal C,Y \models \phi\wedge\psi$. \item $\mathcal C,X \models \neg(\phi \wedge \psi)$, iff $\mathcal C,X \bowtie \phi \wedge \psi$ and $\mathcal C,X \not\models \phi \wedge \psi$, iff $\mathcal C,X \bowtie \phi$, $\mathcal C,X \bowtie \psi$, and $\mathcal C,X \not\models \phi$ or $\mathcal C,X \not\models \psi$, iff $\mathcal C,X \models \neg\phi$ and $\mathcal C,X \bowtie \psi$, or $\mathcal C,X \bowtie \phi$ and $\mathcal C,X\models \neg\psi$. Using induction for either $\phi$ or $\psi$ and Lemma~\ref{lemma.upbowtie}, we obtain $\mathcal C,Y \models \neg\phi$ and $\mathcal C,Y \bowtie \psi$, or $\mathcal C,Y \bowtie \phi$ and $\mathcal C,Y\models \neg\psi$, which is equivalent to $\mathcal C,Y \models \neg(\phi \wedge \psi)$. 
\item $\mathcal C,X \models \widehat{K}_a\phi$, iff $\mathcal C,Z \models \phi$ for some $Z \in C$ with $a \in \chi(X \cap Z)$, which implies (as $X \subseteq Y$) that $\mathcal C,Z \models \phi$ for some $Z \in C$ with $a \in \chi(Y \cap Z)$, iff $\mathcal C,Y \models \widehat{K}_a\phi$. \item $\mathcal C,X \models \lnot\widehat{K}_a\phi$, iff $\mathcal C, X \bowtie\widehat{K}_a\phi$ and $\mathcal C,X \not\models \widehat{K}_a\phi$, iff $\mathcal C,Z \bowtie \phi$ for some $Z \in C$ with $a \in \chi(X \cap Z)$ and $\mathcal C,Z \not\models \phi$ for all $Z \in C$ with $a \in \chi(X \cap Z)$, iff $a \in \chi(X)$ and $\mathcal C,Z \bowtie \phi$ for some $Z \in \mathsf{star}(X_a)$ and $\mathcal C,Z \not\models \phi$ for all $Z \in \mathsf{star}(X_a)$, which implies (because $X \subseteq Y$ and $Y_a = X_a$) that $a \in \chi(Y)$ and $\mathcal C,Z \bowtie \phi$ for some $Z \in \mathsf{star}(Y_a)$ and $\mathcal C,Z \not\models \phi$ for all $Z \in \mathsf{star}(Y_a)$. This statement is equivalent to $\mathcal C,Y \models \lnot\widehat{K}_a\phi$. \end{itemize} \vspace{-.8cm} \end{proof} \begin{proposition} \label{lemma.ysubx} If $\mathcal C,X \models \phi$ and $Y \in C$ such that $Y \subseteq X$ and $\mathcal C,Y \bowtie \phi$, then $\mathcal C,Y \models \phi$. \end{proposition} \begin{proof} Let now $Y \in C$ such that $Y \subseteq X$. In all inductive cases we assume that the formula is defined in $Y$. \begin{itemize} \item $\mathcal C,X \models p_a$, iff $a \in \chi(X)$ and $p_a \in \ell(X)$, that is, $p_a \in \ell(X_a)$. As $\mathcal C,Y \bowtie p_a$, also $a \in \chi(Y)$, so that $a \in \chi(X \cap Y)$. Therefore $X_a \in X \cap Y$, so that $p_a \in \ell(X_a) \subseteq \ell(Y)$. \item $\mathcal C,X \models \neg\phi$, iff $\mathcal C,X\bowtie\phi$ and $\mathcal C,X \not \models \phi$. Using the contrapositive of Proposition~\ref{prop.star}, $\mathcal C,X \not \models \phi$ implies $\mathcal C,Y\not \models\phi$. 
From that, together with the assumption $\mathcal C,Y\bowtie\phi$, we obtain by definition $\mathcal C,Y \models \neg\phi$. \item $\mathcal C,X \models \phi\wedge\psi$, iff $\mathcal C,X \models \phi$ and $\mathcal C,X \models \psi$, which implies (by induction) $\mathcal C,Y \models \phi$ and $\mathcal C,Y \models \psi$, iff $\mathcal C,Y \models \phi\wedge\psi$. \item $\mathcal C,X \models \widehat{K}_a\phi$, iff $\mathcal C,Z \models \phi$ for some $Z \in C$ with $a \in \chi(X \cap Z)$. Assumption $\mathcal C,Y \bowtie \widehat{K}_a \phi$ implies $a \in \chi(Y)$, so that it follows from $a \in \chi(X \cap Z)$ and $Y \subseteq X$ that $a \in \chi(Y \cap Z)$. Therefore $\mathcal C,Z \models \phi$ for some $Z \in C$ with $a \in \chi(Y \cap Z)$, which is by definition $\mathcal C,Y \models \widehat{K}_a\phi$. \end{itemize} \vspace{-.8cm} \end{proof} An alternative formulation of Propositions~\ref{prop.star} and \ref{lemma.ysubx} is, respectively: \begin{itemize} \item If $\mathcal C,X \models \phi$ and $Y \in \mathsf{star}(X)$, then $\mathcal C,Y \models \phi$. \item If $\mathcal C,X \models \phi$, $\mathcal C,Y \bowtie \phi$ and $X \in \mathsf{star}(Y)$, then $\mathcal C,Y \models \phi$. \end{itemize} A more efficient way to determine the truth of $\phi$ may be to consider facets only. The following are straightforward consequences of the combined Propositions~\ref{prop.star} and \ref{lemma.ysubx}. \begin{corollary} Let $\mathcal C,X \bowtie \phi$. Then the following are pairwise equivalent: \begin{itemize} \item $\mathcal C,X \models \phi$ \item $\mathcal C,Y \models \phi$ for all faces $Y \in \mathsf{star}(X)$ \item $\mathcal C,Y \models \phi$ for all facets $Y \in \mathsf{star}(X)$ \end{itemize} \vspace{-.6cm} \end{corollary} \begin{example} We recall Example~\ref{example.xxx}. Let $u$ be the vertex labelled $0_a$. Then: \begin{itemize} \item $\mathcal C,u \models K_a p_c$ and $\{u\} \subseteq Y$, therefore $\mathcal C,Y \models K_a p_c$ by Proposition~\ref{prop.star}.
\item $\mathcal C,Y \models p_c$ and $\{u\} \subseteq Y$, however $\mathcal C,u \not\bowtie p_c$, therefore we cannot conclude that $\mathcal C,u \models p_c$ by Proposition~\ref{lemma.ysubx}; indeed from $\mathcal C,u \not\bowtie p_c$ it follows that $\mathcal C,u \not\models p_c$. \item $\mathcal C,Y \models \neg p_a$, $\{u\} \subseteq Y$ and $\mathcal C,u \bowtie \neg p_a$, therefore $\mathcal C,u \models \neg p_a$ by Proposition~\ref{lemma.ysubx}. \end{itemize}\vspace{-.8cm} \end{example} \section{Axiomatization} \label{section.validities} In this section we present an axiomatization for our three-valued epistemic semantics. It is a version of the well-known axiomatization {\bf S5} for the two-valued epistemic semantics. We will show soundness for this axiomatization. We will not show completeness for this axiomatization: although we are confident that it is complete, the construction of a canonical simplicial model for our three-valued semantics seems a non-trivial and lengthy technical exercise. The axiomatization requires an auxiliary syntactic notion: for any formula $\phi$ we define an equidefinable but valid formula $\phi^\top$. \begin{definition} For a formula $\phi$ we define formula $\phi^\top$ recursively as follows: \begin{itemize} \item $p_a^\top := p_a \vee \neg p_a$; \item $(\neg \phi)^\top := \phi^\top$; \item $(\phi \wedge \psi)^\top := \phi^\top \wedge \psi^\top$; \item $(\widehat{K}_a \phi)^\top := \widehat{K}_a \phi^\top$. \end{itemize} Formulas $p_a^\top$ are also denoted $\top_a$. \end{definition} \begin{lemma} For any formula $\phi\in \mathcal L_K$, $\phi^\top$ is valid. \end{lemma} \begin{proof} The proof is by induction on the construction of $\phi$. The base case of $p_a$ clearly yields a valid formula. Conjunction of valid formulas is valid, and the modality $\widehat{K}_a$ applied to a valid formula produces a valid formula. 
\end{proof} \begin{lemma} \label{lem:equidef} For any $\phi\in \mathcal L_K$ and simplicial model $(\mathcal C,X)$: $\mathcal C, X \bowtie \phi$ iff $\mathcal C, X \bowtie \phi^\top$. \end{lemma} \begin{proof} The proof is by induction on the construction of $\phi$. \end{proof} In the continuation we will also need another, basic, lemma. \begin{lemma} \label{lem:MPloc} Let $(\mathcal C,X)$ and $\phi,\psi\in \mathcal L_K$ be given. Then $\mathcal C, X \models \phi\rightarrow \psi$ and $\mathcal C, X \models \phi$ imply $\mathcal C, X \models \psi$. \end{lemma} \begin{proof} This follows directly from the derived semantics of implication (Lemma~\ref{lemma.diamond}). \end{proof} \begin{definition}[Axiomatization S5$^\top$] The axiomatization {\bf S5}$^\top$ consists of the following axioms and rules. \[\begin{array}{l|l} \mathbf{Taut} & \text{all instantiations of propositional tautologies} \\ \mathbf{L} & K_a p_a \vee K_a \neg p_a \\ \mathbf{K^\top} & K_a (\phi \rightarrow \psi) \rightarrow K_a \phi \rightarrow K_a (\phi^\top\rightarrow\psi) \\ \mathbf{T} & K_a \phi \rightarrow \phi \\ \mathbf{4} & K_a \phi \rightarrow K_a K_a \phi \\ \mathbf{5} & \widehat{K}_a \phi \rightarrow K_a \widehat{K}_a \phi \\ \mathbf{MP^\top} & \text{from } \phi\rightarrow\psi \text{ and } \phi \text{ infer } \phi^\top \rightarrow \psi \\ \mathbf{N} & \text{from } \phi \text{ infer } K_a \phi \end{array}\] \end{definition} The soundness of this axiomatization is shown over a number of propositions. Given the presence of axiom {\bf L} featuring propositional variables $p_a$ instead of arbitrary formulas $\phi,\psi,\dots$ we can immediately observe that {\bf S5}$^\top$ is not closed under uniform substitution. For example, $K_a p_a \vee K_a \neg p_a$ is derivable (and even an axiom) but, for $a \neq b$, $K_a p_b \vee K_a \neg p_b$ is not derivable. Therefore the logic with axiomatization {\bf S5}$^\top$ is not a normal modal logic. 
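As an illustration of the $\cdot^\top$ translation used in $\mathbf{K^\top}$ and $\mathbf{MP^\top}$, consider the formula $K_a(p_b \wedge \neg p_c)$, which abbreviates $\neg\widehat{K}_a\neg(p_b \wedge \neg p_c)$. Unfolding the recursive clauses of the definition we obtain:
\[\begin{array}{lll}
(K_a(p_b \wedge \neg p_c))^\top & = & (\widehat{K}_a\neg(p_b \wedge \neg p_c))^\top \\
& = & \widehat{K}_a ((p_b \wedge \neg p_c)^\top) \\
& = & \widehat{K}_a (p_b^\top \wedge (\neg p_c)^\top) \\
& = & \widehat{K}_a (\top_b \wedge \top_c)
\end{array}\]
The result is valid and, by Lemma~\ref{lem:equidef}, defined in exactly the same pairs $(\mathcal C,X)$ as $K_a(p_b \wedge \neg p_c)$ itself, namely those where $b$ and $c$ are alive in some $Y$ with $a \in \chi(X \cap Y)$. It is this combination of validity and equidefinability that allows $\mathbf{K^\top}$ and $\mathbf{MP^\top}$ to weaken $\phi$ to $\phi^\top$ without affecting where the resulting formulas are defined.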
\begin{proposition} \label{prop.taut} Let $\phi \in L_\emptyset(A,P)$ be a tautology. Then $\models \phi$. \end{proposition} \begin{proof} Given some $\phi \in L_\emptyset$ and a simplicial model $(\mathcal C,X)$, assume $\mathcal C,X \bowtie \phi$. Then either $\mathcal C,X \models \phi$ or $\mathcal C,X \models \neg \phi$. As $\phi$ is a Boolean formula, this means that $a \in \chi(X)$ for all variables $p_a$ occurring in $\phi$, and also that its truth value only depends on the valuation of variables $P$ in $X$. In other words, we must have that either $\ell(X)\Vdash\phi$ or $\ell(X)\Vdash\neg\phi$, where $\Vdash$ is the standard (two-valued) propositional logical satisfaction relation in $L_\emptyset|\chi(X)$. As $\phi$ is a propositional tautology, this must be $\ell(X)\Vdash\phi$, and therefore also $\mathcal C,X \models \phi$. We have now shown that $\models \phi$. \end{proof} \begin{proposition} \label{prop.local} For all agents $a$ and variables $p_a$: $\models K_a p_a \vee K_a \neg p_a$. \end{proposition} \begin{proof} Let $(\mathcal C,X)$ be given such that $\mathcal C,X \bowtie K_a p_a \vee K_a \neg p_a$. Then $\mathcal C,X \bowtie K_a p_a$ and $\mathcal C,X \bowtie K_a \neg p_a$, that is, there is a $Y$ with $a \in \chi(X \cap Y)$ such that $\mathcal C,Y \bowtie p_a$ and there is a $Z$ with $a \in \chi(X \cap Z)$ such that $\mathcal C,Z \bowtie \neg p_a$ (and therefore also $\mathcal C,Z \bowtie p_a$). As $a \in \chi(X \cap Y)$ and $a \in \chi(X \cap Z)$ imply that $a \in \chi(X)$, we obtain $\mathcal C,X \bowtie p_a$; indeed, the choice $X=Y=Z$ serves as such a witness. It is easy to see that we also have that $\mathcal C,X \bowtie p_a$ implies $\mathcal C,X \bowtie K_a p_a \vee K_a \neg p_a$. Therefore $\mathcal C,X \bowtie K_a p_a \vee K_a \neg p_a$ iff $\mathcal C,X \bowtie p_a$. On the assumption that $\mathcal C,X \bowtie p_a$, we now show that $\mathcal C,X \models K_a p_a \vee K_a \neg p_a$.
As $\mathcal C,X \bowtie p_a$, we must have that $\mathcal C,X_a \models p_a$ or that $\mathcal C,X_a\models \neg p_a$. In the first case we obtain $\mathcal C,X_a \models K_a p_a$ and in the second case we obtain $\mathcal C,X_a \models K_a \neg p_a$ (the vertex coloured $a$ is a member of any simplex containing it). Therefore $\mathcal C,X_a \models K_a p_a \vee K_a \neg p_a$. With Proposition~\ref{prop.star} it follows that $\mathcal C,X \models K_a p_a \vee K_a \neg p_a$, as required. \end{proof} \begin{proposition}\label{prop.k} Let $a \in A$ and $\phi,\psi \in \mathcal L_K$ be given. Then $\models K_a (\phi \rightarrow \psi) \rightarrow K_a \phi \rightarrow K_a (\phi^\top\rightarrow \psi)$. \end{proposition} \begin{proof} Consider a simplex $X$ in a given $\mathcal C$ where the formula is defined. Then all the three $K_a$ constituents are defined. Assume that $\mathcal C, X \models K_a (\phi \rightarrow \psi)$ and $\mathcal C, X \models K_a \phi$. Now consider any $Y$ with $a \in \chi(X \cap Y)$ such that $\mathcal C, Y \bowtie \phi^\top \rightarrow \psi$. Note that such $Y$ must exist because $\mathcal C,X \bowtie K_a (\phi^\top \rightarrow \psi)$ (we recall that existence is also required in the $K_a$ semantics, see Lemma~\ref{lemma.diamond} and in particular the subsequent version $(iii)$). Then we have $\mathcal C, Y \bowtie \phi^\top$ and $\mathcal C, Y \bowtie \psi$. It follows from the former by Lemma~\ref{lem:equidef} that $\mathcal C, Y \bowtie \phi$. Hence, also $\mathcal C, Y \bowtie \phi \rightarrow \psi$. Given our assumptions, we conclude that $\mathcal C, Y \models \phi$ and $\mathcal C, Y \models \phi \rightarrow \psi$. Hence, by Lemma~\ref{lem:MPloc}, $\mathcal C, Y \models \psi$, implying that $\mathcal C, Y \models \phi^\top \rightarrow \psi$. Since $Y$ was arbitrary and since some such $Y$ exists, it follows that $\mathcal C, X \models K_a (\phi^\top\to\psi)$. 
\end{proof} \begin{proposition} \label{prop.standard} Let $\phi,\psi \in \mathcal L_K$ and $a \in A$ be given. \begin{itemize} \item $\models K_a \phi \rightarrow \phi$ \item $\models K_a \phi \rightarrow K_a K_a \phi$ \item $\models \widehat{K}_a \phi \rightarrow K_a \widehat{K}_a \phi$ \end{itemize} \vspace{-.6cm} \end{proposition} \begin{proof} Let $\mathcal C = (C,\chi,\ell)$ and $X \in C$ be given. \begin{itemize} \item Let $\mathcal C,X \bowtie K_a\phi$ and $\mathcal C,X \bowtie \phi$ (recall the semantics for implication from Lemma~\ref{lemma.diamond}). Now assume $\mathcal C,X \models K_a \phi$. Then $\mathcal C,Y \bowtie \phi$ implies $\mathcal C,Y \models \phi$ for all $Y$ with $a \in \chi(X \cap Y)$. In particular, for $X=Y$ we have assumed $\mathcal C,X \bowtie \phi$, and trivially $a \in \chi(X \cap X)$. Therefore $\mathcal C,X \models \phi$ as desired. \item Let $\mathcal C,X \bowtie K_a\phi$ and $\mathcal C,X \bowtie K_a K_a\phi$. It is easy to see that $\mathcal C,X \bowtie K_a\phi$ iff $\mathcal C,X \bowtie K_a K_a\phi$, so that the first suffices. But that means that the proof assumption $\mathcal C,X \models K_a \phi$ is already sufficient, because from that it follows that $\mathcal C,X\bowtie K_a\phi$. In order to prove $\mathcal C,X \models K_a K_a \phi$, let $Y \in C$ with $a \in \chi(X\cap Y)$ and $\mathcal C,Y \bowtie K_a \phi$ be arbitrary. We now must show that $\mathcal C,Y \models K_a \phi$. In order to prove that, let $Z \in C$ with $a \in \chi(Y\cap Z)$ and $\mathcal C,Z \bowtie \phi$ be arbitrary. We now must prove that $\mathcal C,Z \models \phi$. From $a \in \chi(X\cap Y)$ and $a \in \chi(Y\cap Z)$ follows $a \in \chi(X\cap Z)$. From $a \in \chi(X\cap Z)$, $\mathcal C,Z \bowtie \phi$ and assumption $\mathcal C,X\models K_a \phi$ then follows $\mathcal C,Z \models \phi$. This fulfils part of our proof requirement. 
However, in order to prove $\mathcal C,X \models K_a K_a \phi$ we still need to show that there is $V \in C$ with $a \in \chi(X\cap V)$ and $\mathcal C,V \models K_a\phi$. But this is elementary: take $V=X$. Therefore $\mathcal C,X \models K_a K_a \phi$. \item In this case it suffices to assume $\mathcal C,X \models \widehat{K}_a \phi$ (as from this it follows that $\mathcal C,X \bowtie \widehat{K}_a \phi$, as well as $\mathcal C,X \bowtie K_a \widehat{K}_a \phi$, so that the implication is defined in $\mathcal C,X$). Then $\mathcal C,Y \models \phi$ for some $Y \in C$ with $a \in \chi(X\cap Y)$. In order to prove $\mathcal C,X \models K_a \widehat{K}_a \phi$, first assume arbitrary $Z \in C$ with $a \in \chi(X\cap Z)$ and $\mathcal C,Z \bowtie \widehat{K}_a\phi$. We need to prove that $\mathcal C,Z \models \widehat{K}_a \phi$. From $a \in \chi(X \cap Y)$ and $a \in\chi(X \cap Z)$ it follows that $a \in\chi(Z \cap Y)$, and as we already obtained that $\mathcal C,Y \models \phi$, the required $\mathcal C,Z \models \widehat{K}_a \phi$ follows. However, in order to prove $\mathcal C,X \models K_a \widehat{K}_a \phi$ we still need to show that there is $V \in C$ with $a \in \chi(X\cap V)$ and $\mathcal C,V \models \widehat{K}_a\phi$. As in the previous case, it suffices to choose $V=X$. \end{itemize} \vspace{-.8cm} \end{proof} \begin{proposition}\label{prop.mp} Let $\phi,\psi \in \mathcal L_K$ be given. Then $\models \phi \rightarrow \psi$ and $\models \phi$ imply $\models\phi^\top\rightarrow \psi$. \end{proposition} \begin{proof} Let $(\mathcal C,X)$ be given such that $\mathcal C, X \bowtie \phi^\top \rightarrow \psi$. Then, both $\mathcal C, X \bowtie \phi^\top$ and $\mathcal C, X \bowtie \psi$. The former implies, by Lemma~\ref{lem:equidef}, that $\mathcal C, X \bowtie \phi$. Thus, $\mathcal C, X \bowtie \phi \rightarrow \psi$. 
It follows from the validity of $\phi\to\psi$ and $\phi$ that $\mathcal C, X \models \phi \rightarrow \psi$ and $\mathcal C, X \models\phi$. Thus, by Lemma~\ref{lem:MPloc}, we conclude that $\mathcal C, X \models \psi$. From that and $\mathcal C, X \bowtie \phi^\top \rightarrow \psi$ it follows that $\mathcal C, X \models \phi^\top \rightarrow \psi$, as required. \end{proof} \begin{proposition}\label{prop.nec} Let $a \in A$ and $\phi\in\mathcal L_K$ be given. Then $\models \phi$ implies $\models K_a \phi$. \end{proposition} \begin{proof} Let $(\mathcal C,X)$ be given such that $\mathcal C,X \bowtie K_a \phi$. We have to show that $\mathcal C,X \models K_a \phi$. We have two proof obligations. Firstly, let $Y$ be arbitrary such that $a \in \chi(X \cap Y)$ and assume that $\mathcal C,Y \bowtie \phi$. From $\mathcal C,Y \bowtie \phi$ and $\models \phi$ it follows that $\mathcal C,Y \models \phi$. Therefore, for all $Y$ with $a \in \chi(X \cap Y)$, $\mathcal C,Y \bowtie \phi$ implies $\mathcal C,Y \models \phi$. Secondly, from $\mathcal C,X \bowtie K_a \phi$ it follows that there is a $Z$ with $a \in \chi(X \cap Z)$ such that $\mathcal C,Z \bowtie \phi$. From $\mathcal C,Z \bowtie \phi$ and $\models \phi$ it follows that $\mathcal C,Z \models \phi$. Therefore, there exists a $Z$ with $a \in \chi(X \cap Z)$ such that $\mathcal C,Z \models \phi$. Therefore, $\mathcal C,X \models K_a \phi$. \end{proof} \begin{theorem} Axiomatization {\bf S5}$^\top$ is sound. \end{theorem} \begin{proof} Directly from Propositions~\ref{prop.local}, \ref{prop.k}, \ref{prop.standard}, \ref{prop.mp}, and \ref{prop.nec}. \end{proof} It is instructive to see why the usual {\bf K} axiom and {\bf MP} rule are invalid for our semantics. This insight might help to justify the unusual shape of {\bf K}$^\top$ and {\bf MP}$^\top$. We make the relevant counterexamples into formal propositions. 
\begin{proposition}\label{prop.nonk} There are $\phi,\psi\in \mathcal L_K$ and $a \in A$ such that $\not\models K_a (\phi\rightarrow\psi) \rightarrow K_a \phi \rightarrow K_a \psi$. \end{proposition} \begin{proof} Consider again the simplicial model from Example~\ref{example.xxx}, reprinted here. \begin{center} \begin{tikzpicture}[round/.style={circle,fill=white,inner sep=1}] \fill[fill=gray!25!white] (2,0) -- (4,0) -- (3,1.71) -- cycle; \node[round] (b1) at (0,0) {$1_b$}; \node[round] (b0) at (4,0) {$0_b$}; \node[round] (c1) at (3,1.71) {$1_c$}; \node[round] (a0) at (2,0) {$0_a$}; \node (f1) at (3,.65) {$Y$}; \draw[-] (b1) -- node[above] {$X$} (a0); \draw[-] (a0) -- (b0); \draw[-] (b0) -- (c1); \draw[-] (a0) -- (c1); \end{tikzpicture} \end{center} We can observe that: \begin{itemize} \item $\mathcal C,X \models K_a (p_c \rightarrow \neg p_b)$ \\ This is because $Y \in C$ is the single facet in $C$ such that $\mathcal C,Y \bowtie p_c \rightarrow \neg p_b$ (as this requires $\mathcal C,Y \bowtie p_c$ and $\mathcal C,Y \bowtie p_b$) and $a \in \chi(X \cap Y)$; and $\mathcal C,Y \models p_c \rightarrow \neg p_b$ because $\mathcal C,Y \models p_c$ and $\mathcal C,Y \models \neg p_b$. \item $\mathcal C,X \models K_a p_c$ \\ This is for the same reason as the previous item. \item $\mathcal C,X \not\models K_a \neg p_b$ \\ This is because there are two facets where $\neg p_b$ (and thus $p_b$) is defined: $\mathcal C,X \bowtie p_b$ and $\mathcal C,Y \bowtie p_b$. However, although $\mathcal C,Y \models \neg p_b$, $\mathcal C,X \models p_b$, and therefore $\mathcal C,X \not\models K_a \neg p_b$. \end{itemize} From the above we conclude that $\mathcal C,X \not\models K_a (p_c \rightarrow \neg p_b) \rightarrow K_a p_c \rightarrow K_a \neg p_b$, although $\mathcal C,X \bowtie K_a (p_c \rightarrow \neg p_b) \rightarrow K_a p_c \rightarrow K_a \neg p_b$. Therefore, $\not\models K_a (p_c \rightarrow \neg p_b) \rightarrow K_a p_c \rightarrow K_a \neg p_b$. 
\end{proof} \begin{proposition}\label{prop.nonmp} There are $\phi,\psi\in \mathcal L_K$ such that $\models\phi\rightarrow\psi$ and $\models\phi$ but $\not\models\psi$. \end{proposition} \begin{proof} The counterexample is \begin{align} \label{eq:one} \models\quad& \top_a \wedge \top_b \wedge \top_c \\ \label{eq:two} \models\quad& \top_a \wedge \top_b \wedge \top_c \rightarrow \widehat{K}_d ((\top_a \wedge p_c) \vee \widehat{K}_d(\top_b \wedge \neg p_c)) \\ \label{eq:three} \not\models\quad& \widehat{K}_d ((\top_a \wedge p_c) \vee \widehat{K}_d(\top_b \wedge \neg p_c)) \end{align} \paragraph*{Proof of \eqref{eq:one}.} We note that for any agent $e$ and any $(\mathcal C,X)$: $\mathcal C, X \bowtie \top_e$, iff $e \in \chi(X)$. Thus, \eqref{eq:one} is either undefined or true. It is true iff agents $a$, $b$, and $c$ are all alive in $X$. \paragraph*{Proof of \eqref{eq:two}.} Given some $(\mathcal C,X)$, the antecedent of the implication is defined in $X$ iff $a$, $b$, and $c$ are alive in $X$. For the consequent of the implication to be defined, it is additionally necessary to have $d$ alive in $X$. Overall, the implication \eqref{eq:two} is defined iff $\{a,b,c,d\}\subseteq \chi(X)$. Therefore, assume $\{a,b,c,d\}\subseteq \chi(X)$. To show \eqref{eq:two} it is sufficient to show that the consequent of the implication is true. Since $c \in \chi(X)$, $p_c$ is defined, thus there are only two possibilities, namely $\mathcal C, X \models p_c$ or $\mathcal C, X \models \neg p_c$. If $\mathcal C, X \models p_c$, then $\mathcal C, X \models \top_a \wedge p_c$ since $\top_a$ is true whenever defined. As we also have $\mathcal C, X \bowtie \widehat{K}_d(\top_b \wedge \neg p_c)$, it follows that $\mathcal C, X \models (\top_a \wedge p_c) \vee \widehat{K}_d(\top_b \wedge \neg p_c)$. Thus, since $d \in \chi(X \cap X)$, we have \[ \mathcal C, X \models \widehat{K}_d ((\top_a \wedge p_c) \vee \widehat{K}_d(\top_b \wedge \neg p_c)) \] and therefore \eqref{eq:two}.
If $\mathcal C, X \models \neg p_c$, then $\mathcal C, X \models \top_b \wedge \neg p_c$, and since $d \in \chi(X \cap X)$, also $ \mathcal C, X \models \widehat{K}_d(\top_b \wedge \neg p_c)$. Given $\mathcal C, X \bowtie \top_a \wedge p_c$, it again follows that $\mathcal C, X \models (\top_a \wedge p_c) \vee \widehat{K}_d(\top_b \wedge \neg p_c)$ and the argument for \eqref{eq:two} being true is now the same as in the preceding case. Thus, we showed that in either case \eqref{eq:two} is true. \paragraph*{Proof of \eqref{eq:three}.} Consider the simplicial model $\mathcal C$ depicted below, wherein we have only labelled the $c$ nodes with the value of the agent's propositional variable (we do not care about the value of the variables labelling the other nodes, only about the colour of the nodes). \begin{figure}[ht] \center \scalebox{1}{ \begin{tikzpicture}[round/.style={circle,fill=white,inner sep=1}] \fill[fill=gray!25!white] (2,0) -- (4,0) -- (3,1.71) -- cycle; \fill[fill=gray!25!white] (0,0) -- (2,0) -- (1,1.71) -- cycle; \node[round] (b1) at (0,0) {$b$}; \node[round] (b0) at (4,0) {$a$}; \node[round] (c1) at (3,1.71) {$0_c$}; \node[round] (lc1) at (1,1.71) {$1_c$}; \node[round] (a0) at (2,0) {$d$}; \node (f1) at (3,.65) {$Y$}; \node (f1) at (1,.65) {$X$}; \node(c) at (-1,0) {$\mathcal C:$}; \draw[-] (b1) -- (a0); \draw[-] (b1) -- (lc1); \draw[-] (a0) -- (lc1); \draw[-] (a0) -- (b0); \draw[-] (b0) -- (c1); \draw[-] (a0) -- (c1); \end{tikzpicture} } \end{figure} For this complex we can make the following observations in tandem, schematically listed in the order of justification, thus producing a `witness' against the validity of \eqref{eq:three} (and even two, merely for the sake of symmetry). 
\begin{align*} \mathcal C, X &\bowtie \top_b \wedge \neg p_c & \mathcal C, Y &\not\bowtie \top_b \wedge \neg p_c \\ \mathcal C, X &\not\models \top_b \wedge \neg p_c & \\ & & \mathcal C, Y &\bowtie \widehat{K}_d(\top_b \wedge \neg p_c) \\ & & \mathcal C, Y &\not\models \widehat{K}_d(\top_b \wedge \neg p_c) \\ \mathcal C, X &\not\bowtie \top_a \wedge p_c & \mathcal C, Y &\bowtie \top_a \wedge p_c \\ & & \mathcal C, Y &\not\models \top_a \wedge p_c \\ \mathcal C, X &\not\bowtie (\top_a \wedge p_c) \vee \widehat{K}_d(\top_b \wedge \neg p_c) & \mathcal C, Y &\bowtie (\top_a \wedge p_c) \vee \widehat{K}_d(\top_b \wedge \neg p_c) \\ & & \mathcal C, Y &\not\models (\top_a \wedge p_c) \vee \widehat{K}_d(\top_b \wedge \neg p_c) \\ \mathcal C, X &\bowtie \widehat{K}_d ((\top_a \wedge p_c) \vee \widehat{K}_d(\top_b \wedge \neg p_c)) & \mathcal C, Y &\bowtie \widehat{K}_d ((\top_a \wedge p_c) \vee \widehat{K}_d(\top_b \wedge \neg p_c)) \\ \mathcal C, X &\not\models \widehat{K}_d ((\top_a \wedge p_c) \vee \widehat{K}_d(\top_b \wedge \neg p_c)) & \mathcal C, Y &\not\models \widehat{K}_d ((\top_a \wedge p_c) \vee \widehat{K}_d(\top_b \wedge \neg p_c)) \end{align*} We have shown that \eqref{eq:three} is defined but false and is therefore not valid. \end{proof} \section{Pure complexes} \label{section.pure} When the simplicial complexes are pure and of dimension $|A|-1=n$, the logical semantics should become that of \cite{goubaultetal:2018,ledent:2019} and the logic should become {\bf S5}{+}{\bf L}. We show that this is indeed the case. \begin{lemma} \label{lemma.pure} Let $\mathcal C = (C,\chi,\ell)$ be pure and $\phi\in\mathcal L_K$. \begin{itemize} \item For all $Y \in \mathcal F(C)$: $\mathcal C,Y\bowtie\phi$. \item For all $X \in C$ with $a \in \chi(X)$: $\mathcal C,X\bowtie\widehat{K}_a\phi$. \end{itemize} \vspace{-.6cm} \end{lemma} \begin{proof} The first item is shown by induction on $\phi$. 
If $\phi = p_a$ for some $a \in A$ and $p_a \in P_a$, then, as $C$ is pure and its facets $Y$ contain all colours, $a \in \chi(Y)$ and therefore $\mathcal C,Y \bowtie p_a$. The cases of conjunction and negation are trivial. If $\phi = \widehat{K}_a\phi'$ for some $a \in A$, then $\mathcal C,Y\bowtie \widehat{K}_a\phi'$ iff there is $X$ with $a \in \chi(X\cap Y)$ and $\mathcal C,X\bowtie\phi'$. We now take $X=Y$ and we may assume $\mathcal C,Y\bowtie\phi'$ by induction. The second item is shown using the first. By definition, $\mathcal C,X\bowtie\widehat{K}_a\phi$ iff there is $Z$ with $a \in \chi(X\cap Z)$ and $\mathcal C,Z\bowtie\phi$. Take any $Z \in \mathcal F(C)$ with $X \subseteq Z$. By the first item it holds that $\mathcal C,Z\bowtie\phi$. Therefore, $\mathcal C,X\bowtie\widehat{K}_a\phi$. (For a vertex $v$ coloured with $a$ we therefore now always have that $\mathcal C,v\bowtie\widehat{K}_a\phi$. However, if $\chi(v)\neq a$ then also for pure complexes we still have that $\mathcal C,v\not\bowtie\widehat{K}_a\phi$!) \end{proof} Instead of a satisfaction relation $\models$ between pairs $(\mathcal C,X)$ consisting of a (possibly impure) simplicial model $\mathcal C$ for agents $A$ and a simplex $X$ contained in it, and a formula $\phi \in \mathcal L_K(A)$, we now define a satisfaction relation $\models_\mathsf{pure}$ between pairs $(\mathcal C,X)$ consisting of a pure simplicial model $\mathcal C$ for agents $A$ of dimension $|A|-1=n$ and a facet $X$ contained in it, and a formula $\phi \in \mathcal L_K(A)$. Note that the logical language is the same either way. We will show that this satisfaction relation $\models_\mathsf{pure}$ is as in \cite{goubaultetal:2018}. First we recall the definition of $\models$ (Definition~\ref{def.defsat}). As \cite{goubaultetal:2018} has $K_a\phi$ as linguistic primitive, not $\widehat{K}_a\phi$, we will do likewise.
\[ \begin{array}{lcl} \mathcal C, X \models p_a & \text{iff} & a \in \chi(X) \ \text{and} \ p_a \in \ell(X) \\ \mathcal C, X \models \phi\wedge\psi & \text{iff} & \mathcal C, X \models \phi \ \text{and} \ \mathcal C, X \models \psi \\ \mathcal C, X \models \neg \phi & \text{iff} & \mathcal C, X \bowtie \phi \ \text{and} \ \mathcal C, X \not\models \phi \\ \mathcal C,X \models K_a\phi & \text{iff} & \mathcal C,X \bowtie K_a\phi \ \text{and} \\ && \mathcal C,Y \bowtie \phi \text{ implies } \mathcal C,Y \models \phi \ \text{for all} \ Y \in C \ \text{with} \ a \in \chi(X \cap Y) \end{array}\] When $X$ and $Y$ above are facets in a pure complex $\mathcal C$ of dimension $n$, in view of Lemma~\ref{lemma.pure} we can scrap all definability requirements and we thus obtain \[ \begin{array}{lcl} \mathcal C, X \models_\mathsf{pure} p_a & \text{iff} & p_a \in \ell(X) \\ \mathcal C, X \models_\mathsf{pure} \phi\wedge\psi & \text{iff} & \mathcal C, X \models_\mathsf{pure} \phi \ \text{and} \ \mathcal C, X \models_\mathsf{pure} \psi \\ \mathcal C, X \models_\mathsf{pure} \neg \phi & \text{iff} & \mathcal C, X \not\models_\mathsf{pure} \phi \\ \mathcal C,X \models_\mathsf{pure} K_a\phi & \text{iff} & \mathcal C,Y \models_\mathsf{pure} \phi \ \text{for all} \ Y \in \mathcal F(C) \ \text{with} \ a \in \chi(X \cap Y) \end{array}\] which is the semantics of \cite{goubaultetal:2018}. Even on pure complexes, there is nothing against privileging the $\models$ semantics, because it is local and allows us to interpret formulas in, for example, vertices, which seems a natural thing to do. We then continue to need definability requirements such as $\mathcal C, v \not\bowtie p_a$ if $a \neq \chi(v)$. All prior results obtained for impure complexes are obviously preserved for pure complexes. In particular we note that semantic equivalence of formulas $\phi,\psi$ and validity of formulas $\phi$ becomes as usual, when restricting the definitions to pure complexes of dimension $n$ and facets. 
This was (see again Definition~\ref{def.defsat}) \begin{quote} Given $\phi,\psi\in\mathcal L_K$, $\phi$ is {\em equivalent} to $\psi$ if for all $(\mathcal C,X)$: $\mathcal C,X \models \phi$ iff $\mathcal C,X \models \psi$, $\mathcal C,X \models \neg\phi$ iff $\mathcal C,X \models \neg\psi$, and $\mathcal C,X \not\bowtie\phi$ iff $\mathcal C,X\not\bowtie\psi$. A formula $\phi\in\mathcal L_K$ is {\em valid} if for all $(\mathcal C,X)$: $\mathcal C,X \bowtie \phi$ implies $\mathcal C,X \models \phi$. \end{quote} and now has become, for facets $X$ only, \begin{quote} Given $\phi,\psi\in\mathcal L_K$, $\phi$ is {\em equivalent} to $\psi$ if for all $(\mathcal C,X)$: $\mathcal C,X \models_\mathsf{pure} \phi$ iff $\mathcal C,X \models_\mathsf{pure} \psi$. A formula $\phi\in\mathcal L_K$ is {\em valid} if for all $(\mathcal C,X)$: $\mathcal C,X \models_\mathsf{pure} \phi$. \end{quote} This should not be surprising, as the $\models_\mathsf{pure}$ semantics is exactly the \cite{goubaultetal:2018} semantics. This brings us to the axiomatization. Again, as the $\models_\mathsf{pure}$ semantics is the \cite{goubaultetal:2018} semantics, the logic must be the same, that is, {\bf S5} plus the locality axiom {\bf L}. The difference between {\bf S5} and {\bf S5}$^\top$ only concerns the {\bf K} axiom and the {\bf MP} derivation rule. 
We recall that {\bf S5}$^\top$ contained \[\begin{array}{ll} \mathbf{K}^\top \qquad & \models K_a (\phi\rightarrow\psi) \rightarrow K_a \phi \rightarrow K_a (\phi^\top \rightarrow \psi) \\ \mathbf{MP}^\top \qquad & \text{From } \models \phi\rightarrow\psi \text{ and } \models \phi, \text{infer } \models \phi^\top \rightarrow \psi \end{array}\] These are now replaced by \[\begin{array}{ll} \mathbf{K} \qquad & \models_\mathsf{pure} K_a (\phi\rightarrow\psi) \rightarrow K_a \phi \rightarrow K_a \psi \\ \mathbf{MP} \qquad & \text{From } \models_\mathsf{pure} \phi\rightarrow\psi \text{ and } \models_\mathsf{pure} \phi, \text{infer } \models_\mathsf{pure} \psi \end{array}\] So we get the complete axiomatization $\mathbf{S5}+\mathbf{L}$ for free, by referring to \cite{goubaultetal:2018}. We should note that this is not surprising. Validity of all axioms in {\bf S5}$^\top$ is preserved, as truth for all $(\mathcal C,X)$ where $\mathcal C$ may be impure and $X$ is any simplex, implies truth for all $(\mathcal C',X')$ where $\mathcal C'$ is pure of dimension $n$ and $X'$ a facet. Also {\bf K}$^\top$ remains valid, and clearly $\mathbf{K}^\top \leftrightarrow \mathbf{K}$ is $\models_\mathsf{pure}$ valid. Validity preservation of the derivation rules in {\bf S5}$^\top$ also holds but was not guaranteed, as the assumption of truth for all $(\mathcal C,X)$ where $\mathcal C$ may be impure and $X$ is any simplex is stronger than the assumption of truth for all $(\mathcal C',X')$ where $\mathcal C'$ is pure of dimension $|A|-1$ and $X'$ a facet. So necessitation {\bf Nec} and {\bf MP} would have to be shown anew. Here we can be lazy and for {\bf Nec} refer to \cite{goubaultetal:2018} (although an actual proof would not take more than a few lines). Concerning {\bf MP}, it suffices to observe that $\models_\mathsf{pure} \psi$ is equivalent to $\models_\mathsf{pure} \phi^\top \rightarrow \psi$, as $\phi^\top$ (that is now always defined) is a $\models_\mathsf{pure}$ validity.
This ends our exploration into pure complexes of dimension $|A|-1$ and this is the `sanity check' result that was expected. \section{Correspondence to Kripke models} \label{section.correspondence} A precise one-to-one correspondence between pure simplicial models and multi-agent Kripke models satisfying the following three conditions is given in \cite{goubaultetal:2018,ledent:2019}: $(i)$ all accessibility relations are equivalences, $(ii)$ all propositional variables are local, that is, there is an agent who knows the value of that variable, and $(iii)$ the intersection of all relations is the identity. We now propose Kripke models where agents may be dead {\bf or} alive. We call them {\em local epistemic models}. For local epistemic models we require that for each agent there is a subset of the domain on which the accessibility relation for that agent is an equivalence relation (these are the states where that agent is alive), whereas in the remainder of the domain the accessibility relation is empty (these are the states where that agent is dead). We also require that for any two distinct states there is a live agent in one that can distinguish it from the other, and a live (possibly but not necessarily different) agent in the other that can distinguish it from the one, generalizing the requirement for pure complexes that the intersection of relations is the identity \cite{goubaultetal:2018}. (This requirement should be credited to \cite{goubaultetal:2021}.) Additionally we require that a formula can only be interpreted in a given state of a model if the interpretation is defined in that state. For example, we do not wish to say that formula $p_a$ is true or false in a state where agent $a$'s accessibility relation is empty; $p_a$ should be undefined in that state.
Given that, however, and with the epistemic logic for impure complexes already at our disposal, we can map simplicial models to local epistemic models and vice versa, and these transformations are truth preserving. \paragraph*{Local epistemic models.} As before, given are the set $A$ of agents and the set $P = \bigcup_{a \in A} P_a$ of (local) variables. Given an abstract domain $S$ of objects called {\em states} and an agent $a$, a {\em local equivalence relation} ($\sim\!(a)$ or) $\sim_a$ is a binary relation between elements of $S$ that is an equivalence relation on a subset of $S$ denoted $S_a$ and otherwise empty. So, $\sim_a$ induces a partition on $S_a$, whereas ${\sim_a} = \emptyset$ on the complement $\overline{S}_a := S \setminus S_a$ of $S_a$. For $(s,t) \in {\sim_a}$ we write $s \sim_a t$, and for $\{ t \mid s \sim_a t \}$ we write $[s]_a$: this is an equivalence class of the relation $\sim_a$ on $S_a$. Given $s \in S$, let $A_s := \{a \in A \mid s \in S_a\}$. Set $A_s$ contains the agents that are alive in state $s$. Note that $a \in A_s$ iff $s \in S_a$. \begin{definition}[Local epistemic model] \emph{Local epistemic frames} are pairs $\mathcal M = (S,\sim)$ where $S$ is the (non-empty) domain of \emph{(global) states}, and $\sim$ is a function that maps the agents $a\in A$ to local equivalence relations $\sim_a$ that are required to be \emph{proper}, that is: for all distinct $s,t \in S$ there is a $b \in A_s$ such that $s \not\sim_b t$ and there is (therefore also) a $c \in A_t$ such that $s \not\sim_c t$. Agents $b$ and $c$ may but need not be the same. \emph{Local epistemic models} are triples $\mathcal M = (S,\sim,L)$, where $(S,\sim)$ is a local epistemic frame, and where \emph{valuation} $L$ is a function from $S$ to $\mathcal{P}(P)$ satisfying that for all $a \in A$, $p_a \in P_a$ and $s,t \in S_a$, if $s \sim_a t$ then $p_a \in L(s)$ iff $p_a \in L(t)$. We say that all variables $p_a$ are {\em local} for agent $a$.
A pair $(\mathcal M,s)$ where $s \in S$ is a \emph{pointed} local epistemic model. \end{definition} Note that there is no requirement for the valuation of variables $p_a$ on the complement $\overline{S}_a$. \paragraph*{Semantics on local epistemic models.} The interpretation of a formula $\phi\in \mathcal L_K$ in a global state of a given pointed local epistemic model $(\mathcal M,s)$ is by induction on the structure of $\phi$. As before, we need relations $\bowtie$ to determine whether the interpretation is defined, and $\models$ to determine its truth value when defined. \begin{definition} Let $\mathcal M = (S,\sim,L)$ be given. We define $\bowtie$ and $\models$ by induction on $\phi\in\mathcal L_K$. \[ \begin{array}{lcl} \mathcal M,s \bowtie p_a & \text{iff} & s \in S_a \\ \mathcal M,s \bowtie \neg\phi & \text{iff} & \mathcal M,s \bowtie \phi \\ \mathcal M,s \bowtie \phi\wedge\psi & \text{iff} & \mathcal M,s \bowtie \phi \text{ and } \mathcal M,s \bowtie \psi \\ \mathcal M,s \bowtie \widehat{K}_a \phi & \text{iff} & \mathcal M,t \bowtie \phi \text{ for some } t \text{ with } s \sim_a t \end{array} \] \[ \begin{array}{lcl} \mathcal M,s \models p_a & \text{iff} & s \in S_a \text{ and } p_a \in L(s) \\ \mathcal M,s \models \neg\phi & \text{iff} & \mathcal M,s \bowtie \phi \text{ and } \mathcal M,s \not\models \phi \\ \mathcal M,s \models \phi\wedge\psi & \text{iff} & \mathcal M,s \models \phi \text{ and } \mathcal M,s \models \psi \\ \mathcal M,s \models \widehat{K}_a \phi & \text{iff} & \mathcal M,t \models \phi \text{ for some } t \text{ with } s \sim_a t \end{array} \] Formula $\phi$ is {\em valid} iff for all $(\mathcal M,s)$, $\mathcal M,s \bowtie \phi$ implies $\mathcal M,s \models \phi$. We let $\I{\phi}_\mathcal M$ stand for $\{ s \in S \mid \mathcal M,s \models \phi \}$. This set is called the {\em denotation} of $\phi$ in $\mathcal M$. 
\end{definition} Results analogous to those for the semantics on simplicial complexes can be obtained for the semantics on local epistemic models, demonstrating the tricky interaction between $\bowtie$ and $\models$. For example, the interpretation of other propositional connectives such as disjunction, and that of knowledge, now becomes (we recall Lemma~\ref{lemma.diamond}): \[ \begin{array}{llll} \mathcal M,s \models \phi \vee \psi & \text{iff} & \mathcal M,s \bowtie \phi, \mathcal M,s \bowtie \psi, \ \text{and } \mathcal M,s \models \phi \ \text{or} \ \mathcal M,s \models \psi \\ \mathcal M,s \models K_a\phi & \text{iff} & \mathcal M,s \bowtie K_a\phi, \ \text{and} \\ && \mathcal M,t \bowtie \phi \text{ implies } \mathcal M,t \models \phi \ \text{for all} \ t \in S \ \text{with} \ s \sim_a t \end{array}\] Instead of showing all this in detail, we restrict ourselves to showing a truth value (and definability) preserving transformation between simplicial models and local epistemic models. \paragraph*{Correspondence between simplicial models and epistemic models.} Given agents $A$ and variables $P$, let $\mathcal K$ be the class of local epistemic models and let $\mathcal S$ be the class of (possibly impure) simplicial models. In \cite{goubaultetal:2018}, the \emph{pure} simplicial models are shown to correspond to local epistemic models of the following special kind: all relations are equivalence relations on the entire domain of the model (instead of on a subset only), and the intersection of the relations for all agents is the identity, a frame requirement they call \emph{proper} (and that is not bisimulation invariant). We give a generalization of their construction where the relations merely need to be equivalence relations on a subset of the domain.
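To make the interaction between $\bowtie$ and $\models$ concrete, the semantic clauses on local epistemic models can be transcribed into a small recursive evaluator. The following Python sketch is purely illustrative and not part of the formal development: the tuple encoding of formulas and the names `related`, `alive`, `defined` (playing the role of $\bowtie$) and `holds` (playing the role of $\models$) are our own assumptions.

```python
# Formulas as nested tuples:
#   ('var', p, a)                  variable p local to agent a
#   ('not', phi), ('and', phi, psi), ('hatK', a, phi)
# A model is a pair (classes, val): classes[a] is the partition of S_a
# into ~_a-equivalence classes, and val[s] is the set of variables true at s.

def related(classes, a, s, t):
    """s ~_a t: both states lie in the same ~_a-class."""
    return any(s in c and t in c for c in classes.get(a, []))

def alive(classes, a, s):
    """s is in S_a, i.e., agent a is alive in s."""
    return any(s in c for c in classes.get(a, []))

def defined(model, s, phi):              # the definability relation
    classes, _ = model
    op = phi[0]
    if op == 'var':
        return alive(classes, phi[2], s)
    if op == 'not':
        return defined(model, s, phi[1])
    if op == 'and':
        return defined(model, s, phi[1]) and defined(model, s, phi[2])
    if op == 'hatK':                     # defined at some ~_a-related state
        a, sub = phi[1], phi[2]
        return any(related(classes, a, s, t) and defined(model, t, sub)
                   for c in classes.get(a, []) for t in c)

def holds(model, s, phi):                # the satisfaction relation
    classes, val = model
    op = phi[0]
    if op == 'var':
        return alive(classes, phi[2], s) and phi[1] in val.get(s, set())
    if op == 'not':                      # defined and not true
        return defined(model, s, phi[1]) and not holds(model, s, phi[1])
    if op == 'and':
        return holds(model, s, phi[1]) and holds(model, s, phi[2])
    if op == 'hatK':                     # true at some ~_a-related state
        a, sub = phi[1], phi[2]
        return any(related(classes, a, s, t) and holds(model, t, sub)
                   for c in classes.get(a, []) for t in c)
```

For instance, on a two-state model in which agent $c$ is dead in state $s$ but alive in state $t$, with $s \sim_a t$, the evaluator reports $p_c$ as undefined at $s$ while $\widehat{K}_a p_c$ is nevertheless true at $s$, in line with the clauses above.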
We define maps $\sigma: \mathcal K \rightarrow \mathcal S$ ($\sigma$ for \emph{S}implicial) and $\kappa: \mathcal S \rightarrow \mathcal K$ ($\kappa$ for \emph{K}ripke), such that $\sigma$ maps each local epistemic model $\mathcal M$ to a simplicial model $\sigma(\mathcal M)$, and $\kappa$ maps each simplicial model $\mathcal C$ to a local epistemic model $\kappa(\mathcal C)$. As $\sigma$ maps a state $s$ in $\mathcal M$ to a facet $X=\sigma(s)$ in $\sigma(\mathcal M)$, and $\kappa$ maps each facet $X$ in $\mathcal C$ to a state $s = \kappa(X)$ in $\kappa(\mathcal C)$, these maps are also between pointed structures $(\mathcal M,s)$ respectively $(\mathcal C,X)$. Subsequently we show that for all $\phi\in\mathcal L_K$, $\mathcal M,s\models\phi$ iff $\sigma(\mathcal M,s) \models\phi$, and that (for facets $X$) $\mathcal C,X\models\phi$ iff $\kappa(\mathcal C,X)\models \phi$. \begin{definition} Given a local epistemic model $\mathcal M = (S,\sim,L)$, we define $\sigma(\mathcal M) = (C,\chi,\ell)$: \[\begin{array}{lll} X \in C &\text{iff}& X = \{ ([s]_a,a) \mid a \in B \} \text{ for some } s \in S \text{ and } B \subseteq A \text{ with } \emptyset\neq B \subseteq A_s \\ \chi(([s]_a,a)) &=& a \\ p_a \in \ell(([s]_a,a)) & \text{iff} & p_a \in L(s) \end{array}\vspace{-.6cm}\] \end{definition} It follows from the definition of $\sigma(\mathcal M)$ that: \[\begin{array}{lll} \mathcal V(C) &=& \{ ([s]_a,a) \mid s \in S, a \in A_s\} \\ X \in \mathcal F(C) &\text{iff}& X = \{ ([s]_a,a) \mid a \in A_s\} \text{ for some } s \in S \text{ with } A_s \neq \emptyset \end{array}\] Note that in a vertex denoted $([s]_a,a)$ the first argument $[s]_a$ is a set of states in which the name $a$ of the agent does not appear, which is why we need the second argument $a$ to determine its colour in $\chi(([s]_a,a))=a$. Given $s \in S$, we let $\sigma(s)$ denote the facet $\{ ([s]_a,a) \mid a \in A_s\}$, and by $\sigma(\mathcal M,s)$ we mean $(\sigma(\mathcal M),\sigma(s))$.
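As an illustration, the definition of $\sigma$ can be transcribed almost literally: vertices are pairs $([s]_a,a)$, the facet $\sigma(s)$ collects these for all agents alive in $s$, and the complex is closed under non-empty subsets. In this Python sketch, the encoding of a local epistemic model as a dictionary of partitions and the names `eq_class`, `sigma_facet` and `sigma` are our own assumptions, not notation from the formal development.

```python
from itertools import combinations

def eq_class(classes, a, s):
    """[s]_a as a frozenset, or None when agent a is dead in s."""
    for c in classes.get(a, []):
        if s in c:
            return frozenset(c)
    return None

def sigma_facet(classes, s):
    """sigma(s) = { ([s]_a, a) : a in A_s }."""
    return frozenset((eq_class(classes, a, s), a)
                     for a in classes if eq_class(classes, a, s) is not None)

def sigma(classes, states):
    """All simplices of sigma(M): non-empty subsets of the facets sigma(s)."""
    simplices = set()
    for s in states:
        facet = sigma_facet(classes, s)
        for k in range(1, len(facet) + 1):
            for sub in combinations(facet, k):
                simplices.add(frozenset(sub))
    return simplices
```

On a proper two-state model in which $a$ is alive in both states while $b$ and $c$ are each alive in exactly one, this yields two edges glued along the shared $a$-vertex, as in the third row of Figure~\ref{figure.kappasigma}.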
The requirement that local epistemic models are proper ensures that different states are mapped to different facets, that is, for $s \neq t$ we have that neither $\sigma(s)\subseteq\sigma(t)$ nor $\sigma(t)\subseteq\sigma(s)$. This is a generalization of a similar requirement in \cite{goubaultetal:2018,ledent:2019} for epistemic models corresponding to pure complexes, namely that $\bigcap_{a \in A}\sim_a$ is the identity relation. Some issues with this requirement are discussed at the end of this section. Finally we should observe that $\sigma(\mathcal M)$ is indeed a simplicial model. This is elementary to see. It is closed under subsets of simplices. Also, $([s]_a,a) = ([t]_a,a)$ means that $s \sim_a t$, and the locality of the epistemic model then guarantees that $p_a \in L(s)$ iff $p_a \in L(t)$. \begin{definition} Given a simplicial model $\mathcal C = (C,\chi,\ell)$, we define $\kappa(\mathcal C) = (S,\sim,L)$: \[\begin{array}{lll} S & = & \mathcal F(C) \\ X \sim_a Y & \text{iff} & a \in \chi(X \cap Y) \\ p_a \in L(X) & \text{iff} & p_a \in \ell(X) \end{array}\] \end{definition} Simplices $X,Y$ above are elements of the domain, and therefore facets. We let $\kappa(\mathcal C,X)$ for facets $X$ denote $(\kappa(\mathcal C),X)$. Concerning the definition of $\kappa(\mathcal C)$ we recall that $\ell(X) = \bigcup_{a \in \chi(X)} \ell(X_a)$, where for a vertex $v$ coloured $a$ such as $X_a$, $\ell(v)\subseteq P_a$. Given $a \in A$, a facet not containing that colour (so of dimension less than $|A|-1$), that is, $X \in \mathcal F(C)$ with $a \notin \chi(X)$, is in the $\overline{S}_a$ part of the model $\kappa(\mathcal C)$, on which ${\sim_a} = \emptyset$. Whereas restricted to $S_a$, $\sim_a$ is indeed an equivalence relation between facets/states. The states in $S_a$ may also be facets of dimension less than $|A|-1$, but then they lack a vertex for a colour other than $a$. If $a \notin \chi(X)$, then obviously $p_a \notin \ell(X)$. 
Consequently, in $\kappa(\mathcal C)$ the valuation of variables for agent $a$ in the $\overline{S}_a$ part of the model is empty. This was not required, but it does not matter, as we will show that it is irrelevant for the truth (or definability) of formulas. Finally, note that $\kappa(\mathcal C)$ is indeed a local epistemic model. Here it is important to observe that the model is proper: distinct facets $X,Y \in C$ intersecting in $a$ are mapped to different states in $\kappa(\mathcal C)$. As $X\setminus Y \neq \emptyset$ there is a $b \in \chi(X\setminus Y)$ and therefore $X \not\sim_b Y$ in $\kappa(\mathcal C)$. As $Y\setminus X \neq \emptyset$ there is a $c \in \chi(Y\setminus X)$ and therefore $Y \not\sim_c X$ in $\kappa(\mathcal C)$. \begin{figure} \center \scalebox{.8}{ \begin{tabular}{cccccc} \begin{tikzpicture}[round/.style={circle,fill=white,inner sep=1}] \fill[fill=gray!25!white] (2,0) -- (4,0) -- (3,1.71) -- cycle; \node[round] (b1) at (0,0) {$1_b$}; \node[round] (b0) at (4,0) {$0_b$}; \node[round] (c1) at (3,1.71) {$1_c$}; \node[round] (a0) at (2,0) {$0_a$}; \draw[-] (b1) -- (a0); \draw[-] (a0) -- (b0); \draw[-] (b0) -- (c1); \draw[-] (a0) -- (c1); \end{tikzpicture} & \quad $\stackrel \kappa \Rightarrow$ \quad & \begin{tikzpicture} \node (010l) at (0,0.4) {\scriptsize$ab$}; \node (001l) at (3,0.4) {\scriptsize$abc$}; \node (010) at (.5,0) {$0_a1_b0_c$}; \node (001) at (3.5,0) {$0_a0_b1_c$}; \draw[-] (010) -- node[above] {$a$} (001); \end{tikzpicture} \\ \begin{tikzpicture}[round/.style={circle,fill=white,inner sep=1}] \fill[fill=gray!25!white] (2,0) -- (4,0) -- (3,1.71) -- cycle; \fill[fill=gray!25!white] (0,0) -- (2,0) -- (1,1.71) -- cycle; \node[round] (b1) at (0,0) {$1_b$}; \node[round] (b0) at (4,0) {$0_b$}; \node[round] (c1) at (3,1.71) {$1_c$}; \node[round] (lc1) at (1,1.71) {$0_c$}; \node[round] (a0) at (2,0) {$0_a$}; \draw[-] (b1) -- (a0); \draw[-] (b1) -- (lc1); \draw[-] (a0) -- (lc1); \draw[-] (a0) -- (b0); \draw[-] (b0) -- (c1); \draw[-] 
(a0) -- (c1); \end{tikzpicture} & \quad $\stackrel \kappa \Rightarrow$ \quad & \begin{tikzpicture} \node (010l) at (0,0.4) {\scriptsize$abc$}; \node (001l) at (3,0.4) {\scriptsize$abc$}; \node (010) at (.5,0) {$0_a1_b0_c$}; \node (001) at (3.5,0) {$0_a0_b1_c$}; \draw[-] (010) -- node[above] {$a$} (001); \end{tikzpicture} \\ \begin{tikzpicture}[round/.style={circle,fill=white,inner sep=1}] \node[round] (b1) at (0,0) {$1_b$}; \node[round] (b0) at (4,0) {\color{white}$0_b$}; \node[round] (c1) at (3,1.71) {$1_c$}; \node[round] (a0) at (2,0) {$0_a$}; \draw[-] (b1) -- (a0); \draw[-] (a0) -- (c1); \end{tikzpicture} & \quad $\stackrel \kappa \Rightarrow$ \quad & \begin{tikzpicture} \node (010l) at (0,0.4) {\scriptsize$ab$}; \node (001l) at (3,0.4) {\scriptsize$ac$}; \node (010) at (.5,0) {$0_a1_b0_c$}; \node (001) at (3.5,0) {$0_a0_b1_c$}; \draw[-] (010) -- node[above] {$a$} (001); \end{tikzpicture} & \quad $\stackrel \sigma \Rightarrow$ \qquad & \begin{tikzpicture}[round/.style={circle,fill=white,inner sep=1}] \node[round] (b1) at (0,0) {$1_b$}; \node[round] (c1) at (3,1.71) {$1_c$}; \node[round] (a0) at (2,0) {$0_a$}; \node[round] (b0) at (4,0) {\color{white}$0_b$}; \node (b1b) at (-.5,.4) {\scriptsize$(\{0_a1_b0_c\},b)$}; \node (c1b) at (1.9,1.51) {\scriptsize$(\{0_a0_b1_c\},c)$}; \node (a0b) at (3.6,.2) {\scriptsize$(\{0_a1_b0_c,0_a0_b1_c\},a)$}; \draw[-] (b1) -- (a0); \draw[-] (a0) -- (c1); \end{tikzpicture} \\ \begin{tikzpicture}[round/.style={circle,fill=white,inner sep=1}] \fill[fill=gray!25!white] (2,0) -- (4,0) -- (3,1.71) -- cycle; \node[round] (b0) at (4,0) {$1_b$}; \node[round] (c1) at (3,1.71) {$0_c$}; \node[round] (a0) at (2,0) {$0_a$}; \draw[-] (a0) -- (b0); \draw[-] (b0) -- (c1); \draw[-] (a0) -- (c1); \end{tikzpicture} & \quad $\stackrel \kappa \Rightarrow$ \quad & \begin{tikzpicture} \node (010l) at (0,0.4) {\scriptsize$abc$}; \node (010) at (.5,0) {$0_a1_b0_c$}; \end{tikzpicture} \end{tabular} } \caption{From simplicial models to local epistemic models, 
and vice versa in one case. Labels of states in epistemic models list the agents that are alive.} \label{figure.kappasigma} \end{figure} \begin{example} \label{example.kappasigma} Figure~\ref{figure.kappasigma} shows various examples of the transformation via $\kappa$ of simplicial models into local epistemic models. The third simplicial model consisting of two edges produces a local epistemic model that is indeed proper: agent $b$ is alive in the left state $0_a1_b0_c$ and can distinguish this state from the right state $0_a0_b1_c$, whereas agent $c$ is alive in the right state $0_a0_b1_c$ and can distinguish that state from the left state $0_a1_b0_c$. To obtain simplicial models from local epistemic models, we simply follow the arrow in the other direction, where the only difference is that the names of vertices are now pairs consisting of an equivalence class of states and an agent. This is demonstrated for the third model only. \end{example} It is easy to see that (always) $\sigma(\kappa(\mathcal C))$ is isomorphic to $\mathcal C$ and $\kappa(\sigma(\mathcal M))$ is isomorphic to $\mathcal M$. \begin{proposition}\label{prop.corr2} For all formulas $\phi \in \mathcal L_K$, for all pointed local epistemic models $(\mathcal M,s)$: $\mathcal M,s\bowtie \phi$ iff $\sigma(\mathcal M,s) \bowtie \phi$, and $\mathcal M,s\models \phi$ iff $\sigma(\mathcal M,s) \models \phi$. \end{proposition} \begin{proof} The proof is by induction on $\phi$. In the case negation for the $\models$ statement it is important that the induction hypothesis holds for the $\bowtie$ statement and for the $\models$ statement, which requires us to show them simultaneously. 
\medskip \noindent $\mathcal M,s \bowtie p_a \quad \text{ iff } \\ s \in S_a \quad \text{ iff } \text{(recall that } \sigma(s) = \{ ([s]_a,a) \mid a \in A_s\}) \\ {([s]_a,a)} \in \sigma(s) \quad \text{ iff } \\ a \in \chi(\sigma(s)) \quad \text{ iff } \\ \sigma(\mathcal M), \sigma(s) \bowtie p_a$ \medskip \noindent $\mathcal M,s \bowtie \neg\phi \quad \text{ iff } \\ \mathcal M,s \bowtie \phi \quad \text{ iff (by induction)} \\ \sigma(\mathcal M,s) \bowtie \phi \quad \text{ iff } \\ \sigma(\mathcal M,s) \bowtie \neg \phi$ \medskip \noindent $\mathcal M,s \bowtie \phi\wedge\psi \quad \text{ iff } \\ \mathcal M,s \bowtie \phi \text{ and } \mathcal M,s \bowtie \psi \quad \text{ iff (by induction) } \\ \sigma(\mathcal M,s) \bowtie \phi \text{ and } \sigma(\mathcal M,s) \bowtie \psi \quad \text{ iff } \\ \sigma(\mathcal M,s) \bowtie \phi\wedge\psi$ \medskip \noindent $\mathcal M,s \bowtie \widehat{K}_a\phi \quad \text{ iff } \\ \mathcal M,t \bowtie \phi \text{ for some } t \sim_a s \quad \text{ iff (by induction)} \\ \sigma(\mathcal M),\sigma(t) \bowtie \phi \text{ for some } t \sim_a s \quad \text{ iff } \\ \sigma(\mathcal M),\sigma(t) \bowtie \phi \text{ for some } \sigma(t) \text{ with } a\in \chi(\sigma(s)\cap\sigma(t)) \quad \text{ iff (use Lemma~\ref{lemma.upbowtie} for $\Leftarrow$)} \\ \sigma(\mathcal M,s) \bowtie \widehat{K}_a\phi$ \medskip We continue with the satisfaction relation. 
\medskip \noindent $\mathcal M,s \models p_a \quad \text{ iff } \\ s \in S_a \text{ and } p_a \in L(s) \quad \text{ iff (as $([s]_a,a) \in \sigma(s)$, and $p_a \in \ell(([s]_a,a)) \subseteq \ell(\sigma(s))$)} \\ a \in \chi(\sigma(s)) \text{ and } p_a \in \ell(\sigma(s)) \quad \text{ iff } \\ \sigma(\mathcal M), \sigma(s) \models p_a$ \medskip \noindent $\mathcal M,s \models \neg\phi \quad \text{ iff } \\ \mathcal M,s\bowtie\phi \text{ and } \mathcal M,s \not\models \phi \quad \text{ iff (by induction for $\bowtie$ and for $\models$)} \\ \sigma(\mathcal M,s) \bowtie \phi \text{ and } \sigma(\mathcal M,s) \not\models \phi \quad \text{ iff } \\ \sigma(\mathcal M,s) \models \neg \phi$ \medskip \noindent $\mathcal M,s \models \phi\wedge\psi \quad \text{ iff } \\ \mathcal M,s \models \phi \text{ and } \mathcal M,s \models \psi \quad \text{ iff (by induction) } \\ \sigma(\mathcal M,s) \models \phi \text{ and } \sigma(\mathcal M,s) \models \psi \quad \text{ iff } \\ \sigma(\mathcal M,s) \models \phi\wedge\psi$ \medskip \noindent $\mathcal M,s \models \widehat{K}_a\phi \quad \text{ iff } \\ \mathcal M,t \models \phi \text{ for some } t \sim_a s \quad \text{ iff (by induction)} \\ \sigma(\mathcal M),\sigma(t) \models \phi \text{ for some } \sigma(t) \text{ with } a\in \chi(\sigma(s)\cap\sigma(t)) \quad \text{ iff (use Proposition~\ref{prop.star} for $\Leftarrow$)} \\ \sigma(\mathcal M),\sigma(s) \models \widehat{K}_a\phi$ \end{proof} \begin{proposition}\label{prop.corr3} For all formulas $\phi \in \mathcal L_K$, for all pointed simplicial models $(\mathcal C,X)$ where $X$ is a facet: $\mathcal C,X \bowtie\phi$ iff $\kappa(\mathcal C,X) \bowtie \phi$, and $\mathcal C,X \models\phi$ iff $\kappa(\mathcal C,X) \models \phi$. \end{proposition} \begin{proof} The proof is by induction on $\phi$. Recall that $\kappa(\mathcal C,X) = (\kappa(\mathcal C),X)$. 
\medskip \noindent $ \mathcal C,X\bowtie p_a \quad \text{ iff }\\ a \in \chi(X) \quad \text{ iff (*)}\\ X \in \mathcal F(C)_a \quad \text{ iff }\\ \kappa(\mathcal C),X \bowtie p_a$ \medskip \noindent $(*)$: Note that $\mathcal F(C)_a = \{ X \in \mathcal F(C) \mid X \sim_a X\} = \{ X \in \mathcal F(C) \mid a \in \chi(X) \}$. \medskip \noindent $ \mathcal C,X \bowtie \neg\phi \quad \text{ iff }\\ \mathcal C,X \bowtie \phi \quad \text{ iff (induction)}\\ \kappa(\mathcal C),X \bowtie \phi \quad \text{ iff }\\ \kappa(\mathcal C),X \bowtie \neg\phi$ \medskip \noindent $ \mathcal C,X \bowtie \phi\wedge\psi \quad \text{ iff }\\ \mathcal C,X \bowtie \phi \text{ and } \mathcal C,X \bowtie \psi \quad \text{ iff (induction)}\\ \kappa(\mathcal C),X \bowtie \phi \text{ and } \kappa(\mathcal C),X \bowtie \psi \quad \text{ iff }\\ \kappa(\mathcal C),X \bowtie \phi\wedge\psi$ \medskip \noindent $ \mathcal C,X \bowtie \widehat{K}_a\phi \quad \text{ iff }\\ \mathcal C,Y \bowtie \phi \text{ for some } Y \text{ with } a \in \chi(X\cap Y) \quad \text{ iff ($\Rightarrow$: $Y \subseteq Z$ and Lemma~\ref{lemma.upbowtie}, $\Leftarrow$: trivial) } \\ \mathcal C,Z \bowtie \phi \text{ for some } Z \in \mathcal F(C) \text{ with } a \in \chi(X\cap Z) \quad \text{ iff (induction)} \\ \kappa(\mathcal C),Z \bowtie \phi \text{ for some } Z \in \mathcal F(C) \text{ with } X\sim_a Z \quad \text{ iff } \\ \kappa(\mathcal C),X \bowtie \widehat{K}_a \phi$ \medskip We continue with the satisfaction relation (in the case negation, we use induction for $\bowtie$ and for $\models$, as in the previous proposition). 
\medskip \noindent $ \mathcal C,X\models p_a \quad \text{ iff }\\ a \in \chi(X) \text{ and } p_a \in \ell(X) \quad \text{ iff }\\ X \in \mathcal F(C)_a \text{ and } p_a \in L(X) \quad \text{ iff }\\ \kappa(\mathcal C),X \models p_a$ \medskip \noindent $ \mathcal C,X \models \neg\phi \quad \text{ iff }\\ \mathcal C,X \bowtie \phi \text{ and } \mathcal C,X \not\models \phi \quad \text{ iff (twice induction)}\\ \kappa(\mathcal C),X \bowtie \phi \text{ and } \kappa(\mathcal C),X \not\models \phi \quad \text{ iff }\\ \kappa(\mathcal C),X \models \neg\phi$ \medskip \noindent $ \mathcal C,X \models \phi\wedge\psi \quad \text{ iff }\\ \mathcal C,X \models \phi \text{ and } \mathcal C,X \models \psi \quad \text{ iff (induction)}\\ \kappa(\mathcal C),X \models \phi \text{ and } \kappa(\mathcal C),X \models \psi \quad \text{ iff }\\ \kappa(\mathcal C),X \models \phi\wedge\psi$ \medskip \noindent $ \mathcal C,X \models \widehat{K}_a\phi \quad \text{ iff }\\ \mathcal C,Y \models \phi \text{ for some } Y \text{ with } a \in \chi(X\cap Y) \quad \text{ iff ($\Rightarrow$: $Y \subseteq Z$ and Proposition~\ref{prop.star}; $\Leftarrow$: trivial)} \\ \mathcal C,Z \models \phi \text{ for some } Z \in \mathcal F(C) \text{ with } a \in \chi(X\cap Z) \quad \text{ iff (induction)} \\ \kappa(\mathcal C),Z \models \phi \text{ for some } Z \in \mathcal F(C) \text{ with } X\sim_a Z \quad \text{ iff } \\ \kappa(\mathcal C),X \models \widehat{K}_a \phi$ \end{proof} Propositions~\ref{prop.corr2} and \ref{prop.corr3} established truth value preservation and definability preservation for corresponding simplicial models and local epistemic models. Given that validity is defined in terms of definability and truth, we may conclude that the logics (the set of validities) with respect to both classes of structures are the same. 
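The map $\kappa$ admits an equally direct transcription, which also makes the preservation results easy to test on small examples. In the following Python sketch the encoding is again our own assumption: a simplicial model is given by its set of facets, each facet a set of pairs $(v,a)$ with $a = \chi(v)$, and `lab` maps a vertex to $\ell(v)$; maximality of the given facets is assumed, not checked.

```python
def kappa(facets, lab):
    """States are the facets; X ~_a Y iff a is the colour of a shared vertex;
    the valuation of a state is the union of its vertex labels."""
    states = set(facets)
    sim = {}   # sim[a] = set of ordered pairs (X, Y) with X ~_a Y
    val = {}   # val[X] = l(X), the union of the vertex labels
    for X in states:
        val[X] = set().union(*(lab.get(v, set()) for v in X))
        for Y in states:
            for v in X & Y:                 # shared vertices of X and Y
                sim.setdefault(v[1], set()).add((X, Y))
    return states, sim, val
```

Applied to the first model of Figure~\ref{figure.kappasigma} (a triangle glued to an edge along the $a$-vertex), this yields two states that are related only for $a$, with $c$ dead in the state coming from the edge, as in the figure.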
\begin{example}[Improper Kripke models] The requirement that local epistemic models are proper rules out some models that succinctly describe uncertainty of agents about other agents being alive or dead, but that have the same information content as some other, bigger models. Figure~\ref{figure.lozenge} demonstrates how two agents $a,b$ are uncertain whether a third agent $c$ is alive, and also how the $\sigma$ construction returns an isomorphic simplicial model. The four-state model, let us call it $\mathcal M$, has the same information content as both two-state models in the middle row, that are improper. Whether the value of $p_c$ is true or false when $c$ is dead is irrelevant. That it became false is an artifact of the $\kappa$ construction: it does not label states with variables of agents that are dead, but as these variables are then undefined, their absence does not mean that they are false: $\mathcal M, 1_a1_b0_c \not\bowtie p_c$ so $\mathcal M, 1_a1_b0_c \not\models\neg p_c$. Now consider the two-agent Kripke model below in the figure. It is improper. Agent $a$ is only uncertain whether agent $b$ is alive. We do not know how to represent the same information in a simplicial model. As this seems a sad note to end a nice story, let us make it into a more appealing cliffhanger instead. Instead of simplicial complexes that are sets of subsets of a vertex set, consider multisets of subsets. These are called \emph{pseudocomplexes} in \cite{hilton_wylie_1960} and \emph{simplicial sets} in \cite{ledent:2019}. One can imagine allowing impure complexes that are such multisets of subsets, containing multiple copies of a simplex such that one copy is the face of a facet but another copy consisting of the same vertices is a facet in itself. The bottom Kripke model thus corresponds to such a `pseudo impure simplicial model' that consists of an $ab$-edge where $b$ is alive, as well as an $a$-vertex $v$, indistinguishable for $a$ from that edge, that is a facet and where $b$ is dead.
Similarly we can represent the other improper models as pseudocomplexes consisting of a single triangle with one of the edges occurring twice. \end{example} \begin{figure}[ht] \center \scalebox{.75}{ \begin{tikzpicture}[round/.style={circle,fill=white,inner sep=1}] \fill[fill=gray!25!white] (4,0) -- (4,2) -- (5.71,1) -- cycle; \fill[fill=gray!25!white] (0.29,1) -- (2,0) -- (2,2) -- cycle; \node[round] (b1) at (.29,1) {$1_c$}; \node[round] (b0r) at (5.71,1) {$1_c$}; \node[round] (c1) at (2,2) {$1_b$}; \node[round] (c1r) at (4,2) {$1_a$}; \node[round] (a0) at (2,0) {$1_a$}; \node[round] (a0r) at (4,0) {$1_b$}; \draw[-] (b1) -- (a0); \draw[-] (b1) -- (c1); \draw[-] (a0r) -- (b0r); \draw[-] (b0r) -- (c1r); \draw[-] (a0) -- (c1); \draw[-] (a0r) -- (c1r); \draw[-] (a0) -- (a0r); \draw[-] (c1) -- (c1r); \end{tikzpicture} \quad $\stackrel \kappa \Rightarrow$ \quad \begin{tikzpicture} \node (lz) at (-2,0.4) {\scriptsize$abc$}; \node (rz) at (1.9,0.4) {\scriptsize$abc$}; \node (tz) at (-.5,1.9) {\scriptsize$ab$}; \node (bz) at (-.5,-1.9) {\scriptsize$ab$}; \node (l) at (-1.5,0) {$1_a1_b1_c$}; \node (r) at (1.5,0) {$1_a1_b1_c$}; \node (t) at (0,1.5) {$1_a1_b0_c$}; \node (b) at (0,-1.5) {$1_a1_b0_c$}; \draw[-] (l) -- node[above left] {$b$} (t); \draw[-] (l) -- node[below left] {$a$} (b); \draw[-] (t) -- node[above right] {$a$} (r); \draw[-] (b) -- node[below right] {$b$} (r); \end{tikzpicture} \quad $\stackrel\sigma\Rightarrow$ \quad \begin{tikzpicture}[round/.style={circle,fill=white,inner sep=1}] \fill[fill=gray!25!white] (4,0) -- (4,2) -- (5.71,1) -- cycle; \fill[fill=gray!25!white] (0.29,1) -- (2,0) -- (2,2) -- cycle; \node[round] (b1) at (.29,1) {$1_c$}; \node[round] (b0r) at (5.71,1) {$1_c$}; \node[round] (c1) at (2,2) {$1_b$}; \node[round] (c1r) at (4,2) {$1_a$}; \node[round] (a0) at (2,0) {$1_a$}; \node[round] (a0r) at (4,0) {$1_b$}; \draw[-] (b1) -- (a0); \draw[-] (b1) -- (c1); \draw[-] (a0r) -- (b0r); \draw[-] (b0r) -- (c1r); \draw[-] (a0) -- (c1); \draw[-] (a0r) -- 
(c1r); \draw[-] (a0) -- (a0r); \draw[-] (c1) -- (c1r); \node (b1a) at (-.2,1.4) {\scriptsize$(\{1_a1_b1_c\},c)$}; \node (b0ra) at (6.2,1.4) {\scriptsize$(\{1_a1_b1_c\},c)$}; \node (c1a) at (1.3,2.4) {\scriptsize$(\{1_a1_b0_c,1_a1_b1_c\}, b)$}; \node (c1ra) at (4.7,2.4) {\scriptsize$(\{1_a1_b0_c,1_a1_b1_c\},a)$}; \node (a0a) at (1.3,-.4) {\scriptsize$(\{1_a1_b0_c,1_a1_b1_c\}, a)$}; \node (a0ra) at (4.7,-.4) {\scriptsize$(\{1_a1_b0_c,1_a1_b1_c\}, b)$}; \end{tikzpicture} } \bigskip \bigskip \scalebox{0.75}{ \begin{tikzpicture} \node (010l) at (0,0.4) {\scriptsize$ab$}; \node (001l) at (3,0.4) {\scriptsize$abc$}; \node (010) at (.5,0) {$1_a1_b0_c$}; \node (001) at (3.5,0) {$1_a1_b1_c$}; \draw[-] (010) -- node[above] {$ab$} (001); \end{tikzpicture} \qquad \begin{tikzpicture} \node (010l) at (0,0.4) {\scriptsize$ab$}; \node (001l) at (3,0.4) {\scriptsize$abc$}; \node (010) at (.5,0) {$1_a1_b1_c$}; \node (001) at (3.5,0) {$1_a1_b1_c$}; \draw[-] (010) -- node[above] {$ab$} (001); \end{tikzpicture} } \bigskip \bigskip \scalebox{0.75}{ \begin{tikzpicture} \node (010l) at (0.1,0.4) {\scriptsize$a$}; \node (001l) at (3.1,0.4) {\scriptsize$ab$}; \node (010) at (.5,0) {$1_a1_b$}; \node (001) at (3.5,0) {$1_a1_b$}; \draw[-] (010) -- node[above] {$a$} (001); \end{tikzpicture} } \caption{Different representations for two agents only uncertain whether a third agent is alive. Below, a model for one agent only uncertain whether a second agent is alive.} \label{figure.lozenge} \end{figure} \section{Comparison to the literature} \label{section.further} We will now discuss in greater detail relations of our proposal to work on awareness of agents, many-valued logics, other notions of knowledge and belief, and correctness. \paragraph{Awareness of agents.} Knowing that an agent is alive is like saying that you are aware of that agent. 
Various (propositional) modal logics combine knowledge and uncertainty with awareness and unawareness \cite{faginetal:1988,halpernR13,AgotnesA14a,hvdetal.jolli:2014,DitmarschFVW18}. However, in all those except \cite{hvdetal.jolli:2014} this is awareness of sets of \emph{formulas}, not awareness of \emph{agents}. One could consider defining awareness of agents in a given state as awareness of all formulas involving those agents, as a generalization of the so-called awareness of propositional variables (primitive propositions) of \cite{faginetal:1988}. Given a logical language $\mathcal L$ for variables $P$, this means awareness of all formulas in the language $\mathcal L|Q$ for some $Q \subseteq P$. Now if, in our setting, $Q = \bigcup_{a \in B} P_a$ for some $B \subseteq A$, then awareness of fragment $\mathcal L|(\bigcup_{a\in B} P_a)$ means unawareness of any local variables of agents not in $B$. This is somewhat as if the agents in $B$ are alive and those not in $B$ are dead. However, for definability it is not merely a question of which agents' variables $p_a$ occur in a formula, but also of which agents' modalities $K_a$ occur in it. That goes beyond straightforward \cite{faginetal:1988}-style awareness of propositional variables. And even if that were possible, it would be different from our essentially modal setting: whether a formula is defined in a given state is not a function of what agents are alive in that state, but a function of what agents are alive in indistinguishable states (and so on, recursively). Still, in a simplex wherein all the agents occurring in a formula $\phi$ are alive, the formula is defined: we recall Lemma~\ref{lemma.aphibowtie} stating exactly that: $A_\phi \subseteq \chi(X)$ implies $\mathcal C,X \bowtie \phi$.
\paragraph{Many-valued logic.} Our semantics is a three-valued modal logic with a propositional basis known as Kleene's weak three-valued logic: the value of any binary connective is unknown if one of its arguments is unknown \cite{sep-logic-manyvalued}. Modal logical (including epistemic) extensions of many-valued logics are found in \cite{Morikawa89,Fitting:1992,odintsovetal:2010,RivieccioJJ17} (there is no relation to embeddings of three-valued propositional logics into modal logics \cite{KooiT13}). This sounds promising; however, before a link can be made with our work there are many caveats. In the three-valued logics of this crowd the third value stands for `both true and false', and the four-valued (so-called Belnapian) logics of this crowd add to this a fourth value with the intended meaning `neither true nor false', in other words `unknown'. These are paraconsistent logics. Unknown and undefined are similar but not the same: a disjunction $p \vee q$ is true if one of the disjuncts is true and the other is unknown, but a disjunction $p \vee q$ is undefined if one of the disjuncts is true and the other is undefined. More importantly, in such many-valued modal logics the modal extension tends to be independent from the many-valued propositional base. But not in our case: whether a formula is defined depends on the modalities occurring in it, not only on the propositional variables. \paragraph{Belief and knowledge.} The logic {\bf S5}$^\top$ of knowledge on impure complexes is `almost' {\bf S5}. Is it yet another epistemic notion that is `almost' like knowledge and hovering somewhere between belief and knowledge? It is not. It is not Hintikka's favourite {\bf S4} \cite{hintikka:1962} nor {\bf S4.4} \cite{stalnaker:2005,HalpernSS09a}, as axiom {\bf 5} ($\widehat{K}_a\phi \rightarrow K_a \widehat{K}_a\phi$) is valid (Proposition~\ref{prop.standard}).
It is not {\bf KD45} \cite{handbookintro:2015} either, so-called `consistent belief', as {\bf T} ($K_a \phi \rightarrow \phi$) is valid (Proposition~\ref{prop.standard}). However, let us recall the peculiarity of {\bf T} in our setting: that $K_a \phi \rightarrow \phi$ is valid does {\bf not} mean that if $K_a\phi$ is true then $\phi$ is true. It means that if $K_a\phi$ is true and $\phi$ is defined then $\phi$ is true. Which is the same as saying that if $K_a\phi$ is true then $\phi$ is not false. This seems to come close to the motivation of `belief as defeasible knowledge' in \cite{MosesS93}. Let us consider that. We recall $B^\alpha \phi$ of \cite{MosesS93}: the agent believes $\phi$ on \emph{assumption} $\alpha$. Could this assumption not be that `$\phi$ is defined'? One option they consider for $B^\alpha \phi$ is to define this as $K(\alpha \rightarrow \phi) \wedge \widehat{K} \phi$ \cite[Def.\ 3, page~304]{MosesS93}. This appears to come close to our derived semantics of knowledge $K \phi$ (Proposition~\ref{lemma.diamond}) as, in the incarnation for Kripke models: `in all indistinguishable states, whenever $\phi$ is defined, it is true, and there is an indistinguishable state where $\phi$ is true'. But, although coming close, it is not the same. Their results are for assumptions $\alpha$ that are \emph{formulas}, and in a binary-valued (true/false) semantics. Whereas our definability assumption is a feature grounding a three-valued semantics. Still, our motivation is very much that of \cite{MosesS93} as well as \cite{stalnaker:2005,HalpernSS09a}: how to define belief from knowledge, rather than knowledge from belief. Semantics of knowledge for impure complexes were also considered in \cite{diego:2019} and in \cite{goubaultetal:2021}.
In \cite{diego:2019} it is observed that impure simplicial models correspond to Kripke models where dead agents (crashed processes) do not have equivalence accessibility relations, and as this is undesirable for reasoning about knowledge, projections are proposed from impure complexes to pure subcomplexes for the subset of agents that are alive \cite[Section 3.3]{diego:2019}, and from impure complexes to Kripke models for the subset of agents that are alive \cite[Section 3.4]{diego:2019}. One could imagine a logic based on this observation but this would then not allow agents that are alive to reason about agents that are dead, which seems restrictive. In \cite{goubaultetal:2021}, the authors propose an epistemic semantics for impure complexes for which they also show correspondence with (certain) Kripke models with symmetric and transitive relations. There are interesting similarities and differences with our approach. In \cite{goubaultetal:2021}, the Kripke/simplicial correspondence is shown between Kripke frames (without valuations $L$) and chromatic simplicial complexes (without valuations $\ell$), not between Kripke models and simplicial models. Basically, the construction is the same as ours.\footnote{We acknowledge Goubault kindly sharing older unpublished work on this matter.} The differences appear when we go from frames to models, namely $(i)$ in their \emph{different notion of valuation on complexes}, and $(ii)$ in their \emph{different knowledge semantics}. To illustrate that, we recall once more the impure complex $\mathcal C$ from Example~\ref{example.xxx} (page~\pageref{example.xxx}) consisting of an edge $X$ and a triangle $Y$ intersecting in an $a$-vertex. First, in \cite{goubaultetal:2021}, valuations $\ell$ assign variables not to vertices but to facets. We therefore have a choice whether $\ell(X) = \{p_b\}$ so that $p_b$ is true and $p_a$ and $p_c$ are false, or $\ell'(X) = \{p_b,p_c\}$ so that $p_b$ and $p_c$ are true and $p_a$ is false. 
In our semantics $p_c$ is undefined in $X$ and therefore neither true nor false. Second, in \cite{goubaultetal:2021}, definability is not an issue, any formula $\phi$ is always defined, and an agent knows the proposition expressed by $\phi$, if $\phi$ is true in all facets containing that agent's vertex. Formulas are always defined because local variables always have a value, also in facets of smaller than maximal dimension, like the edge $X$ in $\mathcal C$. There are therefore two different impure complexes like $\mathcal C$, one with $\ell$ where $K_a p_c$ is false (as $p_c$ is false in $X$), and another one with $\ell'$ where $K_a p_c$ is true (as $p_c$ is true in $X$ and in $Y$). In our semantics, for $K_a p_c$ to be true, it suffices that $p_c$ is true in triangle $Y$. This is consistent with, say, edge $X$ having been a triangle where $p_c$ was true before $c$ crashed, and also consistent with edge $X$ having been a triangle where $p_c$ was false before $c$ crashed. The difference between the respective semantics becomes notable when applied to the `benchmark' modelling Example~\ref{figure.sergio} from \cite{herlihyetal:2013}: for $a$ to know that the value of $c$ is $1$, variable $p_c$ must be true on edges lacking a $c$ vertex. This reflects that agent $a$ already received confirmation of $c$'s value before becoming uncertain whether $c$ subsequently crashed.\footnote{We acknowledge several discussions on this matter with Sergio Rajsbaum and with Ulrich Schmid.} Maybe, such modelling desiderata can also be explicitly realized with a (event/action) history-based semantics for complexes just as for Kripke models in \cite{jfaketal.JPL:2009}. The logic for impure complexes of \cite{goubaultetal:2021} is axiomatized by the modal logic {\bf KB4} (where {\bf B} is the axiom $\phi \rightarrow K_a \widehat{K}_a \phi$). 
The logic {\bf S5}{+}{\bf L} of \cite{goubaultetal:2018} is now not the special case for impure complexes, as in Section~\ref{section.pure}: because valuations assign variables to facets and not to vertices, the locality axiom {\bf L} ($K_a p_a \vee K_a \neg p_a$) is invalid for the semantics of \cite{goubaultetal:2021}. The semantics of \cite{goubaultetal:2021} can explicitly treat life and death in the language. If $a$ is dead then $K_a \bot$ is true (where $\bot$ is the always false proposition) and if $a$ is alive then $\neg K_a \bot$ is true. Alive agents have correct beliefs: $\neg K_a \bot \rightarrow (K_a \phi \rightarrow \phi)$ is valid. These are appealing properties. We recall that in our semantics we cannot formalize that agents are alive or dead. We consider this a feature rather than a problem. In the next section, in the paragraph `Death in the language' we show that adding global variables for `agent $a$ is alive' to the language (with appropriate semantics) makes our logic more expressive. It seems of interest to determine the expressivity of a version of \cite{goubaultetal:2021} with a primitive modality meaning $\neg K_a \bot \wedge K_a \phi$, for `what alive agents know', that combines existential and universal features, somewhat more akin to our knowledge semantics and to the `hope' modality in \cite{abs-2106-11499,KuznetsP0F19} modelling correctness, discussed below. \paragraph{Alive and dead versus correct and incorrect.} Instead of being alive or dead (crashing), processes may merely be correct or incorrect, for example as a consequence of unreceived messages or Byzantine behaviour \cite{abs-2106-11499,KuznetsP0F19}. We anticipate that certain impure complexes might be equally useful to model knowledge and uncertainty about correct/incorrect agents analogously to modelling knowledge and uncertainty about alive/dead agents. 
\section{Further research} \label{section.fff} \paragraph*{Completeness of S5$^\top$.} We intend to prove the completeness of the axiomatization {\bf S5}$^\top$. This promises to be a lengthy technical exercise but not necessarily complex. \paragraph*{Bisimulation.} Results for bisimulation between impure complexes (generalizing such notions between pure complexes presented in \cite{ledent:2019,hvdetal.simpl:2021,goubaultetal_postdali:2021}), and analogously between local epistemic models, contained errors in the conference version \cite{Ditmarsch21} and were therefore removed from this extended version. It may nevertheless be of interest to sketch some of those issues here. Consider the two simplicial models below. \begin{center} \begin{tikzpicture}[round/.style={circle,fill=white,inner sep=1}] \fill[fill=gray!25!white] (2,0) -- (4,0) -- (3,1.71) -- cycle; \node[round] (b1) at (0,0) {$1_b$}; \node[round] (b0) at (4,0) {$1_b$}; \node[round] (c1) at (3,1.71) {$1_c$}; \node[round] (a0) at (2,0) {$1_a$}; \node (f1) at (3,.65) {$Y$}; \draw[-] (b1) -- node[above] {$X$} (a0); \node (cc) at (-1,0) {$\mathcal C:$}; \draw[-] (a0) -- (b0); \draw[-] (b0) -- (c1); \draw[-] (a0) -- (c1); \end{tikzpicture} \qquad\qquad\qquad \begin{tikzpicture}[round/.style={circle,fill=white,inner sep=1}] \fill[fill=gray!25!white] (2,0) -- (4,0) -- (3,1.71) -- cycle; \node[round] (b0) at (4,0) {$1_b$}; \node[round] (c1) at (3,1.71) {$1_c$}; \node[round] (a0) at (2,0) {$1_a$}; \node (f1) at (3,.65) {$Y'$}; \node (cc) at (1,0) {$\mathcal C':$}; \draw[-] (a0) -- (b0); \draw[-] (b0) -- (c1); \draw[-] (a0) -- (c1); \end{tikzpicture} \end{center} We claim that the pointed models $(\mathcal C,Y)$ and $(\mathcal C',Y')$ have the same information content with respect to our language and semantics.
One can show that $(\mathcal C,Y)$ and $(\mathcal C',Y')$ are {\em modally equivalent} in the sense that for all $\phi \in \mathcal L_K$: $\mathcal C,Y \bowtie \phi$ iff $\mathcal C',Y' \bowtie \phi$, $\mathcal C,Y \models \phi$ iff $\mathcal C',Y' \models \phi$, and $\mathcal C,Y \models \neg \phi$ iff $\mathcal C',Y' \models\neg \phi$. This is the obvious sense in our three-valued semantics: whatever of the three values, they should correspond. Here, it may be surprising that the $X$ edge in $\mathcal C$ does not play havoc: whatever agent $a$ knows about $c$ is witnessed by the facet $Y$; we do not `need' $X$ for that. Therefore it can be missed in $\mathcal C'$ without pain. It is harder to come up with a notion of bisimulation such that $(\mathcal C,Y)$ is bisimilar to $(\mathcal C',Y')$. The issue is what to do with the $X$ facet in $\mathcal C$ when doing a {\bf forth} step for agent $a$ in $Y$. We note that $\widehat{K}_b 1_c$ is undefined in $(\mathcal C,X)$ and that there is therefore no edge in $\mathcal C'$ that is modally equivalent to $X$, as this requires being equidefinable. It therefore seems that there should also be no edge in $\mathcal C'$ that is bisimilar to $X$. One can resort to ad-hoc solutions for this particular example. Why not say that $\mathcal C$ is a simulation of $\mathcal C'$ instead? But it is easy to come up with more complex structures that both contain different impurities such that neither is a simulation of the other. Clearly we want the Hennessy--Milner property that bisimilarity corresponds to modal equivalence. \paragraph*{Death in the language.} We cannot formalize `$a$ knows that $b$ is dead' or `$a$ knows that $b$ is alive' or even `$a$ considers it possible that $b$ is dead/alive' in the current logical language. This was on purpose: we targeted the simplest epistemic logic. But it is quite possible.
To the set of local variables $P = \bigcup_{a \in A} P_a$ we add a set $A_\downarrow := \{a_\downarrow \mid a \in A\}$ of {\em global variables} $a_\downarrow$ denoting ``$a$ is alive''. We further define by abbreviation $a_\uparrow := \neg a_\downarrow$, for ``$a$ is dead''. The upward pointing arrow is to suggest that dead agents go to heaven. Let now a simplicial model $\mathcal C = (C,\chi,\ell)$ be given. First, we require for $v \in \mathcal V(C)$ that $a_\downarrow \in \ell(v)$ iff $\chi(v) = a$. This still seems obvious, as agent $a$ colouring $v$ is alive. This also entails that $a_\downarrow \in \ell(Y)$ for any $Y \in C$ with $v \in Y$. Next, for any \emph{facet} $X \in \mathcal F(C)$ we define $\mathcal C,X \bowtie a_\downarrow$ (that is, always). Otherwise, $a_\downarrow$ is undefined. And we then stipulate for such $X$ that $\mathcal C,X \models a_\downarrow$ iff $\mathcal C,X \bowtie a_\downarrow$ and $a_\downarrow \in \ell(X)$. Under these circumstances, a validity is $K_a a_\downarrow$, formalizing that an agent knows that it is alive. The precise impact on the logic is unclear to us. Do we still have upwards and downwards monotony? Is it sufficient to add axioms $K_a a_\downarrow$ to the axiomatization {\bf S5}$^\top$? It has a big impact on expressivity. The two models $(\mathcal C,Y)$ and $(\mathcal C',Y')$ in the prior discussion on bisimulation, that were indistinguishable in our semantics, can now be distinguished by $\widehat{K}_a c_\uparrow$: on the left, agent $a$ considers it possible that agent $c$ is dead, but on the right, she knows that $c$ is alive. \paragraph*{Impure simplicial action models.} It is straightforward to model updates on simplicial complexes as \emph{action model} execution \cite{baltagetal:1998}, generalizing the results for pure complexes of \cite{goubaultetal:2018,goubaultetal_postdali:2021,ledent:2019,diego:2019,diego:2021}.
We envisage (possibly impure) \emph{simplicial action models} representing incompletely specified tasks and algorithms, for example Byzantine agreement \cite{DworkM90} on dynamically evolving (with agents dying) impure complexes. It should be noted however that our framework does not enjoy the property that positive formulas (intuitively, those without negations before $K_a$ operators; corresponding to the universal fragment in first-order logic) are preserved after update. It might therefore be challenging to generalize results for pure complexes employing such `positive knowledge gain' after update \cite{goubaultetal:2018,goubaultetal_postdali:2021} to truly failure-prone distributed systems modelled with impure complexes. \section*{Acknowledgements} This work is the extended version of \cite{Ditmarsch21}. We acknowledge interactions and comments from: Armando Casta\~{n}eda, J\'er\'emy Ledent, \'Eric Goubault, Yoram Moses, Sergio Rajsbaum, Rojo Randrianomentsoa, David Rosenblueth, Ulrich Schmid, and Diego Vel\'azquez. Diego's exploration of the semantics of knowledge on impure complexes \cite{diego:2019} inspired this investigation. Hans warmly recalls his stay at UNAM on Sergio's invitation where his adventures in combinatorial topology for distributed computing got started, and his stay at TU Wien on Ulrich's invitation where these adventures were continued. \bibliographystyle{plain}
\section{Introduction} \label{sec:intro} The James Webb Space Telescope (\citealt{gard2006}, JWST) is a space telescope developed jointly by NASA, the European Space Agency (ESA) and the Canadian Space Agency (CSA). This telescope, launched on December 25, 2021, has four main scientific focuses: ``The End of the Dark Ages: First Light and Reionization"; ``The Assembly of Galaxies"; ``The Birth of Stars and Protoplanetary Systems"; and ``Planetary Systems and the Origins of Life". Thirteen Early Release Science (ERS) programs have been selected to demonstrate the scientific capabilities of JWST, to provide public data to the community, and to educate and inform the community regarding JWST's capabilities. This paper is part of this effort in the context of the ERS program “PDRs4all: Radiative feedback from massive stars”\footnote{\url{http://pdrs4all.org}} (ID1288) which focuses on observations of the Orion Nebula \citep{ber21}. This 40-hour program will make use of three instruments aboard JWST, and will dedicate about 12.71 hours to spectroscopy of the Orion Bar with the Near Infrared Spectrograph (NIRSpec, \citealt{bagn2007}). The NIRSpec instrument \citep{bagn2007} offers 9 disperser-filter combinations covering wavelengths between 0.6$\mu$m and 5.27$\mu$m, of which we will use 3 in the PDRs4all ERS project. In this paper, we present a method to simulate NIRSpec hyperspectral images of the Orion Bar as planned in this ERS project, in the JWST pipeline format. The goal of these simulations is to prepare the tools that post-process the pipeline output and to test the ecosystem of analysis tools developed for JWST data\footnote{\url{https://jwst-docs.stsci.edu/jwst-post-pipeline-data-analysis}}. The paper is organised as follows: Section \ref{sec:instrument} gives an overview of the NIRSpec instrument. In Section \ref{sec:simulation}, we present how we simulate the NIRSpec image following \citet{guil2020}, and make it compatible with the NIRSpec output pipeline format.
\section{NIRSpec Spectroscopy of the Orion Bar} \label{sec:instrument} The Near Infrared Spectrograph (NIRSpec, \citealt{bagn2007}) is one of the four JWST instruments. There are four observing modes of NIRSpec and we are specifically interested in imaging spectroscopy with the Integral Field Unit (IFU)\footnote{For more information: \url{https://jwst-docs.stsci.edu/jwst-near-infrared-spectrograph}}. The IFU mode has 9 disperser-filter combinations that span a total wavelength range of $0.6 \mu m$ to $ 5.3 \mu m$, and provide three levels of resolving power\footnote{More information on IFU mode: \url{https://jwst-docs.stsci.edu/jwst-near-infrared-spectrograph/nirspec-observing-modes/nirspec-ifu-spectroscopy}}. As part of the PDRs4all ERS project, 6 NIRSpec observations are planned with 3 disperser-filter combinations covering wavelengths between $0.97 \mu m$ and $ 5.27 \mu m$ with a nominal resolving power of 2,700. Each exposure will have one integration consisting of 5 groups; with 4 dithers, this gives a total integration time of $257.68\,s$. The footprints of these NIRSpec observations as specified in the Astronomer's Proposal Tool (APT\footnote{ \url{https://jwst-docs.stsci.edu/jwst-astronomers-proposal-tool-overview}}) positioned over the Orion Bar are shown in Fig.\ \ref{fig:nirspec_fov}. \begin{figure} \centering \includegraphics[height = 6cm]{figures/nirspec_fov.png} \caption{NIRSpec field of view for the PDRs4all project as specified in the ERS 1288 APT file, over an HST-WFC3 \citep{kimble2008} F656N filter (0.656$\mu$m) image of the Orion star-forming region. Blue regions: NIRSpec footprints corresponding to planned observations. Red cross: position of the target at the coordinates R.A.\ = 5:35:20.4749, dec.\ = -5:25:10.45.} \label{fig:nirspec_fov} \end{figure} \begin{figure} \centering \includegraphics[height = 6cm]{figures/zomm_fov.png} \caption{Zoom-in on the NIRSpec field of view from Fig.\ \ref{fig:nirspec_fov}.
Blue: NIRSpec footprints corresponding to planned observations. Black: field of view of the simulation. Green: adopted field of view for the NIRSpec simulated cube.} \label{fig:zoom_fov} \end{figure} \section{Simulation} \label{sec:simulation} \subsection{Motivation and strategy} Creating NIRSpec simulations is useful to test JWST analysis tools developed by the PDRs4all team \citep{ber21} or by other teams including those of the STScI (e.g. Cubeviz, \citealt{cubeviz}). These simulations are also useful to obtain an idea of the quality (in terms of SNR) and richness of the data for a given integration time, ahead of observations. However, performing such simulations is challenging. There exists an instrument simulator \citep{piq10}; however, simulating a full 3D NIRSpec cube (i.e. two spatial dimensions and one spectral dimension) with realistic spatial and spectral textures using this tool is very computationally intensive. As part of a project to develop algorithms to perform data fusion between NIRSpec and NIRCam, \citet{guil2020} created a forward mathematical model of the NIRSpec instrument. They applied this forward model to a 3-dimensional input synthetic scene of the Orion Bar to create realistic NIRSpec simulations over the $1\mu m$ to $2.35 \mu m$ wavelength range. As part of the PDRs4all project, a larger wavelength range is expected to be observed ($0.97 \mu m$ to $5.2 \mu m$) with NIRSpec. In addition, \citet{guil2020} did not implement any tools to write out the cubes in the JWST pipeline format. In this paper, we present how we have extended the method of \citet{guil2020}. To obtain a NIRSpec IFU simulated cube, we apply the direct model of \citet{guil2020} to the Orion Bar synthetic scene, including the $0.97 \mu m$ to $5.2 \mu m$ range, and from this simulation we extract a cube with precisely the properties of the JWST-NIRSpec pipeline. We thus produce a realistic simulated IFU NIRSpec cube in the stage 3 format of the JWST-NIRSpec pipeline.
\subsection{Choice of region to be simulated} Fig.\ \ref{fig:nirspec_fov} presents an overview of the footprints of the NIRSpec IFU observations planned in September 2022 on the Orion Bar as part of PDRs4all. They span a cut across the Orion Bar, performed with a mosaic strategy (see \citealt{ber21}). Fig.\ \ref{fig:zoom_fov} is a zoomed-in version of Fig.\ \ref{fig:nirspec_fov} which includes additional information on the fields of view of the simulations. The black square shows the region over which we apply the direct model of \citet{guil2020} to the synthetic scene of the Orion Bar presented by the same authors. The green square in Fig.~\ref{fig:zoom_fov} corresponds to the field of view we adopted in the simulation of this paper. It is a $3'' \times 3''$ square (corresponding to the NIRSpec IFU field of view) centered on coordinates R.A.\ = 5:35:20.2570, dec.\ = -5:25:04.612. The orientation angle is $\frac{5}{9} \pi$ rad. It overlaps with the planned mosaic; however, we have centered it on one of the proplyds, simply to help with coordinate calibration. We have only simulated one dither and one pointing. However, in principle, the method presented in this paper could be extended to simulate mosaics. \subsection{Direct model of NIRSpec} \label{subsec:new_files} \subsubsection{General principles of the model} We follow and complement the formalism and notations of \citet{guil2020} to describe the forward mathematical model of NIRSpec that we will use. We use a synthetic scene of the Orion Bar $\mathbf{C_i}$, which is a 3D cube sized $(12032 \times 300 \times 300)$, where 12032 is the number of spectral elements, and $300 \times 300$ the number of spatial elements.
To compute this cube we define $\mathbf{X}$, which is a vectorized version of $\mathbf{C_i}$, sized $(12032 \times 90000)$, and computed with the matrix product: \[ \mathbf{X} = \mathbf{H}\mathbf{A}, \] where $\mathbf{H}$ is a matrix of elementary spectra sized $(12032 \times 4)$ and $\mathbf{A}$ is a matrix of spatial weights for these spectra, sized $(4 \times 90000)$. $\bar \mathbf{Y}_{\text{h}}$, the hyperspectral NIRSpec image of size $(12032 \times 8649)$, is simulated using: \[ \bar \mathbf{Y}_{\text{h}} = \mathbf{L}_{\text{h}}\mathcal{H}(\mathbf{X}) \mathbf{S} + \mathbf{N}, \] where $\mathbf{L}_{\text{h}}$ is a diagonal matrix containing the NIRSpec throughput, $\mathcal{H}(\cdot)$ is a spatial convolution with the JWST and NIRSpec point spread functions, which depends on wavelength, $\mathbf{S}$ is a downsampling operator corresponding to the spatial sampling of the NIRSpec instrument, and $\mathbf{N}$ is the simulated noise (see details in \citealt{guil2020}). $\bar \mathbf{Y}_{\text{h}}$ is then reshaped into a $(12032 \times 93 \times 93)$ 3D cube, $\mathbf{C_s}$. Finally, $\mathbf{C_s}$ is cropped to the spatial dimensions of the NIRSpec simulation, i.e. $(12032 \times 30 \times 30)$, to obtain the final NIRSpec simulated cube $\mathbf{C_f}$. For each filter set, one cube $\mathbf{C_f}$ is obtained: $\mathbf{C_f^{\rm G140H/F100LP}}$, $\mathbf{C_f^{\rm G235H/F170LP}}$ and $\mathbf{C_f^{\rm G395H/F290LP}}$. \subsubsection{Contents of model matrices} \paragraph{Matrices $\mathbf{A}$, $\mathbf{S}$, $\mathbf{N}$ } These matrices are computed in the same fashion as in \citet{guil2020}. \paragraph{Matrix $\mathbf{H}$} This matrix contains the 4 elementary spectra which have been created as part of the PDRs4all project. The initial version of these 4 spectra was presented in \citet{guil2020}; an updated version of these spectra is described in \citet{ber21}. Here we use the initial version of $\mathbf{H}$.
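To make the chain of operations concrete, here is a toy NumPy sketch of the forward model above, at strongly reduced dimensions and with random placeholders for $\mathbf{H}$, $\mathbf{A}$ and the throughput (the real matrices are those of \citealt{guil2020}). For simplicity it works directly on the reshaped cube rather than on the flattened matrices: the convolution $\mathcal{H}(\cdot)$ is applied per wavelength slice via FFTs, and the downsampling $\mathbf{S}$ is a simple strided selection.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy sizes (the real ones are 12032 spectral points and a 300 x 300 scene).
n_lam, n_spec, side, step = 64, 4, 30, 3

H = rng.random((n_lam, n_spec))          # elementary spectra (n_lam x 4)
A = rng.random((n_spec, side * side))    # spatial weights    (4 x pixels)
X = H @ A                                # vectorized synthetic scene

L_h = rng.random(n_lam)                  # diagonal of the throughput matrix

def convolve_psf(cube, psf_fft):
    """Apply a wavelength-dependent PSF via FFT, one slice per wavelength."""
    out = np.empty_like(cube)
    for k in range(cube.shape[0]):
        out[k] = np.fft.irfft2(np.fft.rfft2(cube[k]) * psf_fft[k],
                               s=cube[k].shape)
    return out

cube = X.reshape(n_lam, side, side)
# Placeholder PSF FFTs; all-ones acts as the identity convolution here.
psf_fft = np.ones((n_lam, side, side // 2 + 1))
blurred = convolve_psf(cube, psf_fft)

sampled = blurred[:, ::step, ::step]     # downsampling operator S
noise = rng.normal(scale=1e-3, size=sampled.shape)  # simulated noise N
Y = L_h[:, None, None] * sampled + noise # simulated hyperspectral cube
```

Storing the PSF FFTs once (as in the matrix $\mathbf{G}$ described below) and multiplying in Fourier space is what makes the per-wavelength convolution affordable.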
The total wavelength range is $0.7 \mu m - 5.2 \mu m$ for 12032 spectral points. \paragraph{Matrix $\mathbf{L}_{\text{h}}$} This matrix is a diagonal matrix whose diagonal corresponds to the throughputs of NIRSpec. \texttt{Pandeia} \citep{pont2016}, a Python package developed at STScI, is used. This package calculates the throughputs of the four JWST instruments. For NIRSpec, several inputs are needed: the mode, the disperser, the filter, the readout pattern, the number of integrations and the number of groups. All this information is available in the ERS APT proposal (ID 1288, 'PDRs4all'). \texttt{Pandeia} also needs the wavelength table on which to calculate the throughputs. In our case, the mode is \texttt{IFU}, the readout pattern is \texttt{nrsrapid}, there is 1 integration and 5 groups. The observations are planned with the following disperser-filter combinations: \texttt{G140H/F100LP}, \texttt{G235H/F170LP} and \texttt{G395H/F290LP}. Thus, three different throughput curves are obtained, one per disperser-filter combination; each curve forms the diagonal of one matrix, so there are three matrices. The code to calculate these throughputs is presented in Listing \ref{lst:pce}. Fig.\ \ref{fig:pce_curves} presents the curves obtained for each disperser-filter combination used in the ERS program. \paragraph{Operator $\mathcal{H}(\cdot)$} The operator $\mathcal{H}(\cdot)$ is a convolution with point spread functions (PSFs), stored in a matrix we call $\mathbf{G}$. $\mathbf{G}$ has 4 dimensions and stores the fast Fourier transform (fft) of the NIRSpec PSFs. The first two dimensions are the spatial dimensions of the PSF. The third dimension is the spectral dimension, and the last one stores the real and imaginary parts of the fft of the NIRSpec PSF. \\First, we calculate the NIRSpec PSF with \texttt{Webbpsf} \citep{perr2014}, a Python package from STScI. This package can calculate the PSF of NIRSpec for each spectral point.
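As an illustration of the forward model described above, the following NumPy sketch applies the matrix product $\mathbf{X}=\mathbf{H}\mathbf{A}$, an FFT-based wavelength-dependent convolution standing in for $\mathcal{H}(\cdot)$, the diagonal throughput $\mathbf{L}_{\text{h}}$, a downsampling $\mathbf{S}$ and noise $\mathbf{N}$. All dimensions, PSFs and inputs below are toy placeholders for the real $12032 \times 300 \times 300$ scene, not the actual PDRs4all data products.

```python
import numpy as np

# Toy dimensions standing in for the real ones (12032 spectral points,
# 300x300 spatial elements, downsampled spatial grid).
n_lambda, n_side, n_out = 8, 12, 4
n_templates = 4

rng = np.random.default_rng(0)

# H: elementary spectra (n_lambda x 4); A: spatial weights (4 x n_side^2).
H = rng.random((n_lambda, n_templates))
A = rng.random((n_templates, n_side * n_side))

# X = H A, the vectorized synthetic scene.
X = H @ A

# One toy PSF per wavelength, normalized to unit sum; the real model uses
# the wavelength-dependent JWST+NIRSpec PSFs computed with Webbpsf.
psf = rng.random((n_lambda, n_side, n_side))
psf /= psf.sum(axis=(1, 2), keepdims=True)

def apply_H(X):
    """Operator H(.): per-wavelength spatial convolution done in Fourier space."""
    cube = X.reshape(n_lambda, n_side, n_side)
    out = np.fft.ifft2(np.fft.fft2(cube) * np.fft.fft2(psf)).real
    return out.reshape(n_lambda, -1)

# L_h: diagonal throughput; S: spatial downsampling (column selection); N: noise.
L_h = np.diag(rng.random(n_lambda))
step = n_side // n_out
S_idx = [(r * step) * n_side + c * step for r in range(n_out) for c in range(n_out)]
N = 0.01 * rng.standard_normal((n_lambda, n_out * n_out))

# Y_h = L_h H(X) S + N
Y = L_h @ apply_H(X)[:, S_idx] + N
print(Y.shape)  # (8, 16)
```

The per-wavelength FFT multiplication mirrors how the model stores the PSF ffts in $\mathbf{G}$ rather than convolving in image space.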
\\ Then, the ffts of the PSF cubes are calculated and saved in fits files. They are assembled to form one unique cube with all the wavelengths in the matrix $\mathbf{G}$. The code to calculate the PSF with \texttt{Webbpsf} is presented in Listing \ref{lst:psf}. Fig.\ \ref{fig:fig_psf} shows two examples of the NIRSpec PSF obtained after the fft. \begin{figure} \centering \includegraphics[height = 4 cm]{figures/psf_0.99micron.png} \includegraphics[height = 4 cm]{figures/psf_2.38micron.png} \caption{Real part of the NIRSpec fft PSF at $0.99 \mu m$ and $2.38 \mu m$.} \label{fig:fig_psf} \end{figure} \subsubsection{Format matching} \label{subsec:format} Here we consider the case of the \texttt{G140H/F100LP} filter. We create a file in the stage 3 format of the pipeline, i.e. an \texttt{\_s3d} file in fits format. This file includes data and metadata. The data comprises several extensions. Extension 1 is the primary data; we use the $\mathbf{C_f^{\rm G140H/F100LP}}$ cube interpolated on the spectral grid of the NIRSpec simulated cube for filter \texttt{G140H/F100LP} provided by the STScI\footnote{NIRSpec IFU data set at \url{https://www.stsci.edu/jwst/science-planning/proposal-planning-toolbox/simulated-data}}. Extension 2 is the error; here we use an error of 10\% of $\mathbf{C_f^{\rm G140H/F100LP}}$. Extension 3 is the data quality array, coded on 32 bits. For instance, 0 means that there are no problems with the pixel, while a value of 513 ($2^9+2^0$) corresponds to a bad pixel ($2^0$) outside the science area of the detector ($2^9$). Here we use a cube with the dimensions of $\mathbf{C_f^{\rm G140H/F100LP}}$ and all elements set to 0, corresponding to good pixels only, since the simulation does not contain bad pixels or pixels with issues. The metadata is composed of two fits headers, a primary header and a header for extension 1 of the data (primary image). We create these headers by making a copy of the header provided by STScI for NIRSpec IFU simulated observations.
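The bit-wise encoding of the data quality array can be decoded as in the following sketch. Only the two flags quoted above ($2^0$: bad pixel, $2^9$: outside the science area) come from the text; the helper itself is illustrative and not part of the JWST pipeline.

```python
# Names for the two DQ bits mentioned in the text; other bits are reported
# generically. The mapping is illustrative, not the full JWST DQ table.
FLAGS = {0: "bad pixel", 9: "outside science area of detector"}

def decode_dq(value):
    """Return the names of the DQ bits set in a 32-bit quality value."""
    return [FLAGS.get(bit, f"bit {bit}") for bit in range(32) if value >> bit & 1]

print(decode_dq(0))    # [] -- a good pixel has no flags set
print(decode_dq(513))  # 513 = 2**9 + 2**0
```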
The primary header contains the information related to the program, such as the name of the mission, program, PI, etc. This header is common to all instruments; we replace the relevant information with that from the header created for our ERS program using the pipeline for NIRCam simulations \citep{canin2021}. In addition, information related to the target and the exposure is replaced manually using the information found in the APT proposal. In the image header, the information relative to the WCS parameters is replaced manually with the coordinates $(\texttt{CRVAL1}, \texttt{CRVAL2}) = (83.8343959,-5.4179437)$ at the reference point $(\texttt{CRPIX1}, \texttt{CRPIX2}) = (15,15)$. An extract of the image header is presented in Fig.\ \ref{fig:header}. The file corresponding to this simulation can be downloaded from \citet{cube_nirspec}. The same approach allows us to compute the NIRSpec IFU files for the other filter sets (i.e. \texttt{G235H/F170LP} and \texttt{G395H/F290LP}), provided one has the template format for these filters, which is not the case at the time we publish this document. \begin{figure} \centering \includegraphics[height = 4cm]{figures/0.97micron.jpeg} \includegraphics[height = 4cm]{figures/2.87micron.jpeg} \caption{Images taken from the cube $\mathbf{C_s}$ at $0.97 \mu m$ and $2.87 \mu m$.} \label{fig:simu_scene} \end{figure} \section{Results} \begin{figure} \centering \includegraphics[height = 10 cm]{figures/spectre_simu.png} \caption{Spectra extracted from the $\mathbf{C_f}$ NIRSpec simulated cubes, corresponding to filters \texttt{G140H/F100LP}, \texttt{G235H/F170LP} and \texttt{G395H/F290LP}.} \label{fig:spectre} \end{figure} \begin{figure} \centering \includegraphics[height = 5 cm]{figures/img_crop_2.12m_region.png} \caption{Image taken from the final NIRSpec simulated cube $\mathbf{C_f}$ at 2.12$\mu$m.
The green region corresponds to the one over which the spectra in Fig.\ \ref{fig:spectre} were calculated.} \label{fig:img_crop} \end{figure} Fig.\ \ref{fig:simu_scene} presents the contents of the $\mathbf{C_s}$ cube for two wavelengths, $0.97 \mu m$ and $2.87 \mu m$. Fig.\ \ref{fig:spectre} presents the spectra of the three $\mathbf{C_f}$ cubes extracted from the green circular region in Fig.\ \ref{fig:img_crop}, which shows an image of the final simulated cube $\mathbf{C_f}$ at $2.12\mu$m. This cube can be downloaded at \href{https://doi.org/10.5281/zenodo.5776707}{this link}.
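Extracting a spectrum over a circular region of a simulated cube, as done for Fig.\ \ref{fig:spectre}, amounts to averaging the cube over an aperture mask at each wavelength. The sketch below uses a toy cube and an illustrative aperture center and radius, not the actual region of the figure.

```python
import numpy as np

# Toy cube standing in for C_f: (n_lambda, ny, nx), slice i holds value i+1.
n_lambda, ny, nx = 6, 30, 30
cube = np.ones((n_lambda, ny, nx)) * np.arange(1, n_lambda + 1)[:, None, None]

# Circular aperture mask; center and radius in pixels are illustrative.
yc, xc, r = 15, 15, 5
yy, xx = np.mgrid[0:ny, 0:nx]
mask = (yy - yc) ** 2 + (xx - xc) ** 2 <= r ** 2

# Mean spectrum over the region: one value per spectral element.
spectrum = cube[:, mask].mean(axis=1)
print(spectrum)
```

With the constant toy slices, the extracted spectrum is simply $1,\dots,6$, which makes the aperture averaging easy to verify.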
\section{Introduction} \emph{Answer set programming} (ASP) \cite{Niemela99:amai,MT99:slp,GL02:aij,Baral:knowledge} is an approach to declarative rule-based constraint programming that has been successfully used in many knowledge representation and reasoning tasks \cite{DBLP:conf/asp/SoininenNTS01,DBLP:conf/padl/NogueiraBGWB01,DBLP:journals/tplp/ErdemLR06,Brooks:inferring}. In ASP, the problem at hand is solved declaratively \begin{enumerate} \item by writing down a logic program the answer sets of which correspond to the solutions of the problem and \item by computing the answer sets of the program using a special purpose search engine that has been designed for this task. \end{enumerate} A modelling philosophy of this kind suggests treating programs as integral entities. The answer set semantics---originally defined for entire programs only~\cite{GL88:iclp,GL90:iclp}---also reflects this fact. This indivisibility of programs creates an increasing problem as program instances tend to grow with the demands of new application areas of ASP. It is to be expected that prospective application areas such as the semantic web, bioinformatics, and logical cryptanalysis will provide us with huge program instances to create, to solve, and to maintain. Modern programming languages provide means to exploit \emph{modularity} in a number of ways to govern the complexity of programs and their development process. Indeed, the use of \emph{program modules} or \emph{objects} of some kind can be viewed as an embodiment of the classical \emph{divide-and-conquer} principle in the art of programming. The benefits of modular program development are numerous. A software system is much easier to design as a set of interacting components rather than as a monolithic system with unclear internal structure. A modular design lends itself better to implementation as programming tasks are then easier to delegate amongst the team of programmers.
It also enables the re-use of code organized as module libraries, for instance. To achieve similar advantages in ASP, one of the central goals of our research is to foster modularity in the context of ASP. Although modularity has been studied extensively in the context of conventional logic programs, see Bugliesi et al.~\shortcite{BLM94} for a survey, relatively little attention has been paid to modularity in ASP. Many of the approaches proposed so far are based on very strict syntactic conditions on the module hierarchy, for instance, by enforcing stratification of some kind, or by prohibiting recursion altogether \cite{EGV97,TBA05:asp,EGM97:acm}. On the other hand, approaches based on \emph{splitting-sets} \cite{LT94:iclp,EGV97,FGL05:icdt} are satisfactory from the point of view of \emph{compositional semantics}: the answer sets of an entire program are obtained as specific combinations of the answer sets of its components. A limitation of splitting-sets is that they divide logic programs into two parts, the \emph{top} and the \emph{bottom}, which is rather restrictive and conceptually very close to stratification. On the other hand, the compositionality of answer set semantics is neglected altogether in syntactic approaches \cite{IIPSC04:nmr,TBA05:asp}, and this aspect of models remains completely the programmer's responsibility. To address the deficiencies described above, we adapt a module architecture proposed by Gaifman and Shapiro~\shortcite{GS89} to answer set programming, and in particular to the context of the \system{smodels} system \cite{SNS02:aij}.\footnote{Also other systems such as \system{clasp} \cite{GKNS07:lpnmr} and \system{cmodels} \cite{Lierler05:lpnmr} that are compatible with the internal file format of the \system{smodels} system are implicitly covered.} There are two main criteria for the design. First of all, it is essential to establish the full compositionality of answer set semantics with respect to the module system.
This is to ensure that various reasoning tasks---such as the verification of program equivalence \cite{JO07:tplp}---can be modularized. Second, for the sake of flexibility of knowledge representation, any restrictions on the module hierarchy should be avoided as far as possible. We pursue these goals according to the following plan. In Section~\ref{section:related}, we take a closer look at modularity in the context of logic programming. In order to enable comparisons later on, we also describe related approaches in the area of ASP in more detail. The technical preliminaries of the paper begin with a recapitulation of \emph{stable model semantics}~\cite{GL88:iclp} in Section~\ref{section:background}. However, stable models, or answer sets, are reformulated for a class of programs that corresponds to the input language of the \system{smodels} solver~\cite{SNS02:aij}. The definition of \emph{splitting-sets} is included to enable a detailed comparison with our results. Moreover, we introduce the concepts of \emph{program completion} and \emph{loop formulas} to be exploited in proofs later on, and review some notions of equivalence that have been proposed in the literature. In Section \ref{section:modules}, we present a \emph{module architecture} for \system{smodels} programs, in which the interaction between modules takes place through a clearly defined \textit{in\-put/out\-put interface}. The design superficially resembles that of Gaifman and Shapiro~\shortcite{GS89} but in order to achieve the full compositionality of stable models, further conditions on program composition are incorporated. This is formalized as the main result of the paper, namely the \emph{module theorem}, which goes beyond the splitting-set theorem~\cite{LT94:iclp} as \textit{negative recursion} is tolerated by our definitions. The proof is first presented for normal programs and then extended for \system{smodels} programs using a translation-based scheme. 
The scheme is based on three distinguished properties of translations, \emph{strong faithfulness}, \emph{preservation of compositions}, and \emph{modularity}, that are sufficient to lift the module theorem. In this way, we prepare for further syntactic extensions of the module theorem in the future. The respective notion of module-level equivalence, that is, \emph{modular equivalence}, is proved to be a proper \emph{congruence} for program composition. In other words, substitutions of modularly equivalent modules preserve modular equivalence. Thus, modular equivalence can be viewed as a reasonable compromise between \emph{uniform equivalence}~\cite{EF03:iclp}, which is not a congruence for program union, and \emph{strong equivalence}~\cite{LPV01:acmtocl}, which is a congruence for program union but allows only rather straightforward semantics-preserving transformations of (sets of) rules. In Section \ref{section:decomposition-and-semantical-join}, we address principles for the decomposition of \system{smodels} programs. It turns out that strongly connected components of dependency graphs can be exploited in order to extract a module structure when there is no explicit a priori knowledge about the modules of a program. In addition, we consider the possibility of relaxing our restrictions on program composition using the content of the module theorem as a criterion. The result is that the notion of modular equivalence remains unchanged but the computational cost of checking legal compositions of modules becomes substantially higher. In Section \ref{section:experiments}, we demonstrate how the module system can be exploited in practice in the context of the \system{smodels} system. We present tools that have been developed for the (de)composition of logic programs and conduct a practical experiment which illustrates the performance of the tools when processing very large benchmark instances, that is, \system{smodels} programs having up to millions of rules.
The concluding remarks of this paper are presented in Section \ref{section:conclusions}. \section{Modularity aspects of logic programming} \label{section:related} Bugliesi et al.~\shortcite{BLM94} address several properties that are expected from a modular logic programming language. For instance, a modular language should \begin{itemize} \item allow {\em abstraction, parameterization}, and {\em information hiding}, \item {\em ease program development} and {\em maintenance} of large programs, \item allow {\em re-usability}, \item have a {\em non-trivial notion of program equivalence} to justify the replacement of program components, and \item maintain the {\em declarativity} of logic programming. \end{itemize} Two mainstream programming disciplines are identified. In {\em pro\-gram\-ming-in-the-large} approaches, programs are composed with algebraic operators, see for instance \cite{Keefe85,MP88:iclp,GS89,BMPT94}. In {\em programming-in-the-small} approaches, abstraction mechanisms are used, see for instance \cite{Miller86,GM94:jlp}. The {\em programming-in-the-large} approaches have their roots in the framework proposed by O'Keefe~\shortcite{Keefe85} where logic programs are seen as {\em elements of an algebra} and the operators for {\em composing programs} are seen as {\em operators in that algebra}. The fundamental idea is that a logic program should be understood as a part of a system of programs. Program composition is a powerful tool for structuring programs without any need to extend the underlying language of Horn clauses. Several algebraic operations such as {\em union}, {\em deletion}, {\em overriding union} and {\em closure} have been considered. This approach naturally {\em supports} the {\em re-use} of pieces of programs in different composite programs, and when combined with an adequate equivalence relation, also the replacement of equivalent components.
This approach is highly flexible, as new composition mechanisms can be obtained by introducing a corresponding operator in the algebra or by combining existing ones. Encapsulation and information hiding can be obtained by introducing suitable {\em interfaces} for components. The {\em programming-in-the-small} approaches originate from \cite{Miller86}. In this approach the composition of modules is modelled in terms of logical connectives of a language that is defined as an {\em extension of Horn clause logic}. The approach in~\cite{GM94:jlp} employs the same structural properties, but suggests a more refined way of modelling visibility rules than the one in~\cite{Miller86}. It is essential that a semantical characterization of a modular language is such that the meaning of composite programs can be defined in terms of the meaning of their components~\cite{Maher93}. To be able to identify when it is safe to substitute a module with another without affecting the global behavior, it is crucial to have a notion of {\em semantical equivalence}. More formally, these desired properties can be described under the terms of {\em compositionality} and {\em full abstraction}~\cite{GS89,Meyer88}. Two programs are {\em observationally congruent} if and only if they exhibit the same observational behavior in every {\em context} they can be placed in. A semantics is compositional if semantical equality implies observational congruence, and fully abstract if semantical equivalence coincides with observational congruence. The compositionality and full abstraction properties for different notions of semantical equivalence ({\em subsumption equivalence}, {\em logical equivalence}, and {\em minimal Herbrand model equivalence}) and different operators in an algebra (union, closure, overriding union) are considered in~\cite{BLM94}. It is worth noting that minimal Herbrand model equivalence coincides with the \emph{weak equivalence} relation for positive logic programs.
As will be defined in Section \ref{sect:equivalences}, two logic programs are weakly equivalent if and only if they have exactly the same answer sets. As the equivalence based on minimal Herbrand model semantics is not compositional with respect to program union~\cite{BLM94}, we note that program union as such is not a suitable composition operator for our purposes unless further constraints are introduced. \subsection{Modularity in answer set programming} There are a number of approaches within answer set programming involving modularity in some sense, but only a few of them really describe a flexible module architecture with a clearly defined interface for module interaction. Eiter, Gottlob, and Veith~\shortcite{EGV97} address modularity in ASP in the programming-in-the-small sense. They view program modules as {\em generalized quantifiers} as introduced in~\cite{Mostowski57}. The definitions of quantifiers are allowed to nest, that is, a program $P$ can refer to another module $Q$ by using it as a generalized quantifier. The main program is clearly distinguished from subprograms, and it is possible to nest calls to submodules if the so-called {\em call graph} is {\em hierarchical}, that is, {\em acyclic}. Nesting, however, increases the computational complexity depending on the depth of nesting. Ianni et al.~\shortcite{IIPSC04:nmr} propose another programming-in-the-small approach to ASP based on {\em templates}. The semantics of programs containing template atoms is determined by an \emph{explosion algorithm}, which basically replaces the template with a standard logic program. However, the explosion algorithm is not guaranteed to terminate if template definitions are used recursively. Tari et al.~\shortcite{TBA05:asp} extend the language of normal logic programs by introducing the concept of {\em import rules} for their ASP program modules. There are three types of import rules which are used to import a set of tuples $\overline{X}$ for a predicate $q$ from another module.
An {\em ASP module} is defined as a quadruple of a module name, a set of parameters, a collection of normal rules and a collection of import rules. Semantics is only defined for modular programs with an acyclic dependency graph, and the answer sets of a module are defined with respect to the modular ASP program containing it. Also, it is required that import rules referring to the same module always have the same form. Programming-in-the-large approaches in ASP are mostly based on Lifschitz and Turner's splitting-set theorem \cite{LT94:iclp} or are variants of it. The class of logic programs considered in \cite{LT94:iclp} is that of {\em extended disjunctive logic programs}, that is, disjunctive logic programs with two kinds of negation. A {\em component structure} induced by a {\em splitting sequence}, that is, iterated splittings of a program, allows a bottom-up computation of answer sets. The restriction implied by this construction is that the dependency graph of the component chain needs to be acyclic. \newcommand{\sche}[1]{\mathbf{#1}} Eiter, Gottlob, and Mannila \shortcite{EGM97:acm} consider {\em disjunctive logic programs as a query language for relational databases}. A query program $\pi$ is instantiated with respect to an input database $D$ confined by an input schema $\sche{R}$. The semantics of $\pi$ determines, for example, the answer sets of $\pi[D]$, which are projected with respect to an output schema $\sche{S}$. Their module architecture is based on both {\em positive and negative dependencies}, and no recursion between modules is tolerated. These constraints enable a straightforward generalization of the splitting-set theorem for the architecture. Faber et al.~\shortcite{FGL05:icdt} apply the {\em magic set method} in the evaluation of {\em Datalog programs with negation}, that is, effectively normal logic programs. This involves the concept of an {\em independent set} $S$ of a program $P$, which is a specialization of a splitting set.
Due to a close relationship with splitting sets, the flexibility of independent sets for parceling programs is limited in the same way. The approach based on {\em lp-functions}~\cite{GG99,Baral:knowledge} is another programming-in-the-large approach. An lp-function has an interface based on input and output signatures. Several operations, for instance {\em incremental extension}, {\em interpolation}, {\em input opening}, and {\em input extension}, are introduced for composing and refining lp-functions. The composition of lp-functions, however, only allows incremental extension, and thus, similarly to the splitting-set theorem, there can be no recursion between lp-functions. \renewcommand{\GLred}[2]{{#1}^{#2}} \section{Preliminaries: {\sc smodels} programs} \label{section:background} To keep the presentation of our module architecture compatible with an actual implementation, we cover the input language of the \system{smodels} system---excluding {\em optimization statements}. In this section we introduce the syntax and semantics of \system{smodels} programs and, in addition, point out a number of useful properties of logic programs under stable model semantics. We end this section with a review of equivalence relations that have been proposed for logic programs. \subsection{Syntax and semantics} \emph{Basic constraint rules} \cite{SNS02:aij} are either \emph{weight rules} of the form \begin{equation} \label{eq:weight-rule} a\leftarrow\limit{w}{\rg{b_1=w_{b_1}}{,}{b_n=w_{b_n}}, \rg{\naf c_1=w_{c_1}}{,}{\naf c_m=w_{c_m}}} \end{equation} or {\em choice rules} of the form \begin{equation} \label{eq:choice-rule} \choice{\rg{a_1}{,}{a_h}}\leftarrow\rg{b_1}{,}{b_n},\rg{\naf c_1}{,}{\naf c_m} \end{equation} where $a$, $a_i$'s, $b_j$'s, and $c_k$'s are atoms, $h>0$, $n\geq 0$, $m\geq 0$, and $\naf$ denotes \emph{negation as failure} or \emph{default negation}.
In addition, a weight rule (\ref{eq:weight-rule}) involves a weight limit $w\in\mathbb{N}$ and the respective weights $w_{b_j}\in\mathbb{N}$ and $w_{c_k}\in\mathbb{N}$ associated with each \emph{positive literal} $b_j$ and \emph{negative literal} $\naf c_k$. We use a shorthand $\naf A=\set{\naf a\mid a\in A}$ for any set of atoms $A$. Each basic constraint rule $r$ consists of two parts: $a$ or $\eset{a_1}{a_h}$ is the \emph{head} of the rule, denoted by $\head{r}$, whereas the rest is called its \emph{body}. The set of atoms appearing in a body of a rule can be further divided into the set of {\em positive body atoms}, defined as $\pbody{r}=\set{b_1,\ldots, b_n}$, and the set of {\em negative body atoms}, defined as $\nbody{r}=\set{c_1,\ldots,c_m}$. We denote by $\body{r}=\pbody{r}\cup\nbody{r}$ the set of atoms appearing in the body of a rule $r$. Roughly speaking, the body gives the conditions on which the head of the rule must be satisfied. For example, in case of a choice rule (\ref{eq:choice-rule}), this means that any head atom $a_i$ can be inferred to be true if $\rg{b_1}{,}{b_n}$ hold true by some other rules but none of the atoms $\rg{c_1}{,}{c_m}$. Weight rules of the form (\ref{eq:weight-rule}) cover many other kinds of rules of interest as their special cases: \begin{eqnarray} \label{eq:cardinality-rule} a\leftarrow\limit{l}{\rg{b_1}{,}{b_n},\naf\rg{c_1}{,}{\naf c_m}} \\ \label{eq:basic-rule} a\leftarrow\rg{b_1}{,}{b_n},\naf\rg{c_1}{,}{\naf c_m} \\ \label{eq:integrity-constraint} \leftarrow\rg{b_1}{,}{b_n},\naf\rg{c_1}{,}{\naf c_m} \\ \label{eq:compute-statement} \mathsf{compute\ }\set{\rg{b_1}{,}{b_n},\rg{\naf c_1}{,}{\naf c_m}} \end{eqnarray} \emph{Cardinality rules} of the form (\ref{eq:cardinality-rule}) are essentially weight rules (\ref{eq:weight-rule}) where $w=l$ and all weights associated with literals equal to $1$. 
A \emph{normal rule}, or alternatively a \emph{basic rule} (\ref{eq:basic-rule}), is a special case of a cardinality rule (\ref{eq:cardinality-rule}) with $l=n+m$. The intuitive meaning of an integrity constraint (\ref{eq:integrity-constraint}) is that the conditions given in the body are never simultaneously satisfied. The same can be stated in terms of a basic rule $f\leftarrow\rg{b_1}{,}{b_n},\naf\rg{c_1}{,}{\naf c_m},\naf f$ where $f$ is a new atom dedicated to integrity constraints. Finally, \emph{compute statements} (\ref{eq:compute-statement}) of the \system{smodels} system effectively correspond to sets of integrity constraints $\rg{\leftarrow\naf b_1}{,}{\leftarrow\naf b_n}$ and $\rg{\leftarrow c_1}{,}{\leftarrow c_m}$. Because the order of literals in (\ref{eq:weight-rule}) and (\ref{eq:choice-rule}) is considered irrelevant, we introduce shorthands $A=\eset{a_1}{a_h}$, $B=\eset{b_1}{b_n}$, and $C=\eset{c_1}{c_m}$ for the sets of atoms involved in rules, and $W_B=\eset{w_{b_1}}{w_{b_n}}$ and $W_{C}=\eset{w_{c_1}}{w_{c_m}}$ for the respective sets of weights in (\ref{eq:weight-rule}). Using these notations, (\ref{eq:weight-rule}) and (\ref{eq:choice-rule}) are abbreviated by $a\leftarrow\limit{w}{B=W_B,\naf C=W_C}$\footnote{Strictly speaking, $B=W_B$ and $\naf C=W_C$ are to be understood as sets of pairs of the form $(b,w_{b})$ and $(\naf c,w_{c})$, respectively. For convenience, the exact matching between literals and weights is left implicit in the shorthand.} and $\choice{A}\leftarrow B,\naf C$. In the \system{smodels} system, the internal representation of programs is based on rules of the forms (\ref{eq:weight-rule})--(\ref{eq:basic-rule}) and (\ref{eq:compute-statement}), and one may conclude that basic constraint rules, as introduced above, provide a reasonable coverage of \system{smodels} programs. Thus we concentrate on rules of the forms (\ref{eq:weight-rule}) and (\ref{eq:choice-rule}) and view the others as syntactic sugar in the sequel.
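To make the rule syntax above concrete, the following minimal Python sketch represents weight and choice rules as plain data and shows how a basic rule arises as the special case $l = n + m$ with unit weights. The class and function names are ours, for illustration only; they are not part of the \system{smodels} implementation.

```python
from dataclasses import dataclass, field

@dataclass
class WeightRule:
    """a <- w <= { b_1 = w_b1, ..., not c_1 = w_c1, ... }"""
    head: str
    limit: int
    pos: dict = field(default_factory=dict)  # positive body atoms -> weights
    neg: dict = field(default_factory=dict)  # negated body atoms  -> weights

@dataclass
class ChoiceRule:
    """{ a_1, ..., a_h } <- b_1, ..., b_n, not c_1, ..., not c_m"""
    heads: tuple
    pos: tuple = ()
    neg: tuple = ()

def basic_rule(head, pos=(), neg=()):
    """A basic rule is a weight rule with limit l = n + m and unit weights."""
    return WeightRule(head, len(pos) + len(neg),
                      {b: 1 for b in pos}, {c: 1 for c in neg})

# a <- b, not c.
r = basic_rule("a", pos=("b",), neg=("c",))
print(r)
```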
\begin{definition} \label{smodels-program} An \system{smodels} program $P$ is a finite set of basic constraint rules. \end{definition} An \system{smodels} program consisting only of basic rules is called a {\em normal logic program}~(NLP), and a basic rule with an empty body is called a {\em fact}. Given an \system{smodels} program $P$, we write $\hb{P}$ for its \emph{signature}, that is, the set of atoms occurring in $P$, and $\body{P}$ and $\head{P}$ for the respective subsets of $\hb{P}$ having \emph{body occurrences} and \emph{head occurrences} in the rules of $P$. Furthermore, $\choiceheads{P}\subseteq\head{P}$ denotes the set of atoms having a head occurrence in a choice rule of $P$. Given a program $P$, an {\em interpretation} $M$ of $P$ is a subset of $\hb{P}$ defining which atoms $a\in\hb{P}$ are {\em true} ($a\in M$) and which are {\em false} ($a\not\in M$). A weight rule (\ref{eq:weight-rule}) is satisfied in $M$ if and only if $a\in M$ whenever the sum of weights $$\sum_{b\in B\cap M}w_{b}+\sum_{c\in C\setminus M}w_{c}$$ is at least $w$. A choice rule $\choice{A}\leftarrow B, \naf C$ is always satisfied in $M$. An interpretation $M\subseteq\hb{P}$ is a (classical) model of $P$, denoted by $M\models P$, if and only if $M$ satisfies all the rules in $P$. The generalization of the {\em Gelfond-Lifschitz reduct}~\cite{GL88:iclp} for \system{smodels} programs is defined as follows. \begin{definition} \label{reduct} For an \system{smodels} program $P$ and an interpretation $M\subseteq\hb{P}$, the reduct $\GLred{P}{M}{}$ contains \begin{enumerate} \item a rule $a\leftarrow B$ if and only if there is a choice rule $\choice{A}\leftarrow B, \naf C$ in $P$ such that $a\in A\cap M$, and $M\cap C=\emptyset$; \item a rule $a\leftarrow\limit{w'}{B=W_{B}}$ if and only if there is a weight rule $a\leftarrow\limit{w}{B=W_{B},\naf C=W_{C}}$ in $P$ such that $w'= \max(0,w-\sum_{c\in C\setminus M}w_{c})$. 
\end{enumerate} \end{definition} We say that an \system{smodels} program $P$ is {\em positive} if each rule in $P$ is a weight rule restricted to the case $C=\emptyset$. Recalling that the basic rules are just a special case of weight rules, we note that the reduct $\GLred{P}{M}$ is always positive. An interpretation $M\subseteq\hb{P}$ is the {\em least model} of a positive \system{smodels} program $P$, denoted by $\lm{P}$, if and only if $M\models P$ and there is no $M'\models P$ such that $M'\subset M$. Given the {\em least model semantics} for positive programs~\cite{JO07:tplp}, the stable model semantics~\cite{GL88:iclp} straightforwardly generalizes to \system{smodels} programs~\cite{SNS02:aij}. \begin{definition} \label{smodels-stable} An interpretation $M\subseteq\hb{P}$ is a stable model of an \system{smodels} program $P$ if and only if $M=\lm{\GLred{P}{M}{}}$. \end{definition} Given an \system{smodels} program $P$ and $a,b\in\hb{P}$, we say that $a$ {\em depends directly} on $b$, denoted by $b\leq_1 a$, if and only if $P$ contains a rule $r$ such that $a\in\head{r}$ and $b\in\pbody{r}$. The {\em positive dependency graph} of $P$, denoted by $\dep{P}$, is the graph $\pair{\hb{P}}{\leq_1}$. The reflexive and transitive closure of $\leq_1$ gives rise to the dependency relation $\leq$ over $\hb{P}$. A {\em strongly connected component} (SCC) $S$ of $\dep{P}$ is a maximal set $S\subseteq\hb{P}$ such that $b\leq a$ holds for every $a,b\in S$. \subsection{Splitting sets and loop formulas} \label{section:splitting-and-loopformulas} In this section we consider only the class of {\em normal logic programs}.
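For normal programs, the reduct-and-fixpoint characterization of stable models above can be checked by brute force. The sketch below uses our own triple representation of normal rules and is illustrative only; it enumerates all candidate interpretations, so it is exponential and suitable only for tiny programs.

```python
from itertools import chain, combinations

# A normal rule is (head, pos_body, neg_body); a program is a list of rules.
# Example: P = { a <- not b.  b <- not a. }, whose stable models are {a}, {b}.
P = [("a", (), ("b",)), ("b", (), ("a",))]

def least_model(pos_program):
    """Least model of a positive program by fixpoint iteration."""
    M = set()
    changed = True
    while changed:
        changed = False
        for head, pos, _ in pos_program:
            if set(pos) <= M and head not in M:
                M.add(head)
                changed = True
    return M

def stable_models(program):
    atoms = ({h for h, _, _ in program}
             | {a for _, p, n in program for a in chain(p, n)})
    models = []
    for k in range(len(atoms) + 1):
        for M in map(set, combinations(sorted(atoms), k)):
            # Gelfond-Lifschitz reduct: drop rules blocked by M, strip negation.
            reduct = [(h, p, ()) for h, p, n in program if not (set(n) & M)]
            if least_model(reduct) == M:
                models.append(M)
    return models

print(stable_models(P))  # the two stable models {a} and {b}
```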
We formulate the {\em splitting-set theorem} \cite{LT94:iclp} in the case of normal logic programs\footnote{ Lifschitz and Turner \shortcite{LT94:iclp} consider a more general class of logic programs, {\em extended disjunctive logic programs}, that is, disjunctive logic programs with two kinds of negation.}, and give an alternative definition of stable models based on the {\em classical models} of the {\em completion} of a program~\cite{Clark78} and its {\em loop formulas}~\cite{LinZ04}. The splitting-set theorem can be used to simplify the computation of stable models by splitting a program into parts, and it is also a useful tool for structuring mathematical proofs for properties of logic programs. \begin{definition} \label{def:splitting-set} A splitting set for a normal logic program $P$ is any set $U\subseteq \hb{P}$ such that for every rule $r$ in $P$ it holds that $\hb{r}\subseteq U$ if $\head{r}\in U$. \end{definition} The set of rules $r\in P$ such that $\hb{r}\subseteq U$ is the {\em bottom} of $P$ relative to $U$, denoted by $\bottom{P}{U}$. The set $\topp{P}{U}=P\setminus\bottom{P}{U}$ is the {\em top} of $P$ relative to $U$ which can be partially evaluated with respect to an interpretation $X\subseteq U$. The result is a program $\rest{P}{U}{X}$ defined as \begin{multline*} \{\head{r}\leftarrow (\pbody{r}\setminus U),\naf(\nbody{r}\setminus U) \mid r\in \topp{P}{U}, \\\pbody{r}\cap U\subseteq X\mbox{ and } (\nbody{r}\cap U)\cap X =\emptyset\}. \end{multline*} A {\em solution} to a program with respect to a splitting set is a pair consisting of a stable model~$X$ for the bottom and a stable model $Y$ for the top partially evaluated with respect to $X$. 
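The splitting construction above can be sketched concretely: compute the bottom and top relative to a splitting set $U$, then partially evaluate the top with respect to a stable model $X$ of the bottom. Rules are again represented as (head, positive body, negative body) triples; this representation and the example program are ours, for illustration.

```python
# Rules as (head, pos_body, neg_body); U is a splitting set for P.
P = [("a", (), ()),            # a.
     ("b", ("a",), ()),        # b <- a.
     ("c", ("b",), ("d",))]    # c <- b, not d.
U = {"a", "b"}

def split(program, U):
    """Bottom b_U(P): rules whose atoms all lie in U; top t_U(P): the rest."""
    bottom = [r for r in program
              if {r[0]} | set(r[1]) | set(r[2]) <= U]
    top = [r for r in program if r not in bottom]
    return bottom, top

def evaluate(top, U, X):
    """e_U(t_U(P), X): drop rules blocked by X, remove literals over U."""
    out = []
    for h, pos, neg in top:
        if set(pos) & U <= X and not (set(neg) & U & X):
            out.append((h,
                        tuple(b for b in pos if b not in U),
                        tuple(c for c in neg if c not in U)))
    return out

bottom, top = split(P, U)
X = {"a", "b"}                 # the stable model of the bottom
print(evaluate(top, U, X))     # [('c', (), ('d',))]
```

Pairing $X$ with any stable model $Y$ of the evaluated top then yields a solution $\langle X, Y\rangle$ in the sense of the definition that follows.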
\begin{definition} \label{solution} Given a splitting set $U$ for a normal logic program $P$, a solution to $P$ with respect to $U$ is a pair $\pair{X}{Y}$ such that \begin{enumerate} \item[(i)] $X\subseteq U$ is a stable model of $\bottom{P}{U}$, and \item[(ii)] $Y\subseteq\hb{P}\setminus U$ is a stable model of $\rest{P}{U}{X}$. \end{enumerate} \end{definition} Solutions and stable models relate as follows. \begin{theorem}[The splitting-set theorem~\cite{LT94:iclp}] \label{thr:splitting-set} \ \\ Let $U$ be a splitting set for a normal logic program $P$ and $M\subseteq\hb{P}$ an interpretation. Then $M\in\sm{P}$ if and only if the pair $\pair{M\cap U}{M\setminus U}$ is a solution to $P$ with respect to $U$. \end{theorem} The splitting-set theorem can also be used in an iterative manner, if there is a {\em monotone sequence} of splitting sets $\{U_1,\ldots, U_i,\ldots\}$, that is, $U_i\subset U_j$ if $i<j$, for program $P$. This is called a {\em splitting sequence} and it induces a {\em component structure} for $P$. The splitting-set theorem generalizes to a {\em splitting sequence theorem}~\cite{LT94:iclp}, and given a splitting sequence, the stable models of a program $P$ can be computed iteratively bottom-up. Lin and Zhao present an alternative definition of stable models for normal logic programs based on the {\em classical models} of the {\em completion} of a program~\cite{Clark78} and its {\em loop formulas}~\cite{LinZ04}. We will apply this definition later on in the proof of the module theorem (Theorem \ref{moduletheorem}). \begin{definition}[Program completion \cite{Clark78,Fages94}] \label{clarks-completion} The completion of a normal logic program $P$ is \begin{eqnarray} \clark{P} = \bigwedge_{a \in \hb{P}} \Bigg (a \leftrightarrow \bigvee_{\head{r}=a} \Bigg ( \bigwedge_{b \in \pbody{r}} b \wedge \bigwedge_{c \in \nbody{r}} \neg c \Bigg ) \Bigg). 
\end{eqnarray} \end{definition} Note that an empty body reduces to true and in that case the respective equivalence for an atom $a$ is logically equivalent to $a\leftrightarrow \top$. \begin{definition} \label{def:loop} Given a normal logic program $P$, a set of atoms $L\subseteq\hb{P}$ is a loop of $P$ if for every $a,b\in L$ there is a path of non-zero length from $a$ to $b$ in $\dep{P}$ such that all vertices in the path are in $L$. \end{definition} \begin{definition} \label{def:loop-formula} Given a normal logic program $P$ and a loop $L\subseteq\hb{P}$ of $P$, the loop formula associated with $L$ is $$\loops{L}{P}= \neg\Bigg(\bigvee_{r\in\mathrm{EB}(L,P)} \Bigg ( \bigwedge_{b\in\pbody{r}}b\land\bigwedge_{c\in\nbody{r}}\neg c \Bigg ) \Bigg )\rightarrow \bigwedge_{a\in L}\neg a$$ where $\mathrm{EB}(L,P) = \{r\in P \mid \head{r}\in L\mbox{ and }\pbody{r}\cap L =\emptyset\}$ is the set of rules in $P$ which have external bodies of $L$. \end{definition} Now, stable models of a program and classical models of its completion that satisfy the loop formulas relate as follows. \begin{theorem}[\cite{LinZ04}] \label{stable-models-using-loop-formulas} Given a normal logic program $P$ and an interpretation $M\subseteq\hb{P}$, $M\in\sm{P}$ if and only if $M\models\clark{P}\cup\lfs{P}$, where $\lfs{P}$ is the set of all loop formulas associated with the loops of $P$. \end{theorem} \subsection{Equivalence relations for {\sc smodels} programs} \label{sect:equivalences} There are several notions of equivalence that have been proposed for logic programs. We review a number of them in the context of \system{smodels} programs. Lifschitz et al.~\shortcite{LPV01:acmtocl} address the notions of {\em weak/ordinary equivalence} and {\em strong equivalence}. 
\begin{definition} \label{weak-strong-eq} \system{smodels} programs $P$ and $Q$ are weakly equivalent, denoted by $P\lpeq{}Q$, if and only if $\sm{P}=\sm{Q}$; and strongly equivalent, denoted by $P\lpeq{s}Q$, if and only if $P\cup R\lpeq{}Q\cup R$ for any \system{smodels} program $R$. \end{definition} The program $R$ in the above definition can be understood as an arbitrary context in which the two programs being compared could be placed. Therefore strongly equivalent logic programs are semantics preserving substitutes of each other and relation $\lpeq{s}$ is a {\em congruence relation} for $\cup$ among \system{smodels} programs, that is, if $P\lpeq{s}Q$, then also $P\cup R\lpeq{s}Q\cup R$ for all \system{smodels} programs $R$. Using $R=\emptyset$ as context, one sees that $P\lpeq{s}Q$ implies $P\lpeq{}Q$. The converse does not hold in general. A way to weaken strong equivalence is to restrict possible contexts to sets of facts. The notion of {\em uniform equivalence} has its roots in the database community \cite{Sagiv87}, see \cite{EF03:iclp} for the case of the stable model semantics. \begin{definition} \label{uniform-eq} \system{smodels} programs $P$ and $Q$ are uniformly equivalent, denoted by $P\lpeq{u}Q$, if and only if $P\cup F\lpeq{}Q\cup F$ for any set of facts $F$. \end{definition} Example \ref{uni-not-strong} shows that uniform equivalence is not a congruence for union. \begin{example}{\bf (\cite[Example 1]{EFTW04:lpnmr})} \label{uni-not-strong} Consider programs $P=\{a.\}$ and $Q=\{a\leftarrow \naf b.\; a\leftarrow b.\}$. It holds that $P\lpeq{u}Q$, but $P\cup R\not\lpeq{} Q\cup R$ for the context $R=\{b\leftarrow a.\}$. This implies $P\not\lpeq{s}Q$ and $P\cup R\not\lpeq{u}Q\cup R$. \hfill $\blacksquare$ \end{example} There are also {\em relativized variants of strong and uniform equivalence} \cite{Woltran04:jelia} which allow the context to be constrained using a set of atoms $A$. 
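Example \ref{uni-not-strong} can be replayed mechanically with a brute-force enumerator of stable models (purely illustrative and exponential in the number of atoms; rules are again hypothetical (head, positive body, negative body) triples):

```python
from itertools import combinations

def stable_models(rules, atoms):
    """Enumerate all stable models over the given atom set (brute force)."""
    def lm(pos_rules):
        m, changed = set(), True
        while changed:
            changed = False
            for head, body in pos_rules:
                if body <= m and head not in m:
                    m.add(head)
                    changed = True
        return m
    found = set()
    for k in range(len(atoms) + 1):
        for cand in combinations(sorted(atoms), k):
            m = set(cand)
            # Gelfond-Lifschitz reduct w.r.t. the candidate m.
            red = [(h, pos) for h, pos, neg in rules if not (neg & m)]
            if lm(red) == m:
                found.add(frozenset(m))
    return found
```

With $P=\{a.\}$, $Q=\{a\leftarrow \naf b.\; a\leftarrow b.\}$, and $R=\{b\leftarrow a.\}$, the enumerator confirms $\sm{P}=\sm{Q}=\{\{a\}\}$ while $\sm{P\cup R}=\{\{a,b\}\}$ differs from $\sm{Q\cup R}=\emptyset$, exactly as claimed in the example.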
For weak equivalence of programs $P$ and $Q$ to hold, $\sm{P}$ and $\sm{Q}$ have to be identical subsets of $\mathbf{2}^{\hb{P}}$ and $\mathbf{2}^{\hb{Q}}$, respectively. The same effect can be seen with $P\lpeq{s}Q$ and $P\lpeq{u}Q$. This makes these relations less useful if $\hb{P}$ and $\hb{Q}$ differ by some (local) atoms not trivially false in all stable models. The {\em visible equivalence relation}~\cite{Janhunen06:jancl} takes the interfaces of programs into account. The atoms in $\hb{P}$ are partitioned into two parts, $\hbv{P}$ and $\hbh{P}$, which determine the {\em visible} and the {\em hidden} parts of $\hb{P}$, respectively. Visible atoms form an interface for interaction between programs, and hidden atoms are local to each program and thus negligible where visible equivalence of programs is concerned. \begin{definition} \label{equivalences} \system{smodels} programs $P$ and $Q$ are visibly equivalent, denoted by $P\lpeq{v}Q$, if and only if $\hbv{P}=\hbv{Q}$ and there is a bijection $f\fun{\sm{P}}{\sm{Q}}$ such that for all $M\in\sm{P}$, $M\cap\hbv{P}=f(M)\cap\hbv{Q}$. \end{definition} Note that the number of stable models is also preserved under $\lpeq{v}$. Such a strict correspondence of models is largely dictated by the answer set programming methodology: the stable models of a program usually correspond to the solutions of the problem being solved and thus the exact preservation of models is highly significant. In the fully visible case, that is, for $\hbh{P}=\hbh{Q}=\emptyset$, the relation $\lpeq{v}$ becomes very close to $\lpeq{}$. The only difference is the requirement $\hb{P}=\hb{Q}$ that $\lpeq{v}$ insists on. This is of little importance as $\hb{P}$ can always be extended by adding (tautological) rules of the form $a\leftarrow a$ to $P$ without affecting the stable models of the program. Since weak equivalence is not a congruence for $\cup$, visible equivalence cannot be a congruence for program union either.
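Once the stable models of two programs have been enumerated, the bijection required by Definition \ref{equivalences} exists precisely when the multisets of visible projections coincide: models with equal visible parts can be matched with each other in any order. A small illustrative check (the shared visible signature is passed in explicitly):

```python
from collections import Counter

def visibly_equivalent(models_p, models_q, visible):
    """A projection-preserving bijection between two collections of
    stable models exists iff the multisets of their visible parts
    coincide (assumes both programs share the visible signature)."""
    project = lambda models: Counter(frozenset(m & visible) for m in models)
    return project(models_p) == project(models_q)
```

Hidden atoms are ignored by the check, but the number of stable models still matters: one model projecting to $\{a\}$ is not matched by two such models.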
The verification of weak, strong, or uniform equivalence is a $\mathbf{coNP}$-complete decision problem for \system{smodels} programs \cite{MT91:tplp,PTW01:epia,EF03:iclp}. The computational complexity of deciding $\lpeq{v}$ is analyzed in~\cite{JO07:tplp}. If the use of hidden atoms is not limited in any way, the problem of verifying visible equivalence becomes at least as hard as the counting problem $\#\mathbf{SAT}$, which is $\#\mathbf{P}$-complete \cite{Valiant79}. It is possible, however, to govern the computational complexity by restricting the use of hidden atoms through the property of having {\em enough visible atoms}~\cite{JO07:tplp}. Intuitively, if $P$ has enough visible atoms, the EVA property for short, then each interpretation of $\hbv{P}$ {\em uniquely} determines an interpretation of $\hbh{P}$. Consequently, the stable models of $P$ can be distinguished on the basis of their visible parts. Although verifying the EVA property can be hard in general \cite[Proposition 4.14]{JO07:tplp}, there are syntactic subclasses of \system{smodels} programs with the EVA property. The use of visible atoms remains unlimited and thus the full expressiveness of \system{smodels} programs remains at the programmer's disposal. Also note that the EVA property can always be achieved by declaring sufficiently many atoms visible. For \system{smodels} programs with the EVA property, the verification of visible equivalence is a {\bf coNP}-complete decision problem~\cite{JO07:tplp}. Eiter et al.~\shortcite{ETW05:ijcai} introduce a very general framework based on {\em equivalence frames} to capture various kinds of equivalence relations. All the equivalence relations defined in this section can be defined using the framework. Visible equivalence, however, is exceptional in the sense that it does not fit into equivalence frames based on {\em projected answer sets}.
As a consequence, the number of answer sets may not be preserved which is somewhat unsatisfactory because of the general nature of answer set programming as discussed in the previous section. Under the EVA assumption, however, the {\em projective variant of visible equivalence} defined by $$\set{M\cap\hbv{P}\mid M\in\sm{P}}= \set{N\cap\hbv{Q}\mid N\in\sm{Q}}$$ coincides with visible equivalence. Recently Woltran presented another general framework characterizing {\em $\tuple{\mathcal{H},\mathcal{B}}$-equivalence}~\cite{Woltran07:cent}. $\tuple{\mathcal{H},\mathcal{B}}$-equivalence is defined similarly to strong equivalence, but the set of possible contexts is restricted by limiting the head and body occurrences of atoms in a context program $R$ by $\mathcal{H}$ and $\mathcal{B}$, respectively. Thus, programs $P$ and $Q$ are $\tuple{\mathcal{H},\mathcal{B}}$-equivalent if and only if $P\cup R\lpeq{}Q\cup R$ for all $R$ such that $\head{R}\subseteq\mathcal{H}$ and $\body{R}\subseteq\mathcal{B}$. Several notions of equivalence such as weak equivalence together with (relativized) strong and (relativized) uniform equivalence can be seen as special cases of $\tuple{\mathcal{H},\mathcal{B}}$-equivalence by varying the sets $\mathcal{H}$ and $\mathcal{B}$. \renewcommand{\GLred}[3]{{#1}^{#2,#3}} \section{{\sc smodels} program modules} \label{section:modules} We start this section by introducing the syntax and the stable model semantics for an individual \system{smodels} program module, and then formalize the conditions for module composition. One of the main results is the {\em module theorem} showing that module composition is suitably restricted so that compositionality of stable model semantics for \system{smodels} programs is achieved. We also introduce an equivalence relation for modules, and propose a general translation-based scheme for introducing syntactical extensions for the module theorem. The scheme is then utilized in the proof of the module theorem. 
We end this section with a brief comparison between our module architecture and other similar proposals. \subsection{Syntax and semantics of an {\sc smodels} program module} We define a {\em logic program module} similarly to Gaifman and Shapiro~\shortcite{GS89}, but consider the case of \system{smodels} programs instead of positive normal logic programs covered in~\cite{GS89}. An analogous module system in the context of \emph{disjunctive logic programs} is presented in \cite{JOTW07:lpnmr}. \begin{definition} \label{smodels-module} An \system{smodels} program module $\module{P}$ is a quadruple $\tuple{R,I,O,H}$ where \begin{enumerate} \item $R$ is a finite set of basic constraint rules; \item $I$, $O$, and $H$ are pairwise disjoint sets of input, output, and hidden atoms; \item $\hb{R}\subseteq\hb{\module{P}}$ which is defined by $\hb{\module{P}}=I\cup O\cup H$; and \item $\head{R}\cap I=\emptyset$. \end{enumerate} \end{definition} The atoms in $\hbv{\module{P}}=I\cup O$ are considered to be \emph{visible} and hence accessible to other modules conjoined with $\module{P}$; either to produce input for $\module{P}$ or to utilize the output of $\module{P}$. We use notations $\hbi{\module{P}}$ and $\hbo{\module{P}}$ for referring to the \emph{input signature} $I$ and the \emph{output signature} $O$, respectively. The \emph{hidden} atoms in $\hbh{\module{P}}=H=\hb{\module{P}}\setminus\hbv{\module{P}}$ are used to formalize some auxiliary concepts of $\module{P}$ which may not be sensible for other modules but may save space substantially. The use of hidden atoms may yield exponential savings in space, see~\cite[Example 4.5]{JO07:tplp}, for instance. The condition $\head{R}\cap I=\emptyset$ ensures that a module may not interfere with its own input by defining input atoms of $I$ in terms of its rules. Thus input atoms are only allowed to appear as conditions in rule bodies. 
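Conditions 2--4 of Definition \ref{smodels-module} are purely syntactic and easy to check mechanically. The following sketch is restricted to normal rules (a simplification: actual \system{smodels} modules also admit choice and weight rules) under the usual hypothetical (head, positive body, negative body) encoding:

```python
# Illustrative validity check for a module <rules, inp, out, hid>,
# restricted to normal rules encoded as (head, pos_body, neg_body).

def valid_module(rules, inp, out, hid):
    """Check: pairwise disjoint I, O, H; every rule atom inside
    I | O | H; and no input atom occurring in a rule head."""
    herbrand = inp | out | hid
    pairwise_disjoint = (not (inp & out) and not (inp & hid)
                         and not (out & hid))
    atoms_covered = all(({head} | pos | neg) <= herbrand
                        for head, pos, neg in rules)
    inputs_undefined = all(head not in inp for head, _, _ in rules)
    return pairwise_disjoint and atoms_covered and inputs_undefined
```

For instance, the one-rule module $\tuple{\set{a\leftarrow b.},\set{b},\set{a},\emptyset}$ passes the check, whereas declaring $a$ an input atom would violate the last condition.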
\begin{example} \label{ex:hc1} Consider the Hamiltonian cycle problem for directed graphs, that is, whether there is a cycle in the graph such that each node is visited exactly once returning to the starting node. Let $n$ denote the number of nodes in the graph and let $\mathsf{arc}(x,y)$ denote that there is a directed edge from node $x$ to node $y$ in the graph. Module $\module{H}^n=\tuple{R,I,O,\set{c,d}}$ selects the edges to be taken into a cycle by insisting that each node must have exactly one incoming and exactly one outgoing edge. The input signature of $\module{H}^n$ is a graph represented as a set of edges: $I=\set{\mathsf{arc}(x,y)\mid 1\leq x,y\leq n}$. The output signature of $\module{H}^n$ represents which edges get selected into a candidate for a Hamiltonian cycle: $O=\set{\mathsf{hc}(x,y)\mid 1\leq x,y\leq n}$. The set $R$ contains rules \begin{eqnarray} \choice{\mathsf{hc}(x,y)} &\leftarrow& \mathsf{arc}(x,y) \label{eq:selection} \\ c &\leftarrow& \limit{2}{\mathsf{hc}(x,1),\ldots,\mathsf{hc}(x,n)} \label{eq:ingoing1}\\ c &\leftarrow& \naf\mathsf{hc}(x,1),\ldots,\naf\mathsf{hc}(x,n)\label{eq:ingoing2} \\ c &\leftarrow& \limit{2}{\mathsf{hc}(1,x),\ldots,\mathsf{hc}(n,x)}\label{eq:outgoing1} \mbox{ and}\\ c &\leftarrow& \naf\mathsf{hc}(1,x),\ldots,\naf\mathsf{hc}(n,x)\label{eq:outgoing2} \end{eqnarray} for each $1\leq x,y\leq n$; and a rule $d \leftarrow \naf d, c$ which enforces $c$ to be false in every stable model. The rules in (\ref{eq:selection}) encode the selection of edges taken in the cycle. The rules in (\ref{eq:ingoing1}) and (\ref{eq:ingoing2}) are used to guarantee that each node has exactly one outgoing edge, and the rules in (\ref{eq:outgoing1}) and (\ref{eq:outgoing2}) give the respective condition concerning incoming edges. We also need to check that each node is reachable from the first node along the edges in the cycle. For this, we introduce module $\module{R}^n=\tuple{R',I', O', \set{e}}$. 
The input signature of $\module{R}^n$ is $I'=O$, and the output signature is $O'=\set{\mathsf{reached}(x)\mid 1\leq x\leq n}$, where $\mathsf{reached}(x)$ tells that node $x$ is reachable from the first node. The set $R'$ contains rules \begin{eqnarray*} \mathsf{reached}(y)& \leftarrow &\mathsf{hc}(1,y)\\ \mathsf{reached}(y)& \leftarrow &\mathsf{reached}(x), \mathsf{hc}(x,y) \\ e&\leftarrow& \naf e, \naf\mathsf{reached}(y) \end{eqnarray*} for each $2\leq x\leq n$ and $1\leq y\leq n$. \hfill $\blacksquare$ \end{example} To generalize the stable model semantics to cover modules as well, we must explicate the semantical role of input atoms. To this end, we will follow an approach% \footnote{ There are alternative ways to handle input atoms. One possibility is to combine a module with a set of facts (or a database) over its input signature \cite{OJ06:ecai,Oikarinen07:lpnmr}. Yet another approach is to interpret input atoms as \emph{fixed atoms} in the sense of parallel circumscription~\cite{Lifschitz85:ijcai}.} from \cite{JOTW07:lpnmr} and take input atoms into account in the definition of the reduct adopted from \cite{JO07:tplp}. It should be stressed that {\em all negative literals} and {\em literals involving input atoms} get evaluated in the reduction. Moreover, our definitions become equivalent with those proposed for normal programs \cite{GL88:iclp} and \system{smodels} programs \cite{JO07:tplp} if an empty input signature $I=\emptyset$ is additionally assumed. Using the same idea, a conventional \system{smodels} program, that is, a set of basic constraint rules $R$, can be viewed as a module $\tuple{R,\emptyset,\hb{R},\emptyset}$ without any input atoms and all atoms visible. 
\begin{definition} \label{def:reduct} Given a module $\module{P}=\tuple{R,I,O,H}$, the \emph{reduct} of $R$ with respect to an interpretation $M\subseteq\hb{\module{P}}$ and input signature $I$, denoted by $\GLred{R}{M}{I}$, contains \begin{enumerate} \item a rule $a\leftarrow (B\setminus I)$ if and only if there is a choice rule $\choice{A}\leftarrow B, \naf C$ in $R$ such that $a\in A\cap M$, $B\cap I\subseteq M$, and $M\cap C=\emptyset$; and \item a rule $a\leftarrow\limit{w'}{B\setminus I=W_{B\setminus I}}$ if and only if there is a weight rule $a\leftarrow\limit{w}{B=W_{B},\naf C=W_{C}}$ in $R$, and $$w'=\max(0,w-\sum_{b\in B\cap I\cap M}w_b-\sum_{c\in C\setminus M}w_c).$$ \end{enumerate} \end{definition} As all occurrences of atoms in the input signature and all negative occurrences of atoms are evaluated, the generalized reduct $\GLred{R}{M}{I}$ is a positive program in the sense of \cite{JO07:tplp} and thus it has a unique least model $\lm{\GLred{R}{M}{I}}\subseteq\hb{R}\setminus I$. \begin{definition} \label{def:stable-model} An interpretation $M\subseteq\hb{\module{P}}$ is a stable model of an \system{smodels} program module $\module{P}=\tuple{R,I,O,H}$, denoted by $M\in\sm{\module{P}}$, if and only if $M\setminus I=\lm{\GLred{R}{M}{I}}$. \end{definition} If one is interested in computing stable models of a module with respect to a certain input interpretation, it is easier to use an alternative definition of stable semantics for modules~\cite{OJ06:ecai}, where an actual input is seen as a set of facts (or a database) to be combined with the module. 
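Restricted to normal rules, the reduct of Definition \ref{def:reduct} simply drops every rule whose input-positive part is not satisfied by $M$ or whose negative part intersects $M$, and removes the input atoms from the remaining positive bodies. The resulting stability check of Definition \ref{def:stable-model} can then be sketched as follows (illustrative encoding as before; choice and weight rules are not covered):

```python
# Illustrative module stability check for normal rules
# (head, positive_body, negative_body) with input signature inp.

def module_stable(rules, inp, m):
    """m is a stable model iff m - inp equals the least model of the
    reduct, where input atoms and all negative literals are
    evaluated with respect to m."""
    red = [(head, pos - inp) for head, pos, neg in rules
           if (pos & inp) <= m and not (neg & m)]
    least, changed = set(), True
    while changed:
        changed = False
        for head, body in red:
            if body <= least and head not in least:
                least.add(head)
                changed = True
    return (m - inp) == least
```

For the one-rule module $\tuple{\set{a\leftarrow b.},\set{b},\set{a},\emptyset}$ the check accepts exactly $\emptyset$ and $\set{a,b}$: the input atom $b$ varies freely, and $a$ is derived precisely when $b$ is provided.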
\begin{definition} \label{def:instantiate-module} Given an \system{smodels} program module $\module{P}=\tuple{R,I,O,H}$ and a set of atoms $A\subseteq I$, the instantiation of $\module{P}$ with an actual input $A$ is $$\module{P}(A)=\tuple{R\cup \{a.\mid a\in A\}, \emptyset, I\cup O,H}.$$ \end{definition} The module $\module{P}(A)$ is essentially an \system{smodels} program with $I\cup O$ as the set of visible atoms. Thus the stable model semantics of \system{smodels} programs in Definition \ref{smodels-stable} directly generalizes for an instantiated module. \renewcommand{\GLred}[2]{{#1}^{#2}} \begin{definition} \label{def:alter-stable-model} An interpretation $M\subseteq\hb{\module{P}}$ is a stable model of an \system{smodels} program module $\module{P}=\tuple{R,I,O,H}$ if and only if $M=\lm{\GLred{R}{M}\cup\{a.\mid a\in M\cap I\}}.$ \end{definition} \renewcommand{\GLred}[3]{{#1}^{#2,#3}} It is worth emphasizing that Definitions \ref{def:stable-model} and \ref{def:alter-stable-model} result in exactly the same semantics for \system{smodels} program modules. \begin{example} \label{ex:hc2} Recall module $\module{H}^n$ from Example \ref{ex:hc1}. We consider the stable models of $\module{H}^n$ for $n=2$ to see that the rules in $\module{H}^n$ do not alone guarantee that each node is reachable along the edges taken in the cycle candidate. Consider $M=\set{\mathsf{arc}(1,1),\mathsf{arc}(2,2),\mathsf{hc}(1,1), \mathsf{hc}(2,2)}$. The reduct $\GLred{R}{M}{I}$ contains facts $\mathsf{hc}(1,1)$ and $\mathsf{hc}(2,2)$; and rules $c\leftarrow \limit{2}{\mathsf{hc}(1,1),\mathsf{hc}(1,2)}$, $c\leftarrow \limit{2}{\mathsf{hc}(2,1),\mathsf{hc}(2,2)}$, $c\leftarrow \limit{2}{\mathsf{hc}(1,1),\mathsf{hc}(2,1)}$, and $c\leftarrow \limit{2}{\mathsf{hc}(1,2),\mathsf{hc}(2,2)}$; and finally the rule $d\leftarrow c$. Now $M\in\sm{\module{H}^2}$ since $M\setminus I=\lm{\GLred{R}{M}{I}}$.
However, $M$ does not correspond to a graph with a Hamiltonian cycle, as node $2$ is not reachable from node $1$. \hfill $\blacksquare$ \end{example} \subsection{Composing programs from modules} \label{section:composition} The stable model semantics~\cite{GL88:iclp} does not lend itself directly to program composition. The problem is that in general, stable models associated with modules do not determine stable models assigned to their \emph{composition}. Gaifman and Shapiro~\shortcite{GS89} cover positive normal programs under logical consequences. For their purposes, it is sufficient to assume that whenever two modules $\module{P}_1$ and $\module{P}_2$ are put together, their output signatures have to be disjoint and they have to \emph{respect each other's hidden atoms}, that is, $\hbh{\module{P}_1}\cap\hb{\module{P}_2}=\emptyset$ and $\hbh{\module{P}_2}\cap\hb{\module{P}_1}=\emptyset$. \begin{definition} \label{def:program-composition} Given \system{smodels} program modules $\module{P}_1 = \langle R_1,I_1, O_1, H_1\rangle$ and $\module{P}_2=\tuple{R_2,I_2,O_2,H_2}$, their composition is \[ \module{P}_1\oplus\module{P}_2 = \tuple{R_1\cup R_2, (I_1\setminus O_2) \cup (I_2\setminus O_1), O_1\cup O_2, H_1\cup H_2} \] if $\hbo{\module{P}_1}\cap\hbo{\module{P}_2}=\emptyset$ and $\module{P}_1$ and $\module{P}_2$ respect each other's hidden atoms. \end{definition} The following example shows that the conditions given for $\oplus$ are not enough to guarantee compositionality in the case of stable models, and further restrictions on program composition become necessary. \begin{example} \label{ex:gs-composition} Consider normal logic program modules $\module{P}_1=\tuple{\set{a\leftarrow b.},\set{b},\set{a},\emptyset}$ and $\module{P}_2=\tuple{\set{b\leftarrow a.},\set{a},\set{b},\emptyset}$, both of which have stable models $\emptyset$ and $\set{a,b}$ by symmetry.
The \emph{composition} of $\module{P}_1$ and $\module{P}_2$ is $\module{P}_1\oplus\module{P}_2=\tuple{\set{a\leftarrow b.\; b\leftarrow a.},\emptyset,\set{a,b},\emptyset}$ and $\sm{\module{P}_1\oplus\module{P}_2}=\{\emptyset\}$, that is, $\set{a,b}$ is not a stable model of $\module{P}_1\oplus\module{P}_2$. \hfill $\blacksquare$ \end{example} We define the positive dependency graph of an \system{smodels} program module $\module{P}=\tuple{R,I,O,H}$ as $\dep{\module{P}}=\dep{R}$. Given that $\module{P}_1\oplus\module{P}_2$ is defined, we say that $\module{P}_1$ and $\module{P}_2$ are \emph{mutually dependent} if and only if $\dep{\module{P}_1\oplus\module{P}_2}$ has an SCC $S$ such that $S\cap\hbo{\module{P}_1}\neq\emptyset$ and $S\cap\hbo{\module{P}_2}\neq\emptyset$, that is, $S$ is \emph{shared by} $\module{P}_1$ and $\module{P}_2$. \begin{definition} \label{def:join} The \emph{join} $\module{P}_1\sqcup\module{P}_2$ of two \system{smodels} program modules $\module{P}_1$ and $\module{P}_2$ is $\module{P}_1\oplus\module{P}_2$, provided $\module{P}_1\oplus\module{P}_2$ is defined and $\module{P}_1$ and $\module{P}_2$ are not mutually dependent. \end{definition} \begin{example} \label{ex:join-modules} Consider modules $\module{H}^n$ and $\module{R}^n$ from Example \ref{ex:hc1}. Since $\module{H}^n$ and $\module{R}^n$ respect each other's hidden atoms and are not mutually dependent, their join $\module{H}^n\sqcup\module{R}^n=\tuple{R\cup R', I, O\cup O', \set{c,d,e}}$ is defined. \hfill $\blacksquare$ \end{example} The conditions in Definition \ref{def:join} impose no restrictions on positive dependencies {\em inside} modules or on {\em negative} dependencies in general. It is straightforward to show that $\sqcup$ has the following properties: \begin{enumerate} \item[(i)] Identity: $\module{P}\sqcup\tuple{\emptyset, \emptyset, \emptyset,\emptyset} =\tuple{\emptyset,\emptyset, \emptyset,\emptyset}\sqcup \module{P}=\module{P}$ for all modules $\module{P}$. 
\item[(ii)] Commutativity: $\module{P}_1 \sqcup \module{P}_2=\module{P}_2\sqcup \module{P}_1$ for all modules $\module{P}_1$ and $\module{P}_2$ such that $\module{P}_1 \sqcup \module{P}_2$ is defined. \item[(iii)] Associativity: $(\module{P}_1 \sqcup \module{P}_2)\sqcup \module{P}_3 =\module{P}_1\sqcup(\module{P}_2\sqcup \module{P}_3)$ for all modules $\module{P}_1, \module{P}_2$ and $\module{P}_3$ such that all pairwise joins are defined. \end{enumerate} The equality ``$=$'' used above denotes syntactical equality. Also note that $\module{P}\sqcup\module{P}$ is usually undefined, which is a difference with respect to $\cup$, for which it holds that $P\cup P=P$ for all programs $P$. Furthermore, considering the join $\module{P}_1\sqcup\module{P}_2$, since each atom is defined in exactly one module, the sets of rules in $\module{P}_1$ and $\module{P}_2$ are disjoint, that is, $R_1\cap R_2=\emptyset$, and also, $\hb{\module{P}_1\sqcup\module{P}_2} = \hb{\module{P}_1}\cup \hb{\module{P}_2}$, $\hbv{\module{P}_1\sqcup\module{P}_2} = \hbv{\module{P}_1}\cup \hbv{\module{P}_2}$, and $\hbh{\module{P}_1\sqcup\module{P}_2} = \hbh{\module{P}_1}\cup \hbh{\module{P}_2}$. Having the semantics of an individual \system{smodels} program module now defined, we may characterize the properties of the semantics under program composition using the notion of \emph{compatibility}. \begin{definition} \label{def:compatibility} Given \system{smodels} program modules $\module{P}_1$ and $\module{P}_2$ such that $\module{P}_1\oplus\module{P}_2$ is defined, we say that interpretations $M_1\subseteq\hb{\module{P}_1}$ and $M_2\subseteq\hb{\module{P}_2}$ are compatible if and only if $M_1\cap\hbv{\module{P}_2}=M_2\cap\hbv{\module{P}_1}$. \end{definition} We use {\em natural join} $\Join$ to combine compatible interpretations.
\begin{definition} \label{def:natural-join} Given \system{smodels} program modules $\module{P}_1$ and $\module{P}_2$ and sets of interpretations $A_1\subseteq\mathbf{2}^{\hb{\module{P}_1}}$ and $A_2\subseteq\mathbf{2}^{\hb{\module{P}_2}}$, the natural join of $A_1$ and $A_2$, denoted by $A_1\Join A_2$, is $$\sel{M_1\cup M_2} {M_1\in A_1, M_2\in A_2\text{ such that } M_1\text{ and }M_2\text{ are compatible}}.$$ \end{definition} The stable model semantics is compositional for $\sqcup$, that is, if a program (module) consists of several submodules, its stable models are locally stable for the respective submodules; and on the other hand, local stability implies global stability for compatible stable models of the submodules. \begin{theorem}[Module theorem \cite{Oikarinen07:lpnmr}] \label{moduletheorem} If $\module{P}_1$ and $\module{P}_2$ are \system{smodels} program modules such that $\module{P}_1\sqcup\module{P}_2$ is defined, then $$\sm{\module{P}_1\sqcup\module{P}_2}=\sm{\module{P}_1}\Join\sm{\module{P}_2}.$$ \end{theorem} Instead of proving Theorem \ref{moduletheorem} directly from scratch we will propose {\em a general translation-based scheme} for introducing syntactical extensions for the module theorem. For this we need to define a concept of {\em modular equivalence} first, and thus the proof of Theorem \ref{moduletheorem} is deferred until Section \ref{proof-mod-theorem}. It is worth noting that classical propositional theories have an analogous property obtained by substituting $\cup$ for $\sqcup$ and replacing stable models by classical models in Theorem \ref{moduletheorem}, that is, for any \system{smodels} programs $P_1$ and $P_2$, $\cm{P_1\cup P_2}=\cm{P_1}\Join\cm{P_2}$, where $\cm{P}=\{M\subseteq\hb{P}\mid M\models P\}$. \begin{example} Recall modules $\module{H}^n$ and $\module{R}^n$ in Example \ref{ex:hc1}. 
In Example \ref{ex:hc2} we showed that $M=\set{\mathsf{arc}(1,1),\mathsf{arc}(2,2),\mathsf{hc}(1,1), \mathsf{hc}(2,2)}$ is a stable model of $\module{H}^2$. Now module $\module{R}^2$ has six stable models, but none of them is compatible with $M$. Thus by Theorem \ref{moduletheorem} there is no stable model $N$ for $\module{H}^2\sqcup\module{R}^2$ such that $N\cap\hb{\module{H}^2}=M$. The join $\module{H}^n\sqcup\module{R}^n$ can be used to find any graph of $n$ nodes which has a Hamiltonian cycle. For instance $\module{H}^2\sqcup\module{R}^2$ has four stable models: \begin{eqnarray*} \set{\mathsf{arc}(1,2),\mathsf{arc}(2,1),\mathsf{hc}(1,2),\mathsf{hc}(2,1), \mathsf{reached}(1),\mathsf{reached}(2)}\\ \set{\mathsf{arc}(1,1),\mathsf{arc}(1,2),\mathsf{arc}(2,1),\mathsf{hc}(1,2), \mathsf{hc}(2,1),\mathsf{reached}(1),\mathsf{reached}(2)}\\ \set{\mathsf{arc}(1,2),\mathsf{arc}(2,1),\mathsf{arc}(2,2),\mathsf{hc}(1,2), \mathsf{hc}(2,1),\mathsf{reached}(1),\mathsf{reached}(2)}\\ \set{\mathsf{arc}(1,1),\mathsf{arc}(1,2),\mathsf{arc}(2,1),\mathsf{arc}(2,2), \mathsf{hc}(1,2),\mathsf{hc}(2,1),\mathsf{reached}(1),\mathsf{reached}(2)}. \end{eqnarray*} These models represent the four possible graphs of two nodes having a Hamiltonian cycle. \hfill $\blacksquare$ \end{example} Theorem \ref{moduletheorem} straightforwardly generalizes for modules consisting of several submodules. Consider a collection of \system{smodels} program modules $\module{P}_1,\ldots,\module{P}_n$ such that the join $\module{P}_1\sqcup\cdots\sqcup\module{P}_n$ is defined (recall that $\sqcup$ is associative). We say that a collection of interpretations $\{M_1,\ldots,M_n\}$ for modules $\module{P}_1,\ldots,\module{P}_n$, respectively, is {\em compatible}, if and only if $M_i$ and $M_j$ are pairwise compatible for all $1\leq i,j\leq n$. 
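Compatibility (Definition \ref{def:compatibility}) and the natural join (Definition \ref{def:natural-join}) translate directly into code, which makes the model-by-model reading of Theorem \ref{moduletheorem} easy to experiment with. A small sketch (interpretations as Python sets, visible signatures passed in explicitly):

```python
# Illustrative sketch of compatibility and natural join of model sets.

def compatible(m1, m2, visible1, visible2):
    """m1 and m2 agree on each other's visible atoms."""
    return (m1 & visible2) == (m2 & visible1)

def natural_join(models1, models2, visible1, visible2):
    """All unions of compatible pairs of interpretations."""
    return {frozenset(m1 | m2)
            for m1 in models1 for m2 in models2
            if compatible(m1, m2, visible1, visible2)}
```

For instance, for the three one-rule modules $\tuple{\set{a\leftarrow\naf b.},\set{b},\set{a},\emptyset}$, $\tuple{\set{b\leftarrow\naf c.},\set{c},\set{b},\emptyset}$, and $\tuple{\set{c\leftarrow\naf a.},\set{a},\set{c},\emptyset}$ with stable model sets $\{\{a\},\{b\}\}$, $\{\{b\},\{c\}\}$, and $\{\{a\},\{c\}\}$, joining step by step leaves no compatible combination, matching the fact that their join has no stable model.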
The natural join generalizes for a collection of modules as $$A_1\Join \cdots\Join A_n=\sel{M_1\cup \cdots\cup M_n} {M_i\in A_i \text{ and } \{M_1,\ldots,M_n\}\text{ is compatible}},$$ where $A_1\subseteq\mathbf{2}^{\hb{\module{P}_1}},\ldots, A_n\subseteq\mathbf{2}^{\hb{\module{P}_n}}$. \begin{corollary} \label{general-moduletheorem} For a collection of \system{smodels} program modules $\module{P}_1,\ldots,\module{P}_n$ such that the join $\module{P}_1\sqcup\cdots\sqcup \module{P}_n$ is defined, it holds that $$\sm{\module{P}_1\sqcup\cdots\sqcup \module{P}_n}=\sm{\module{P}_1}\Join \cdots\Join \sm{\module{P}_n}.$$ \end{corollary} Although Corollary \ref{general-moduletheorem} enables the computation of stable models on a module-by-module basis, it leaves us with the task of excluding mutually incompatible combinations of stable models. It should be noted that applying the module theorem naively, by first computing the stable models of each submodule and only then searching for compatible combinations, might not be preferable. \begin{example} \label{ex-cor-mod-thr} Consider \system{smodels} program modules \begin{eqnarray*} \module{P}_1 &=& \tuple{\{a\leftarrow\naf b.\},\{b\},\{a\},\emptyset},\\ \module{P}_2 &=& \tuple{\{b\leftarrow\naf c.\},\{c\},\{b\},\emptyset}, \mbox{ and}\\ \module{P}_3 &=& \tuple{\{c\leftarrow\naf a.\},\{a\},\{c\}, \emptyset}, \end{eqnarray*} and their join $\module{P}=\module{P}_1\sqcup\module{P}_2\sqcup\module{P}_3 =\tuple{\{a\leftarrow\naf b.\; b\leftarrow\naf c.\; c\leftarrow\naf a.\},\emptyset, \{a,b,c\},\emptyset}.$ We have $\sm{\module{P}_1}=\{\{a\},\{b\}\}$, $\sm{\module{P}_2}=\{\{b\},\{c\}\}$, and $\sm{\module{P}_3}=\{\{a\},\{c\}\}$. To apply Corollary \ref{general-moduletheorem} for finding $\sm{\module{P}}$, a naive approach is to compute all stable models of all the modules and try to find a compatible triple of stable models $M_1$, $M_2$, and $M_3$ for $\module{P}_1$, $\module{P}_2$, and $\module{P}_3$, respectively.
\begin{itemize} \item Now $\{a\}\in\sm{\module{P}_1}$ and $\{c\}\in\sm{\module{P}_2}$ are compatible, since $\{a\}\cap\hbv{\module{P}_2}=\emptyset=\{c\}\cap\hbv{\module{P}_1}$. However, $\{a\}\in\sm{\module{P}_3}$ is not compatible with $\{c\}\in\sm{\module{P}_2}$, since $\{c\}\cap\hbv{\module{P}_3}= \{c\}\ne\emptyset=\{a\}\cap \hbv{\module{P}_2}$. On the other hand, $\{c\}\in\sm{\module{P}_3}$ is not compatible with $\{a\}\in\sm{\module{P}_1}$, since $\{a\}\cap\hbv{\module{P}_3}= \{a\}\ne\emptyset=\{c\}\cap \hbv{\module{P}_1}$. \item Also $\{b\}\in\sm{\module{P}_1}$ and $\{b\}\in\sm{\module{P}_2}$ are compatible, but $\{b\}\in\sm{\module{P}_1}$ is incompatible with $\{a\}\in\sm{\module{P}_3}$. Nor is $\{b\}\in\sm{\module{P}_2}$ compatible with $\{c\}\in\sm{\module{P}_3}$. \end{itemize} Thus there are no $M_1\in\sm{\module{P}_1}$, $M_2\in\sm{\module{P}_2}$, and $M_3\in\sm{\module{P}_3}$ such that $\{M_1,M_2,M_3\}$ is compatible, which is natural as $\sm{\module{P}}=\emptyset$. \hfill $\blacksquare$ \end{example} It is not necessary to test all combinations of stable models to see whether we have a compatible triple. Instead, we use the alternative definition of stable models (Definition \ref{def:alter-stable-model}) based on instantiating the module with respect to an input interpretation, and apply the module theorem similarly to the splitting-set theorem. One should notice that the set of rules in $\module{P}$ presented in Example \ref{ex-cor-mod-thr} has no non-trivial splitting sets, and thus the splitting-set theorem is not applicable (in a non-trivial way) in this case. \begin{example} \label{ex-cor-mod-thr-2} Consider \system{smodels} program modules $\module{P}_1$, $\module{P}_2$, and $\module{P}_3$ from Example \ref{ex-cor-mod-thr}. Now, $\module{P}_1$ has two stable models $M_1=\{a\}$ and $M_2=\{b\}$. \begin{itemize} \item The set $M_1\cap\hbi{\module{P}_3}=\{a\}=A_1$ can be seen as an input interpretation for $\module{P}_3$. 
Module $\module{P}_3$ instantiated with $A_1$ has one stable model: $\sm{\module{P}_3(A_1)}=\set{\set{a}}$. Furthermore, we can use $A_2=\set{a}\cap\hbi{\module{P}_2}=\emptyset$ to instantiate $\module{P}_2$: $\sm{\module{P}_2(A_2)}=\set{\set{b}}$. However, $\set{b}$ is not compatible with $M_1$, and thus there is no way to find a compatible collection of stable models for the modules starting from $M_1$. \item We instantiate $\module{P}_3$ with $M_2\cap\hbi{\module{P}_3}=\emptyset=A_3$ and get $\sm{\module{P}_3(A_3)}=\set{\set{c}}$. Continuing with $\set{c}\cap\hbi{\module{P}_2}=\set{c}=A_4$, we get $\sm{\module{P}_2(A_4)}=\set{\set{c}}$. Again, we notice that $\set{c}$ is not compatible with $M_2$, and thus it is not possible to find a compatible triple of stable models starting from $M_2$ either. \end{itemize} Thus we can conclude $\sm{\module{P}}=\emptyset$. \hfill $\blacksquare$ \end{example} \subsection{Equivalence relations for modules} \label{sect:mod-eq} The notion of \emph{visible equivalence} \cite{Janhunen06:jancl} was introduced in order to neglect hidden atoms when logic programs or other theories of interest are compared on the basis of their models. The compositionality property from Theorem \ref{moduletheorem} enables us to bring the same idea to the level of program modules---giving rise to \emph{modular equivalence} of logic programs. Visible and modular equivalence are formulated for \system{smodels} program modules as follows. \begin{definition} \label{smodels-mod-eq} For two \system{smodels} program modules $\module{P}$ and $\module{Q}$, \begin{itemize} \item $\module{P}\lpeq{v}\module{Q}$ if and only if $\hbv{\module{P}}=\hbv{\module{Q}}$ and there is a bijection $f\fun{\sm{\module{P}}}{\sm{\module{Q}}}$ such that for all $M\in\sm{\module{P}}$, $$M\cap\hbv{\module{P}}=f(M)\cap\hbv{\module{Q}};\mbox{ and}$$ \item $\module{P}\lpeq{m}\module{Q}$ if and only if $\hbi{\module{P}}=\hbi{\module{Q}}$ and $\module{P}\lpeq{v}\module{Q}$. 
\end{itemize} \end{definition} We note that the condition $\hbv{\module{P}}=\hbv{\module{Q}}$ insisted on in the definition of $\lpeq{v}$ implies $\hbo{\module{P}}=\hbo{\module{Q}}$ in the presence of $\hbi{\module{P}}=\hbi{\module{Q}}$ as required by the relation~$\lpeq{m}$. Moreover, these relations coincide for completely specified \system{smodels} programs, that is, modules~$\module{P}$ with $\hbi{\module{P}}=\emptyset$. Modular equivalence lends itself to program substitutions in analogy to \emph{strong equivalence} \cite{LPV01:acmtocl}, that is, the relation $\lpeq{m}$ is a proper \emph{congruence} for the join operator $\sqcup$. \begin{theorem}[Congruence] \label{smodels-congruence} Let $\module{P},\module{Q}$ and $\module{R}$ be \system{smodels} program modules such that $\module{P}\sqcup\module{R}$ and $\module{Q}\sqcup \module{R}$ are defined. If $\module{P}\lpeq{m}\module{Q}$, then $\module{P}\sqcup\module{R}\lpeq{m}\module{Q}\sqcup\module{R}$. \end{theorem} The proof of Theorem \ref{smodels-congruence} is given in \ref{proofs}. The following examples illustrate the use of modular equivalence in practice. \begin{example} Recall programs $P=\{a.\}$ and $Q=\{a\leftarrow\naf b.\; a\leftarrow b.\}$ from Example \ref{uni-not-strong}. We can define modules based on them: $\module{P}=\tuple{P, \{b\},\{a\}, \emptyset}$ and $\module{Q}=\tuple{Q, \{b\},\{a\},\emptyset}$. Now it is impossible to define a module $\module{R}$ based on $R=\{b\leftarrow a.\}$ in such a way that $\module{Q}\sqcup\module{R}$ would be defined. Moreover, it holds that $\module{P}\lpeq{m}\module{Q}$. \hfill $\blacksquare$ \end{example} \begin{example} Module $\module{HR}^n=\tuple{R'', I'', O'',\set{f}}$ is based on an alternative encoding of the Hamiltonian cycle problem given in~\cite{SNS02:aij}.
In contrast to the encoding described in Example \ref{ex:hc1}, this encoding does not allow us to separate the selection of the edges to the cycle and the checking of reached vertices into separate modules as their definitions are mutually dependent. The input signature of $\module{HR}^n$ is the same as for $\module{H}^n$, that is, $I''=I=\set{\mathsf{arc}(x,y)\mid 1\leq x,y\leq n}$. The output signature of $\module{HR}^n$ is the output signature of $\module{H}^n\sqcup\module{R}^n$, that is, $$O''=O\cup O'=\set{\mathsf{hc}(x,y)\mid 1\leq x,y\leq n}\cup \set{\mathsf{reached}(x)\mid 1\leq x\leq n}.$$ The set $R''$ contains rules \begin{eqnarray} \choice{\mathsf{hc}(1,x)}&\leftarrow& \mathsf{arc}(1,x) \nonumber \\ \choice{\mathsf{hc}(x,y)}&\leftarrow &\mathsf{reached}(x),\mathsf{arc}(x,y) \nonumber\\ \mathsf{reached}(y) &\leftarrow& \mathsf{hc}(x,y)\nonumber\\ f&\leftarrow& \naf f, \naf\mathsf{reached}(x) \nonumber\\ f&\leftarrow& \naf f, \mathsf{hc}(x,y), \mathsf{hc}(x,z) \mbox{ and} \label{y-ne-z}\\ f&\leftarrow& \naf f, \mathsf{hc}(x,y), \mathsf{hc}(z,y) \label{x-ne-z} \end{eqnarray} for each $1\leq x,y,z\leq n$ such that $y\ne z$ in (\ref{y-ne-z}) and $x\ne z$ in (\ref{x-ne-z}). Now, one may notice that $\module{HR}^n$ and $\module{H}^n\sqcup\module{R}^n$ have the same input/output interface, and $\sm{\module{HR}^n}=\sm{\module{H}^n\sqcup\module{R}^n}$ which implies $\module{HR}^n\lpeq{m}\module{H}^n\sqcup\module{R}^n$. \hfill $\blacksquare$ \end{example} As regards the relationship between modular equivalence and previously proposed notions of equivalence, we note the following. First, if one considers the {\em fully visible case}, that is, the restriction $\hbh{\module{P}}=\hbh{\module{Q}}=\emptyset$, modular equivalence can be seen as a special case of $A$-uniform equivalence for $A=I$. Recall, however, the restriction that input atoms may not appear in the heads of the rules as imposed by module structure. 
With a further restriction $\hbi{\module{P}}=\hbi{\module{Q}} =\emptyset$, modular equivalence basically coincides with weak equivalence because $\hb{\module{P}}=\hb{\module{Q}}$ can always be satisfied by extending the interface of the module. Setting $\hbi{\module{P}}=\hb{\module{P}}$ would in principle give us uniform equivalence, but the additional condition $\head{R}\cap I=\emptyset$ leaves room for the empty module only. In the general case with hidden atoms, the problem of verifying $\lpeq{m}$ for \system{smodels} program modules can be reduced to verifying $\lpeq{v}$ for \system{smodels} programs. This is achieved by introducing a special module $\module{G}_I$ containing a single choice rule, which acts as a context generator in analogy to \cite{Woltran04:jelia}. We say that two modules $\module{P}$ and $\module{Q}$ are {\em compatible} if they have the same input/output interface, that is, $\hbi{\module{P}}=\hbi{\module{Q}}$ and $\hbo{\module{P}}=\hbo{\module{Q}}$. \begin{lemma} \label{reduce-modular-to-visible} Consider compatible \system{smodels} program modules $\module{P}$ and $\module{Q}$. Now $\module{P}\lpeq{m}\module{Q}$ if and only if $\module{P}\sqcup\module{G}_I\lpeq{v}\module{Q}\sqcup\module{G}_I$ where $I=\hbi{\module{P}}=\hbi{\module{Q}}$ and $\module{G}_I=\tuple{\{\choice{I}\leftarrow\},\emptyset, I, \emptyset}$ generates all possible input interpretations for $\module{P}$ and $\module{Q}$. \end{lemma} \begin{proof} Notice that $\module{P}\sqcup\module{G}_I$ and $\module{Q}\sqcup \module{G}_I$ are \system{smodels} program modules with empty input signatures, and thus they can also be viewed as \system{smodels} programs. ($\implies$) Assume $\module{P}\lpeq{m}\module{Q}$. Since $\module{P}\sqcup\module{G}_I$ and $\module{Q}\sqcup\module{G}_I$ are defined, $\module{P}\sqcup\module{G}_I\lpeq{m}\module{Q}\sqcup\module{G}_I$ by Theorem \ref{smodels-congruence}. This implies $\module{P}\sqcup\module{G}_I\lpeq{v}\module{Q}\sqcup\module{G}_I$. 
($\impliedby$) Assume $\module{P}\sqcup\module{G}_I\lpeq{v}\module{Q}\sqcup\module{G}_I$, that is, $\hbv{\module{P}}=\hbv{\module{Q}}$ and there is a bijection $f: \sm{\module{P}\sqcup\module{G}_I}\rightarrow\sm{\module{Q}\sqcup\module{G}_I}$ such that for each $M\in\sm{\module{P}\sqcup\module{G}_I}$, $M\cap\hbv{\module{P}}=f(M)\cap\hbv{\module{Q}}$. By Theorem~\ref{moduletheorem}, $\sm{\module{P}\sqcup\module{G}_I}=\sm{\module{P}}\Join\sm{\module{G}_I}$ and $\sm{\module{Q}\sqcup\module{G}_I}=\sm{\module{Q}}\Join\sm{\module{G}_I}$. Now, $\sm{\module{G}_I}=\mathbf{2}^{I}$, and thus $\sm{\module{P}\sqcup\module{G}_I}= \sm{\module{P}}$ and $\sm{\module{Q}\sqcup\module{G}_I}=\sm{\module{Q}}$. This implies $\module{P}\lpeq{v}\module{Q}$, and furthermore $\module{P}\lpeq{m}\module{Q}$ since $\module{P}$ and $\module{Q}$ are compatible \system{smodels} program modules. \end{proof} Due to the close relationship of $\lpeq{v}$ and $\lpeq{m}$, the respective verification problems have the same computational complexity. As already observed in \cite{JO07:tplp}, the verification of $\module{P}\lpeq{v}\module{Q}$ involves a \emph{counting problem} in general and, in particular, if $\hbv{\module{P}}=\hbv{\module{Q}}=\emptyset$. In this special setting $\module{P}\lpeq{v}\module{Q}$ holds if and only if $|\sm{\module{P}}|=|\sm{\module{Q}}|$, that is, the numbers of stable models for $\module{P}$ and $\module{Q}$ coincide. A reduction of computational time complexity can be achieved for modules that have \emph{enough visible atoms}, that is, the EVA property. Basically, we say that module $\module{P}=\tuple{R,I,O,H}$ has enough visible atoms, if and only if $R$ has enough visible atoms with respect to $\hbv{P}=I\cup O$. However, the property of having enough visible atoms can be elegantly stated using modules. 
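In the fully hidden special case just discussed, verifying $\lpeq{v}$ reduces to comparing model counts; in general, the bijection requirement amounts to comparing the multisets of visible projections of the stable models. The following Python sketch illustrates this (stable model sets are assumed to be given explicitly as sets of frozensets of atom names; this is an illustration, not part of any actual verifier):

```python
from collections import Counter

def visible_projections(stable_models, visible):
    """Multiset of stable models projected onto the visible atoms."""
    return Counter(frozenset(m & visible) for m in stable_models)

def visibly_equivalent(sm_p, vis_p, sm_q, vis_q):
    """P =_v Q: same visible signature and a projection-preserving
    bijection between the stable model sets, i.e. equal multisets
    of visible projections."""
    return (vis_p == vis_q and
            visible_projections(sm_p, vis_p) == visible_projections(sm_q, vis_q))
```

With empty visible signatures every projection collapses to $\emptyset$, so the comparison degenerates to counting stable models, matching the observation above.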
We define the \emph{hidden part} of a module $\module{P}=\tuple{R,I,O,H}$ as $\hid{\module{P}}=\tuple{\hid{R},I\cup O,H,\emptyset}$ where $\hid{R}$ contains all rules of $R$ involving atoms of $H$ in their heads. For a choice rule $\choice{A}\leftarrow B,\naf C\in R$, we take the projection $\choice{A\cap H}\leftarrow B,\naf C$ in $\hid{R}$. \begin{definition}[The EVA property \cite{JO07:tplp}] \label{EVA-modules} \ \\ An \system{smodels} program module $\module{P}=\tuple{R,I,O,H}$ has enough visible atoms if and only if the hidden part $\hid{\module{P}}=\tuple{\hid{R},I\cup O,H,\emptyset}$ has a unique stable model $M$ for each interpretation $N\subseteq\hbv{\module{P}}=I\cup O$ such that $M\cap(I\cup O)=N$. \end{definition} Verifying the EVA property is $\mathbf{coNP}$-hard and in $\Pi^\mathbf{P}_2$ for \system{smodels} programs~\cite[Proposition 4.14]{JO07:tplp}, and thus for \system{smodels} program modules, too. It is always possible to enforce the EVA property by uncovering sufficiently many hidden atoms: a module $\module{P}$ for which $\hbh{\module{P}}=\emptyset$ clearly has enough visible atoms because $\hid{\module{P}}$ has no rules. It is also important to realize that choice rules involving hidden atoms in their heads are likely to break the EVA property---unless additional constraints are introduced to exclude multiple models created by choices. Based on these observations, we can conclude that verifying the modular equivalence of modules with the EVA property is a $\mathbf{coNP}$-complete decision problem. Motivated by the complexity result and by previous proposals for translating various equivalence verification problems into the problem of computing stable models (see \cite{JO02:jelia,Turner03:tplp,Woltran04:jelia} for instance), we recently introduced a translation-based method for verifying modular equivalence~\cite{OJ08:jlc}.
In the following theorem, $\mathrm{EQT}(\cdot,\cdot)$ is the linear translation function mapping two \system{smodels} program modules into one \system{smodels} program module presented in~\cite[Definition~10]{OJ08:jlc}. \begin{theorem} {\bf (\cite[Theorem 4]{OJ08:jlc})} \label{eq-test-with-context} Let $\module{P}$ and $\module{Q}$ be compatible \system{smodels} program modules with the EVA property, and $\module{C}$ any \system{smodels} program module such that $\module{P}\sqcup\module{C}$ and $\module{Q}\sqcup\module{C}$ are defined. Then $\module{P}\sqcup\module{C}\lpeq{m}\module{Q}\sqcup\module{C}$ if and only if $\sm{\mathrm{EQT}(\module{P},\module{Q})\sqcup\module{C}}= \sm{\mathrm{EQT}(\module{Q},\module{P})\sqcup\module{C}}=\emptyset$. \end{theorem} \subsection{Proving the module theorem using a general translation-based extension scheme} \label{proof-mod-theorem} Let us now proceed to the proof of the module theorem. We describe the overall strategy in this section, whereas detailed proofs for the theorems are provided in \ref{proofs}. Instead of proving Theorem \ref{moduletheorem} from scratch, we first show that the theorem holds for normal logic program modules, and then present a general scheme that enables us to derive extensions of the module theorem syntactically in terms of translations. We start by stating the module theorem for normal logic program modules. \begin{theorem}[\cite{OJ06:ecai}] \label{theorem:nlp-moduletheorem} If $\module{P}_1$ and $\module{P}_2$ are normal logic program modules such that $\module{P}_1\sqcup\module{P}_2$ is defined, then $$\sm{\module{P}_1\sqcup\module{P}_2}=\sm{\module{P}_1}\Join\sm{ \module{P}_2}.$$ \end{theorem} The proof of Theorem \ref{theorem:nlp-moduletheorem} is given in \ref{proofs}. The next definition states the conditions that a translation function must satisfy in order to yield syntactic extensions of the module theorem.
Intuitively, the conditions serve the following purposes: first, the translation has to be {\em strongly faithful}, that is, it preserves the roles of all atoms in the original module; second, it is {\em $\sqcup$-preserving}, that is, possible compositions of modules are not limited by the translation; and third, the translation is {\em modular}. For convenience, we define an operator $\reveal{\module{P},A}=\tuple{R, I, O\cup A, H\setminus A}$ for any program module $\module{P}=\tuple{R, I, O, H}$ and for any set of atoms $A\subseteq H$. The revealing operator is used to make a set of hidden atoms of a module visible to other modules. \begin{definition} \label{def:conditions-for-translation} Let $\mathcal{C}_1$ and $\mathcal{C}_2$ be two classes of logic program modules such that $\mathcal{C}_2\subseteq\mathcal{C}_1$. A translation function $\trop{}: \mathcal{C}_1\rightarrow \mathcal{C}_2$ is {\em strongly faithful, modular and $\sqcup$-preserving}, if the following hold for any program modules $\module{P},\module{Q}\in\mathcal{C}_1$: \begin{enumerate} \item $\reveal{\module{P},\hbh{\module{P}}}\lpeq{m}\reveal{\tr{}{\module{P}}, \hbh{\module{P}}}$; \item if $\module{P}\sqcup\module{Q}$ is defined, then $\tr{}{\module{P}}\sqcup\tr{}{\module{Q}}$ is defined; and \item $\tr{}{\module{P}}\sqcup\tr{}{\module{Q}} = \tr{}{\module{P}\sqcup\module{Q}}$. \end{enumerate} \end{definition} Notice that the condition for strong faithfulness requires $\hbi{\tr{}{\module{P}}}=\hbi{\module{P}}$, $\hbo{\tr{}{\module{P}}}\cup\hbh{\module{P}}=\hbo{\module{P}} \cup\hbh{\module{P}}$, and $\hbh{\module{P}}\subseteq\hbh{\tr{}{\module{P}}}$ to hold. Moreover, strong faithfulness implies {\em faithfulness}, that is, $\module{P}\lpeq{m}\tr{}{\module{P}}$.
\begin{theorem} \label{theorem:modulethr-translation} Let $\mathcal{C}_1$ and $\mathcal{C}_2$ be two classes of logic program modules such that $\mathcal{C}_2\subseteq\mathcal{C}_1$ and there is a translation function $\trop{}\!: \mathcal{C}_1\rightarrow \mathcal{C}_2$ that is strongly faithful, $\sqcup$-preserving, and modular as given in Definition \ref{def:conditions-for-translation}. If the module theorem holds for modules in $\mathcal{C}_2$, then it holds for modules in $\mathcal{C}_1$. \end{theorem} The proof of Theorem~\ref{theorem:modulethr-translation} is provided in \ref{proofs}. As regards the translation from \system{smodels} program modules to NLP modules, it suffices, for example, to take a natural translation similarly to~\cite{SNS02:aij}. Note that the translation presented in Definition \ref{smodels2basic} is in the worst case exponential with respect to the number of rules in the original module. For a more compact translation, see \cite{FL05:tplp}, for example. \begin{definition} \label{smodels2basic} Given an \system{smodels} program module $\module{P}=\tuple{R,I,O,H}$, its translation into a normal logic program module is $\tr{NLP}{\module{P}}=\tuple{R', I, O, H\cup H'}$, where $R'$ contains the following rules: \begin{itemize} \item for each choice rule $\{A\}\leftarrow B, \naf C\in R$ the set of rules $$\{a\leftarrow B, \naf C,\naf \overline{a}.\;\; \overline{a}\leftarrow \naf a\mid a\in A\};$$ \item for each weight rule $a\leftarrow\limit{w}{B=W_{B},\naf C=W_{C}}\in R$ the set of rules $$\{a\leftarrow B', \naf C'\mid B'\subseteq B, C'\subseteq C \mbox{ and }w\leq \sum_{b\in B'}w_b + \sum_{c\in C'}w_c\},$$ \end{itemize} where each $\overline{a}$ is a new atom not appearing in $\hb{\module{P}}$ and $H'=\{\overline{a}\mid a\in\choiceheads{R}\}$. 
\end{definition} \begin{theorem} \label{theorem:translation-smodels2nlp} The translation $\trop{NLP}$ from \system{smodels} program modules to normal logic program modules given in Definition \ref{smodels2basic} is strongly faithful, $\sqcup$-preserving, and modular. \end{theorem} The proof of Theorem \ref{theorem:translation-smodels2nlp} is given in \ref{proofs}. The module theorem now directly follows from Theorems \ref{theorem:nlp-moduletheorem}, \ref{theorem:modulethr-translation}, and \ref{theorem:translation-smodels2nlp}. \begin{proof}[Proof of Theorem \ref{moduletheorem}] By Theorem \ref{theorem:nlp-moduletheorem} we know that the module theorem holds for normal logic program modules. Theorem \ref{theorem:modulethr-translation} shows that Definition \ref{def:conditions-for-translation} gives the conditions under which Theorem \ref{theorem:nlp-moduletheorem} can be directly generalized for a larger class of logic program modules. By Theorem \ref{theorem:translation-smodels2nlp} we know that the translation $\trop{NLP}$ from \system{smodels} program modules to NLP modules introduced in Definition \ref{smodels2basic} satisfies the conditions given in Definition \ref{def:conditions-for-translation}, and therefore \system{smodels} program modules are covered by the module theorem. \end{proof} \subsection{Comparison with earlier approaches} \label{sect:compare-module-system} Our module system resembles the module system proposed in~\cite{GS89}. However, to make our system compatible with the stable model semantics, we need to introduce a further restriction on mutual dependencies, that is, we need to deny positive recursion between modules. Other proposals also involve similar conditions for module composition. For example, Brogi et al.~\shortcite{BMPT94} employ visibility conditions that correspond to respecting hidden atoms. However, their approach covers only positive programs under the least model semantics.
Maher~\shortcite{Maher93} forbids all recursion between modules and considers Przymusinski's {\em perfect models} \cite{Przymusinski88} rather than stable models. Etalle and Gabbrielli \shortcite{EG96:tcs} restrict the composition of {\em constraint logic program}~\cite{JM94} modules with a condition that is close to ours: $\hb{P}\cap\hb{Q}\subseteq\hbv{P}\cap\hbv{Q}$, but no distinction between input and output is made, for example, $\hbo{P}\cap\hbo{Q}\neq\emptyset$ is allowed according to their definitions. Approaches to modularity within ASP typically do not allow any recursion (negative or positive) between modules \cite{EGM97:acm,TBA05:asp,LT94:iclp,GG99}. Theorem \ref{moduletheorem}, the module theorem, is strictly stronger than the splitting-set theorem~\cite{LT94:iclp} for normal logic programs, and the general case allows us to generalize the splitting-set theorem for \system{smodels} programs. Consider first the case of normal logic programs. A {\em splitting} of a program can be used as a basis for a module structure. If $U$ is a splitting set for a normal logic program $P$, then we can define $$P=\module{B}\sqcup\module{T}=\tuple{\bottom{P}{U},\emptyset, U,\emptyset}\sqcup\tuple{\topp{P}{U}, U, \hb{P}\setminus U,\emptyset}.$$ It follows directly from Theorems \ref{thr:splitting-set} and \ref{moduletheorem} that $M_1\in\sm{\module{B}}$ and $M_2\in\sm{\module{T}}$ are compatible if and only if $\langle M_1,M_2\setminus U\rangle$ is a solution for $P$ with respect to $U$. \begin{example} \label{modulethr-vs-splitting} Consider a normal logic program $P=\{a\leftarrow\naf b.\; b\leftarrow \naf a.\; c\leftarrow a.\}.$ The set $U=\{a,b\}$ is a splitting set for $P$, and therefore the splitting-set theorem (Theorem \ref{thr:splitting-set}) can be applied: $\bottom{P}{U}=\{a\leftarrow\naf b.\; b\leftarrow \naf a.\}$ and $\topp{P}{U}=\{c\leftarrow a.\}$.
Now $M_1=\{a\}$ and $M_2=\{b\}$ are the stable models of $\bottom{P}{U}$, and we can evaluate the top with respect to $M_1$ and $M_2$, resulting in solutions $\pair{M_1}{\{c\}}$ and $\pair{M_2}{\emptyset}$, respectively. On the other hand, $P$ can be seen as join of modules $\module{P}_1=\tuple{\bottom{P}{U}, \emptyset, U,\emptyset}$ and $\module{P}_2=\tuple{\topp{P}{U}, U,\{c\},\emptyset}$. Now, we have $\sm{\module{P}_1}=\{M_1,M_2\}$ and $\sm{\module{P}_2}=\{\emptyset,\{b\},\{a,c\},\{a,b,c\}\}$. Out of eight possible pairs only $\pair{M_1}{\{a,c\}}$ and $\pair{M_2}{\{b\}}$ are compatible. However, it is possible to apply Theorem \ref{moduletheorem} similarly to the splitting-set theorem, that is, we only need to compute the stable models of $\module{P}_2$ compatible with the stable models of $\module{P}_1$. Notice that when the splitting-set theorem is applicable, the stable models of $\module{P}_1$ fully define the possible input interpretations for $\module{P}_2$. This leaves us with stable models $\{a,c\}$ and $\{b\}$ for the composition. \hfill $\blacksquare$ \end{example} On the other hand, consider the module $\module{P}_1=\tuple{\bottom{P}{U}, \emptyset, U,\emptyset}$ in the above example. There are no non-trivial splitting sets for the bottom program $\bottom{P}{U}=\{a\leftarrow \naf b.\;b\leftarrow \naf a.\}$. However, $\module{P}_1$ can be viewed as the join of two NLP modules $\module{Q}_1 = \tuple{\{a\leftarrow \naf b.\}, \{b\}, \{a\},\emptyset}$, and $\module{Q}_2 = \tuple{ \{b\leftarrow \naf a.\}, \{a\}, \{b\}, \emptyset}$ to which the module theorem is applicable. In the general case of \system{smodels} program modules we can use the module theorem to generalize the splitting-set theorem for \system{smodels} programs. Then the bottom module acts as an input generator for the top module, and one can simply find the stable models for the top module instantiated with the stable models of the bottom module. 
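To illustrate, the instantiation strategy can be mimicked for normal rules in a few lines of Python. The encodings are ad hoc (a rule is a triple of a head atom, a positive body, and a negative body; a module is a triple of rules, input atoms, and output atoms, with hidden atoms omitted), and the enumeration of candidate interpretations is exponential, so this is a brute-force sketch rather than a solver:

```python
from itertools import combinations

def least_model(definite_rules):
    """Least model of a definite program via naive fixpoint iteration."""
    m, changed = set(), True
    while changed:
        changed = False
        for head, body in definite_rules:
            if body <= m and head not in m:
                m.add(head)
                changed = True
    return m

def stable_models(rules, atoms):
    """All stable models, checking every candidate interpretation."""
    found = set()
    for r in range(len(atoms) + 1):
        for cand in combinations(sorted(atoms), r):
            m = set(cand)
            # Gelfond-Lifschitz reduct of the program w.r.t. m
            reduct = [(h, pb) for h, pb, nb in rules if not (nb & m)]
            if least_model(reduct) == m:
                found.add(frozenset(m))
    return found

def instantiated_models(module, inp):
    """Stable models of a module instantiated with input interpretation inp."""
    rules, inputs, outputs = module
    facts = [(a, set(), set()) for a in inp]
    return stable_models(rules + facts, inputs | outputs)

# Modules P2 and P3 from Example \ref{ex-cor-mod-thr}
P2 = ([('b', set(), {'c'})], {'c'}, {'b'})
P3 = ([('c', set(), {'a'})], {'a'}, {'c'})
```

For instance, instantiating $\module{P}_2$ and $\module{P}_3$ above reproduces the stable models computed in Example \ref{ex-cor-mod-thr-2}: $\sm{\module{P}_3(\set{a})}=\set{\set{a}}$ and $\sm{\module{P}_2(\emptyset)}=\set{\set{b}}$.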
The latter strategy used in Example \ref{modulethr-vs-splitting} works even if there is negative recursion between the modules, as already shown in Example \ref{ex-cor-mod-thr-2}. The module theorem strengthens an earlier version given in \cite{Janhunen06:jancl} to cover programs that involve positive body literals, too. The independent sets proposed by Faber et al.~\shortcite{FGL05:icdt} push negative recursion inside modules, which is unnecessary in view of our results. Their version of the module theorem is also weaker than Theorem \ref{moduletheorem}. The approach to modularity based on lp-functions~\cite{GG99,Baral:knowledge} has features similar to our approach. The components presented by lp-functions have an input/output interface and a {\em domain} reflecting the possible input interpretations. The functional specification requires an lp-function to have a {\em consistent answer set} for any interpretation in its domain. This is something that is not required in our module system. Lp-functions are flexible in the sense that there are several operators for refining them. However, the composition operator for lp-functions allows only incremental compositions, which again basically reflects the splitting-set theorem. \section{More on program (de)composition} \label{section:decomposition-and-semantical-join} So far we have established a module architecture for the class of \system{smodels} programs, in which modules interact through an input/output interface and the stable model semantics is fully compatible with the architecture. In this section we investigate further ways to understand the internal structure of logic programs by seeing them as compositions of logic program modules. First, we use the conditions for module composition to introduce a method for decomposing an \system{smodels} program into modules. More detailed knowledge of the internal structure of a program (or a module) might reveal ways to improve the search for stable models.
Another application can be found in modularization of the translation-based equivalence verification method in~\cite{OJ08:jlc}. Second, we consider possibilities of relaxing the conditions for module composition, that is, whether it is possible to allow positive recursion between modules in certain cases. \subsection{Finding a program decomposition} \label{section:decomposition} Recall that any \system{smodels} program $P$ can be viewed as a module $\tuple{P,\emptyset, \hb{P}, \emptyset}$, and thus we consider here a more general case of finding a module decomposition for an arbitrary \system{smodels} program module $\module{P}=\tuple{R,I,O,H}$. The first step is to exploit the strongly connected components $D_1,\ldots,D_n$ of $\dep{\module{P}}$ and define submodules~$\module{P}_i$ by grouping the rules so that for each $D_i$ all the rules $r\in R$ such that $\head{r}\subseteq D_i$ are put into one submodule. Now, the question is whether $\module{P}_i$'s defined this way would form a valid decomposition of $\module{P}$ into submodules. First notice that input atoms form a special case because $\head{R}\cap I=\emptyset$. Each $a\in I$ ends up in its own strongly connected component and there are no rules to include into a submodule corresponding to strongly connected component $\{a\}$. Thus it is actually unnecessary to include a submodule based on such a component. Obviously, each weight rule in $R$ goes into exactly one of the submodules. One should notice that for a choice rule $r\in R$ it can happen that $\head{r}\cap D_i\ne\emptyset$ and $\head{r}\cap D_j\ne\emptyset$ for $i\ne j$. 
This is not a problem, since it is always possible to {\em split a choice rule} by projecting the head, that is, by replacing a choice rule of the form $\choice{A}\leftarrow B,\naf C$ with choice rules $\choice{A\cap D_i}\leftarrow B,\naf C$ for each SCC $D_i$ such that $A\cap D_i\ne\emptyset$.\footnote{ Note that in the case of disjunctive logic programs, splitting a rule into two modules is more involved, see~\cite{JOTW07:lpnmr} for a discussion on a general shifting principle.} Based on the discussion above, we define the {\em set of rules defining a set of atoms} for an \system{smodels} program module. \begin{definition} \label{rule-set-defining} Given an \system{smodels} program module $\module{P}=\tuple{R,I,O,H}$ and a set of atoms $D\subseteq\hb{\module{P}}\setminus I$, the set of rules defining $D$, denoted by $R[D]$, contains the following rules: \begin{itemize} \item a choice rule $\choice{A\cap D}\leftarrow B,\naf C$ if and only if there is a choice rule~$\choice{A}\leftarrow B,\naf C$ in $R$ such that $A\cap D\ne\emptyset$; and \item a weight rule $a\leftarrow\limit{w}{B=W_B,\naf C=W_C}$ if and only if there is a weight rule~$a\leftarrow\limit{w}{B=W_B,\naf C=W_C}$ in $R$ such that $a\in D$. \end{itemize} \end{definition} We continue by defining a submodule of $\module{P}=\tuple{R,I,O,H}$ induced by a set of atoms $D\subseteq\hb{\module{P}}\setminus I$. We use Definition \ref{rule-set-defining} for the set of rules, and choose $D\cap O$ to be the output signature and the rest of the visible atoms appearing in $R[D]$ to be the input signature. \begin{definition} \label{induced-module} Given an \system{smodels} program module $\module{P}=\tuple{R,I,O,H}$ and a set of atoms $D\subseteq\hb{\module{P}}\setminus I$, a submodule induced by $D$ is $$\module{P}[D]=(R[D], (\hb{R[D]}\setminus D)\cap(I\cup O), D\cap O, D\cap H).$$ \end{definition} Let $D_{1},\ldots,D_{m}$ be the strongly connected components of $\dep{\module{P}}$ such that $D_{i}\cap I=\emptyset$.
Now we can define $\module{P}_i=\module{P}[D_i]$ for each $1\leq i\leq m$. Since the strongly connected components of $\dep{\module{P}}$ are used as a basis, it is guaranteed that there is no positive recursion between any of the submodules $\module{P}_i$. Also, it is clear that the output signatures of the submodules are pairwise disjoint. Unfortunately this construction does not yet guarantee that hidden atoms stay local, and therefore the composition $\module{P}_1\oplus\cdots\oplus\module{P}_m$ might not be defined because certain $\module{P}_i$'s might not respect each other's hidden atoms. A solution is to combine the $D_{i}$'s in such a way that the modules are closed with respect to dependencies caused by the hidden atoms, that is, if a hidden atom $h$ belongs to a component $D_{i}$, then all the atoms in the heads of rules in which $h$ or $\naf h$ appears have to belong to $D_{i}$, too. This can be achieved by finding the strongly connected components, denoted by $E_1,\ldots,E_k$, for $\mathrm{Dep}^\mathrm{h}(\module{P},\{D_{1},\ldots,D_{m}\})$, where $\mathrm{Dep}^\mathrm{h}(\module{P},\{D_{1},\ldots,D_{m}\})$ has $\{D_{1},\ldots,D_{m}\}$ as the set of vertices, and \begin{multline*} \{\pair{D_{i}}{D_{j}}, \pair{D_{j}}{D_{i}}\mid a\in D_{i}, b\in D_{j}, r\in R, \\ b\in\head{r}\mbox{ and }a\in\body{r}\cap\hbh{\module{P}}\} \end{multline*} as the set of edges. Now, we take the sets $$F_i=\bigcup_{D\in E_i}D$$ for $1\leq i\leq k$ and use them to induce a module structure for $\module{P}$ by defining $\module{P}_i=\module{P}[F_i]$ for $1\leq i\leq k$.
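As a sketch of the first step of this construction, the strongly connected components of the positive dependency graph can be computed with Kosaraju's algorithm (atoms are encoded as strings and an edge $(a,b)$ states that $a$ depends positively on $b$; an illustrative, stdlib-only sketch):

```python
def sccs(vertices, edges):
    """Strongly connected components of a digraph (Kosaraju's algorithm)."""
    adj = {v: [] for v in vertices}
    radj = {v: [] for v in vertices}
    for u, v in edges:
        adj[u].append(v)
        radj[v].append(u)

    def dfs(start, graph, seen, out):
        # iterative depth-first search emitting vertices in post-order
        stack = [(start, iter(graph[start]))]
        seen.add(start)
        while stack:
            node, it = stack[-1]
            nxt = next((w for w in it if w not in seen), None)
            if nxt is None:
                stack.pop()
                out.append(node)
            else:
                seen.add(nxt)
                stack.append((nxt, iter(graph[nxt])))

    order, seen = [], set()
    for v in vertices:
        if v not in seen:
            dfs(v, adj, seen, order)
    comps, seen = [], set()
    for v in reversed(order):          # decreasing finishing time
        if v not in seen:
            comp = []
            dfs(v, radj, seen, comp)
            comps.append(frozenset(comp))
    return comps
```

Grouping the rules by these components, and then merging the components that are connected in $\mathrm{Dep}^\mathrm{h}$, yields the sets $F_i$ inducing the submodules $\module{P}[F_i]$.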
As there may be atoms in $\hb{\module{P}}$ not appearing in the rules of $\module{P}$, that is, $\hb{\module{P}}=\hb{R}$ does not necessarily hold for $\module{P}=\tuple{R,I,O,H}$, it is possible that $$\hb{\module{P}}\setminus(\hb{\module{P}_1}\cup\cdots\cup \hb{\module{P}_k})\ne\emptyset.$$ To keep track of such atoms in $I\setminus\hb{R}$ we need an additional module defined as $$\module{P}_0=\tuple{\emptyset, I\setminus\hb{R},\emptyset,\emptyset}.$$ There is no need for a similar treatment for atoms in $(O\cup H)\setminus\hb{R}$ as each atom in $O\cup H$ belongs to some $\hb{\module{P}_i}$ by definition. Theorem \ref{decomposition} shows that we have a valid decomposition of $\module{P}$ into submodules. \begin{theorem} \label{decomposition} Consider an \system{smodels} program module $\module{P}$, and let $D_{1},\ldots,D_{m}$ be the SCCs of $\dep{\module{P}}$ such that $D_{i}\cap I=\emptyset$, and $E_1,\ldots,E_k$ the strongly connected components of $\mathrm{Dep}^\mathrm{h}(\module{P},\{D_{1},\ldots,D_{m}\})$. Define $\module{P}_0=\tuple{\emptyset, I\setminus\hb{R}, \emptyset,\emptyset}$, and $\module{P}_i=\module{P}[F_i]$ for $F_i=\bigcup_{D\in E_i}D$ and $1\leq i\leq k$. Then the join of the submodules $\module{P}_i$ for $0\leq i\leq k$ is defined and $\module{P}\lpeq{m}\module{P}_0\sqcup\cdots\sqcup\module{P}_k$. \end{theorem} \begin{proof} Based on the construction of the $F_i$'s and the discussion in this section it is clear that $\module{P'}=\module{P}_0\sqcup\cdots\sqcup\module{P}_k$ is defined. It is easy to verify that the sets of input, output, and hidden atoms of modules $\module{P'}$ and $\module{P}$ are exactly the same. The only difference between the sets of rules in $\module{P}$ and $\module{P'}$ is that some choice rules in $\module{P}$ may have been split into several rules in $\module{P'}$. This is a syntactical change not affecting the stable models of the modules, that is, $\sm{\module{P}}=\sm{\module{P}'}$.
Notice also that $\dep{\module{P}}=\dep{\module{P}'}$. Thus it holds that $\module{P'}\lpeq{m}\module{P}$. \end{proof} \subsection{Semantical conditions for module composition} \label{section:sem-mod-eq} Even though Example \ref{ex:gs-composition} shows that conditions for $\oplus$ are not enough to guarantee that the module theorem holds, there are cases where $\module{P}\sqcup\module{Q}$ is not defined and still it holds that $\sm{\module{P}\oplus\module{Q}}=\sm{\module{P}}\Join\sm{\module{Q}}$. \begin{example} \label{ex3} Consider modules $\module{P}=\tuple{\{a\leftarrow b .\; a\leftarrow\naf c.\}, \{b\}, \{a,c\},\emptyset}$ and $\module{Q}=\tuple{\{b\leftarrow a.\}, \{a\}, \{b\},\emptyset}$. Now, the composition $$\module{P}\oplus\module{Q}=\tuple{\{a\leftarrow b .\; a\leftarrow\naf c.\; b\leftarrow a.\},\emptyset,\{a,b,c\}, \emptyset}$$ is defined as the output sets are disjoint and there are no hidden atoms. Since $\sm{\module{P}}=\{\{a\},\{a,b\}\}$ and $\sm{\module{Q}}=\{\emptyset,\{a,b\}\}$, we get $\sm{\module{P}\oplus\module{Q}}=\{\{a,b\}\} =\sm{\module{P}}\Join\sm{\module{Q}}$. \hfill $\blacksquare$ \end{example} Example \ref{ex3} suggests that the denial of positive recursion between modules can be relaxed in certain cases. We define a semantical characterization for module composition that maintains the compositionality of the stable model semantics. \begin{definition} \label{semantical-join} The \emph{semantical join} $\module{P}_1\underline{\sqcup}\module{P}_2$ of two \system{smodels} program modules $\module{P}_1$ and $\module{P}_2$ is $\module{P}_1\oplus\module{P}_2$, provided $\module{P}_1\oplus\module{P}_2$ is defined and $\sm{\module{P}_1 \oplus \module{P}_2}=\sm{\module{P}_1}\Join\sm{\module{P}_2}$. \end{definition} The module theorem holds by definition for \system{smodels} program modules composed with $\underline{\sqcup}$. 
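The natural join itself is easy to compute once the stable model sets are available; it is deciding whether the join coincides with $\sm{\module{P}_1\oplus\module{P}_2}$ that is hard (see Theorem \ref{tradeoff} below). A Python sketch over the data of Example \ref{ex3}, with stable models encoded as frozensets (illustrative only):

```python
def compatible(m1, hbv1, m2, hbv2):
    """M1 and M2 agree on every atom visible in the other module."""
    return m1 & hbv2 == m2 & hbv1

def natural_join(sm1, hbv1, sm2, hbv2):
    """SM(P1) |><| SM(P2): unions of compatible pairs of stable models."""
    return {m1 | m2 for m1 in sm1 for m2 in sm2
            if compatible(m1, hbv1, m2, hbv2)}

# Example \ref{ex3}: SM(P) = {{a},{a,b}} with Hbv(P) = {a,b,c},
#                    SM(Q) = {{},{a,b}}  with Hbv(Q) = {a,b}
sm_p = {frozenset({'a'}), frozenset({'a', 'b'})}
sm_q = {frozenset(), frozenset({'a', 'b'})}
```

The join evaluates to $\set{\set{a,b}}$, which coincides with $\sm{\module{P}\oplus\module{Q}}$, so the semantical join $\module{P}\underline{\sqcup}\module{Q}$ of Example \ref{ex3} is defined.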
We can now present an alternative formulation for modular equivalence taking features from strong equivalence~\cite{LPV01:acmtocl}. \begin{definition} \label{new-mod-eq} \system{smodels} program modules $\module{P}$ and $\module{Q}$ are \emph{semantically modularly equivalent}, denoted by $\module{P}\lpeq{sem}\module{Q}$, if and only if $\hbi{\module{P}}=\hbi{\module{Q}}$ and $\module{P}\underline{\sqcup}\module{R}\lpeq{v}\module{Q}\underline{\sqcup}\module{R}$ for all $\module{R}$ such that $\module{P}\underline{\sqcup}\module{R}$ and $\module{Q}\underline{\sqcup}\module{R}$ are defined. \end{definition} It is straightforward to see that $\lpeq{sem}$ is a congruence for $\underline{\sqcup}$ and reduces to $\lpeq{v}$ for modules with completely specified input, that is, modules $\module{P}$ such that $\hbi{\module{P}}=\emptyset$. \begin{theorem} \label{same-eqs} $\module{P}\lpeq{m}\module{Q}$ if and only if $\module{P}\lpeq{sem}\module{Q}$ for any \system{smodels} program modules $\module{P}$ and $\module{Q}$. \end{theorem} \begin{proof} Assume $\module{P}\lpeq{sem}\module{Q}$. Now, $\module{P}\lpeq{m}\module{Q}$ is implied by Definition \ref{new-mod-eq} with empty context module $\module{R}=\tuple{\emptyset,\emptyset,\emptyset,\emptyset}$. Assume then $\module{P}\lpeq{m}\module{Q}$, that is, there is a bijection $f: \sm{\module{P}}\rightarrow\sm{\module{Q}}$ such that for each $M\in\sm{\module{P}}$, $M\cap\hbv{\module{P}}=f(M)\cap\hbv{\module{Q}}$. Consider arbitrary $\module{R}$ such that $\module{P}\underline{\sqcup}\module{R}$ and $\module{Q}\underline{\sqcup}\module{R}$ are defined. Then $\sm{\module{P}\underline{\sqcup} \module{R}}= \sm{\module{P}}\Join\sm{\module{R}}$ and $\sm{\module{Q}\underline{\sqcup} \module{R}}= \sm{\module{Q}}\Join\sm{\module{R}}$. 
We now define $g: \sm{\module{P}\underline{\sqcup} \module{R}}\rightarrow \sm{\module{Q}\underline{\sqcup} \module{R}}$ such that for any $M\in\sm{\module{P}\underline{\sqcup} \module{R}}$, $$g(M)=f(M_P)\cup M_R,$$ where $M=M_P\cup M_R$ such that $M_P\in\sm{\module{P}}$ and $M_R\in\sm{\module{R}}$ are compatible. Now, $g$ is a bijection and $M\cap(\hbv{\module{P}}\cup\hbv{\module{R}})= g(M)\cap(\hbv{\module{Q}}\cup\hbv{\module{R}})$ for each $M\in\sm{\module{P}\underline{\sqcup}\module{R}}$. Since $\module{R}$ was arbitrary, $\module{P}\lpeq{sem}\module{Q}$ follows. \end{proof} Theorem \ref{same-eqs} implies that $\lpeq{m}$ is a congruence for $\underline{\sqcup}$, too. Thus it is possible to replace $\module{P}$ with a modularly equivalent $\module{Q}$ in the contexts allowed by $\underline{\sqcup}$. The syntactical restriction denying positive recursion between modules is easy to check, since SCCs can be found in linear time with respect to the size of the dependency graph~\cite{Tarjan}. In contrast, checking whether $\sm{\module{P}_1\oplus \module{P}_2}= \sm{\module{P}_1}\Join\sm{\module{P}_2}$ holds is a computationally harder problem. \begin{theorem} \label{tradeoff} Given \system{smodels} program modules $\module{P}_1$ and $\module{P}_2$ such that $\module{P}_1\oplus\module{P}_2$ is defined, deciding whether it holds that $\sm{\module{P}_1\oplus \module{P}_2}= \sm{\module{P}_1}\Join\sm{\module{P}_2}$ is a $\mathbf{coNP}$-complete decision problem. \end{theorem} \begin{proof} Let $\module{P}_1$ and $\module{P}_2$ be \system{smodels} program modules such that $\module{P}_1\oplus\module{P}_2$ is defined.
We can show $\sm{\module{P}_1 \oplus \module{P}_2}\ne \sm{\module{P}_1}\Join\sm{\module{P}_2}$ by choosing $M\subseteq\hb{\module{P}_1\oplus\module{P}_2}$ and checking that \begin{itemize} \item $M\in\sm{\module{P}_1\oplus\module{P}_2}$ and $M\cap\hb{\module{P}_1}\not\in\sm{\module{P}_1}$; or \item $M\in\sm{\module{P}_1\oplus\module{P}_2}$ and $M\cap\hb{\module{P}_2}\not\in\sm{\module{P}_2}$; or \item $M\not\in\sm{\module{P}_1\oplus\module{P}_2}$, $M\cap\hb{\module{P}_1}\in\sm{\module{P}_1}$, and $M\cap\hb{\module{P}_2}\in\sm{\module{P}_2}$. \end{itemize} Once we have chosen $M$, these tests can be performed in polynomial time, which shows that the problem is in $\mathbf{coNP}$. To establish $\mathbf{coNP}$-hardness we present a reduction from $\overline{\mathbf{3SAT}}$. Consider a finite set $S=\eset{C_1}{C_n}$ of three-literal clauses $C_i$ of the form $l_1\lor l_2\lor l_3$ where each $l_i$ is either an atom $a$ or its classical negation $\neg a$. Each clause $C_i$ is translated into rules $r_{i,j}$ of the form $c_i\leftarrow f_j$, where $1\leq j\leq 3$, and $f_j=a$ if $l_j=a$ and $f_j=\naf a$ if $l_j=\neg a$. The intuitive reading of $c_i$ is that clause $C_i$ is satisfied. We define modules $\module{P}_1=\tuple{\{e\leftarrow d.\}, \{d\}, \{e\},\emptyset}$ and \begin{multline*} \module{P}_2=\langle \{r_{i,j}\mid 1\leq i\leq n,1\leq j\leq 3\}\cup \{d\leftarrow e,c_1,\ldots, c_n.\}, \\ \hb{S}\cup\{e\}, \{d\},\{c_1,\ldots,c_n\} \rangle. \end{multline*} Now $\module{P}_1\oplus\module{P}_2$ is defined, and $\sm{\module{P}_1}=\{\emptyset,\{d,e\}\}$. There is $M\in\sm{\module{P}_2}$ that is compatible with $\{d,e\}$ if and only if $S\in\mathbf{3SAT}$. Since $d\not\in N$ and $e\not\in N$ for all $N\in\sm{\module{P}_1\oplus\module{P}_2}$, it follows that $S\in\overline{\mathbf{3SAT}}$ if and only if $\sm{\module{P}_1\oplus\module{P}_2}= \sm{\module{P}_1}\Join\sm{\module{P}_2}$. 
\end{proof} Theorem \ref{tradeoff} shows that there is a tradeoff for allowing positive recursion between modules, as more effort is needed to check that the composition of such modules does not compromise the compositionality of the stable model semantics. \section{Tools and Practical Demonstration} \label{section:experiments} The goal of this section is to demonstrate how the module system introduced in Section \ref{section:modules} can be exploited in practice in the context of the \system{smodels} system and other compatible systems. In this respect, we present tools that have been developed for the (de)composition of logic programs that are represented in the \emph{internal file format}% \footnote{The reader is referred to \cite{Janhunen07:sea} for a detailed description and analysis of the format.} of the \system{smodels} engine. The binaries for both tools are available under the \system{asptools} collection% \footnote{\url{http://www.tcs.hut.fi/Software/asptools/}}. Moreover, we conduct and report a practical experiment which illustrates the performance of the tools when processing substantially large benchmark instances, that is, \system{smodels} programs having up to millions of rules (see the \system{asptools} web page for examples). The first tool, namely \system{modlist}, is targeted at program decomposition based on the strongly connected components of an \system{smodels} program given as input. In view of the objectives of Section \ref{section:decomposition}, there are three optional outcomes of the decomposition, that is, strongly connected components that take into account \begin{enumerate} \item positive dependencies only, \item positive dependencies and hidden atoms, and \item both positive and negative dependencies as well as hidden atoms. \end{enumerate} The number of modules created by \system{modlist} decreases in this order. However, our benchmarks cover program instances that get split into tens of thousands of modules.
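All three decomposition schemes are based on strongly connected components. As a point of reference (this is not the actual implementation of \system{modlist}), a minimal Tarjan-style SCC computation over a dependency graph can be sketched in Python as follows:

```python
def sccs(graph):
    """Tarjan's algorithm: strongly connected components of a digraph.

    `graph` maps each node to an iterable of its successors.
    Returns the components as a list of sets, in reverse
    topological order of the component graph.
    """
    index, low, on_stack = {}, {}, set()
    stack, comps, counter = [], [], [0]

    def visit(v):
        index[v] = low[v] = counter[0]
        counter[0] += 1
        stack.append(v)
        on_stack.add(v)
        for w in graph.get(v, ()):
            if w not in index:
                visit(w)
                low[v] = min(low[v], low[w])
            elif w in on_stack:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:          # v is the root of a component
            comp = set()
            while True:
                w = stack.pop()
                on_stack.discard(w)
                comp.add(w)
                if w == v:
                    break
            comps.append(comp)

    for v in list(graph):
        if v not in index:
            visit(v)
    return comps
```

The recursive formulation is kept for brevity; on programs with millions of rules an iterative variant would be needed to avoid exhausting the call stack.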
To tackle the problem of storing such numbers of modules in separate files, we decided to use file compression and packaging tools and, in particular, the \system{zip} utility available in standard Linux installations. We found \system{zip} superior to \system{tar} as it allows random access to the files in an archive, or a \emph{zipfile}. This feature becomes valuable when the modules are accessed from the archive for further processing. The tool for program composition has been named \system{lpcat}, which refers to the concatenation of files containing logic programs. A new version of the tool was implemented for the experiments reported below, for better performance as well as usability. The old version (version 1.8) is only able to combine two modules at a time, which gives a quadratic nature to the process of combining $n$ modules: modules are added one by one to the composition. The new version, however, is able to read in modules from several files and, even more conveniently, a \emph{stream of modules} from an individual file. The \system{zip} facility provides an option for creating such a stream, which can then be forwarded to \system{lpcat} for composition. This is the strategy for composing programs in the experiments described next.
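The storage scheme can be mimicked with the standard \texttt{zipfile} module of Python. The sketch below (an illustration of the idea only, not part of the actual tool chain) packs the textual representations of modules into an in-memory zipfile and streams them back one at a time:

```python
import io
import zipfile


def pack_modules(modules):
    """Store each module's textual representation as one archive member."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        for i, text in enumerate(modules):
            zf.writestr(f"module{i}.sm", text)
    return buf.getvalue()


def stream_modules(data):
    """Yield the modules one by one; members are read on demand,
    so the archive never needs to be extracted as a whole."""
    with zipfile.ZipFile(io.BytesIO(data)) as zf:
        for name in zf.namelist():
            yield zf.read(name).decode()
```

Random access to individual members (via \texttt{ZipFile.read}) is exactly the feature that made \system{zip} preferable to \system{tar} above.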
\begin{table} \begin{center} \begin{tabular}{@{\hspace{-3pt}}c@{\hspace{-3pt}}} \begin{tabular}{lrrlrrr} \textbf{Benchmark} & \textbf{na} & \textbf{nr} & \textbf{bt} & \textbf{nm} & \textbf{dt} (s) & \textbf{ct} (s) \\ \hline \hline ephp-13 & 35 518 & 90 784 & $+$ & 35 518 & 2 110 & 362 \\ & & & $+$h & 35 518 & 2 110 & 362 \\ & & &$\pm$h& 35 362 & 2 090 & 361 \\ \hline mutex3 & 276 086 & 2 406 357 & $+$ & 101 819 & 22 900 & 9 570 \\ & & & $+$h & 101 819 & 23 300 & 9 640 \\ & & &$\pm$h& 101 609 & 24 000 & 9 580 \\ \hline phi3 & 7 379 & 14 274 & $+$ & 6 217 & 74.3 & 3.32 \\ & & & $+$h & 6 217 & 74.3 & 3.35 \\ & & &$\pm$h& 5 686 & 63.2 & 2.92 \\ \hline seq4-ss4 & 6 873 & 1 197 182 & $+$ & 3 425 & 121 & 60.0 \\ & & & $+$h & 1 403 & 89.4 & 31.9 \\ & & &$\pm$h & 107 & 20.2 & 7.58 \\ \hline \end{tabular} \\ \ \\ \begin{tabular}{@{}lll@{}} Legend for abbreviations: & \textbf{na}: & Number of atoms \\ & \textbf{nr}: & Number of rules \\ & \textbf{nm}: & Number of modules \\ & \textbf{bt}: & Benchmark type \\ & \textbf{dt}: & Decomposition time \\ & \textbf{ct}: & Composition time \\ \end{tabular} \end{tabular} \end{center} \caption{Summary of benchmark results for module (de)composition \label{table:results}} \end{table} To test the performance of our tools, we picked a set of benchmark instances having from tens of thousands up to millions of rules---expressed in the \system{smodels} format. For each instance, the first task is to decompose the instance into modules using \system{modlist} and to create a zipfile containing the modules. The type of modules to be created is varied according to the three schemes summarized above. The second task is to recreate the benchmark instance from a stream of modules extracted from the respective zipfile. As suggested above, the actual composition is carried out using \system{lpcat}, and we also check that the number of rules matches that of the original instance.
Due to the high number of rules, checking the equivalence of the original and composed programs \cite{JO07:tplp} is infeasible in many cases. If all atoms are visible, this can be accomplished syntactically on the basis of sorted textual representations of the programs involved. To ensure that \system{modlist} and \system{lpcat} produce correct (de)compositions of programs, such a check was performed for all compositions created for the first three benchmarks, which involve no hidden atoms. As regards computer hardware, we ran \system{modlist} and \system{lpcat} on a PC with a 1.8GHz Intel Core 2 Duo CPU and 2~GB of main memory---operating under the Linux 2.6.18 system. In the experimental results collected in Table \ref{table:results}, we report the sum of user and system times as measured with the {\tt /usr/bin/time} command. There are three benchmark types (\textbf{bt} for short) as enumerated at the beginning of this section. We refer to them using the respective abbreviations $+$, $+$h, and $\pm$h. The first benchmark instance in Table \ref{table:results}, viz.~\emph{ephp-13}, is a formalization \cite{JO07:iclp} of the classical \emph{pigeon hole principle} for 13 pigeons---extended by redundant rules in analogy to Tseitin's \emph{extended resolution} proof system. This program can be deemed medium-sized within our benchmarks. There are no hidden atoms, no positive recursion, and little negative recursion in this program instance, as indicated by the number of atoms (35 518) and the respective numbers of modules (see column \textbf{nm}). Thus we have an example of a very fine-grained decomposition where the \emph{definition}% \footnote{The set of rules that mention the atom in question in their head.} of each atom ends up as its own module in the outcome. The given timings indicate that \system{modlist} and \system{lpcat} are able to handle $15$ and $100$ modules per second, respectively.
The share of file I/O and (de)compression is substantial in program decomposition. For instance, the actual splitting of the \emph{ephp-13} benchmark ($+$h) using \system{modlist} takes only $0.59$ seconds---the rest of the approximately~$2 110$ seconds is spent creating the zipfile. In contrast, inflating the stream of modules from the zipfile is very efficient, as it takes only $0.45$ seconds in the case of \emph{ephp-13}. After that, the restoration of the original program instance takes roughly $361$ seconds. The creation and compression of a joint symbol table for the modules accounts for most of the time spent on this operation. It should also be stressed that it is impractical to store the modules of this program in separate files. For instance, a shell command that refers to all modules fails due to an excessive number of arguments on the respective command line. The next two programs in Table \ref{table:results}, \emph{mutex3} and \emph{phi3}, are related to the \emph{distributed implementability problem} of asynchronous automata, and are particular formalizations of the classical \emph{mutual exclusion} and \emph{dining philosophers} problems \cite{HS04:report,HS05:acsd}. These programs involve no hidden atoms, and both positive and negative interdependencies of atoms occur. The extremely high numbers of rules ($2 406 357$) and modules ($101 819$) are clearly reflected in the running times observed for \emph{mutex3}. However, the respective rates of $4$ and $10$ modules per second do not differ too much from those obtained for \emph{ephp-13}, given the fact that the number of rules is about 25 times higher. The data observed for \emph{phi3} are analogous to those obtained for \emph{ephp-13} and \emph{mutex3}, but the respective modules-per-second rates are much higher: approximately~$90$ and $2000$. This may partly boil down to the fact that \emph{phi3} is the smallest program under consideration and that it also has the smallest rules-per-module ratio.
Our last benchmark program, \emph{seq4-ss4}, is taken from the benchmark sets of \cite{BCVF06:iclp}, where the optimization of machine code using ASP techniques is of interest. The program in question formalizes the optimization of a particular sequence of four SPARC-v7 instructions. This program instance has the largest modules as regards the number of rules---the average number of rules per module varies from about $350$ to $11 200$ depending on the module type. It also has hidden atoms, which makes a difference between modules based on plain SCCs and their combinations induced by the dependencies caused by the use of hidden atoms. The respective modules-per-second rates $28$, $19$, and $5$ are all better than the rate of $4$ obtained for \emph{mutex3}. To provide the reader with a better idea of the sizes of individual modules, we have collected some numbers about their distribution in Table \ref{table:distribution}. Each program involves a substantial number of modules with just one rule, each of which defines a single atom of interest. On the other hand, the largest SCCs for \emph{ephp-13}, \emph{mutex3}, \emph{phi3}, and \emph{seq4-ss4} involve $949$, $2 091 912$, $2 579$, and $1 071 689$ rules, respectively. For \emph{mutex3}, the biggest module consists of a definition of an equivalence relation over states in the verification domain---creating a huge set of positively interdependent atoms. For \emph{ephp-13}, the greatest module is a collection of nogoods which can be shown to have no stable models in roughly $99 500$ seconds using \system{smodels} (version 2.32). However, the remaining rules of \emph{ephp-13} make this fact much faster to prove: only $61$ seconds elapse.
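The size bins used in Table \ref{table:distribution} are, with one exception (129--512), consecutive ranges of powers of two. A small helper of ours (not part of the tool chain) that assigns a module with a given number of rules to its bin:

```python
import bisect

# Upper bin edges as used in Table 2: 1, 2, 3-4, 5-8, ..., 129-512, 513-1024.
EDGES = [1, 2, 4, 8, 16, 32, 64, 128, 512, 1024]


def bucket(size):
    """Return the Table 2 label of the bin containing `size` rules."""
    i = bisect.bisect_left(EDGES, size)
    if i == len(EDGES):
        return "over 1024"
    lo = EDGES[i - 1] + 1 if i > 0 else 1
    return f"{lo}--{EDGES[i]}" if lo < EDGES[i] else f"{EDGES[i]}"
```

Counting `bucket(len(rules))` over all extracted modules reproduces the column of a benchmark in the distribution table.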
\begin{table} \begin{center} \begin{tabular}{lrrrr} \hline Benchmark & ephp-13 & mutex3 & phi3 & seq4-ss4 \\ \hline \hline \textbf{nr} & \multicolumn{4}{c}{\textbf{nm}} \\ \hline 1 & 14 474 & 67 749 & 2 811 & 2 969 \\ 2 & 7 014 & 2 757 & 1 434 & \\ 3--4 & 12 680 & 41 & 1 962 & \\ 5--8 & 149 & 30 798 & 2 & \\ 9--16 & 618 & 255 & 6 & \\ 17--32 & 582 & & & 11 \\ 33--64 & & & 1 & \\ 65--128 & & & & 134 \\ 129--512 & & & & 296 \\ 513--1 024 & 1 & & & 9 \\ over 1 024 & & 2 & 1 & 2 \\ \hline \end{tabular} \end{center} \caption{Distribution of the sizes of modules \label{table:distribution} (see Table \ref{table:results} for legends)} \end{table} A few concluding remarks follow. Increasing the number of modules in a program tends to decrease the number of modules that can be decomposed per time unit. This observation suggests that the creation of the zipfile has a quadratic flavor, although the modules themselves can be figured out in linear time (using a variant of Tarjan's algorithm). Perhaps this can be improved in the future by better integrating the creation of the zipfile into \system{modlist}. For now, it creates a shell script for this purpose. Handling the biggest program instances is also subject to the effects of memory allocation, which may further slow down computations. On the other hand, the cost of increasing the number of rules in modules seems to be relatively small. Moreover, it is clear on the basis of the data given in Table \ref{table:results} that the composition of programs is faster than decomposition. This would not be the case if the old version of \system{lpcat} were used for composition. Lastly, we want to emphasize that \system{modlist} and \system{lpcat} have been implemented as supplementary tools that are not directly related to the computation of stable models. Nevertheless, we intend to exploit these tools in order to modularize different tasks in ASP, such as verifying ordinary/modular equivalence and program optimization.
The existence of such tools enables modular program development and the creation of \emph{module libraries} for \system{smodels} programs, and thus puts forward the use of module architectures in the realm of ASP. \section{Conclusions} \label{section:conclusions} In this paper, we introduce a simple and intuitive notion of a logic program module that interacts with other modules through a well-defined input/output interface. The design has its roots in a module architecture proposed for conventional logic programs \cite{GS89}, but as regards our contribution, we tailor the architecture in order to better meet the criteria of ASP. Perhaps the most important objective in this respect is to achieve the compositionality of stable model semantics, that is, the semantics of an entire program depends directly on the semantics assigned to its modules. To this end, the main result of this paper is formalized as the \emph{module theorem} (Theorem \ref{moduletheorem}) which links program-level stability with module-level stability. The theorem holds under the assumption that positively interdependent atoms are always placed in the same module. The \emph{join} operation $\sqcup$ defined for program modules effectively formalizes this constraint---which we find acceptable when it comes to good programming style in ASP. The module theorem is also a proper generalization of the splitting-set theorem~\cite{LT94:iclp} recast for \system{smodels} programs. The main difference is that splitting-sets do not enable any kind of recursion between modules. Even though the module theorem is proved to demonstrate the feasibility of the respective module architecture, it is also applied as a tool to simplify mathematical proofs in this paper and recently also in~\cite{OJ08:aimsa,OJ08:jlc}. 
It also lends itself to extensions for further classes of logic programs which can be brought into effect in terms of \emph{strongly faithful}, $\sqcup$-\emph{preserving}, and \emph{modular} translations for the removal of new syntax (Theorem \ref{theorem:modulethr-translation}). Moreover, the module theorem paves the way for the modularization of various reasoning tasks, such as search for answer sets, query evaluation, and verification, in ASP. The second main theme of the paper is the notion of modular equivalence which is proved to be a proper congruence relation for program composition using $\sqcup$ (Theorem \ref{smodels-congruence}). Thus modular equivalence is preserved under substitutions of modularly equivalent program modules. Since uniform equivalence is not a congruence for ordinary $\cup$ but strong equivalence is by definition, modular equivalence can be viewed as a reasonable compromise between these two extremes. In addition to the congruence property, we present a number of results about modular equivalence. \begin{enumerate} \item We show that deciding modular equivalence forms a $\mathbf{coNP}$-complete decision problem for \system{smodels} program modules with the EVA property, that is, those having enough visible atoms so that their stable models can be distinguished from each other on the basis of visible atoms only. In this way, it is possible to use the \system{smodels} solver for the actual verification task. \item We consider the possibility of redefining the join operation $\sqcup$ using a semantical condition that corresponds to the content of the module theorem. The notion of modular equivalence is not affected, but the cost of verifying whether a particular join of modules is defined becomes a $\mathbf{coNP}$-complete decision problem. 
This is in contrast with the linear time check for positive recursion (Tarjan's algorithm for strongly connected components) but it may favorably extend the coverage of modular equivalence in certain applications. \item Finally, we also analyze the problem of decomposing an \system{smodels} program into modules when there is no a priori knowledge about the structure of the program. The strongly connected components of the program provide the starting point in this respect, but the usage of hidden atoms may enforce a higher degree of amalgamation when the modules of a program are extracted. \end{enumerate} The theoretical results presented in the paper have emerged in close connection with the development of tools for ASP. The practical demonstration in Section \ref{section:experiments} illustrates the basic facilities that are required to deal with \emph{object level} modules within the \system{smodels} system.% \footnote{Likewise, source level modules could be incorporated to the front-end of the system (\system{lparse}).} The linker, namely \system{lpcat}, enables the composition of ground programs in the \system{smodels} format. Using this tool, for instance, it is possible to add a query to a program afterwards without grounding the program again. On the other hand, individual modules of a program can be accessed from the zipfile created by the module extractor \system{modlist}. This is highly practical since we intend to pursue techniques for module-level optimization in the future. \section*{Acknowledgements} This work has been partially supported by the Academy of Finland through Projects \#211025 and \#122399. The first author gratefully acknowledges the financial support from Helsinki Graduate School in Computer Science and Engineering, Emil Aaltonen Foundation, the Finnish Foundation for Technology Promotion TES, the Nokia Foundation, and the Finnish Cultural Foundation.
\section{Introduction}\label{sec:intro} Let $\mathfrak g$ be a complex semisimple Lie algebra. We fix a Cartan subalgebra $\mathfrak h$ of $\mathfrak g$. Let $(\pi,V_\pi)$ be a finite-dimensional representation of $\mathfrak g$, that is, a homomorphism $\pi:\mathfrak g\to \gl(V_\pi)$ with $V_\pi$ a complex vector space. An element $\mu\in \mathfrak h^*$ is called a \emph{weight} of $\pi$ if \begin{equation*} V_\pi(\mu):= \{v\in V_\pi: \pi(X)v=\mu(X)v \text{ for all }X\in\mathfrak h\}\neq0. \end{equation*} The \emph{multiplicity} of $\mu$ in the representation $\pi$, denoted by $m_\pi(\mu)$, is defined as $\dim V_\pi(\mu)$. There are many formulas in the literature to compute $m_\pi(\mu)$ for arbitrary $\mathfrak g$, $\pi$ and $\mu$. The ones by Freudenthal~\cite{Freudenthal54} and Kostant~\cite{Kostant59} are very classical. More recent formulas were given by Lusztig~\cite{Lusztig83}, Littelmann~\cite{Littelmann95} and Sahi~\cite{Sahi00}. Although all of them are very elegant and powerful theoretical results, they may not be considered \emph{closed explicit expressions}. Moreover, some of them are not adequate for computer implementation (cf.\ \cite{Schutzer04thesis}, \cite{Harris12thesis}). Actually, a closed formula is not expected in general. Typically there is a sum over a symmetric group (whose cardinality grows quickly as the rank of $\mathfrak g$ increases) or over partitions, or the formula is recursive, or it is written in terms of combinatorial objects (e.g.\ Young diagrams as in \cite{Koike87}), among other possibilities. However, closed explicit expressions are possible for particular choices of $\mathfrak g$ and $\pi$. Obviously, this is the case for $\sll(2,\C)$ and $\pi$ any of its irreducible representations (see \cite[\S I.9]{Knapp-book-beyond}).
Furthermore, for a classical Lie algebra $\mathfrak g$, it is not difficult to give expressions for the weight multiplicities of the representations $\op{Sym}^k(V_\mathrm{st})$ and $\bigwedge^p (V_\mathrm{st})$ and also for their irreducible components (see for instance Lemmas~\ref{lemCn:extreme}, \ref{lemDn:extremereps} and \ref{lemBn:extremereps} and Theorem~\ref{thmAn:multip(k,p)}; these formulas are probably well known but they are included here for completeness). Here, $V_{\mathrm{st}}$ denotes the standard representation of $\mathfrak g$. A good example of a closed explicit formula in a non-trivial case was given by Cagliero and Tirao~\cite{CaglieroTirao04} for $\spp(2,\C)\simeq\so(5,\C)$ and $\pi$ arbitrary. To conclude the description of previous results in this large area, we name a few recent related results, though the list is far from complete: \cite{Cochet05}, \cite{BaldoniBeckCochetVergne06}, \cite{Bliem08-thesis}, \cite{Schutzer12}, \cite{Maddox14}, \cite{FernandezGarciaPerelomov2014}, \cite{FernandezGarciaPerelomov2015a}, \cite{FernandezGarciaPerelomov2015b}, \cite{FernandezGarciaPerelomov2017}, \cite{Cavallin17}. The main goal of this article is to show, for each classical complex Lie algebra $\mathfrak g$ of rank $n$, a closed explicit formula for the weight multiplicities of any irreducible representation of $\mathfrak g$ having highest weight $k\omega_1+\omega_p$, for any integers $k\geq0$ and $1\leq p\leq n$. Here, $\omega_1,\dots,\omega_n$ denote the fundamental weights associated to the root system $\Sigma(\mathfrak g,\mathfrak h)$. We call the sequence of irreducible representations of $\mathfrak g$ with highest weights $k\omega_1+\omega_p$, $k\geq0$, the \emph{$p$-fundamental string}. We will write $\pi_{\lambda}$ for the irreducible representation of $\mathfrak g$ with highest weight $\lambda$.
For types $\tipo B_n$, $\tipo C_n$ or $\tipo D_n$ (i.e.\ $\so(2n+1,\C)$, $\spp(n,\C)$ or $\so(2n,\C)$ respectively) an accessory representation $\pi_{k,p}$ is introduced to unify the approach (see Definition~\ref{def:pi_kp}). We have that $\pi_{k,p}$ and $\pi_{k \omega_1+\omega_p}$ coincide except for $p=n$ in type $\tipo B_n$ and $p=n-1,n$ in type $\tipo D_n$. The weight multiplicity formulas for $\pi_{k,p}$ are in Theorems~\ref{thmCn:multip(k,p)}, \ref{thmDn:multip(k,p)} and \ref{thmBn:multip(k,p)} for types $\tipo C_n$, $\tipo D_n$ and $\tipo B_n$ respectively. Their proofs follow the same strategy (see Section~\ref{sec:strategy}). The formulas for the remaining cases, namely the (spin) representations $\pi_{k \omega_1+\omega_n}$ in type $\tipo B_n$ and $\pi_{k \omega_1+\omega_{n-1}}$, $\pi_{k \omega_1+\omega_n}$ in type $\tipo D_n$, can be found in Theorems~\ref{thmBn:multip(spin)} and \ref{thmDn:multip(spin)}, respectively. Given a weight $\mu=\sum_{j=1}^{n} a_j\varepsilon_j$ (see Notation~\ref{notacion}) of a classical Lie algebra $\mathfrak g$ of type $\tipo B_n$, $\tipo C_n$ or $\tipo D_n$, we set \begin{align}\label{eq:notation-one-norm} \norma{\mu}=\sum_{j=1}^{n} |a_j| \quad\text{and}\quad \contador (\mu) = \#\{1\leq j\leq n: a_j=0\}. \end{align} We call $\norma{\mu}$ the \emph{one-norm} of $\mu$. The function $\contador(\mu)$ counts the number of zero coordinates of $\mu$. It is not difficult to check that $m_{\pi_{k\omega_1}}(\mu)$ depends only on $\norma{\mu}$ for a fixed $k\geq0$. Moreover, it is known that $m_{\pi_{k,p}}(\mu)$ depends only on $\norma{\mu}$ and $\contador (\mu)$ for type $\tipo D_n$ (see \cite[Lem.~3.3]{LMR-onenorm}). This last property is extended to types $\tipo B_n$ and $\tipo C_n$ as a consequence of their multiplicity formulas.
\begin{corollary}\label{cor:depending-one-norm-ceros} For $\mathfrak g$ a classical Lie algebra of type $\tipo B_n$, $\tipo C_n$ or $\tipo D_n$ and a weight $\mu=\sum_{i=1}^{n} a_i\varepsilon_i$, the multiplicity of $\mu$ in $\pi_{k,p}$ depends only on $\norma{\mu}$ and $\contador (\mu)$. \end{corollary} For $\mathfrak g=\sll(n+1,\C)$ (type $\tipo A_n$), the multiplicity formula for a representation in a fundamental string is in Theorem~\ref{thmAn:multip(k,p)}. This case is simpler since it follows immediately from basic facts on Young diagrams. Although this formula should be well known, it is included for completeness. Explicit expressions for the weight multiplicities of a representation in a fundamental string are required in several different areas. The authors' interest in them comes from their application to spectral geometry. Actually, many multiplicity formulas have already been applied to determine the spectrum of Laplace and Dirac operators on certain locally homogeneous spaces. See Section~\ref{sec:conclusions} for a detailed account of these applications. It is important to note that all the weight multiplicity formulas obtained in this article have been checked with Sage~\cite{Sage} for many cases. This computer program uses the classical Freudenthal formula. Because of the simplicity of the expressions obtained in the main theorems, the computer usually takes a fraction of a second to calculate the result. Throughout the article we use the convention $\binom{b}{a}=0$ if $a<0$ or $b<a$. The article is organized as follows. Section~\ref{sec:strategy} explains the method to obtain $m_{\pi_{k,p}}(\mu)$ for types $\tipo B_n$, $\tipo C_n$ and $\tipo D_n$. These cases are considered in Sections~\ref{secBn:multip(k,p)}, \ref{secCn:multip(k,p)} and \ref{secDn:multip(k,p)} respectively, and type $\tipo A_n$ is in Section~\ref{secAn:multip(k,p)}. In Section~\ref{sec:conclusions} we include some conclusions.
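In coordinates, the two invariants defined in \eqref{eq:notation-one-norm} are immediate to compute; the following Python sketch (function names are ours) makes this explicit:

```python
def one_norm(mu):
    """One-norm ||mu|| = sum_j |a_j| of a weight mu = sum_j a_j eps_j,
    given as its tuple of coordinates (a_1, ..., a_n)."""
    return sum(abs(a) for a in mu)


def zero_count(mu):
    """Number of vanishing coordinates of mu."""
    return sum(1 for a in mu if a == 0)
```

By Corollary \ref{cor:depending-one-norm-ceros}, two weights sharing both values have the same multiplicity in $\pi_{k,p}$.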
\section{Strategy}\label{sec:strategy} In this section, we introduce the abstract method used to find the weight multiplicity formulas for the cases $\tipo B_n$, $\tipo C_n$ and $\tipo D_n$. Throughout this section, $\mathfrak g$ denotes a classical complex Lie algebra of type $\tipo B_n$, $\tipo C_n$ and $\tipo D_n$, namely $\so(2n+1,\C)$, $\spp(n,\C)$, $\so(2n,\C)$, for some $n\geq2$. We first introduce some standard notation. \begin{notation}\label{notacion} We fix a Cartan subalgebra $\mathfrak h$ of $\mathfrak g$. Let $\{\varepsilon_{1}, \dots,\varepsilon_{n}\}$ be the standard basis of $\mathfrak h^*$. Thus, the sets of simple roots $\Pi(\mathfrak g,\mathfrak h)$ are given by $\{\varepsilon_1-\varepsilon_2,\dots, \varepsilon_{n-1}-\varepsilon_n,\varepsilon_n\}$ for type $\tipo B_n$, $\{\varepsilon_1-\varepsilon_2,\dots, \varepsilon_{n-1}-\varepsilon_n,2\varepsilon_n\}$ for type $\tipo C_n$, and $\{\varepsilon_1-\varepsilon_2,\dots, \varepsilon_{n-1}-\varepsilon_n,\varepsilon_{n-1}+\varepsilon_n\}$ for type $\tipo D_n$. A precise choice for $\mathfrak h$ and $\varepsilon_j$ will be indicated in each type. We denote by $\Sigma(\mathfrak g,\mathfrak h)$ the set of roots, by $\Sigma^+(\mathfrak g,\mathfrak h)$ the set of positive roots, by $\omega_1,\dots,\omega_n$ the fundamental weights, by $P(\mathfrak g)$ the (integral) weight space of $\mathfrak g$ and by $P^{{+}{+}}(\mathfrak g)$ the set of dominant weights. Let $\mathfrak g_0$ be the compact real form of $\mathfrak g$ associated to $\Sigma(\mathfrak g,\mathfrak h)$, let $G$ be the compact linear group with Lie algebra $\mathfrak g_0$ (e.g.\ $G=\SO(2n)$ for type $\tipo D_n$ in place of $\Spin(2n)$), and let $T$ be the maximal torus in $G$ corresponding to $\mathfrak h$, that is, the Lie algebra $\mathfrak t$ of $T$ is a real subalgebra of $\mathfrak h$. Write $P(G)$ for the set of $G$-integral weights and $P^{{+}{+}}(G)=P(G)\cap P^{{+}{+}}(\mathfrak g)$. 
By the Highest Weight Theorem, the irreducible representations of $\mathfrak g$ and $G$ are in correspondence with elements in $P^{{+}{+}}(\mathfrak g)$ and $P^{{+}{+}}(G)$ respectively. For $\lambda$ an integral dominant weight, we denote by $\pi_\lambda$ the associated irreducible representation of $\mathfrak g$. \end{notation} We recall that, under Notation~\ref{notacion}, the fundamental weights are: \begin{align*} \text{in type $\tipo B_n$},\qquad \omega_p &= \begin{cases} \varepsilon_1+\dots+\varepsilon_p &\text{if $1\leq p\leq n-1$,}\\ \frac12(\varepsilon_1+\dots+\varepsilon_n) &\text{if $p=n$,} \end{cases} \\ \text{in type $\tipo C_n$},\qquad \omega_p &=\varepsilon_1+\dots+\varepsilon_p \quad\text{for every $1\leq p\leq n$},\\ \text{in type $\tipo D_n$},\qquad \omega_p &= \begin{cases} \varepsilon_1+\dots+\varepsilon_p &\text{if $1\leq p\leq n-2$,}\\ \frac12(\varepsilon_1+\dots+\varepsilon_{n-1}-\varepsilon_{n}) &\text{if $p=n-1$,}\\ \frac12(\varepsilon_1+\dots+\varepsilon_{n-1}+\varepsilon_{n}) &\text{if $p=n$.} \end{cases} \end{align*} We set $\widetilde \omega_p=\varepsilon_1+\dots+\varepsilon_p$ for any $1\leq p\leq n$. Thus, $\widetilde \omega_p=\omega_p$ except for type $\tipo B_n$ and $p=n$, when $\widetilde \omega_{n}=2\omega_n$, and for type $\tipo D_n$ and $p\in\{n-1,n\}$, when $\widetilde \omega_{n-1}=\omega_{n-1}+\omega_n$ and $\widetilde \omega_{n}=2\omega_n$. \begin{definition}\label{def:pi_kp} Let $\mathfrak g$ be a classical Lie algebra of type $\tipo B_n$, $\tipo C_n$ or $\tipo D_n$. For integers $k\geq0$ and $1\leq p\leq n$, let us denote by $\pi_{k,p}$ the irreducible representation of $\mathfrak g$ with highest weight $k\omega_1+\widetilde \omega_p$, except for $p=n$ and type $\tipo D_n$ when we set $\pi_{k,n}=\pi_{k\omega_1+2\omega_{n-1}}\oplus \pi_{k\omega_1+2\omega_{n}}$. By convention, we set $\pi_{k,0}=0$ for $k\geq0$. \end{definition} We next explain the procedure to determine the multiplicity formula for $\pi_{k,p}$.
\begin{description} \item[Step 1] Obtain the decomposition in irreducible representations of \begin{equation}\label{eq:sigma_kp} \sigma_{k,p}:=\pi_{k\omega_1}\otimes \pi_{\widetilde \omega_p}, \end{equation} and consequently, write $\pi_{k,p}$ in terms of representations of the form \eqref{eq:sigma_kp} in the virtual representation ring. Fortunately, this decomposition is already known and coincides for the types $\tipo B_n$, $\tipo C_n$ and $\tipo D_n$; thus the second part of this step also has a uniform statement (see Lemma~\ref{lem:step1}). \item[Step 2] Obtain a formula for the weight multiplicities of the extreme cases $\pi_{k\omega_1}$ and $\pi_{\widetilde \omega_p}$. It will be useful to realize these representations inside $\op{Sym}^k(V_{\pi_{\omega_1}})$ and $\bigwedge^p(V_{\pi_{\omega_1}})$ respectively. Note that $\pi_{\omega_1}$ is the standard representation. \item[Step 3] Obtain a closed expression for the weight multiplicities of $\sigma_{k,p}$. This is the hardest step. One has that (see for instance \cite[Exercise~V.14]{Knapp-book-beyond}) \begin{equation}\label{eq:multiptensor} m_{\sigma_{k,p}}(\mu) = \sum_{\eta} m_{\pi_{k\omega_1}}(\mu-\eta) \, m_{\pi_{\widetilde \omega_p}}(\eta), \end{equation} where the sum is over the weights of $\pi_{\widetilde \omega_p}$. Then, the multiplicity formulas obtained in Step~2 can be applied. \item[Step 4] Obtain the weight multiplicity formula for $\pi_{k,p}$. We will substitute the formula obtained in Step~3 into the one obtained in Step~1. \end{description} \smallskip The following result works out Step~1. \begin{lemma}\label{lem:step1} Let $\mathfrak g$ be a classical Lie algebra of type $\tipo B_n$, $\tipo C_n$ or $\tipo D_n$ and let $k\geq0$, $1\leq p\leq n$ be integers. Then \begin{equation}\label{eq:funsionrule(sigma)} \sigma_{k,p} = \pi_{k\omega_1}\otimes \pi_{\widetilde \omega_p} = \pi_{k-1,1}\otimes \pi_{0,p} \simeq \pi_{k,p}\oplus \pi_{k-1,p+1} \oplus \pi_{k-2,p}\oplus \pi_{k-1,p-1}.
\end{equation} Furthermore, in the virtual ring of representations, we have that \begin{equation}\label{eq:virtualring(sigma)} \pi_{k,p} = \sum_{j=1}^p (-1)^{j-1} \sum_{i=0}^{j-1} \sigma_{k+j-2i,p-j}. \end{equation} \end{lemma} \begin{proof} The decomposition \eqref{eq:funsionrule(sigma)} is proved in \cite[page 510, example (3)]{KoikeTerada87} by Koike and Terada, though their results are much more general and this particular case was probably already known. We now show \eqref{eq:virtualring(sigma)}. The case $p=1$ is trivial. Indeed, the right hand side equals $\sigma_{k+1,0}=\pi_{k,1}$ by definition. We assume that the formula is valid for values lower than or equal to $p$. By this assumption and \eqref{eq:funsionrule(sigma)} we have that \begin{align*} \pi_{k,p+1} &= \sigma_{k+1,p} - \pi_{k+1,p} -\pi_{k-1,p} -\pi_{k,p-1} = \sigma_{k+1,p} - \sum_{j=1}^p (-1)^{j-1}\sum_{i=0}^{j-1} \sigma_{k+1+j-2i,p-j}\\ &\qquad - \sum_{j=1}^p (-1)^{j-1}\sum_{i=0}^{j-1} \sigma_{k-1+j-2i,p-j} - \sum_{j=1}^{p-1} (-1)^{j-1}\sum_{i=0}^{j-1} \sigma_{k+j-2i,p-1-j}. \end{align*} By making the change of variables $h=j+1$ in the last term, one gets \begin{align*} \pi_{k,p+1} &= \sigma_{k+1,p}- \sum_{j=1}^p (-1)^{j-1}\sum_{i=0}^{j-1} \sigma_{k+1+j-2i,p-j}- \sigma_{k,p-1} - \sum_{j=2}^p (-1)^{j-1} \sigma_{k+1-j,p-j}. \end{align*} The rest of the proof is straightforward. \end{proof} \section{Type C} \label{secCn:multip(k,p)} In this section we consider the classical Lie algebra $\mathfrak g$ of type $\tipo C_n$, that is, $\mathfrak g=\spp(n,\C)$. In this case, according to Notation~\ref{notacion}, $\widetilde \omega_p=\omega_p$ for every $p$, thus $\pi_{k\omega_1+\omega_p} = \pi_{k,p}$. The next theorem gives the explicit expression of $m_{\pi_{k,p}}(\mu)$ for any weight $\mu$. This expression depends on the terms $\norma{\mu}$ and $\contador (\mu)$, introduced in \eqref{eq:notation-one-norm}. 
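Before stating it, we note that the telescoping behind \eqref{eq:virtualring(sigma)} is easy to verify symbolically. In the sketch below (Python, standard library; helper names are ours), $\pi_{k,p}$ is treated as the formal symbol \texttt{(k, p)}; symbols with $k<0$ or $p\leq0$ are discarded, as in Definition~\ref{def:pi_kp} and the induction above, and the rank restriction $p\leq n$ is ignored since the check is purely formal:

```python
from collections import Counter

def expand_sigma(k, p):
    # Fusion rule: sigma_{k,p} = pi_{k,p} + pi_{k-1,p+1} + pi_{k-2,p} + pi_{k-1,p-1},
    # where formal symbols pi_{k',p'} with k' < 0 or p' <= 0 are zero by convention.
    terms = [(k, p), (k - 1, p + 1), (k - 2, p), (k - 1, p - 1)]
    return Counter(t for t in terms if t[0] >= 0 and t[1] >= 1)

def rhs(k, p):
    # Right-hand side of the virtual-ring identity, expanded into pi-symbols.
    total = Counter()
    for j in range(1, p + 1):
        sign = (-1) ** (j - 1)
        for i in range(j):
            for sym, mult in expand_sigma(k + j - 2 * i, p - j).items():
                total[sym] += sign * mult
    return {sym: m for sym, m in total.items() if m != 0}
```

For every pair tried, the alternating sum collapses to the single symbol $\pi_{k,p}$, as the lemma asserts.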
\begin{theorem}\label{thmCn:multip(k,p)} Let $\mathfrak g=\spp(n,\C)$ for some $n\geq2$ and let $k\geq0$, $1\leq p\leq n$ integers. For $\mu\in P(\mathfrak g)$, if $r(\mu):=(k+p-\norma{\mu})/2$ is a non-negative integer, then \begin{align*} m_{\pi_{k,p}}(\mu) &= \sum_{j=1}^{p} (-1)^{j-1} \sum_{t=0}^{\lfloor\frac{p-j}{2}\rfloor} \frac{n-p+j+1}{n-p+j+t+1}\binom{n-p+j+2t}{t} \\ &\quad \sum_{\beta=0}^{p-j-2t} 2^{p-j-2t-\beta} \binom{n-\contador (\mu)}{\beta} \binom{\contador (\mu)}{p-j-2t-\beta} \\ &\quad \sum_{\alpha=0}^\beta \binom{\beta}{\alpha} \sum_{i=0}^{j-1} \binom{r(\mu)-i-p+\alpha+t+j+n-1}{n-1}, \end{align*} and $m_{\pi_{k,p}}(\mu)=0$ otherwise. \end{theorem} The rest of this section is devoted to prove this formula following the procedure described in Section~\ref{sec:strategy}. We first set the notation for this case. Here $G=\Sp(n,\C)\cap \U(2n)$ where $\Sp(n,\C) = \{g\in\SL(2n,\C): g^t J_ng=J_n:=\left(\begin{smallmatrix}0&\op{Id}_n\\ -\op{Id}_n&0\end{smallmatrix}\right)\}$, $\mathfrak g_0=\mathfrak{sp}(n,\C)\cap \mathfrak{u}(2n)$, \begin{align}\label{eqCn:maximaltorus} T&= \left\{ \diag\left(e^{\mi \theta_1},\dots, e^{\mi \theta_n},e^{-\mi \theta_1},\dots, e^{-\mi \theta_n} \right) :\theta_i\in\R\;\forall\,i \right\},\\ \label{eqCn:subalgCartan} \mathfrak h &= \left\{ \diag(\theta_1,\dots,\theta_n,-\theta_1,\dots,-\theta_n): \theta_i\in\C \;\forall\,i \right\}, \end{align} $\varepsilon_i\big(\diag\left(\theta_1,\dots,\theta_n,-\theta_1,\dots,-\theta_n \right)\big) =\theta_i$ for each $1\leq i\leq n$, $\Sigma^+(\mathfrak g,\mathfrak h)= \{\varepsilon_i\pm \varepsilon_j: 1\leq i<j\leq n\}\cup\{2\varepsilon_i:1\leq i\leq n\}$, and \begin{align*} P(\mathfrak g) &= P(G)= \Z\varepsilon_1\oplus\dots\oplus\Z\varepsilon_{n},\\ P^{{+}{+}}(\mathfrak g) &= P^{{+}{+}}(G)= \left\{\textstyle\sum_{i}a_i\varepsilon_i \in P(\mathfrak g) :a_1\geq a_2\geq \dots \geq a_{n}\geq0\right\}. 
\end{align*} The following well known identities (see for instance \cite[\S17.2]{FultonHarris-book}) will be useful for Step~2: \begin{align}\label{eqCn:extremereps} \pi_{k\omega_1}=\pi_{k\varepsilon_1} &\simeq \op{Sym}^k(\C^{2n}),& \textstyle\bigwedge^p(\C^{2n}) &\simeq \pi_{\omega_p}\oplus \textstyle\bigwedge^{p-2}(\C^{2n}), \end{align} for any integers $k\geq0$ and $1\leq p\leq n$. Here, $\C^{2n}$ denotes the standard representation of $\mathfrak g=\spp(n,\C)$. Since $G=\Sp(n)$ is simply connected, $\pi_{\lambda}$ descends to a representation of $G$ for any $\lambda\in P^{{+}{+}}(\mathfrak g)$. In what follows we will work with representations of $G$ for simplicity. Thus, \begin{equation*} m_{\pi}(\mu) = \dim \{v\in V_\pi : \pi (\exp X)\, v = e^{\mu(X)}v\quad \forall\, X\in\mathfrak t\}. \end{equation*} \begin{lemma}\label{lemCn:extreme} Let $n\geq2$, $\mathfrak g=\spp(n,\C)$, $k\geq0$, $1\leq p\leq n$ and $\mu=\sum_{j=1}^n a_j\varepsilon_j\in P(\mathfrak g)$. Then \begin{align}\label{eqCn:multip(k)} m_{\pi_{k\omega_1}}(\mu) &=m_{\pi_{k\varepsilon_1}}(\mu)= \begin{cases} \binom{r(\mu)+n-1}{n-1} & \text{ if }\, r(\mu):=\frac{k-\norma{\mu}}{2}\in \N_0,\\ 0 & \text{ otherwise,} \end{cases} \\ m_{\pi_{\omega_p}}(\mu) &= \begin{cases} \frac{n-p+1}{n-p+r(\mu)+1}\binom{n-p+2r(\mu)}{r(\mu)} & \text{if }\,r(\mu):=\frac{p-\norma{\mu}}{2}\in \N_0 \text{ and } |a_j|\leq1\;\forall\,j,\\ 0&\text{otherwise.} \end{cases} \label{eqCn:multip(p)} \end{align} \end{lemma} \begin{proof} By \eqref{eqCn:extremereps}, $\pi_{k\varepsilon_1}$ is realized in the space of homogeneous polynomials $\mathcal P_k\simeq \op{Sym}^k(\C^{2n})$ of degree $k$ in the variables $x_1,\dots,x_{2n}$. The action of $g\in G$ on $f(x)\in \mathcal P_k$ is given by $(\pi_{k\varepsilon_1}(g)\cdot f)(x) = f(g^{-1}x)$, where $x$ denotes the column vector $(x_1,\dots,x_{2n})^t$.
The monomials $x_1^{k_1}\dots x_n^{k_n}x_{n+1}^{l_1}\dots x_{2n}^{l_n}$ with $k_1,\dots,k_n,l_1,\dots,l_n$ non-negative integers satisfying that $\sum_{j=1}^{n} (k_j+l_j)=k$ form a basis of $\mathcal P_k$ given by weight vectors. Indeed, one can check that the action of $h=\diag\left(e^{\mi \theta_1},\dots, e^{\mi \theta_n},e^{-\mi \theta_1},\dots, e^{-\mi \theta_n} \right) \in T$ on the monomial $x_1^{k_1}\dots x_n^{k_n}x_{n+1}^{l_1}\dots x_{2n}^{l_n}$ is given by multiplication by $ e^{\mi\sum_{j=1}^n\theta_j(k_j-l_j)}. $ Hence, $x_1^{k_1}\dots x_n^{k_n}x_{n+1}^{l_1}\dots x_{2n}^{l_n}$ is a weight vector of weight $\mu=\sum_{j=1}^n (k_j-l_{j}) \varepsilon_j$. Consequently, the multiplicity of a weight $\mu=\sum_{j=1}^n a_j\varepsilon_j\in P(\mathfrak g)$ in $\mathcal P_k$ is the number of different tuples $(k_1,\dots,k_{n},l_1,\dots,l_{n})\in\N_0^{2n}$ satisfying that $\sum_{j=1}^{n} (k_j+ l_j)=k$ and $a_j=k_j-l_{j}$ for all $j$. For such a tuple, we note that $k-\norma{\mu}= k-\sum_{i=1}^n |a_i|=2\sum_{i=1}^n \min(k_i,l_i)$. It follows that $\mu$ is a weight of $\mathcal{P}_k$ if and only if $k-\norma{\mu}=2r$ with $r$ a non-negative integer. Moreover, its multiplicity is the number of different ways one can write $r$ as an ordered sum of $n$ non-negative integers, which equals $\binom{r+n-1}{n-1}$. This implies \eqref{eqCn:multip(k)}. For \eqref{eqCn:multip(p)}, we consider the representation $\bigwedge^p(\C^{2n})$. The action of $G$ on $\bigwedge^p(\C^{2n})$ is given by $ g\cdot v_1\wedge\dots\wedge v_p = (g v_1)\wedge\dots\wedge (g v_p), $ where $gv$ stands for the matrix multiplication between $g\in G\subset \GL(2n,\C)$ and the column vector $v\in \C^{2n}$. Let $\{e_1,\dots,e_{2n}\}$ denote the canonical basis of $\C^{2n}$. For $I=\{i_1,\dots,i_p\}$ with $1\leq i_1<\dots<i_p\leq 2n$, we write $w_I=e_{i_1}\wedge\dots\wedge e_{i_p}$. Clearly, the set of $w_I$ for all choices of $I$ is a basis of $\bigwedge^p(\C^{2n})$.
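The stars-and-bars count just obtained for \eqref{eqCn:multip(k)} is easy to confirm by brute force for small parameters; a sketch (Python, standard library only; helper names and the chosen values of $n$, $k$ are ours):

```python
from collections import Counter
from itertools import product
from math import comb

def sym_weight_multiplicities(n, k):
    # Enumerate exponent tuples (k_1..k_n, l_1..l_n) of total degree k;
    # the corresponding monomial has weight (k_1 - l_1, ..., k_n - l_n).
    mults = Counter()
    for exps in product(range(k + 1), repeat=2 * n):
        if sum(exps) == k:
            ks, ls = exps[:n], exps[n:]
            mults[tuple(a - b for a, b in zip(ks, ls))] += 1
    return mults

def closed_form(n, k, mu):
    # Multiplicity of mu in Sym^k(C^{2n}): C(r+n-1, n-1) when
    # r = (k - |mu|_1)/2 is a non-negative integer, and 0 otherwise.
    twice_r = k - sum(abs(a) for a in mu)
    if twice_r < 0 or twice_r % 2 == 1:
        return 0
    return comb(twice_r // 2 + n - 1, n - 1)
```

For instance, for $n=2$ and $k=4$ the enumeration visits $5^4$ exponent tuples and agrees with the closed form on every weight.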
Since $h=\diag\left(e^{\mi \theta_1},\dots, e^{\mi \theta_n} ,e^{-\mi \theta_1},\dots, e^{-\mi \theta_n} \right) \in T$ satisfies $h e_j = e^{\mi \theta_j} e_j$ and $h e_{j+n} = e^{-\mi \theta_j}e_{j+n}$ for all $1\leq j\leq n$, we see that $w_I$ is a weight vector of weight $\mu=\sum_{j=1}^n a_j\varepsilon_j$ where \begin{align}\label{eq:weight_exteriorCn} a_j=\begin{cases} 1&\quad\text{if $j\in I$ and $j+n\notin I$,}\\ -1&\quad\text{if $j\notin I$ and $j+n\in I$,}\\ 0&\quad\text{if $j,j+n\in I$ or $j,j+n\notin I$.} \end{cases} \end{align} Thus, an arbitrary element $\mu=\sum_j a_j\varepsilon_j\in P(\mathfrak g)$ is a weight of $\bigwedge^p(\C^{2n})$ if and only if $|a_j|\leq 1$ for all $j$ and $p-\norma{\mu}= 2r$ for some non-negative integer $r$. It remains to determine the multiplicity in $\bigwedge^p(\C^{2n})$ of a weight $\mu=\sum_{j=1}^n a_j\varepsilon_j\in P(\mathfrak{g})$ satisfying $|a_j|\leq 1$ for all $j$ and $r:=\frac{p-\norma{\mu}}{2}\in\N_0$. Let $I_\mu=\{i:1\leq i\leq n, \, a_i=1\}\cup\{i:n+1\leq i\leq 2n,\, a_{i-n}=-1\}$. The set $I_\mu$ has $p-2r$ elements. For $I=\{i_1,\dots,i_p\}$ with $1\leq i_1<\dots<i_p\leq 2n$, it is a simple matter to check that $w_I$ is a weight vector with weight $\mu$ if and only if $I$ has $p$ elements, $I_\mu\subset I$ and $I$ has the property that $j\in I\smallsetminus I_\mu \iff j+n\in I\smallsetminus I_\mu$ for $1\leq j\leq n$. One can see that there are $\binom{n-p+2r}{r}$ choices for $I$. Hence $ m_{\bigwedge^p(\C^{2n})}(\mu) = \binom{n-p+2r}{r}$. From \eqref{eqCn:extremereps}, we conclude that $m_{\pi_{\omega_p}}(\mu) = m_{\bigwedge^p(\C^{2n})}(\mu) - m_{\bigwedge^{p-2}(\C^{2n})}(\mu) = \binom{n-p+2r}{r} - \binom{n-p+2r}{r-1} = \frac{n-p+1}{n-p+r+1}\binom{n-p+2r}{r}$, and \eqref{eqCn:multip(p)} is proved. \end{proof} We next consider Step~3, namely, a multiplicity formula for $\sigma_{k,p}$. \begin{lemma}\label{lemCn:multip(sigma_kp)} Let $n\geq2$, $\mathfrak g=\spp(n,\C)$, $k\geq0$, $1\leq p<n$, and $\mu\in P(\mathfrak g)$.
If $r(\mu):=(k+p-\norma{\mu})/2$ is a non-negative integer, then \begin{align*} m_{\sigma_{k,p}}(\mu) &= \sum_{t=0}^{\lfloor{p}/{2}\rfloor} \frac{n-p+1}{n-p+t+1}\binom{n-p+2t}{t}\sum_{\beta=0}^{p-2t} 2^{p-2t-\beta} \binom{n-\contador (\mu)}{\beta} \binom{\contador (\mu)}{p-2t-\beta} \\ &\qquad \sum_{\alpha=0}^\beta \binom{\beta}{\alpha} \binom{r(\mu)-p+\alpha+t+n-1}{n-1}, \end{align*} and $m_{\sigma_{k,p}}(\mu)=0$ otherwise. \end{lemma} \begin{proof} Write $r=r(\mu)$ and $\ell=\contador (\mu)$. We may assume that $\mu$ is dominant, thus $\mu=\sum_{j=1}^{n-\ell} a_j\varepsilon_j$ with $a_1\geq \dots \geq a_{n-\ell}>0$ since it has $\ell$ zero-coordinates. In order to use \eqref{eq:multiptensor}, by Lemma~\ref{lemCn:extreme}, we write the set of weights of $\pi_{\omega_p}$ as $$ \mathcal P(\pi_{\omega_p}) := \bigcup_{t=0}^{\lfloor {p}/{2}\rfloor} \;\bigcup_{\beta=0}^{p-2t} \;\bigcup_{\alpha=0}^{\beta} \;\mathcal P_{t,\beta,\alpha}^{(p)} $$ where \begin{equation}\label{eq:calP} \mathcal P_{t,\beta,\alpha}^{(p)} = \left\{ \sum_{h=1}^{p-2t} b_h\varepsilon_{i_h}: \begin{array}{l} i_1<\dots<i_\beta\leq n-\ell< i_{\beta+1}<\dots<i_{p-2t} \\ b_j=\pm1\quad \forall j,\quad \#\{1\leq j\leq \beta: b_j=1\}=\alpha \end{array} \right\}. \end{equation} A weight $\eta\in\mathcal P_{t,\beta,\alpha}^{(p)}$ has all entries in $\{0,\pm 1\}$ and satisfies $\norma{\eta}=p-2t$, thus $m_{\pi_{\omega_p}}(\eta)=\frac{n-p+1}{n-p+t+1}\binom{n-p+2t}{t}$ by \eqref{eqCn:multip(p)}. It is a simple matter to check that \begin{equation}\label{eq:card(P)} \# \mathcal P_{t,\beta,\alpha}^{(p)} = 2^{p-2t-\beta} \binom{n-\ell}{\beta}\binom{\beta}{\alpha} \binom{\ell}{p-2t-\beta}. 
\end{equation} From \eqref{eq:multiptensor}, since the triple union above is disjoint, we obtain that \begin{align*} m_{\sigma_{k,p}}(\mu) &= \sum_{t=0}^{\lfloor {p}/{2}\rfloor}\;\sum_{\beta=0}^{p-2t}\; \sum_{\alpha=0}^{\beta} \; \sum_{\eta\in \mathcal P_{t,\beta,\alpha}^{(p)}} m_{\pi_{k\varepsilon_1}}(\mu-\eta) \;m_{\pi_{\omega_p}}(\eta) . \end{align*} One has that $\norma{\mu-\eta} = (k+p-2r) +(\beta-\alpha)-\alpha + (p-2t-\beta) = k-2(r+t+\alpha-p)$ for every $\eta \in \mathcal P_{t,\beta,\alpha}^{(p)}$. If $r\notin \N_{0}$, \eqref{eqCn:multip(k)} forces $m_{\pi_{k\varepsilon_1}}(\mu-\eta)=0$ for all $\eta \in \mathcal P_{t,\beta,\alpha}^{(p)}$, and consequently $m_{\sigma_{k,p}}(\mu)=0$. Otherwise, \begin{align*} m_{\sigma_{k,p}}(\mu) &= \sum_{t=0}^{\lfloor {p}/{2}\rfloor}\;\sum_{\beta=0}^{p-2t}\; \sum_{\alpha=0}^{\beta} \; \binom{r+t+\alpha-p+n-1}{n-1} \;\frac{n-p+1}{n-p+t+1}\; \binom{n-p+2t}{t} \; \# \mathcal P_{t,\beta,\alpha}^{(p)} \end{align*} by Lemma~\ref{lemCn:extreme}. The proof is complete by \eqref{eq:card(P)}. \end{proof} Theorem~\ref{thmCn:multip(k,p)} follows by substituting the multiplicity formula given in Lemma~\ref{lemCn:multip(sigma_kp)} into \eqref{eq:virtualring(sigma)}. \section{Type D}\label{secDn:multip(k,p)} We now consider type $\tipo D_n$, that is, $\mathfrak g=\so(2n,\C)$ and $G=\SO(2n)$. We assume that $n\geq2$, so the non-simple case $\mathfrak g=\so(4,\C)\simeq \sll(2,\C)\oplus \sll(2,\C)$ is also considered. Since $G$ is not simply connected and has a fundamental group of order $2$, the lattice of $G$-integral weights $P(G)$ is a sublattice of index $2$ of the weight lattice $P(\mathfrak g)$. Consequently, a dominant weight $\lambda$ in $P(\mathfrak g)\smallsetminus P(G)$ corresponds to a representation $\pi_{\lambda}$ of $\Spin(2n)$, which does not descend to a representation of $G=\SO(2n)$.
In this case, for all $k\geq0$ and $1\leq p\leq n-2$, we have that \begin{align}\label{eqDn:pi_kp} \pi_{k,p} &=\pi_{k \omega_1+\omega_{p}},& \pi_{k,n-1} &= \pi_{k\omega_1+\omega_{n-1}+\omega_n},& \pi_{k,n} &= \pi_{k\omega_1+2\omega_{n-1}}\oplus \pi_{k\omega_1+2\omega_{n}}. \end{align} Each of them descends to a representation of $G$ and its multiplicity formula is established in Theorem~\ref{thmDn:multip(k,p)}. The remaining cases, $\pi_{k\omega_1+\omega_{n-1}}$ and $\pi_{k\omega_1+\omega_n}$, are spin representations. Their multiplicity formulas were obtained in \cite[Lem.~4.2]{BoldtLauret-onenormDirac} and are stated in Theorem~\ref{thmDn:multip(spin)}. \begin{theorem}\label{thmDn:multip(k,p)} Let $\mathfrak g=\so(2n,\C)$ and $G=\SO(2n)$ for some $n\geq2$ and let $k\geq0$ and $1\leq p\leq n$ be integers. For $\mu\in P(G)$, if $r(\mu):=(k+p-\norma{\mu})/2$ is a non-negative integer, then \begin{align*} m_{\pi_{k,p}}(\mu) &= \sum_{j=1}^{p} (-1)^{j-1} \sum_{t=0}^{\lfloor\frac{p-j}{2}\rfloor} \binom{n-p+j+2t}{t} \sum_{\beta=0}^{p-j-2t} 2^{p-j-2t-\beta} \binom{n-\contador (\mu)}{\beta} \binom{\contador (\mu)}{p-j-2t-\beta} \\ &\quad \sum_{\alpha=0}^\beta \binom{\beta}{\alpha} \sum_{i=0}^{j-1} \binom{r(\mu)-i-p+\alpha+t+j+n-2}{n-2}, \end{align*} and $m_{\pi_{k,p}}(\mu)=0$ otherwise. Furthermore, $m_{\pi_{k,p}}(\mu)=0$ for every $\mu\in P(\mathfrak g)\smallsetminus P(G)$. \end{theorem} \begin{theorem}\label{thmDn:multip(spin)} Let $\mathfrak g=\so(2n,\C)$ and $G=\SO(2n)$ for some $n\geq2$ and let $k\geq0$ be an integer. Let $\mu\in P(\mathfrak g)\smallsetminus P(G)$.
Write $r(\mu)= k+\frac{n}{2}- \norma{\mu}$, then \begin{align*} m_{\pi_{k\omega_1+\omega_{n}}}(\mu) &= \begin{cases} \binom{r(\mu)+n-2}{n-2} &\text{ if }r(\mu)\geq0 \text{ and } \op{neg}(\mu)\equiv r(\mu)\pmod 2, \\ 0&\text{ otherwise}, \end{cases} \\ m_{\pi_{k\omega_1+\omega_{n-1}}}(\mu) &= \begin{cases} \binom{r(\mu)+n-2}{n-2} &\text{ if }r(\mu)\geq0 \text{ and } \op{neg}(\mu)\equiv r(\mu)+1\pmod 2, \\ 0&\text{ otherwise}, \end{cases} \end{align*} where $\op{neg}(\mu)$ stands for the number of negative entries of $\mu$. Furthermore, $m_{\pi_{k\omega_1+\omega_{n-1}}}(\mu) = m_{\pi_{k\omega_1+\omega_{n}}}(\mu) =0$ for every $\mu\in P(G)$. \end{theorem} The proof of Theorem~\ref{thmDn:multip(k,p)} will follow the steps from Section~\ref{sec:strategy}. Let us first set the necessary elements introduced in Notation~\ref{notacion}. Define $\mathfrak h= \left\{ \diag\left( \left[\begin{smallmatrix}0&\theta_1\\ -\theta_1&0\end{smallmatrix}\right] , \dots, \left[\begin{smallmatrix}0&\theta_n\\ -\theta_n&0\end{smallmatrix}\right] \right): \theta_i\in\C \;\forall\,i \right\} $ and $ \varepsilon_i\big(\diag\left( \left[\begin{smallmatrix}0&\theta_1\\ -\theta_1&0\end{smallmatrix}\right] , \dots, \left[\begin{smallmatrix}0&\theta_n\\ -\theta_n&0\end{smallmatrix}\right] \right)\big)=\theta_i $ for each $1\leq i\leq n$. Thus $\Sigma^+(\mathfrak g,\mathfrak h)=\{\varepsilon_i\pm\varepsilon_j: i<j\}$, \begin{align*} P(\mathfrak g) &= \{\textstyle \sum_i a_i\varepsilon_i: a_i\in\Z\,\forall i, \text{ or } a_i-1/2\in\Z\,\forall i\},& P(G)&=\Z\varepsilon_1\oplus\dots\oplus\Z\varepsilon_{n}, \\ P^{{+}{+}}(\mathfrak g) &=\left\{\textstyle\sum_{i}a_i\varepsilon_i \in P(\mathfrak g) :a_1\geq \dots\geq a_{n-1}\geq |a_n|\right\},& P^{{+}{+}}(G)&= P^{{+}{+}}(\mathfrak g)\cap P(G). \end{align*} It is now clear that $P(G)$ has index $2$ in $P(\mathfrak g)$. The multiplicity formulas in type $\tipo D_n$ for the extreme representations in Step~2 are already determined. 
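They are moreover easy to confirm by direct enumeration. The following sketch (Python, standard library only; helper names are ours) tallies the weights of $\bigwedge^p(\C^{2n})$ using a weight basis $v_1,\dots,v_{2n}$ of the standard representation, where $v_j$ has weight $\varepsilon_j$ for $j\leq n$ and $-\varepsilon_{j-n}$ for $j>n$, and compares the result with the closed form $\binom{n-p+2r}{r}$ of \eqref{eqDn:multip(p)}:

```python
from collections import Counter
from itertools import combinations
from math import comb

def wedge_weight_multiplicities(n, p):
    # Weights of the standard representation of so(2n, C): +e_j for the
    # first n basis vectors and -e_j for the last n (0-based indices).
    weights = [tuple(int(i == j) for i in range(n)) for j in range(n)]
    weights += [tuple(-int(i == j) for i in range(n)) for j in range(n)]
    mults = Counter()
    for subset in combinations(range(2 * n), p):
        mu = tuple(sum(weights[j][i] for j in subset) for i in range(n))
        mults[mu] += 1
    return mults

def closed_form(n, p, mu):
    # C(n-p+2r, r) when all entries of mu lie in {-1, 0, 1} and
    # r = (p - |mu|_1)/2 is a non-negative integer; 0 otherwise.
    if any(abs(a) > 1 for a in mu):
        return 0
    twice_r = p - sum(abs(a) for a in mu)
    if twice_r < 0 or twice_r % 2 == 1:
        return 0
    r = twice_r // 2
    return comb(n - p + 2 * r, r)
```

For $n=3$ and $1\leq p\leq 3$, the enumeration and the closed form agree on every candidate weight.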
A proof can be found in \cite[Lem.~3.2]{LMR-onenorm}. \begin{lemma}\label{lemDn:extremereps} Let $n\geq2$, $\mathfrak g=\so(2n,\C)$, $G=\SO(2n)$, $k\geq0$ and $1\leq p\leq n$. For $\mu=\sum_{j=1}^n a_j\varepsilon_j\in P(G)$, we have that \begin{align} m_{\pi_{k\omega_1}}(\mu) = &\;m_{\pi_{k\varepsilon_1}}(\mu) = \begin{cases} \binom{r(\mu)+n-2}{n-2} & \text{ if }\, r(\mu):=\frac{k-\norma{\mu}}{2} \in\N_0,\\ 0 & \text{ otherwise,} \end{cases} \label{eqDn:multip(k)} \\ m_{\pi_{\widetilde \omega_p}}(\mu) =& \begin{cases} \binom{n-p+2r(\mu)}{r(\mu)} & \text{if }\, r(\mu):=\frac{p-\norma{\mu}}{2}\in \N_{0} \text{ and } |a_j|\leq1\;\forall\,j,\\ 0&\text{otherwise.} \end{cases} \label{eqDn:multip(p)} \end{align} \end{lemma} \begin{lemma}\label{lemDn:multip(sigma_kp)} Let $n\geq2$, $\mathfrak g=\so(2n,\C)$, $G=\SO(2n)$, $k\geq0$, $1\leq p\leq n-1$, and $\mu\in P(G)$. Write $r(\mu)=(k+p-\norma{\mu})/2$. If $r(\mu)$ is a non-negative integer, then \begin{align*} m_{\sigma_{k,p}}(\mu) = &\sum_{t=0}^{\lfloor{p}/{2}\rfloor} \binom{n-p+2t}{t}\sum_{\beta=0}^{p-2t} 2^{p-2t-\beta} \binom{n-\contador (\mu)}{\beta} \binom{\contador (\mu)}{p-2t-\beta}\\ &\qquad \sum_{\alpha=0}^\beta \binom{\beta}{\alpha} \binom{r(\mu) -p+\alpha+t+n-2}{n-2}, \end{align*} and $m_{\sigma_{k,p}}(\mu)=0$ otherwise. \end{lemma} \begin{proof} We omit several details since the proof is very similar to that of Lemma~\ref{lemCn:multip(sigma_kp)}. Write $r=(k+p-\norma{\mu})/2$ and $\ell=\contador (\mu)$. We assume that $\mu$ is dominant. Lemma~\ref{lemDn:extremereps} implies that the set of weights of $\pi_{\widetilde \omega_p}$ is $ \mathcal P(\pi_{\widetilde \omega_p}) := \bigcup_{t=0}^{\lfloor {p}/{2}\rfloor} \;\bigcup_{\beta=0}^{p-2t} \;\bigcup_{\alpha=0}^{\beta} \;\mathcal P_{t,\beta,\alpha}^{(p)}, $ with $\mathcal P_{t,\beta,\alpha}^{(p)}$ as in \eqref{eq:calP}. One has that $\norma{\mu-\eta} = k-2(r+t+\alpha-p)$ for any $\eta\in \mathcal P_{t,\beta,\alpha}^{(p)}$.
Hence, \eqref{eq:multiptensor} and Lemma~\ref{lemDn:extremereps} imply $m_{\sigma_{k,p}}(\mu)=0$ if $r\notin\N_0$ and \begin{align*} m_{\sigma_{k,p}}(\mu) = &\sum_{t=0}^{\lfloor {p}/{2}\rfloor}\;\sum_{\beta=0}^{p-2t}\; \sum_{\alpha=0}^{\beta} \; \binom{r+t+\alpha-p+n-2}{n-2} \;\binom{n-p+2t}{t} \; \# \mathcal P_{t,\beta,\alpha}^{(p)} \end{align*} otherwise. The proof follows by \eqref{eq:card(P)}. \end{proof} Theorem~\ref{thmDn:multip(k,p)} then follows by substituting in \eqref{eq:virtualring(sigma)} the multiplicity formula in Lemma~\ref{lemDn:multip(sigma_kp)}. \begin{remark} By Definition~\ref{def:pi_kp}, $\pi_{k,n}$ in type $\tipo D_n$ is the only case where $\pi_{k,p}$ is not irreducible. We have that $\pi_{k,n}= \pi_{k\omega_1+\widetilde \omega_{n}} \oplus \pi_{k\omega_1+\widetilde \omega_{n}-2\varepsilon_n} = \pi_{k\omega_1+2\omega_{n-1}} \oplus \pi_{k\omega_1+2\omega_{n}}$ for every $k\geq0$. One can obtain the corresponding multiplicity formula for each of these irreducible constituents from Theorem~\ref{thmDn:multip(k,p)} by proving the following facts. If $\mu\in P(G)$ satisfies $\norma{\mu}=k+n$, then either $m_{\pi_{k\omega_1+2\omega_n}}(\mu) = m_{\pi_{k,n}}(\mu)$ and $m_{\pi_{k\omega_1+2\omega_{n-1}}}(\mu) = 0$, or $m_{\pi_{k\omega_1+2\omega_n}}(\mu) = 0$ and $m_{\pi_{k\omega_1+2\omega_{n-1}}}(\mu) = m_{\pi_{k,n}}(\mu)$, according as $\mu$ has an even or odd number of negative entries respectively. Furthermore, if $\mu\in P(G)$ satisfies $\norma{\mu} <k+n$, then $m_{\pi_{k\omega_1+2\omega_n}}(\mu) = m_{\pi_{k\omega_1+2\omega_{n-1}}}(\mu) = {m_{\pi_{k,n}}(\mu)}/{2}$. \end{remark} \section{Type B}\label{secBn:multip(k,p)} We now consider $\mathfrak g=\so(2n+1,\C)$ and $G=\SO(2n+1)$, so $\mathfrak g$ is of type $\tipo B_n$. The same observation made at the beginning of Section~\ref{secDn:multip(k,p)} is valid in this case.
Namely, a weight in $P^{{+}{+}}(\mathfrak g) \smallsetminus P^{{+}{+}}(G)$ induces an irreducible representation of $\Spin(2n+1)$ which does not descend to $G$. For any $k\geq0$ and $1\leq p\leq n-1$, we have that \begin{align} \pi_{k,p} &=\pi_{k \omega_1+\omega_{p}},& \pi_{k,n} &=\pi_{k\omega_1+2\omega_{n}}. \end{align} All of them descend to representations of $G$. The corresponding multiplicity formula is in Theorem~\ref{thmBn:multip(k,p)} and the remaining case, $\pi_{k\omega_1+\omega_n}$ for $k\geq0$, is considered in Theorem~\ref{thmBn:multip(spin)}. \begin{theorem}\label{thmBn:multip(k,p)} Let $\mathfrak g=\so(2n+1,\C)$ and $G=\SO(2n+1)$ for some $n\geq2$ and let $k\geq0$ and $1\leq p\leq n$ be integers. For $\mu\in P(G)$, write $r(\mu)=k+p-\norma{\mu}$, then \begin{align*} m_{\pi_{k,p}}(\mu) = &\sum_{j=1}^{p} (-1)^{j-1} \sum_{t=0}^{\lfloor\frac{p-j}{2}\rfloor} \binom{n-p+j+2t}{t} \\ &\qquad \sum_{\beta=0}^{p-j-2t} 2^{p-j-2t-\beta} \binom{n-\contador (\mu)}{\beta} \binom{\contador (\mu)}{p-j-2t-\beta} \\ &\qquad \sum_{\alpha=0}^\beta \binom{\beta}{\alpha} \sum_{i=0}^{j-1} \binom{\lfloor\frac{r(\mu)}{2}\rfloor-i-p+j+\alpha+t+n-1}{n-1}\\ &+\sum_{j=1}^{p-1} (-1)^{j-1} \sum_{t=0}^{\lfloor\frac{p-j-1}{2}\rfloor} \binom{n-p+j+2t+1}{t} \\ &\qquad \sum_{\beta=0}^{p-j-2t-1} 2^{p-j-2t-\beta-1} \binom{n-\contador (\mu)}{\beta} \binom{\contador (\mu)}{p-j-2t-\beta-1} \\ &\qquad \sum_{\alpha=0}^\beta \binom{\beta}{\alpha} \sum_{i=0}^{j-1} \binom{\lfloor\frac{r(\mu)+1}{2}\rfloor -i-p+j+\alpha+t+n-1}{n-1}. \end{align*} Furthermore, $m_{\pi_{k,p}}(\mu)=0$ for all $\mu\in P(\mathfrak g)\smallsetminus P(G)$. \end{theorem} \begin{remark} Notice that, in Theorem~\ref{thmBn:multip(k,p)}, $m_{\pi_{k,p}}(\mu)=0$ if $r(\mu)<0$ because of the convention $\binom{b}{a}=0$ if $b<a$. \end{remark} We will omit most of the details since this case is very similar to the previous ones, especially type $\tipo D_n$.
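Two binomial identities will be used in the proofs below: $\sum_{s=0}^{r}\binom{s+n-1}{n-1}=\binom{r+n}{n}$ and $\sum_{j=0}^{r}\binom{\lfloor\frac{r-j}{2}\rfloor+n-1}{n-1}\binom{n}{j}=\binom{r+n-1}{n-1}+\binom{r+n-2}{n-1}$; both follow by comparing coefficients of generating functions, and both are quickly confirmed numerically. A sketch (Python, standard library only; helper names are ours):

```python
from math import comb

def partial_sum(n, r):
    # Left-hand side of the first identity: a sum of stars-and-bars counts.
    return sum(comb(s + n - 1, n - 1) for s in range(r + 1))

def spin_convolution(n, r):
    # Left-hand side of the second identity; comb(n, j) vanishes for j > n.
    return sum(comb((r - j) // 2 + n - 1, n - 1) * comb(n, j)
               for j in range(r + 1))
```

Checking a range of small $n$ and $r$ against the right-hand sides leaves little doubt before the formal generating-function argument is invoked.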
According to Notation~\ref{notacion}, we set $\mathfrak h= \left\{ \diag\left( \left[\begin{smallmatrix}0&\theta_1\\ -\theta_1&0\end{smallmatrix}\right] , \dots, \left[\begin{smallmatrix}0&\theta_n\\ -\theta_n&0\end{smallmatrix}\right],0 \right): \theta_i\in\C \;\forall\,i \right\}$, $ \varepsilon_i\big(\diag\left( \left[\begin{smallmatrix}0&\theta_1\\ -\theta_1&0\end{smallmatrix}\right] , \dots, \left[\begin{smallmatrix}0&\theta_n\\ -\theta_n&0\end{smallmatrix}\right],0 \right)\big)=\theta_i $ for each $1\leq i\leq n$, $\Sigma^+(\mathfrak g,\mathfrak h)=\{\varepsilon_i\pm\varepsilon_j: i<j\}\cup\{\varepsilon_i: 1\leq i\leq n\}$, \begin{align*} P(\mathfrak g) &= \{\textstyle \sum_i a_i\varepsilon_i: a_i\in\Z\,\forall i, \text{ or } a_i-1/2\in\Z\,\forall i\},& P(G)&=\Z\varepsilon_1\oplus\dots\oplus\Z\varepsilon_{n}, \\ P^{{+}{+}}(\mathfrak g) &=\left\{\textstyle\sum_{i}a_i\varepsilon_i \in P(\mathfrak g) :a_1\geq a_{2}\geq \dots\geq a_{n}\geq0\right\},& P^{{+}{+}}(G)&= P^{{+}{+}}(\mathfrak g)\cap P(G). \end{align*} It is well known that (see \cite[Exercises~IV.10 and V.8]{Knapp-book-beyond}) \begin{align}\label{eqBn:extremereps} \op{Sym}^k(\C^{2n+1}) &\simeq \pi_{k\omega_1}\oplus \op{Sym}^{k-2}(\C^{2n+1}),& \pi_{\widetilde \omega_p}& \simeq \textstyle \bigwedge^p(\C^{2n+1}), \end{align} where $\C^{2n+1}$ denotes the standard representation of $\mathfrak g$. Actually, $\pi_{k\omega_1}$ can be realized inside $\op{Sym}^k(\C^{2n+1})$ as the subspace of harmonic homogeneous polynomials of degree $k$. \begin{lemma}\label{lemBn:extremereps} Let $n\geq2$, $\mathfrak g=\so(2n+1,\C)$, $G=\SO(2n+1)$, $k\geq0$ and $1\leq p\leq n$.
For $\mu=\sum_{j=1}^n a_j\varepsilon_j\in P(G)$, we have that \begin{align} m_{\pi_{k\omega_1}}(\mu) = &\;m_{\pi_{k\varepsilon_1}}(\mu) = \tbinom{r(\mu)+n-1}{n-1} \quad \text{ where } r(\mu)=\lfloor \tfrac{k-\norma{\mu}}{2}\rfloor, \label{eqBn:multip(k)} \\ m_{\pi_{\widetilde \omega_p}}(\mu) =& \begin{cases} \binom{n-p+r(\mu)}{\lfloor {r(\mu)}/{2}\rfloor} & \text{if }\, |a_j|\leq1\;\forall\,j, \\ 0&\text{otherwise,} \end{cases}\qquad\text{where $r(\mu)=p-\norma{\mu}$}. \label{eqBn:multip(p)} \end{align} \end{lemma} \begin{proof} Let $\mathcal P_k$ be the space of complex homogeneous polynomials of degree $k$ in the variables $x_1,\dots,x_{2n+1}$. Set $f_j=x_{2j-1}+\mi x_{2j}$ and $g_{j}= x_{2j-1}-\mi x_{2j}$ for $1\leq j\leq n$. One can check that the polynomials $ f_1^{k_1}\dots f_n^{k_n} g_1^{l_1}\dots g_{n}^{l_{n}}x_{2n+1}^{k_0} $ with $k_0,\dots,k_n,l_1,\dots,l_n$ non-negative integers satisfying that $\sum_{j=0}^{n} k_j+\sum_{j=1}^{n} l_j=k$ form a basis of $\mathcal P_k$ given by weight vectors, each of them of weight $\mu=\sum_{j=1}^n (k_j-l_{j})\varepsilon_j$. Notice that the number $k_0$ does not contribute to $\mu$. Consequently, $m_{\mathcal P_k}(\mu)$ for $\mu=\sum_{j=1}^n a_j\varepsilon_j$ is the number of tuples $(k_0,\dots,k_{n}, l_1,\dots,l_{n})\in \N_0^{2n+1}$ satisfying that $a_j=k_j-l_{j}$ for all $1\leq j\leq n$ and \begin{equation}\label{eqBn:conditionweightP_k} \sum_{j=0}^{n} k_j+\sum_{j=1}^{n} l_j=k. \end{equation} Note that \eqref{eqBn:conditionweightP_k} implies $k-\norma{\mu}-k_0=2s$ for some integer $s\geq0$. We fix an integer $s$ satisfying $0\leq s\leq r:=\lfloor (k-\norma{\mu})/2\rfloor $. Set $k_0=k-\norma{\mu}-2s\geq0$. As in the proof of Lemma~\ref{lemCn:extreme}, the number of $(k_1,\dots,k_n,l_1,\dots,l_n)\in \N_0^{2n}$ satisfying that $a_j=k_j-l_j$ for all $1\leq j\leq n$ and \eqref{eqBn:conditionweightP_k} is equal to $\binom{s+n-1}{n-1}$.
Hence, \begin{equation*} m_{\mathcal P_k}(\mu) = \sum_{s=0}^{r} \binom{s+n-1}{n-1}= \binom{r+n}{n}. \end{equation*} The second equality is well known. It may be proven by showing that both sides are the $r$-th term of the generating function $(1-z)^{-(n+1)}$. From \eqref{eqBn:extremereps} we conclude that $m_{\pi_{k\varepsilon_1}}(\mu) = m_{{\mathcal P}_k}(\mu) - m_{{\mathcal P}_{k-2}}(\mu) = \binom{r+n}{n}- \binom{r-1+n}{n} = \binom{r+n-1}{n-1}$. We have that $\pi_{\widetilde\omega_p}\simeq \bigwedge^p(\C^{2n+1})$ by \eqref{eqBn:extremereps}. By setting $v_j=e_{2j-1}-\mi e_{2j}$, $v_{j+n}=e_{2j-1}+\mi e_{2j}$ and $v_{2n+1}=e_{2n+1}$, one obtains that the vectors $w_I:=v_{i_1}\wedge \dots\wedge v_{i_p}$ for $I=\{i_1,\dots,i_p\}$ satisfying $1\leq i_1<\dots<i_p\leq 2n+1$, form a basis of $\bigwedge^p(\C^{2n+1})$. Furthermore, $w_I$ is a weight vector of weight $\mu=\sum_{j=1}^n a_j\varepsilon_j$ given by \eqref{eq:weight_exteriorCn}. Note that whether or not $2n+1$ belongs to $I$ has no influence on $\mu$. Hence, $\mu=\sum_j a_j\varepsilon_j$ is a weight of $\bigwedge^p(\C^{2n+1})$ if and only if $|a_j|\leq 1$ for all $j$ and $p-\norma{\mu}\geq0$. Proceeding as in Lemma~\ref{lemCn:extreme}, by writing $s=\lfloor \frac{p-\norma{\mu}}{2} \rfloor\geq0$, the multiplicity of $\mu$ is $\binom{n-p+2s}{s}$ if $p-\norma{\mu}$ is even and $\binom{n-p+2s+1}{s}$ if $p-\norma{\mu}$ is odd. \end{proof} \begin{theorem}\label{thmBn:multip(spin)} Let $\mathfrak g=\so(2n+1,\C)$ and $G=\SO(2n+1)$ for some $n\geq2$ and let $k\geq0$ be an integer. Let $\mu\in P(\mathfrak g)\smallsetminus P(G)$. Write $r(\mu)=k+\frac{n}{2}-\norma{\mu}$, then \begin{equation}\label{eqBn:multip(spin)} m_{\pi_{k\omega_1+\omega_n}}(\mu) =\binom{r(\mu)+n-1}{n-1}. \end{equation} Furthermore, $m_{\pi_{k\omega_1+\omega_n}}(\mu)=0$ for all $\mu\in P(G)$. \end{theorem} \begin{proof} This proof is very similar to \cite[Lem.~4.2]{BoldtLauret-onenormDirac}.
The assertion $m_{\pi_{k\omega_1+\omega_n}}(\mu)=0$ for every $\mu\in P(G)$ is clear since any weight of $\pi_{k\omega_1+\omega_n}$ is equal to the highest weight $k\omega_1+\omega_n$ minus a sum of positive roots, which clearly lies in $P(\mathfrak g)\smallsetminus P(G)$. Let $\mu\in P(\mathfrak g)\smallsetminus P(G)$. We may assume that $\mu$ is dominant, thus $\mu=\frac{1}{2}\sum_{i=1}^n a_i\varepsilon_i$ with $a_1\geq \dots \geq a_n \geq1$ odd integers. One has that \begin{align}\label{eq:fusionrule_spin} \pi_{k \omega_1}\otimes \pi_{\omega_n} \simeq \pi_{k \omega_1+\omega_n} \oplus \pi_{(k-1) \omega_1+\omega_n} \end{align} for any $k\geq1$. Indeed, it follows immediately by applying the formula in \cite[Exercise~V.19]{Knapp-book-beyond} since in its sum over the weights of $\pi_{\omega_n}$, the only non-zero terms are attained at the weights $\omega_n$ and $\omega_n-\omega_1$. It is well known that the set of weights of $\pi_{\omega_n}$ is $\mathcal{P}(\pi_{\omega_n}) :=\{ \frac{1}{2}\sum_{i=1}^n b_i\varepsilon_i: |b_i|=1\}$ and $m_{\pi_{\omega_n}}(\nu)=1$ for all $\nu\in \mathcal{P}(\pi_{\omega_n})$ (see for instance \cite[Exercise V.35]{Knapp-book-beyond}). We now proceed to prove \eqref{eqBn:multip(spin)} by induction on $k$. It is clear for $k=0$ by the previous paragraph. Suppose that it holds for $k-1$. By this assumption and \eqref{eq:fusionrule_spin}, we obtain that \begin{equation}\label{eqBn:multip(tensorspin)} m_{\pi_{k\omega_1+\omega_n}}(\mu)= m_{\pi_{k\omega_1}\otimes \pi_{\omega_n}}(\mu)-m_{\pi_{(k-1)\omega_1+\omega_n}}(\mu) = m_{\pi_{k\omega_1}\otimes \pi_{\omega_n}}(\mu)- \binom{r+n-2}{n-1}, \end{equation} where $r=k+\frac{n}{2}-\norma{\mu}$. It only remains to prove that $m_{\pi_{k\omega_1}\otimes \pi_{\omega_n}}(\mu)=\binom{r+n-1}{n-1}+\binom{r+n-2}{n-1}$. Similarly to \eqref{eq:multiptensor}, we have that $m_{\pi_{k\omega_1}\otimes \pi_{\omega_n}}(\mu)=\sum_{\eta\in\mathcal{P}(\pi_{\omega_n})} m_{\pi_{k\omega_1}}(\mu-\eta)$.
Since $\mu$ is dominant, for any $\eta=\frac{1}{2}\sum_{i=1}^n b_i\varepsilon_i\in\mathcal{P}(\pi_{\omega_n})$, it follows that $$ \norma{\mu-\eta}= \frac{1}{2}\sum_{i=1}^n (a_i-b_i) = \norma{\mu}+\frac{n}{2}-\ell_1(\eta)= k-r+n-\ell_1(\eta), $$ where $\ell_1(\eta)=\#\{1\leq i\leq n: b_i=1\}$. By Lemma \ref{lemBn:extremereps}, $m_{\pi_{k \omega_1}}(\mu-\eta)\neq0$ only if $r +\ell_1(\eta)-n\geq0$. For each integer $\ell_1$ satisfying $n-r\leq\ell_1\leq n$, there are $\binom{n}{\ell_1}$ weights $\eta\in \mathcal{P}(\pi_{\omega_n})$ such that $\ell_1(\eta)=\ell_1$. On account of the above remarks, \begin{align}\label{eq:multiptensor_spin} m_{\pi_{k\omega_1}\otimes \pi_{\omega_n}}(\mu)=& \sum_{\ell_1=n-r}^{n} \binom{\lfloor \frac{r+\ell_1-n}{2}\rfloor +n-1}{n-1} \binom{n}{\ell_1}= \sum_{j=0}^{r} \binom{\lfloor \frac{r-j}{2}\rfloor +n-1}{n-1} \binom{n}{j}. \end{align} We claim that the last term in \eqref{eq:multiptensor_spin} equals $\binom{r+n-1}{n-1}+\binom{r+n-2}{n-1}$. Indeed, a simple verification shows that both numbers are the $r$-th term of the generating function $\frac{1+z}{(1-z)^n}$. From \eqref{eqBn:multip(tensorspin)} and \eqref{eq:multiptensor_spin} we conclude that $m_{\pi_{k\omega_1+\omega_n}}(\mu)= \binom{r+n-1}{n-1}$ as asserted. \end{proof} \begin{lemma}\label{lemBn:multip(sigma_kp)} Let $n\geq2$, $\mathfrak g=\so(2n+1,\C)$, $G=\SO(2n+1)$, $k\geq0$, $1\leq p<n$, and $\mu\in P(G)$. Write $r(\mu)=k+p-\norma{\mu}$. 
Then \begin{align*} m_{\sigma_{k,p}}(\mu) = &\sum_{t=0}^{\lfloor{p}/{2}\rfloor} \binom{n-p+2t}{t}\sum_{\beta=0}^{p-2t} 2^{p-2t-\beta} \binom{n-\contador (\mu)}{\beta} \binom{\contador (\mu)}{p-2t-\beta}\\ &\qquad \sum_{\alpha=0}^\beta \binom{\beta}{\alpha} \binom{\lfloor\frac{r(\mu)}{2}\rfloor-p+\alpha+t+n-1}{n-1}\\ &+\sum_{t=0}^{\lfloor{(p-1)}/{2}\rfloor} \binom{n-p+1+2t}{t}\sum_{\beta=0}^{p-1-2t} 2^{p-1-2t-\beta} \binom{n-\contador (\mu)}{\beta} \binom{\contador (\mu)}{p-1-2t-\beta}\\ &\qquad \sum_{\alpha=0}^\beta \binom{\beta}{\alpha} \binom{\lfloor\frac{r(\mu)+1}{2}\rfloor-p+\alpha+t+n-1}{n-1}. \end{align*} \end{lemma} \begin{proof} Write $r=k+p-\norma{\mu}$ and $\ell=\contador (\mu)$ and assume $\mu$ dominant. Define $\mathcal P_{t,\beta,\alpha}^{(p)}$ as in \eqref{eq:calP}. From Lemma~\ref{lemBn:extremereps}, we deduce that the set of weights of $\pi_{\widetilde \omega_p}$ is $$ \mathcal P(\pi_{\widetilde \omega_p}) := \big(\bigcup_{t=0}^{\lfloor {p}/{2}\rfloor} \;\bigcup_{\beta=0}^{p-2t} \;\bigcup_{\alpha=0}^{\beta} \;\mathcal P_{t,\beta,\alpha}^{(p)} \big) \cup \big(\bigcup_{t=0}^{\lfloor {p-1}/{2}\rfloor} \;\bigcup_{\beta=0}^{p-1-2t} \;\bigcup_{\alpha=0}^{\beta} \;\mathcal P_{t,\beta,\alpha}^{(p-1)}\big). $$ This fact and \eqref{eq:multiptensor} give \begin{align*} m_{\sigma_{k,p}}(\mu) = &\sum_{t=0}^{\lfloor {p}/{2}\rfloor}\;\sum_{\beta=0}^{p-2t}\; \sum_{\alpha=0}^{\beta} \; \binom{\lfloor \frac{r}{2}\rfloor+t+\alpha-p+n-1}{n-1} \;\binom{n-p+2t}{t} \; \# \mathcal P_{t,\beta,\alpha}^{(p)} \\ & +\sum_{t=0}^{\lfloor {(p-1)}/{2}\rfloor}\;\sum_{\beta=0}^{p-1-2t}\; \sum_{\alpha=0}^{\beta} \; \binom{\lfloor \frac{r-1}{2}\rfloor+t+\alpha-p+n}{n-1} \;\binom{n-p+1+2t}{t} \; \# \mathcal P_{t,\beta,\alpha}^{(p-1)}, \end{align*} since $\norma{\mu-\eta}= k-r-2(t+\alpha-p)$ for all $\eta\in\mathcal P_{t,\beta,\alpha}^{(p)}$ and $\norma{\mu-\eta}= k-r-2(t+\alpha-p)-1$ for all $\eta\in\mathcal P_{t,\beta,\alpha}^{(p-1)}$. The proof follows by \eqref{eq:card(P)}. 
\end{proof} Lemmas~\ref{lem:step1} and \ref{lemBn:multip(sigma_kp)} complete the proof of Theorem~\ref{thmBn:multip(k,p)}. \section{Type A}\label{secAn:multip(k,p)} Type $\tipo A_n$ is the simplest case in which to compute the weight multiplicity formula for $\pi_{k,p}$. Actually, it follows immediately from standard calculations using Young diagrams. We include this formula to complete the list of all classical simple Lie algebras. We consider in $\mathfrak g=\sll(n+1,\C)$ the Cartan subalgebra $ \mathfrak h =\{\diag\big(\theta_1,\dots,\theta_{n+1}\big) : \theta_i\in\C\;\forall\, i,\; \sum_{i=1}^{n+1}\theta_i=0\}. $ We set $\varepsilon_i\big(\diag(\theta_1,\dots,\theta_{n+1})\big)= \theta_i$ for each $1\leq i\leq n+1$. We will use the conventions of \cite[Lecture~15]{FultonHarris-book}. Thus \begin{equation*} \mathfrak h^* = \bigoplus_{i=1}^{n+1} \C\varepsilon_i / \langle \textstyle\sum\limits_{i=1}^{n+1}\varepsilon_i=0 \rangle, \end{equation*} the set of positive roots is $\Sigma^+(\mathfrak g,\mathfrak h)=\{\varepsilon_i-\varepsilon_j: 1\leq i<j\leq n+1\}$, and the weight lattice is $ P(\mathfrak g)=\bigoplus_{i=1}^{n+1} \Z\varepsilon_i / \langle \textstyle\sum\limits_{i=1}^{n+1}\varepsilon_i=0 \rangle. $ By abuse of notation, we use the same letter $\varepsilon_i$ for the image of $\varepsilon_i$ in $\mathfrak h^*$. A weight $\mu=\sum_{i=1}^{n+1}a_i\varepsilon_i$ is dominant if $a_1\geq a_2\geq \dots \geq a_{n+1}$. The representations having highest weights $\lambda=\sum_{i=1}^{n+1} a_i\varepsilon_i$ and $\mu=\sum_{i=1}^{n+1} b_i\varepsilon_i$ are isomorphic if and only if $a_i-b_i$ is constant, independent of $i$. Consequently, we can restrict to those $\lambda=\sum_{i=1}^{n+1} a_i\varepsilon_i$ with $a_{n+1}=0$. Then, $$ P^{++}(\mathfrak g)= \left\{ \textstyle\sum\limits_{i=1}^n a_i\varepsilon_i\in P(\mathfrak g): a_1\geq a_2\geq \dots \geq a_{n}\geq 0 \right\}. $$ The corresponding fundamental weights are given by $\omega_p=\varepsilon_{1} + \dots + \varepsilon_p$ for each $1\leq p\leq n$.
It is well known that, for $\lambda\in P^{++}(\mathfrak g)$ and $\mu$ a weight of $\pi_{\lambda}$, one can assume that $\mu=\sum_{i=1}^{n+1} a_i \varepsilon_i$ with $a_i\in \N_0$ for all $i$ and $\sum_{i=1}^{n+1} a_i=\norma{\lambda}$. \begin{theorem}\label{thmAn:multip(k,p)} Let $\mathfrak g=\sll(n+1,\C)$ for some $n\geq1$ and let $k\geq0$, $1\leq p\leq n$ integers. Let $\mu=\sum_{i=1}^{n+1} a_i \varepsilon_i\in P(\mathfrak g)$ with $a_i\in \N_0$ for all $i$ and $\sum_{i=1}^{n+1} a_i=k+p$. If $a_1+a_2+\dots +a_j\leq k+j$ for all $1\leq j\leq p$, then \begin{align*} m_{\pi_{k\omega_1+\omega_p}}(\mu) &= \binom{n- \contador (\mu)}{p-1}, \end{align*} and $m_{\pi_{k,p}}(\mu)=0$ otherwise. \end{theorem} \begin{proof} The Young diagram corresponding to the representation $\pi_{k \omega_1+\omega_p}$ is the diagram with $p$ rows, all of length $1$ except the first one, which has length $k+1$. It is well known that the multiplicity of the weight $\mu$ in this representation is equal to the number of ways one can fill its Young diagram with $a_1$ $1$'s, $a_2$ $2$'s, $\dots$, $a_{n+1}$ $(n+1)$'s, in such a way that the entries in the first row are non-decreasing and those in the first column are strictly increasing (see for instance \cite[\S15.3]{FultonHarris-book}). Consequently, the multiplicity of $\mu$ is equal to the number of ways of filling the first column. Since the first entry is uniquely determined, one has to choose $p-1$ different numbers for the rest of the entries. Hence, the theorem follows. \end{proof} \section{Concluding remarks}\label{sec:conclusions} For a classical complex Lie algebra $\mathfrak g$, we have given a closed explicit formula for the weight multiplicities of any representation in a $p$-fundamental string, namely, any irreducible representation of $\mathfrak g$ having highest weight $k\omega_1+\omega_p$ for some integers $k\geq0$ and $1\leq p\leq n$.
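Both generating-function identities invoked in the type $\tipo B_n$ proofs above (the first sum is the $r$-th coefficient of $(1-z)^{-(n+1)}$; the second, appearing in \eqref{eq:multiptensor_spin}, is the $r$-th coefficient of $\frac{1+z}{(1-z)^n}$) are easy to confirm numerically. A minimal sketch, with function names of our own choosing:

```python
from math import comb

# r-th coefficient of (1-z)^{-(n+1)}: sum_{s=0}^{r} C(s+n-1, n-1) = C(r+n, n)
def hockey_stick(r, n):
    return sum(comb(s + n - 1, n - 1) for s in range(r + 1))

# r-th coefficient of (1+z)/(1-z)^n:
# sum_{j=0}^{r} C(floor((r-j)/2)+n-1, n-1) C(n, j) = C(r+n-1, n-1) + C(r+n-2, n-1)
def spin_sum(r, n):
    return sum(comb((r - j) // 2 + n - 1, n - 1) * comb(n, j)
               for j in range(r + 1))

for n in range(2, 8):
    for r in range(12):
        assert hockey_stick(r, n) == comb(r + n, n)
        assert spin_sum(r, n) == comb(r + n - 1, n - 1) + comb(r + n - 2, n - 1)
print("identities verified")
```

Such exhaustive small-range checks do not replace the generating-function proofs, but they are a quick safeguard against transcription errors in the binomial expressions.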
When $\mathfrak g$ is of type $\tipo A_n$, the proof was quite simple and the corresponding formula could probably be established from a more general result. To the best of the authors' knowledge, the obtained expressions of the weight multiplicities for types $\tipo B_n$, $\tipo C_n$ and $\tipo D_n$ are new, except for small values of $n$, probably $n\leq 3$. Although the formulas in Theorems~\ref{thmCn:multip(k,p)}, \ref{thmDn:multip(k,p)} and \ref{thmBn:multip(k,p)} (types $\tipo C_n$, $\tipo D_n$ and $\tipo B_n$ respectively) look complicated and long, they are easily handled in practice. It is important to note that all sums are over (integer) intervals, without including any sum over partitions or permutations. Furthermore, there are only combinatorial numbers in each term. Consequently, it is a simple matter to implement them in a computer program, obtaining a very fast algorithm even when the rank $n$ of the Lie algebra is very large. Moreover, for $p$ and a weight $\mu$ fixed, the formulas become a quasi-polynomial in $k$. This fact was already predicted and follows from the Kostant Multiplicity Formula, as M.~Vergne pointed out to Kumar and Prasad in \cite{KumarPrasad14} (see also \cite{MeinrenkenSjamaar99}, \cite{Bliem10}). For instance, when $\mathfrak g=\so(2n,\C)$ (type $\tipo D_n$), Theorem~\ref{thmDn:multip(k,p)} ensures that \begin{equation} m_{\pi_{k\omega_1}}(\mu) = \begin{cases} \binom{\frac{k-\norma{\mu}}{2}+n-2}{n-2} &\text{if $k\geq\norma{\mu}$ and $k\equiv\norma{\mu} \pmod 2$,} \\ 0 &\text{otherwise.} \end{cases} \end{equation} Consequently, the generating function encoding the numbers $\{m_{\pi_{k\omega_1}}(\mu):k\geq0\}$ is a rational function. Indeed, \begin{equation} \sum_{k\geq0} m_{\pi_{k\omega_1}}(\mu) z^k = \sum_{k\geq0} m_{\pi_{(2k+\norma{\mu})\omega_1}}(\mu) z^{2k+\norma{\mu}} = \frac{z^{\norma{\mu}}}{(1-z^2)^{n-1}}.
\end{equation} From a different point of view, for fixed integers $k$ and $p$, the formulas are quasi-polynomials in the variables $\norma{\mu}$ and $\contador(\mu)$. We end the article with a summary of past (and possible future) applications of multiplicity formulas in spectral geometry. We consider a locally homogeneous space $\Gamma\ba G/K$ with the (induced) standard metric, where $G$ is a compact semisimple Lie group, $K$ is a closed subgroup of $G$ and $\Gamma$ is a finite subgroup of the maximal torus $T$ of $G$. When $G=\SO(2n)$, $K=\SO(2n-1)$ and $\Gamma$ is cyclic acting freely on $G/K\simeq S^{2n-1}$, we obtain a \emph{lens space}. In order to determine explicitly the spectrum of a (natural) differential operator acting on smooth sections of a (natural) vector bundle on $\Gamma\ba G/K$ (e.g.\ Laplace--Beltrami operator, Hodge--Laplace operator on $p$-forms, Dirac operator), one has to calculate ---among other things--- numbers of the form $\dim V_\pi^\Gamma$ for $\pi$ in a subset of the unitary dual $\widehat G$ depending on the differential operator. Since $\Gamma\subset T$, $\dim V_\pi^\Gamma$ can be computed by counting the $\Gamma$-invariant weights in $\pi$ according to their multiplicities, so the problem is reduced to knowing $m_\pi(\mu)$. At the moment, some weight multiplicity formulas have been successfully applied to the problem described above. The multiplicity formula for $\pi_{k\omega_1}$ in type $\tipo D_n$ (Lemma~\ref{lemDn:extremereps}) was used by Miatello, Rossetti and the first named author in \cite{LMR-onenorm} to determine the spectrum of the Laplace--Beltrami operator on a lens space. Furthermore, Corollary~\ref{cor:depending-one-norm-ceros} for type $\tipo D_n$ was shown in the same article (\cite[Lem.~3.3]{LMR-onenorm}), obtaining a characterization of lens spaces $p$-isospectral for all $p$ (i.e.\ their Hodge--Laplace operators on $p$-forms have the same spectra).
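The closed formula for $m_{\pi_{k\omega_1}}(\mu)$ in type $\tipo D_n$ displayed above, together with its rational generating function, can be checked coefficient by coefficient against a direct series expansion. A small sketch (the helper names are ours, not from the article):

```python
from math import comb

def mult_D(k, mu_norm, n):
    """m_{pi_{k omega_1}}(mu) for so(2n, C); depends on mu only through |mu|."""
    if k < mu_norm or (k - mu_norm) % 2 != 0:
        return 0
    return comb((k - mu_norm) // 2 + n - 2, n - 2)

def series(mu_norm, n, K):
    """First K+1 Taylor coefficients of z^{|mu|} / (1 - z^2)^(n-1)."""
    geom = [1 - i % 2 for i in range(K + 1)]       # 1/(1-z^2) = 1 + z^2 + z^4 + ...
    coeffs = [1] + [0] * K
    for _ in range(n - 1):                         # multiply n-1 copies, mod z^{K+1}
        coeffs = [sum(coeffs[i] * geom[j - i] for i in range(j + 1))
                  for j in range(K + 1)]
    return ([0] * mu_norm + coeffs)[:K + 1]        # shift by z^{|mu|}

n, K = 5, 24
for mu_norm in range(6):
    assert [mult_D(k, mu_norm, n) for k in range(K + 1)] == series(mu_norm, n, K)
print("generating function verified")
```

The same pattern (evaluate the closed formula, compare with a truncated power series) applies verbatim to the longer formulas of types $\tipo B_n$ and $\tipo C_n$.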
Later, Boldt and the first named author considered in \cite{BoldtLauret-onenormDirac} the Dirac operator on odd-dimensional spin lens spaces. In that work, Theorem~\ref{thmDn:multip(spin)}, namely the multiplicity formula for type $\tipo D_n$ of the spin representations $\pi_{k\omega_1+\omega_{n-1}}$ and $\pi_{k \omega_1+\omega_{n}}$, was obtained and used. As a continuation of the study begun in \cite{LMR-onenorm}, Theorem~\ref{thmDn:multip(k,p)} was applied in the preprint \cite{Lauret-pspectralens} to determine explicitly every $p$-spectrum of a lens space. Here, as usual, $p$-spectrum stands for the spectrum of the Hodge--Laplace operator acting on smooth $p$-forms. The article \cite{Lauret-pspectralens} was the motivation to write the present paper. The remaining formulas in the article may be used with the same goal. Actually, any application of the formulas for type $\tipo D_n$ can be translated to an analogous application for type $\tipo B_{n-1}$, working in spaces covered by $S^{2n-2}$ in place of $S^{2n-1}$ (cf.\ \cite[\S4]{IkedaTaniguchi78}). This was partially done in \cite{Lauret-spec0cyclic}, by applying Lemma~\ref{lemBn:extremereps}. The result extends \cite{LMR-onenorm} (for the Laplace--Beltrami operator) to even-dimensional lens orbifolds. A different but feasible application can be done for type $\tipo A_n$. One may consider the complex projective space $P^n(\C)=\SU(n+1)/\op{S}(\U(n)\times\U(1))$. However, more general representations must be used. Indeed, \cite{Lauret-spec0cyclic} considered the Laplace--Beltrami operator, and the representations involved had highest weights $k(\omega_1+\omega_n)$ for $k\geq0$. Theorem~\ref{thmCn:multip(k,p)} (type $\tipo C_n$) does not have an immediate application since the spherical representations of the symmetric space $\Sp(n)/(\Sp(n-1)\times\Sp(1))$ have highest weight of the form $k\omega_2$ for $k\geq0$. Maddox~\cite{Maddox14} obtained a multiplicity formula for these representations.
However, this expression is not explicit enough to be applied to this problem. An exception was the case $n=2$, where \cite{Lauret-spec0cyclic} applied the closed multiplicity formula from \cite{CaglieroTirao04}. It is not known to the authors whether there is a closed subgroup $K$ of $G=\Sp(n)$ such that the spherical representations of $G/K$ are $\pi_{k\omega_1}$ for $k\geq0$, that is, \begin{equation} \{\pi\in \widehat G: V_\pi^K\simeq \op{Hom}_K(V_\pi,\C)\neq0\} = \{\pi_{k\omega_1}:k\geq0\}. \end{equation} In such a case, Theorem~\ref{thmCn:multip(k,p)} could be used. \section*{Acknowledgments} The authors wish to thank the anonymous referee for carefully reading the article and giving them helpful comments. \bibliographystyle{plain}
\section{Introduction} The first-quantized formulation of particle theory was proposed by Feynman `as an alternative to the formulation of second quantization' \cite{feynman50}. Its efficiency in calculations of one-loop boson scattering amplitudes was fully recognized after the formulation of the Bern-Kosower rules \cite{Bern-Kosower91, Bern-Kosower92}. Within the Bern-Kosower approach, particle scattering amplitudes are obtained as infinite tension limits of some string amplitudes. Strassler derived a similar set of rules `from first-quantized field theory' \cite{Strassler92}. The superior organization of the amplitudes, resulting in compact expressions, is an attractive feature of the first-quantized (world-line) approach. It is important, in particular, in calculations of multi-particle amplitudes in gauge theories, where the major difficulties lie in the great number of Feynman diagrams and terms. The five-gluon one-loop amplitude \cite{bdk93} and the four-graviton one-loop amplitude in quantum gravity \cite{bds93} have been calculated by the Bern-Kosower method. Experimentally driven calculations of amplitudes in nonabelian gauge theories do not rely entirely on the world-line formalism. They are based nowadays on a large set of ideas and techniques (for a review see Ref.~\cite{bddk05}). Important results have been obtained recently \cite{csw04,gk04,ggk04,wz04,kosower05, bddk05} on the tree and one-loop amplitudes in nonabelian gauge theories, and, in particular, on the tree amplitudes with external fermions and scalars \cite{gk04,ggk04,wz04}. On the other hand, the world-line methods have been extremely useful in strong field calculations where the background field configuration cannot be treated as a small correction to a `trivial' one. Such calculations, when performed by standard field-theoretical methods, are often difficult even if the number of Feynman diagrams involved is not large.
World-line techniques have been used in calculations of one-loop effective actions \cite{ss93, chd95, rss97, gmca98, shovkovy98, gs99, asz04}, as well as in computations of amplitudes in a strong field \cite{AS96, shai96, ds00, schubert00}. Multi-loop generalizations \cite{Schmidt-Schubert94,Schmidt-Schubert96,dss96,roland-sato96,sato-schmidt98} have been used for calculations of the effective actions in QED \cite{rss97, ks99} and in Yang-Mills theory \cite{sato-schmidt99, ssz00}. The powerful techniques, based on the Bern-Kosower and Strassler rules, have been developed for processes without fermions in the initial and final states and are not directly applicable to scattering amplitudes between states containing fermions. The development of world-line techniques for these amplitudes was slower \cite{McKR93, KK99, kk01}, although various path-integral (world-line) representations for the spinning particle propagator are known \cite{Fradkin65, Fradkin66, Barbashov65, BF70, HT82, Borisov-Kulish82, Polyakov87, Fainberg-Marshakov88, Fainberg-Marshakov88a, Fainberg-Marshakov90, ADJ90, Fradkin-Gitman91, Gitman-Saa93, AFP94}. Two-photon Compton scattering cross-sections \cite{Herold79, BAM86} and the two-photon emission rate \cite{SL99} in a constant magnetic field in $(3+1)$-dimensional space-time have been calculated by summation over the intermediate states. In our opinion, the world-line methods could be more promising, in particular, for generalizations to multi-photon processes. We calculate the amplitude (in the tree approximation) for two-photon emission by a charged relativistic particle in a constant uniform magnetic field in $2+1$-dimensional space-time within the world-line (first-quantized) framework. The Schwinger proper-time integral is used. In the present paper, we use the operator method, although a path-integral formulation is also possible. The work on such a formulation is in progress. The paper is organized as follows.
In Sect.~2 we present the form of the amplitude for the two-photon emission in the constant magnetic field in $2+1$-dimensional space-time, as it arises in quantum field theory. In Sect.~3 the one-fermion space is constructed along the lines of the Schwinger method \cite{Schwinger51}. In Sect.~4 we show that the relevant dynamics is that of a supersymmetric system with one bosonic and one fermionic degree of freedom. In Sect.~5 an expression is derived for the amplitude. In the Appendix the eigenstates of the Dirac hamiltonian are obtained along the lines of the Johnson-Lippmann construction \cite{JL49} in $3+1$ dimensions. \section{The $S$-matrix element} In a constant magnetic field the vacuum is stable and the three-potential $\mathcal{A}$ can be taken to be time-independent. Then the Dirac hamiltonian \begin{equation} \label{hamiltonian0} \hat H=-\boldsymbol{\alpha}\cdot\left(i\nabla+e\boldsymbol{\mathcal{A}}\right) +m\gamma^0+e\mathcal{A}^0 \end{equation} is time-independent. Let us denote by $\phi_N^{(+)}(\mathbf{x})$ and $\phi_N^{(-)}(\mathbf{x})$ (our notation here is close to that of Ref.~\cite{GFS90}) the positive and negative-energy eigenfunctions of the Dirac hamiltonian (\ref{hamiltonian0}), \[ \hat H\phi_N^{(+)}(\mathbf{x})=E_N^{(+)}\phi_N^{(+)}(\mathbf{x}), \qquad \hat H\phi_N^{(-)}(\mathbf{x})=E_N^{(-)}\phi_N^{(-)}(\mathbf{x}), \] where $E_N^{(+)}>0$, $E_N^{(-)}<0$ and $N$ stands for all the quantum numbers specifying the stationary state, except for the energy sign. We assume that the eigenfunctions are normalized, \[ (\phi_M^{(+)}, \,\phi_N^{(+)})=(\phi_M^{(-)}, \,\phi_N^{(-)})=\delta_{MN}, \qquad (\phi,\,\chi)=\int\phi^\dagger(\mathbf{x})\chi(\mathbf{x}) \,d^2 x, \] then a completeness relation holds, \[ \sum_N\left[\phi_N^{(+)}(\mathbf{x})\phi_N^{(+)\dagger}(\mathbf{y}) +\phi_N^{(-)}(\mathbf{x})\phi_N^{(-)\dagger}(\mathbf{y})\right]=\delta^2(\mathbf{x}-\mathbf{y}).
\] Consider, for definiteness, the transition from a positive-energy state $\phi_{N_i}^{(+)}(\mathbf{x})$ to the posi\-tive-energy state $\phi_{N_f}^{(+)}(\mathbf{x})$ with emission of two photons with momenta $\mathbf{k}_1$ and $\mathbf{k}_2$. According to quantum field theory \cite{BD65}, the $S$-matrix element is given by \[ S_{fi}=\frac{ie^2}{4\pi|\mathbf{k}_1||\mathbf{k}_2|}\left[R( \mathbf{k}_1, \mathbf{k}_2)+ R( \mathbf{k}_2, \mathbf{k}_1) \right], \] where \begin{eqnarray} \label{R} && R( \mathbf{k}_1, \mathbf{k}_2)= \int d^3 x\int d^3y \; \bar\phi_{N_f}^{(+)}(\mathbf{x})\frac{e^{i (E_{N_f}^{(+)}+|\mathbf{k}_2|)x^0}}{\sqrt{2\pi}}e^{-i\mathbf{k}_2\cdot\mathbf{x}}\nonumber\\ &&\times\boldsymbol{\varepsilon}(\mathbf{k}_2)\cdot\boldsymbol{\gamma}\,S^{c}(x,y\mid \mathcal{A})\, \boldsymbol{\varepsilon}(\mathbf{k}_1)\cdot\boldsymbol{\gamma}\, \frac{e^{i(|\mathbf{k}_1|-E_{N_i}^{(+)})y^0}}{\sqrt{2\pi}} e^{-i\mathbf{k}_1\cdot\mathbf{y}}\; \phi_{N_i}^{(+)}(\mathbf{y}), \end{eqnarray} $S^{c}(x, y\mid \mathcal{A})$ is the fermion propagator in the external field, \begin{equation} \label{propagator1} \left[\gamma^\mu\left( i\partial-e\mathcal{A}\right)_\mu-m+i\epsilon \right]S^c(x,y\mid \mathcal{A})=-\delta^3(x-y), \end{equation} and $\boldsymbol{\varepsilon}(\mathbf{k})$ is the polarization vector associated with $\mathbf{k}$, \[ \mathbf{k}\cdot\boldsymbol{\varepsilon}(\mathbf{k})=0, \quad \mathbf{k}\wedge\boldsymbol{\varepsilon}(\mathbf{k}) \equiv{k}^1{\varepsilon}^2(\mathbf{k})-{k}^2{\varepsilon}^1(\mathbf{k}) =|\mathbf{k}|. \] As is known, there exist two irreducible two-dimensional representations of the Clifford algebra with three generating elements. The representations can be labelled by $s=\pm 1$ and chosen in such a way that the corresponding generators obey \begin{equation} \label{s} [\gamma^\mu,\,\gamma^\nu]_-=-2is\epsilon^{\mu\nu\lambda}\gamma_\lambda. \end{equation} We omit the representation label $s$ in the sequel. 
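For a concrete check of eq.~(\ref{s}) one may take $\gamma^0=\sigma^3$, $\gamma^1=i\sigma^1$, $\gamma^2=i\sigma^2$ (a particular choice, not fixed in the text), which realizes the representation with $s=+1$; the overall sign flip $\gamma^\mu\to-\gamma^\mu$ then gives $s=-1$. A minimal numerical verification, with metric $\eta=\mathrm{diag}(1,-1,-1)$ and $\epsilon^{012}=1$:

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

eta = np.diag([1.0, -1.0, -1.0])                 # metric, signature (+,-,-)
eps = np.zeros((3, 3, 3))                        # Levi-Civita symbol, eps^{012} = +1
for (i, j, k), sgn in [((0, 1, 2), 1), ((1, 2, 0), 1), ((2, 0, 1), 1),
                       ((0, 2, 1), -1), ((2, 1, 0), -1), ((1, 0, 2), -1)]:
    eps[i, j, k] = sgn

def check(g, s):
    g_low = [eta[m, m] * g[m] for m in range(3)]       # lower the index
    for m in range(3):
        for n in range(3):
            anti = g[m] @ g[n] + g[n] @ g[m]
            assert np.allclose(anti, 2 * eta[m, n] * np.eye(2))   # Clifford relation
            comm = g[m] @ g[n] - g[n] @ g[m]
            rhs = sum(-2j * s * eps[m, n, l] * g_low[l] for l in range(3))
            assert np.allclose(comm, rhs)                          # eq. (s)

gammas = [s3, 1j * s1, 1j * s2]                  # realizes s = +1
check(gammas, +1)
check([-g for g in gammas], -1)                  # the inequivalent representation
print("Clifford and commutator relations hold for s = +1 and s = -1")
```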
\section{The one-fermion space} We represent, following Schwinger \cite{Schwinger51}, the propagator as a matrix element of an operator $S$ acting in a Hilbert space containing the (off-shell) one-fermion states (and no photons). Consider a Hilbert space $\mathcal{H}(X,\mathcal{P})$ and a set of self-adjoint operators $X^\mu$, $\mathcal{P}_\mu$ $(\mu=0,1,2)$ satisfying the commutation relations \[ [\mathcal{P}_\mu, X^\nu]_-=i\delta_\mu^\nu \qquad [X^\mu, X^\nu]_-=0, \qquad [\mathcal{P}_\mu, \mathcal{P}_\nu]_-=0. \] We assume that the associative algebra, generated by those operators, acts irreducibly in $\mathcal{H}(X,\mathcal{P})$ and, moreover, \begin{equation} \label{momenta} \langle x \mid \mathcal{P}_\mu\mid \psi\rangle=i\partial_\mu \langle x \mid \psi\rangle, \qquad \mid\psi\rangle\in \mathcal{H}(X,\mathcal{P}). \end{equation} where $|x\rangle$ are normalized common eigenvectors of $X^\mu$, \begin{equation} \label{Xbasis} X^\mu|x\rangle=x^\mu|x\rangle, \quad \langle x\mid y\rangle=\delta^3(x-y), \quad \int d^3x|x\rangle\langle x|= I. \end{equation} Let $\mathcal{H}(\Gamma)$ be a two-dimensional complex space and $\mid \rho\rangle$ ($\rho=1,2$) be vectors forming an orthonormal basis in $\mathcal{H}(\Gamma)$, \begin{equation} \label{Gbasis} \langle \rho\mid\rho^\prime\rangle=\delta_{\rho\rho^\prime}, \qquad \sum_{\rho=1}^2 \mid\rho\,\rangle\langle\,\rho\mid=I. \end{equation} The operators $\Gamma^\mu$, defined by $\langle \rho\mid \Gamma^\mu \mid\rho^\prime\rangle=\gamma^\mu_{\rho\rho^\prime}$, satisfy \begin{equation} \label{Gammas} [\Gamma^\mu, \Gamma^\nu]_+ =2\eta^{\mu\nu},\qquad [\Gamma^\mu, \Gamma^\nu]_- =-2is\epsilon^{\mu\nu\lambda}\Gamma_\lambda. 
\end{equation} The state vector in the one-particle space $\mathcal{H}=\mathcal{H}(X,\mathcal{P})\otimes\mathcal{H}(\Gamma)$, corresponding to the wave function $(2\pi)^{-1/2}e^{-ip_0x^0}\phi_{N}^{(+)}(\mathbf{x})$ is given by \begin{equation} \mid p_0; E>0, N\rangle=\frac 1 {\sqrt{2\pi}}\int d^3x \sum_{\rho=1}^{2}\mid x, \rho\rangle \, \left[\phi_N^{(+)}\right]_\rho(\mathbf{x})e^{-ip_0 x^0}. \end{equation} It is, generally, an off-shell state since the value of $p^0$ is arbitrary. The propagator is a matrix element of an operator $S$ in $\mathcal{H}$, \[ S^c_{\rho\rho^\prime}(x,y)=\langle x, \rho\mid S \mid y, \rho^\prime\rangle, \] Using eqs.~(\ref{propagator1}), (\ref{momenta}), (\ref{Xbasis}), and (\ref{Gbasis}), one finds that $S$ satisfies \begin{equation} \label{S1} \left(m-\not\!\Pi\right)S=I, \end{equation} where $\not\! \Pi =\Gamma^\mu\Pi_\mu$ and $\Pi^\mu$ are the operators of the kinematic momenta, \begin{equation} \label{kinematic} \Pi^\mu = \mathcal{P}^\mu-e \mathcal{A}^\mu. \end{equation} Using the corresponding completeness relations for the bases, one can put eq.~(\ref{R}) into the form: \begin{eqnarray} \label{R1} && R(\mathbf{k}_1, \mathbf{k}_2)\\ &&= \langle p^0_f; E>0,\, N_f\mid\nonumber \Gamma^0e^{-i\mathbf{k}_2\cdot \mathbf{X}} \boldsymbol{\varepsilon}(\mathbf{k}_2)\cdot\boldsymbol{\Gamma}\,S\, \boldsymbol{\varepsilon}(\mathbf{k}_1)\cdot\boldsymbol{\Gamma}\, e^{ -i\mathbf{k}_1\cdot\mathbf{X}} \mid p^0_i; E>0,\, N_i\rangle, \nonumber \end{eqnarray} where \[ p^0_f=E_{N_f}^{(+)}+|\mathbf{k}_2|,\qquad p^0_i=E_{N_i}^{(+)}-|\mathbf{k}_1|. \] \section{The superoscillator} Using the Schwinger proper-time integral \cite{Schwinger51}, we represent the operator $S$ (in a general external field $\mathcal{A}$) as \begin{equation} \label{S2} S=i(m+\not\!\Pi)\int_0^\infty d\lambda \exp\left[ -i\lambda(m^2-\not \!\Pi^2)\right]. 
\end{equation} The factor $(m+\not\!\Pi)$ in eq.~(\ref{S2}) can be expressed as an integral over a Grassmann variable $\chi$, \begin{equation} \label{grassmann} m+\not\!\Pi=i\int d\chi(1-i\chi m)e^{-i\chi\not\Pi}, \end{equation} where we assume that $\chi$ anticommutes with the $\Gamma^\mu$ and commutes with the rest of the operators. Substituting eq.~(\ref{grassmann}) in eq.~(\ref{S2}) one obtains \begin{equation} \label{S3} S=\int d\chi (i\chi m-1)\int_0^\infty d\lambda\,\exp(-i\lambda m^2) \,\exp\left[-i\chi\not\!\Pi +i\lambda\not \!\Pi^2\right]. \end{equation} The operator \begin{equation} \label{U} U=\exp\left[-i\chi\not\!\Pi+i\lambda\not \!\Pi^2\right] \end{equation} can be considered as an evolution operator (on the ``time" interval $0\le\tau\le 1$) of a quantum-mechanical system with the hamiltonian $\chi\not\!\!\Pi-\lambda\not \!\!\Pi^2$, containing a Grassmann-odd parameter $\chi$. The system is rather simple if the external field is a constant magnetic one. We choose in what follows \begin{equation} \label{potential} \mathcal{A}^0=0, \qquad \mathcal{A}^1=-\frac{Bx^2}{2}, \qquad \mathcal{A}^2= \frac{Bx^1}{2}. \end{equation} The magnetic field strength $B$ is assumed positive. The operators $\Pi_\mu$ satisfy the commutation relations \begin{equation} \label{crkinematic} [\Pi^k, \,\mathcal{P}^0]_-=0,\qquad [\Pi^1, \,\Pi^2]_-=ieB. \end{equation} We choose, for definiteness, $e>0$ and make a linear canonical change. 
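The relation \eqref{crkinematic} follows at once from \eqref{momenta}, \eqref{kinematic} and \eqref{potential}: since $\mathcal{P}_\mu$ acts as $i\partial_\mu$ and the metric has signature $(+,-,-)$, the spatial components act as $\mathcal{P}^k=-i\partial_k$. A short symbolic check:

```python
import sympy as sp

x1, x2, e, B = sp.symbols('x1 x2 e B', real=True)
f = sp.Function('f')(x1, x2)

# symmetric gauge, eq. (potential): A^1 = -B x^2 / 2, A^2 = B x^1 / 2
A1, A2 = -B * x2 / 2, B * x1 / 2

# Pi^k = P^k - e A^k, with P^k acting as -i d/dx^k on a test function
Pi1 = lambda g: -sp.I * sp.diff(g, x1) - e * A1 * g
Pi2 = lambda g: -sp.I * sp.diff(g, x2) - e * A2 * g

comm = sp.expand(Pi1(Pi2(f)) - Pi2(Pi1(f)))
assert sp.simplify(comm - sp.I * e * B * f) == 0   # [Pi^1, Pi^2]_- = ieB
```

The derivative terms cancel and only $ie(\partial_1\mathcal{A}^2-\partial_2\mathcal{A}^1)=ieB$ survives, as claimed.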
The operators $X^0$, $-\mathcal{P}^0$, \begin{equation} \label{QP} Q=\frac {\sqrt{2}} b \Pi^1, \qquad P=\frac {\sqrt{2}} b \Pi^2, \end{equation} and \begin{equation} \label{Q2P2} \tilde Q=\frac {\sqrt{2}} b \left(\mathcal{P}^2+\frac{b^2}{4}X^1\right), \qquad \tilde P=\frac {\sqrt{2}} b \left(\mathcal{P}^1-\frac{b^2}{4}X^2\right), \end{equation} where $b=\sqrt{2e B}$, are canonical, i.e., the following commutation relations hold \begin{equation} \label{CCR} [\mathcal P^0,\, X^0]_-=i,\qquad [Q, \, P]_-=i, \qquad [\tilde Q, \, \tilde P]_-=i, \end{equation} the rest of the commutators being equal to zero. In the three-potential (\ref{potential}) the operator $U$, eq.~(\ref{U}), factorizes, \begin{eqnarray} \label{U1} &&U=U_0 U_1, \qquad U_0=\exp\left\{-i\chi \Gamma^0\mathcal{P}_0+i\lambda\mathcal{P}_0^2 \right\}, \\ \label{U-1} &&U_1=\exp\left\{i\frac{\chi b}{\sqrt{\,2}}\left(\Gamma^1Q+\Gamma^2 P\right) +i\frac{\lambda b^2}{2}\left(\Gamma^1Q+\Gamma^2 P\right)^2\right\}. \end{eqnarray} The operator $U_1$ acts in the factor $\bar{\mathcal{H}}=\mathcal{H}(Q,P)\otimes\mathcal{H}(\Gamma)$ of the tensor-product space $\mathcal{H}$. It can be considered as an evolution operator (on the ``time'' interval $0\le\tau\le 1$) of a quantum-mechanical system with one bosonic and one fermionic degree of freedom. The hamiltonian \[ h=-\frac {\chi b}{\sqrt{\,2}}\left(\Gamma^1Q+\Gamma^2 P\right)- \frac{\lambda b^2}{2}\left(\Gamma^1Q+\Gamma^2 P\right)^2, \] contains the Grassmann-odd parameter $\chi$.
The operators \begin{equation} \label{creation} a=\frac 1 {\sqrt{\,2}}(Q+iP) \qquad a^\dagger=\frac 1 {\sqrt{\,2}}(Q-iP) \end{equation} are (bosonic) lowering and raising operators, \begin{equation} \label{bosonic} [a,\, a^\dagger]_-=1, \end{equation} while the operators \begin{equation} \label{creationF} \alpha=\frac 1 {2i}(\Gamma^1+i\Gamma^2), \qquad \alpha^\dagger=\frac 1 {2i}(\Gamma^1-i\Gamma^2) \end{equation} are fermionic ones, \begin{equation} \label{anti} [\alpha,\, \alpha^\dagger]_+=1, \qquad \alpha^2=(\alpha^\dagger)^2=0. \end{equation} Using eq.~(\ref{Gammas}), one finds \begin{equation} \label{comm} [\alpha^\dagger,\Gamma^0 ]_-=2s\alpha^\dagger, \qquad [\alpha,\Gamma^0 ]_-=-2s\alpha, \qquad [\alpha,\alpha^\dagger ]_-=s\Gamma^0. \end{equation} When expressed in terms of these operators, the Dirac operator in the potential (\ref{potential}) reads \begin{equation} \not\!\Pi-m=\Gamma^0\mathcal{P}_0-ib\left( \alpha a^\dagger+\alpha^\dagger a \right)-m \end{equation} and the hamiltonian $h$ in the proper-time representation is given by \begin{equation} \label{h2} h=\lambda b^2(a^\dagger a+\alpha^\dagger\alpha)-i\chi b(\alpha a^\dagger+\alpha^\dagger a). \end{equation} The hamiltonian $h$, as well as $H$, eq.~(\ref{hamQP}), is supersymmetric. The nilpotent operators $K=\alpha a^{\dagger}$, $K^{\dagger}$ and the operator $h_{SUSY}=a^\dagger a+\alpha^{\dagger}\alpha$ (the hamiltonian for the supersymmetric oscillator \cite{Nicolai76,BM75,BDZVH76,Witten81,SH82}) generate a Lie superalgebra. The nonvanishing (anti-)commutator is \[ [K, K^{\dagger}]_{+}=h_{SUSY}. \] The hamiltonian $h$ is invariant with respect to the supertransformations generated by $K_{-}=K-K^{\dagger}$ only, \[ [K_{-}, h]=0. \] The existence of a natural supersymmetric structure associated with the Dirac equation has been known \cite{BM75,BCL76,BDZVH76, Witten81} for a long time.
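The fermionic relations \eqref{anti}, \eqref{comm} and the superalgebra relation $[K,K^{\dagger}]_{+}=h_{SUSY}$ can be checked in a matrix realization. Below we take $\Gamma^0=\mathrm{diag}(1,-1)$ with $s=+1$ and truncate the boson Fock space at $N$ levels (both choices are ours, for illustration only); the last identity then holds exactly except at the truncation boundary:

```python
import numpy as np

N = 8                                               # boson Fock-space truncation
a = np.diag(np.sqrt(np.arange(1.0, N)), 1)          # <n-1| a |n> = sqrt(n)
alpha = np.array([[0.0, 1.0], [0.0, 0.0]])          # fermionic lowering operator
ad, alphad = a.T, alpha.T                           # real matrices: dagger = transpose
I2, IN = np.eye(2), np.eye(N)

# relations (anti) and (comm), with Gamma^0 = diag(1,-1) and s = +1
G0, s = np.diag([1.0, -1.0]), 1
assert np.allclose(alpha @ alpha, 0)
assert np.allclose(alpha @ alphad + alphad @ alpha, I2)
assert np.allclose(alphad @ G0 - G0 @ alphad, 2 * s * alphad)
assert np.allclose(alpha @ G0 - G0 @ alpha, -2 * s * alpha)
assert np.allclose(alpha @ alphad - alphad @ alpha, s * G0)

# K = alpha a^dagger and h_SUSY = a^dagger a + alpha^dagger alpha,
# on the space ordered here as fermion (x) boson
K = np.kron(alpha, ad)
h_susy = np.kron(I2, ad @ a) + np.kron(alphad @ alpha, IN)
diff = (K @ K.T + K.T @ K) - h_susy                 # [K, K^dagger]_+ - h_SUSY

# only the fermion-occupied state at boson level N-1 feels the truncation
assert np.allclose(diff[:-1, :-1], 0)
print("[K, K^dagger]_+ = h_SUSY holds away from the truncation boundary")
```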
We note that the hamiltonian (\ref{h2}) possesses only a residual supersymmetry (with respect to transformations generated by $K_{-}$) compared with the supersymmetric oscillator. Finally, using eqs.~(\ref{S3}), (\ref{U1}) and (\ref{U-1}), one obtains the following expression for the operator $S$ in a constant magnetic field: \begin{eqnarray} \label{S4} S&=&\int d\chi \int_0^\infty d\lambda\,(i\chi m-1)\exp\left[-i\lambda (m^2-\mathcal{P}_0^2)-i\chi\Gamma^0\mathcal{P}_0\right]\nonumber\\ &&\times\exp\left\{-i\left[ \lambda b^2(a^\dagger a+\alpha^\dagger\alpha)-i\chi b(\alpha a^\dagger+\alpha^\dagger a) \right]\right\}. \end{eqnarray} \section{The two-photon emission amplitude} Using the orthonormality of the basis $\{|p^0\rangle\}$ in $\mathcal{H}(X^0, \mathcal{P}^0)$, and taking into account that the states in eq.~(\ref{R1}) are on-shell ones, one can represent $R(\mathbf{k}_1, \mathbf{k}_2)$ as a product of the energy-conservation $\delta$-function and a matrix element in the space $\mathcal{H}(Q, P)\otimes\mathcal{H}(\tilde Q, \tilde P)\otimes\mathcal{H}(\Gamma)$, \begin{eqnarray} R(\mathbf{k}_1, \mathbf{k}_2)=\delta(E_{N_f}^{(+)}+|\mathbf{k}_1|+ |\mathbf{k}_2|-E_{N_i}^{(+)})T^{(2,1)}_{fi}, \end{eqnarray} where \begin{eqnarray*} T^{(2,1)}_{fi} =\langle E>0,\, N_f\mid e^{-i\mathbf{k}_2\cdot\mathbf{X}} W(\mathcal{E}) e^{-i\mathbf{k}_1\cdot\mathbf{X}} \mid E>0,\, N_i\rangle, \end{eqnarray*} and \begin{eqnarray*} W(\mathcal{E})&=&\Gamma^0\boldsymbol{\epsilon}(\mathbf{k}_2)\cdot\boldsymbol{\Gamma} \left[m-\Gamma^0\mathcal{E}+\frac{b}{\sqrt{\,2}}\left( \Gamma^1 Q+\Gamma^2 P \right)\right]^{-1} \boldsymbol{\epsilon}(\mathbf{k}_1)\cdot\boldsymbol{\Gamma}\\ &=&\Gamma^0\boldsymbol{\epsilon}(\mathbf{k}_2)\cdot\boldsymbol{\Gamma} \frac{m+\Gamma^0\mathcal{E}-\frac{b}{\sqrt{\,2}}\left( \Gamma^1 Q+\Gamma^2 P\right)} {m^2+\frac{b^2}{2}\left(Q^2+P^2-i\Gamma^1\Gamma^2\right)-\mathcal{E}^2} \boldsymbol{\epsilon}(\mathbf{k}_1)\cdot\boldsymbol{\Gamma}, \\
\mathcal{E}&=&E_{N_i}^{(+)}-|\mathbf{k}_1|=E_{N_f}^{(+)}+|\mathbf{k}_2|. \end{eqnarray*} The operator $W(\mathcal{E})$ can be expressed in terms of the lowering and raising operators, \begin{eqnarray} \label{W} W(\mathcal{E})=-\Gamma^0 \left(\varepsilon_2^-\alpha+\varepsilon_2^+\alpha^\dagger\right) \frac{m+\Gamma^0\mathcal{E}-i{b}\left( \alpha a^\dagger+\alpha^\dagger a\right)} {m^2+{b^2}\left(a^\dagger a+\alpha^\dagger \alpha\right)-\mathcal{E}^2} \left(\varepsilon_1^-\alpha+\varepsilon_1^+\alpha^\dagger\right), \end{eqnarray} where \[ \varepsilon_i^{\pm}=\varepsilon_i^1(\mathbf{k}_i)\pm i\varepsilon_i^2(\mathbf{k}_i), \qquad i=1,2. \] \paragraph{Factorization of the operators $e^{-i\mathbf{k}_i\cdot\mathbf{X}}$.} With the help of eqs.~(\ref{QP}) and (\ref{Q2P2}) the operator $e^{-i\mathbf{k}\cdot\mathbf{X}}$ can be represented as a product \[ e^{-i\mathbf{k}_i\cdot\mathbf{X}}=V(\mathbf{k})\tilde V(\mathbf{k}), \] where \begin{eqnarray} \label{V} V(\mathbf{k})&=&\exp\left[\frac 1 b \left(k^-a-k^+a^\dagger\right)\right], \qquad k^{\pm}=k^1\pm ik^2,\\ \tilde V(\mathbf{k})&=&\exp\left[-i\frac {\sqrt{\,2}} {b} \left(k^1\tilde Q-k^2\tilde P\right)\right]. \end{eqnarray} The label $N$ of the state vector $\mid E>0, N\rangle$ is composite and consists of the label $n$ of the state vector in the space $\mathcal{H}(Q, P)\otimes\mathcal{H}(\Gamma)$ (see Appendix~A) and $\tilde n$ labelling the basis vectors in the space $\mathcal{H}(\tilde Q, \tilde P)$. We do not specify the latter basis here. The operators $\tilde V(\mathbf{k}_i)$ commute with $W(\mathcal{E})$. Moreover, the amplitude $T^{(2,1)}_{fi}$ factorizes: \[ T^{(2,1)}_{fi}= \mathcal{T}^{(2,1)}_{fi}\;\tilde{\mathcal{T}}^{(2,1)}_{fi}, \] where \[ \mathcal{T}^{(2,1)}_{fi}=\langle E>0,\, n_f\mid V(\mathbf{k}_2) W(\mathcal{E}) V(\mathbf{k}_1)\mid E>0,\, n_i\rangle, \qquad \tilde{\mathcal{T}}^{(2,1)}_{fi}= \langle\tilde n_f \mid\tilde V(\mathbf{k}_2)\tilde V(\mathbf{k}_1) \mid \tilde n_i\rangle. 
\] The matrix element $\tilde{\mathcal{T}}^{(2,1)}_{fi}$ can be easily calculated in a suitable basis (for example, in the oscillator basis in $\mathcal{H}(\tilde Q, \tilde P)$). However, it is, in some sense, irrelevant, since in the expression for the rate one has to perform a summation over all possible values of $\tilde{n}_f$, and this summation can be done without using the explicit form of $\tilde{\mathcal{T}}^{(2,1)}_{fi}$ and $\tilde{\mathcal{T}}^{(1,2)}_{fi}$: \begin{eqnarray*} \sum_{\tilde{n}_f} |\mathcal{T}^{(2,1)}_{fi}\tilde{\mathcal{T}}^{(2,1)}_{fi} +\mathcal{T}^{(1,2)}_{fi}\tilde{\mathcal{T}}^{(1,2)}_{fi}|^2 &=&|\mathcal{T}^{(2,1)}_{fi}|^2 +\mathcal{T}^{(2,1)}_{fi}{\mathcal{T}}^{(1,2)*}_{fi} \exp\left[\frac{2i}{b^2}\left(\mathbf{k_1}\wedge \mathbf{k}_2\right)\right]\\ &+&\mathcal{T}^{(1,2)}_{fi}{\mathcal{T}}^{(2,1)*}_{fi} \exp\left[-\frac{2i}{b^2}\left(\mathbf{k_1}\wedge \mathbf{k}_2\right)\right] +|\mathcal{T}^{(1,2)}_{fi}|^2. \end{eqnarray*} Substituting the decomposition (\ref{positive}) of the state vectors $\mid E>0,\, n_i\rangle$, one finds \begin{eqnarray} \label{calT} &&\mathcal{T}^{(2,1)}_{fi}= \frac{1}{2\sqrt{E_{n_f}E_{n_i}}}\nonumber\\ &&\times\Big\{\sqrt{(E_{n_f}+sm)(E_{n_i}+sm)} \,\langle n_f+1, s\mid V(\mathbf{k}_2) W(\mathcal{E}) V(\mathbf{k}_1)\mid n_i+1, s\rangle\nonumber \\ &&+b\sqrt{\frac{E_{n_f}+sm}{E_{n_i}+sm}(n_i+1)} \,\langle n_f+1, s\mid V(\mathbf{k}_2) W(\mathcal{E}) V(\mathbf{k}_1)\mid n_i, -s\rangle\nonumber \\ &&+b\sqrt{\frac{E_{n_i}+sm}{E_{n_f}+sm}(n_f+1)} \,\langle n_f, -s\mid V(\mathbf{k}_2) W(\mathcal{E}) V(\mathbf{k}_1)\mid n_i+1, s\rangle \\ &&+b^2\sqrt{\frac{(n_f+1)(n_i+1)}{(E_{n_f}+sm)(E_{n_i}+sm)}} \,\langle n_f, -s\mid V(\mathbf{k}_2) W(\mathcal{E}) V(\mathbf{k}_1)\mid n_i, -s\rangle\Big\}. \nonumber \end{eqnarray} \paragraph{The matrix elements in $\mathcal{H}(\Gamma)$.} One can compute the matrix elements of the operator $W(\mathcal{E})$ in the basis $\mid \pm s\rangle$ in $\mathcal{H}(\Gamma)$.
These matrix elements are operators in $\mathcal{H}(Q, P)$. Using the representation (\ref{W}) and eqs.~(\ref{gamma-s}), (\ref{alpha-s}), and (\ref{gamma-dagger}), one obtains \begin{eqnarray} &&\langle s\mid W(\mathcal{E})\mid s\rangle =\varepsilon_1^+\varepsilon_2^- \left(\mathcal{E}-sm\right)\left[m^2+b^2(a^\dagger a+1)-\mathcal{E}^2\right]^{-1},\\ &&\langle s\mid W(\mathcal{E})\mid -s\rangle =isb\varepsilon_1^-\varepsilon_2^- \left[m^2+b^2(a^\dagger a+1)-\mathcal{E}^2\right]^{-1}a,\\ &&\langle -s\mid W(\mathcal{E})\mid s\rangle =-isb\varepsilon_1^+\varepsilon_2^+ a^\dagger \left[m^2+b^2(a^\dagger a+1)-\mathcal{E}^2\right]^{-1},\\ &&\langle -s\mid W(\mathcal{E})\mid -s\rangle =\varepsilon_1^-\varepsilon_2^+ \left(\mathcal{E}+sm\right)\left[m^2+b^2a^\dagger a-\mathcal{E}^2\right]^{-1}. \end{eqnarray} \paragraph{The proper time integral.} Using the Schwinger proper-time representation, \[ \left[m^2+b^2(a^\dagger a+1)-\mathcal{E}^2\right]^{-1}=i\int_0^\infty d\lambda \exp\left\{-i\lambda\left[m^2+b^2(a^\dagger a+1)-\mathcal{E}^2\right]\right\} \] and taking into account that \[ \mid n, \pm s\rangle= \mid n\rangle\otimes \mid \pm s\rangle \] one obtains \begin{eqnarray} \label{n+n+} &&\langle n_2,\, s\mid V(\mathbf{k}_2) W(\mathcal{E})V(\mathbf{k}_1)\mid n_1,\, s\rangle \nonumber\\ &&=i\varepsilon_1^+\varepsilon_2^- \left(\mathcal{E}-sm\right) \int_0^\infty d\lambda \exp\left[-i\lambda\left(m^2+b^2-\mathcal{E}^2\right)\right] \\ &&\phantom{=}\times\langle n_2\mid V(\mathbf{k}_2)\exp\left(-i\lambda b^2a^\dagger a\right) V(\mathbf{k}_1)\mid n_1\rangle,\nonumber \\ \nonumber \\ \label{n+n-} &&\langle n_2,\, s\mid V(\mathbf{k}_2) W(\mathcal{E}) V(\mathbf{k}_1)\mid n_1,\, -s\rangle \nonumber \\ &&=-sb\varepsilon_1^-\varepsilon_2^- \int_0^\infty d\lambda \exp\left[-i\lambda\left(m^2+b^2-\mathcal{E}^2\right)\right]\\ &&\phantom{=}\times\langle n_2\mid V(\mathbf{k}_2)\exp\left(-i\lambda b^2a^\dagger a\right)a V(\mathbf{k}_1)\mid n_1\rangle, \nonumber \\ \nonumber \\ 
\label{n-n+} &&\langle n_2, \, -s\mid V(\mathbf{k}_2) W(\mathcal{E}) V(\mathbf{k}_1)\mid n_1, \, s\rangle \nonumber\\ &&=sb\varepsilon_1^+\varepsilon_2^+ \int_0^\infty d\lambda \exp\left[-i\lambda\left(m^2+b^2-\mathcal{E}^2\right)\right] \\ &&\phantom{=}\times\langle n_2\mid V(\mathbf{k}_2)\,a^\dagger\exp\left(-i\lambda b^2a^\dagger a\right) V(\mathbf{k}_1)\mid n_1\rangle,\nonumber \\ \nonumber \\ \label{n-n-} &&\langle n_2, \,-s\mid V(\mathbf{k}_2) W(\mathcal{E})V(\mathbf{k}_1)\mid n_1,\,-s\rangle \nonumber \\ &&=i\varepsilon_1^-\varepsilon_2^+\left(\mathcal{E}+sm\right) \int_0^\infty d\lambda \exp\left[-i\lambda\left(m^2-\mathcal{E}^2\right)\right] \\ &&\phantom{=}\times\langle n_2\mid V(\mathbf{k}_2)\exp\left(-i\lambda b^2a^\dagger a\right) V(\mathbf{k}_1)\mid n_1\rangle. \nonumber \end{eqnarray} Using eqs.~(\ref{V}) and (\ref{bosonic}) one easily derives \begin{equation} \label{aV} [a, V(\mathbf{k})]_-=-\frac{k^+}{b}V(\mathbf{k}), \qquad [a^\dagger, V(\mathbf{k})]_-=-\frac{k^-}{b}V(\mathbf{k}). 
\end{equation} Substituting eqs.~(\ref{n+n+})-(\ref{n-n-}) in eq.~(\ref{calT}) and using eqs.~(\ref{aV}), one can put the amplitude $\mathcal{T}_{fi}^{(2,1)}$ into the form \begin{eqnarray} \label{calT1} &&\mathcal{T}^{(2,1)}_{fi}= \frac{1}{2\sqrt{E_{n_f}E_{n_i}}}\int_{0}^\infty d\lambda \exp\left[-i\lambda \left( m^2+b^2-\mathcal{E}^2\right)\right] \nonumber\\ &&\times\Bigg\{i\varepsilon_1^+\varepsilon_2^-(\mathcal{E}-sm) \sqrt{(E_{n_f}+sm)(E_{n_i}+sm)} \,\langle n_f+1\mid D(\mathbf{k}_2, \mathbf{k}_1)\mid n_i+1\rangle \nonumber \\ &&-sb\varepsilon_1^-\varepsilon_2^-\sqrt{\frac{E_{n_f}+sm}{E_{n_i}+sm}(n_i+1)} \Big[\,b\langle n_f+1\mid D(\mathbf{k}_2, \mathbf{k}_1)\mid n_i-1\rangle \nonumber\\ &&\phantom{xxxxxxxxxxxxxxxxxxxxxx} -k_1^+\langle n_f+1\mid D(\mathbf{k}_2, \mathbf{k}_1)\mid n_i\rangle\Big] \\ &&+sb\varepsilon_1^+\varepsilon_2^+\sqrt{\frac{E_{n_i}+sm}{E_{n_f}+sm}(n_f+1)} \Big[\,b\langle n_f-1\mid D(\mathbf{k}_2, \mathbf{k}_1)\mid n_i+1\rangle \nonumber \\ &&\phantom{xxxxxxxxxxxxxxxxxxxxxx} +k_2^-\langle n_f\mid D(\mathbf{k}_2, \mathbf{k}_1)\mid n_i+1\rangle\Big] \nonumber \\ &&+ib^2e^{i\lambda b^2}\varepsilon_1^-\varepsilon_2^+(\mathcal{E}+sm) \sqrt{\frac{(n_f+1)(n_i+1)}{(E_{n_f}+sm)(E_{n_i}+sm)}} \,\langle n_f\mid D(\mathbf{k}_2,\mathbf{k}_1)\mid n_i\rangle\Bigg\}, \nonumber \end{eqnarray} where $D(\mathbf{k}_2,\mathbf{k}_1)=V(\mathbf{k}_2)\exp(-i\lambda b^2a^\dagger a) V(\mathbf{k}_1)$. The matrix element $\langle n_f-1\mid D(\mathbf{k}_2, \mathbf{k}_1)\mid n_i+1\rangle$ in (\ref{calT1}) must be replaced by zero for transitions to the ground state. \paragraph{The matrix elements in $\mathcal{H}(Q, P)$.} We are going to compute the matrix elements in the space $\mathcal{H}(Q, P)$ in eq.~(\ref{calT1}). The operator $V(\mathbf{k})$ has a simple form in the coherent state basis (widely used in quantum optics).
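Independently of the basis choice, the commutation relations (\ref{aV}) admit a direct numerical check in a truncated oscillator basis. Since the exponent of $V(\mathbf{k})$ is anti-Hermitian, $V$ can be computed from the eigendecomposition of a Hermitian matrix; the cutoff and momentum values below are illustrative.

```python
import numpy as np

N = 60                                         # truncation dimension (illustrative)
a = np.diag(np.sqrt(np.arange(1, N)), k=1)     # lowering operator
ad = a.conj().T

b = 1.0
k1, k2 = 0.2, 0.1                              # an illustrative photon momentum (k^1, k^2)
kp, km = k1 + 1j * k2, k1 - 1j * k2            # k^pm = k^1 +- i k^2

# V(k) = exp[(k^- a - k^+ a^dagger)/b]; the exponent is anti-Hermitian,
# so V = exp(-i H) with H = i (k^- a - k^+ a^dagger)/b Hermitian.
H = 1j * (km * a - kp * ad) / b
w, U = np.linalg.eigh(H)
V = U @ np.diag(np.exp(-1j * w)) @ U.conj().T

blk = np.s_[:5, :5]                            # compare away from the truncation edge
assert np.allclose((a @ V - V @ a)[blk], (-kp / b * V)[blk], atol=1e-8)
assert np.allclose((ad @ V - V @ ad)[blk], (-km / b * V)[blk], atol=1e-8)
```

Truncation errors are confined to the highest Fock levels, so for small $|\mathbf{k}|/b$ the identities hold to high accuracy on the low-index block.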
A harmonic oscillator coherent state \cite{Glauber63}, see, e.g.,~\cite{ZhFG90}, is given by \begin{equation} \label{coherent} \mid z\rangle=\exp(za^\dagger-z^*a)\mid 0\rangle =\sum_{n=0}^{\infty}\frac{z^n}{\sqrt{n!}}\mid n\rangle \exp\left(-\frac 1 2 z^* z\right), \end{equation} where $\mid 0\rangle$ is the harmonic oscillator vacuum (\ref{vacuum}) and $z$ is a complex number. Using eqs.~(\ref{V}) and (\ref{coherent}), one derives \begin{eqnarray} &&V(\mathbf{k})\mid z \rangle=\mid z-\frac{k^+}{b}\rangle \exp\left(\frac{k^-z-k^+z^*}{2b}\right)\\ &&\langle z \mid V(\mathbf{k})=\exp\left(\frac{k^-z-k^+z^*}{2b}\right)\langle z+\frac{k^+}{b}\mid. \end{eqnarray} Then, using the properties of the coherent states \cite{ZhFG90}, one finds that the matrix elements are represented as \begin{eqnarray} \label{mel} &&\langle n_2\mid V(\mathbf{k}_2)\exp\left(-i\lambda b^2a^\dagger a\right) V(\mathbf{k}_1)\mid n_1\rangle\nonumber \\ &&= \frac{1}{\sqrt{n_1!n_2!}}\exp\left\{-\frac{1}{2b^2}\left[ \mathbf{k}_1^2+\mathbf{k}_2^2+2(\mathbf{k}_1\cdot\mathbf{k}_2 -i\mathbf{k}_1\wedge\mathbf{k}_2)e^{-i\lambda b^2} \right]\right\}\nonumber \\ &&\phantom{=}\times J^{(0,n_2,n_1,0)} \left(\frac{k_1^-+k_2^-e^{-i\lambda b^2}}{b}, 0, 0, -\frac{k_2^++k_1^+e^{-i\lambda b^2}}{b} \right) \end{eqnarray} where \begin{eqnarray*} &&J^{(l_1,l_2,l_3,l_4)}(\zeta_1,\,\zeta_2,\,\zeta_3,\,\zeta_4)= \prod_{i=1}^4\left(\frac{\partial}{\partial\zeta_i}\right)^{l_i} J(\zeta_1,\,\zeta_2,\,\zeta_3,\,\zeta_4), \end{eqnarray*} the generating function $J(\zeta_1,\,\zeta_2,\,\zeta_3,\,\zeta_4)$ is given by \begin{eqnarray*} &&J(\zeta_1,\,\zeta_2,\,\zeta_3,\,\zeta_4)=\frac{1} {(2\pi i)^2}\int dz_2^* dz_2 dz_1^* dz_1 \exp\big(-z_1^* z_1 -z_2^* z_2 \\ &&\phantom{xxxxxxxxxxxxxx} +z_2^*z_1e^{-i\lambda b^2}+\zeta_1 z_1+ \zeta_2 z_2+\zeta_3 z_1^*+\zeta_4 z_2^*\big). 
\end{eqnarray*} A direct calculation yields \begin{equation} \label{J0} J(\zeta_1,\,\zeta_2,\,\zeta_3,\,\zeta_4)=\exp\left( \zeta_1\zeta_3+\zeta_2\zeta_4+\zeta_2\zeta_3e^{-i\lambda b^2} \right). \end{equation} Introducing the dimensionless momenta \begin{equation} \label{kappa} \boldsymbol{\kappa}_i=b^{-1}\mathbf{k}_i, \qquad i=1,2; \qquad \kappa_{i\pm}=b^{-1}k_i^\pm, \end{equation} and using eq.~(\ref{J0}), one obtains, for $n_2\le n_1$, \begin{eqnarray} \label{J0nn0} &&J^{(0,n_2,n_1,0)} \left(\kappa_{1-}+\kappa_{2-}e^{-i\lambda b^2}, 0, 0, -\kappa_{2+}-\kappa_{1+}e^{-i\lambda b^2} \right) \nonumber \\ &&=\sum_{l=0}^{n_2}\sum_{r=0}^{n_1-n_2+l}\sum_{t=0}^{l} \frac{(-)^{l}\,n_1!n_2!}{(n_2-l)!\,r!\,(n_1-n_2+l-r)!\,t!\,(l-t)!} \nonumber \\ &&\phantom{=}\times \kappa_{1+}^t\kappa_{1-}^{r} \kappa_{2+}^{l-t}\kappa_{2-}^{n_1-n_2+l-r} \exp\left[-i\lambda b^2(n_1-r+t)\right]. \end{eqnarray} Substituting eqs.~(\ref{J0nn0}) and (\ref{kappa}) in eq.~(\ref{mel}), one finds \begin{eqnarray} \label{mel1} &&\langle n_2\mid D(b\boldsymbol{\kappa}_2,b\boldsymbol{\kappa}_1)\mid n_1\rangle= \langle n_2\mid V(b\boldsymbol{\kappa}_2)\exp\left(-i\lambda b^2a^\dagger a\right) V(b\boldsymbol{\kappa}_1)\mid n_1\rangle\nonumber \\ &&= \sqrt{n_1!n_2!}\,\exp\left\{-\frac{1}{2}\left( \boldsymbol{\kappa}_1^2+\boldsymbol{\kappa}_2^2\right)-\left[\boldsymbol{\kappa}_1\cdot\boldsymbol{\kappa}_2 -i\boldsymbol{\kappa}_1\wedge\boldsymbol{\kappa}_2 \exp(-i\lambda b^2) \right]\right\}\nonumber\\ &&\times\sum_{l=0}^{n_2}\sum_{r=0}^{n_1-n_2+l}\sum_{t=0}^{l} \frac{(-)^{l}}{(n_2-l)!\,r!\,(n_1-n_2+l-r)!\,t!\,(l-t)!} \nonumber \\ &&\phantom{=}\times \kappa_{1+}^t\kappa_{1-}^{r} \kappa_{2+}^{l-t}\kappa_{2-}^{n_1-n_2+l-r} \exp\left[-i\lambda b^2(n_1-r+t)\right]. \end{eqnarray} One has to substitute this expression in eq.~(\ref{calT1}). The general formula is large and we do not present it here. The transition amplitude from the second to the first (fundamental) Landau level can serve as an example. 
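Before specializing, the closed form (\ref{J0}) and the triple-sum representation (\ref{J0nn0}) can be cross-checked symbolically for small $n_1$, $n_2$. In this sketch $q$ stands for $e^{-i\lambda b^2}$ and the symbol names are our own.

```python
import sympy as sp
from math import factorial

z1, z2, z3, z4, q = sp.symbols('zeta1 zeta2 zeta3 zeta4 q')
k1p, k1m, k2p, k2m = sp.symbols('kappa1p kappa1m kappa2p kappa2m')

# eq. (J0), with q = exp(-i*lambda*b^2)
J = sp.exp(z1*z3 + z2*z4 + z2*z3*q)

def J_deriv(n1, n2):
    """Differentiate J n2 times in zeta2 and n1 times in zeta3, then
    evaluate at the arguments appearing on the left of eq. (J0nn0)."""
    d = J
    for _ in range(n2):
        d = sp.diff(d, z2)
    for _ in range(n1):
        d = sp.diff(d, z3)
    return sp.expand(d.subs({z1: k1m + k2m*q, z2: 0, z3: 0,
                             z4: -(k2p + k1p*q)}))

def J_sum(n1, n2):
    """Triple-sum representation, eq. (J0nn0), valid for n2 <= n1."""
    total = 0
    for l in range(n2 + 1):
        for r in range(n1 - n2 + l + 1):
            for t in range(l + 1):
                c = sp.Rational((-1)**l * factorial(n1) * factorial(n2),
                                factorial(n2 - l) * factorial(r)
                                * factorial(n1 - n2 + l - r)
                                * factorial(t) * factorial(l - t))
                total += (c * k1p**t * k1m**r * k2p**(l - t)
                          * k2m**(n1 - n2 + l - r) * q**(n1 - r + t))
    return sp.expand(total)

for (n1, n2) in [(0, 0), (1, 0), (1, 1), (2, 1)]:
    assert sp.simplify(J_deriv(n1, n2) - J_sum(n1, n2)) == 0
```

The loop confirms term-by-term agreement of the differentiated generating function with the explicit sum at the lowest orders.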
The amplitude $\mathcal{T}^{(2,1)}_{01}$ is given by \begin{eqnarray} \label{10} &&\mathcal{T}^{(2,1)}_{01}=\frac{e^{-i\theta_1}}{2\sqrt{E_{0}E_{1}}}\exp\left(-\frac{{\kappa}_1^2+{\kappa}_2^2}{2}\right)\sum_{n=0}^\infty\frac{\left(-{\kappa}_1{\kappa}_2 e^{-i\phi}\right)^n}{n!} \nonumber\\ &&\times\Bigg\{(\mathcal{E}-sm) \sqrt{\frac{(E_{0}+sm)(E_{1}+sm)}{2}} \Big[-\frac{{\kappa}_1^2{\kappa}_2}{E_n^2-\mathcal{E}^2} \nonumber\\ &&+\frac{{\kappa}_1\left(2-{\kappa}_1^2-2{\kappa}_2^2\right)}{E_{n+1}^2-\mathcal{E}^2}e^{-i\phi} +\frac{{\kappa}_2\left(2-2{\kappa}_1^2-{\kappa}_2^2\right)}{E_{n+2}^2-\mathcal{E}^2}e^{-2i\phi} -\frac{{\kappa}_1{\kappa}_2^2}{E_{n+3}^2-\mathcal{E}^2}e^{-3i\phi} \Big] \nonumber \\ &&+isb^2\sqrt{\frac{2\left(E_{0}+sm\right)}{E_{1}+sm}} \Big[\frac{{\kappa}_2\left(1-{\kappa}_1^2\right)}{E_n^2-\mathcal{E}^2} +\frac{{\kappa}_1\left(2-{\kappa}_1^2-{\kappa}_2^2\right)}{E_{n+1}^2-\mathcal{E}^2}e^{-i\phi} -\frac{{\kappa}_1{\kappa}_2}{E_{n+2}^2-\mathcal{E}^2}e^{-2i\phi}\Big] \nonumber\\ &&+isb^2\sqrt{\frac{E_{1}+sm}{2\left(E_{0}+sm\right)}} \Big[\frac{{\kappa}_1^2{\kappa}_2}{E_{n}^2-\mathcal{E}^2} +\frac{2{\kappa}_1{\kappa}_2^2}{E_{n+1}^2-\mathcal{E}^2}e^{-i\phi} +\frac{{\kappa}_2^3}{E_{n+2}^2-\mathcal{E}^2}e^{-2i\phi}\Big] \nonumber\\ &&+b^2\left(\mathcal{E}+sm\right) \sqrt{\frac{2}{(E_{0}+sm)(E_{1}+sm)}}\Big[ \frac{{\kappa}_1}{E_{n-1}^2-\mathcal{E}^2}e^{i\phi}+ \frac{{\kappa}_2}{E_{n}^2-\mathcal{E}^2}\Big]\Bigg\} \end{eqnarray} where $\theta_i$ is the angle between the $x$-axis and $\boldsymbol{\kappa}_i$, $\phi=\theta_2-\theta_1$, $\kappa_i=|\boldsymbol{\kappa}_i|$, and we have put $E_{-1}=m$. The second part of the amplitude is obtained by the replacements \[ \kappa_1\leftrightarrow\kappa_2, \quad \theta_1\leftrightarrow\theta_2,\quad \phi\rightarrow -\phi, \quad \mathcal{E}\rightarrow \mathcal{E}^{\prime}=E_0+k_2. \] The sum in eq.~(\ref{10}) can be evaluated and the result expressed in terms of the confluent hypergeometric function \cite{Herold79}.
\section{Conclusions} Within the framework of the Schwinger proper-time method, the dynamics of a fermion in a constant magnetic field in $2+1$ dimensional space-time reduces to that of a supersymmetric quantum mechanical system with one bosonic and one fermionic degree of freedom. An expression is obtained for the two-photon emission amplitude. A similar technique can be used in $3+1$ dimensions as well; work on this case is in progress.
\section{Introduction} The development of deep learning has led to a significant surge of research activities on multimedia content generation in the multimedia and computer vision communities. Among them, image-to-image translation is one of the most widely studied tasks, and recent advances in Generative Adversarial Networks (GANs) have achieved remarkable improvements on image translation across domains. These achievements rest on the assumption that a large amount of annotated and matching image pairs is accessible for model training. In practice, nevertheless, the manual labeling of such paired data is expensive and often unrealistic. To address this issue, \cite{liu2016coupled,zhu2017unpaired,kim2017learning,yi2017dualgan} tackle image-to-image translation in an unsupervised manner, which only capitalizes on unpaired data (i.e., two sets of unlabeled images from two domains). In this paper, we go one step further and extend such synthesis from image-to-image to video-to-video, which is referred to as the emerging problem of ``unpaired video-to-video translation.'' It enables general-purpose video translation across domains in the absence of paired training data, making it flexible enough to be applied in a variety of video-to-video translation tasks (see Figure \ref{fig:teaser}). One straightforward way to tackle unpaired video-to-video translation is to capitalize on an unpaired image-to-image translation approach, e.g., Cycle-GAN \cite{zhu2017unpaired} (Figure \ref{fig:figintro2}(a)), which enforces an inverse translation for each frame. However, this way only explores visual appearance of frames for video synthesis and will inevitably result in temporal discontinuity, with the synthetic frames deteriorated by flickering artifacts, as in video style transfer \cite{chen2017coherent}. This limitation originates from the fact that video is an information-intensive medium with complexities along both spatial and temporal dimensions.
Such facts motivate and highlight the exploration of both appearance structure and temporal continuity in video synthesis. In this sense, not only the visual appearance in each frame but also the motion between consecutive frames are ensured to be realistic and consistent for video translation. \begin{figure}[!tb] \vspace{-0.00in} \centering {\includegraphics[width=0.33\textwidth]{framework_intro.pdf}} \vspace{-0.25in} \caption{\small Comparison between two unpaired translation approaches and our Mocycle-GAN. (a) \emph{Cycle-GAN} exploits the cycle-consistency constraint to model appearance structure for unpaired image-to-image translation. (b) \emph{Recycle-GAN} utilizes temporal predictors ($P_X$ and $P_Y$) to explore cycle consistency across both domains and time for unpaired video-to-video translation. (c) \emph{Mocycle-GAN} explicitly models motion across frames with optical flow ($f_{x_t}$ and $f_{y_s}$), and pursues cycle consistency on motion that enforces the reconstruction of motion. Motion translation is further exploited to transfer the motion across domains via motion translators ($M_X$ and $M_Y$), strengthening the temporal continuity in video synthesis. Dotted lines denote consistency constraints between their two endpoints.} \label{fig:figintro2} \vspace{-0.22in} \end{figure} A recent pioneering practice in unpaired video-to-video translation is Recycle-GAN \cite{bansal2018recycle} (Figure \ref{fig:figintro2}(b)). The basic idea is to directly synthesize future frames via a temporal predictor to explore cycle consistency across both domains and time. Despite the spatio-temporal constraint in Recycle-GAN for enhancing video translation, a common issue not fully studied is the exploitation of motion between consecutive frames, which is well believed to be helpful for video-to-video translation.
Instead, we consider the use of motion information for unpaired video-to-video translation from the viewpoint of both motion cycle consistency and motion translation, as depicted in Figure \ref{fig:figintro2}(c). The objective of the motion cycle consistency constraint is to pursue cycle consistency on the motion between adjacent input frames, which in turn implicitly enforces the temporal continuity between adjacent synthetic frames. In addition, we exploit the constraint of motion translation to further strengthen temporal continuity in synthetic videos via transferring motion across domains. One naive method for enforcing temporal coherence is to warp the synthetic frame with the estimated motion (i.e., optical flow) between input frames to produce the subsequent frame, as in \cite{ruder2016artistic,huang2017real}. Nevertheless, this paradigm ignores occlusions, blur, and appearance variations, e.g., raised by the change of lighting in different domains. As such, the temporal coherence is enforced in a brute-force manner regardless of the scene dynamics in the target domain. In comparison, we leverage a motion translator to transfer the estimated motion in the source domain to the target domain, which characterizes the temporal coherence across synthetic frames in a way more tailored to the target domain. By consolidating the idea of exploiting motion information for facilitating unpaired video-to-video translation, we present a novel Motion-guided Cycle GAN (Mocycle-GAN), as shown in Figure \ref{fig:framework}. The whole architecture consists of generators and discriminators under the backbone of standard Conditional GANs, coupled with motion translators for transferring motion across domains. Specifically, the motion information in each domain is estimated in the form of optical flow between consecutive frames.
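The warping of a synthetic frame by an optical flow, as mentioned above, can be sketched as a single-channel bilinear backward warp in numpy. The function name, the flow convention (channel 0 horizontal, channel 1 vertical), and the border clipping are illustrative assumptions, not the paper's implementation, which would also need occlusion handling.

```python
import numpy as np

def warp(frame, flow):
    """Backward-warp a frame (H x W) with a per-pixel flow (H x W x 2),
    sampling the frame at (x + u, y + v) with bilinear interpolation."""
    H, W = frame.shape
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing='ij')
    x = np.clip(xs + flow[..., 0], 0, W - 1)   # sampling coordinates,
    y = np.clip(ys + flow[..., 1], 0, H - 1)   # clipped at the border
    x0, y0 = np.floor(x).astype(int), np.floor(y).astype(int)
    x1, y1 = np.minimum(x0 + 1, W - 1), np.minimum(y0 + 1, H - 1)
    wx, wy = x - x0, y - y0                    # bilinear weights
    return ((1 - wy) * ((1 - wx) * frame[y0, x0] + wx * frame[y0, x1])
            + wy * ((1 - wx) * frame[y1, x0] + wx * frame[y1, x1]))
```

With a zero flow the warp is the identity, and a constant integer flow shifts the frame, which makes the operator easy to sanity-check.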
During training, three types of spatial/temporal constraints, i.e., the adversarial constraint, cycle consistency on both frames and motion, and motion translation, are devised to explore both the appearance structure and temporal continuity for unpaired video translation. The adversarial constraint discriminates between synthetic and real frames in an adversarial manner, making each synthetic frame realistic in appearance. The cycle consistency on both frames and motion encourages the reconstruction of both the appearance structure of frames and the temporal continuity in motion. The motion translation constraint transfers the estimated motion from the source to the target domain via the motion translator and then warps the synthetic frame with the transferred motion to produce the subsequent frame. In this sense, the temporal continuity among synthetic frames in the target domain is further strengthened with the guidance of the transferred motion. However, unlike in supervised video-to-video translation, we cannot train the motion translator with paired video data in the unpaired scenario. Thus, we optimize the whole architecture in an Expectation Maximization (EM) procedure which iteratively updates the generators and discriminators with the three spatial/temporal constraints (E-step), and refines the motion translator with an auxiliary motion consistency loss (M-step). This procedure gradually improves the motion translation as well as the video-to-video translation. \begin{figure*}[!tb] \centering {\includegraphics[width=0.88\textwidth]{framework3.pdf}} \vspace{-0.20in} \caption{\small The overview of Mocycle-GAN for unpaired video-to-video translation ($X$: source domain; $Y$: target domain). Note that here we only depict the forward cycle $X \to Y \to X$ for simplicity. Mocycle-GAN consists of generators ($G_X$ and $G_Y$) to synthesize frames across domains, discriminators ($D_X$ and $D_Y$) to distinguish real frames from synthetic ones, and a motion translator ($M_X$) for motion translation across domains.
Given two real consecutive frames $x_t$ and $x_{t+1}$, we first translate them into the synthetic frames $\widetilde{x}_t$ and $\widetilde{x}_{t+1}$ via $G_X$, which are further transformed into the reconstructed frames $x^{rec}_t$ and $x^{rec}_{t+1}$ through the inverse mapping $G_Y$. In addition, two optical flows $f_{x_t}$ and $f_{x^{rec}_t}$ are obtained by capitalizing on FlowNet to represent the motion before and after the forward cycle. During training, we leverage three kinds of spatial/temporal constraints to explore appearance structure and temporal continuity for video translation: 1) the \emph{Adversarial Constraint} ($\mathcal{L}_{Adv}$) ensures each synthetic frame is realistic in appearance through adversarial learning; 2) the \emph{Frame and Motion Cycle Consistency Constraints} ($\mathcal{L}_{FC}$ and $\mathcal{L}_{MC}$) encourage an inverse translation on both frames and motions; 3) the \emph{Motion Translation Constraint} ($\mathcal{L}_{MT}$) validates the transfer of motion across domains in video synthesis. Specifically, the motion translator $M_X$ converts the optical flow $f_{x_t}$ in the source domain to $\widetilde{f}_{x_t}$ in the target domain, which is then utilized to warp the synthetic frame $\widetilde{x}_t$ to the subsequent frame $W(\widetilde{f}_{x_t}, \widetilde{x}_{t})$. This constraint encourages the synthetic subsequent frame $\widetilde{x}_{t+1}$ to be consistent with the warped version $W(\widetilde{f}_{x_t}, \widetilde{x}_{t})$ at the traceable points, leading to pixel-wise temporal continuity.} \label{fig:framework} \vspace{-0.19in} \end{figure*} \section{Related Work}\label{sec:RW} \textbf{Image-to-Image Translation.} Image-to-image translation aims to learn a mapping function from an input image in one domain to the output image in another domain. The recent advances in GANs \cite{goodfellow2014generative} have inspired remarkable improvements on this task \cite{isola2017image,zhu2017toward,choi2018stargan,wang2018high}.
An early pioneering work \cite{isola2017image} presents a general-purpose solution which leverages Conditional GANs for image-to-image translation. This paradigm enables a variety of graphics tasks, e.g., semantic labels to photo, edges to photo, and photo inpainting. \cite{zhu2017toward} further extends \cite{isola2017image} by encouraging a bijective consistency between the latent and output spaces, leading to more realistic and diverse results. Furthermore, \cite{liu2016coupled,zhu2017unpaired,kim2017learning,yi2017dualgan, yang2018crossing,mejjati2018unsupervised} begin to tackle unsupervised image-to-image translation, i.e., learning to translate images across domains without paired data. In particular, Cycle-GAN \cite{zhu2017unpaired} is devised to learn the mapping function in the absence of paired training data. A cycle consistency loss is utilized to train this mapping coupled with an inverse mapping between the two domains, enforcing the translation to be cycle consistent. Dual GAN \cite{yi2017dualgan} is a concurrent work which also exploits cycle consistency for unpaired image-to-image translation. Beyond still image translation across different domains, our work pursues its video counterpart by tackling unpaired video-to-video translation in a complex spatio-temporal context. In addition to making each frame realistic, a video translator should be capable of enhancing the temporal coherence among adjacent frames. \textbf{Video-to-Video Translation.} Video-to-video translation is a natural extension of image-to-image translation in the video domain. Specifically, \cite{wang2018video} is one of the early attempts to tackle video-to-video translation, which integrates a spatio-temporal adversarial objective into conditional GANs. Global and local temporal consistency is exploited in \cite{wei2018video} to ensure coherence across frames for video-to-video translation.
However, the above methods require manual supervision for aligning paired videos across domains, which is extremely costly to obtain. Inspired by Cycle-GAN \cite{zhu2017unpaired}, \cite{bansal2018recycle} devises Recycle-GAN to facilitate unpaired video-to-video translation. Instead of solely employing a spatial constraint for each frame as Cycle-GAN does, Recycle-GAN additionally exploits a recurrent temporal predictor to model the dependency between nearby frames, enabling a spatio-temporal constraint (i.e., the recycle consistency) for unpaired video-to-video translation. Video style transfer is another related problem, which transfers the style of a reference image to an input video. When directly applying image style transfer techniques \cite{gatys2016image,ulyanov2016texture,johnson2016perceptual,zhang2018multi,ghiasi2017exploring} to videos, the generated stylized video will inevitably be affected by severe flickering artifacts. As such, to alleviate the flickering artifacts, a number of video style transfer approaches \cite{anderson2016deepmovie,ruder2016artistic,chen2017coherent,huang2017real,gupta2017characterizing,gao2018reconet} are proposed that additionally utilize temporal constraints to ensure the temporal consistency across frames. In our work, we also target an unsupervised solution for video translation. Unlike Recycle-GAN \cite{bansal2018recycle}, which directly predicts future frames to enforce the translation to be recycle consistent, our Mocycle-GAN explicitly models the motion across frames with optical flow and pursues cycle consistency on motion. Moreover, a motion translator is leveraged to transfer motion in the source domain to the target domain, aiming to strengthen temporal continuity across synthetic frames with the guidance of the transferred motion.
\section{Approach: Mocycle-GAN} In this paper, we devise the Motion-guided Cycle GAN (Mocycle-GAN) architecture to integrate motion estimation into an unpaired video translator, exploring both appearance structure and temporal continuity for video translation. The whole architecture of Mocycle-GAN is illustrated in Figure \ref{fig:framework}. We begin this section by elaborating the notation and problem formulation of unpaired video-to-video translation, followed by a brief review of Cycle-GAN and its spatial constraint. Then, two kinds of motion-guided temporal constraints, i.e., motion cycle consistency and motion translation, are introduced to further strengthen the temporal continuity. In this sense, both the visual appearance in each frame and the motion between consecutive frames are ensured to be realistic and consistent across the transformation. Finally, the optimization strategy for training along with the inference stage is provided. \subsection{Overview} \textbf{Notation.} In the unpaired video-to-video translation task, we are given two video collections: $X=\{\bf{x}\}$ in the source domain and $Y=\{{\bf{y}}\}$ in the target domain, where ${\bf{x}} = \{x_t\}^T_{t=1}$ and ${\bf{y}} = \{y_s\}^S_{s=1}$ denote the videos in the source and target domain, respectively. $x_t$ and $y_s$ represent the $t$-th frame in the source video ${\bf{x}}$ and the $s$-th frame in the target video ${\bf{y}}$. The goal of this task is to learn two mapping functions between the source domain $X$ and the target domain $Y$, i.e., $G_X : X \to Y$ and $G_Y : Y \to X$. Here the two mapping functions $G_X$ and $G_Y$ are implemented as generators in Conditional GANs for synthesizing frames. As such, by performing video translation via $G_X$ and $G_Y$, ${\bf{x}}$ and ${\bf{y}}$ are converted into the synthetic videos $\widetilde{\bf{x}} = \{\widetilde{x}_t\}^T_{t=1}$ and $\widetilde{\bf{y}} = \{\widetilde{y}_s\}^S_{s=1}$, where $\widetilde{x}_t=G_X(x_t)$ and $\widetilde{y}_s=G_Y(y_s)$ are synthetic frames.
Moreover, one discriminator $D_Y$ is leveraged to distinguish real frames $\{y_s\}$ from synthetic ones $\{\widetilde{x}_t\}$. Similarly, another discriminator $D_X$ distinguishes between $\{x_t\}$ and $\{\widetilde{y}_s\}$. Since we ultimately aim to integrate motion estimation into video translation, we capitalize on the off-the-shelf FlowNet \cite{ilg2017flownet} ($\mathcal{F}$) to directly represent the estimated motion between two consecutive frames (e.g., $x_t$ and $x_{t+1}$) as optical flow: $f_{x_t} = \mathcal{F}(x_t, x_{t+1})$. Furthermore, two motion translators, i.e., $M_X$ and $M_Y$, are devised to transfer optical flows across domains. More details about how we conduct motion translation are elaborated in Section \ref{sec:mo}. \textbf{Problem Formulation.} Inspired by the recent success of Cycle-GAN in unpaired image-to-image translation and temporal coherence/dynamics exploration in video understanding \cite{pan2016learning,pan2017video,pan2016jointly,li2018jointly}, we formulate our unpaired video translation model in a cyclic paradigm which enforces the learnt mappings ($G_X$ and $G_Y$) to be cycle consistent on both frames and motion. Specifically, let $x^{rec}_t = G_Y(G_X(x_t))$ and $y^{rec}_s = G_X(G_Y(y_s))$ denote the reconstructed frames of $x_t$ and $y_s$ in the forward cycle and backward cycle, respectively. Hence the frame cycle consistency constraint aims to reconstruct each frame in the source and target domain via the translation cycle: $x_t \to \widetilde{x}_t \to x^{rec}_t \approx x_t$ and $y_s \to \widetilde{y}_s \to y^{rec}_s \approx y_s$. Besides the preservation of appearance structure in the translation cycle via cycle consistency on frames, we additionally pursue the reconstruction of motion in the translation cycle, which enforces the temporal continuity between consecutive frames.
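As a toy illustration of the cycle target $x_t \to \widetilde{x}_t \to x^{rec}_t \approx x_t$, consider two invertible affine maps standing in for $G_X$ and $G_Y$ (purely illustrative; the actual generators are convolutional networks whose cycle error is only driven toward zero by training):

```python
import numpy as np

def G_X(x):   # toy "source -> target" generator (illustrative affine map)
    return 2.0 * x + 1.0

def G_Y(y):   # toy "target -> source" generator, the exact inverse here
    return (y - 1.0) / 2.0

x = np.linspace(0.0, 1.0, 16).reshape(4, 4)   # a toy "frame"
x_rec = G_Y(G_X(x))                           # forward cycle x -> x~ -> x_rec
assert np.allclose(x_rec, x)                  # frame cycle consistency holds
```

When the two generators are mutual inverses, the reconstruction error vanishes, which is exactly the regime the cycle consistency constraints push the learnt mappings toward.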
As such, the motion cycle consistency constraint is introduced to reconstruct the motion between every two consecutive frames through the translation cycle: $f_{x_t} \to f_{x^{rec}_t} \approx f_{x_t}$ and $f_{y_s} \to f_{y^{rec}_s} \approx f_{y_s}$. In addition, the motion translation constraint is especially devised to exploit motion translation across domains. The transferred motion is directly utilized to warp the synthetic frame to the subsequent frame, which further strengthens temporal continuity among synthetic frames. \subsection{Cycle-GAN} We briefly review Cycle-GAN \cite{zhu2017unpaired} for unpaired translation at the frame level. Cycle-GAN is composed of two generators ($G_X$ and $G_Y$) to synthesize frames across domains, and two discriminators ($D_X$ and $D_Y$) for discriminating real frames from synthetic ones, coupled with the adversarial constraint and the cycle consistency constraint on frame. The main idea behind Cycle-GAN is to make each frame realistic via the adversarial constraint, and to encourage the translation to be cycle-consistent via the cycle consistency constraint on frame. \textbf{Adversarial Constraint.} As in image/video generation \cite{goodfellow2014generative,pan2017create,qiu2017deep,vondrick2016generating}, the generators and discriminators are adversarially trained in a two-player minimax game. Specifically, given the real frames ($x_t$ and $y_s$) and the corresponding synthetic frames ($\widetilde{x}_t=G_X(x_t)$ and $\widetilde{y}_s=G_Y(y_s)$), the discriminators are trained to correctly distinguish between real and synthetic frames, i.e., maximizing the adversarial constraint: \begin{equation}\label{Eq:Eq1}\small \begin{array}{l} \mathcal{L}_{Adv} = \sum\limits_{s}\log D_Y(y_s) + \sum\limits_{t}\log (1 - D_Y(\widetilde{x}_t)) \\ \quad\quad\quad\quad~~~ + \sum\limits_{t}\log D_X(x_t) + \sum\limits_{s}\log (1 - D_X(\widetilde{y}_s)).
\end{array} \end{equation} Meanwhile, the generators are learnt to minimize this adversarial constraint, aiming to fool the discriminators with synthetic frames. \textbf{Frame Cycle Consistency Constraint.} Moreover, to tackle the unpaired translation, a cycle consistency constraint on each frame is additionally exploited to penalize the difference between the primary input frame $x_t$/$y_s$ and its reconstructed frame $x^{rec}_t = G_Y(G_X(x_t))$/$y^{rec}_s = G_X(G_Y(y_s))$: \begin{equation}\label{Eq:Eq2}\small \begin{array}{l} {\mathcal{L}_{FC}(G_X, G_Y)} = \sum\limits_{t}\left\| x^{rec}_t - x_t\right\|_1 + \sum\limits_{s}\left\| y^{rec}_s - y_s\right\|_1. \end{array} \end{equation} By minimizing the frame cycle consistency constraint above, the frame translation is enforced to be cycle-consistent, aiming to capture the high-level appearance structure across domains. \subsection{Motion Guided Temporal Constraints}\label{sec:mo} Unlike Cycle-GAN, which only explores appearance structure at the frame level, an unpaired video translator should further exploit temporal continuity across frames to ensure that both the visual appearance and the motion between frames are realistic and consistent. An existing pioneer in unpaired video translation is Recycle-GAN \cite{bansal2018recycle}, which predicts future frames via a temporal predictor to enable cycle consistency across both domains and time, while leaving the inherent motion information unexploited. Here we explicitly model the motion across frames in the form of optical flow throughout the translation. Two temporal constraints, i.e., motion cycle consistency and motion translation, are especially devised to strengthen temporal continuity in synthetic videos with the guidance of motion reconstruction in the translation cycle and motion translation across domains.
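Before turning to the motion-level constraints, the frame cycle consistency objective of Eq.(\ref{Eq:Eq2}) can be written down directly as a sum of per-frame L$_1$ distances (a minimal numpy sketch over one domain; the second summand over $s$ is symmetric):

```python
import numpy as np

def frame_cycle_loss(frames, rec_frames):
    """L_FC over one domain: sum of L1 distances between each frame
    and its reconstruction through the translation cycle (Eq. 2)."""
    return sum(np.abs(r - x).sum() for x, r in zip(frames, rec_frames))

xs = [np.zeros((2, 2)), np.ones((2, 2))]
xs_rec = [np.zeros((2, 2)), np.full((2, 2), 0.5)]  # imperfect reconstruction
loss = frame_cycle_loss(xs, xs_rec)  # 4 pixels x |1 - 0.5| = 2.0
```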
\textbf{Motion Cycle Consistency Constraint.} To resolve the unpaired scenario of video translation, we go one step further and extend the cycle consistency constraint from single frames in Cycle-GAN to the motion between consecutive frames. Formally, given two consecutive frames ($x_{t}$ and $x_{t+1}$) from domain $X$, the forward translation cycle is encouraged to reconstruct the two frames ($x^{rec}_{t}$ and $x^{rec}_{t+1}$) with consistent optical flow. In other words, the estimated optical flow $f_{x^{rec}_t}$ between $x^{rec}_{t}$ and $x^{rec}_{t+1}$ should be similar to the primary optical flow $f_{x_t}$ between $x_{t}$ and $x_{t+1}$. Similarly, for two consecutive frames ($y_{s}$ and $y_{s+1}$) from domain $Y$, the backward translation cycle is enforced to be cycle-consistent on optical flow: $f_{y_s} \to f_{y^{rec}_s} \approx f_{y_s}$. Accordingly, the motion cycle consistency constraint is defined as the L$_1$ distance between the optical flows before and after the translation cycle: \begin{equation}\label{Eq:Eq3}\small \begin{array}{l} {\mathcal{L}_{MC}(G_X, G_Y)} = \sum\limits_{t}\sum\limits_{i}C^{(i)}_{x_t}\left\|f^{(i)}_{x^{rec}_t} - f^{(i)}_{x_t} \right\|_1 \\ \quad\quad\quad\quad\quad\quad\quad + \sum\limits_{s}\sum\limits_{i}C^{(i)}_{y_s}\left\| f^{(i)}_{y^{rec}_s}- f^{(i)}_{y_s}\right\|_1, \end{array} \end{equation} where $f^{(i)}_{x_t}$ denotes the 2-dimensional displacement vector for the $i$-th pixel in optical flow $f_{x_t}$. As in \cite{huang2017real}, we leverage two visibility masks $C_{x_t}$ and $C_{y_s}$ as weight matrices, where each entry $C^{(i)}_{x_t}, C^{(i)}_{y_s} \in \left[ 0,1 \right]$ represents the per-pixel confidence of the displacement vector $f^{(i)}_{x_t}$ in optical flow $f_{x_t}$: $1$ for pixels traceable by optical flow, and $0$ at occluded regions or near motion boundaries.
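The confidence-weighted flow distance of Eq.(\ref{Eq:Eq3}) can be sketched for a single frame pair as follows (a minimal numpy sketch; the visibility mask zeroes out occluded pixels exactly as described above):

```python
import numpy as np

def motion_cycle_loss(flow, flow_rec, mask):
    """Per-pair term of L_MC (Eq. 3): confidence-weighted L1 distance
    between the optical flow before and after the translation cycle.

    flow, flow_rec: H x W x 2 displacement fields.
    mask: H x W visibility confidences in [0, 1].
    """
    per_pixel = np.abs(flow_rec - flow).sum(axis=-1)  # L1 over (dx, dy)
    return float((mask * per_pixel).sum())

flow = np.ones((2, 2, 2))       # primary flow before the cycle
flow_rec = np.zeros((2, 2, 2))  # flow reconstructed after the cycle
mask = np.array([[1.0, 1.0],
                 [0.0, 0.0]])   # bottom row treated as occluded
loss = motion_cycle_loss(flow, flow_rec, mask)  # 2 visible pixels x 2.0 = 4.0
```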
Accordingly, by minimizing the motion cycle consistency constraint, the video translation is ensured to preserve the motion between real consecutive frames after the translation cycle, which in turn implicitly enhances the temporal continuity between synthetic consecutive frames. \textbf{Motion Translation Constraint.} The cycle consistency on motion only constrains the temporal coherence between synthetic frames in an unsupervised manner, but ignores the straightforward transfer of motion across domains. Nevertheless, the transfer of motion across domains has been seldom exploited for unpaired video translation, possibly because such motion translation needs pairs of optical flows for training, while in the unpaired setting, no paired video data is provided. One naive way to exploit motion across domains for video synthesis is to directly warp the synthetic frame with the source motion into the subsequent frame, as in \cite{ruder2016artistic,huang2017real}. This scheme pursues motion consistency across domains in a brute-force manner, regardless of the scene dynamics in the target. Instead, we design a novel motion translator to transfer optical flow from the source domain to the target domain, which captures temporal coherence tailored to the target domain. Such transferred optical flow via the motion translator can be further leveraged to guide video synthesis in the target domain, pursuing pixel-wise temporal continuity. Technically, given the optical flow $f_{x_t}$ between $x_{t}$ and $x_{t+1}$ from domain $X$, the motion translator $M_X$ is utilized to transform the primary optical flow $f_{x_t}$ into the transferred one $\widetilde{f}_{x_t} = M_X(f_{x_t})$ in domain $Y$. Note that the motion translators are implemented as the paired translator Pix2Pix \cite{isola2017image}. Each motion translator is constrained with an auxiliary motion consistency loss, aiming to correctly predict the optical flow in the target domain.
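The warping operator $W(\cdot,\cdot)$ used by both the naive scheme and our motion translation constraint can be illustrated with a simple backward warp. The paper warps with bilinear interpolation; the sketch below assumes integer displacements and clamped border sampling purely to keep the illustration short, and its sampling direction is one common convention rather than the exact implementation:

```python
import numpy as np

def warp(flow, frame):
    """Backward-warp `frame` with a dense displacement field.

    frame: H x W array; flow: H x W x 2 integer displacements (dx, dy).
    Each output pixel samples the input at (x + dx, y + dy), clamped
    to the image border. (The paper uses bilinear interpolation.)
    """
    H, W = frame.shape
    out = np.empty_like(frame)
    for y in range(H):
        for x in range(W):
            dx, dy = flow[y, x]
            sx = min(max(x + int(dx), 0), W - 1)
            sy = min(max(y + int(dy), 0), H - 1)
            out[y, x] = frame[sy, sx]
    return out

frame = np.arange(9.0).reshape(3, 3)
flow = np.zeros((3, 3, 2))
flow[..., 0] = 1.0  # every pixel samples its right-hand neighbour
warped = warp(flow, frame)
```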
Here we directly utilize the optical flow $f_{\widetilde{x}_{t}}$ between the corresponding synthetic frames in target domain as the ``pseudo" target optical flow for training motion translator. Similarly, with the input of optical flow $f_{y_s}$ from domain $Y$, another motion translator $M_Y$ produces the transferred optical flow $\widetilde{f}_{y_s} = M_Y(f_{y_s})$ in domain $X$, which is enforced to resemble the ``pseudo" target optical flow $f_{\widetilde{y}_{s}}$ in domain $X$. Thus, the auxiliary motion consistency loss is defined as L$_1$ distance between the transferred optical flow and ``pseudo" target optical flow: \begin{equation}\label{Eq:Eq4}\small \begin{array}{l} {\mathcal{L}_{AM}(M_X, M_Y)} = \sum\limits_{t}\left\| \widetilde{f}_{x_t}-f_{\widetilde{x}_{t}} \right\|_1 + \sum\limits_{s}\left\| \widetilde{f}_{y_s}-f_{\widetilde{y}_{s}} \right\|_1. \end{array} \end{equation} After that, the transferred optical flow $\widetilde{f}_{x_t}$/$\widetilde{f}_{y_s}$ is utilized to further warp the synthetic frame $\widetilde{x}_t$/$\widetilde{y}_s$ to the subsequent frame via bi-linear interpolation, leading to the warped frame $W(\widetilde{f}_{x_t}, \widetilde{x}_{t})$/$W(\widetilde{f}_{y_s}, \widetilde{y}_{s})$ in target domain at time $t+1$. Therefore, we define the motion translation constraint as the L$_1$ distance between the warped frame and the synthetic frame at time $t+1$: \begin{equation}\label{Eq:Eq5}\small \begin{array}{l} {\mathcal{L}_{MT}(G_X, G_Y)} = \sum\limits_{t}\sum\limits_{i}C^{(i)}_{x_t}\left\|W^{(i)}(\widetilde{f}_{x_t}, \widetilde{x}_{t}) - \widetilde{x}^{(i)}_{t+1}\right\|_1 \\ \quad\quad\quad\quad\quad\quad\quad\quad~~ + \sum\limits_{s}\sum\limits_{i}C^{(i)}_{y_s}\left\|W^{(i)}(\widetilde{f}_{y_s}, \widetilde{y}_{s}) - \widetilde{y}^{(i)}_{s+1}\right\|_1. 
\end{array} \end{equation} This motion translation constraint ensures that each synthetic frame is consistent with the warped version of the previous synthetic frame at the traceable points. As such, the pixel-wise temporal continuity among synthetic frames is strengthened. \subsection{Training and Inference}\label{subsubsec:opt} \textbf{Optimization.} The overall training objective of our Mocycle-GAN integrates the adversarial constraint, the cycle consistency constraints on frame and motion, and the motion translation constraint for generators and discriminators, plus the auxiliary motion consistency loss for motion translators. During training, we adopt an EM procedure to iteratively optimize the motion translators, and the generators \& discriminators. Specifically, in the \textbf{E-step}, we fix the parameters of the motion translators ($M_X$ and $M_Y$) and update the parameters of the generators ($G_X$ and $G_Y$) by minimizing the combination of the adversarial constraint and the three spatial/temporal constraints: \begin{equation}\label{Eq:Eq6}\small \begin{array}{l} \mathcal{L}(G_X, G_Y) = \mathcal{L}_{Adv} + \lambda_{FC} \cdot {\mathcal{L}_{FC}}(G_X, G_Y) \\ \quad\quad\quad\quad~~ + \lambda_{MC} \cdot {\mathcal{L}_{MC}}(G_X, G_Y) + \lambda_{MT} \cdot {\mathcal{L}_{MT}}(G_X,G_Y), \end{array} \end{equation} where $\lambda_{FC}$, $\lambda_{MC}$, and $\lambda_{MT}$ are tradeoff parameters. Meanwhile, the discriminators ($D_X$ and $D_Y$) are optimized by maximizing the adversarial constraint $\mathcal{L}_{Adv}$ in Eq.(\ref{Eq:Eq1}). In the \textbf{M-step}, we fix the parameters of the generators and discriminators, and update the motion translators by minimizing the auxiliary motion consistency loss ${\mathcal{L}_{AM}(M_X, M_Y)}$ in Eq.(\ref{Eq:Eq4}). We alternate the E-step and M-step in each training iteration until a convergence criterion is met. The detailed training process of our Mocycle-GAN is given in Algorithm \ref{ag:ag01}.
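The E-step/M-step alternation described above can be summarized in a few lines. The three `update_*` callables below are hypothetical single-step parameter updates standing in for the actual optimizers; only the alternation order is the point of the sketch:

```python
def train_mocycle_gan(batches, update_G, update_D, update_M, n_iters):
    """Schematic EM alternation: per iteration, the E-step updates the
    generators (Eq. 6) and then the discriminators (Eq. 1) with the
    motion translators fixed; the M-step updates the motion
    translators (Eq. 4) with the rest fixed."""
    for it in range(n_iters):
        batch = batches[it % len(batches)]
        update_G(batch)  # E-step: minimize L_Adv + FC + MC + MT terms
        update_D(batch)  # E-step: maximize L_Adv
        update_M(batch)  # M-step: minimize L_AM

# Record the call order with dummy update functions.
calls = []
train_mocycle_gan(
    batches=["b0", "b1"],
    update_G=lambda b: calls.append(("G", b)),
    update_D=lambda b: calls.append(("D", b)),
    update_M=lambda b: calls.append(("M", b)),
    n_iters=2,
)
```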
Note that in practice, the generators \& discriminators are pre-trained with the combined loss of adversarial constraint and cycle consistency constraints on frame \& motion. Next, we pre-train the motion translators with the auxiliary motion consistency loss. \begin{algorithm}[!tb]\scriptsize \caption{\small The training process of Mocycle-GAN}\label{ag:ag01} \begin{algorithmic}[1] \STATE \textbf{Input:} The number of maximum training iteration $N$; Initialize generators ($G_X$, $G_Y$), discriminators ($D_X$, $D_Y$), and motion translators ($M_X$, $M_Y$). \FOR{$n=1$ to $N$} \STATE Fetch input batch with sampled consecutive frame pairs $\{(x_t,x_{t+1}), (y_s, y_{s+1})\}$. \FOR {Each consecutive frame pair $(x_t,x_{t+1})$, $(y_s, y_{s+1})$} \STATE Generate synthetic frames $(\widetilde{x}_t,\widetilde{x}_{t+1}), (\widetilde{y}_s,\widetilde{y}_{s+1})$ and reconstructed frames $ (x^{rec}_t, x^{rec}_{t+1}), (y^{rec}_s, y^{rec}_{s+1})$ via generators ($G_X$, $G_Y$). \STATE Calculate the corresponding optical flow $f_{x_t}$, $f_{y_s}$, $f_{\widetilde{x}_t}$, $f_{\widetilde{y}_s}$, $f_{x^{rec}_t}$, $f_{y^{rec}_s}$ via FlowNet. \STATE Produce the transferred flow $\widetilde{f}_{x_t}$ and $\widetilde{f}_{y_s}$ via motion translators ($M_X$, $M_Y$). \ENDFOR \STATE -\textbf{E-step:} \STATE \hspace{7pt}Fix motion translators ($M_X$, $M_Y$). \STATE \hspace{7pt}Update generators ($G_X$, $G_Y$) w.r.t loss in Eq.(\ref{Eq:Eq6}). \STATE \hspace{7pt}Update discriminators ($D_X$, $D_Y$) w.r.t loss in Eq.(\ref{Eq:Eq1}). \STATE -\textbf{M-step:} \STATE \hspace{7pt}Fix generators ($G_X$, $G_Y$) and discriminators ($D_X$, $D_Y$). \STATE \hspace{7pt}Update motion translators ($M_X$, $M_Y$) w.r.t loss in Eq.(\ref{Eq:Eq4}). \ENDFOR \end{algorithmic} \end{algorithm} \textbf{Inference.} After the optimization of our Mocycle-GAN, we can obtain the learnt generator $G_X$ and motion translator $M_X$. 
During inference, given an input video ${\bf{x}} = \{x_t\}^T_{t=1}$, the simplest way to perform video translation is to directly employ the generator $G_X$ to convert ${\bf{x}}$ into the synthetic video $\widetilde{\bf{x}} = \{\widetilde{x}_t\}^T_{t=1}$ frame by frame. An alternative solution is to leverage the warped version of the previous synthetic frame, based on the transferred optical flow, to smooth the output: \begin{equation}\label{Eq:Eq9}\small \begin{array}{l} {\widetilde{x}_{t+1}} = \frac{G_X(x_{t+1}) + W(\widetilde{f}_{x_t}, \widetilde{x}_{t})}{2}. \end{array} \end{equation} However, for fair comparison to other image/video translation approaches, we adopt the simplest single-frame translation without any post-processing for evaluation in the experiments. \section{Experiments}\label{sec:EX} We empirically verify the merit of our Mocycle-GAN by conducting experiments on four different unpaired video translation scenarios, including video-to-labels, labels-to-video, four ambient condition transfers (day-to-night, night-to-day, day-to-sunset, sunset-to-day) on Viper \cite{richter2017playing}, and flower-to-flower on the Flower Video Dataset \cite{bansal2018recycle}. \subsection{Datasets and Experimental Settings} \textbf{Viper} is a popular visual perception benchmark that facilitates both low-level and high-level vision tasks, e.g., optical flow and semantic segmentation. It consists of videos from a realistic virtual world (i.e., GTA gameplay), which are collected while driving, riding, and walking in diverse ambient conditions (day, sunset, snow, rain, and night). Each frame (resolution: $1920 \times 1080$) is annotated with pixel-level labels, i.e., a segmentation label map. Following \cite{bansal2018recycle}, we split the 77 videos under diverse environmental conditions into 57 for training and 20 for testing. For video-to-labels and labels-to-video, we evaluate the translations between videos and segmentation label maps.
For ambient condition transfers, we consider the translation across different ambient conditions: day $\leftrightarrow$ night and day $\leftrightarrow$ sunset. \textbf{Flower Video Dataset} is a recently released dataset for video translation. This dataset includes time-lapse videos which depict the blooming or fading of various flowers, without any synchronization. The resolution of each video is 256 $\times$ 256. For flower-to-flower, we evaluate the translation between different types of flowers, aiming to align the high-level semantic content among them, e.g., the two flowers simultaneously bloom or fade at the same pace. \textbf{Implementation Details.} We implement our Mocycle-GAN in PyTorch \cite{paszke2017automatic}. For the generators, we follow the settings of \cite{zhu2017unpaired, bansal2018recycle} and adopt the encoder-decoder architecture \cite{johnson2016perceptual}. In particular, each generator is composed of two convolution layers (stride: 2) for down-sampling, six residual blocks \cite{he2016deep}, and two deconvolution layers for up-sampling. Each discriminator is built as the $70 \times 70$ PatchGAN in \cite{isola2017image}. For the motion translators, we adopt a similar architecture to the generator, modifying the input and output channels to 2, which enables the translation of optical flow across domains. In all experiments, we set the tradeoff parameters in Eq.(\ref{Eq:Eq6}) as $\lambda_{FC} = 10$, $\lambda_{MC} = 10$, and $\lambda_{MT} = 10$. During training, the batch size is set to 1. Adam \cite{kingma2014adam} is utilized to optimize the parameters of the generators, discriminators, and motion translators with initial learning rates of 0.0002, 0.0002, and 0.0001, respectively.
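All networks are optimized with Adam at the learning rates stated above. As a reminder of what one Adam step does, here is a minimal numpy sketch of the update rule (standard Adam of Kingma \& Ba with default $\beta_1 = 0.9$, $\beta_2 = 0.999$; the parameter and gradient values are made up for illustration):

```python
import numpy as np

def adam_step(param, grad, state, lr, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update with bias-corrected first/second moment estimates."""
    state["t"] += 1
    state["m"] = beta1 * state["m"] + (1 - beta1) * grad
    state["v"] = beta2 * state["v"] + (1 - beta2) * grad ** 2
    m_hat = state["m"] / (1 - beta1 ** state["t"])
    v_hat = state["v"] / (1 - beta2 ** state["t"])
    return param - lr * m_hat / (np.sqrt(v_hat) + eps)

# Learning rates from the paper: 2e-4 for generators/discriminators,
# 1e-4 for motion translators.
lr = {"G": 2e-4, "D": 2e-4, "M": 1e-4}
state = {"t": 0, "m": np.zeros(3), "v": np.zeros(3)}
w = np.ones(3)
w_new = adam_step(w, np.array([0.1, -0.2, 0.0]), state, lr["G"])
```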
\begin{table}[!tb]\scriptsize \centering \caption{\small Segmentation score (\%) of our Mocycle-GAN and other methods for video-to-labels translation on Viper.} \setlength{\tabcolsep}{2.8pt} \def1.2{1.2} \vspace{-0.15in} \label{table:vid2seg viper} \begin{tabular}{@{}c| l| c c c c c c } \Xhline{2\arrayrulewidth} Criterion& Approach & day & sunset & rain & snow & night & all \\ \hline\hline \multirow{5}*{\textbf{MP}} & Cycle-GAN \cite{zhu2017unpaired} &46.0 &68.7 & 41.1 &39.2 &32.2 &40.2 \\ & Recycle-GAN \cite{bansal2018recycle} &53.0 & 75.6 & 51.6 &55.5 &39.7 &58.8 \\ & Recycle-GAN$_{cmb}$ \cite{bansal2018recycle} &54.7 & 76.3 & 51.0 &57.0 &44.7 &60.1 \\ & Cycle-GAN$_{SF}$ \cite{huang2017real} &55.2 &77.1 &49.9 &59.6 &42.2 &62.3 \\ & Mocycle-GAN &\textbf{64.2} & \textbf{82.1} & \textbf{67.0} &\textbf{66.1} &\textbf{64.5} &\textbf{64.9} \\ \hline \multirow{5}*{\textbf{AC}} & Cycle-GAN \cite{zhu2017unpaired} &12.0 & 13.1 & 5.1 &9.5 &4.9 &9.6 \\ & Recycle-GAN \cite{bansal2018recycle} &13.5 & 16.8 & 9.9 &11.0 &8.4 &14.4 \\ & Recycle-GAN$_{cmb}$ \cite{bansal2018recycle} &15.3 & 15.7 & 10.9 &11.3 &10.2 &14.9 \\ & Cycle-GAN$_{SF}$ \cite{huang2017real} &16.2 &17.0 &10.7 &13.0 &9.7 &16.1\\ & Mocycle-GAN &\textbf{20.5} & \textbf{23.0} & \textbf{18.4} &\textbf{17.8} &\textbf{16.4} &\textbf{17.7} \\ \hline \multirow{5}*{\textbf{IoU}} & Cycle-GAN \cite{zhu2017unpaired} &7.4 & 9.9 & 3.1 &5.8 &2.9 &5.3 \\ & Recycle-GAN \cite{bansal2018recycle} &9.4 & 13.1 & 6.6 &7.8 &5.2 &10.5 \\ & Recycle-GAN$_{cmb}$ \cite{bansal2018recycle} &10.8 & 12.4 & 6.8 &8.1 &6.4 &11.0 \\ & Cycle-GAN$_{SF}$ \cite{huang2017real} &11.6 &13.4 &6.3 &9.0 &6.4 &11.0\\ & Mocycle-GAN &\textbf{15.2} & \textbf{18.1} & \textbf{11.9} &\textbf{12.3} &\textbf{11.6} &\textbf{13.2} \\ \Xhline{2\arrayrulewidth} \end{tabular} \vspace{-0.2in} \end{table} \textbf{Evaluation Metrics.} For video-to-labels translation, as in \cite{zhu2017unpaired, bansal2018recycle}, we adopt three standard segmentation metrics in 
\cite{long2015fully} for evaluation, i.e., Mean Pixel Accuracy (\textbf{MP}), Average Class Accuracy (\textbf{AC}), and Intersection-Over-Union (\textbf{IoU}). For labels-to-video translation, we follow \cite{zhu2017unpaired, bansal2018recycle} and report the \textbf{FCN score} on the target domain. The FCN score measures the quality of synthetic frames according to an off-the-shelf semantic segmentation network. Specifically, we pre-train a fully-convolutional network, i.e., FCN \cite{long2015fully}, on Viper. Next, the FCN model is utilized to predict the segmentation label map for each synthetic frame. By comparing the predicted segmentation label map against the ground-truth labels, we obtain the FCN scores with regard to the three standard segmentation metrics described above (i.e., MP, AC, and IoU). The intuition is that the higher the FCN scores, the more realistic the synthetic frames appear. \textbf{Compared Approaches.} We include the following state-of-the-art unpaired translation methods for performance comparison: $(1)$ \textbf{Cycle-GAN}\cite{zhu2017unpaired} is an unpaired image translator that pursues an inverse translation only at the frame level. $(2)$ \textbf{Recycle-GAN}\cite{bansal2018recycle} leverages a recurrent temporal predictor to generate future frames and pursues a new cycle consistency (i.e., the recycle loss) across domains and time for unpaired video-to-video translation. $(3)$ \textbf{Recycle-GAN}$_{cmb}$ \cite{bansal2018recycle} is an upgraded version of Recycle-GAN that combines the recycle loss in Recycle-GAN and the cycle loss in Cycle-GAN for training the video translator. $(4)$ \textbf{Cycle-GAN}$_{SF}$ remoulds a state-of-the-art video style transfer approach \cite{huang2017real} for unpaired video translation by equipping its short temporal constraint with the cycle loss in Cycle-GAN.
The basic idea of the short temporal constraint is to directly warp the synthetic frame with the source motion into the subsequent frame, aiming to enforce pixel-wise temporal consistency. $(5)$ \textbf{Mocycle-GAN} is the proposal in this paper. Please note that for fair comparison, all the baselines and our Mocycle-GAN utilize the same architecture for generators and discriminators. \begin{table}[!tb]\scriptsize \centering \caption{\small FCN score (\%) of our Mocycle-GAN and other methods for labels-to-video translation on Viper.} \setlength{\tabcolsep}{2.8pt} \def1.2{1.2} \vspace{-0.15in} \label{table:seg2vid viper} \begin{tabular}{@{}c |l| c c c c c c } \Xhline{2\arrayrulewidth} Criterion& Approach & day & sunset & rain & snow & night & all \\ \hline\hline \multirow{6}*{\textbf{MP}} & Cycle-GAN \cite{zhu2017unpaired} &36.3 & 48.7 & 23.7 &40.0 &22.8 &37.9 \\ & Recycle-GAN \cite{bansal2018recycle} &37.5 & 53.9 & 27.4 &42.7 &23.6 &41.3 \\ & Recycle-GAN$_{cmb}$ \cite{bansal2018recycle} &37.0 & 54.4 & 27.6 &40.8 &26.6 &43.5 \\ & Cycle-GAN$_{SF}$ \cite{huang2017real} &38.7 &57.0 &25.2 &42.1 &24.4 &44.6 \\ & Mocycle-GAN &\textbf{42.1} &\textbf{61.2} & \textbf{34.6} &\textbf{48.1} &\textbf{30.5} &\textbf{47.6} \\ \hline \multirow{6}*{\textbf{AC}} & Cycle-GAN \cite{zhu2017unpaired} &10.7 & 15.3 & 9.1 &11.4 &10.0 &10.2 \\ & Recycle-GAN \cite{bansal2018recycle} &12.3 & 14.9 & 10.0 &11.5 &11.1 &12.0 \\ & Recycle-GAN$_{cmb}$ \cite{bansal2018recycle} &12.7 & 15.6 & 10.1 &12.0 &11.8 &12.2 \\ & Cycle-GAN$_{SF}$ \cite{huang2017real} &13.2 &15.4 &9.7 &13.0 &10.4 &12.9 \\ & Mocycle-GAN &\textbf{15.4} & \textbf{17.6} & \textbf{12.6} &\textbf{14.9} &\textbf{16.5} &\textbf{16.0} \\ \hline \multirow{6}*{\textbf{IoU}} & Cycle-GAN \cite{zhu2017unpaired} &7.4 & 9.2 & 4.7 &6.2 &4.5 &6.1 \\ & Recycle-GAN \cite{bansal2018recycle} &8.1 & 10.0 & 5.5 &6.9 &4.7 &6.7 \\ & Recycle-GAN$_{cmb}$ \cite{bansal2018recycle} &8.3 & 10.2 & 5.5 &6.9 &5.6 &7.0 \\ & Cycle-GAN$_{SF}$ \cite{huang2017real}
&8.0 &10.4 &5.0 &7.0 &5.3 &7.4 \\ & Mocycle-GAN &\textbf{9.7} & \textbf{11.9} & \textbf{7.5} &\textbf{8.8} &\textbf{7.7} &\textbf{10.1} \\ \Xhline{2\arrayrulewidth} \end{tabular} \vspace{-0.23in} \end{table} \subsection{Performance Comparison and Analysis} \begin{figure*}[!tb] \centering {\includegraphics[width=0.9\textwidth]{viper.pdf}} \vspace{-0.30in} \caption{\small Examples of (a) video-to-labels and (b) labels-to-video results in the Viper dataset under various ambient conditions. The original inputs, the output results by different models, and the ground truth outputs are given.} \vspace{-0.18in} \label{fig:viper} \end{figure*} \textbf{Evaluation on Video-to-Labels.} In this scenario, the video translator takes a game scene video as input and outputs the corresponding segmentation label maps. The performance comparisons of different models on the video-to-labels translation task are summarized in Table \ref{table:vid2seg viper}. Overall, the results across the three segmentation metrics consistently indicate that our proposed Mocycle-GAN obtains better performance than state-of-the-art techniques. The results generally highlight the key advantage of exploring motion information for unpaired video translation, enforcing the synthetic videos to be both realistic in appearance and temporally continuous across frames. Specifically, by encouraging cycle consistency across domains and time via a spatio-temporal constraint, Recycle-GAN exhibits better performance than Cycle-GAN, which only pursues cycle consistency at the frame level. Moreover, by simultaneously utilizing the spatial constraint in Cycle-GAN and the spatio-temporal constraint in Recycle-GAN, Recycle-GAN$_{cmb}$ further boosts the performance. Different from Recycle-GAN$_{cmb}$, which enforces temporal coherence via future frame prediction, Cycle-GAN$_{SF}$ encourages pixel-wise temporal consistency by directly warping the synthetic frame with the source optical flow, and achieves better performance.
This confirms the effectiveness of modeling motion information in video synthesis. Nevertheless, the performances of Cycle-GAN$_{SF}$ are still lower than our Mocycle-GAN which further strengthens temporal continuity via motion cycle consistency and motion translation across domains. \begin{table}[]\scriptsize \centering \caption{\small Ablation study for each design (i.e., Motion Cycle Consistency (MC) and Motion Translation (MT)) in Mocycle-GAN for video-to-labels on Viper.} \setlength{\tabcolsep}{3.98pt} \def1.2{1.2} \vspace{-0.15in} \label{table:ablation-vid2seg} \begin{tabular}{@{}c| l| c c |c c c c c c } \Xhline{2\arrayrulewidth} Criterion& Approach &MC & MT &day & sunset & rain & snow & night & all \\ \hline\hline \multirow{3}*{\textbf{MP}} & Cycle-GAN + MC &$\surd$ &~ &60.2 & 81.1 & 61.3 &63.0 &50.7 &63.1 \\ & Cycle-GAN + MT &~ &$\surd$ &62.5 & 81.5 & 65.4 &64.6 &63.0 &63.0 \\ & Mocycle-GAN &$\surd$ &$\surd$ &\textbf{64.2} &\textbf{82.1} & \textbf{67.0} &\textbf{66.1} &\textbf{64.5} &\textbf{64.9} \\ \hline \multirow{3}*{\textbf{AC}} & Cycle-GAN + MC &$\surd$ &~ &18.2 & 21.3 & 15.7 &14.4 &12.2 &17.4 \\ & Cycle-GAN + MT &~ &$\surd$ &19.3 & 21.4 & 17.6 &17.4 &16.1 &17.2 \\ & Mocycle-GAN &$\surd$ &$\surd$ &\textbf{20.5} & \textbf{23.0} & \textbf{18.4} &\textbf{17.8} &\textbf{16.4} &\textbf{17.7} \\ \hline \multirow{3}*{\textbf{IoU}} & Cycle-GAN + MC &$\surd$ &~ &13.3 & 16.9 & 9.6 &10.4 &7.9 &12.9 \\ & Cycle-GAN + MT &~ &$\surd$ &14.4 & 17.1 & 11.5 &11.9 &11.1 &12.8 \\ & Mocycle-GAN &$\surd$ &$\surd$ &\textbf{15.2} & \textbf{18.1} & \textbf{11.9} &\textbf{12.3} &\textbf{11.6} &\textbf{13.2} \\ \Xhline{2\arrayrulewidth} \end{tabular} \vspace{-0.24in} \end{table} Figure \ref{fig:viper}(a) showcases five examples of video-to-labels results with different methods under various ambient conditions. As illustrated in the figure, our Mocycle-GAN obtains much more promising video-to-labels results. 
For instance, the majority categories, e.g., road (first row), cannot be well translated by the baselines. In contrast, even the minority classes such as car (third row) and building (fourth row) are translated nicely by our Mocycle-GAN. \textbf{Evaluation on Labels-to-Video.} In this scenario, given an input sequence of segmentation label maps, the video translator outputs a video that resembles a real game scene video. Table \ref{table:seg2vid viper} shows the results of the labels-to-video translation task on Viper. Our Mocycle-GAN performs consistently better than the other methods over the three metrics. Similar to the observations on the video-to-labels translation task, Recycle-GAN exhibits better performance than Cycle-GAN by synthesising future frames via the temporal predictor to explore cycle consistency across both domains and time. A further performance improvement is attained when combining Cycle-GAN and Recycle-GAN. In addition, Cycle-GAN$_{SF}$ explores motion across domains to directly constrain the temporal dynamics between synthetic frames with the source motion, and achieves better performance than Recycle-GAN$_{cmb}$. Furthermore, by steering unpaired video translation with the guidance from motion cycle consistency and motion translation across domains, our Mocycle-GAN boosts the performance over all three metrics. Figure \ref{fig:viper}(b) shows five examples of labels-to-video results under various ambient conditions. Clearly, our Mocycle-GAN generates more natural and vivid frames compared with the results of the baselines. Concretely, our results contain more realistic objects (e.g., road, tree, and car) with plenty of details, while the other methods always generate repeated patterns and fail to capture the details.
\begin{table}[]\scriptsize \centering \caption{\small Ablation study for each design (i.e., Motion Cycle Consistency (MC) and Motion Translation (MT)) in Mocycle-GAN for labels-to-video on Viper.} \setlength{\tabcolsep}{3.98pt} \def1.2{1.2} \vspace{-0.15in} \label{table:ablation-seg2vid} \begin{tabular}{@{}c |l |c c |c c c c c c } \Xhline{2\arrayrulewidth} Criterion& Approach &MC &MT & day & sunset & rain & snow & night & all \\ \hline\hline \multirow{3}*{\textbf{MP}} & Cycle-GAN + MC &$\surd$ &~ &40.3 & 58.6 & 29.5 &43.8 &27.9 &44.7 \\ & Cycle-GAN + MT &~ &$\surd$ &39.0 & 57.7 & 33.3 &46.3 &27.7 &47.0 \\ & Mocycle-GAN &$\surd$ &$\surd$ &\textbf{42.1} & \textbf{61.2} & \textbf{34.6} &\textbf{48.1} &\textbf{30.5} &\textbf{47.6} \\ \hline \multirow{3}*{\textbf{AC}} & Cycle-GAN + MC &$\surd$ &~ &14.5 & 16.3 & 11.0 &13.2 &14.7 &13.6 \\ & Cycle-GAN + MT &~ &$\surd$ &14.6 & 16.1 & 11.3 &13.9 &14.5 &14.5 \\ & Mocycle-GAN &$\surd$ &$\surd$ &\textbf{15.4} & \textbf{17.6} & \textbf{12.6} &\textbf{14.9} &\textbf{16.5} &\textbf{16.0} \\ \hline \multirow{3}*{\textbf{IoU}} & Cycle-GAN + MC &$\surd$ &~ &9.4 & 11.0 & 6.2 &7.3 &7.0 &7.6 \\ & Cycle-GAN + MT &~ &$\surd$ &9.2 & 11.0 & 6.5 &8.2 &6.7 &8.6 \\ & Mocycle-GAN &$\surd$ &$\surd$ &\textbf{9.7} &\textbf{11.9} &\textbf{7.5} & \textbf{8.8} &\textbf{7.7} &\textbf{10.1} \\ \Xhline{2\arrayrulewidth} \end{tabular} \vspace{-0.2in} \end{table} \textbf{Ablation Study.} In this section, we further study how each design in our Mocycle-GAN affects the overall performance. Motion Cycle consistency (\textbf{MC}) exploits the cycle consistency on motion to enforce the reconstruction of motion through translation cycle. Motion Translation (\textbf{MT}) transfers the optical flow across domains and further strengthens the temporal continuity in target domain by steering video translation with the transferred optical flow. 
Table \ref{table:ablation-vid2seg} and Table \ref{table:ablation-seg2vid} detail the performance improvements from considering different designs for video-to-labels and labels-to-video on Viper, respectively. In particular, by further integrating the motion cycle consistency and motion translation constraints into Cycle-GAN, Cycle-GAN + MC and Cycle-GAN + MT exhibit better performance than Cycle-GAN. Combining the two motion-guided temporal constraints, our Mocycle-GAN obtains the best performance on both video-to-labels and labels-to-video translations. Moreover, to fully verify the effectiveness of the devised motion translation constraint, we compare Cycle-GAN + MT against the best competitor Cycle-GAN$_{SF}$, which also exploits motion information across domains. Unlike Cycle-GAN$_{SF}$, which enforces temporal coherence among synthetic frames in a brute-force manner, Cycle-GAN + MT elegantly transfers optical flow across domains to model the temporal coherence in the target domain, and thus achieves better performance. Figure \ref{fig:MT} further showcases two examples of motion translation in video-to-labels. As illustrated in the figure, the optical flows in the source and target domains are substantially different, and the transferred optical flow obtained by our motion translator closely matches the ground truth optical flow in the target. The results again confirm the importance of transferring motion across domains for video translation. \begin{figure}[!tb] \vspace{-0.05in} \centering {\includegraphics[width=0.44\textwidth]{night2day.pdf}} \vspace{-0.28in} \caption{\small Examples of night-to-day results in the Viper dataset. The original inputs and the output results by different models are given.
Each row denotes one sequence of frames.} \label{fig:environment} \vspace{-0.3in} \end{figure} \begin{figure}[!tb] \centering {\includegraphics[width=0.43\textwidth]{flow_translation.pdf}} \vspace{-0.253in} \caption{\small Examples of motion translation results in video-to-labels. From left to right: Source frame overlay, optical flow in source, transferred optical flow via motion translator, ground truth optical flow in target, and ground truth target frame overlay.} \label{fig:MT} \vspace{-0.25in} \end{figure} \subsection{Other Video Translations} \textbf{Ambient Condition Transfer.} As a universal unpaired video translator, we test our Mocycle-GAN on ambient condition transfers, which explore the translation between different ambient conditions. Figure \ref{fig:environment} shows the translated videos by our Mocycle-GAN and the other baselines on the night-to-day task. As depicted in the figure, the baselines all generate frames whose overall color is somewhat bleak. In contrast, the color of our results is much brighter, which better matches the style of day-time videos. Besides, our Mocycle-GAN takes advantage of exploring both motion cycle consistency and motion translation, and thus achieves more realistic and temporally consistent videos than the other methods. \textbf{Flower-to-Flower.} We further evaluate our Mocycle-GAN on flower-to-flower, which considers the translation between different flowers. Examples of translated videos by different methods are shown in Figure \ref{fig:flowrer}. Similar to the observations for ambient condition transfer, our Mocycle-GAN generates the most realistic and temporally continuous frames, where the target flower blooms and fades in sync with the source flower. This again validates the effectiveness of guiding video translation with motion information. \begin{figure}[!tb] \vspace{-0.05in} \centering {\includegraphics[width=0.475\textwidth]{flower.pdf}} \vspace{-0.435in} \caption{\small Examples of flower-to-flower results.
The original inputs and the output results by different models are given. Each row denotes one sequence of frames.} \vspace{-0.18in} \label{fig:flowrer} \end{figure} \textbf{Human Evaluation.} We additionally conducted a human study to quantitatively evaluate Mocycle-GAN against three baselines, i.e., Cycle-GAN, Recycle-GAN$_{cmb}$, and Cycle-GAN$_{SF}$ on ambient condition transfer and flower-to-flower tasks. For each task, we invited 10 labelers and randomly selected 80 video clips from the test set for human evaluation. We showed each input video clip with two translated results (generated by our Mocycle-GAN and one baseline) at a time and asked the labelers: which one looks more realistic and natural? According to all labelers' feedback, we measure the human preference score of one method as the percentage of its translation results that are preferred. Table \ref{table:human study} shows the results of the human study. Clearly, our Mocycle-GAN is the winner on both translation tasks. \begin{table}[]\scriptsize \centering \caption{\small Human preference score ($\%$) on translation quality for ambient condition transfer and flower-to-flower.} \setlength{\tabcolsep}{5pt} \def1.2{1.2} \vspace{-0.15in} \label{table:human study} \begin{tabular}{@{}l| c c } \Xhline{2\arrayrulewidth} Human preference score & Ambient Condition Transfer & Flower-to-Flower \\ \hline\hline Mocycle-GAN / Cycle-GAN &\textbf{82.5} / 17.5 & \textbf{77.5} / 22.5 \\ Mocycle-GAN / Recycle-GAN$_{cmb}$ &\textbf{73.8} / 26.2 & \textbf{72.5} / 27.5 \\ Mocycle-GAN / Cycle-GAN$_{SF}$ &\textbf{66.3} / 33.7 &\textbf{88.8} / 11.2 \\ \Xhline{2\arrayrulewidth} \end{tabular} \vspace{-0.23in} \end{table} \section{Conclusions} We have presented the Motion-guided Cycle GAN (Mocycle-GAN) architecture, which explores both appearance structure and temporal continuity for video-to-video translation in an unsupervised manner.
In particular, we study the problem from the viewpoint of integrating motion estimation into an unpaired video translator. To verify our claim, we devise three types of spatial/temporal constraints: the adversarial constraint discriminates between synthetic and real frames in an adversarial manner, and thus enforces each synthetic frame to be realistic in appearance; the frame and motion cycle consistency constraints encourage the reconstruction of both the appearance structure in frames and the temporal continuity in motion; the motion translation constraint transfers motion across domains, which further strengthens the temporal continuity. Extensive experiments conducted on video-to-labels and labels-to-video translation validate our proposal and analysis. More remarkably, the qualitative results and human study on additional translations, e.g., flower-to-flower and ambient condition transfer, demonstrate the efficacy of Mocycle-GAN. \textbf{Acknowledgments.} This work was supported in part by NSFC projects 61872329 and 61572451. \bibliographystyle{ACM-Reference-Format}
\section{Introduction} One of the most well-studied families of problems in algorithmic graph theory is that of so-called network flow problems. Such problems originate from the seemingly simple idea that one wishes to move from one location to another in a network, while minimizing some cost function. From this first idea, a wealth of natural problems were derived, where one usually seeks to transport a set of items from a prescribed set of sources to a not-necessarily predetermined set of target locations. In this paper, we study the problem known as $k$-{\sc Sink Location} on dynamic networks, where one seeks to find a set of locations in a network which minimizes the amount of time required to evacuate all the supply located on each vertex of the graph to these locations. A dynamic network is a graph with a prescribed set of sources, and nonnegative capacities and integral transit times for every edge. A similar problem, called {\sc Quickest Transshipment}, was studied by Hoppe and Tardos~\cite{HT00}. In this problem, one is given a set of sinks together with the sources and each sink has a demand value. The task is then to send the supply from the sources to the sinks as quickly as possible, in such a way that each sink receives exactly as much supply as its demand value. They showed that the problem is polynomial-time solvable in the case where the dynamic network is directed, which can easily be seen to imply polynomial-time solvability of the $k$-{\sc Sink Location} problem on directed graphs, for every fixed $k$. Note that polynomial-time solvability for the directed case does not readily imply solvability for the undirected case. This is due to the fact that while in a directed network an optimal solution can only be located at a vertex, an optimal solution in an undirected network may be located at any point along an edge.
{\em Related work.} For the 1-{\sc Sink Location} problem, Kamiyama, Katoh and Takizawa~\cite{KKT06,KKT09} gave efficient algorithms for several cases where the structure of the network satisfies requirements regarding the length of the edges and the structure of the given graph. Mamada et al.~\cite{MUMF06} gave an $O(n\cdot \log^2{n})$ algorithm for the case where the input graph is a tree. Most of the recent work on the {\sc Sink Location} problem has considered computationally harder variants of the problem on restricted graph classes. In~\cite{HGK14}, Higashikawa, Golin and Katoh considered the generalized version of the problem where one seeks not only to find a single sink, but some given number $k$, when the input graph is an undirected path. They showed that this problem can be solved in $O(kn)$ time in the so-called minimax setting, and $O(n^2 \cdot \min\{k,2^{\sqrt{\log{k} \log\log{n}}}\})$ in the minisum setting. In~\cite{HGK14a}, the same authors considered the minimax regret version of the problem when the input graph is a tree and the edges have uniform capacities, and showed that the problem can be solved in $O(n^2 \log^2 n)$ time. As an intermediate result, they provide an $O(n \log n)$ algorithm to find a sink location that minimizes the evacuation time. {\em Some definitions.} We now define formally the notion of dynamic network and the $k$-{\sc Sink Location} problem. A {\em dynamic network} ${\cal N} = (G, c, \tau, S)$ consists of a graph $G=(V,E)$ where each edge $e\in E(G)$ is given a {\em capacity} $c(e)\in\mathbb{N}$ and a {\em transit time} $\tau(e)\in \mathbb{N}^+$, and a prescribed set of {\em sources} $S\subseteq V$. The notion of dynamic network was first introduced by Ford and Fulkerson~\cite{FF62}. In the $k$-{\sc Sink Location} problem, one is given a dynamic network ${\cal N}$ together with a supply value $\sigma(u)\in \mathbb{N}$ for each vertex $u$ of the set $S$ of sources of ${\cal N}$.
The task is then to find $k$ positions $X=\{x_1,\ldots,x_k\}$ in the graph, called {\em sinks}, which minimize the amount of time required to send the supply $\sigma(u)$ from each vertex $u\in S$ to the positions $X$. A position $x$ is defined as a triple $(uv,\tau(ux),\tau(vx))$, where $uv$ is an edge of $G$, $\tau(pq)\in\mathbb{N}$ represents the time required to travel from position $p$ to $q$, and $\tau(ux)+\tau(vx)=\tau(uv)$. Observe that the number of positions for every edge $e$ is exactly $\tau(e)+1$, and can therefore be exponentially large in the size of the input graph $G$. In the {\sc Quickest Transshipment} problem, we are also given a function $\sigma: S \rightarrow \mathbb{Z}\setminus \{0\}$. For every vertex $s\in S$, if $\sigma(s) > 0$ then $s$ is called a {\em source}, otherwise it is called a {\em sink}. The question is then to send all the supply from sources to the sinks in a minimum amount of time, in such a way that each sink $s$ receives exactly $-\sigma(s)$ units of supply. Note that feasibility requires $\sum_{s\in S}\sigma(s)=0$. Hoppe and Tardos~\cite{HT00} proved that {\sc Quickest Transshipment} can be solved in polynomial time on directed graphs. For additional terminology and notation, refer to the monograph by Diestel~\cite{Die05}. {\em Our contribution.} We study the computational complexity of the $k$-{\sc Sink Location} problem on general undirected graphs and prove the following result: \begin{theorem} \label{thm:FPTAS} The $k$-{\sc Sink Location} problem admits an FPTAS on undirected dynamic networks for every fixed $k$. \end{theorem} A parameterized problem is said to be FPT by some parameter $k$ if there is an algorithm that solves the problem in time $f(k)\cdot n^{O(1)}$. Intuitively, a $W[1]$-hard (and, a fortiori, a $W[2]$-hard) problem is a problem that is unlikely to admit an FPT algorithm. We refer the reader to~\cite{FG06,Nie06} for more information about parameterized complexity and algorithms.
We complement Theorem~\ref{thm:FPTAS} by showing that it is unlikely to be significantly improved: \begin{theorem} \label{thm:W-hard} The $k$-{\sc Sink Location} problem is $W[2]$-hard (and hence $W[1]$-hard) when parameterized by $k$. \end{theorem} \section{Polynomial-time approximation scheme} In this section, we prove our main result, namely that the $k$-{\sc Sink Location} problem admits an FPTAS on general undirected graphs, for every fixed $k$. To that end, we will first need to show that, given an instance $({\cal N}, \sigma)$ of the $k$-{\sc Sink Location} problem and a set of $k$ positions $X$ in $G$, one can compute the minimum amount of time required to send all the supply to $X$. We note the following result of Hoppe and Tardos~\cite{HT00}: \begin{theorem}[\cite{HT00}] The {\sc Quickest Transshipment} problem can be solved in polynomial time on directed graphs. \end{theorem} Recall that in the {\sc Quickest Transshipment} problem, one is given the set of sinks $X\subseteq V(G)$, and each sink $x\in X$ is given a demand value that must be met exactly. We show that the $k$-{\sc Sink Location} problem on undirected graphs can be reduced to the {\sc Quickest Transshipment} problem on directed graphs when the set of sinks is given. This will later allow us to use Hoppe and Tardos' algorithm to evaluate the time required to send all the supply to a given set of sinks. \begin{lemma} \label{lem:reduction} There is an algorithm that, given an instance $({\cal N}, \sigma)$ of the $k$-{\sc Sink Location} problem where $G$ is undirected and a set of $k$ positions $X$ in $G$ such that no two positions in $X$ lie on the same edge, computes the minimum amount of time required to send all supply to $X$ and runs in polynomial time. \end{lemma} \begin{proof} We reduce our problem to {\sc Quickest Transshipment} on directed graphs. Since Hoppe and Tardos~\cite{HT00} proved that the problem is polynomial-time solvable in that case, this will immediately imply our lemma.
Given an instance $({\cal N}, \sigma)$ of the $k$-{\sc Sink Location} problem with ${\cal N}=(G,c,\tau,S)$, where $G$ is undirected, and a set of positions $X=\{x_1,\ldots,x_k\}$ in $G$ such that $x_i=(u_iv_i,\tau_i^u,\tau_i^v)$, we create an instance $({\cal N}', \sigma')$ of the {\sc Quickest Transshipment} problem with ${\cal N}'=(G',c',\tau',S')$, where $G'=(V',E')$ is directed. Our construction is as follows: \begin{itemize} \item $V'=V \cup X \cup \{s^*\}$, with $X=\{x_1,\ldots,x_k\}$; \item $S'=S\cup \{s^*\}$; \item For every edge $ww'\in E\setminus \bigcup_{i=1}^k\{u_iv_i\}$, we create~2 new opposite edges $ww'$ and $w'w$ such that $c'(ww')=c'(w'w)=c(ww')$ and $\tau'(ww')=\tau'(w'w)=\tau(ww')$; \item For every edge $u_iv_i$ on which the position $x_i\in X$ lies, we replace $u_iv_i$ with~4 edges $u_ix_i,x_iu_i,v_ix_i,x_iv_i$ such that $c'(u_ix_i)=c'(x_iu_i)=c'(v_ix_i)=c'(x_iv_i)=c(u_iv_i)$, $\tau'(u_ix_i)=\tau'(x_iu_i)=\tau_i^u$ and $\tau'(v_ix_i)=\tau'(x_iv_i)=\tau_i^v$; \item We add edges $x_is^*$ for every $1\leq i\leq k$ and set $c'(x_is^*)=\sum_{s\in S}\sigma(s)$ and $\tau'(x_is^*)=0$; \item For every vertex $w\in S$, $\sigma'(w)=\sigma(w)$; $\sigma'(x_i)=0$ for every $1\leq i\leq k$; and $\sigma'(s^*)=-\sum_{s\in S}\sigma(s)$. \end{itemize} We now claim that the minimum amount of time required to send all the supply to $X$ in $({\cal N}, \sigma)$ is equal to the minimum amount of time required in $({\cal N}', \sigma')$. The fact that any solution in $({\cal N}, \sigma)$ corresponds to an equivalent solution in $({\cal N}', \sigma')$ follows from the fact that in $({\cal N}, \sigma)$, an edge is never traversed in both directions at a given time, and the length and capacity of every edge are the same as in $({\cal N}', \sigma')$. Similarly, for every routing scheme in $({\cal N}', \sigma')$, there exists an equivalent routing that can be completed in the same amount of time where, at any given time, at most one edge out of every pair of opposite edges has non-zero flow in the routing.
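The construction above can be written down directly. The following sketch (plain dict-based data structures; no particular flow library and no function names from the paper are assumed) builds the directed network ${\cal N}'$ from an undirected network and a set of sink positions:

```python
def build_transshipment_instance(edges, cap, tau, sources, sigma, positions):
    """Build the directed network N' = (G', c', tau', S') of the reduction.

    edges:     list of undirected edges (u, v)
    cap, tau:  dicts mapping (u, v) -> capacity / transit time
    sources:   list of source vertices; sigma: dict of their supplies
    positions: dict mapping an edge (u, v) -> (x, tau_u, tau_v) for edges
               carrying a sink position x, with tau_u + tau_v = tau[(u, v)]
    Returns (arcs, c2, t2, sigma2), with directed arcs and a super-sink 's*'.
    """
    arcs, c2, t2 = [], {}, {}
    total = sum(sigma[s] for s in sources)
    for (u, v) in edges:
        if (u, v) in positions:
            x, tu, tv = positions[(u, v)]
            # Replace uv by four arcs through the sink position x.
            for a, b, t in [(u, x, tu), (x, u, tu), (v, x, tv), (x, v, tv)]:
                arcs.append((a, b)); c2[(a, b)] = cap[(u, v)]; t2[(a, b)] = t
        else:
            # Two opposite arcs with the original capacity and transit time.
            for a, b in [(u, v), (v, u)]:
                arcs.append((a, b)); c2[(a, b)] = cap[(u, v)]; t2[(a, b)] = tau[(u, v)]
    # Zero-time arcs from every sink position to the super-sink s*.
    for x, _, _ in positions.values():
        arcs.append((x, 's*')); c2[(x, 's*')] = total; t2[(x, 's*')] = 0
    sigma2 = {s: sigma[s] for s in sources}
    sigma2['s*'] = -total          # s* demands all the supply
    return arcs, c2, t2, sigma2
```

The resulting instance can then be handed to any implementation of a quickest-transshipment solver.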
This concludes the proof of the lemma. \qed \end{proof} We are now ready to describe our FPTAS for the $k$-{\sc Sink Location} problem on undirected graphs. Roughly speaking, we will ``guess'' near-optimal positions for the $k$ sinks by performing sampling at regular intervals over each edge of $G$. One can then reduce the approximation ratio by increasing the number of sampling points. In this section, given an instance of the $k$-{\sc Sink Location} problem on undirected graphs, we denote by $OPT(X)$ the minimum amount of time required to send all supplies to the set of positions $X$, and $OPT=\min\{OPT(X)\mid X=\{x_1,\ldots,x_k\}$, where each $x_i=(u_iv_i,\tau(u_ix_i),\tau(x_iv_i))$ satisfies $u_iv_i\in E$ and $\tau(u_ix_i)+\tau(x_iv_i)=\tau(u_iv_i)\}$. \begin{proof}[of Theorem~\ref{thm:FPTAS}] We describe an algorithm that takes as input an instance $({\cal N},\sigma)$ of the $k$-{\sc Sink Location} problem and $\varepsilon>0$ and returns a set of $k$ positions $X$ in $G$ such that $OPT(X) \leq (1+\varepsilon)\cdot OPT$. Let us define $t_e=\max\{1, \left\lfloor\varepsilon\cdot \tau(e)\right\rfloor\}$ for every $e\in E$. We first define a set ${\cal X}$ of positions in the following way: ${\cal X}$ consists of all the vertices of $G$, together with positions $X_{uv}=\{x_1,\ldots,x_{\ell_{uv}}\}$ for every $uv\in E$, with $\ell_{uv}=\min\{\tau(uv), \left\lceil \frac{1}{\varepsilon} \right\rceil\}$. Observe that $|{\cal X}| \leq |V| + \frac{|E|}{\varepsilon}$. Our algorithm then tries every possible set $X$ of $k$ positions in ${\cal X}$, computes $OPT(X)$ using Lemma~\ref{lem:reduction}, and returns $\min\{OPT(X) \mid X\subseteq {\cal X} \wedge |X|=k\}$. Our algorithm runs in time ${|{\cal X}| \choose k} \cdot H(n)$, where $H(n)$ is the running time of Hoppe and Tardos' algorithm. The running time of our algorithm is then $O((|V|+\frac{|E|}{\varepsilon})^k \cdot H(n))$, as desired.
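The candidate set ${\cal X}$ just described is easy to enumerate. A minimal sketch, assuming the $\ell_{uv}$ sampling positions on an edge are placed at roughly regular integral distances (the exact placement is an illustrative choice, not fixed by the text):

```python
import math

def candidate_positions(vertices, edges, tau, eps):
    """Enumerate the candidate sink set of the FPTAS: all vertices of G,
    plus ell_uv = min(tau(uv), ceil(1/eps)) sampling positions per edge."""
    cal_X = [('vertex', v) for v in vertices]
    for (u, v) in edges:
        ell = min(tau[(u, v)], math.ceil(1.0 / eps))
        for i in range(1, ell + 1):
            # integral distance from u of the i-th sampling position
            d = round(i * tau[(u, v)] / (ell + 1))
            cal_X.append(('edge', (u, v), d))
    return cal_X
```

The size bound $|{\cal X}| \leq |V| + |E|/\varepsilon$ follows directly, since each edge contributes at most $\lceil 1/\varepsilon \rceil$ positions.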
To complete the proof, it only remains to show that there exists a set $X$ of $k$ positions in ${\cal X}$ such that $OPT(X)\leq(1+\varepsilon)\cdot OPT$. Consider a set of $k$ positions $X^*$ in $G$ such that sending all the supply in $G$ to positions in $X^*$ takes time $OPT$, i.e., $X^*$ is an optimal set of positions in $({\cal N},\sigma)$. First, we show that we may assume, without loss of generality, that at most~2 positions in $X^*$ lie on the same edge $uv$, and if exactly~2 positions of $X^*$ lie on $uv$, then these positions are exactly $u$ and $v$. Indeed, observe first that we may safely assume that no edge $uv$ contains more than~2 positions of $X^*$, since every unit of supply that is sent to a sink on $uv$ has to pass through either the position $x$ in $X^*$ closest to $u$, or the position $y$ closest to $v$. Note that $u=x$ and $v=y$ may happen. Therefore, we may remove all positions of $X^*$ that lie on $uv$ other than $x$ and $y$. Additionally, since every unit of supply sent to $x$ and $y$ has to pass through either $u$ or $v$ in order to reach a sink, we may safely replace $x$ and $y$ with $u$ and $v$ in $X^*$, without increasing the total amount of time required to send the supply to the sinks. Consider now the set $X\subseteq {\cal X}$ such that for every position $x\in X^*$, we add $x$ to $X$ if $x\in{\cal X}$, and otherwise we add the position $x'$ in ${\cal X}$ closest to $x$. Note that if $x\not\in {\cal X}$, then $x\not\in V$, and therefore $x$ lies on an edge, between two positions of ${\cal X}$. In case $x$ is equidistant from these two positions, we choose one of them arbitrarily. Moreover, since every edge $uv$ contains either at most~1 position of $X^*$ or both $u$ and $v$, no two distinct positions of $X^*$ correspond to the same position $x'$ in $X$, and hence, each position $x\in X^*$ is associated with a position $x'\in X$ in a bijective manner.
We now claim that $X$ satisfies $OPT(X)\leq(1+\varepsilon)\cdot OPT$, as desired. To prove this claim, we show that any routing to $X^*$ that can be achieved in time $OPT$ can be transformed into a new routing to $X$ that can be achieved in time at most $OPT+\max\{\frac{t_e}{2} \mid e\in E \mbox{ and some } x\in X^* \mbox{ lies on } e\}$. This immediately follows from the fact that for every pair of positions $x$ and $x'$, the supply sent to $x$ either passes through $x'$, in which case it can simply stop there, or it can be sent to $x'$ as soon as it reaches $x$, without violating the capacity constraint. Since $x'$ is chosen to be closest to $x$ among the positions in ${\cal X}$, and the distance between two consecutive positions of ${\cal X}$ lying on an edge $e$ is at most $t_e$, we obtain that the supply reaches $x'$ in the new routing at most $\frac{t_e}{2}$ units of time after it reaches $x$. Note that, if we denote by $OPT(x)$ the time at which the last unit of supply reaches $x$ in the original routing, and $OPT'(x')$ the time at which the last unit of supply reaches $x'$ in the modified routing, we have for every pair of positions $x,x'$: \[OPT'(x') \leq OPT(x)+\frac{t_e}{2},\] where $e$ is the edge that contains $x$ and $x'$. Moreover, observe that if $x\in {\cal X}$, then $x'=x$ and $OPT'(x')=OPT(x)$, and if $x'\neq x$ then $t_e = \left\lfloor\varepsilon\cdot \tau(e)\right\rfloor$. Therefore, we have \[OPT'(x')\leq OPT(x) + \frac{\varepsilon\cdot \tau(e)}{2}.\] Finally, observe that if $OPT'(x')\neq OPT(x)$, we may assume without loss of generality that $x\not\in \{u,v\}$, and hence there is at least~1 unit of supply reaching $x$ in the original routing that passes through $u$ and at least~1 unit that passes through $v$.
Hence, for every position $x\in X^*$ lying on edge $e$: \[OPT(x) \geq \frac{\tau(e)}{2},\] which, combined with the previous inequality, in turn implies \[OPT'(x') \leq OPT(x) + \varepsilon \cdot OPT(x) = (1+\varepsilon)\cdot OPT(x).\] Since this inequality holds for every pair of positions $x$ and $x'$, we obtain the following inequality for the global solutions $OPT$ and $OPT'$: \[OPT'\leq (1+\varepsilon)\cdot OPT.\] This concludes the proof of our main theorem. \qed \end{proof} \section{Hardness of $k$-{\sc Sink Location} parameterized by $k$} In this section, we provide a simple reduction from the well-known $W[2]$-complete problem $k$-{\sc Hitting Set}. In this problem, one is given as input a ground set $U$ and a family of sets ${\cal X}$, and the task is to find a subset $U'$ of $U$ containing at most $k$ elements, such that every set in ${\cal X}$ contains at least~1 element of $U'$. \begin{theorem} The $k$-{\sc Sink Location} problem is $W[2]$-hard parameterized by $k$ on undirected graphs, even when all the edges in the input graph $G$ have unit length and capacity. \end{theorem} \begin{proof} Given an instance $(U,{\cal X})$ of {\sc Hitting Set}, we build a graph $G=(V,E)$ such that $V=U\cup{\cal X}$, and two vertices $x\in U$ and $y\in{\cal X}$ are made adjacent whenever $x\in y$. We then set $c(e)=\tau(e)=1$ for every edge $e\in E$ and add exactly~1 unit of supply to each vertex in ${\cal X}$. We now claim that $(U,{\cal X})$ admits a hitting set of size $k$ if and only if there exist $k$ sinks in $G$ to which all the supply can be sent in exactly~1 unit of time. For the forward direction, observe that if $(U,{\cal X})$ has a hitting set of size $k$, then choosing those $k$ elements of $U$ as sinks allows all the supply to be sent within a single unit of time.
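The graph construction of this reduction can be sketched in a few lines (a Hitting Set instance is given here as a ground set and a list of sets; the function names are our own):

```python
def hitting_set_to_sink_location(U, families):
    """Build the unit-length, unit-capacity instance of the reduction:
    one vertex per element of U and one per set in families, an edge
    whenever the element belongs to the set, and one unit of supply on
    each set-vertex."""
    set_vertices = [frozenset(f) for f in families]
    edges = [(x, S) for S in set_vertices for x in S]
    cap = {e: 1 for e in edges}
    tau = {e: 1 for e in edges}
    supply = {S: 1 for S in set_vertices}
    return edges, cap, tau, supply

# A hitting set of size k exists iff k sinks placed on U-vertices can
# receive all supply in one time unit, i.e. iff the sinks hit every set.
def is_hitting_set(sinks, families):
    return all(any(x in f for x in sinks) for f in families)
```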
For the converse direction, observe that since the vertices of ${\cal X}$ form an independent set and every edge in $E$ has length~1, every sink that lies on an edge $uv$ with $u\in U$, but not at $u$, will only be able to receive supply from $v$. Hence, we may assume that all the sinks lie on vertices of $U$. It is then clear that the $k$ sinks must be adjacent to every vertex of ${\cal X}$, and therefore form a hitting set of $(U,{\cal X})$. \qed \end{proof} \section{Conclusion} In this paper, we proved that the $k$-{\sc Sink Location} problem admits an FPTAS on general undirected graphs for every fixed $k$, but that, on the negative side, it is $W[2]$-hard when $k$ is not fixed, but used as a parameter instead. Two natural questions immediately follow. The first of these two questions is whether the $k$-{\sc Sink Location} problem can be solved in polynomial time for every fixed $k$. The second is whether the problem admits an approximation scheme when parameterized by $k$, i.e., whether it can be solved in time $f(k)\cdot poly(n,\frac{1}{\varepsilon})$ with approximation ratio $1+\varepsilon$, for some function $f$.
\section{Introduction} Type Ia supernovae (SNe~Ia) have been used as ``standard candles'' to estimate the distance to galaxies in cosmology. \citet{phi93law} found a significant correlation between their absolute magnitude at maximum, $M$, and decay rate, and proposed that a better distance indicator can be obtained by calibrating it. As well as the decay rate, the observed color also exhibits a clear correlation with $M$. This is mainly due to the interstellar extinction in both their host galaxies and our own, while it has also been proposed that there is a variation in the intrinsic color of SNe~Ia at maximum (\cite{con07col,fol11vel}). In addition to these two, a number of variables have been proposed as explanatory variables of $M$. They are, for example, the equivalent widths, velocities, or depths of absorption lines, or their ratios (for a review, see \cite{bsnip3}). The search for a good set of variables, in other words, the ``model,'' has recently been intensified, including arbitrary ratios of the fluxes in spectra. Using the 58 objects observed by the Nearby Supernova Factory, \citet{bai09frat} report that the model with a single ratio of the flux at 642~nm to that at 443~nm, hereafter $\mathcal{R}(642\,{\rm nm}/443\,{\rm nm})$, has a smaller residual of $M$ than the classical model with the color and decay rate (or light-curve width). Using 26 objects observed by the CfA Supernova Program, \citet{blo11frat} confirm the conclusion in \citet{bai09frat} with a slightly different ratio, $\mathcal{R}(6630\,{\rm \AA}/ 4400\,{\rm \AA})$, although the improvement of the model has low significance. In addition, they propose another model with the color and the color-corrected flux ratio, $\mathcal{R}^c(4610\,{\rm \AA}/4260\,{\rm \AA})$ at $t=-2.5\,{\rm d}$ from maximum light. \citet{bsnip3}, using 62 objects observed by the Berkeley Supernova~Ia Program, report that the best set of variables is the light-curve width, color, and $\mathcal{R}^c(3780\,{\rm \AA}/4580\,{\rm \AA})$.
On the other hand, their analysis did not confirm the results in \citet{bai09frat} and \citet{blo11frat}. Thus, the resulting models of the different works are not completely consistent, and the model for the prediction of $M$ has not been established. In previous studies, a linear regression model of $M$ has been assumed: \begin{eqnarray} M_B \simeq M_{B,0} + \beta_1 x_1 + \beta_2 x_2 + \cdots + \beta_L x_L, \end{eqnarray} where $M_B$ is the absolute magnitude in the $B$-band, which has been conventionally used in past studies. $M_{B,0}$ is a constant. The vector, $\bm{x}=(x_1, x_2, \cdots, x_L)^T$, is a set of explanatory variables of $M_B$. The elements in $\bm{x}$ are, for example, the color, decay rate (or light-curve width), and variables about the lines. $\bm{\beta}=(\beta_1, \beta_2,\cdots, \beta_L)^T$ is the vector of their coefficients. Suppose that $N$ samples of SNe~Ia are available, and the observations are summarized as $\bm{y}\simeq X\bm{\beta}$, where $\bm{y}=(M_{B1},M_{B2},\cdots,M_{BN})^T$ and $X=(\bm{x}_1,\cdots,\bm{x}_N)^T$. The goal of the study is to find an appropriate set of variables in $\bm{x}$ for the prediction of $\bm{y}$. We prefer the model to have a small generalization error for the prediction of $\bm{y}$. If $N\geq L$, it is possible to estimate the values of all elements in $\bm{\beta}$ with the least-squares method. However, the risk of over-fitting increases as $N/L$ becomes smaller. Furthermore, the least-squares method cannot determine a unique model when $N<L$. Such a situation can appear when arbitrary flux ratios in spectra are included in $X$. Hence, previous studies included only one or two flux ratios in a model, and searched for the best set of variables for the observations. Finding an appropriate set of variables to describe $M_B$ of SNe~Ia is a variable selection problem, which has been studied in the field of statistics and machine learning.
In this paper, we report the results of a variable selection approach applied to $M_B$. We controlled the generalization error with a regularization term, whose size is chosen via cross-validation, and a subset of the variables is selected from $L$ components by the Least Absolute Shrinkage and Selection Operator, or the so-called LASSO method (\cite{LASSO}). This method can find a unique solution even in the case of $N<L$. In section~2, we describe the method. In section~3, we report on the results of our experiments. We apply our method to the data provided by the Berkeley supernova database. In section~4, we discuss the implication of our results, and summarize our findings. \section{Method} \subsection{LASSO-type estimation} Here, we consider a linear regression model, $\bm{y}=X\bm{\beta}+\bm{e}$, where $X$ is a given real $N\times L$ matrix and $\bm{e}$ is a Gaussian noise with $E[\bm{e}]=\bm{0}$ and $E[\bm{e}\bm{e}^T]=\sigma^2I_N$. Our goal is to find an appropriate set of variables from $L$ variables and $N$ samples and compute the corresponding coefficients of $\bm{\beta}$. For this sort of estimation problem, \citet{LASSO} proposed a method, the Least Absolute Shrinkage and Selection Operator, or the so-called LASSO, for selecting the best set of explanatory variables. LASSO provides a solution $\bm{\hat{\beta}}$ by minimizing the following function, which includes the $\ell 1$-norm of $\bm{\beta}$ as a regularization term: \begin{eqnarray} \bm{\hat{\beta}}_\lambda = \argmin{\bm{\beta}} \left\{ \| \bm{y}-X\bm{\beta} \|^2_2 + \lambda \| \bm{\beta} \|_1 \right\}, \end{eqnarray} where $\|\bm{\beta}\|_1$ is the $\ell 1$-norm, defined as $\|\bm{\beta}\|_1=\sum_i |\beta_i|$, and $\lambda$ is a tunable constant. The estimate $\bm{\hat{\beta}}$ includes $0$ components, that is, variable selection is realized with LASSO-type estimation. The number of $0$ components increases as $\lambda$ becomes larger.
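Equation (2) can be minimized with a standard proximal-gradient (ISTA) iteration, in which the soft-thresholding step is what sets small coefficients exactly to zero. A minimal NumPy sketch, assuming an illustrative step size and iteration count (not choices made in this paper):

```python
import numpy as np

def soft_threshold(z, t):
    """Elementwise soft-thresholding operator, the prox of t*||.||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_ista(X, y, lam, n_iter=1000):
    """Minimize ||y - X b||_2^2 + lam * ||b||_1 by proximal gradient
    (ISTA): a gradient step on the quadratic term followed by
    soft-thresholding, which zeroes out small coefficients."""
    L_lip = 2.0 * np.linalg.norm(X, 2) ** 2   # Lipschitz const. of the gradient
    b = np.zeros(X.shape[1])
    for _ in range(n_iter):
        grad = 2.0 * X.T @ (X @ b - y)
        b = soft_threshold(b - grad / L_lip, lam / L_lip)
    return b
```

Any off-the-shelf coordinate-descent solver minimizes the same objective (possibly with a different scaling convention for $\lambda$), so this sketch is only meant to make the role of the $\ell 1$ term concrete.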
We apply the LASSO-type estimation in order to select an appropriate model to predict $M_B$ of SNe~Ia. The data, $\bm{y}$, is $M_B$, and each column of $X$ corresponds to an observed variable, such as the color, light-curve width, and variables about spectra. Recent projects have provided high-quality and uniform samples of SNe~Ia in both photometric and spectroscopic data. The number of available samples, $N$, is now $\sim 100$. The number of candidate explanatory variables can be $>10^4$ if arbitrary flux ratios are included. However, we can expect that the number of effective variables is small. In other words, our interest focuses on a model in which $M_B$ is explained not with $\sim 10^4$, but with only a few variables of $\bm{x}$. An exhaustive search over every subset of candidate variables is not tractable, and the LASSO-type estimation gives us a data-driven approach to select the best subset of variables for the data-set. \subsection{Cross-validation} The cost function for the estimation expressed in equation~(2) contains a tunable parameter, $\lambda$. This parameter controls the weight of the regularization term, which has an influence on the generalization error. We choose the best $\lambda$ by the cross-validation method. In $K$-fold cross-validation, the data is divided into $K$ roughly equal sub-samples, $\bm{y}_k$ ($k=1,2,\cdots,K$). For each $k$, the training data is defined as the $K-1$ sub-samples other than the validation data, $\bm{y}_k$. Optimizing the model on the training data gives $\hat{\bm{\beta}}_{k,\lambda}$ at a certain $\lambda$.
The generalization error of the model is evaluated with the mean of the weighted mean square errors (wMSE; $E(\lambda)$) of the $K$ sub-samples: \begin{eqnarray} E(\lambda) &=& \frac{1}{K} \sum_{k=1}^K E_k(\lambda)\\ E_k(\lambda) &=& \frac{\sum_{i=1}^{M_k}(y_{k,i}-\hat{y}_{k,\lambda,i})^2/\sigma_{k,i}^2} {\sum_{i=1}^{M_k} 1/\sigma_{k,i}^2}\\ \hat{y}_{k,\lambda,i} &=& \sum_{j=1}^L x_{i,j}\hat{\beta}_{k,\lambda,j} \end{eqnarray} where $M_k$ is the number of the validation data, $\bm{y}_k$, and $\sigma_{k,i}$ is the measurement error of the $i$-th element in $\bm{y}_k$. In a very large $\lambda$ regime, the least-squares term is large, and thereby $E(\lambda)$ also becomes large. In a very small $\lambda$ regime, on the other hand, the model can reproduce the noise in the data (over-fitting), and thereby have a large generalization error, and eventually lead to a large $E(\lambda)$. Thus, we can find the minimum value of $E(\lambda)$ at a certain $\lambda$. The best model can be considered as the simplest model whose $E(\lambda)$ is within one standard error of the minimal $E(\lambda)$. This is the so-called ``one standard error rule''. Models having $\lambda$ smaller than the best one are statistically indistinguishable from the over-fitting situation. In this paper, we use this rule to select $\lambda$, and set $K=10$. Another common variable selection scheme is to use an information criterion, such as the Akaike information criterion (AIC) or the Bayesian information criterion (BIC). We employed the regularization term and cross-validation because we expect not only the observation noise $\bm{e}$ but also measurement errors in $X$, and we do not have a good model selection criterion for this situation. The measurement error of $M_B$ is occasionally quite small, on the order of 0.01~mag. On the other hand, the error of the elements in $X$ can be large. For example, a ratio between low fluxes can have a large error.
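The fold-wise error of equations (3)-(5) is straightforward to implement. A minimal NumPy sketch, with the fitting routine left abstract so that any LASSO solver at a fixed $\lambda$ can be plugged in (the fold-splitting convention is an illustrative choice):

```python
import numpy as np

def weighted_mse(y, y_hat, sigma):
    """Equation (4): mean square error weighted by 1/sigma_i^2."""
    w = 1.0 / sigma**2
    return np.sum(w * (y - y_hat)**2) / np.sum(w)

def cv_error(y, X, sigma, fit, K=10):
    """Equation (3): average weighted MSE over K validation folds.
    `fit` is any routine mapping (X_train, y_train) -> coefficient vector."""
    idx = np.arange(len(y))
    folds = np.array_split(idx, K)
    errs = []
    for k in range(K):
        val = folds[k]
        tr = np.setdiff1d(idx, val)
        beta = fit(X[tr], y[tr])
        errs.append(weighted_mse(y[val], X[val] @ beta, sigma[val]))
    return float(np.mean(errs))
```

Evaluating `cv_error` on a grid of $\lambda$ values and applying the one-standard-error rule to the resulting curve then selects the model.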
\subsection{Demonstration of the method} \begin{figure*} \begin{center} \includegraphics[width=17cm]{spcor_demofalse.eps} \end{center} \caption{Simulations of the LASSO-type estimation. The red and black points represent the assumed and estimated values, respectively. The number of samples is 50. The numbers of explanatory variables are $L=10^2$ (left), $10^3$ (middle), and $10^4$ (right), as shown in each panel. The upper and lower panels depict the cases for small and large errors assumed in $X$. }\label{fig:demo} \end{figure*} We performed simple simulations of the LASSO-type estimation for the current problem. The vector, $\bm{\beta}$, was set to be a sparse vector, containing only three non-zero values in $L$ elements. We set three cases: $L=10^2$, $10^3$, and $10^4$. The matrix, $X$, was set to be an $N\times L$ matrix whose elements were random values generated by $\mathcal{N}(0,1)$, a normal distribution with a mean of $0$ and variance of $1$. We set $N=50$ in all cases. Then, we calculated the data vector, $X\bm{\beta}$, and added noise, $\bm{y} = X\bm{\beta}+\bm{e}$, $\bm{e}\sim\mathcal{N}(\bm{0},0.01\sigma^2_{\bm{y}}I_N)$, where $\sigma_{\bm{y}}$ represents the standard deviation of the noiseless data vector, $X\bm{\beta}$. Here, we assumed a small error in $\bm{y}$ because $M_B$ is occasionally determined with such high precision. We also added noise in the elements of $X$, $\tilde{x}_{ij}=x_{ij}+\epsilon_{ij}$, $\epsilon_{ij}\sim\mathcal{N}(0,\sigma_X^2)$, and generated $\tilde{X}$. We assumed small and large errors in $X$, that is, $\sigma_{X}=0.01$ and $0.25$. We estimated $\bm{\beta}$ from $\bm{y}$ and $\tilde{X}$ using the $\ell 1$-norm minimization. The best model and its $\lambda$ were determined by cross-validation. The results are shown in figure~\ref{fig:demo}. In the case of the small $\sigma_{X}$, the assumed $\bm{\beta}$, indicated by the red points, are successfully reconstructed in all $L$ cases, albeit with a $3$--$20$\,\% systematic bias.
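The data-generating process of this demonstration can be reproduced in a few lines; the following sketch mirrors the setup just described (the function name and seed are our own, and the LASSO fit itself is left to any standard solver):

```python
import numpy as np

def make_simulation(N=50, L=1000, k_nonzero=3, sigma_X=0.25, seed=0):
    """Generate the synthetic data of the demonstration: a sparse beta with
    three non-zero entries, a Gaussian design X, noisy targets y with noise
    variance 0.01*sigma_y^2, and a noisy copy of the design, X_tilde."""
    rng = np.random.default_rng(seed)
    beta = np.zeros(L)
    beta[rng.choice(L, k_nonzero, replace=False)] = rng.standard_normal(k_nonzero)
    X = rng.standard_normal((N, L))
    y_clean = X @ beta
    sigma_y = np.std(y_clean)
    y = y_clean + rng.normal(0.0, 0.1 * sigma_y, N)   # std 0.1*sigma_y
    X_tilde = X + rng.normal(0.0, sigma_X, (N, L))    # errors-in-variables
    return y, X_tilde, beta
```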
In the case of the large $\sigma_{X}$, all non-zero elements in $\bm{\beta}$ are detected in the cases of $L=10^2$ and $10^3$, while their coefficients are significantly underestimated and weak false signals are also seen. In the case of $L=10^4$, the assumed weak signal is lost in the reconstruction, and false signals have large coefficients. This experiment demonstrated two important points about the proposed method. First, it can reconstruct the original vector even in the case of $N<L$. Second, even with this method, we cannot avoid detecting false signals which coincidentally fit the data in the case of a large $L$. The latter point could have a significant implication for using the arbitrary flux ratios in the current problem. The number of flux ratios is more than 17000, while the number of samples is $\lesssim 100$. Hence, we should reduce the number of columns in $X$ in order to avoid detecting false signals. In this paper, as described in the next subsection, we use two kinds of spectra, normalized by the continuum level and by the total flux. LASSO tends to underestimate the coefficients if the measurement error of the target variable is not negligible, as can be seen in figure~\ref{fig:demo}. Hence, it should be used to select the best set of variables. Then, the model of $M_B$ can be obtained by refitting the data with the selected variables. In this paper, we focus on variable selection. \subsection{Sample and variables} We used the data from the SuperNova DataBase provided by the Berkeley Supernova Ia program\footnote{$\langle$http://hercules.berkeley.edu/database/index\_public.html$\rangle$}. Our sample selection was based on the criteria in \citet{bsnip3}: The redshift of the sample ranged from 0.01 to 0.1. We used the spectral data from 3500 to 8500 \AA. The rest-frame epoch relative to maximum ranged from $-5$ to $+5\;{\rm d}$.
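The demonstration above can be reproduced in miniature with a plain coordinate-descent LASSO. The following is our own minimal sketch, not the code used for the paper: for brevity it uses $L=200$ instead of up to $10^4$, and a fixed $\lambda$ instead of the cross-validated one.

```python
import numpy as np

def lasso_cd(X, y, lam, n_sweeps=200):
    """Minimal coordinate-descent LASSO: minimizes
    (1/2N)||y - X beta||^2 + lam * ||beta||_1 (columns roughly standardized)."""
    n, p = X.shape
    beta = np.zeros(p)
    r = y.astype(float).copy()            # running residual y - X @ beta
    col_sq = (X ** 2).sum(axis=0) / n
    for _ in range(n_sweeps):
        for j in range(p):
            # correlation of column j with the partial residual
            rho = X[:, j] @ r / n + col_sq[j] * beta[j]
            new = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
            r += X[:, j] * (beta[j] - new)  # keep the residual up to date
            beta[j] = new
    return beta

# Set-up mirroring the demonstration: a sparse beta with three non-zero
# entries, N = 50 samples, and more variables than samples (N < L).
rng = np.random.default_rng(0)
N, L = 50, 200
beta_true = np.zeros(L)
beta_true[[3, 40, 150]] = [1.0, -0.8, 0.5]
X = rng.normal(size=(N, L))
y = X @ beta_true + rng.normal(scale=0.1, size=N)  # small observation noise
beta_hat = lasso_cd(X, y, lam=0.1)
```

With small noise, the three-element support is recovered even though $N<L$; enlarging $L$ or the error in $X$ reproduces the false signals discussed above.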
We used the spectrum with the epoch closest to maximum for each object in the case that multiple spectra were available. We only used samples having the color parameter, $c$, less than 0.5. We found two Type~Iax objects, SN~2003gq and SN~2005hk, in the sample, and excluded them (\cite{fol13Iax}). As a result, we found 78 objects in the database. The available data include, for example, the redshift, $z$, light-curve width, $x_1$, color, $c$, apparent magnitude, $m_B$, and spectra. As mentioned in section~1, it is believed that $x_1$ and $c$ are important explanatory variables for $M_B$. We calculated $M_B$ from $m_B$ and $z$ by adopting the standard $\Lambda$ cold dark matter cosmology with $\Omega_m=0.27$, $\Omega_\Lambda=0.73$, and $w=-1$. The calibration of the spectral data was performed in the standard manner: The flux was corrected for the reddening in our galaxy using $E(B-V)$. We used the $E(B-V)$ values obtained from the supernova database, which refers to \citet{sch98dust} and \citet{pee10dust}. The redshift correction was applied to the wavelength. Then, the spectra were divided into 134~bins which were equally spaced in the logarithmic velocity scale between 3500 and 8500~\AA, as in \citet{bsnip3}. We calculated the arbitrary flux ratios using the binned spectra. The number of ratios is then $134\times 133=17822$. Including arbitrary flux ratios may provide an exhaustive search for an appropriate set of explanatory variables of $M_B$. However, the number of candidate variables is so large that false signals can be detected, as demonstrated in the last subsection. Hence, we need to consider other sets of candidate variables which are related to the flux ratios, but have a much smaller dimension. In this paper, we use two kinds of normalized spectra. First, the variables of most interest are the flux ratios of the line areas to the continuum level.
Indeed, most of the previously proposed ratios are such variables: $\mathcal{R}(6420/4430)=$ Fe\,\textsc{ii}/continuum (\cite{bai09frat}), $\mathcal{R}(6630/4400)=$ Fe\,\textsc{ii}/continuum, $\mathcal{R}(6420/5290)=$ continuum/S\,\textsc{ii}, and $\mathcal{R}(4610/4260)=$ continuum/Fe\,\textsc{ii} (\cite{blo11frat}). They can be substituted by the spectra normalized by the continuum level. The continuum level was approximated by a cubic smoothing spline fitted to masked spectra. The mask is depicted in figure~\ref{fig:cont} with the binned spectra of a typical sample, SN~2006et. The data points indicated by the filled circles were used to calculate the continuum curve. In addition, the points with the maximum flux in each shaded area were also used. Several examples of the continuum-normalized spectra are shown in the lower panel of figure~\ref{fig:sample}. We call the set of continuum-normalized spectra $\bm{f}_{\rm cnt}$. \begin{figure} \begin{center} \includegraphics[width=8.5cm]{continuum.eps} \end{center} \caption{Mask for calculating the continuum level. The spectrum of SN~2006et is also plotted as a reference. For details, see the text.}\label{fig:cont} \end{figure} Second, the local colors in the continuum, which may carry information independent of the broadband colors, are also variables of interest. They can be substituted by the spectra normalized by the total flux between 3500 and 8500~\AA. We call the set of these total-flux-normalized spectra $\bm{f}_{\rm tot}$. The intrinsic color can be bluer than the observed one because of the interstellar reddening effect in the host galaxy. In previous studies, the color correction for this effect has been performed by assuming that all SNe~Ia have the same intrinsic color. We also performed this correction using the SNe~Ia color law for the SALT2 data (\cite{guy07salt2}). The color-corrected spectra are then normalized by the total flux and named $\bm{f}^c_{\rm tot}$.
We include these two kinds of normalized spectra, $(\bm{f}_{\rm cnt}, \bm{f}_{\rm tot})$ or $(\bm{f}_{\rm cnt}, \bm{f}^c_{\rm tot})$, as candidates, instead of the arbitrary flux ratios. In addition, we use the flux in the logarithmic scale in order to retain the information of arbitrary flux ratios. We can identify a good flux-ratio parameter by searching for two fluxes having similar coefficients with opposite signs: $c \cdot \log(f_1/f_2)=c \cdot \log(f_1) - c \cdot \log(f_2)$. Figure~\ref{fig:sample} shows examples of the spectra that are normalized by the total flux ($\bm{f}_{\rm tot}$, the upper panel) and by the continuum ($\bm{f}_{\rm cnt}$, the lower panel). \begin{figure} \begin{center} \includegraphics[width=8.5cm]{sample.eps} \end{center} \caption{Examples of the spectra in our sample. Upper panel: the spectra normalized by the total flux between 3500--8500\AA. Lower panel: the spectra normalized by the continuum. The spectra of three examples, SN~2005dm, SN~2000dk, and SN~2008ar, are shown in both panels.}\label{fig:sample} \end{figure} In addition to $x_1$, $c$, $\bm{f}_{\rm cnt}$, $\bm{f}_{\rm tot}$, and $\bm{f}^c_{\rm tot}$, we include the previously proposed flux ratios, $\bm{\mathcal{R}}$, in the model as candidate explanatory variables for $M_B$. We consider six flux ratios proposed in \citet{bai09frat}, \citet{blo11frat}, and \citet{bsnip3}, that is, $\bm{\mathcal{R}}=\{\mathcal{R}(3780/4580),\; \mathcal{R}(4610/4260),\; \mathcal{R}(5690/5360),$ $\mathcal{R}(6420/4430),\; \mathcal{R}(6420/5290),\; \mathcal{R}(6630/4400)\}$. The flux ratios calculated from the color-corrected spectra, $\bm{f}^c_{\rm tot}$, are called $\bm{\mathcal{R}}^c$. \citet{bsnip3} presents tables of measured values of the lines: Ca\,\textsc{ii} H\&K and near-infrared triplet, Si\,\textsc{ii} 4000, 5972, and 6355~\AA, Mg\,\textsc{ii}, Fe\,\textsc{ii}, S\,\textsc{ii} ``W,'' and O\,\textsc{i} triplet.
We can use pEW, Delta pEW (i.e., the measured pEW minus the template evolution), velocity ($v$), line depth ($a$), and FWHM as the explanatory variables. We note that these line variables are incomplete for our sample. Hence, the number of samples is reduced when the line variables are used as candidate variables, and we therefore used them element by element. We represent a set of the line values as $\bm{\mathcal{L}}$. For example, $\bm{\mathcal{L}}_{\rm Si\,\textsc{ii}\,4000}$ means those variables of Si\,\textsc{ii} 4000\AA. For the optimization of the model to the data, we used the \texttt{glmnet} package for \texttt{R}. \footnote{$\langle$http://www.r-project.org/$\rangle$} The selection of $\lambda$ was performed using the function for cross-validation, \texttt{cv.glmnet}, adopting the one-standard-error rule. The cross-validation is based on random sub-sampling, and the selected variables might be influenced by it. We performed $10^4$ experiments for each model, and calculated the selection probability, $p$, of each variable. In this paper, we discuss only selected variables with $p>0.3$. Each column in $X$ was normalized to have zero mean and unit variance, by a linear scaling, $x^\prime_{ij}=(x_{ij}-\bar{x}_j)/\sigma_j$, where $\bar{x}_j$ and $\sigma_j$ are the mean and standard deviation of the $j$-th column. We need this normalization to compare the coefficients, $\bm{\beta}$, of variables having different units.
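The column standardization and the selection probability $p$ over repeated randomized cross-validation runs can be sketched as follows (hypothetical helpers for illustration; the analysis itself was done with \texttt{cv.glmnet}):

```python
import numpy as np

def standardize_columns(X):
    """x'_ij = (x_ij - mean_j) / sd_j: zero mean and unit variance per
    column, so that coefficients of variables in different units compare."""
    return (X - X.mean(axis=0)) / X.std(axis=0)

def selection_probability(select_once, n_runs=100, seed=0):
    """p_j = fraction of randomized runs in which variable j gets a
    non-zero coefficient; `select_once(rng)` returns a boolean mask."""
    rng = np.random.default_rng(seed)
    counts = sum(select_once(rng).astype(int) for _ in range(n_runs))
    return counts / n_runs
```

Variables with `p > 0.3` would then be kept, per the criterion above.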
\textbf{The list of objects and explanatory variables used in this paper is available as online supplementary material.} \section{Results} \begin{table*} \caption{Models and Results}\label{tab:model} \begin{center} \begin{tabular}{lrrrrr} \hline Model & Target variable & Explanatory variables & Non-zero elements & coefficients & $p$ \\ & $\bm{y}$ $(N)$& $X$ $(L)$& &$\bm{\beta}$ &\\ \hline 1 & $M_B$ $(78)$& $x_1,c,\bm{f}_{\rm tot},\bm{f}_{\rm cnt},\bm{\mathcal{R}}$ $(276)$ & $c$ &$0.376$ & 1.00\\ & & & $f_{\rm tot}(6373)$&$0.100$ & 1.00\\ & & & $x_1$ &$-0.050$& 0.98\\ & & & $f_{\rm cnt}(6084)$&$-0.034$& 0.98\\ & & & $f_{\rm cnt}(6289)$&$-0.045$& 0.95\\ & & & $f_{\rm cnt}(6631)$&$-0.061$& 0.80\\ & & & $\mathcal{R}(3780/4580)$&$-0.050$& 0.74\\ & & & $f_{\rm tot}(3752)$&$0.063$& 0.73\\ \hline 2 & $M_B-\beta_1 c$ $(78)$ & $x_1,\bm{f}_{\rm tot},\bm{f}_{\rm cnt},\bm{\mathcal{R}}$ $(275)$ & $x_1$ &$-0.020$& 0.99\\ \hline 3 & $M_B-\beta_1 c$ $(78)$ & $x_1,\bm{f}^c_{\rm tot},\bm{f}_{\rm cnt},\bm{\mathcal{R}}^c$ $(275)$ & $x_1$ &$-0.014$& 0.85\\ \hline 4a & $x_1$ $(76)$& $c,\bm{f}^c_{\rm tot}, \bm{f}_{\rm cnt}, \bm{\mathcal{R}}^c,\bm{\mathcal{L}}_{\rm Si\,II\,4000}$ (280)& ${\rm DpEW}_{\rm Si\,II\,4000}$&$-0.455$& 1.00\\ & & & $f_{\rm cnt}(5770)$&$0.518$ & 1.00\\ & & & $f_{\rm cnt}(3982)$&$-0.262$& 1.00\\ & & & $f_{\rm cnt}(7038)$&$-0.485$& 0.96\\ & & & $f^c_{\rm tot}(4988)$&$-0.238$& 0.77\\ & & & $f_{\rm cnt}(6084)$&$0.281$ & 0.62\\ \hline 4b & $x_1$ $(74)$ & $c,\bm{f}^c_{\rm tot}, \bm{f}_{\rm cnt}, \bm{\mathcal{R}}^c,\bm{\mathcal{L}}_{\rm S\,II ``W''}$ (280)& $f_{\rm cnt}(5770)$& $1.034$& 1.00\\ & & & $f_{\rm cnt}(6084)$& $0.440$& 1.00\\ & & & $f^c_{\rm tot}(6458)$& $0.300$& 1.00\\ & & & $f_{\rm cnt}(3982)$& $0.041$& 1.00\\ & & & $f_{\rm cnt}(7179)$& $0.289$& 0.99\\ & & & $f_{\rm cnt}(6458)$&$-0.236$& 0.94\\ & & & $f_{\rm cnt}(6331)$& $0.612$& 0.92\\ \hline 5 & $M_B-(\beta_1 c + \beta_2 x_1)$ $(78)$ & $\bm{f}^c_{\rm tot},\bm{f}_{\rm cnt},\bm{\mathcal{R}}^c$ $(273)$ & --- &
--- & ---\\ \hline \end{tabular} \end{center} \end{table*} First, we choose the light-curve width ($x_1$), color ($c$), spectra normalized by the total flux ($\bm{f}_{\rm tot}$), those by the continuum ($\bm{f}_{\rm cnt}$), and previously proposed flux ratios ($\bm{\mathcal{R}}$) as the candidate explanatory variables, and $M_B$ as the target variable. We call this complete model Model~1. It can be rewritten as: \begin{eqnarray} M_B &=& M_{B,0} + \beta_1 c + \beta_2 x_1 \nonumber \\ &+& \beta_3 f_{\rm tot}(3512) + \beta_4 f_{\rm tot}(3534) + \cdots + \beta_{136} f_{\rm tot}(8472) \nonumber\\ &+& \beta_{137} f_{\rm cnt}(3512) + \beta_{138} f_{\rm cnt}(3534) + \cdots + \beta_{270} f_{\rm cnt}(8472) \nonumber \\ &+& \beta_{271} \mathcal{R}(3780/4580) + \beta_{272} \mathcal{R}(4610/4260) \nonumber \\ &+& \beta_{273} \mathcal{R}(5690/5360) + \beta_{274} \mathcal{R}(6420/4430) \nonumber \\ &+& \beta_{275} \mathcal{R}(6420/5290) + \beta_{276} \mathcal{R}(6630/4400) +e. \end{eqnarray} Using the LASSO-type method on the 78 samples of $M_B$, we choose an appropriate set of explanatory variables from the 276 candidates and estimate the coefficient vector $\bm{\beta}$. The tuning parameter, $\lambda$, is determined by cross-validation. Figure~\ref{fig:cv_model1} shows the cross-validation curve for Model~1. In this figure, we can confirm that the wMSE takes its minimum value within the given range of $\lambda$, and the best model is properly determined by the one-standard-error rule. \begin{figure} \begin{center} \includegraphics[width=8.5cm]{cv_model1.eps} \end{center} \caption{Cross-validation curve for Model~1. The lower and upper horizontal axes denote $\lambda$ and the number of non-zero elements, respectively. The vertical axis denotes wMSE. The left dotted line indicates $\lambda$ having the minimal wMSE.
The right dotted line indicates the best model under the one-standard-error rule.}\label{fig:cv_model1} \end{figure} Table~\ref{tab:model} lists all models and results presented in this paper. From Model~1, the classical variables, that is, $c$ and $x_1$, are selected. $f_{\rm tot}(6373)$ is also selected, with a coefficient whose absolute value is even larger than that of $x_1$. Figure~\ref{fig:model1} indicates the non-zero elements of $\bm{f}_{\rm tot}$ and $\bm{f}_{\rm cnt}$. As can be seen in this figure, $f_{\rm tot}(6373)$, indicated by the red vertical line, lies in the continuum area. Hence, it may be related to a local color which could carry information beyond the broad-band color, $c$. As can be seen in figure~\ref{fig:model1}, some fluxes in line regions are also selected: $f_{\rm cnt}(6084)$ and $f_{\rm cnt}(6289)$ are probably related to the continuum-normalized depths of Si\,\textsc{ii}(6355). In addition, $\mathcal{R}(3780/4580)$ and $f_{\rm tot}(3752)$ are probably related to Ca\,\textsc{ii} H\&K. $f_{\rm cnt}(6631)$ corresponds to the continuum flux of the continuum-normalized spectra, which suggests a false signal. \begin{figure} \begin{center} \includegraphics[width=8.5cm]{model1.eps} \end{center} \caption{Non-zero elements of spectral data in Model~1. The red and blue lines indicate the non-zero elements in the total-flux-normalized and continuum-normalized spectra, respectively. The spectrum of SN~2006et is also plotted as a reference. The solid and dashed lines and red points are unbinned spectra, estimated continuum level, and binned spectra, respectively.}\label{fig:model1} \end{figure} We confirmed that the result was consistent even when we included the line variables, $\bm{\mathcal{L}}$; all the line variables had zero coefficients in all elements. The lack of dependency on $\bm{\mathcal{L}}$ is common to the other subsequent models, except for Model~4 (see below).
Hence, we present only the models without $\bm{\mathcal{L}}$ in this paper. In general, when some explanatory variables are correlated, LASSO could select only a few of them. In the present case, the variables are measurements having non-negligible errors. Among correlated variables, a variable having a smaller error results in a smaller generalization error of $M_B$. Hence, in the case of a large $\lambda$, the variable having the smallest error is selected first. In the case of a smaller $\lambda$, the other correlated variables are selected. It is possible that a high correlation between $c$ and $f_{\rm tot}(6373)$ would cause the non-zero coefficient of $f_{\rm tot}(6373)$ in Model~1. If this is the case, it is unclear whether $f_{\rm tot}(6373)$ is a significant variable carrying information independent of $c$. We performed a regression analysis with $M_B=\beta_1 c + M_{B,0}$, and corrected for the effect of $c$ in $M_B$ by using $M_B-\beta_1 c$ as the target. We call this complete model Model~2. The number of samples is the same as that of Model~1, 78, while the number of candidate explanatory variables is 275, one smaller than in Model~1 because $c$ is omitted. As can be seen in table~\ref{tab:model}, $x_1$ is the only variable having a non-zero coefficient. A similar result was obtained for Model~3, in which we used the color-corrected spectral data, $\bm{f}^c_{\rm tot}$ and $\mathcal{R}^c$, instead of $\bm{f}_{\rm tot}$ and $\mathcal{R}$. Hence, the lack of $f_{\rm tot}(6373)$ is independent of the color correction. These results suggest that the high correlation between $c$ and $f_{\rm tot}(6373)$ causes the apparently large coefficient of $f_{\rm tot}(6373)$ in Model~1.
\begin{figure} \begin{center} \includegraphics[width=8.5cm]{model_x1.eps} \end{center} \caption{Same as figure~\ref{fig:model1}, but for Models~4a and 4b.}\label{fig:model_x1} \end{figure} In addition to $f_{\rm tot}(6373)$, Model~1 indicates a possible dependency of $M_B$ on the variables related to the line areas, that is, $f_{\rm cnt}(6084)$ and $f_{\rm cnt}(6289)$. It has been reported that $x_1$ depends on the line strength, for example, the EW of Si\,\textsc{ii}~4000 (e.g. \cite{hac06snsp,ars08x1}). It is possible that the line dependency in Model~1 may be due to a high correlation between $x_1$ and the line strength of Si\,\textsc{ii}. To examine this possibility, we considered Model~4, in which the target is $x_1$. Model~4a includes pEW, DpEW, $v$, $a$, and FWHM of Si\,\textsc{ii} 4000, as well as $c$, $\bm{f}^c_{\rm tot}$, $\bm{f}_{\rm cnt}$, and $\bm{\mathcal{R}}^c$, as the candidate explanatory variables of $x_1$. This is the only case in the analysis presented in this paper where the coefficients of $\bm{\mathcal{L}}$ have non-zero values. Model~4b is for S\,\textsc{ii} ``W'', as a typical case of the other lines. The results are shown in table~\ref{tab:model} and figure~\ref{fig:model_x1}. In Model~4a, the DpEW of Si\,\textsc{ii}~4000 has a non-zero coefficient. The importance of this line is also confirmed by the selection of $f_{\rm cnt}(3982)$ in both Models~4a and 4b. In addition to Si\,\textsc{ii}~4000, $f_{\rm cnt}(5770)$ and $f_{\rm cnt}(6084)$ have non-zero coefficients in both models, corresponding to Si\,\textsc{ii} 5972 and 6355. There are several other non-zero elements in Model~4a, although they are not confirmed in Model~4b. The dependence of $x_1$ on Si\,\textsc{ii} supports the previous studies of $x_1$. Finally, we employed Model~5, in which the target is $M_B$ corrected for $c$ and $x_1$, that is, $M_B-(\beta_1 c + \beta_2 x_1)$, where $\beta_1$ and $\beta_2$ are determined by a regression analysis.
The candidate explanatory variables of Model~5 are $\bm{f}^c_{\rm tot}$, $\bm{f}_{\rm cnt}$, and $\bm{\mathcal{R}}^c$. However, none of them is selected. The result suggests that the high correlation between $x_1$ and the Si\,\textsc{ii} line strength results in the apparent dependency of $M_B$ on the line depths in Model~1. Hence, the best set of explanatory variables is $(c,x_1)$ in our analysis. We re-fit the data with these variables and obtained the following model: \begin{eqnarray} M_B = -19.26(\pm 0.03) + 2.75 (\pm 0.17) c - 0.10 (\pm 0.02) x_1 \end{eqnarray} Note that these values are calculated not from the normalized values of the variables as in table~1, but from the raw values. \section{Discussion and Conclusion} Our analysis confirms the classical understanding of SNe~Ia, that is, i)~the light-curve width ($x_1$) and color ($c$) are the important explanatory variables of the absolute magnitude at maximum ($M_B$) (\cite{phi93law}), and ii)~the light-curve width correlates with the strength (EW or depth) of Si\,\textsc{ii} (e.g. \cite{hac06snsp,ars08x1}). Furthermore, our variable selection approach using the LASSO-type estimation does not support adding any other variables, such as the normalized spectra ($\bm{f}_{\rm tot}$, $\bm{f}^c_{\rm tot}$, $\bm{f}_{\rm cnt}$), previously proposed flux ratios ($\bm{\mathcal{R}}$), or line measurements ($\bm{\mathcal{L}}$), in order to obtain a better generalization error of $M_B$. We confirmed that the above conclusion is robust to small changes in our analysis: using the flux in logarithmic or linear scale, excluding or including the two Type~Iax objects, and normalizing each column in $X$ or not. Our analysis implies that over-fitting may cause the partly inconsistent results seen in previous studies which used arbitrary flux ratios (\cite{bai09frat,blo11frat,bsnip3}).
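Numerically, the refit relation obtained above behaves as follows (a trivial sketch with raw, un-normalized variables; a redder color makes $M_B$ less negative, i.e., fainter):

```python
def m_b_model(c, x1):
    """Refit relation from the text: M_B = -19.26 + 2.75 c - 0.10 x1
    (raw variables, not the normalized ones used for table 1)."""
    return -19.26 + 2.75 * c - 0.10 * x1
```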
Our conclusion is inconsistent with that reported by \citet{bsnip3}, although both samples are obtained from the Berkeley supernova database with the common data selection. The model selection method is also common: Following \citet{blo11frat}, \citet{bsnip3} performed 10-fold cross-validation, and calculated the mean and standard error of 10 weighted root-mean squares (wRMS) of residuals. They measured the significance of the improvement of the model with the mean wRMS and its standard error. They found that the model with $c$, $x_1$, and $\mathcal{R}^c(3780/4580)$ improves the prediction error at a level of $1.7\sigma$ compared with the classical one with $c$ and $x_1$. This flux ratio is also detected as an explanatory variable in our Model~1, while it is not in the other models of $M_B$ (see table~1). The wavelength of 3780\,\AA~corresponds to the mid-point of Ca\,\textsc{ii} H\&K, and 4580\,\AA~to the border between the Mg\,\textsc{ii} and Fe\,\textsc{ii} complexes. Figure~\ref{fig:frat-c} shows $\mathcal{R}^c(3780/4580)$ of our sample against $c$. These two variables exhibit a weak anticorrelation, as can be seen in this figure. Our result that $\mathcal{R}^c(3780/4580)$ is detected in Model~1 and not in Models~2 and 3 can be explained by this anticorrelation. \begin{figure} \begin{center} \includegraphics[width=8.5cm]{frat-c.eps} \end{center} \caption{$\mathcal{R}^c(3780/4580)$ of our sample against $c$.}\label{fig:frat-c} \end{figure} In principle, the spectral data, that is, the values of the flux density and their ratios, could be important explanatory variables of $M_B$. The interstellar extinction is definitely the most important variable. As well as the color parameter, $c$, the continuum flux of the total-flux-normalized spectra, that is, $\bm{f}_{\rm tot}$, could be an indicator of the extinction. Indeed, $f_{\rm tot}(6373)$ has a relatively large coefficient in Model~1.
Our analysis suggests that $c$ is a better variable than $f_{\rm tot}(6373)$ and the other normalized fluxes. This is probably because of the uncertainty of measurements and observation epochs. In general, the flux calibration of spectra has larger errors than differential photometry. Moreover, the observation epochs of the spectra differ from one object to another in our sample. The color parameter, $c$, is based on differential photometry, and is corrected to the color at maximum. A similar situation is also expected for the light-curve width, $x_1$. \citet{maz01decl} proposed that $x_1$, or the so-called decline rate, $\Delta m_{15}$, is a function of the amount of $^{56}{\rm Ni}$ produced in SNe. The absorption-line variables are also possible indicators of the amount of synthesized elements in SNe. Indeed, the correlation between $x_1$ and the strength of the Si\,\textsc{ii} lines was confirmed in Model~4. Our method selected $x_1$ rather than the Si\,\textsc{ii} strength, probably because $x_1$ has the smaller measurement error as an indicator of the amount of synthesized elements. In our analysis, the best set of explanatory variables is $c$ and $x_1$, although, needless to say, our result does not imply a physical causal relationship between $M_B$ and these two variables. It is possible that, in the future, an increasing number of samples will revise the model toward a better generalization error by finding additional or alternative explanatory variables compared with the model in this paper. It may also be meaningful to add variables which were not used in this paper, such as those describing the host galaxies of SNe~Ia (\cite{sul10host,pan15host}). In any of these cases, our proposed method offers a framework for finding an appropriate set of explanatory variables of $M_B$ even in the case that the number of samples is smaller than the number of variables. A possible extension of the model may be to include the measurement errors of the explanatory variables.
As can be seen in equation~(2), our method does not include these errors, while the errors are expected to be large for several variables, for example, flux ratios of low fluxes. \cite{bai09frat}, proposing the flux ratio, $\mathcal{R}(642\,{\rm nm}/443\,{\rm nm})$, as a good explanatory variable of $M_B$, claimed that the spectral slope needs to be calibrated with very small errors for their model with the flux ratio. The instrument that they used was SNIFS (SuperNova Integral Field Spectrograph), which was developed to perform flux calibration with a high accuracy (\cite{ald02snif}). On the other hand, the instruments used in \citet{blo11frat} and \citet{bsnip3} were standard slit spectrographs. It is possible that the lack of detection of $\mathcal{R}(642\,{\rm nm}/443\,{\rm nm})$ in our analysis is due to large errors of the flux ratios in our data sample from \citet{bsnip3}. A better model including the errors might be provided by a Bayesian approach in which the errors are included in the model as prior probability distributions. We would like to thank Drs. J. M. Silverman and A. V. Filippenko for providing the Berkeley supernova database. We also appreciate comments and suggestions from an anonymous referee. This work was supported by JSPS KAKENHI Grant Numbers 25120007, 25120008, and 26800100. The work by K.M. is partly supported by the WPI Initiative, MEXT, Japan.
\section*{The Twin Paradox Put to Rest} The twin paradox, or clock paradox, is a century-old problem in Special Relativity. It has been continuously discussed since the standard formulation of Langevin in 1911 \cite{Langevin}. Einstein himself found a qualitative solution based on general relativity, but it did not achieve consensus. In the '50s, there was a long debate on the subject\cite{DINGLE}. Though the correct answer has never been in question, many solutions have been found in the literature. Therefore, the matter of how to explain the apparent paradox is far from settled. For a review, see Ref.~\cite{Shuler}. Many solutions follow Einstein and appeal to the acceleration of the rocket\cite{Einstein}. Other solutions rely only on Special Relativity (SR), and our solution is in this class. In general, the SR solutions are of two kinds: one involves light signals sent from the traveler to the Earth-based twin, and the other uses the relativity of simultaneity. In both cases, spacetime diagrams are used to solve the problem. These are the solutions presented, for example, in the famous book of Taylor and Wheeler \cite{Taylor}. The solutions which appeal to acceleration seem to suggest that General Relativity is necessary to solve the paradox. However, Special Relativity is consistent by itself, and some solution must exist without it. At some point, a way to avoid the acceleration of the twin was found\cite{Romer}. However, that author did not give a simple and analytical solution. While writing a book about relativity \cite{livro}, we were surprised by the lack of a simple, analytical, and direct solution to the paradox. Beyond this, in general, the existing solutions are very involved and unclear. Up to now, there is no solution that does not involve any kind of acceleration, signals, or spacetime diagrams and is based only on the Lorentz transformations. Below we present our solution, which has all these properties.
To avoid long explanations and to ensure simplicity, we will be very direct. Let us define the problem. We have two twins on Earth, Alice and Bob. Bob will travel to planet ``Air" and come back to Earth. The proper distance from Earth to the planet Air is measured by Alice and given by $L$. The relevant events are shown in Fig. \ref{two}. \begin{figure}[!h] \centering \includegraphics[scale=0.9]{twoplanets.png} \caption{The relevant events as described by Alice: $I$) Bob departs from Earth, $II$) Bob arrives at ``Air" and $III$) Bob meets Alice at Earth} \label{two} \end{figure} From the viewpoint of Alice's clock, Bob's travel time is given by \begin{equation}\label{relogioAlice} \Delta T_A=2L/v, \end{equation} where $v$ is the velocity of the rocket. However, Alice will see Bob younger due to time dilation, $$ \Delta T_B=\frac{\Delta T_A}{\gamma}=\frac{2L}{v\gamma}, \quad \gamma =\frac{1}{\sqrt{1-\frac{v^2}{c^2}}}, $$ where $\gamma$ is the Lorentz factor. What about Bob? From his viewpoint, due to Lorentz contraction, the distance to the planet Air is given by $L/\gamma$. Therefore, his travel time is given by \begin{equation}\label{tempobobbob} \Delta T'_B=\frac{2L}{v\gamma}. \end{equation} So, both agree on Bob's travel time. The apparent paradox is that, for Bob, Alice's clock runs slow, so that \begin{equation} \label{tempoaliceparabob} \Delta T'_A=\frac{\Delta T'_B}{\gamma}=\frac{2L}{v\gamma^2}. \end{equation} Therefore, when they meet, will Alice's clock show $2L/v$ (Eq. (\ref{relogioAlice})) or $2L/(v\gamma^2)$ (Eq. (\ref{tempoaliceparabob}))? This is the paradox. Alice says that Bob will come back younger due to his velocity. According to Bob, Alice is the one with velocity and therefore she will be younger. The point is: when they meet and compare their clocks, who is correct? Bob is an astronaut, but Alice is a physicist and says that Bob is not correct and that he will be the younger one. The solution is the following.
As said before, we will avoid acceleration and use only the Lorentz transformations. The way to do this was pointed out by Romer \cite{Romer}. We will use a slightly different configuration. To avoid accelerated frames, we will consider a third twin, John, traveling to Earth from a third planet, called ``Fire". The three planets are at rest with respect to each other. Planets Air and Fire are at positions $L$ and $2L$ from Earth. The relevant events are shown in Fig. \ref{three}. \begin{figure}[!h] \centering \includegraphics[scale=0.9]{threeplanets.png} \caption{The relevant events as described by Alice: $I$) Bob and John depart from Earth and Fire, $II$) Bob and John arrive at ``Air" and $III$) John meets Alice on Earth and Bob arrives at Fire.} \label{three} \end{figure} To be more precise, we will use the notation $I_E$ for the event $I$ at Earth in the reference frame of Alice. A prime, $I'_E$, will denote the same event in the reference frame of John. For Alice, the departures of Bob and John are simultaneous, with velocities $v$ and $-v$, respectively. Therefore, the events $I_E$ and $I_F$ are simultaneous for her. This way, John will arrive on Earth at the same time as Bob arrives at Fire, and the events $III_E$ and $III_F$ are also simultaneous. Of course, Alice will say that John and Bob will be the same age. This is because the relative velocities, for her, are the same. This simplifies the problem a lot, since John has no acceleration, and the clocks that will be compared are those of John and Alice, at event $III_E$. Of course, we could imagine another situation, in which Bob stops at Air and comes back to Earth. However, this is irrelevant since, for Alice, both of them must be younger and of the same age. So, let us focus on Alice and John. As said above, for Alice the events $I_E$ and $I_F$ are simultaneous. However, for John, Earth and Fire have velocities $-v$.
For him, and according to the Lorentz transformations, \begin{equation}\label{simultaneidade} \Delta T'=\gamma(\Delta T+\frac{v}{c^2}\Delta x)=\gamma\frac{2Lv}{c^2}. \end{equation} In the second equality, we have used $\Delta T=0$ and $\Delta x=2L$. Therefore, the starting of his clock (event $I'_F$) and of Alice's (event $I'_E$) are not simultaneous. The time difference between these events is given by \begin{equation}\label{eq01} I'_E-I'_F=\gamma\frac{2Lv}{c^2}. \end{equation} This is the time difference on John's clock. Now, remember that, according to him, Alice's clock runs slow by the factor $1/\gamma$. Therefore, when John departs, Alice's clock will be showing $2Lv/c^2$. To discover what appears on her clock when John arrives on Earth, we must add the travel time. As said above, from John's viewpoint this clock runs slow, and the elapsed travel time on it is given by Eq. (\ref{tempoaliceparabob}). When we sum both times, we arrive at \begin{equation}\label{eq02} \frac{2 Lv}{c^2}+\frac{2L}{v\gamma^2}=\frac{2L}{v}. \end{equation} Therefore, when John meets Alice, he will say that her clock is showing $2L/v$. His clock, according to him, will show only the travel time, given by Eq. (\ref{tempobobbob}). We can thus state that: \\ \\ \textbf{From the viewpoint of the two reference frames, both agree that Alice's clock will be showing $2L/v$ and John's will be showing $2L/(v\gamma)$.} \\ \\ We conclude that, even though for John Alice's clock runs slow, both agree that John, and therefore Bob, are younger. Due to the relativity of simultaneity, for John, Alice's clock began to run before his. For him, Alice is older. It is very interesting that, for John, the relativity of simultaneity exactly compensates the time dilation of Alice's clock, in such a way that all of them agree that Alice will be older. Finally, we should point out that Romer achieved an important step by avoiding acceleration \cite{Romer}.
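That the simultaneity offset exactly cancels the time dilation can also be checked numerically. The following is a trivial sketch (our own, in units with $c=1$ by default):

```python
def alice_clock_at_meeting(L, v, c=1.0):
    """John's bookkeeping of Alice's clock: the simultaneity offset
    2Lv/c^2 plus her dilated travel time 2L/(v*gamma^2); the identity
    above says this equals Alice's own elapsed time 2L/v."""
    gamma_sq = 1.0 / (1.0 - (v / c) ** 2)
    return 2 * L * v / c**2 + 2 * L / (v * gamma_sq)
```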
However, in that paper, the author only suggests that simultaneity \textbf{should} solve the problem; the exact calculation is not presented. As far as we know, the same attitude is present in all the solutions found in the literature: the authors never show that simultaneity is \textbf{exactly} enough to solve the paradox. To the best of our knowledge, this is the first time that Eqs. (\ref{eq01}) and (\ref{eq02}) are presented. Beyond being new, we believe that our solution deserves attention due to its simplicity and clarity, since: a) it does not rely on accelerated frames, b) it does not depend on any kind of signals, c) it does not involve spacetime diagrams or any other complications and, most importantly, d) Eq. (\ref{eq02}) shows that simultaneity is exactly enough to solve the paradox. In fact, unlike the solutions found in the literature, it demands just two paragraphs! It is so simple that it is accessible to a high school student. Therefore, we believe that it should be a standard solution contained in any relativity textbook. We also expect that, with this, we can put an end to this century-old problem. \section*{Acknowledgments} We acknowledge the financial support provided by the Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq) N 315568/2021-6 and Fundação Cearense de Apoio ao Desenvolvimento Científico e Tecnológico (FUNCAP) through PRONEM PNE0112-00085.01.00/16. \\ The authors have no conflicts to disclose.
\section{Introduction} \label{sec:Introduction} The planetary~nebula~(PN)~/~open~cluster~(OC) pair NGC\,2452\,/\,NGC\,2453 has been widely studied, and the membership of the PN to the stellar cluster has been heavily contested. The measurements of both the distance and the age of the cluster ($\alpha_{2000}=07^{h}47^{m}36\fs7$, $\delta_{2000}=-27\degr 11^{\prime} 35^{\prime\prime}$) in the literature have not reached an agreement. The early photometric study and main sequence (MS) fit of 21 cluster stars by \citet[hereafter MF]{moffat1974}\defcitealias{moffat1974}{MF} established a distance of $d\sim 2.9$~kpc and an age of $\tau\sim 40$~Myr. Other studies approximately agreed, proposing cluster distances in the range $d\approx$2.4-3.3~kpc \citep{glushkova1997,hasan2008}, while \citet{gathier1986} obtained a distance almost twice as large (5.0$\pm$0.6~kpc) via Walraven photometry of five stars previously reported as members by \citetalias{moffat1974}. Later, \citet{mallik1995} revealed a deeper MS of the cluster by means of $BVI$ photometry. These latter authors determined a distance of about $d\approx 5.9$~kpc, with a mean age of $\tau\approx 25$~Myr, but they also showed that the best fit depended on which stars were considered cluster members. In fact, the line of sight to the PN/OC pair is highly contaminated by field stars belonging to the Puppis OB associations and the Perseus arm \citep{peton1981,majaess2007}. This complex mix of different stellar populations in the color-magnitude~diagram (CMD) inevitably adds uncertainty to the results of an isochrone fit, which can easily be affected by field stars. NGC\,2452 ($\alpha_{2000}=07^{h}47^{m}26\fs26$, $\delta_{2000}=-27\degr 20^{\prime} 06\farcs83$) is a massive PN \citep{cazetta2000}, whose progenitor must have been an intermediate-mass MS star close to the upper limit allowed for PN formation.
This is consistent with the $\sim$40~Myr age of NGC\,2453 proposed by \citetalias{moffat1974} and \citet{moitinho2006}, which implies a turnoff mass of $\approx$7~M$_{\odot}$. The cluster age is an important parameter for ruling out membership in young OCs, because evolved stars in clusters younger than $\sim$30~Myr are thought to end as type-II supernovae rather than forming a PN \citep[see][hereafter MB14]{majaess2007,bidin2014}.\defcitealias{bidin2014}{MB14} Distance estimates for this PN can also be found in the literature, ranging from 1.41~kpc, through 2.84~kpc, to 3.57~kpc \citep[respectively, among others]{khromov1979,stanghellini2008,gathier1986}. The value obtained by \citet{gathier1986} ($d=3.57\pm0.5$~kpc) from a reddening-distance diagram was very different from the cluster value derived from zero-age MS (ZAMS) fitting in the CMD and two-color diagram (TCD, $\sim$5~kpc). However, their estimate of the PN reddening ($E_{B-V}=0.43\pm0.5$) roughly matched the literature value for the cluster, which is in the range $\sim$0.47-0.49 \citep{moffat1974,gathier1986,mallik1995}. The association between NGC\,2453 and NGC\,2452 has been proposed and studied by many authors, in light of their angular proximity in the sky (angular separation $\sim 8\farcm5$) and the data available (see, e.g., \citetalias{moffat1974}, \citealt{gathier1986}, \citealt{mallik1995}, \citetalias{bidin2014}). Nevertheless, the results have not been conclusive. \citetalias{moffat1974} found agreement between the radial velocity (RV) of the PN measured by \citet[][68~km~s$^{-1}$]{campbell1918} and that of an evolved blue giant star in the cluster (67$\pm$4~km~s$^{-1}$). Subsequent measurements yielded consistent RVs for the PN in the range $\sim$62-68~km~s$^{-1}$ \citep{meatheringham1988,wilson1953,durand1998}. Nevertheless, \citet{majaess2007} argued that additional observations were needed to evaluate potential membership.
\citetalias{bidin2014} recently studied the RV of ten stars in the cluster area, supporting the cluster membership of NGC\,2452. However, they claimed that their result was not definitive, because the identification of cluster stars was problematic. In this work, we have adopted the methodology followed by \citetalias{bidin2014} and expanded the sample to 11 potential members to assess the membership of NGC\,2452 to NGC\,2453 via RV measurements on intermediate-resolution spectra. In addition, deep $UBVRI$ photometry was paired with data from \textit{Gaia}'s second data release \citep[DR2,][]{gaia2018} to revise the cluster distance and to accurately determine its fundamental parameters. \section{Observations and data reduction} \label{s_data} \subsection{Spectroscopic data} \label{ss_dataspec} The intermediate-resolution spectra of 11 bright stars of NGC\,2453 were collected on April 18, 2013, during one night of observations at the du~Pont 2.5~m telescope, Las Campanas, Chile. The targets were selected on the IR CMD based on 2MASS data, prioritizing the brightest stars next to the cluster upper MS. The SIMBAD names and 2MASS photometry of the targets are given in Table~\ref{Tab:RV_Dis}. The 1200~line/mm grating of the B\&C spectrograph was used with a grating angle of 16$\fdg$67 and a 210~$\mu$m slit width, to provide a resolution of 2~\AA\ (R=2200) in the wavelength range 3750-5000~\AA. Exposure times varied between 200 and 750~s, according to the magnitude of the target. A lamp frame for wavelength calibration was collected regularly, after every two science spectra, during the night. The spectra were reduced by means of standard IRAF routines. Figure~\ref{Fig:spectra} shows some examples of the final result. The resulting S/N for the selected targets was typically 80--120. Non-target stars fell regularly in the slit in almost all exposures, because both the OC and the surrounding low-latitude Galactic field are very crowded.
Their spectra were reduced and analyzed in the same way as those of our targets, but the resulting spectra were of much lower quality (S/N$\approx$10--30). We hereafter refer to ``target'' and ``additional'' stars, to distinguish between the selected objects and the stars that fell by chance in the spectrograph slit. During the same run, we collected three spectra of the PN NGC\,2452. The first one was acquired by centering the slit on the optical center of the nebula, where a bright spot was seen. The second and third spectra focused on the northern and southern regions, respectively. The reduction of these data proceeded as in the case of the cluster stars, but the frames of a bright RV standard star were used during extraction to trace the curvature of the spectra on the CCD. The PN is an extended object, and its spectrum covered several pixels in the spatial direction. We performed both a narrow (8~pixels, $\sim 5\arcsec$) and a wide (20~pixels, $\sim 65\arcsec$) extraction for the northern and central spectra, but only a narrow extraction for the southern one, because the flux was too faint outside $\pm4$~pixels from the center. \begin{figure} \centering \includegraphics[trim = 19mm 0mm 0mm 10mm, clip, width=10cm]{Images/spectra.pdf} \caption{Examples of reduced spectra. The wavelength intervals used in the RV measurements are shown as horizontal lines. The spectra are labeled T and A for ``target'' and ``additional'' stars, respectively. The spectra have been shifted vertically to avoid overlap.} \label{Fig:spectra} \end{figure} \subsection{Photometric data} \label{ss_dataphot} Our study is based on the optical $UBVRI$ photometric catalog presented by \citet{Moitinho01}. The data were acquired in January 1998 at the CTIO 0.9~m telescope, with a $2048\times2048$ Tek CCD, with a resulting $0\farcs39$ pixel scale and a $13\arcmin\times13\arcmin$ useful field of view.
The frames were processed with standard IRAF routines, and the shutter effects were corrected by applying a dedicated mask prepared during the reduction. We refer to \citet{Moitinho01} for a very detailed presentation of the observations and data reduction. \subsection{\textit{Gaia} distances} \label{sub:GaiaDistances} Parallaxes and proper motions for the program stars were obtained from the \textit{Gaia} DR2\footnote{Gaia Archive: https://gea.esac.esa.int/archive/} catalog. We added +0.029~mas to all \textit{Gaia} parallaxes to account for the zero-point offset reported by \citet{Lindegren18} and \citet{Arenou18}. Following the guidelines of \citet{Luri18}, we employed a Bayesian method to infer distances from parallaxes through an error model and a prior assumption. Because the fractional parallax errors are $f_{\varpi}=\sigma_{\varpi} / \varpi\leq$0.24 for most program stars, we used the exponentially decreasing space density (EDSD) function as a prior, as described by \citet{BailerJones15}. A complete Bayesian analysis tutorial is available as Python and R notebooks and source code from the tutorial section of the \textit{Gaia} archive\footnote{https://github.com/agabrown/astrometry-inference-tutorials/}. Proper motions and distances computed from the \textit{Gaia} DR2 parallaxes are shown in Table \ref{Tab:RV_Dis}. Upper and lower indices correspond to the maximum and minimum distances of the error interval, respectively. \begin{figure} \centering \includegraphics[trim = 5mm 0mm 0mm 10mm, clip,angle=0,width=10cm]{Images/spectraPN.pdf} \caption{Reduced spectrum of the PN NGC\,2452. The flux was normalized to the height of the H$_\beta$ line.} \label{Fig:SpectrumPN} \end{figure} \section{Measurements} \label{s_meas} \subsection{NGC\,2453: radial velocities} \label{sub:RVClu} Radial velocities of the program stars were measured using the Fourier cross-correlation technique \citep{Tonry79} via the \textit{fxcor} IRAF task.
The center of the correlation peak was fitted with a Gaussian profile. A grid of templates was prepared with synthetic spectra of solar metallicity drawn from the \citet{coelho2014}\footnote{http://specmodels.iag.usp.br/} library. The grid spanned the range from 375 to 500~nm in steps of 0.02~\AA, covering 3000~$\leq$~T$_\mathrm{eff}$~$\leq$~26000~K and 2.5~$\leq$~$\log g$~$\leq$~4.5, in steps of 2000~K and 0.5~dex, respectively. Most of the targets were best cross-correlated with the template at $T_\mathrm{eff}$=22000~K, $\log g=4.5$, except for MSP\,211 and NGC\,2453~16, which required a cooler model (6000 and 10000~K, respectively), and the red giant TYC\,6548-790-1, for which the correlation height was maximized at $T_\mathrm{eff}=4000$~K and $\log g=2.5$. \citet{Moni11} and \citet{Morse91} showed that the exact choice of the template does not introduce a relevant systematic error, although a mismatch between the target and the template spectral type can increase the resulting uncertainties. The RV of the hot stars was eventually measured with a CC restricted to the dominant Balmer lines (see MSP\,111 in Fig. \ref{Fig:spectra}), that is, in the intervals $4840-4885$~\AA\ ~(H$_{\beta}$), $4315-4365$~\AA\ ~(H$_{\gamma}$), $4075-4125$~\AA\ ~(H$_{\delta}$), and $3760-3995$~\AA\ ~(H$_{\epsilon}$ to H$_{12}$). The lines with hints of core emission, namely H$_{\beta}$ and H$_{\gamma}$ in MSP\,74, were excluded from the CC. The analysis of the cool stars demanded more care. Although these stars were bright, so that noise was not the dominant source of uncertainty in the optical range, the low resolution blended their closest spectral features (see the TYC\,6548-790-1 spectrum in Fig. \ref{Fig:spectra}). Moreover, these stars were faint at the blue-UV edge of our spectra, where the camera was also less efficient (QE of 55\% at 3500~\AA~ against 80\% at 4000~\AA).
In order to avoid possible sources of systematic error at the CCD borders, we measured their RVs using the wavelength interval $4000-4800$~\AA. The central peak of the CCF was higher than 0.95 for the target stars, indicating a high degree of similarity with the adopted template, except for TYC\,6548-790-1, for which it reached only 0.82. All RVs were measured relative to the solar system barycenter. Zero-point corrections were made using three standard stars of spectral types K and G \citep{chubak2012}, treated in the same way as the cool stars described above. We found an average zero-point correction of $-9\pm2$~km~s$^{-1}$. The results are reported in Table~\ref{Tab:RV_Dis}. The final error was obtained as the quadratic sum of the most relevant sources of uncertainty, namely the measurement error obtained in the CC procedure, the zero-point correction uncertainty, and the wavelength calibration error (although the latter proved negligible). Radial velocity measurements were performed on both target and additional stars. However, the results for the latter are not reliable, because the random location of their PSF centroid in the spectrograph slit could easily have introduced a large systematic uncertainty in their RVs. In fact, the target stars MSP\,132 and MSP\,85 showed a very different RV when they fell as additional objects in other frames, and the two measurements of the additional star 2MASS\,J07473034-2711464 differ noticeably (see Table~\ref{Tab:RV_Dis}). Hence, we report the results of all measurements, but exclude the additional stars from the RV analysis.
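The principle behind the cross-correlation measurement (on a logarithmic wavelength grid, a Doppler shift becomes a uniform pixel shift) can be illustrated with a minimal sketch. This is not the \textit{fxcor} implementation; the toy absorption line, grid, and input RV below are invented for illustration:

```python
import numpy as np

c = 299_792.458  # speed of light, km/s

# Log-wavelength grid over 4000-4800 Angstrom: a Doppler shift is
# then the same pixel shift at every wavelength.
loglam = np.linspace(np.log(4000.0), np.log(4800.0), 4096)
dv = (loglam[1] - loglam[0]) * c          # velocity step per pixel, km/s

def spectrum(center):
    """Flat continuum with one Gaussian absorption line (toy model)."""
    return 1.0 - 0.5 * np.exp(-0.5 * ((loglam - np.log(center)) / 2e-4) ** 2)

template = spectrum(4340.0)                      # rest-frame H-gamma
rv_true = 78.0                                   # km/s, illustrative
observed = spectrum(4340.0 * (1.0 + rv_true / c))

# Cross-correlate the continuum-subtracted spectra; the lag of the
# CCF peak gives the shift in pixels (fxcor refines it by fitting a
# Gaussian to the peak; here we keep one-pixel precision).
t = template - template.mean()
o = observed - observed.mean()
ccf = np.correlate(o, t, mode="full")
shift = ccf.argmax() - (len(t) - 1)
rv_measured = shift * dv

print(f"input {rv_true} km/s, recovered {rv_measured:.1f} km/s")
```

With this grid the pixel size is $\sim$13~km~s$^{-1}$, so the recovered value agrees with the input only to within one pixel; the Gaussian fit to the peak used in practice gives sub-pixel precision.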
\begin{figure} \centering \includegraphics[trim = 10mm 0mm 0mm 10mm, clip,angle=0,width=10cm]{Images/HistoFinal.png} \caption{Radial velocity distribution for program and target stars of NGC~2453.} \label{Fig:Histo} \end{figure} \subsection{NGC\,2453: temperatures and gravities} \label{sub:TgClu} The fundamental parameters (temperature, gravity, and rotation velocity) of the most likely cluster members (see Sect. \ref{sec:Results}) were measured as in \citet{Moni17}, by means of the routines developed by \citet{Bergeron92} and \citet{Saffer94}, as modified by \citet{Napiwotzki99}. Briefly, the available Balmer and He lines were fitted simultaneously with a grid of synthetic spectra obtained from model atmospheres of solar metallicity, computed with ATLAS9 \citep{Kurucz93}. The stellar rotation projected along the line of sight, $v\sin i$, is not a fit parameter but an input quantity of the routines. It was therefore varied manually until we found the value that returned the solution with the lowest $\chi^2$. The results are given in Table~\ref{tab:Temperatures}, along with the photometric data of the targets from our optical photometry. The algorithm does not take into account possible sources of systematic error, such as the flat-fielding procedure, the continuum definition, and the spectrum normalization. Hence, the errors returned by the routine were multiplied by a factor of three to derive a more realistic estimate of the uncertainties (see, e.g., \citealt{Moni17}). The stellar temperature is mainly derived from the relative intensity of the Balmer lines, which is well measured in our spectra. On the contrary, the surface gravity is estimated from the width of these features, whose effects our spectral resolution was insufficient to properly resolve.
In fact, we found a general underestimate of $\log g$ by about 0.2~dex when compared to the expectations for MS objects ($\log g\approx$4.2), possibly due to the combination of a low spectral resolution and unresolved effects of stellar rotation. However, \citet{Zhang17} suggested that the method might be underestimating the surface gravity of MS stars by $\sim$0.1~dex even at very high spectral resolution. \subsection{NGC\,2452: radial velocity} \label{sub:RVPla} The spectrum of NGC\,2452 is shown in Fig.~\ref{Fig:SpectrumPN}. Bright emission lines of [OII] (3727~\AA), [NeIII] (3967~\AA, 3869~\AA), HeII (4686~\AA), and the Balmer lines H$_\beta$ (4861~\AA), H$_\gamma$ (4340~\AA), and H$_\delta$ (4102~\AA) can be easily identified. For a more detailed description of the NGC\,2452 spectra at different locations, we refer the reader to Table IV in \citet{Aller1979}. The RV of the PN was measured by CC with a synthetic spectrum. This was built by adding up Gaussian curves with widths and heights equal to those of the observed features, but centered at the laboratory wavelengths taken from the NIST Atomic Spectra Database Lines Form\footnote{https://www.nist.gov/pml/atomic-spectra-database}. The reduction returned five spectra for NGC\,2452, namely a wide and a narrow extraction for both the northern and the central regions, and a narrow extraction for the southern one. The measurements were repeated independently for the five spectra, to verify whether the results could be affected by the internal kinematics of the nebula. We did not detect any systematic difference between the spectra beyond fluctuations compatible with the observational errors. The final estimate was obtained as the average of these measurements, and is reported in Table~\ref{table:RVNebula} along with previous values from the literature. Our final result is RV=62$\pm$2~km~s$^{-1}$, in good agreement with the weighted mean of the literature results, 65$\pm$2~km~s$^{-1}$.
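As an arithmetic cross-check, the weighted mean of the literature RVs of the PN can be reproduced from the three values listed in Table~\ref{table:RVNebula}; this is a sketch assuming standard inverse-variance weighting:

```python
import math

# Literature RVs of the PN NGC 2452 (km/s): Wilson (1953),
# Meatheringham et al. (1988), Durand et al. (1998).
rvs = [68.0, 62.0, 65.0]
errors = [2.5, 2.8, 3.0]

weights = [1.0 / e**2 for e in errors]          # inverse-variance weights
mean = sum(w * v for w, v in zip(weights, rvs)) / sum(weights)
err = 1.0 / math.sqrt(sum(weights))             # error of the weighted mean

print(f"{mean:.1f} +/- {err:.1f} km/s")
```

Rounded, this reproduces the 65$\pm$2~km~s$^{-1}$ quoted above.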
\begin{figure} \centering \includegraphics[trim = 0mm 35mm 0mm 0mm, clip, angle=-90,width=10cm]{Images/PMGAIA_DR2.pdf} \caption{Proper motion of stars within $2\farcm5$ of the NGC\,2453 center (gray points), from the \textit{Gaia} DR2 catalog. The open red circles and black squares show the position of the target and additional stars, respectively. The triangle indicates the MF54 star. \label{Fig:ProperMotion}} \end{figure} \begin{table*} \tiny \centering \caption{Photometric data, radial velocities, and distances of the program objects.} \label{Tab:RV_Dis} \begin{spacing}{1.7} \begin{tabular}{@{}lcccccccl@{}} \hline\hline Name & Type & \begin{tabular}[c]{@{}c@{}}J\\ (mag)\end{tabular} & \begin{tabular}[c]{@{}c@{}}J-H\\ (mag)\end{tabular} & \begin{tabular}[c]{@{}c@{}} $\mu^{\dagger}_{\alpha^{*}}$ \\ (mas yr$^{-1}$)\end{tabular} & \begin{tabular}[c]{@{}c@{}} $\mu^{\dagger}_{\delta}$\\ (mas yr$^{-1}$)\end{tabular} & \begin{tabular}[c]{@{}c@{}}RV\\ (km/s)\end{tabular} & \begin{tabular}[c]{@{}c@{}}Distance\\ (kpc)\end{tabular} & \multicolumn{1}{c}{Note$^{\ddagger}$} \\ \hline & \multicolumn{1}{l}{} & & & & & \\ TYC\,6548-790-1 & T & 6.73 $\pm$ 0.02 & 0.85 $\pm$ 0.06 & $-2.33 \pm 0.04$ & 3.40 $\pm$ 0.05 & 80 $\pm$ 10 & 5.2 $^{6.2}_{4.4}$ & MLM \\ MSP\,111 & T & 11.81 $\pm$ 0.02 & 0.10 $\pm$ 0.05 & $-2.35 \pm 0.04$ & 3.41 $\pm$ 0.05 & 69 $\pm$ 4 & 5.4 $^{6.5}_{4.6}$ & MLM\\ MSP\,112 & T & 12.28 $\pm$ 0.03 & 0.12 $\pm$ 0.06 & $-2.38 \pm 0.04$ & 3.39 $\pm$ 0.04 & 89 $\pm$ 6 & 4.6 $^{5.4}_{4.0}$ & MLM\\ MSP\,126 & T & 12.28 $\pm$ 0.02 & 0.09 $\pm$ 0.05 & $-2.18 \pm 0.04$ & 3.64 $\pm$ 0.04 & 89 $\pm$ 7 & 4.2 $^{4.7}_{3.7}$ & MLM\\ MSP\,159 & T & 12.13 $\pm$ 0.03 & 0.09 $\pm$ 0.06 & $-2.41 \pm 0.05$ & 3.48 $\pm$ 0.05 & 88 $\pm$ 8 & 4.4 $^{5.3}_{3.8}$ & MLM\\ MSP\,85 & T & 12.53 $\pm$ 0.02 & 0.08 $\pm$ 0.05 & $-2.29 \pm 0.04$ & 3.45 $\pm$ 0.04 & 87 $\pm$ 7 & 4.6 $^{5.3}_{4.1}$ & MLM\\ & A & & & & & 117$\pm$ 7 & & \\ MSP\,132 & T & 11.83 $\pm$ 0.02 & 0.10 $\pm$ 0.05 & $-2.13 
\pm 0.06$ & 3.46 $\pm$ 0.06 & 72 $\pm$ 4 & 4.7 $^{5.8}_{3.9}$ & MLM\\ & A & & & & & 28 $\pm$ 4 & & \\ NGC\,2453~55 & T & 12.82 $\pm$ 0.05 & 0.15 $\pm$ 0.10 & $-1.64 \pm 0.10$ & 5.08 $\pm$ 0.20 & 64 $\pm$ 6 & 11.0 $^{14.4}_{8.4}$ & NM, $\varpi$ < 0\\ MSP\,57 & T & 11.71 $\pm$ 0.02 & 0.09 $\pm$ 0.05 & $-2.47\pm 0.06$ & 3.59 $\pm$ 0.06 & 103$\pm$ 5 & 1.2 $^{1.2}_{1.1}$ & NM \\ NGC\,2453~16 & T & 12.11 $\pm$ 0.04 & 0.15 $\pm$ 0.09 & $-4.91 \pm 0.04$ & 4.10 $\pm$ 0.05 & 16 $\pm$ 2 & 1.3 $^{1.3}_{1.2}$ & NM \\ MSP\,211 & T & 12.68 $\pm$ 0.03 & 0.15 $\pm$ 0.06 & $-2.35 \pm 0.03$ & 3.48 $\pm$ 0.04 & 18 $\pm$ 8 & 4.4 $^{4.8}_{4.0}$ & NM \\ 2MASS\,J07473821-2710479 & A & 15.23 $\pm$ 0.06 & 0.25 $\pm$ 0.10 & $-2.39 \pm 0.07$ & 3.51 $\pm$ 0.08 & 72 $\pm$ 6 & 2.8 $^{3.4}_{2.4}$ & \\ 2MASS\,J07473390-2710060 & A & 15.36 $\pm$ 0.05 & 0.37 $\pm$ 0.10 & $-3.08 \pm 0.09$ & 2.98 $\pm$ 0.10 & 66 $\pm$ 15 & 3.4 $^{4.5}_{2.7}$ & \\ MSP\,52 & A & 14.24 $\pm$ 0.08 & 0.17 $\pm$ 0.20 & $-2.36 \pm 0.04$ & 3.35 $\pm$ 0.05 & $-11 \pm 4$ & 4.2 $^{4.9}_{3.7}$ & \\ MSP\,272 & A & 12.89 $\pm$ 0.03 & 0.25 $\pm$ 0.05 & $-1.08 \pm 0.20$ & 4.40 $\pm$ 0.20 & $-50 \pm 9$ & 0.9 $^{1.0}_{0.8}$ & \\ MSP\,76 & A & 12.91 $\pm$ 0.02 & 0.16 $\pm$ 0.04 & $-2.40 \pm 0.03$ & 3.48 $\pm$ 0.03 & 18 $\pm$ 3 & 4.1 $^{4.4}_{3.7}$ & \\ MSP\,141 & A & --- & --- & $-2.36 \pm 0.04$ & 3.49 $\pm$ 0.05 & 44 $\pm$ 4 & 4.2 $^{4.9}_{3.8}$ & \\ MSP\,74 & A & 11.87 $\pm$ 0.03 & 0.21 $\pm$ 0.06 & $-2.36 \pm 0.03$ & 3.42 $\pm$ 0.05 & 103$\pm$ 6 & 3.5 $^{3.9}_{3.2}$ & \\ 2MASS\,J07473034-2711464 & A & 14.68 $\pm$ 0.03 & 0.05 $\pm$ 0.05 & $-2.34 \pm 0.09$ & 3.83 $\pm$ 0.10 & 97 $\pm$ 5 & 2.0$^{2.3}_{1.8}$ & \\ & A & & & & & 70 $\pm$ 4 & & \\ 2MASS\,J07473176-2710057 & A & 14.58 $\pm$ 0.07 & 0.36 $\pm$ 0.20 & $-2.11 \pm 0.06$ & 3.50 $\pm$ 0.07 & 66 $\pm$ 6 & 3.6$^{4.3}_{3.1}$ & \\ MSP\,204 & A & 14.18 $\pm$ 0.07 & 0.24 $\pm$ 0.20 & $-2.20 \pm 0.05$ & 3.38 $\pm$ 0.05 & 101 $\pm$ 5 & 4.2$^{4.9}_{3.7}$ & \\ MSP\,223 & A & 14.06 $\pm$ 0.04 
& 0.16 $\pm$ 0.08 & $-2.32 \pm 0.04$ & 3.36 $\pm$ 0.04 & 64 $\pm$ 4 & 3.7$^{4.1}_{3.3}$ & \\ \hline MF54 & - & 10.44 $\pm$ 0.03 & 0.17 $\pm$ 0.06 & $-2.24 \pm 0.20$ & 3.47 $\pm$ 0.40 & 67 $\pm$ 14$^{\dagger\dagger}$ & 4.2$^{6.5}_{2.9}$ & $\varpi / \sigma_{\varpi}$=0.96 \\ \hline \hline \end{tabular} \end{spacing} \raggedright{$^{\dagger}$ Data from \textit{Gaia} DR2. \\ $^{\ddagger}$ MLM: Most Likely Member; NM: Non Member.\\ $^{\dagger\dagger}$ Data from \cite{moffat1974} } \end{table*} \begin{table*} \small \centering \caption{Derived parameters of the most likely member stars.} \label{tab:Temperatures} \begin{tabular}{@{}lcccccc@{}} \hline \multicolumn{1}{c}{Star} & $V$ & $(B-V)$ & $(U-B)$ & $T_\mathrm{eff}$ & $\log g$ & $v\sin i$ \\ \multicolumn{1}{c}{} & & & & K & dex & km~s$^{-1}$ \\ \hline TYC 6548-790-1 & 10.47 & 2.08 & $1.73$ & --- & --- &--- \\ MSP85 & 13.15 & 0.24 & $-0.40$ & 17700 $\pm$ 200 & 3.92 $\pm$ 0.03 &30 \\ MSP111 & 12.66 & 0.31 & $-0.34$ & 16700 $\pm$ 300 & 3.63 $\pm$ 0.06 &90 \\ MSP112 & 13.09 & 0.30 & $-0.33$ & 16600 $\pm$ 300 & 3.79 $\pm$ 0.06 &150 \\ MSP126 & 12.99 & 0.25 & $-0.40$ & 17800 $\pm$ 300 & 3.95 $\pm$ 0.06 &20 \\ MSP132 & 12.51 & 0.26 & $-0.40$ & 16600 $\pm$ 200 & 3.90 $\pm$ 0.03 &160 \\ MSP159 & 12.79 & 0.24 & $-0.41$ & 17700 $\pm$ 300 & 3.86 $\pm$ 0.06 &40 \\ \hline \end{tabular} \end{table*} \section{Results} \label{sec:Results} The RV distribution of our program stars is shown in Fig.~\ref{Fig:Histo}, while the proper motions drawn from the \textit{Gaia} DR2 catalog are plotted in Fig.~\ref{Fig:ProperMotion}. Almost half of the RVs are comprised between 60 and 90~km~s$^{-1}$, where previous estimates of the cluster RV are found \citepalias{moffat1974,bidin2014}, while most of the program stars in the proper motion diagram cluster around $(\mu_{\alpha^{*}},\mu_\delta)\approx(-2.3,3.5)$~mas~yr$^{-1}$.
The distances derived from the \textit{Gaia} parallaxes are also listed in Table~\ref{Tab:RV_Dis}, and they are in the range 4.2-5.4~kpc for most of the targets. The very high RV (103$\pm$5~km\,s$^{-1}$) and small distance (1.2$^{1.2}_{1.1}$~kpc) of the star MSP\,57 indicate that it is probably not a cluster member. The targets NGC\,2453~16 and MSP~211 are also suspected to be field stars due to their low RV (RV=16$\pm$2 and 18$\pm$8~km~s$^{-1}$, respectively), and for the former this conclusion is further reinforced by a discrepant distance and proper motion. In addition, NGC\,2453~55 lies far from the bulk of our sample in the proper motion plot, although its RV is compatible with it, and its uncertain distance does not provide additional information. These four stars were therefore labeled ``non-member'' (NM) in Table~\ref{Tab:RV_Dis}, and excluded from further analysis. We are thus left with seven stars whose RVs, distances, and proper motions are very consistent, and these are considered ``Most Likely Members'' (MLM). Their RV distribution is shown with a vertically striped area in Fig. \ref{Fig:Histo}. The RVs of stars in the field of NGC\,2453 were previously measured by \citetalias{bidin2014} using a CC of the H$_{\alpha}$ line. These authors estimated RV=73$\pm$5 and 66$\pm$8~km~s$^{-1}$ for TYC\,6548-790-1 and MSP\,111, respectively, in agreement with this work despite the large uncertainties. On the other hand, their result for MSP\,57 (RV=70$\pm$9~km~s$^{-1}$) disagrees with ours. They considered this star a probable cluster member, but the new data from \textit{Gaia} DR2 locate it at about 1.2~kpc, too close for an association with the cluster, so its membership is not supported. Conversely, \citetalias{bidin2014} classified the star MSP\,159 as a nonmember, because its proper motion from the PPMXL catalog \citep{roeser2010} was clearly offset from the bulk of their sample.
However, the accurate measurements from the \textit{Gaia} DR2 catalog indicate a proper motion consistent with the MLM stars, along with a compatible RV and distance. Regarding the red giant star TYC\,6548-790-1, \citet{mermilliod2001} and \textit{Gaia} DR2 obtained RVs of $85.2\pm0.3$~km~s$^{-1}$ and $85.5\pm0.3$~km~s$^{-1}$, respectively, in good agreement with ours. We added the star NGC\,2453\,54 (hereafter MF\,54) to our sample in both Table~\ref{Tab:RV_Dis} and Fig.~\ref{Fig:ProperMotion}; its RV was measured by \citet{moffat1974}, although not by us. We return to this object in Sect. \ref{sec:discussion}. Finally, the RV of NGC\,2453 was computed using the target stars labeled as MLM. We found a weighted mean of RV=78$\pm$3~km~s$^{-1}$, where the uncertainty is the statistical error on the mean. Table~\ref{table:RVNebula} compares our result with those available in the literature and reveals that our estimate differs from the previous ones. The latter, however, were obtained from only one or two stars, whose cluster membership was inevitably uncertain. Our result, on the contrary, is based on a sample of seven stars with consistent RVs, proper motions, and parallax-based distances. From the \textit{Gaia} measurements of our program stars, the cluster distance and proper motion can also be estimated. Despite the large errors on the distances, the modal values of all the MLM stars are close to each other and differ by less than their respective uncertainties, suggesting that the latter could have been overestimated. We adopted the weighted means of the MLM stars and the respective errors on the mean as the best estimates of the cluster values and their uncertainties, obtaining $d=4.7\pm0.2$~kpc, $\mu_{\alpha^{*}}=-2.30\pm0.04$~mas~yr$^{-1}$, and $\mu_{\delta}=3.47\pm0.03$~mas~yr$^{-1}$.
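The weighted-mean cluster RV can be reproduced from the seven MLM values of Table~\ref{Tab:RV_Dis}; this is a sketch assuming plain inverse-variance weighting (the $\pm$3~km~s$^{-1}$ quoted above is the statistical error on the mean, which is not recomputed here):

```python
# RVs (km/s) and errors of the seven MLM stars of Table 1:
# TYC 6548-790-1, MSP 111, 112, 126, 159, 85, 132.
rvs = [80.0, 69.0, 89.0, 89.0, 88.0, 87.0, 72.0]
errors = [10.0, 4.0, 6.0, 7.0, 8.0, 7.0, 4.0]

weights = [1.0 / e**2 for e in errors]          # inverse-variance weights
mean = sum(w * v for w, v in zip(weights, rvs)) / sum(weights)

print(f"weighted mean RV = {mean:.1f} km/s")    # ~78 km/s
```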
\begin{table} \small \centering \caption{Literature results for the RV of the PN NGC\,2452 and the OC NGC\,2453.} \label{table:RVNebula} \begin{tabular}{@{}llc@{}} \textbf{PN NGC~2452} & & \\ \hline Literature & RV (km s$^{-1}$) & \\ \hline \citet{wilson1953} & 68.0 $\pm$ 2.5 & \\ \citet{meatheringham1988} & 62.0 $\pm$ 2.8 & \\ \citet{durand1998} & 65 $\pm$ 3 & \\ & & \\ \multicolumn{1}{r}{Literature Average}& 65 $\pm$ 2 & \\ \multicolumn{1}{r}{This Work} & 62 $\pm$ 2 \\ \hline \\ \\ \textbf{OC NGC~2453} & & \\ \hline Literature & RV (km s$^{-1}$) & Number of stars \\ \hline \citet{moffat1974} & 67 $\pm$ 14 & 1 \\ \citet{bidin2014} & 68 $\pm$ 4 & 2 \\ & & \\ \multicolumn{1}{r}{This Work} & 78 $\pm$ 3 & 7 \\\hline \end{tabular} \end{table} \begin{figure*} \centering \includegraphics[width=18cm]{Images/CDM.pdf} \caption{CMDs and TCDs of NGC\,2453. {\it Left panels}: The $V$-$(B-V)$ CMD. Dashed and solid lines depict isochrones of 40~Myr and 50~Myr, respectively, shifted in magnitude for a distance of 4.7~kpc. {\it Right panels}: $(U-B)$-$(B-V)$ TCDs. Black and red lines depict intrinsic and reddened isochrones, respectively, and the arrow shows the reddening direction. {\it Bottom panels}: Zoomed region of the upper panels around the MLM stars. Light gray dots indicate the stars in the field along the line of sight of the cluster, black filled circles show the MLM stars, and open circles indicate stars with proper motions within 2$\sigma$ of the cluster. Black empty dots are stars with upper distance errors $\leqslant$3.5~kpc from \textit{Gaia} DR2, and the star in the square is MF54. 
PARSEC + COLIBRI isochrones from \citet{marigo2017} have been fitted to the MLM stars.} \label{Fig:CDMCCDISO} \end{figure*} \subsection{Fundamental parameters} \label{sub:Parameters} NGC\,2453 has a long record of observations, but its fundamental parameters have proven difficult to establish, in part because of the complex mix of stars at different distances and reddenings lying along the line of sight. In this work, we overcame the problem of field contamination by estimating the cluster distance from the parallax-based \textit{Gaia} distances of spectroscopically confirmed members. With this information, we can determine the age and reddening of the system from isochrone fitting of our $UBVRI$ photometry, relying again on the constraints provided by the \textit{Gaia} database and our spectroscopic results. PARSEC + COLIBRI isochrones \citep{marigo2017} were used in this process. The upper panels of Fig.~\ref{Fig:CDMCCDISO} show the $V-(B-V)$ CMD and the $(U-B)-(B-V)$ TCD of the cluster area. The MLM stars are depicted as black filled circles. The TCD (top-right panel) reveals the presence of at least two groups of stars with very different reddening. To identify the cluster sequence, we selected stars with a \textit{Gaia} proper motion within 2$\sigma$ of the cluster value (identified with the mean of the MLM stars in Table~\ref{Tab:RV_Dis}), with a proper motion error lower than 0.1~mas~yr$^{-1}$, and with a \textit{Gaia} distance close to $d=4.7\pm0.2$~kpc. These stars are depicted in Fig.~\ref{Fig:CDMCCDISO} as open circles. To identify foreground stars, we also selected those whose distance confidence interval had an upper edge (upper index in Table~\ref{Tab:RV_Dis}) lower than 3.5~kpc, and we indicated them with black dots in the diagrams. Indeed, most of these stars are better described by a less reddened sequence than the bona-fide cluster members (open circles), although a few field stars might still be contaminating the latter sample.
The brighter MLM stars and the additional open circles thus identify the cluster loci in the TCD. The intrinsic theoretical isochrone is shown in the TCD of Fig. \ref{Fig:CDMCCDISO} as a black solid curve, while the red one indicates the same model after applying the final reddening solution. The triangles on the intrinsic isochrone correspond to the points in the same temperature range as our spectroscopic estimates for the MLM stars (see Table \ref{tab:Temperatures}), that is, $\log(T_{\mathrm{eff}})=[4.23,4.25]$. We determined the color excesses $E_{U-B}$ and $E_{B-V}$ from the difference between the average color indices of the MLM stars (black circles) and of the isochrone points at the same temperature (black triangles). We thus derived the slope $E_{U-B}/E_{B-V}$ of the reddening vector in the TCD. The bottom-right panel of Fig.~\ref{Fig:CDMCCDISO} shows a zoomed region of the TCD, focused on the MLM stars, where it appears clear that three MLM stars (namely MSP\,111, MSP\,112 and MSP\,132) are found at redder colors than the others, possibly due to stellar rotation effects \citep{bastian2009} or the presence of a cooler companion \citep{yang2011}. Table \ref{tab:Temperatures} shows that these stars are indeed fast rotators. As a consequence, only the slow-rotating MLM stars were used in the process. We obtained a slope of $E_{U-B}/E_{B-V}=0.78\pm0.09$, with $E_{B-V}=0.42\pm0.01$. This result agrees well with \citet{turner2012}, who established localized reddening laws described by E$_{U-B}$/E$_{B-V}$ = 0.77 and $R_V$ = 2.9 for the third Galactic quadrant (\citealt{turner2014testing}; \citealt{carraro2015}); the latter value is adopted here. The resulting extinction is $A_V=1.22\pm0.03$~mag. This result, together with the distance derived in this work, fits the general Galactic extinction pattern determined by \citet{neckel1980} very well, even though those authors did not study the NGC\,2453 region (l=243\degr, b=$-1$\degr).
According to their work, the Galactic region near the cluster line-of-sight (l=242\degr, b=0\degr) has an extinction $A_V\approx 1$ up to $\sim$5~kpc, which increases to $A_V\approx 2$ at about 6~kpc and beyond. In contrast, the next region closest to the cluster area (l = 245\degr, b = 0\degr) shows an extinction $A_V\approx 1.5$ between 2 and 6~kpc, with slight variations at both $\sim$3.5 and $\sim$5.0~kpc. These results seem to be confirmed by the 3D map of interstellar dust reddening\footnote{http://argonaut.skymaps.info/} described by \citet{Green2018}. The map gives a distance of $d=5.0$~kpc for a reddening of $E_{B-V}=0.42 \pm 0.03$ along the same line of sight as the cluster, in good agreement with our results. Finally, with the distance and reddening determined above, we fitted the slow-rotating MLM and bona-fide cluster stars in the CMD, with age as the only free parameter. We find that an age in the range $\tau\approx 40-50$~Myr is the best solution, which accurately reproduces the observed sequence of stars (see left panel of Fig.~\ref{Fig:CDMCCDISO}). \begin{table} \small \centering \caption{Parameters estimated for NGC 2453} \label{table:Literature} \begin{tabular}{llcl} \hline Reference & \multicolumn{1}{c}{$E_{B-V}$} & \multicolumn{1}{c}{$\tau$ ~(Myr)} & \multicolumn{1}{c}{$d$ (kpc)} \\ \hline \\ \cite{seggewiss1971} & 0.48 & -- & 1.5 \\ \cite{moffat1974} & 0.47 $\pm$ 0.04 & 40 & 2.9 $\pm$ 0.5 \\ \cite{gathier1986} & 0.49 $\pm$ 0.01 & -- & 5.0 $\pm$ 0.6 \\ \cite{mallik1995} & 0.47 & 25 & 5.9 $\pm$ 0.5 \\ \cite{moitinho2006} & -- & 40 & 5.25 \\ \cite{hasan2008} & 0.47 & 200 & 3.3 \\ & & & \\ This Work & 0.42 $\pm$ 0.01 & 40-50 & 4.7 $\pm$ 0.2 \\ \hline \end{tabular} \end{table} \section{Discussion} \label{sec:discussion} \subsection{Cluster parameters} \label{sub:isofit} Our estimates of reddening, distance, and age for NGC\,2453 are compared with literature results in Table~\ref{table:Literature}.
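As a quick numerical cross-check of the bookkeeping above, the mean-color-excess and extinction arithmetic can be sketched in a few lines. This is illustrative only: the member and isochrone colors below are hypothetical stand-ins, and only $E_{B-V}=0.42$, $R_V=2.9$, and the resulting $A_V\approx1.22$ correspond to values quoted in the text.

```python
# Illustrative cross-check of the color-excess and extinction arithmetic.
# The member and isochrone colors below are hypothetical stand-ins; only
# E(B-V) = 0.42, R_V = 2.9 and the resulting A_V ~ 1.22 correspond to the
# values quoted in the text.

def color_excesses(member_bv, member_ub, iso_bv, iso_ub):
    """Mean color excesses: observed member colors minus the intrinsic
    isochrone colors at the same temperature."""
    n = len(member_bv)
    e_bv = sum(m - i for m, i in zip(member_bv, iso_bv)) / n
    e_ub = sum(m - i for m, i in zip(member_ub, iso_ub)) / n
    return e_bv, e_ub

# Hypothetical slow-rotating members (B-V, U-B) and matched isochrone points.
member_bv, member_ub = [0.22, 0.20, 0.21], [-0.43, -0.45, -0.44]
iso_bv, iso_ub = [-0.20, -0.22, -0.21], [-0.76, -0.78, -0.77]

e_bv, e_ub = color_excesses(member_bv, member_ub, iso_bv, iso_ub)
slope = e_ub / e_bv        # reddening-vector slope E(U-B)/E(B-V)
a_v = 2.9 * e_bv           # A_V = R_V * E(B-V), third-quadrant law R_V = 2.9
print(round(e_bv, 2), round(slope, 2), round(a_v, 2))
```

With these stand-in colors the script recovers $E_{B-V}=0.42$ and $A_V=2.9\times0.42\approx1.22$, matching the quoted values.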
All previous studies were purely photometric, while we joined information from optical spectroscopy, $UBV$ photometry, and recent data from the \textit{Gaia} mission. The distance and age derived here are roughly compatible with those found by \citet[][5.25~kpc and 40~Myr]{moitinho2006}, but the former is closer to the result of \citet[][$d=5.0\pm0.6$~kpc]{gathier1986}. However, the reddening derived by Gathier et al. (and in general, all estimates in the literature) is $\sim$15\% larger than ours. These authors based their results on five stars previously classified as cluster members by \citetalias{moffat1974}, namely NGC\,2453~7, 8, 28, 30 and 45 \citep{gathier1985}. However, the \textit{Gaia} distances for stars 28 and 30 ($1.1^{1.2}_{1.1}$~kpc and $7.8^{9.5}_{6.4}$~kpc, respectively) disagree with the estimates of Gathier et al. ($\sim$3.9 and 4.4~kpc, respectively), and they deviate strongly from the average value for our MLM stars. This suggests that some stars used in previous works to constrain the cluster parameters may not have been cluster members. Gathier et al. found that the color excess $E_{B-V}$ of these two stars is the same ($\sim 0.51$), in spite of the huge distance discrepancy reported by \textit{Gaia}. On the other hand, \citet{mallik1995} showed that a reddening of $0.47$, as proposed by \citetalias{moffat1974}, produces reasonably good isochrone fits on the CMD. However, our analysis shows that such high values accurately fit the color of a group of stars that are displaced to redder colors than the rest of the MS, possibly due to their fast rotation or to the presence of a cool companion. We indicated the evolved giant star MF54 observed by \citetalias{moffat1974} as an empty square in Fig. \ref{Fig:CDMCCDISO}, and as a black triangle in Fig.~\ref{Fig:ProperMotion}. These authors classified MF54 as a cluster member based on its spectral class (B5V:k) and a RV of $67 \pm 14$~km~s$^{-1}$.
Its \textit{Gaia} DR2 proper motion and distance agree with the mean values obtained for the cluster (see Table~\ref{Tab:RV_Dis}), despite the large error bars. However, the fractional parallax error is extremely large ($\sim$118\%), in contrast with the typical errors for MLM stars ($\lesssim$25\%); such large errors produce less reliable distance measurements \citep{BailerJones15}. Due to the high uncertainties in the measurements, the membership of MF54 is not completely clear, and therefore we did not take it into account during the isochrone fit procedure. Similarly, the red giant star TYC\,6548-790-1 was also excluded from the fit. This star could be variable (see \citetalias{bidin2014}), and as a consequence its photometric data may not be completely reliable. \citet{mallik1995} showed that the inclusion of one or both of these two stars during the isochrone fitting procedure can change the cluster age from 15 to 40~Myr. In Fig. \ref{Fig:Density} we analyze the radial density profile of the OC. Only stars with proper motion within 3$\sigma$ of the cluster value were selected. It is clear that the cluster population dominates the background up to approximately $r\sim 8\arcmin-10\farcm5$. The angular distance between PN NGC\,2452 and the center of the OC NGC\,2453 is $8\farcm5$, that is, within the coronal extent of the OC. \subsection{Planetary nebula membership} \label{sub:membership} \citet{gathier1986} derived the reddening of the PN NGC\,2452 as $E_{B-V} = 0.43 \pm 0.05$, which is virtually the same as that found by us for the cluster. Nevertheless, the reddening-distance method used by \citet{gathier1986} for the PN leads to a distance of $d_{PN}=3.57\pm0.47$~kpc, which is confirmed by the more modern dust map of \citet{Green2018} ($d_{PN}=3.70$~kpc). Other authors adopted different methods, and found even smaller values \citep[see, e.g.,][]{acker1978,maciel1980,daub1982,stanghellini2008}.
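The parallax-quality argument used above to set MF54 aside can be phrased as a small screening helper. This is a sketch only: the 25\% cut and the example parallaxes are illustrative assumptions; only the $\sim$118\% versus $\lesssim$25\% contrast comes from the text.

```python
# Sketch of a fractional-parallax-error screen in the spirit of
# Bailer-Jones (2015): when sigma_parallax / parallax is large, the
# inverted distance is unreliable.  The 25% threshold and the example
# parallaxes are assumptions for illustration only.

def parallax_quality(parallax_mas, sigma_mas, max_frac=0.25):
    """Return (fractional error, usable flag) for a parallax measurement."""
    frac = abs(sigma_mas / parallax_mas)
    return frac, frac <= max_frac

# Hypothetical MLM-like star versus an MF54-like star (~118% error).
good = parallax_quality(0.20, 0.04)   # 20% fractional error -> usable
bad = parallax_quality(0.17, 0.20)    # ~118% fractional error -> rejected
print(good, bad)
```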
Distance and proper motions from \textit{Gaia} DR2 for PN NGC\,2452 are not particularly reliable ($d_{PN}=2.4^{3.4}_{1.8}$~kpc, $\mu_{\alpha}=-2.5\pm0.2$~mas~yr$^{-1}$ and $\mu_{\delta}=3.5\pm0.2$~mas~yr$^{-1}$). Even though the central star of NGC\,2452 was a target of various photometric studies \citep[e.g.,][]{Ciardullo96,Silvotti96}, and its coordinates match those from \textit{Gaia} very well, \citet{kimeswenger2018} restrict the identification to PNe with photometric colors in the range $-0.65\leqslant (bp -rp)\leqslant -0.25$. Outside this interval, \textit{Gaia} DR2 cannot identify the central star correctly due to contamination by the H$_\alpha$+[NII] emission line of the PN envelope. The color index for NGC\,2452 is $(bp -rp)=0.07$, indicating that it is highly reddened. Therefore, any identification would most likely be incorrect. Figure~\ref{Fig:RV_D} shows that the RV of PN~NGC\,2452, along with the distance proposed by \citet{gathier1986}, closely matches the distance--RV profile of the Galaxy arm in the Puppis direction. The profile was obtained assuming the rotation curve of \citet{brand1993}, the solar peculiar motion of \citet{schonrich2010}, $R_\odot=8.0\pm0.3$~kpc, and $V_\mathrm{LSR}=220\pm20$~km~s$^{-1}$. In contrast, both the RV and the distance computed here consistently place the cluster NGC\,2453 just behind NGC\,2452, possibly as a member of the Perseus arm, as can be seen in Fig.~2 of \citet{moitinho2006}. \begin{figure} \centering \includegraphics[trim = 0mm 0mm 0mm 0mm,clip,angle=-90, width=9.5cm]{Images/DensityProfile.pdf} \caption{Radial density profile constructed for NGC\,2453 using proper motions from \textit{Gaia} DR2. The radial distance of NGC\,2452 is indicated with an arrow.
The full line shows the field level as the average of all the points with $r>11\arcmin$.} \label{Fig:Density} \end{figure} \section{Conclusions} \label{sec:conclusions} We present the results of distance analyses that solve the longstanding discrepancy regarding the fundamental parameters of the OC NGC\,2453 and the debated cluster membership of the PN NGC\,2452, both of which were likely affected by the selection of cluster stars contaminated by field objects. The study of RVs has often been required to confirm real PN/OC associations (see, e.g., \citealt{mallik1995}, \citealt{majaess2007}, \citetalias{bidin2014}). When the RVs of the PN and the OC disagree, the membership is rejected (\citealt{kiss2008}, \citetalias{bidin2014}). The difference in RV between the PN (62$\pm$1~km~s$^{-1}$) and the cluster (78$\pm$3~km~s$^{-1}$) is noticeable and highly significant ($\sim5\sigma$), excluding a physical association between them. All photometric diagrams show the presence of a robust group of foreground stars located at distances $\leqslant$3.5~kpc and contaminating the cluster field. According to the theoretical distance--velocity profile of the Galactic disk in the direction of Puppis, the RV we obtain for the PN NGC\,2452 is consistent with membership to this foreground population. \begin{acknowledgements} This work has made use of data from the European Space Agency (ESA) mission \textit{Gaia} (\url{https://www.cosmos.esa.int/gaia}), processed by the \textit{Gaia} Data Processing and Analysis Consortium (DPAC, \url{https://www.cosmos.esa.int/web/gaia/dpac/consortium}); data from the Two Micron All Sky Survey, which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center/California Institute of Technology, funded by the National Aeronautics and Space Administration and the National Science Foundation; and the SIMBAD and VizieR databases, operated at the CDS, Strasbourg, France.
We thank the anonymous referee for his/her useful comments and suggestions. ESV thanks the ``Estrategia de Sostenibilidad, Universidad de Antioquia''. DGD acknowledges support from the Plataforma de Movilidad Estudiantil y Acad\'emica of Alianza del Pac\'ifico and its Chilean headquarters in the Agencia Chilena de Cooperaci\'on Internacional para el Desarrollo - AGCID, as well as the support from call N$\degr$ 785 of 2017 of the Colombian Departamento Administrativo de Ciencia, Tecnolog\'ia e Innovaci\'on, COLCIENCIAS. \end{acknowledgements} \begin{figure} \centering \includegraphics[trim = 0mm 0mm 0mm 10mm, width=10cm]{Images/curve.pdf} \caption{Distance--RV plot in the direction of Puppis. The solid curve shows our theoretical model based on Galactic rotation, with the dashed curves indicating the $1\sigma$ propagated errors. Gray circles are classical Galactic Cepheids from \citet{mel2015} in the third quadrant with Galactic latitudes $-2\degree<b<2\degree$, while triangles are bright stars with available RVs from \textit{Gaia} DR2 with $242.5\degree<l<243.5\degree$ and $-1\degree<b<1\degree$. Squares with error bars show the position of NGC\,2452 and NGC\,2453.} \label{Fig:RV_D} \end{figure} \bibliographystyle{aa}
\section{Introduction} In this paper, we consider the long time stability of the invariant tori for the $d$-dimensional nonlinear Schr\"odinger (NLS) equation \begin{equation}\label{1} -\mathrm{i}\dot{u}=-\Delta u+V(x)\ast u+\varepsilon\frac{\partial F}{\partial \bar{u}}(|u|^{2}), \quad u=u(t,x) \end{equation} under the periodic boundary condition $x\in\mathbb{T}^{d}, d\geq 1$. The convolution function $V: \mathbb{T}^{d}\rightarrow \mathbb{C}$ is analytic and the Fourier coefficients $\hat{V}(a)$ take real values when $V$ is expanded into the Fourier series $V(x)=\sum_{a\in\mathbb{Z}^{d}}\hat{V}(a)e^{\mathrm{i}\langle a,x\rangle}$. The nonlinearity $F$ is real analytic. The NLS equation \eqref{1} is a Hamiltonian PDE. The KAM theory is a well-known approach to establishing the existence of invariant tori for Hamiltonian PDEs. The invariant tori so constructed are often referred to as the \emph{KAM tori}. The ``KAM for PDE'' theory started in the late 1980s and originally applied to one spatial dimensional PDEs, and this case is now well understood. See for example \cite{K, Kuk00, KP,P1,P2,W, LY11, BBP13, BBHM18} and the references therein. However, the KAM theory for space-multidimensional Hamiltonian PDEs is still in its early stages. The first breakthrough was made by Bourgain \cite{B1} on the two dimensional NLS equation, in which he developed Craig and Wayne's scheme for periodic problems. Using new techniques of Green's function estimates in spectral theory, Bourgain proved the persistence of invariant tori for space-multidimensional NLS and nonlinear wave (NLW) equations \cite{B2}. The above mentioned method is now known as the Craig-Wayne-Bourgain (CWB) method. See also \cite{BB13, BB20, Wang16, Wang19, Wang20} and the references therein. The classical KAM approach for space-multidimensional Hamiltonian PDEs was developed by Eliasson and Kuksin \cite{EK} on the NLS equation.
They take a sequence of symplectic transformations such that the transformed Hamiltonian guarantees the existence of the invariant tori. Moreover, the KAM approach in \cite{EK} also provides the reducibility and linear stability of the obtained invariant tori. See also \cite{EGK,GY,GXY11, PP1,PP2,Y} for the KAM approach on space-multidimensional PDEs. To ensure that the obtained KAM tori can be observed in physics and other real applications, one has to prove that those KAM tori are stable in some sense, among which the simplest notion is \emph{linear stability}. Let us recall the definition of the linear stability of the invariant tori. Consider a nonlinear differential equation \begin{equation}\label{01} \dot{x}=X(x), \end{equation} which has an invariant torus $\mathcal{T}$ carrying the quasi-periodic flow $x(t)=x_{0}(t)$. We say that the invariant torus $\mathcal{T}$ is linearly stable if the equilibrium of the linearized equation \begin{equation*} \dot{y}=D X(x_{0}(t)) y \end{equation*} of \eqref{01} along $\mathcal{T}$ is Lyapunov stable. A more general definition is that $\mathcal{T}$ is linearly stable if all the Lyapunov exponents of $\mathcal{T}$ equal zero. The classical KAM tori are linearly stable as a byproduct of the KAM approach. To see this, we consider a Hamiltonian perturbation \begin{equation*} H= N+P= N+ P^{\textrm{low}}+ P^{\textrm{high}} \end{equation*} of the integrable part \begin{equation*} N= \langle \omega, y\rangle+ \sum_{j\in \mathbb{Z}^{d}} \Omega_{j} z_{j} \bar{z}_{j}, \end{equation*} where \begin{equation*} \begin{aligned} P^{\textrm{low}}=& R^{x}+ \langle R^{y}, y\rangle+\langle R^{z}, z\rangle +\langle R^{\bar{z}}, \bar{z}\rangle + \langle R^{zz} z, z\rangle+ \langle R^{z\bar{z}} z, \bar{z}\rangle +\langle R^{\bar{z}\bar{z}}\bar{z}, \bar{z}\rangle \end{aligned} \end{equation*} and \begin{equation*} P^{\textrm{high}}= O(|y|^{2}+ \|z\| \cdot |y|+ \|z\|^{3}).
\end{equation*} The classical KAM approach (for $d=1$) aims at taking a sequence of symplectic transformations to eliminate all terms in $P^{\textrm{low}}$, except for the averages $\langle \widehat{R^{y}}(0), y\rangle$ and $\sum_{i=j} \widehat{R^{z\bar{z}}_{ij}}(0) z_{i} \bar{z}_{j}$. In particular, the quadratic terms in $P^{\textrm{low}}$ are reduced to $\sum_{i=j} \widehat{R^{z\bar{z}}_{ij}}(0) z_{i} \bar{z}_{j}$ with constant coefficients, which can be put into the integrable part $N$ for the next iteration. In this way, the linearized equation along the obtained KAM tori can be reduced to \begin{equation}\label{001} \mathrm{i} \dot{z}= (\Omega+ \varepsilon [R]) z, \end{equation} where $\Omega=\textrm{diag}(\Omega_{j}: j\in\mathbb{Z})$ and $[R]= \textrm{diag}((R^{\infty;z\bar{z}}_{ij})^{\wedge}(0): i=j\in \mathbb{Z})$ are diagonal and constant. Obviously, the equilibrium $z=0$ of \eqref{001} is Lyapunov stable, and thus the KAM tori are linearly stable. Unfortunately, there is a difficulty in extending the classical KAM approach for $d=1$ to the case of $d>1$. Taking the NLS equation for example, the normal frequency satisfies $\Omega_{j}\sim |j|^{2}, j\in\mathbb{Z}^{d}$ after writing the NLS equation as an infinitely dimensional Hamiltonian system as above. It follows that the normal frequencies may have unbounded multiplicities since $\# \{j'\in\mathbb{Z}^{d}: |j'|=|j|\}\sim |j|^{d-1}\rightarrow \infty$ as $|j|\rightarrow \infty$. This feature leads to serious resonances in solving the homological equations, which might impede the convergence of the symplectic transformations. Eliasson-Kuksin \cite{EK} carefully analyzed the separation property of the normal frequencies and provided insight on the T\"{o}plitz-Lipschitz property of the Hamiltonian.
Using the ``super Newton iteration'' (rather than the usual Newton iteration in the KAM theory for $d=1$), they succeeded in eliminating $P^{\textrm{low}}$, but leaving an infinitely dimensional block-diagonal and constant matrix in the quadratic term of $z,\bar{z}$ behind. As a result, they proved that there are plenty of KAM tori for the NLS equation with $d>1$, all of whose Lyapunov exponents equal zero and which are hence linearly stable. As for the NLW equation, the normal frequency $\Omega_{j}=|j|=\sqrt{j_{1}^{2}+\cdots+j_{d}^{2}}$ does not have a good separation property like the NLS equation. Although Bourgain \cite{B2} applied the CWB method to prove the existence of KAM tori for the NLW equation with $d>1$, the linear stability of those KAM tori remains open. Recently, by modifying the CWB method, the authors of \cite{HSSY} obtained not only the existence but also the linear stability of the KAM tori for Hamiltonian systems with finitely many degrees of freedom. In many cases, we cannot determine the stability of the nonlinear system from its linearized equation directly. For instance, in planar linear dynamical systems, a center equilibrium can become a focus under a small perturbation. There are also examples in which a linearly stable model can be triggered by an initial perturbation to exhibit chaotic dynamics \cite{GG}. This prompts us to study the nonlinear stability, among which the \emph{long time stability} is of particular interest in PDEs. In finitely dimensional Hamiltonian systems, the best result concerning the long time stability is the Nekhoroshev estimate \cite{Neh}. Consider an $n$-degree-of-freedom Hamiltonian $H= N(y)+ \varepsilon R(y, x)$, where $(y,x)\in \mathbb{R}^{n}\times \mathbb{T}^{n}$ is the action-angle variable. Assume the functions $N$ and $R$ are analytic in $(y,x)$ in some open domain. The Nekhoroshev estimate tells us that the variation of the actions of all orbits remains small over a finite, but exponentially long time interval.
More precisely, for sufficiently small $\varepsilon$, one has \begin{equation}\label{Nek1} | y(t)- y(0)|\lesssim \varepsilon^{a} \quad \textrm{for}~|t|\lesssim \exp (\varepsilon^{-b}), \end{equation} where the constants $a, b$ depend on the degree of freedom. In particular, if $N$ is convex, one can get $a=b=\frac{1}{2n}$. See P\"{o}schel \cite{Pos93}. Noting also that instabilities such as Arnold diffusion \cite{Arn64} may occur when the degree of freedom $n\geq 3$ and that transfer of energy may appear in the NLS equation \cite{CK}, one should not expect the orbits to be stable forever. Consequently, it is reasonable to apply the Nekhoroshev estimate on the long time stability of orbits to describe the stability of the Hamiltonian system. For the NLS equation (or generally the Hamiltonian PDEs), the degree of freedom of the Hamiltonian is infinite. One immediately gets that $a=b=0$ and the Nekhoroshev estimate in \eqref{Nek1} no longer works. Instead, Bourgain \cite{Bou96-GAFA} suggested investigating the long time behavior of orbits in the neighborhood of the equilibrium and relaxed the stable time interval from $|t|< \exp (\varepsilon^{-b})$ to $|t|< \varepsilon^{-M}$ for large $M$. Since then, a large body of literature has been devoted to the long time stability of the equilibrium for Hamiltonian PDEs. See \cite{Bam03, DS04, BG, FG13, YZ14, BMP20,CLW20,CMW20, BG21}. We emphasize that in \cite{BG} Bambusi and Gr\'{e}bert introduced the tame property of the vector field, which simplifies the proof considerably. In contrast to the equilibrium, the KAM tori are much more complicated solutions of the NLS equation. It is known that the KAM tori are super-exponentially long time stable for finitely dimensional Hamiltonian systems \cite{BFG88}. For Hamiltonian PDEs, the study of the long time stability of KAM tori is limited to the case of $d=1$. For instance, \cite{CLY} and \cite{CGL15} studied the long time stability of the KAM tori for NLS and NLW equations, respectively.
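To fix ideas on the time scales involved, the convex-case Nekhoroshev bound with $a=b=\frac{1}{2n}$ can be evaluated for toy parameter values, illustrating why the exponential estimate degenerates as $n\rightarrow\infty$ and one falls back on a polynomial time scale $\delta^{-M}$. All numbers below are illustrative, not taken from the analysis.

```python
import math

# Toy evaluation of the Nekhoroshev time scale T ~ exp(eps**(-b)) with
# b = 1/(2n) in the convex case, against the polynomial scale delta**(-M)
# used in the PDE setting.  All numerical values are illustrative.

def nekhoroshev_log_time(eps, n):
    """log T for the exponential estimate |t| <~ exp(eps**(-b)), b = 1/(2n)."""
    return eps ** (-1.0 / (2 * n))

def polynomial_log_time(delta, M):
    """log T for the polynomial estimate |t| < delta**(-M)."""
    return M * math.log(1.0 / delta)

# With few degrees of freedom the exponential bound is strong...
few = nekhoroshev_log_time(1e-4, n=3)        # eps**(-1/6) ~ 4.6
# ...but eps**(-1/(2n)) -> 1 as n -> infinity: the estimate degenerates,
# which is why the infinite-dimensional case falls back on delta**(-M).
many = nekhoroshev_log_time(1e-4, n=1000)
poly = polynomial_log_time(1e-3, M=10)
print(round(few, 2), round(many, 3), round(poly, 1))
```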
For the cubic defocusing NLS equation on $\mathbb{T}^{2}$, Maspero-Procesi \cite{MP18} studied the large-time stability (with the stable time interval $|t|<\delta^{-2}$) of small finite gap solutions, which depend only on one spatial variable. For $d>2$, as far as we know, there seem to be no results in this respect. The main result of the present paper is that the majority of the KAM tori obtained by Eliasson-Kuksin \cite{EK} are stable over a long time $|t|< \delta^{-M}$ for large $M$. More precisely, we have the following theorem. \begin{thm}\label{t} Under the assumptions for equation \eqref{1}, if $\varepsilon>0$ is sufficiently small, then for typical $V$ $($in the sense of measure$)$, the nonlinear Schr\"odinger equation \eqref{1} possesses a linearly stable KAM torus $\mathcal{T}=\mathcal{T}_{V}$ in the Sobolev space $H^{p}(\mathbb{T}^{d})$. Moreover, letting $ M \approx\varepsilon^{-\frac{2}{3}}$ and $p\geq80(4d)^{4d}(M+7)^{4}+1$, there exists a small $\delta_{0}$ depending on $p, M$ and $\emph{\textrm{dim}}~\mathcal{T}$ such that for any $0<\delta<\delta_{0}$ and any solution $u(t,x)$ of \eqref{1} with the initial datum $u(0,\cdot)$ satisfying \begin{equation*} d_{H^{p}(\mathbb{T}^{d})}(u(0,\cdot), \mathcal{T}):=\inf_{w\in \mathcal{T}} \|u(0,\cdot)- w\|_{H^{p}(\mathbb{T}^{d})}<\delta, \end{equation*} we have \begin{equation*} d_{H^{p}(\mathbb{T}^{d})}(u(t,\cdot), \mathcal{T})<2 \delta, \quad \forall~ |t|< \delta^{-M}. \end{equation*} In other words, the KAM tori for the nonlinear Schr\"odinger equation \eqref{1} are stable over a long time. \end{thm} Theorem \ref{t} consists of two results (Theorem \ref{t1} and Theorem \ref{t2}) after writing the NLS equation as an infinitely dimensional Hamiltonian system. By excluding some parameters, we establish the KAM theorem (see Theorem \ref{t1}) to guarantee the existence and linear stability of the KAM tori, which have already been obtained in \cite{EK}.
However, to study the long time stability of the KAM tori, we modify the proof in \cite{EK} by taking Kolmogorov's iterative scheme such that the transformed Hamiltonian after the KAM iteration is still defined on an open domain. By further parameter exclusion, we show the long time stability of the majority of the obtained KAM tori (see Theorem \ref{t2}) by establishing a partial normal form of the transformed Hamiltonian. The proof of Theorem \ref{t} is given at the end of section \ref{sect 2}. We clarify the main ideas in the proof of the theorem. \begin{enumerate} \item[i)] Write \eqref{1} as an infinitely dimensional Hamiltonian system \begin{equation*} H=\sum_{a\in \mathcal{A}}\omega_{a}r_{a}+\frac{1}{2}\sum_{a\in\mathcal{L}}\Omega_{a}(\xi_{a}^{2}+\eta_{a}^{2})+f \end{equation*} and split \begin{equation*} f=f^{\textrm{low}}+f^{\textrm{high}}. \end{equation*} The KAM approach developed in \cite{EK} aims at eliminating $f^{\textrm{low}}$, but leaving some resonant terms. In this way, the transformed Hamiltonian $H_{\infty}=H\circ \psi_{\infty} $ takes the form of \begin{equation*} H_{\infty}=\langle \omega',r\rangle+\frac{1}{2}\langle \zeta,(\Omega+Q)\zeta\rangle+f^{\textrm{high}}_{\infty}. \end{equation*} One sees that $r=0, \zeta=0$ is the KAM torus for $H_{\infty}$. Recall that the domain $D(\mu_{j},\sigma_{j})$ of the symplectic transformation $\psi_{j}$ in \cite{EK} degenerates into a singleton since $\mu_{j}\rightarrow 0, \sigma_{j}\rightarrow 0$ as $j\rightarrow \infty$. Since $\psi_{j}$ is quadratic in $r$ and $\zeta$, one can surely extend the domain of $\psi_{j}$ to the initial domain $D(\mu_{0},\sigma_{0})$. In addition, one can take $D(\mu_{0},\sigma_{0})$ as the domain of the vector field for the \emph{linearized equation} of the Hamiltonian $H_{\infty}$, based on which one is able to study the Lyapunov stability of the equilibrium on $D(\mu_{0}, \sigma_{0})$.
Along this line, Eliasson-Kuksin \cite{EK} developed a powerful KAM approach for the NLS equation with $d>1$ to show not only the existence of the KAM tori, but also their linear stability. However, when studying the long time stability of those KAM tori, we have to take the domain of the high order term $f^{\textrm{high}}_{\infty}$ into consideration. Since the domain $\cap_{j=1}^{\infty} D(\mu_{j},\sigma_{j})$ of $\psi_{\infty}$ is a singleton, we can only define $f^{\textrm{high}}_{\infty}$ on $D(0,0)$, on which the KAM tori are indeed constructed for the original NLS equation. We emphasize that the domain of $f^{\textrm{high}}_{\infty}$ usually cannot be extended to $D(\mu_{0},\sigma_{0})$ since $f^{\textrm{high}}_{\infty}$ is not a polynomial function. For that reason, we introduce Kolmogorov's iterative scheme in the framework of \cite{EK} by modifying the homological equations such that $D(\mu_{j}, \sigma_{j})\supset D(\frac{\mu_{0}}{2}, \frac{\sigma_{0}}{2})$ for all $j$. Then we can define $f^{\textrm{high}}_{\infty}$ on the open set $D(\frac{\mu_{0}}{2}, \frac{\sigma_{0}}{2})$ to carry out normal form computations. \item[ii)] To establish the long time stability of the KAM tori so constructed, we shall take a symplectic transformation of $f^{\textrm{high}}_{\infty}$ to obtain a suitable Birkhoff normal form. We will not put the frequency shift produced in the symplectic transformation into the homological equations, and we will finally get \begin{equation*} f^{\textrm{high; new}}_{\infty}=O(\|z\|^{M+1}). \end{equation*} It then follows that the KAM torus ($r=0, z=0$) is stable in a long time interval of length $\delta^{-M}$. In this process, the tame property for the space-multidimensional NLS equation can be preserved during the KAM iteration. Moreover, we will take advantage of the momentum conservation.
The corresponding Hamiltonian $f$ consists of monomials \begin{equation*} e^{\mathrm{i}\langle k,\varphi\rangle}\prod_{a\in \mathcal{A}}r_{a}^{n_{a}}\prod_{a\in\mathcal{L}}u_{a}^{l_{a}}v_{a}^{m_{a}} \end{equation*} satisfying \begin{equation*} -\sum_{a\in \mathcal{A}}k_{a}a+\sum_{a\in\mathcal{L}}(l_{a}-m_{a})a=0. \end{equation*} We need to verify the momentum conservation in the KAM iteration. The persistence under the Poisson bracket can be checked directly. Since the homological equations are of constant coefficients, the persistence under solving the homological equations can also be checked directly. By the momentum conservation, we can deal with the frequency shift to establish the long time stability. \end{enumerate} We end this section with several remarks. \begin{rem} In this paper, we benefit greatly from the momentum conservation, which comes from the $x$-independent nonlinearity $F$. See also \emph{\cite{GY,PP1,PP2}}. For the general case, there are extra difficulties in dealing with a block-diagonal shift of frequency. \end{rem} \begin{rem} As mentioned before, the existence of KAM tori $($quasi-periodic solutions$)$ for the space-multidimensional NLW equation can be obtained by the CWB method \emph{\cite{B2}}. However, on the one hand, a counterpart of the KAM approach for the NLW equation like Eliasson-Kuksin \emph{\cite{EK}} $($on the NLS equation$)$ is still not available. See \emph{\cite{EGK}}. On the other hand, the CWB method does not provide a normal form of the Hamiltonian in the neighborhood of the KAM torus. As a result, the linear stability of KAM tori (quasi-periodic solutions) for the NLW equation with $d>1$ is not clear, let alone the long time stability. \end{rem} The paper is organized as follows. In section \ref{sect 2}, we introduce some notations as preliminaries and present our main results. In section \ref{sect 3}, we formulate and solve the homological equation.
In section \ref{sect 4}, we prove the KAM theorem to show the existence of the KAM tori for NLS equation. In section \ref{sect 5}, we construct a partial normal form to show the long time stability of the obtained KAM tori. \section{Main results}\label{sect 2} In this section, we present the main results of the infinitely dimensional Hamiltonian system. To begin with, we introduce some notations as the preliminary. \subsection{Preliminary} In this part, we collect some notations and definitions, which are frequently used throughout the paper. In subsection \ref{sec 2.1}, we write NLS equation as an infinitely dimensional Hamiltonian system. In subsection \ref{sec 2.2}, we introduce the tame property of the Hamiltonian vector field. In subsection \ref{sec 2.3}, we introduce the T\"{o}plitz-Lipschitz property. Finally, in subsection \ref{sec 2.4}, we introduce the normal form matrix. \subsubsection{Hamiltonian formulation of NLS equation}\label{sec 2.1} In order to prove Theorem \ref{t}, we write the nonlinear Schr\"odinger equation (\ref{1}) as an infinitely dimensional Hamiltonian system. We keep the notations consistent with those in \cite{EK}. Write \begin{equation*} u(x)=\sum_{a\in\mathbb{Z}^{d}}u_{a}e^{\mathrm{i}\langle a,x\rangle}, \ \overline{u(x)}=\sum_{a\in\mathbb{Z}^{d}}v_{a}e^{\mathrm{i}\langle -a,x\rangle}, \end{equation*} and let \begin{equation*} \zeta_{a}=\left( \begin{array}{c} \xi_{a} \\ \eta_{a} \\ \end{array} \right)=\frac{1}{\sqrt{2}}\left( \begin{array}{c} u_{a}+v_{a} \\ -\mathrm{i}(u_{a}-v_{a}) \\ \end{array} \right). \end{equation*} Then the nonlinear Schr\"odinger equation (\ref{1}) becomes a real Hamiltonian system with the symplectic structure $d\xi\wedge d\eta$ and the Hamiltonian \begin{equation*} \frac{1}{2}\sum_{a\in\mathbb{Z}^{d}}(|a|^{2}+\hat{V}(a))(\xi_{a}^{2}+\eta_{a}^{2})+\varepsilon \int_{\mathbb{T}^{d}}F(|u(x)|^{2})dx. 
\end{equation*} Let $\mathcal{A}$ be a finite subset of $\mathbb{Z}^{d}$ and $\mathcal{L}=\mathbb{Z}^{d}\setminus\mathcal{A}$. Introduce action-angle variables $(\varphi_{a},r_{a})$, $a\in \mathcal{A}$, \begin{equation*} \xi_{a}=\sqrt{2(r_{a}+q_{a})}\cos\varphi_{a}, \eta_{a}=\sqrt{2(r_{a}+q_{a})}\sin\varphi_{a}, \ q_{a}>0. \end{equation*} Let \begin{equation*} \omega_{a}=|a|^{2}+\hat{V}(a), a\in \mathcal{A} \quad\textrm{and}\quad \Omega_{a}=|a|^{2}+\hat{V}(a), a\in \mathcal{L}. \end{equation*} We have the Hamiltonian \begin{equation*} h+f=\sum_{a\in \mathcal{A}}\omega_{a}r_{a}+\frac{1}{2}\sum_{a\in\mathcal{L}}\Omega_{a}(\xi_{a}^{2}+\eta_{a}^{2})+\varepsilon \int_{\mathbb{T}^{d}}F(|u(x)|^{2})dx. \end{equation*} Assume $f$ is real analytic on \begin{equation*} D(\rho,\mu,\sigma)=\{(\varphi,r,\zeta)\in(\mathbb{C}/2\pi\mathbb{Z})^{\mathcal{A}}\times\mathbb{C}^{\mathcal{A}}\times l^{2}_{p}:|\Im\varphi|\leq\rho,|r|\leq\mu,\|\zeta\|_p\leq\sigma\}, \end{equation*} where \begin{equation*} \|\zeta\|^{2}_{p}=\sum_{a\in\mathcal{L}}(|\xi^{2}_{a}|+|\eta^{2}_{a}|)\langle a\rangle^{2p}, \ \langle a\rangle=\max(|a|,1). \end{equation*} \subsubsection{The $p$-tame norm of the Hamiltonian vector field}\label{sec 2.2} In this paper, $\|\cdot\|$ is an operator norm or $l^{2}$ norm. $|\cdot|$ will in general denote a sup norm. For $a \in \mathbb{Z}^{d}$, we use $|a|$ for the $l^{2}$ norm. Let $\mathcal{A}$ be a finite subset of $\mathbb{Z}^{d}$ and $\mathcal{L}=\mathbb{Z}^{d}\setminus\mathcal{A}$. Denote $\langle \zeta,\zeta'\rangle=\sum(\xi_{a}\xi'_{a}+\eta_{a}\eta'_{a})$ and $J=\left( \begin{array}{cc} 0 & 1 \\ -1 & 0 \\ \end{array} \right) $. 
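For concreteness, the weighted norm $\|\zeta\|_{p}$ introduced above can be evaluated on a finite truncation of the index set. This is a toy $d=1$ sketch; the coefficient decay rate is an illustrative assumption, not part of the text.

```python
# Toy evaluation of the weighted norm from the text,
#   ||zeta||_p**2 = sum_a (|xi_a|**2 + |eta_a|**2) * <a>**(2p),  <a> = max(|a|, 1),
# on a finite truncation (d = 1, |a| <= N) of the index set.  The decay rate
# of the coefficients below is an illustrative assumption.

def weighted_norm(xi, eta, p):
    """xi, eta: dicts mapping the index a to a coefficient."""
    total = 0.0
    for a in xi:
        bracket = max(abs(a), 1)
        total += (abs(xi[a]) ** 2 + abs(eta[a]) ** 2) * bracket ** (2 * p)
    return total ** 0.5

# Coefficients decaying like <a>**(-(p+1)) keep the p-norm bounded as N grows.
p, N = 2, 50
xi = {a: max(abs(a), 1) ** (-(p + 1)) for a in range(-N, N + 1)}
eta = {a: 0.0 for a in range(-N, N + 1)}
print(round(weighted_norm(xi, eta, p), 3))
```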
For $\gamma\geq 0$, we denote \begin{equation*} l^{2}_{p,\gamma}=\{\zeta=(\xi,\eta)\in\mathbb{C}^{\mathcal{L}}\times\mathbb{C}^{\mathcal{L}}:\|\zeta\|_{p,\gamma}<\infty\}, \end{equation*} where \begin{equation*} \|\zeta\|^{2}_{p,\gamma}=\sum_{a\in\mathcal{L}}(|\xi^{2}_{a}|+|\eta^{2}_{a}|)e^{2\gamma|a|}\langle a\rangle^{2p}, \ \langle a\rangle=\max(|a|,1). \end{equation*} When $\gamma= 0$, we simply write $l^{2}_{p}$ and $\|\zeta\|_p$. The phase space of the Hamiltonian dynamical system is defined by \begin{equation*} \mathcal{P}^{p}=(\mathbb{C}/2\pi\mathbb{Z})^{\mathcal{A}}\times\mathbb{C}^{\mathcal{A}}\times l^{2}_{p}. \end{equation*} Let $U\subset\mathbb{R}^{\mathbb{Z}^{d}}$ be a parameter set with positive measure (in the sense of Gauss or Kolmogorov). We define $p$-tame norm as in \cite{CLY}. \begin{defn} Let \begin{equation*} D(\rho)=\{\varphi\in(\mathbb{C}/2\pi\mathbb{Z})^{\mathcal{A}}:|\Im\varphi|\leq\rho\}, \end{equation*} and $f:D(\rho)\times U\rightarrow \mathbb{C}$ be analytic in $\varphi\in D(\rho)$ and $C^1$ $($in the sense of Whitney$)$ in $w\in U$ with \begin{equation*} f(\varphi;w)=\sum_{k\in\mathbb{Z}^{\mathcal{A}}}\hat{f}(k;w)e^{\mathrm{i}\langle k,\varphi\rangle}. \end{equation*} Define the norm \begin{equation*} \|f\|_{D(\rho)\times U}=\sup_{w\in U,a \in \mathbb{Z}^{d}}\sum_{k\in\mathbb{Z}^{\mathcal{A}}}\left(|\hat{f}(k;w)|+|\partial_{w_{a}}\hat{f}(k;w)|\right)e^{|k|\rho}. \end{equation*} \end{defn} \begin{defn} Let \begin{equation*} D(\rho,\mu)=\{(\varphi,r)\in(\mathbb{C}/2\pi\mathbb{Z})^{\mathcal{A}}\times\mathbb{C}^{\mathcal{A}}:|\Im\varphi|\leq\rho,|r|\leq\mu\}, \end{equation*} and $f:D(\rho,\mu)\times U\rightarrow \mathbb{C}$ be analytic in $(\varphi,r)\in D(\rho,\mu)$ and $C^1$ in $w\in U$ with \begin{equation*} f(\varphi,r;w)=\sum_{\alpha\in\mathbb{N}^{\mathcal{A}}}f^{\alpha}(\varphi;w)r^{\alpha}. 
\end{equation*} Define the norm \begin{equation*} \|f\|_{D(\rho,\mu)\times U}=\sum_{\alpha\in\mathbb{N}^{\mathcal{A}}}\|f^{\alpha}(\varphi;w)\|_{D(\rho)\times U}\mu^{|\alpha|}. \end{equation*} \end{defn} \begin{defn} Let \begin{equation*} D(\rho,\mu,\sigma)=\{(\varphi,r,\zeta)\in(\mathbb{C}/2\pi\mathbb{Z})^{\mathcal{A}}\times\mathbb{C}^{\mathcal{A}}\times l^{2}_{p}:|\Im\varphi|\leq\rho,|r|\leq\mu,\|\zeta\|_p\leq\sigma\}, \end{equation*} and $f:D(\rho,\mu,\sigma)\times U\rightarrow \mathbb{C}$ be analytic in $(\varphi,r,\zeta)\in D(\rho,\mu,\sigma)$ and $C^1$ in $w\in U$ with \begin{equation*} f(\varphi,r,\zeta;w)=\sum_{\alpha\in\mathbb{N}^{\mathcal{A}},\beta\in\mathbb{N}^{\bar{\mathcal{L}}}}f^{\alpha\beta}(\varphi;w)r^{\alpha}\zeta^{\beta}, \end{equation*} where $\bar{\mathcal{L}}=\mathcal{L}_{-1}\sqcup\mathcal{L},\mathcal{L}_{-1}=\mathcal{L}.$ For $a\in\mathcal{L}_{-1}, \zeta_{a}=\xi_{a}$, and for $a\in\mathcal{L}, \zeta_{a}=\eta_{a}$. Define the modulus \begin{equation*} \lfloor f \rceil_{D(\rho,\mu)\times U}(\zeta)=\sum_{\beta\in\mathbb{N}^{\bar{\mathcal{L}}}}\|f^{\beta}(\varphi,r;w)\|_{D(\rho,\mu)\times U}\zeta^{\beta}, \end{equation*} where \begin{equation*} f^{\beta}(\varphi,r;w)=\sum_{\alpha\in\mathbb{N}^{\mathcal{A}}}f^{\alpha\beta}(\varphi;w)r^{\alpha}. \end{equation*} \end{defn} Every homogeneous polynomial $f(\zeta)$ of degree $h>0$ is associated with a symmetric $h$-linear form $\tilde{f}(\zeta^{(1)},\ldots,\zeta^{(h)})$ such that $\tilde{f}(\zeta,\ldots,\zeta)=f(\zeta)$. For a monomial \begin{equation*} f(\zeta)=f^{\beta}\zeta^{\beta}=f^{\beta}\zeta_{j_{1}}\cdots\zeta_{j_{h}}, \end{equation*} define \begin{equation*} \tilde{f}(\zeta^{(1)},\ldots,\zeta^{(h)})=\widetilde{f^{\beta}\zeta^{\beta}}=\frac{1}{h!}\sum_{\tau_{h}}f^{\beta}\zeta_{j_{1}}^{(\tau_{h}(1))}\cdots\zeta_{j_{h}}^{(\tau_{h}(h))}, \end{equation*} where $\tau_{h}$ runs over all permutations of $\{1,\ldots,h\}$.
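For concreteness, in the simplest nontrivial case $h=2$ the symmetrization of a monomial reads
%% illustration: symmetrization of a degree-two monomial
\begin{equation*}
\tilde{f}(\zeta^{(1)},\zeta^{(2)})=\widetilde{f^{\beta}\zeta_{j_{1}}\zeta_{j_{2}}}
=\frac{1}{2}f^{\beta}\left(\zeta_{j_{1}}^{(1)}\zeta_{j_{2}}^{(2)}+\zeta_{j_{1}}^{(2)}\zeta_{j_{2}}^{(1)}\right),
\end{equation*}
which is symmetric in $(\zeta^{(1)},\zeta^{(2)})$ and recovers $\tilde{f}(\zeta,\zeta)=f(\zeta)$.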
For a homogeneous polynomial \begin{equation*} f(\zeta)=\sum_{|\beta|=h}f^{\beta}\zeta^{\beta}, \end{equation*} define \begin{equation*} \tilde{f}(\zeta^{(1)},\ldots,\zeta^{(h)})=\sum_{|\beta|=h}\widetilde{f^{\beta}\zeta^{\beta}}. \end{equation*} Now we can define $p$-tame norm of a Hamiltonian vector field. We first consider a Hamiltonian \begin{equation*} f(\varphi,r,\zeta;w)=f_{h}=\sum_{\alpha\in\mathbb{N}^{\mathcal{A}},\beta\in\mathbb{N}^{\bar{\mathcal{L}}},|\beta|=h}f_{h}^{\alpha\beta}(\varphi;w)r^{\alpha}\zeta^{\beta}. \end{equation*} Let $f_{\zeta}=(f_{\eta},-f_{\xi})$, and the Hamiltonian vector field $X_f$ is $(f_{r},-f_{\varphi},f_{\zeta})$. For $h\geq 1$, denote \begin{equation*} \|(\zeta^h)\|_{p,1}=\frac{1}{h}\sum_{j=1}^{h}\|\zeta^{(1)}\|_{1}\cdots\|\zeta^{(j-1)}\|_{1}\|\zeta^{(j)}\|_{p}\|\zeta^{(j+1)}\|_{1}\cdots\|\zeta^{(h)}\|_{1}. \end{equation*} \begin{defn} Let \begin{equation*} |||f_{\zeta}|||^{T}_{p,D(\rho,\mu)\times U}=\left\{ \begin{array}{cl} \sup_{0\neq\zeta^{(j)}\in l^2_p, 1\leq j\leq h-1}\frac{\|\widetilde{\lfloor f_{\zeta} \rceil}_{D(\rho,\mu)\times U} (\zeta^{(1)},\ldots,\zeta^{(h-1)})\|_{p}}{\|(\zeta^{h-1})\|_{p,1}}, & h\geq 2\\ \|\widetilde{\lfloor f_{\zeta} \rceil}_{D(\rho,\mu)\times U}\|_{p}, & h=0,1. \end{array} \right. \end{equation*} Define the $p$-tame norm of $f_{\zeta}$ by \begin{equation*} |||f_{\zeta}|||^{T}_{p,D(\rho,\mu,\sigma)\times U}=\max(|||f_{\zeta}|||^{T}_{p,D(\rho,\mu)\times U},|||f_{\zeta}|||^{T}_{1,D(\rho,\mu)\times U})\sigma^{h-1}. \end{equation*} \end{defn} \begin{defn} Let \begin{equation*} |||f_{r}|||_{D(\rho,\mu)\times U}= \left\{ \begin{array}{cl} \sup_{0\neq\zeta^{(j)}\in l^2_1, 1\leq j\leq h}\frac{|\widetilde{\lfloor f_{r} \rceil}_{D(\rho,\mu)\times U} (\zeta^{(1)},\ldots,\zeta^{(h)})|}{\|(\zeta^{h})\|_{1,1}}, & h\geq 1,\\ |\widetilde{\lfloor f_{r} \rceil}_{D(\rho,\mu)\times U}|, & h=0. \end{array} \right. 
\end{equation*} Define the norm of $f_{r}$ by \begin{equation*} |||f_{r}|||_{D(\rho,\mu,\sigma)\times U}=|||f_{r}|||_{D(\rho,\mu)\times U}\sigma^{h}. \end{equation*} The norm of $f_{\varphi}$ is defined as that of $f_{r}$. \end{defn} \begin{defn} Define the $p$-tame norm of the Hamiltonian vector field $X_f$ by \begin{equation*} |||X_f|||^{T}_{p,D(\rho,\mu,\sigma)\times U}=|||f_{r}|||_{D(\rho,\mu,\sigma)\times U}+\frac{1}{\mu}|||f_{\varphi}|||_{D(\rho,\mu,\sigma)\times U}+\frac{1}{\sigma}|||f_{\zeta}|||^{T}_{p,D(\rho,\mu,\sigma)\times U}. \end{equation*} \end{defn} \begin{defn} For a Hamiltonian \begin{equation*} f(\varphi,r,\zeta;w)=\sum_{h\geq 0}f_{h}, \quad f_{h}=\sum_{\alpha\in\mathbb{N}^{\mathcal{A}},\beta\in\mathbb{N}^{\bar{\mathcal{L}}},|\beta|=h}f_{h}^{\alpha\beta}(\varphi;w)r^{\alpha}\zeta^{\beta}, \end{equation*} define the $p$-tame norm of the Hamiltonian vector field $X_f$ by \begin{equation*} |||X_f|||^{T}_{p,D(\rho,\mu,\sigma)\times U}=\sum_{h\geq 0}|||X_{f_{h}}|||^{T}_{p,D(\rho,\mu,\sigma)\times U}. \end{equation*} \end{defn} \begin{rem} The $p$-tame norm can also be defined in complex coordinates \begin{equation*} z=\left( \begin{array}{c} u \\ v \\ \end{array} \right) =C^{-1}\left( \begin{array}{c} \xi \\ \eta \\ \end{array} \right), C=\frac{1}{\sqrt{2}}\left( \begin{array}{cc} 1 & 1 \\ -\mathrm{i} & \mathrm{i} \\ \end{array} \right). \end{equation*} \end{rem} Following the proof of Theorem 3.1 in \cite{CLY}, we have the following proposition. \begin{prop}\label{p} If $0<\tau<\rho, 0<\tau'<\frac{\sigma}{2}$, then \begin{equation*} |||X_{\{f,g\}}|||^{T}_{p,D(\rho-\tau,(\sigma-\tau')^{2},\sigma-\tau')\times U}\leq C\max\left(\frac{1}{\tau},\frac{\sigma}{\tau'}\right)|||X_{f}|||^{T}_{p,D(\rho,\sigma^{2},\sigma)\times U}|||X_{g}|||^{T}_{p,D(\rho,\sigma^{2},\sigma)\times U}, \end{equation*} where $C>0$ is a constant depending on $\# \mathcal{A}$. 
\end{prop} We define the weighted norm of the Hamiltonian vector field $X_f$ by \begin{equation*} |||X_f|||_{\mathcal{P}^{p},D(\rho,\mu,\sigma)\times U}=\sup_{(\varphi,r,\zeta;w)\in D(\rho,\mu,\sigma)\times U}\|X_{f}\|_{\mathcal{P}^{p},D(\rho,\mu,\sigma)}, \end{equation*} where \begin{equation*} \|X_{f}\|_{\mathcal{P}^{p},D(\rho,\mu,\sigma)}=|f_{r}|+\frac{1}{\mu}|f_{\varphi}|+\frac{1}{\sigma}\|f_{\zeta}\|_{p}. \end{equation*} Following the proof of Theorem 3.5 in \cite{CLY}, we have \begin{equation*} |||X_f|||_{\mathcal{P}^{p},D(\rho,\mu,\sigma)\times U}\leq|||X_f|||^{T}_{p,D(\rho,\mu,\sigma)\times U}. \end{equation*} \subsubsection{T\"{o}plitz-Lipschitz property}\label{sec 2.3} Recall the definition of T\"{o}plitz-Lipschitz matrices in \cite{EK}. Let $gl(2,\mathbb{C})$ be the space of all complex $2\times2$-matrices. For $A=\left( \begin{array}{cc} a & b \\ c & d \\ \end{array} \right) \in gl(2,\mathbb{C})$, denote $\pi A=\frac{1}{2}\left( \begin{array}{cc} a+d & b-c \\ c-b & a+d \\ \end{array} \right) $ and $[A]=\left( \begin{array}{cc} |a| & |b| \\ |c| & |d| \\ \end{array} \right) $. Now consider an infinite-dimensional $gl(2,\mathbb{C})$-valued matrix \begin{equation*} A:\mathcal{L}\times\mathcal{L}\rightarrow gl(2,\mathbb{C}), \ (a,b)\mapsto A_{a}^{b}. \end{equation*} For any $\mathcal{D}\subset\mathcal{L}\times\mathcal{L}$, define \begin{equation*} |A|_\mathcal{D}=\sup_{(a,b)\in\mathcal{D}}\|A_{a}^{b}\|, \end{equation*} where $\|\cdot\|$ is the operator norm. Define $(\pi A)_{a}^{b}=\pi A_{a}^{b}$ and $(\mathcal{E}_{\gamma}^{\pm}A)_{a}^{b}=[A_{a}^{b}]e^{\gamma|a\mp b|}$. Define the norm \begin{equation*} |A|_\gamma=\max(|\mathcal{E}_{\gamma}^{+}\pi A|_{\mathcal{L}\times\mathcal{L}},|\mathcal{E}_{\gamma}^{-}(1-\pi) A|_{\mathcal{L}\times\mathcal{L}}). 
\end{equation*} Define \begin{equation*} \mathcal{T}_{\Delta}^{\pm}A=A\mid_{\{(a,b)\in\mathcal{L}\times\mathcal{L}:|a\mp b|\leq\Delta\}}, \ \mathcal{T}_{\Delta}A=\mathcal{T}_{\Delta}^{+}\pi A+\mathcal{T}_{\Delta}^{-}(1-\pi) A. \end{equation*} A matrix $A:\mathcal{L}\times\mathcal{L}\rightarrow gl(2,\mathbb{C})$ is T\"{o}plitz at $\infty$ if, for all $a,b,c$, the two limits \begin{equation*} A_{a}^{b}(\pm,c)=\lim_{t\rightarrow +\infty}A_{a+tc}^{b\pm tc} \end{equation*} exist. For $c\neq 0$, define $(\mathcal{M}_{c}A)_{a}^{b}=\left(\max(\frac{|a|}{|c|},\frac{|b|}{|c|})+1\right)[A_{a}^{b}]$. For $\Lambda\geq 0$, define the Lipschitz domain $D_{\Lambda}^{+}(c)\subset\mathcal{L}\times\mathcal{L}$ to be the set of all $(a,b)$ for which there exist $a',b'\in \mathbb{Z}^{d}$ and $t\geq 0$ with $a=a'+tc$, $b=b'+tc$, such that \begin{equation*} |a|\geq\Lambda(|a'|+|c|)|c|, \ |b|\geq\Lambda(|b'|+|c|)|c|, \ \frac{|a|}{|c|},\frac{|b|}{|c|}\geq 2\Lambda^{2}. \end{equation*} Define $(a,b)\in D_{\Lambda}^{-}(c)$ if and only if $(a,-b)\in D_{\Lambda}^{+}(c)$. Define the Lipschitz constants \begin{equation*} \textrm{Lip}_{\Lambda,\gamma}^{\pm}A=\sup_{c} |\mathcal{E}_{\gamma}^{\pm}\mathcal{M}_{c}(A-A(\pm,c))|_{D_{\Lambda}^{\pm}(c)}, \end{equation*} and the Lipschitz norm \begin{equation*} \langle A\rangle_{\Lambda,\gamma}=\max(\textrm{Lip}_{\Lambda,\gamma}^{+}\pi A,\textrm{Lip}_{\Lambda,\gamma}^{-}(1-\pi) A)+|A|_\gamma. \end{equation*} For $d=2$, the matrix $A$ is T\"{o}plitz-Lipschitz if it is T\"{o}plitz at $\infty$ and $\langle A\rangle_{\Lambda,\gamma}<\infty$ for some $\Lambda,\gamma$. For $d>2$, T\"{o}plitz-Lipschitz matrices are defined inductively (see Section 2.4 in \cite{EK}). \begin{defn} Let \begin{equation*} D^{\gamma}(\sigma)=\{\zeta\in l^{2}_{p,\gamma}:\|\zeta\|_{p,\gamma}\leq\sigma\}, \end{equation*} and $f:D^{0}(\sigma)\rightarrow \mathbb{C}$ be real analytic.
We say that $f$ is T\"{o}plitz at $\infty$ if $\partial^{2}_{\zeta}f(\zeta)$ is T\"{o}plitz at $\infty$ for all $\zeta\in D^{0}(\sigma)$. Define the norm $[f]_{\Lambda,\gamma,\sigma}$ to be the smallest $C$ such that \begin{equation*} \begin{aligned} & |f(\zeta)|\leq C,~ \forall~ \zeta\in D^{0}(\sigma), \\ & \|\partial_{\zeta}f(\zeta)\|_{p,\gamma'}\leq\frac{C}{\sigma}, ~\langle\partial^{2}_{\zeta}f(\zeta)\rangle_{\Lambda,\gamma'} \leq\frac{C}{\sigma^{2}},~ \forall~\zeta\in D^{\gamma'}(\sigma),~\forall~\gamma'\leq\gamma. \end{aligned} \end{equation*} \end{defn} \begin{defn} Let $A(w):\mathcal{L}\times\mathcal{L}\rightarrow gl(2,\mathbb{C})$ be $C^1$ in $w\in U$. Define \begin{equation*} |A|_{\gamma;U}=\sup_{w\in U}(|A(w)|_{\gamma},|\partial_{w}A(w)|_{\gamma}). \end{equation*} If $A(w),\partial_{w}A(w)$ are T\"{o}plitz at $\infty$ for all $w\in U$, define \begin{equation*} \langle A\rangle_{\Lambda,\gamma;U}=\sup_{w\in U}(\langle A(w)\rangle_{\Lambda,\gamma},\langle \partial_{w}A(w)\rangle_{\Lambda,\gamma}). \end{equation*} When $\gamma=0$, we simply write $|A|_{U},\langle A\rangle_{\Lambda;U}$. \end{defn} \subsubsection{The normal form matrix}\label{sec 2.4} For $\Delta\geq 0$, define an equivalence relation on $\mathcal{L}$ generated by the pre-equivalence relation \begin{equation*} a\sim b \Leftrightarrow |a|=|b|,|a-b|\leq\Delta. \end{equation*} Let $[a]_\Delta$ be the equivalence class (block) of $a$ and $\mathcal{E}_\Delta$ be the set of equivalence classes. Let $d_\Delta$ be the supremum of all block diameters; then, by Proposition 4.1 in \cite{EK}, $d_\Delta\preceq \Delta^{\frac{(d+1)!}{2}}$. A matrix $A:\mathcal{L}\times\mathcal{L}\rightarrow gl(2,\mathbb{C})$ is on normal form, denoted $\mathcal{NF}_{\Delta}$, if $A$ is real-valued, symmetric, $\pi A=A$ and block-diagonal over $\mathcal{E}_\Delta$, i.e., $ A_{a}^{b}=0, \forall~[a]_\Delta\neq [b]_\Delta$.
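As a simple illustration of this equivalence relation (in dimension $d=1$, lower than the setting of this paper), the condition $|a|=|b|$ forces $b=\pm a$, so the blocks can be written down explicitly:
%% illustration only: blocks of the $\Delta$-equivalence relation for d=1
\begin{equation*}
[a]_{\Delta}=
\begin{cases}
\{a,-a\}, & 2|a|\leq\Delta,\\
\{a\}, & 2|a|>\Delta.
\end{cases}
\end{equation*}
In this case every block has diameter at most $\Delta$, consistent with the general bound $d_{\Delta}\preceq \Delta^{\frac{(d+1)!}{2}}$, which reduces to $d_{\Delta}\preceq\Delta$ when $d=1$.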
A matrix $Q:\mathcal{L}\times\mathcal{L}\rightarrow \mathbb{C}$ is on normal form, denoted $\mathcal{NF}_{\Delta}$, if $Q$ is Hermitian and block-diagonal over $\mathcal{E}_\Delta$. For a normal form matrix $A$, \begin{equation*} \frac{1}{2}\langle\zeta,A\zeta\rangle=\frac{1}{2}\langle\xi,A_{1}\xi\rangle+\langle\xi,A_{2}\eta\rangle+\frac{1}{2}\langle\eta,A_{1}\eta\rangle, \end{equation*} where $A_{1}+\mathrm{i}A_{2}$ is a Hermitian matrix. Let \begin{equation*} z=\left( \begin{array}{c} u \\ v \\ \end{array} \right) =C^{-1}\left( \begin{array}{c} \xi \\ \eta \\ \end{array} \right), C=\frac{1}{\sqrt{2}}\left( \begin{array}{cc} 1 & 1 \\ -\mathrm{i} & \mathrm{i} \\ \end{array} \right), \end{equation*} and define $C^{T}AC:\mathcal{L}\times\mathcal{L}\rightarrow gl(2,\mathbb{C})$ by $(C^{T}AC)_{a}^{b}=C^{T}A_{a}^{b}C$. If $A$ is on normal form, then \begin{equation*} \frac{1}{2}\langle z,C^{T}ACz\rangle=\langle u,Qv\rangle, \end{equation*} where $Q=A_{1}+\mathrm{i}A_{2}$ is the normal form matrix associated to $A$. \subsection{Main results} Let \begin{equation*} h(r,\zeta;w)=\langle\omega(w),r\rangle+\frac{1}{2}\langle\zeta,(\Omega(w)+H(w))\zeta\rangle, \end{equation*} where $\Omega(w)$ is a real diagonal matrix with diagonal elements $\Omega_{a}(w)I$, and $H(w),\partial_{w}H(w)$ are T\"{o}plitz at $\infty$ and $\mathcal{NF}_{\Delta}$ for all $w\in U$. Assume \begin{equation}\label{as1} \partial_{w_{a}}\omega_{b}(w)=\delta_{ab}, \ a\in \mathbb{Z}^{d}, b\in \mathcal{A}, w\in U, \end{equation} \begin{equation}\label{as2} \partial_{w_{a}}\Omega_{b}(w)=\delta_{ab}, \ a\in \mathbb{Z}^{d}, b\in \mathcal{L}, w\in U.
\end{equation} Assume there exist constants $c_1,c_2,c_3,c_4,c_5>0$ such that \begin{equation}\label{as3} |\Omega_{a}(w)-|a|^{2}|\leq c_1e^{-c_2|a|}, \ a\in \mathcal{L}, w\in U, \end{equation} \begin{equation}\label{as4} |\Omega_{a}(w)|\geq c_3, \ a\in \mathcal{L}, w\in U, \end{equation} \begin{equation}\label{as5} |\Omega_{a}(w)+\Omega_{b}(w)|\geq c_3, \ a,b\in \mathcal{L}, w\in U, \end{equation} \begin{equation}\label{as6} |\Omega_{a}(w)-\Omega_{b}(w)|\geq c_3, \ |a|\neq |b|, \ a,b\in \mathcal{L}, w\in U, \end{equation} \begin{equation}\label{as7} \|H(w)\|\leq \frac{c_3}{4}, \ w\in U, \end{equation} \begin{equation}\label{as8} \|\partial_{w}H(w)\|\leq c_4, \ w\in U, \end{equation} \begin{equation}\label{as9} \langle H \rangle_{\Lambda;U}\leq c_5. \end{equation} Let \begin{equation*} D^{\gamma}(\rho,\mu,\sigma)=\{(\varphi,r,\zeta)\in(\mathbb{C}/2\pi\mathbb{Z})^{\mathcal{A}}\times\mathbb{C}^{\mathcal{A}}\times l^{2}_{p,\gamma}:|\Im\varphi|\leq\rho,|r|\leq\mu,\|\zeta\|_{p,\gamma}\leq\sigma\}, \end{equation*} and $f:D^{\gamma}(\rho,\mu,\sigma)\times U\rightarrow \mathbb{C}$ be real analytic in $(\varphi,r,\zeta)\in D^{\gamma}(\rho,\mu,\sigma)$ and $C^1$ in $w\in U$. Define \begin{equation*} [f]_{\Lambda,\gamma,\sigma;U,\rho,\mu} =\sup_{(\varphi,r)\in D(\rho,\mu)}[f(\varphi,r,\cdot;\cdot)]_{\Lambda,\gamma,\sigma;U}. \end{equation*} \begin{thm}\label{t1} Consider the Hamiltonian $h+f$, where \begin{equation*} h(r,\zeta;w)=\langle\omega(w),r\rangle+\frac{1}{2}\langle\zeta,(\Omega(w)+H(w))\zeta\rangle \end{equation*} satisfies \eqref{as1}-\eqref{as9}, $H(w),\partial_{w}H(w)$ are T\"{o}plitz at $\infty$ and $\mathcal{NF}_{\Delta}$ for all $w\in U$, and $f$ satisfies \begin{equation}\label{k1} |||X_{f}|||^{T}_{p,D(\rho,\mu,\sigma)\times U}\leq\varepsilon, \end{equation} \begin{equation}\label{k2} [f]_{\Lambda,\gamma,\sigma;U,\rho,\mu}\leq\varepsilon. \end{equation} Assume $\gamma,\sigma,\rho,\mu<1$, $\Lambda,\Delta\geq 3$, $\rho=\sigma$, $\mu=\sigma^{2}$, $d_{\Delta}\gamma\leq1$.
Then there is a subset $U_{\infty}\subset U$ such that if \begin{equation*} \varepsilon\preceq\min\left(\gamma,\rho,\frac{1}{\Delta},\frac{1}{\Lambda}\right)^{\exp}, \end{equation*} then for all $w\in U_{\infty}$, there is a real analytic symplectic map \begin{equation*} \Phi:D(\frac{\rho}{2},\frac{\mu}{4},\frac{\sigma}{2})\rightarrow D(\rho,\mu,\sigma) \end{equation*} such that \begin{equation*} (h+f)\circ\Phi=h_{\infty}+f_{\infty}, \end{equation*} where \begin{equation*} h_{\infty}=\langle \omega_{\infty}(w),r\rangle+\frac{1}{2}\langle\zeta,(\Omega(w)+H_{\infty}(w))\zeta\rangle, \end{equation*} \begin{equation*} f_{\infty}=O(|r|^{2}+|r|\|\zeta\|_{p}+\|\zeta\|^{3}_{p}) \end{equation*} with the estimates \begin{equation}\label{k3} |||X_{f_{\infty}}|||^{T}_{p,D(\frac{\rho}{2},\frac{\mu}{2},\frac{\sigma}{2})\times U_{\infty}} \leq c\varepsilon^{\frac{2}{3}}, \end{equation} \begin{equation}\label{k4} |\omega_{\infty}(w)-\omega(w)|+|\partial_{w}(\omega_{\infty}(w)-\omega(w))|\leq c\varepsilon^{\frac{2}{3}}, \end{equation} \begin{equation}\label{k5} \|H_{\infty}(w)-H(w)\|+\|\partial_{w}( H_{\infty}(w)-H(w))\|\leq c\varepsilon^{\frac{2}{3}}, \end{equation} \begin{equation}\label{k6} \mathrm{meas}(U\setminus U_{\infty})\preceq \varepsilon^{\exp'}. \end{equation} The exponents $\exp$, $\exp'$ depend on $d, \# \mathcal{A}, p$, and the constant $c$ depends on $d,\# \mathcal{A}, p, c_1,\cdots,c_5$. \end{thm} The proof of Theorem \ref{t1} is deferred to section \ref{sect 4}. \begin{thm}\label{t2} Given any $1\leq M \leq (4c\varepsilon^{\frac{2}{3}})^{-1}$ and $p\geq80(4d)^{4d}(M+7)^{4}+1$, there exists a set $\tilde{U}\subset U_{\infty}$ such that for any $\delta>0$ and $w\in \tilde{U}$, the KAM tori obtained in Theorem \ref{t1} are stable over a time interval of length $\delta^{-M}$. Moreover, $\mathrm{meas}(U_{\infty}\setminus\tilde{U})\preceq \delta^{\exp}$, where the positive exponent $\exp$ depends on $d, \# \mathcal{A}, p, M$.
\end{thm} The proof of Theorem \ref{t2} is given at the end of section \ref{sect 5}. Based on Theorem \ref{t1} and Theorem \ref{t2}, we are able to prove Theorem \ref{t} on the long time stability of the KAM tori for the NLS equation. \bigskip \noindent\textbf{Proof of Theorem \ref{t}.} Recall the Hamiltonian formulation of the NLS equation \eqref{1} in subsection \ref{sec 2.1}. Let $\omega_{a}=|a|^{2}+\hat{V}(a), a\in \mathcal{A},$ $\Omega_{a}=|a|^{2}+\hat{V}(a), a\in \mathcal{L}$, and take $w_{a}=\hat{V}(a)$. Then we have \begin{equation*} h=\sum_{a\in \mathcal{A}}\omega_{a}r_{a}+\frac{1}{2}\sum_{a\in\mathcal{L}}\Omega_{a}(\xi_{a}^{2}+\eta_{a}^{2}),\quad f=\varepsilon \int_{\mathbb{T}^{d}}F(|u(x)|^{2})dx. \end{equation*} The T\"{o}plitz-Lipschitz property of $f$ follows from Theorem 7.2 in \cite{EK}, and the tame property follows from Section 3.5 in \cite{BG}. By Theorem \ref{t1}, if $\varepsilon>0$ is sufficiently small, then for typical $V$ (in the sense of measure), the $d$-dimensional nonlinear Schr\"odinger equation (\ref{1}) has a quasi-periodic solution. By Theorem \ref{t2}, if $u_{0}(t,x)$ is a quasi-periodic solution of the equation (\ref{1}) with initial value $u_{0}(0,x)$, then for any solution $u(t,x)$ with initial value $u(0,x)$ satisfying \begin{equation*} \|u(0,\cdot)-u_{0}(0,\cdot)\|_{H^{p}(\mathbb{T}^{d})}<\delta, \ \forall~ 0<\delta\ll1, \end{equation*} we have \begin{equation*} \|u(t,\cdot)-u_{0}(t,\cdot)\|_{H^{p}(\mathbb{T}^{d})}<C\delta, \ \forall~ 0<|t|<\delta^{-M}. \end{equation*} In other words, the KAM tori obtained for the nonlinear Schr\"odinger equation (\ref{1}) enjoy long time stability. \qed \section{The homological equations}\label{sect 3} In this section, we formulate and solve the homological equation in the KAM iteration. To obtain an open and uniform domain for the transformed Hamiltonian, we apply Kolmogorov's iterative scheme.
As a result, the homological equation is more complicated than that in \cite{EK}, but it can be solved by the method developed in \cite{EK}. Write \begin{equation*} f(\varphi,r,\zeta;w)=f^{low}+f^{high}, \end{equation*} where \begin{equation*} f^{low}=f^{\varphi}+f^{0}+f^{1}+f^{2} =F^{\varphi}(\varphi;w)+\langle F_{0}(\varphi;w),r\rangle+\langle F_{1}(\varphi;w),\zeta\rangle+\frac{1}{2}\langle F_{2}(\varphi;w)\zeta,\zeta\rangle. \end{equation*} Define \begin{equation*} \mathcal{T}_{\Delta}f^{low}= \sum_{|k|\leq\Delta}\left(\hat{F^{\varphi}}(k;w)+\langle\hat{F_{0}}(k;w),r\rangle+\langle\hat{F_{1}}(k;w),\zeta\rangle +\frac{1}{2}\langle\mathcal{T}_{\Delta}\hat{F_{2}}(k;w)\zeta,\zeta\rangle\right)e^{\mathrm{i}\langle k,\varphi\rangle}. \end{equation*} Let $\Delta'>1$ and $0<\kappa<1$. Assume there exists $U' \subset U$ such that for all $w\in U'$ and $0<|k|\leq\Delta'$, the following properties hold: \begin{itemize} \item Diophantine condition: \begin{equation}\label{sd1} |\langle k,\omega(w)\rangle|\geq \kappa; \end{equation} \item The first Melnikov condition: \begin{equation}\label{sd2} |\langle k,\omega(w)\rangle+\alpha(w)|\geq \kappa, \quad \forall~\alpha(w)\in\mathrm{spec}(((\Omega+H)(w))_{[a]_\Delta}), \ \forall~ [a]_\Delta; \end{equation} \item The second Melnikov condition with the same sign: \begin{equation}\label{sd3} |\langle k,\omega(w)\rangle+\alpha(w)+\beta(w)|\geq \kappa, \quad \forall \left\{ \begin{aligned} \alpha(w)\in\mathrm{spec}(((\Omega+H)(w))_{[a]_\Delta}),\\ \beta(w)\in\mathrm{spec}(((\Omega+H)(w))_{[b]_\Delta}), \end{aligned} \right. ~\forall~ [a]_\Delta, [b]_\Delta; \end{equation} \item The second Melnikov condition with opposite signs: \begin{equation}\label{sd4} |\langle k,\omega(w)\rangle+\alpha(w)-\beta(w)|\geq \kappa, \quad \forall \left\{\begin{aligned} \alpha(w)\in\mathrm{spec}(((\Omega+H)(w))_{[a]_\Delta}),\\ \beta(w)\in\mathrm{spec}(((\Omega+H)(w))_{[b]_\Delta}), \end{aligned}\right.
\end{equation} for all $[a]_\Delta, [b]_\Delta$ with $\mathrm{dist}([a]_\Delta, [b]_\Delta)\leq \Delta'+2d_\Delta.$ \end{itemize} We have the following result on the solution of the homological equation. \begin{prop}\label{p1} Consider the Hamiltonian $h+f$, where \begin{equation*} h(r,\zeta;w)=\langle\omega(w),r\rangle+\frac{1}{2}\langle\zeta,(\Omega(w)+H(w))\zeta\rangle \end{equation*} satisfies \eqref{as1}-\eqref{as9}, $H(w),\partial_{w}H(w)$ are T\"{o}plitz at $\infty$ and $\mathcal{NF}_{\Delta}$ for all $w\in U$, and \begin{equation*} f(\varphi,r,\zeta;w)=f^{low}+f^{high} \end{equation*} satisfies \begin{equation}\label{ho1} |||X_{f^{low}}|||^{T}_{p,D(\rho,\mu,\sigma)\times U}\leq\varepsilon, \ |||X_{f^{high}}|||^{T}_{p,D(\rho,\mu,\sigma)\times U}\leq1, \end{equation} \begin{equation}\label{ho2} [f^{low}]_{\Lambda,\gamma,\sigma;U,\rho,\mu}\leq\varepsilon, \ [f^{high}]_{\Lambda,\gamma,\sigma;U,\rho,\mu}\leq 1. \end{equation} Assume $\gamma,\sigma,\rho,\mu<1$, $\Lambda,\Delta\geq 3$, $\rho=\sigma$, $\mu=\sigma^{2}$, $d_{\Delta}\gamma\leq1$. Let $U' \subset U$ satisfy \eqref{sd1}-\eqref{sd4}.
Then for all $w\in U'$, the homological equation \begin{equation}\label{ho3} \{h,s\}=-\mathcal{T}_{\Delta'}f^{low}-\mathcal{T}_{\Delta'}\{f^{high},s\}^{low}+h_1 \end{equation} has solutions \begin{equation}\label{ho4} s(\varphi,r,\zeta;w)=s^{low}=s^{\varphi}+s^{0}+s^{1}+s^{2}, \end{equation} \begin{equation}\label{ho5} h_{1}(r,\zeta;w)=a_{1}(w)+\langle \chi_{1}(w),r\rangle+\frac{1}{2}\langle\zeta,H_{1}(w)\zeta\rangle \end{equation} with the estimates \begin{equation}\label{ho6} |||X_{s^{\varphi}}|||^{T}_{p,D(\rho-\tau,\sigma^{2},\sigma)\times U'}\preceq \frac{\varepsilon}{\tau\kappa^{2}}, \end{equation} \begin{equation}\label{ho7} |||X_{s^{1}}|||^{T}_{p,D(\rho-3\tau,(\sigma-3\tau)^{2},\sigma-3\tau)\times U'} \preceq\frac{d_{\Delta}^{d}\varepsilon}{\tau^{3}\kappa^{4}}, \end{equation} \begin{equation}\label{ho8} |||X_{s^{0}}|||^{T}_{p,D(\rho-5\tau,(\sigma-5\tau)^{2},\sigma-5\tau)\times U'}\preceq \frac{d_{\Delta}^{d}\varepsilon}{\tau^{5}\kappa^{6}}, \end{equation} \begin{equation}\label{ho9} |||X_{s^{2}}|||^{T}_{p,D(\rho-5\tau,(\sigma-5\tau)^{2},\sigma-5\tau)\times U'}\preceq \frac{d_{\Delta}^{3d}\varepsilon}{\tau^{5}\kappa^{6}}, \end{equation} \begin{equation}\label{ho10} |||X_{s}|||^{T}_{p,D(\rho-5\tau,(\sigma-5\tau)^{2},\sigma-5\tau)\times U'}\preceq\frac{d_{\Delta}^{3d}\varepsilon}{\tau^{5}\kappa^{6}}, \end{equation} \begin{equation}\label{ho11} |||X_{h_{1}}|||^{T}_{p,D(\rho-5\tau,(\sigma-5\tau)^{2},\sigma-5\tau)\times U'} \preceq\frac{d_{\Delta}^{d}\varepsilon}{\tau^{4}\kappa^{4}}, \end{equation} where $0<\tau<\frac{\rho}{100}$, $a\preceq b$ means there exists a constant $c>0$ depending on $d,\# \mathcal{A}, p, c_1,\cdots,c_5$ such that $a\leq cb$. 
The new Hamiltonian \begin{equation}\label{ho12} (h+f)\circ X^{t}_{s}\mid_{t=1}=h+h_1+f_1 \end{equation} with \begin{equation}\label{ho13} \begin{aligned} f_1=&(1-\mathcal{T}_{\Delta'})f^{low}+f^{high}+(1-\mathcal{T}_{\Delta'}) \{f^{high},s\}^{low}+\{f^{high},s\}^{high}\\ &+\int_{0}^{1}(1-t)\{\{h,s\},s\}\circ X^{t}_{s}dt+\int_{0}^{1}\{f^{low},s\}\circ X^{t}_{s}dt\\ &+\int_{0}^{1}(1-t)\{\{f^{high},s\},s\}\circ X^{t}_{s}dt \end{aligned} \end{equation} satisfies \begin{equation}\label{ho14} |||X_{f_{1}^{low}}|||^{T}_{p,D(\rho-8\tau,(\sigma-8\tau)^{2},\sigma-8\tau)\times U'} \preceq\frac{d_{\Delta}^{d}e^{-\frac{1}{2}\tau\Delta'}}{\kappa^{4}\tau^{\# \mathcal{A}+4}}\varepsilon +\frac{(\Delta\Delta')^{\exp}}{\sigma^{6}\kappa^{4}}\frac{e^{-\frac{1}{2}\gamma\Delta'}}{\gamma^{d+p}\tau^{\# \mathcal{A}+3}}\varepsilon+\frac{d_{\Delta}^{6d}\varepsilon^{2}}{\tau^{12}\kappa^{12}}, \end{equation} \begin{equation}\label{ho15} |||X_{f_{1}^{high}}|||^{T}_{p,D(\rho-8\tau,(\sigma-8\tau)^{2},\sigma-8\tau)\times U'}\preceq 1+\frac{d_{\Delta}^{3d}\varepsilon}{\tau^{6}\kappa^{6}}+\frac{d_{\Delta}^{6d}\varepsilon^{2}}{\tau^{12}\kappa^{12}}, \end{equation} where the exponent $\exp$ depends on $d,\# \mathcal{A}, p$. Moreover, the following estimates hold. 
\begin{itemize} \item[i)] The solution $s$ and the remainder $h_{1}$ satisfy \begin{equation}\label{ho16} [s]_{\Lambda'+d_\Delta+2,\gamma,\sigma';U',\rho',\mu'} \preceq\frac{1}{\kappa^{7}}(\Delta\Delta')^{\exp}\frac{1}{\rho-\rho'}\left(\frac{1}{\sigma-\sigma'}\frac{1}{\sigma}+\frac{1}{\rho-\rho'}\frac{1}{\mu-\mu'}\right)\frac{1}{\mu}\varepsilon, \end{equation} \begin{equation}\label{ho17} [h_{1}]_{\Lambda'+d_\Delta+2,\gamma,\sigma';U',\rho',\mu'} \preceq\frac{1}{\kappa^{4}}(\Delta\Delta')^{\exp}\frac{1}{\rho-\rho'}\left(\frac{1}{\sigma-\sigma'}\frac{1}{\sigma}+\frac{1}{\rho-\rho'}\frac{1}{\mu-\mu'}\right)\frac{1}{\mu}\varepsilon, \end{equation} where $\rho'<\rho$, $\mu'<\mu$, $\sigma'<\sigma$, $\Lambda'\geq \mathrm{cte}.\max(\Lambda,d^{2}_{\Delta},d^{2}_{\Delta'})$, and the constant $\mathrm{cte}.$ is the one in \emph{\cite[Proposition 6.7]{EK}}. \item[ii)] We have the measure estimate \begin{equation}\label{ho18} \mathrm{meas}(U\backslash U')\preceq\max(\Lambda,\Delta,\Delta')^{\exp}\kappa^{(\frac{1}{d+1})^{d}}. \end{equation} \end{itemize} \end{prop} The measure estimate (\ref{ho18}) follows directly from \cite[Propositions 6.6 and 6.7]{EK}, so its proof is not repeated here. The rest of this section is devoted to the proof of Proposition \ref{p1}. To simplify notation, we shall not indicate the dependence of functions on the parameter $w$ when it is clear from the context. \smallskip \subsection{Formulation of the homological equations.} Write \begin{equation*} f^{low}=f^{\varphi}+f^{0}+f^{1}+f^{2}=F^{\varphi}(\varphi;w)+\langle F_{0}(\varphi;w),r\rangle+\langle F_{1}(\varphi;w),\zeta\rangle+\frac{1}{2}\langle F_{2}(\varphi;w)\zeta,\zeta\rangle, \end{equation*} \begin{equation*} s=s^{low}=s^{\varphi}+s^{0}+s^{1}+s^{2}=S^{\varphi}(\varphi;w)+\langle S_{0}(\varphi;w),r\rangle+\langle S_{1}(\varphi;w),\zeta\rangle+\frac{1}{2}\langle S_{2}(\varphi;w)\zeta,\zeta\rangle.
\end{equation*} By the calculations in \cite[Section 4.1.2]{CLY}, we obtain \begin{equation}\label{es1} \{f^{high},s\}^{low}=\{f^{high},s\}^{0}+\{f^{high},s\}^{1}+\{f^{high},s\}^{2}, \end{equation} \begin{equation}\label{es2} \{f^{high},s\}^{1}=\{f^{high},s^{\varphi}\}^{1}, \end{equation} \begin{equation}\label{es3} \{f^{high},s\}^{0}=\{f^{high},s^{\varphi}+s^{1}\}^{0}, \ \{f^{high},s\}^{2}=\{f^{high},s^{\varphi}+s^{1}\}^{2}. \end{equation} Let $g=\{f^{high},s\}$. Write \begin{equation*} g^{low}=g^{0}+g^{1}+g^{2}=\langle G_{0}(\varphi;w),r\rangle+\langle G_{1}(\varphi;w),\zeta\rangle+\frac{1}{2}\langle G_{2}(\varphi;w)\zeta,\zeta\rangle. \end{equation*} In Fourier modes, the homological equation (\ref{ho3}) decomposes into \begin{equation}\label{es4} -\mathrm{i}\langle k,\omega(w)\rangle\hat{S}^{\varphi}(k;w)=-\hat{F}^{\varphi}(k;w)+\delta_{0}^{k}a_{1}(w), \end{equation} \begin{equation}\label{es5} -\mathrm{i}\langle k,\omega(w)\rangle\hat{S}_{1}(k;w)+J(\Omega(w)+H(w))\hat{S}_{1}(k;w)=-\hat{F}_{1}(k;w)-\hat{G}_{1}(k;w), \end{equation} \begin{equation}\label{es6} -\mathrm{i}\langle k,\omega(w)\rangle\hat{S}_{0}(k;w)=-\hat{F}_{0}(k;w)-\hat{G}_{0}(k;w)+\delta_{0}^{k}\chi_{1}(w), \end{equation} \begin{equation}\label{es7} \begin{array}{c} -\mathrm{i}\langle k,\omega(w)\rangle\hat{S}_{2}(k;w)+(\Omega(w)+H(w))J\hat{S}_{2}(k;w) -\hat{S}_{2}(k;w)J(\Omega(w)+H(w))\\ =-\hat{F}_{2}(k;w)-\hat{G}_{2}(k;w)+\delta_{0}^{k}H_{1}(w). \end{array} \end{equation} We solve the equations (\ref{es4})-(\ref{es7}) in the order (\ref{es4}) $\rightarrow$ (\ref{es5}) $\rightarrow$ (\ref{es6}) $\rightarrow$ (\ref{es7}). \subsection{Solution of the homological equation \eqref{es4}.} The homological equation \eqref{es4} is very standard in the KAM theory. From (\ref{es4}), we obtain \begin{equation*}\label{es8} a_{1}(w)=\hat{F}^{\varphi}(0;w)~\quad \textrm{and}\quad \hat{S}^{\varphi}(k;w)=\frac{\hat{F}^{\varphi}(k;w)}{\mathrm{i}\langle k,\omega(w)\rangle},~ k\neq0. 
\end{equation*} By the Diophantine condition (\ref{sd1}), we have \begin{equation}\label{es9} |\hat{S}^{\varphi}(k;w)|\leq\frac{1}{\kappa}|\hat{F}^{\varphi}(k;w)|. \end{equation} Differentiating (\ref{es4}), we obtain a similar homological equation \begin{equation}\label{es10} -\mathrm{i}\partial_{w}(\langle k,\omega(w)\rangle)\hat{S}^{\varphi}(k;w)-\mathrm{i}\langle k,\omega(w)\rangle\partial_{w}\hat{S}^{\varphi}(k;w)=-\partial_{w}\hat{F}^{\varphi}(k;w) \end{equation} for $\partial_{w}\hat{S}^{\varphi}(k;w)$, which yields \begin{equation*}\label{es11} |\partial_{w}\hat{S}^{\varphi}(k;w)|\leq\frac{1}{\kappa}(|k|\cdot|\hat{S}^{\varphi}(k;w)| +|\partial_{w}\hat{F}^{\varphi}(k;w)|), \end{equation*} which together with \eqref{es9} implies \begin{equation*}\label{es12} |\hat{S}^{\varphi}(k;w)|+|\partial_{w}\hat{S}^{\varphi}(k;w)|\leq\frac{|k|+1}{\kappa^{2}}(|\hat{F}^{\varphi}(k;w)|+|\partial_{w}\hat{F}^{\varphi}(k;w)|). \end{equation*} It follows that \begin{equation*}\label{es13} \begin{aligned} \|S^{\varphi}_{\varphi}\|_{D(\rho-\tau)\times U'}&=\sum_{k\in\mathbb{Z}^{\mathcal{A}}}(|\hat{S}^{\varphi}_{\varphi}(k;w)|+|\partial_{w}\hat{S}^{\varphi}_{\varphi}(k;w)|)e^{|k|(\rho-\tau)}\\ &\leq\sum_{k\in\mathbb{Z}^{\mathcal{A}}}\frac{|k|+1}{\kappa^{2}}(|\hat{F}^{\varphi}_{\varphi}(k;w)|+|\partial_{w}\hat{F}^{\varphi}_{\varphi}(k;w)|)e^{|k|(\rho-\tau)}\\ &\preceq \frac{1}{\tau\kappa^{2}}\|F^{\varphi}_{\varphi}\|_{D(\rho)\times U'}. \end{aligned} \end{equation*} As a result, we have \begin{equation}\label{es14} |||X_{s^{\varphi}}|||^{T}_{p,D(\rho-\tau,\sigma^{2},\sigma)\times U'}\preceq \frac{1}{\tau\kappa^{2}}|||X_{f^{\varphi}}|||^{T}_{p,D(\rho,\sigma^{2},\sigma)\times U'}. \end{equation} \bigskip \subsection{Solution of the homological equation \eqref{es5}.} For simplicity, we write (\ref{es5}) as \begin{equation*}\label{es15} \mathrm{i}\langle k,\omega(w)\rangle S+J(\Omega+H)S=F+G.
\end{equation*} Transforming into complex coordinates \begin{equation*} z=\left( \begin{array}{c} u \\ v \\ \end{array} \right) =C^{-1}\left( \begin{array}{c} \xi \\ \eta \\ \end{array} \right), C=\frac{1}{\sqrt{2}}\left( \begin{array}{cc} 1 & 1 \\ -\mathrm{i} & \mathrm{i} \\ \end{array} \right), \end{equation*} and letting $S'=C^{-1}S=\left( \begin{array}{c} S'_{1} \\ S'_{2} \\ \end{array} \right),F'=C^{-1}F=\left( \begin{array}{c} F'_{1} \\ F'_{2} \\ \end{array} \right),G'=C^{-1}G=\left( \begin{array}{c} G'_{1} \\ G'_{2} \\ \end{array} \right)$, we obtain the equivalent equations \begin{equation}\label{es16} \left\{ ~ \begin{aligned} & \mathrm{i}\langle k,\omega(w)\rangle S'_{1}-\mathrm{i}(\Omega+H^{T})S'_{1}=F'_{1}+G'_{1},\\ & \mathrm{i}\langle k,\omega(w)\rangle S'_{2}+\mathrm{i}(\Omega+H)S'_{2}=F'_{2}+G'_{2}. \end{aligned} \right. \end{equation} \textbf{Solution of \eqref{es16}.} We only solve for $S'_{1}$ in \eqref{es16}, since $S'_{2}$ can be handled analogously. By the first Melnikov condition (\ref{sd2}), we have \begin{equation*}\label{es18} \|S'_{1[a]_\Delta}\|\leq\frac{1}{\kappa}\|F'_{1[a]_\Delta}+G'_{1[a]_\Delta}\|. \end{equation*} Using arguments similar to those for $S^{\varphi}$, we get \begin{equation}\label{es21} \|S'_{1[a]_\Delta}\|+\|\partial_{w}S'_{1[a]_\Delta}\|\preceq\frac{|k|+1}{\kappa^{2}}(\|F'_{1[a]_\Delta}+G'_{1[a]_\Delta}\|+\|\partial_{w}F'_{1[a]_\Delta}+\partial_{w}G'_{1[a]_\Delta}\|). \end{equation} \smallskip \textbf{Estimate of $s^{1}_{\varphi}$.} Recall \begin{equation*}\label{es22} s^{1}=\sum_{a\in\mathcal{L}}(S'_{1a}(\varphi;w)u_{a}+S'_{2a}(\varphi;w)v_{a}), \end{equation*} and consider \begin{equation*}\label{es23} \lfloor s^{1}_{\varphi}\rceil_{D(\rho-3\tau,\sigma^{2})\times U'}(z)=\sum_{a\in\mathcal{L}}(\|S'_{1a\varphi}(\varphi;w)\|_{D(\rho-3\tau)\times U'}u_{a}+\|S'_{2a\varphi}(\varphi;w)\|_{D(\rho-3\tau)\times U'}v_{a}).
\end{equation*} For $z=(u,v)$, define $\tilde{z}=(\tilde{u},\tilde{v})$ such that for all $a\in [a]_\Delta$, $\tilde{u}_{a}=\|u_{[a]_\Delta}\|,\tilde{v}_{a}=\|v_{[a]_\Delta}\|$. By (\ref{es21}), we see the first sum in $\lfloor s^{1}_{\varphi}\rceil$ satisfies \begin{equation*}\label{es24} \begin{aligned} &~ |\sum_{a\in\mathcal{L}}\|S'_{1a\varphi}(\varphi)\|_{D(\rho-3\tau)\times U'}u_{a}|\\ \leq & \sum_{a\in\mathcal{L}}\sum_{k\in\mathbb{Z}^{\mathcal{A}}} (|\hat{S}'_{1a\varphi}(k)|+|\partial_{w}\hat{S}'_{1a\varphi}(k)|) \cdot e^{(\rho-3\tau)|k|}\cdot |u_{a}|\\ = & \sum_{k\in\mathbb{Z}^{\mathcal{A}}}\sum_{[a]_\Delta\in\mathcal{E}_{\Delta}} \sum_{a\in[a]_\Delta}(|\hat{S}'_{1a\varphi}(k)|+|\partial_{w}\hat{S}'_{1a\varphi}(k)|) \cdot |u_{a}| \cdot e^{(\rho-3\tau)|k|}\\ \preceq & \sum_{k\in\mathbb{Z}^{\mathcal{A}}}\sum_{a\in\mathcal{L}}\frac{|k|+1}{\kappa^{2}} \left(|\hat{F}'_{1a\varphi}(k)+\hat{G}'_{1a\varphi}(k)| +|\partial_{w}\hat{F}'_{1a\varphi}(k)+\partial_{w}\hat{G}'_{1a\varphi}(k)|\right) \tilde{u}_{a} e^{(\rho-3\tau)|k|}\\ \preceq &\frac{1}{\tau\kappa^{2}}\sum_{a\in\mathcal{L}}\|F'_{1a\varphi}(\varphi) +G'_{1a\varphi}(\varphi)\|_{D(\rho-2\tau)\times U'} \cdot \tilde{u}_{a}. \end{aligned} \end{equation*} Similarly, we have \begin{equation*}\label{es25} |\sum_{a\in\mathcal{L}}\|S'_{2a\varphi}(\varphi;w)\|_{D(\rho-3\tau)\times U'}v_{a}| \preceq\frac{1}{\tau\kappa^{2}}\sum_{a\in\mathcal{L}}\|F'_{2a\varphi}(\varphi;w)+G'_{2a\varphi}(\varphi;w)\|_{D(\rho-2\tau)\times U'}\tilde{v}_{a}. \end{equation*} It follows that \begin{equation*}\label{es26} |\lfloor s^{1}_{\varphi}\rceil_{D(\rho-3\tau,\sigma^{2})\times U'}(z)|\preceq\frac{1}{\tau\kappa^{2}}\lfloor f^{1}_{\varphi}+g^{1}_{\varphi}\rceil_{D(\rho-2\tau,\sigma^{2})\times U'}(\tilde{z}). 
\end{equation*} Moreover, we see from \begin{equation*}\label{es27} |||s^{1}_{\varphi}|||_{D(\rho-3\tau,\sigma^{2})\times U'}=\sup_{0\neq z\in l_{1}^{2}}\frac{|\lfloor s^{1}_{\varphi}\rceil_{D(\rho-3\tau,\sigma^{2})\times U'}(z)|}{\|z\|_{1}}, \end{equation*} and $\|\tilde{z}\|_{1}\preceq d_{\Delta}^{d}\|z\|_{1}$ that \begin{equation}\label{es28} |||s^{1}_{\varphi}|||_{D(\rho-3\tau,\sigma^{2})\times U'}\preceq\frac{d_{\Delta}^{d}}{\tau\kappa^{2}}|||f^{1}_{\varphi}+g^{1}_{\varphi}|||_{D(\rho-2\tau,\sigma^{2})\times U'}. \end{equation} \smallskip \textbf{Estimate of $s^{1}_{z}$.} Consider \begin{equation*}\label{es29} |||s^{1}_{z}|||^{T}_{p,D(\rho-3\tau,\sigma^{2})\times U'}=\|\lfloor s^{1}_{z}\rceil_{D(\rho-3\tau,\sigma^{2})\times U'}\|_{p}, \end{equation*} \begin{equation*}\label{es30} \|\lfloor s^{1}_{z}\rceil_{D(\rho-3\tau,\sigma^{2})\times U'}\|_{p}^{2}=\sum_{a\in\mathcal{L}}(\|S'_{1a}(\varphi)\|^{2}_{D(\rho-3\tau)\times U'}+\|S'_{2a}(\varphi)\|^{2}_{D(\rho-3\tau)\times U'})\langle a\rangle^{2p}. 
\end{equation*} By (\ref{es21}), we have \begin{equation*}\label{es31} \left(\sum_{a\in[a]_\Delta}\|S'_{1a}(\varphi)\|^{2}_{D(\rho-3\tau)\times U'}\right)^{\frac{1}{2}}= \left[\sum_{a\in[a]_\Delta}(\sum_{k\in\mathbb{Z}^{\mathcal{A}}}(|\hat{S}'_{1a}(k)| +|\partial_{w}\hat{S}'_{1a}(k)|)e^{(\rho-3\tau)|k|})^{2}\right]^{\frac{1}{2}} \end{equation*} \begin{align*} &\leq\sum_{k\in\mathbb{Z}^{\mathcal{A}}}[\sum_{a\in[a]_\Delta}(|\hat{S}'_{1a}(k)| +|\partial_{w}\hat{S}'_{1a}(k)|)^{2}]^{\frac{1}{2}}e^{(\rho-3\tau)|k|}\\ &\preceq\sum_{k\in\mathbb{Z}^{\mathcal{A}}}\frac{|k|+1}{\kappa^{2}} \left(\|\hat{F}'_{1[a]}(k)+\hat{G}'_{1[a]}(k)\|+\|\partial_{w}\hat{F}'_{1[a]}(k) +\partial_{w}\hat{G}'_{1[a]}(k)\|\right)e^{(\rho-3\tau)|k|}\\ &\preceq\sum_{k\in\mathbb{Z}^{\mathcal{A}}}\sum_{a\in[a]_\Delta}\frac{|k|+1}{\kappa^{2}} \left(|\hat{F}'_{1a}(k)+\hat{G}'_{1a}(k)|+|\partial_{w}\hat{F}'_{1a}(k) +\partial_{w}\hat{G}'_{1a}(k)|\right)e^{(\rho-3\tau)|k|}\\ &\preceq\frac{1}{\tau\kappa^{2}}\sum_{a\in[a]_\Delta}\|F'_{1a}(\varphi) +G'_{1a}(\varphi)\|_{D(\rho-2\tau)\times U'}, \end{align*} which implies the first sum in $\lfloor s^{1}_{z}\rceil$ satisfies \begin{equation*}\label{es32} \sum_{a\in[a]_\Delta}\|S'_{1a}(\varphi;w)\|^{2}_{D(\rho-3\tau)\times U'}\langle a\rangle^{2p} \preceq(\frac{d_{\Delta}^{d}}{\tau\kappa^{2}})^{2}\sum_{a\in[a]_\Delta}\|F'_{1a}(\varphi;w)+G'_{1a}(\varphi;w)\|^{2}_{D(\rho-2\tau)\times U'}\langle a\rangle^{2p}. \end{equation*} Similarly, the other sum in $\lfloor s^{1}_{z}\rceil$ satisfies \begin{equation*}\label{es33} \sum_{a\in[a]_\Delta}\|S'_{2a}(\varphi;w)\|^{2}_{D(\rho-3\tau)\times U'}\langle a\rangle^{2p} \preceq(\frac{d_{\Delta}^{d}}{\tau\kappa^{2}})^{2}\sum_{a\in[a]_\Delta}\|F'_{2a}(\varphi;w)+G'_{2a}(\varphi;w)\|^{2}_{D(\rho-2\tau)\times U'}\langle a\rangle^{2p}. 
\end{equation*} Then we immediately get \begin{equation}\label{es34} |||s^{1}_{z}|||^{T}_{p,D(\rho-3\tau,\sigma^{2})\times U'}\preceq\frac{d_{\Delta}^{d}}{\tau\kappa^{2}}|||f^{1}_{z}+g^{1}_{z}|||^{T}_{p,D(\rho-2\tau,\sigma^{2})\times U'}. \end{equation} Combining (\ref{es28}) and (\ref{es34}), we have \begin{equation}\label{es35} |||X_{s^{1}}|||^{T}_{p,D(\rho-3\tau,\sigma^{2},\sigma)\times U'}\preceq \frac{d_{\Delta}^{d}}{\tau\kappa^{2}}|||X_{f^{1}+g^{1}}|||^{T}_{p,D(\rho-2\tau,\sigma^{2},\sigma)\times U'}. \end{equation} \bigskip \subsection{Solution of the homological equations \eqref{es6}-\eqref{es7}.} Solving equation (\ref{es6}) in the same way as equation (\ref{es4}), we obtain \begin{equation}\label{es36} |||X_{s^{0}}|||^{T}_{p,D(\rho-5\tau,\sigma^{2},\sigma)\times U'}\preceq \frac{1}{\tau\kappa^{2}}|||X_{f^{0}+g^{0}}|||^{T}_{p,D(\rho-4\tau,\sigma^{2},\sigma)\times U'}. \end{equation} Now we consider equation (\ref{es7}). For simplicity, we write (\ref{es7}) as \begin{equation*}\label{es37} \mathrm{i}\langle k,\omega(w)\rangle S+(\Omega+H)JS-SJ(\Omega+H)=F+G-H_{1}.
\end{equation*} Changing into complex coordinates $z=C^{-1}\zeta$ and letting $S'=C^{T}SC=\left( \begin{array}{cc} S_{1}' & S_{2}' \\ S'^{T}_{2} & S_{3}' \\ \end{array} \right) ,F'=C^{T}FC,G'=C^{T}GC$, $H_{1}'=C^{T}H_{1}C$, we obtain the equivalent equations as follows \begin{equation}\label{es38} \mathrm{i}\langle k,\omega(w)\rangle S_{1}'+\mathrm{i}(\Omega+H)S_{1}'+\mathrm{i}S_{1}'(\Omega+H^{T})=F'_{1}+G'_{1}, \end{equation} \begin{equation}\label{es39} \mathrm{i}\langle k,\omega(w)\rangle S_{2}'+\mathrm{i}(\Omega+H)S_{2}'-\mathrm{i}S_{2}'(\Omega+H)=F'_{2}+G'_{2}-H_{12}', \end{equation} \begin{equation}\label{es40} \mathrm{i}\langle k,\omega(w)\rangle S'^{T}_{2}-\mathrm{i}(\Omega+H^{T})S'^{T}_{2}+\mathrm{i}S'^{T}_{2}(\Omega+H^{T})=F'^{T}_{2}+G'^{T}_{2}-H'^{T}_{12}, \end{equation} \begin{equation}\label{es41} \mathrm{i}\langle k,\omega(w)\rangle S_{3}'-\mathrm{i}(\Omega+H^{T})S_{3}'-\mathrm{i}S_{3}'(\Omega+H)=F'_{3}+G'_{3}. \end{equation} \textbf{Solutions of \eqref{es38}-\eqref{es41}.} Consider first $S'_{2}$ in \eqref{es39}-\eqref{es40}. When $k\neq0$, we have $H_{12}'=0$. In a similar way as before, we obtain from the second Melnikov condition \eqref{sd4} that \begin{equation*}\label{es45} \|S'^{[b]_\Delta}_{2[a]_\Delta}\|+\|\partial_{w}S'^{[b]_\Delta}_{2[a]_\Delta}\| \preceq\frac{|k|+1}{\kappa^{2}}(\|F'^{[b]_\Delta}_{2[a]_\Delta}+G'^{[b]_\Delta}_{2[a]_\Delta}\| +\|\partial_{w}F'^{[b]_\Delta}_{2[a]_\Delta}+\partial_{w}G'^{[b]_\Delta}_{2[a]_\Delta}\|). \end{equation*} When $k=0$, the above estimate also holds since $H_{12}'=(F'_{2}+G'_{2})\mid_{\{(a,b)\in\mathcal{L}\times\mathcal{L}:|a|=|b|\}}$. Using the second Melnikov condition (\ref{sd3}) with the same sign, we can also solve $S_{1}'$ and $S_{3}'$. 
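In the estimates that follow, we shall repeatedly pass from the analyticity width $\rho'$ to $\rho'-\tau$ at the cost of a factor $\frac{1}{\tau}$. This relies on the following elementary inequality, which we record here for the reader's convenience (under the standing assumption $0<\tau\leq1$): \begin{equation*} (|k|+1)e^{-\tau|k|}\leq\sup_{t\geq0}\,(t+1)e^{-\tau t}=\frac{1}{\tau}e^{\tau-1}\leq\frac{1}{\tau}, \end{equation*} so that, for any nonnegative coefficients $c_{k}$ and any $\rho'>0$, \begin{equation*} \sum_{k\in\mathbb{Z}^{\mathcal{A}}}(|k|+1)\,c_{k}\,e^{|k|(\rho'-\tau)}\leq\frac{1}{\tau}\sum_{k\in\mathbb{Z}^{\mathcal{A}}}c_{k}\,e^{|k|\rho'}. \end{equation*}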
In conclusion, we have \begin{equation}\label{es46} \|S'^{[b]_\Delta}_{\nu [a]_\Delta}\|+\|\partial_{w}S'^{[b]_\Delta}_{\nu [a]_\Delta}\| \preceq\frac{|k|+1}{\kappa^{2}}(\|F'^{[b]_\Delta}_{\nu [a]_\Delta}+G'^{[b]_\Delta}_{\nu [a]_\Delta}\| +\|\partial_{w}F'^{[b]_\Delta}_{\nu [a]_\Delta}+\partial_{w}G'^{[b]_\Delta}_{\nu [a]_\Delta}\|), \end{equation} with $\nu\in\{1, 2, 3\}$. \textbf{Estimate of $s^{2}_{\varphi}$.} Recalling \begin{equation*}\label{es48} s^{2}=\frac{1}{2}\sum_{a,b\in\mathcal{L}}(S'^{b}_{1a}(\varphi;w)u_{a}u_{b}+2S'^{b}_{2a}(\varphi;w)u_{a}v_{b}+S'^{b}_{3a}(\varphi;w)v_{a}v_{b}), \end{equation*} we consider \begin{equation*}\label{es49} \begin{aligned} \lfloor s^{2}_{\varphi}\rceil_{D(\rho-5\tau,\sigma^{2})\times U'}(z)=&\frac{1}{2} \sum_{a,b\in\mathcal{L}}(\|S'^{b}_{1a\varphi}(\varphi;w)\|_{D(\rho-5\tau)\times U'}u_{a}u_{b}\\ +&2\|S'^{b}_{2a\varphi}(\varphi;w)\|_{D(\rho-5\tau)\times U'}u_{a}v_{b}+\|S'^{b}_{3a\varphi}(\varphi;w)\|_{D(\rho-5\tau)\times U'}v_{a}v_{b}), \end{aligned} \end{equation*} and the associated multilinear form $\widetilde{s^{2}_{\varphi}}$ \begin{equation}\label{es50} \begin{aligned} \widetilde{\lfloor s^{2}_{\varphi}\rceil}&_{D(\rho-5\tau,\sigma^{2})\times U'}(z^{(1)},z^{(2)})=\frac{1}{2} \sum_{a,b\in\mathcal{L}}(\|S'^{b}_{1a\varphi}(\varphi;w)\|_{D(\rho-5\tau)\times U'}u^{(1)}_{a}u^{(2)}_{b}\\ +&\|S'^{b}_{2a\varphi}(\varphi;w)\|_{D(\rho-5\tau)\times U'}(u^{(1)}_{a}v^{(2)}_{b}+u^{(2)}_{a}v^{(1)}_{b})+\|S'^{b}_{3a\varphi}(\varphi;w)\|_{D(\rho-5\tau)\times U'}v^{(1)}_{a}v^{(2)}_{b}).
\end{aligned} \end{equation} By (\ref{es46}), we know that $|\sum_{a\in[a]_\Delta,b\in[b]_\Delta}\|S'^{b}_{1a\varphi}(\varphi;w)\|_{D(\rho-5\tau)\times U'}u^{(1)}_{a}u^{(2)}_{b}|$ is less than \begin{align*} &\sum_{a\in[a],b\in[b]}\sum_{k\in\mathbb{Z}^{\mathcal{A}}}(|\hat{S}'^{b}_{1a\varphi}(k)|+|\partial_{w}\hat{S}'^{b}_{1a\varphi}(k)|)e^{(\rho-5\tau)|k|}|u^{(1)}_{a}u^{(2)}_{b}| \\ &\preceq\sum_{k\in\mathbb{Z}^{\mathcal{A}}}\sum_{a\in[a],b\in[b]}\frac{|k|+1}{\kappa^{2}} \left(|\hat{F}'^{b}_{1a\varphi}(k)+\hat{G}'^{b}_{1a\varphi}(k)| +|\partial_{w}\hat{F}'^{b}_{1a\varphi}(k)+\partial_{w}\hat{G}'^{b}_{1a\varphi}(k)|\right)\tilde{u}^{(1)}_{a}\tilde{u}^{(2)}_{b}e^{(\rho-5\tau)|k|}\\ &\preceq\frac{1}{\tau\kappa^{2}}\sum_{a\in[a],b\in[b]}\|F'^{b}_{1a\varphi}(\varphi)+G'^{b}_{1a\varphi}(\varphi)\|_{D(\rho-4\tau)\times U'}\tilde{u}^{(1)}_{a}\tilde{u}^{(2)}_{b}, \end{align*} which implies \begin{equation*}\label{es52} |\sum_{a,b\in\mathcal{L}}\|S'^{b}_{1a\varphi}(\varphi)\|_{D(\rho-5\tau)\times U'}u^{(1)}_{a}u^{(2)}_{b}| \preceq\frac{1}{\tau\kappa^{2}}\sum_{a,b\in\mathcal{L}}\|F'^{b}_{1a\varphi}(\varphi)+G'^{b}_{1a\varphi}(\varphi)\|_{D(\rho-4\tau)\times U'}\tilde{u}^{(1)}_{a}\tilde{u}^{(2)}_{b}. \end{equation*} Similar estimates hold for the other three summations on the right-hand side of \eqref{es50}. Then we have \begin{equation*}\label{es56} |\widetilde{\lfloor s^{2}_{\varphi}\rceil}_{D(\rho-5\tau,\sigma^{2})\times U'}(z^{(1)},z^{(2)})|\preceq\frac{1}{\tau\kappa^{2}} \widetilde{\lfloor f^{2}_{\varphi}+g^{2}_{\varphi}\rceil}_{D(\rho-4\tau,\sigma^{2})\times U'}(\tilde{z}^{(1)},\tilde{z}^{(2)}).
\end{equation*} Since \begin{equation*}\label{es57} |||s^{2}_{\varphi}|||_{D(\rho-5\tau,\sigma^{2})\times U'} =\sup_{0\neq z^{(1)},z^{(2)}\in l_{1}^{2}}\frac{|\widetilde{\lfloor s^{2}_{\varphi}\rceil}_{D(\rho-5\tau,\sigma^{2})\times U'}(z^{(1)},z^{(2)})|}{\|z^{(1)}\|_{1}\|z^{(2)}\|_{1}}, \end{equation*} we see from $\|\tilde{z}\|_{1}\preceq d_{\Delta}^{d}\|z\|_{1}$ that \begin{equation}\label{es58} |||s^{2}_{\varphi}|||_{D(\rho-5\tau,\sigma^{2})\times U'}\preceq\frac{d_{\Delta}^{2d}}{\tau\kappa^{2}}|||f^{2}_{\varphi}+g^{2}_{\varphi}|||_{D(\rho-4\tau,\sigma^{2})\times U'}. \end{equation} \textbf{Estimate of $s^{2}_{z}$.} Now we estimate \begin{equation*}\label{es59} |||s^{2}_{z}|||^{T}_{p,D(\rho-5\tau,\sigma^{2})\times U'}=\sup_{0\neq z\in l_{p}^{2}}\frac{\|\lfloor s^{2}_{z}\rceil_{D(\rho-5\tau,\sigma^{2})\times U'}(z)\|_{p}}{\|z\|_{p}}, \end{equation*} in which $\|\lfloor s^{2}_{z}\rceil_{D(\rho-5\tau,\sigma^{2})\times U'}(z)\|_{p}^{2}$ equals the following sum \begin{equation}\label{es60} \begin{aligned} & \sum_{a\in\mathcal{L}} \left|\sum_{b\in\mathcal{L}}(\|S'^{b}_{1a}(\varphi;w)\|_{D(\rho-5\tau)\times U'}u_{b}+\|S'^{b}_{2a}(\varphi;w)\|_{D(\rho-5\tau)\times U'}v_{b})\right|^{2}\langle a\rangle^{2p}\\ +& \sum_{a\in\mathcal{L}}\left|\sum_{b\in\mathcal{L}}(\|S'^{a}_{2b}(\varphi;w)\|_{D(\rho-5\tau)\times U'}u_{b}+\|S'^{b}_{3a}(\varphi;w)\|_{D(\rho-5\tau)\times U'}v_{b})\right|^{2}\langle a\rangle^{2p}.
\end{aligned} \end{equation} By (\ref{es46}), we see that \begin{equation*}\label{es61} \left[\sum_{a\in[a]_\Delta}\left(\sum_{b\in\mathcal{L}}\|S'^{b}_{1a}(\varphi;w)\|_{D(\rho-5\tau)\times U'}|u_{b}|\right)^{2}\right]^{\frac{1}{2}} \end{equation*} \begin{align*} &=\left[\sum_{a\in[a]}\left(\sum_{k\in\mathbb{Z}^{\mathcal{A}}}\sum_{b\in\mathcal{L}}(|\hat{S}'^{b}_{1a}(k)|+|\partial_{w}\hat{S}'^{b}_{1a}(k)|)e^{(\rho-5\tau)|k|}|u_{b}|\right)^{2}\right]^{\frac{1}{2}} \\ &\leq\sum_{k\in\mathbb{Z}^{\mathcal{A}}}\left[\sum_{a\in[a]}\left(\sum_{b\in\mathcal{L}}(|\hat{S}'^{b}_{1a}(k)|+|\partial_{w}\hat{S}'^{b}_{1a}(k)|)e^{(\rho-5\tau)|k|}|u_{b}|\right)^{2}\right]^{\frac{1}{2}}\\ &\leq\sum_{k\in\mathbb{Z}^{\mathcal{A}}}\left[\sum_{b\in\mathcal{L}}\left(\sum_{a\in[a]}(|\hat{S}'^{b}_{1a}(k)|+|\partial_{w}\hat{S}'^{b}_{1a}(k)|)^{2}\right)^{\frac{1}{2}}|u_{b}|\right]e^{(\rho-5\tau)|k|}\\ &\preceq\sum_{k\in\mathbb{Z}^{\mathcal{A}}}\frac{|k|+1}{\kappa^{2}}e^{(\rho-5\tau)|k|}\sum_{[b]\in\mathcal{E}_{\Delta}}(\|\hat{F}'^{[b]}_{1[a]}(k)+\hat{G}'^{[b]}_{1[a]}(k)\| +\|\partial_{w}\hat{F}'^{[b]}_{1[a]}(k)+\partial_{w}\hat{G}'^{[b]}_{1[a]}(k)\|)\|u_{[b]}\| \\ &\preceq\sum_{k\in\mathbb{Z}^{\mathcal{A}}}\frac{|k|+1}{\kappa^{2}}e^{(\rho-5\tau)|k|}\sum_{a\in[a]}\sum_{b\in\mathcal{L}}(|\hat{F}'^{b}_{1a}(k)+\hat{G}'^{b}_{1a}(k)| +|\partial_{w}\hat{F}'^{b}_{1a}(k)+\partial_{w}\hat{G}'^{b}_{1a}(k)|)\tilde{u}_{b}\\ &\preceq\frac{1}{\tau\kappa^{2}}\sum_{a\in[a]}\sum_{b\in\mathcal{L}}\|F'^{b}_{1a}(\varphi)+G'^{b}_{1a}(\varphi)\|_{D(\rho-4\tau)\times U'}\tilde{u}_{b}. \end{align*} As a result, we have \begin{equation*}\label{es62} \begin{aligned} \sum_{a\in[a]_\Delta}&\left(\sum_{b\in\mathcal{L}}\|S'^{b}_{1a}(\varphi)\|_{D(\rho-5\tau)\times U'}|u_{b}|\right)^{2}\\ &\preceq(\frac{d^{d}_{\Delta}}{\tau\kappa^{2}})^{2}\sum_{a\in[a]_{\Delta}}\left(\sum_{b\in\mathcal{L}}\|F'^{b}_{1a}(\varphi)+G'^{b}_{1a}(\varphi)\|_{D(\rho-4\tau)\times U'}\tilde{u}_{b}\right)^{2}. 
\end{aligned} \end{equation*} Similar estimates hold for the other three summations in \eqref{es60}. Then we have \begin{equation*}\label{es66} \|\lfloor s^{2}_{z}\rceil_{D(\rho-5\tau,\sigma^{2})\times U'}(z)\|_{p}\preceq\frac{d^{d}_{\Delta}}{\tau\kappa^{2}}\|\lfloor f^{2}_{z}+g^{2}_{z}\rceil_{D(\rho-4\tau,\sigma^{2})\times U'}(\tilde{z})\|_{p}, \end{equation*} which together with $\|\tilde{z}\|_{p}\preceq d_{\Delta}^{d}\|z\|_{p}$ implies \begin{equation}\label{es67} |||s^{2}_{z}|||^{T}_{p,D(\rho-5\tau,\sigma^{2})\times U'}\preceq\frac{d_{\Delta}^{2d}}{\tau\kappa^{2}}|||f^{2}_{z}+g^{2}_{z}|||^{T}_{p,D(\rho-4\tau,\sigma^{2})\times U'}. \end{equation} Finally, combining (\ref{es58}) and (\ref{es67}), we have \begin{equation*}\label{es68} |||X_{s^{2}}|||^{T}_{p,D(\rho-5\tau,\sigma^{2},\sigma)\times U'}\preceq \frac{d_{\Delta}^{2d}}{\tau\kappa^{2}}|||X_{f^{2}+g^{2}}|||^{T}_{p,D(\rho-4\tau,\sigma^{2},\sigma)\times U'}. \end{equation*} \smallskip \subsection{Verification of the estimates \eqref{ho6}-\eqref{ho11}.} In this part, we shall verify the estimates of the vector fields $X_{s}$ and $X_{h_{1}}$. Recall that $s=s^{\varphi}+s^{0}+s^{1}+s^{2}$. By (\ref{ho1}), (\ref{es14}), we have \begin{equation*}\label{es69} |||X_{s^{\varphi}}|||^{T}_{p,D(\rho-\tau,\sigma^{2},\sigma)\times U'}\preceq \frac{1}{\tau\kappa^{2}}|||X_{f^{\varphi}}|||^{T}_{p,D(\rho,\sigma^{2},\sigma)\times U'}\preceq \frac{\varepsilon}{\tau\kappa^{2}}, \end{equation*} which together with Proposition \ref{p} and (\ref{ho1}) implies \begin{equation}\label{es70} \begin{aligned} |||X_{\{f^{high},s^{\varphi}\}}|||^{T}&_{p,D(\rho-2\tau,(\sigma-\tau)^{2},\sigma-\tau)\times U'}\\ &\preceq \frac{1}{\tau}|||X_{f^{high}}|||^{T}_{p,D(\rho-\tau,\sigma^{2},\sigma)\times U'}|||X_{s^{\varphi}}|||^{T}_{p,D(\rho-\tau,\sigma^{2},\sigma)\times U'}\preceq\frac{\varepsilon}{\tau^{2}\kappa^{2}}.
\end{aligned} \end{equation} By (\ref{ho1}), (\ref{es2}), (\ref{es35}) and (\ref{es70}), we have \begin{equation*}\label{es71} \begin{aligned} &|||X_{s^{1}}|||^{T}_{p,D(\rho-3\tau,(\sigma-3\tau)^{2},\sigma-3\tau)\times U'}\preceq \frac{d_{\Delta}^{d}}{\tau\kappa^{2}}|||X_{f^{1}+g^{1}}|||^{T}_{p,D(\rho-2\tau,(\sigma-3\tau)^{2},\sigma-3\tau)\times U'}\\ \preceq &\frac{d_{\Delta}^{d}}{\tau\kappa^{2}}(|||X_{f^{1}}|||^{T}_{p,D(\rho-2\tau,(\sigma-3\tau)^{2},\sigma-3\tau)\times U'}+|||X_{g^{1}}|||^{T}_{p,D(\rho-2\tau,(\sigma-3\tau)^{2},\sigma-3\tau)\times U'}) \preceq\frac{d_{\Delta}^{d}\varepsilon}{\tau^{3}\kappa^{4}}. \end{aligned} \end{equation*} Similar to $X_{s^{1}}$, we can estimate $X_{s^{0}}$ and $X_{s^{2}}$ in sequence and finally get \begin{equation}\label{es75} |||X_{s}|||^{T}_{p,D(\rho-5\tau,(\sigma-5\tau)^{2},\sigma-5\tau)\times U'}\preceq\frac{d_{\Delta}^{3d}\varepsilon}{\tau^{5}\kappa^{6}}. \end{equation} Moreover, the vector field $X_{h_{1}}$ satisfies \begin{equation}\label{es76} \begin{aligned} |||X_{h_{1}}|||^{T}&_{p,D(\rho-5\tau,(\sigma-5\tau)^{2},\sigma-5\tau)\times U'} \leq|||X_{f^{0}+g^{0}}|||^{T}_{p,D(\rho-5\tau,(\sigma-5\tau)^{2},\sigma-5\tau)\times U'}\\ &+|||X_{f^{2}+g^{2}}|||^{T}_{p,D(\rho-5\tau,(\sigma-5\tau)^{2},\sigma-5\tau)\times U'} \preceq\frac{d_{\Delta}^{d}\varepsilon}{\tau^{4}\kappa^{4}}. \end{aligned} \end{equation} \smallskip \subsection{Estimate of the new Hamiltonian.} In this part, we shall verify the properties \eqref{ho12}-\eqref{ho15}. 
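Throughout the expansion below we use the standard Lie-series identities for the flow $X^{t}_{s}$, recalled here for convenience: since $\frac{d}{dt}\,(g\circ X^{t}_{s})=\{g,s\}\circ X^{t}_{s}$ for any Hamiltonian $g$, Taylor's formula gives \begin{equation*} g\circ X^{t}_{s}\mid_{t=1}=g+\int_{0}^{1}\{g,s\}\circ X^{t}_{s}\,dt =g+\{g,s\}+\int_{0}^{1}(1-t)\{\{g,s\},s\}\circ X^{t}_{s}\,dt, \end{equation*} applied below with $g=h$, $g=f^{low}$ and $g=f^{high}$.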
Using Taylor's formula, we obtain from the homological equation (\ref{ho3}) that \begin{equation*}\label{es77} (h+f)\circ X^{t}_{s}\mid_{t=1}=(h+f^{low}+f^{high})\circ X^{t}_{s}\mid_{t=1} \end{equation*} \begin{align*} =&h+\{h,s\}+\int_{0}^{1}(1-t)\{\{h,s\},s\}\circ X^{t}_{s}dt+f^{low}+\int_{0}^{1}\{f^{low},s\}\circ X^{t}_{s}dt \\ &+f^{high}+\{f^{high},s\}+\int_{0}^{1}(1-t)\{\{f^{high},s\},s\}\circ X^{t}_{s}dt \\ =&h+h_{1}+(1-\mathcal{T}_{\Delta'})f^{low}+f^{high}+(1-\mathcal{T}_{\Delta'})\{f^{high},s\}^{low}+\{f^{high},s\}^{high} \\ &+\int_{0}^{1}(1-t)\{\{h,s\},s\}\circ X^{t}_{s}dt+\int_{0}^{1}\{f^{low},s\}\circ X^{t}_{s}dt+\int_{0}^{1}(1-t)\{\{f^{high},s\},s\}\circ X^{t}_{s}dt. \end{align*} Then we have $f_{1}=f_{1}^{low}+f_{1}^{high}$, where \begin{equation}\label{es113} \begin{aligned} f_{1}^{low}=&(1-\mathcal{T}_{\Delta'})f^{low}+(1-\mathcal{T}_{\Delta'})\{f^{high},s\}^{low}+\left(\int_{0}^{1}(1-t)\{\{h,s\},s\}\circ X^{t}_{s}dt\right)^{low}\\ &+\left(\int_{0}^{1}\{f^{low},s\}\circ X^{t}_{s}dt\right)^{low}+\left(\int_{0}^{1}(1-t)\{\{f^{high},s\},s\}\circ X^{t}_{s}dt\right)^{low},\\ f_{1}^{high}=&f^{high}+\{f^{high},s\}^{high}+\left(\int_{0}^{1}(1-t)\{\{h,s\},s\}\circ X^{t}_{s}dt\right)^{high}\\ &+\left(\int_{0}^{1}\{f^{low},s\}\circ X^{t}_{s}dt\right)^{high}+\left(\int_{0}^{1}(1-t)\{\{f^{high},s\},s\}\circ X^{t}_{s}dt\right)^{high}. 
\end{aligned} \end{equation} \bigskip \textbf{Estimate of the vector field generated by $(1-\mathcal{T}_{\Delta'})f^{low}$.} Observing that \begin{equation*}\label{es78} \begin{aligned} \|(1-\mathcal{T}_{\Delta'})&F^{\varphi}_{\varphi}\|_{D(\rho-\tau)\times U'}=\sum_{|k|>\Delta'}(|\hat{F}^{\varphi}_{\varphi}(k;w)|+|\partial_{w}\hat{F}^{\varphi}_{\varphi}(k;w)|)e^{|k|(\rho-\tau)}\\ &\leq\sum_{|k|>\Delta'}e^{-\tau|k|}\|F^{\varphi}_{\varphi}\|_{D(\rho)\times U'} \preceq \frac{1}{\tau^{\# \mathcal{A}}}e^{-\frac{1}{2}\tau\Delta'}\|F^{\varphi}_{\varphi}\|_{D(\rho)\times U'}, \end{aligned} \end{equation*} we obtain \begin{equation*}\label{es79} |||X_{(1-\mathcal{T}_{\Delta'})f^{\varphi}}|||^{T}_{p,D(\rho-\tau,\sigma^{2},\sigma)\times U'} \preceq \frac{1}{\tau^{\# \mathcal{A}}}e^{-\frac{1}{2}\tau\Delta'}|||X_{f^{\varphi}}|||^{T}_{p,D(\rho,\sigma^{2},\sigma)\times U'}\preceq\frac{1}{\tau^{\# \mathcal{A}}}e^{-\frac{1}{2}\tau\Delta'}\varepsilon. \end{equation*} The same estimates hold for $|||X_{(1-\mathcal{T}_{\Delta'})f^{0}}|||^{T}_{p,D(\rho-\tau,\sigma^{2},\sigma)\times U'}$ and $|||X_{(1-\mathcal{T}_{\Delta'})f^{1}}|||^{T}_{p,D(\rho-\tau,\sigma^{2},\sigma)\times U'}$. Then we turn to \begin{equation*}\label{es83} (1-\mathcal{T}_{\Delta'})f^{2}=(1-\mathcal{T}^{1}_{\Delta'})f^{2}+(1-\mathcal{T}^{2}_{\Delta'})f^{2}, \end{equation*} where \begin{equation*}\label{es82} \begin{aligned} (1-\mathcal{T}^{1}_{\Delta'})f^{2}=\frac{1}{2}\langle \zeta,(1-\mathcal{T}_{\Delta'})F_{2}(\varphi;w)\zeta\rangle,\quad (1-\mathcal{T}^{2}_{\Delta'})f^{2}=\frac{1}{2}\sum_{|k|>\Delta'}\langle \zeta,\mathcal{T}_{\Delta'}\hat{F}_{2}(k;w)\zeta\rangle e^{\mathrm{i}\langle k,\varphi\rangle}. \end{aligned} \end{equation*} It is easy to see \begin{equation}\label{es84} |||X_{(1-\mathcal{T}^{2}_{\Delta'})f^{2}}|||^{T}_{p,D(\rho-\tau,\sigma^{2},\sigma)\times U'}\preceq\frac{1}{\tau^{\# \mathcal{A}}}e^{-\frac{1}{2}\tau\Delta'}\varepsilon. 
\end{equation} By (\ref{ho2}), we have \begin{equation*}\label{es85} \sup_{(\varphi,w)\in D(\rho)\times U}(|F_{2}(\varphi;w)|_{\gamma},|\partial_{w}F_{2}(\varphi;w)|_{\gamma})\leq\frac{\varepsilon}{\sigma^{2}}. \end{equation*} Hence \begin{equation*}\label{es86} |\hat{F}'^{b}_{2a}(k;w)|+|\partial_{w}\hat{F}'^{b}_{2a}(k;w)|\preceq \frac{\varepsilon}{\sigma^{2}}e^{-\gamma|a-b|-\rho|k|}, \end{equation*} which implies \begin{equation*}\label{es87} \begin{aligned} \|F'^{b}_{2a\varphi}(\varphi;w)\|_{D(\rho-\tau)\times U}=&\sum_{k\in\mathbb{Z}^{\mathcal{A}}}(|\hat{F}'^{b}_{2a\varphi}(k;w)| +|\partial_{w}\hat{F}'^{b}_{2a\varphi}(k;w)|)e^{(\rho-\tau)|k|}\\ \preceq& \sum_{k\in\mathbb{Z}^{\mathcal{A}}}|k|e^{-\tau|k|}\frac{\varepsilon}{\sigma^{2}}e^{-\gamma|a-b|} \preceq \frac{1}{\tau^{\# \mathcal{A}+1}}\frac{\varepsilon}{\sigma^{2}}e^{-\gamma|a-b|}. \end{aligned} \end{equation*} Using Young's inequality (2) in \cite{EK}, we obtain \begin{equation*}\label{es88} |||X_{(1-\mathcal{T}^{1}_{\Delta'})f^{2}}|||^{T}_{p,D(\rho-\tau,\sigma^{2},\sigma)\times U'}\preceq\frac{1}{\gamma^{d+p}}\frac{1}{\tau^{\# \mathcal{A}+1}}\frac{\varepsilon}{\sigma^{2}}e^{-\frac{1}{2}\gamma\Delta'}, \end{equation*} which together with \eqref{es84} leads to \begin{equation*}\label{es89} |||X_{(1-\mathcal{T}_{\Delta'})f^{2}}|||^{T}_{p,D(\rho-\tau,\sigma^{2},\sigma)\times U'} \preceq\frac{1}{\tau^{\# \mathcal{A}}}e^{-\frac{1}{2}\tau\Delta'}\varepsilon+\frac{1}{\gamma^{d+p}}\frac{1}{\tau^{\# \mathcal{A}+1}}\frac{\varepsilon}{\sigma^{2}}e^{-\frac{1}{2}\gamma\Delta'}. \end{equation*} In conclusion, we have \begin{equation}\label{es90} |||X_{(1-\mathcal{T}_{\Delta'})f^{low}}|||^{T}_{p,D(\rho-\tau,\sigma^{2},\sigma)\times U'} \preceq\frac{1}{\tau^{\# \mathcal{A}}}e^{-\frac{1}{2}\tau\Delta'}\varepsilon+\frac{1}{\gamma^{d+p}}\frac{1}{\tau^{\# \mathcal{A}+1}}\frac{\varepsilon}{\sigma^{2}}e^{-\frac{1}{2}\gamma\Delta'}. 
\end{equation} \textbf{Estimate of the vector field generated by $(1-\mathcal{T}_{\Delta'})g^{low}=(1-\mathcal{T}_{\Delta'}) \{f^{high}, s\}^{low}$.} From equation (\ref{es4}), we obtain \begin{equation}\label{es91} [s^{\varphi}]_{\Lambda,\gamma,\sigma;U',\rho,\mu}\preceq\frac{1}{\kappa^{2}}(\Delta')^{\exp}\varepsilon. \end{equation} Applying \cite[Proposition 6.6]{EK} to equation (\ref{es5}), we obtain \begin{equation*}\label{es92} \|\hat{S}_1(k;\cdot)\|_{p,\gamma;U'}\preceq\frac{1}{\kappa^{2}}(\Delta\Delta')^{\exp}\|\hat{F}_1(k;\cdot)+\hat{G}_1(k;\cdot)\|_{p,\gamma;U'}. \end{equation*} Noticing that \begin{equation*}\label{es93} \{f^{high},s^{\varphi}\}=-\langle \partial_{r}f^{high},\partial_{\varphi}s^{\varphi}\rangle, \end{equation*} it follows from (\ref{ho2}), (\ref{es91}) and \cite[Equation (42)]{EK} that \begin{equation}\label{es94} [\{f^{high},s^{\varphi}\}]_{\Lambda,\gamma,\sigma;U',\rho^{(1)},\mu^{(1)}}\preceq\frac{1}{\rho-\rho^{(1)}}\frac{1}{\mu-\mu^{(1)}}\frac{1}{\kappa^{2}}(\Delta')^{\exp}\varepsilon, \end{equation} which together with (\ref{ho2}) and \eqref{es1} implies \begin{equation*}\label{es95} [s^{1}]_{\Lambda,\gamma,\sigma;U',\rho^{(1)},\mu^{(1)}}\preceq\frac{1}{\kappa^{4}}(\Delta\Delta')^{\exp}\frac{1}{\rho-\rho^{(1)}}\frac{1}{\mu-\mu^{(1)}}\varepsilon. \end{equation*} Since $s^{1}$ is independent of $r$, we have \begin{equation}\label{es96} [s^{1}]_{\Lambda,\gamma,\sigma;U',\rho^{(1)},\mu}\preceq\frac{1}{\kappa^{4}}(\Delta\Delta')^{\exp}\frac{1}{\rho-\rho^{(1)}}\frac{1}{\mu}\varepsilon. \end{equation} Next we estimate \begin{equation*}\label{es97} \{f^{high},s^{1}\}=-\langle \partial_{r}f^{high},\partial_{\varphi}s^{1}\rangle+\langle \partial_{\zeta}f^{high},J\partial_{\zeta}s^{1}\rangle.
\end{equation*} By (\ref{ho2}), (\ref{es96}) and the Cauchy estimate (42) in \cite{EK}, we have \begin{equation*}\label{es98} [\langle \partial_{r}f^{high},\partial_{\varphi}s^{1}\rangle]_{\Lambda,\gamma,\sigma;U',\rho^{(2)},\mu^{(1)}} \preceq\frac{1}{\kappa^{4}}(\Delta\Delta')^{\exp}\frac{1}{\rho-\rho^{(1)}}\frac{1}{\rho^{(1)}-\rho^{(2)}}\frac{1}{\mu-\mu^{(1)}}\frac{1}{\mu}\varepsilon. \end{equation*} Applying further Proposition 3.1 (ii) in \cite{EK}, we have \begin{equation*}\label{es99} [\langle \partial_{\zeta}f^{high},J\partial_{\zeta}s^{1}\rangle]_{\Lambda,\gamma,\sigma^{(1)};U',\rho^{(1)},\mu} \preceq\frac{1}{\kappa^{4}}(\Delta\Delta')^{\exp}\frac{1}{\rho-\rho^{(1)}}\frac{1}{\sigma-\sigma^{(1)}}\frac{1}{\sigma}\frac{1}{\mu}\varepsilon, \end{equation*} which implies \begin{equation*}\label{es100} [\{f^{high},s^{1}\}]_{\left\{\substack{\Lambda,\gamma,\sigma^{(1)};\\ U',\rho^{(2)},\mu^{(1)}}\right\}} \preceq\frac{1}{\kappa^{4}}(\Delta\Delta')^{\exp}\bm{\delta}\varepsilon, \end{equation*} where \begin{equation}\label{bm delta} \bm{\delta}=\frac{1}{\rho-\rho^{(1)}} \left(\frac{1}{\sigma-\sigma^{(1)}}\frac{1}{\sigma}+\frac{1}{\rho^{(1)}-\rho^{(2)}} \frac{1}{\mu-\mu^{(1)}}\right)\frac{1}{\mu}. \end{equation} By (\ref{es1}) and (\ref{es94}), we have \begin{equation}\label{es101} [g^{low}]_{\left\{\substack{\Lambda,\gamma,\sigma^{(1)};\\ U',\rho^{(2)},\mu^{(1)}}\right\}} \preceq\frac{1}{\kappa^{4}}(\Delta\Delta')^{\exp}\bm{\delta} \varepsilon, \end{equation} which implies \begin{equation}\label{es102} \sup_{(\varphi,w)\in D(\rho^{(2)})\times U'}\left\{ \begin{aligned} |G_{2}(\varphi;w)|_{\gamma},\\ |\partial_{w}G_{2}(\varphi;w)|_{\gamma}\end{aligned}\right\} \preceq\frac{(\Delta\Delta')^{\exp}}{(\sigma^{(1)})^{2}\kappa^{4}} \bm{\delta'} \varepsilon, \end{equation} where \begin{equation*} \bm{\delta'}= \frac{1}{\rho-\rho^{(1)}} \left(\frac{1}{\sigma-\sigma^{(1)}}\frac{1}{\sigma}+\frac{1}{\rho^{(1)}-\rho^{(2)}} \frac{1}{\mu}\right)\frac{1}{\mu}. 
\end{equation*} By (\ref{es1}) and (\ref{es70}), we have \begin{equation}\label{es103} |||X_{g^{low}}|||^{T}_{p,D(\rho-4\tau,(\sigma-4\tau)^{2},\sigma-4\tau)\times U'} \preceq\frac{d_{\Delta}^{d}\varepsilon}{\tau^{4}\kappa^{4}}. \end{equation} Following the proof of (\ref{es90}), we get from (\ref{es102}) and (\ref{es103}) that \begin{equation}\label{es104} \begin{aligned} |||X_{(1-\mathcal{T}_{\Delta'})g^{low}}||&|^{T}_{p,D(\rho-5\tau,(\sigma-4\tau)^{2},\sigma-4\tau) \times U'}\\ \preceq & ~ \frac{1}{\tau^{\# \mathcal{A}}}e^{-\frac{1}{2}\tau\Delta'}\frac{d_{\Delta}^{d}\varepsilon}{\tau^{4}\kappa^{4}} +\frac{1}{\gamma^{d+p}}\frac{1}{\tau^{\# \mathcal{A}+1}}e^{-\frac{1}{2}\gamma\Delta'} \frac{(\Delta\Delta')^{\exp}}{(\sigma^{(1)})^{2}\kappa^{4}} \bm{\delta'} \varepsilon \\ \preceq & ~\frac{d_{\Delta}^{d}}{\kappa^{4}\tau^{\# \mathcal{A}+4}}e^{-\frac{1}{2}\tau\Delta'}\varepsilon +\frac{1}{\gamma^{d+p}}\frac{1}{\tau^{\# \mathcal{A}+3}}e^{-\frac{1}{2}\gamma\Delta'}\frac{(\Delta\Delta')^{\exp}}{\sigma^{6}\kappa^{4}}\varepsilon, \end{aligned} \end{equation} where $\rho^{(1)}=\rho-\tau,\rho^{(2)}=\rho-2\tau,\sigma^{(1)}=\sigma-\tau$. \textbf{Estimate of the vector field $X_{f_{1}^{low}}$.} Using (\ref{ho1}), (\ref{ho3}), (\ref{es76}) and (\ref{es103}), we have \begin{equation*}\label{es105} |||X_{\{h,s\}}|||^{T}_{p,D(\rho-5\tau,(\sigma-5\tau)^{2},\sigma-5\tau)\times U'}\preceq\frac{d_{\Delta}^{d}\varepsilon}{\tau^{4}\kappa^{4}}, \end{equation*} which together with Proposition \ref{p} and (\ref{es75}) implies \begin{equation*}\label{es106} |||X_{\{\{h,s\},s\}}|||^{T}_{p,D(\rho-6\tau,(\sigma-6\tau)^{2},\sigma-6\tau)\times U'}\preceq\frac{d_{\Delta}^{4d}\varepsilon^{2}}{\tau^{10}\kappa^{10}}. 
\end{equation*} By Proposition \ref{p}, (\ref{ho1}), (\ref{es75}), we have \begin{equation}\label{es107} \begin{aligned} & |||X_{\{f^{low},s\}}|||^{T}_{p,D(\rho-6\tau,(\sigma-6\tau)^{2},\sigma-6\tau)\times U'}\preceq\frac{d_{\Delta}^{3d}\varepsilon^{2}}{\tau^{6}\kappa^{6}},\\ & |||X_{\{f^{high},s\}}|||^{T}_{p,D(\rho-6\tau,(\sigma-6\tau)^{2},\sigma-6\tau)\times U'}\preceq\frac{d_{\Delta}^{3d}\varepsilon}{\tau^{6}\kappa^{6}}, \end{aligned} \end{equation} which implies \begin{equation*}\label{es109} |||X_{\{\{f^{high},s\},s\}}|||^{T}_{p,D(\rho-7\tau,(\sigma-7\tau)^{2},\sigma-7\tau)\times U'}\preceq\frac{d_{\Delta}^{6d}\varepsilon^{2}}{\tau^{12}\kappa^{12}}. \end{equation*} Then applying Theorem 3.3 in \cite{CLY}, we have \begin{equation}\label{es110} \begin{aligned} & |||X_{\int_{0}^{1}(1-t)\{\{h,s\},s\}\circ X^{t}_{s}dt}|||^{T}_{p,D(\rho-7\tau,(\sigma-7\tau)^{2},\sigma-7\tau)\times U'} \preceq\frac{d_{\Delta}^{4d}\varepsilon^{2}}{\tau^{10}\kappa^{10}},\\ & |||X_{\int_{0}^{1}\{f^{low},s\}\circ X^{t}_{s}dt}|||^{T}_{p,D(\rho-7\tau,(\sigma-7\tau)^{2},\sigma-7\tau)\times U'}\preceq\frac{d_{\Delta}^{3d}\varepsilon^{2}}{\tau^{6}\kappa^{6}},\\ & |||X_{\int_{0}^{1}(1-t)\{\{f^{high},s\},s\}\circ X^{t}_{s}dt}|||^{T}_{p,D(\rho-8\tau,(\sigma-8\tau)^{2},\sigma-8\tau)\times U'}\preceq\frac{d_{\Delta}^{6d}\varepsilon^{2}}{\tau^{12}\kappa^{12}}. \end{aligned} \end{equation} By (\ref{es90}), (\ref{es104}), \eqref{es110} and (\ref{es113}), we have \begin{equation*}\label{es115} \begin{aligned} |||X_{f_{1}^{low}}&|||^{T}_{p,D(\rho-8\tau,(\sigma-8\tau)^{2},\sigma-8\tau)\times U'}\\ &\preceq\frac{d_{\Delta}^{d}}{\kappa^{4}\tau^{\# \mathcal{A}+4}}e^{-\frac{1}{2}\tau\Delta'}\varepsilon +\frac{1}{\gamma^{d+p}}\frac{1}{\tau^{\# \mathcal{A}+3}}e^{-\frac{1}{2}\gamma\Delta'}\frac{(\Delta\Delta')^{\exp}}{\sigma^{6}\kappa^{4}}\varepsilon+\frac{d_{\Delta}^{6d}\varepsilon^{2}}{\tau^{12}\kappa^{12}}. 
\end{aligned} \end{equation*} \textbf{Estimate of the vector field $X_{f_{1}^{high}}$.} By (\ref{ho1}), \eqref{es107}, (\ref{es110}) and \eqref{es113}, we have \begin{equation*}\label{es116} |||X_{f_{1}^{high}}|||^{T}_{p,D(\rho-8\tau,(\sigma-8\tau)^{2},\sigma-8\tau)\times U'}\preceq 1+\frac{d_{\Delta}^{3d}\varepsilon}{\tau^{6}\kappa^{6}}+\frac{d_{\Delta}^{6d}\varepsilon^{2}}{\tau^{12}\kappa^{12}}. \end{equation*} \subsection{Verification of \eqref{ho16}-\eqref{ho17}.} In this part, we shall verify the estimates \eqref{ho16}-\eqref{ho17}. From (\ref{es6}), (\ref{ho2}) and (\ref{es101}), we obtain \begin{equation*}\label{es117} [s^{0}]_{\Lambda,\gamma,\sigma^{(1)};U',\rho^{(2)},\mu^{(1)}} \preceq\frac{1}{\kappa^{6}}(\Delta\Delta')^{\exp} \bm{\delta} \varepsilon, \end{equation*} where $\bm{\delta}$ is given by \eqref{bm delta}. Since $s^{0}$ is independent of $\zeta$, we obtain \begin{equation}\label{es118} [s^{0}]_{\Lambda,\gamma,\sigma;U',\rho^{(2)},\mu^{(1)}} \preceq\frac{1}{\kappa^{6}}(\Delta\Delta')^{\exp}\frac{1}{\rho-\rho^{(1)}}\left(\frac{1}{\sigma^{2}}+\frac{1}{\rho^{(1)}-\rho^{(2)}}\frac{1}{\mu-\mu^{(1)}}\right)\frac{1}{\mu}\varepsilon. \end{equation} Applying Proposition 6.7 in \cite{EK} to equation (\ref{es7}), it follows from (\ref{ho2}) and (\ref{es101}) that \begin{equation}\label{es119} \begin{aligned} & [s^{2}]_{\left\{\substack{\Lambda'+d_\Delta+2,\gamma,\sigma^{(1)};\\ U',\rho^{(2)},\mu^{(1)}}\right\}} \preceq\frac{1}{\kappa^{7}}(\Delta\Delta')^{\exp} \bm{\delta} \varepsilon,\\ & [h_{1}]_{\left\{\substack{\Lambda'+d_\Delta+2,\gamma,\sigma^{(1)};\\ U',\rho^{(2)},\mu^{(1)}}\right\}} \preceq\frac{1}{\kappa^{4}}(\Delta\Delta')^{\exp} \bm{\delta} \varepsilon. 
\end{aligned} \end{equation} Using (\ref{es91}), (\ref{es96}), (\ref{es118}) and (\ref{es119}), we obtain \begin{equation*}\label{es121} [s]_{\left\{\substack{\Lambda'+d_\Delta+2,\gamma,\sigma^{(1)};\\U',\rho^{(2)},\mu^{(1)}}\right\}} \preceq\frac{1}{\kappa^{7}}(\Delta\Delta')^{\exp}\frac{1}{\rho-\rho^{(1)}}\left(\frac{1}{\sigma-\sigma^{(1)}}\frac{1}{\sigma}+\frac{1}{\rho^{(1)}-\rho^{(2)}}\frac{1}{\mu-\mu^{(1)}}\right)\frac{1}{\mu}\varepsilon. \end{equation*} This completes the proof of Proposition \ref{p1}. \section{Proof of the KAM theorem \ref{t1}}\label{sect 4} This section is devoted to the proof of Theorem \ref{t1}. In subsection \ref{sec 4.1}, we carry out the normal form computation. In subsection \ref{sec 4.2}, we state and prove the KAM iterative lemma, from which Theorem \ref{t1} follows immediately. \subsection{The normal form computation.}\label{sec 4.1} For $\rho_{+}<\rho$, $\gamma_{+}<\gamma$, let $$\Delta'=80(\log\frac{1}{\varepsilon})^{2}\frac{1}{\min(\gamma-\gamma_{+},\rho-\rho_{+})},$$ and $n=[\log\frac{1}{\varepsilon}]$. Assume $\rho=\sigma$, $\mu=\sigma^{2}$, $d_{\Delta}\gamma\leq1$. For $1\leq j\leq n$, let \begin{equation*} \varepsilon_{j}=\frac{\varepsilon}{\kappa^{20}}\varepsilon_{j-1}, \ \varepsilon_{0}=\varepsilon, \end{equation*} \begin{equation*} \gamma_{j}=\gamma-j\frac{\gamma-\gamma_{+}}{n}, \ \gamma_{0}=\gamma, \end{equation*} \begin{equation*} \rho_{j}=\rho-j\frac{\rho-\rho_{+}}{n}, \ \rho_{0}=\rho, \end{equation*} \begin{equation*} \sigma_{j}=\sigma-j\frac{\sigma-\sigma_{+}}{n}, \ \sigma_{0}=\sigma, \end{equation*} \begin{equation*} \mu_{j}=\sigma_{j}^{2}, \ \mu_{0}=\mu, \end{equation*} \begin{equation*} \Lambda_{j}=\Lambda_{j-1}+d_{\Delta}+30, \ \Lambda_{0}=\mathrm{cte}.\max(\Lambda,d_{\Delta}^{2},d_{\Delta'}^{2}), \end{equation*} where the constant $\mathrm{cte}$. is the one in Proposition 6.7 in \cite{EK}. We have the following lemma.
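Before stating it, we record the explicit form of the recursion for $\varepsilon_{j}$, which follows by an immediate induction on $j$: \begin{equation*} \varepsilon_{j}=\Big(\frac{\varepsilon}{\kappa^{20}}\Big)^{j}\varepsilon,\qquad 0\leq j\leq n, \end{equation*} so that $\varepsilon_{j}$ decreases at least geometrically in $j$ as soon as $\varepsilon\leq\kappa^{20}$.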
\begin{lem}\label{l1} For $0\leq j<n$, consider the Hamiltonian $h+h_1+\cdots+h_{j}+f_{j}$, where \begin{equation*} h(r,\zeta;w)=\langle\omega(w),r\rangle+\frac{1}{2}\langle\zeta,(\Omega(w)+H(w))\zeta\rangle \end{equation*} satisfies \eqref{as1}-\eqref{as9}, $H(w),\partial_{w}H(w)$ are T\"{o}plitz at $\infty$ and $\mathcal{NF}_{\Delta}$ for all $w\in U$. Let $U' \subset U$ satisfy \eqref{sd1}-\eqref{sd4}. For all $w\in U'$, \begin{equation*} h_{j}=a_{j}(w)+\langle \chi_{j}(w),r\rangle+\frac{1}{2}\langle\zeta,H_{j}(w)\zeta\rangle, \end{equation*} \begin{equation*} f_{j}=f_{j}^{low}+f_{j}^{high} \end{equation*} satisfy \begin{equation}\label{fi1} |||X_{f_{j}^{low}}|||^{T}_{p,D(\rho_{j},\mu_{j},\sigma_{j})\times U'}\leq\beta^{j}\varepsilon_{j}, \ |||X_{f_{j}^{high}}|||^{T}_{p,D(\rho_{j},\mu_{j},\sigma_{j})\times U'}\leq1, \end{equation} \begin{equation}\label{fi2} [f_{j}^{low}]_{\Lambda_{j},\gamma_{j},\sigma_{j};U',\rho_{j},\mu_{j}}\leq\beta^{j}\varepsilon_{j}, \ [f_{j}^{high}]_{\Lambda_{j},\gamma_{j},\sigma_{j};U',\rho_{j},\mu_{j}}\leq 1 \end{equation} for some \begin{equation*} \beta\preceq \max\left(\frac{1}{\gamma-\gamma_{+}},\frac{1}{\rho-\rho_{+}},\Delta,\Lambda,\log\frac{1}{\varepsilon}\right)^{\exp_1}.
\end{equation*} Then there exists an exponent $\exp_2$ such that if \begin{equation*} \varepsilon\preceq\kappa^{20}\min\left(\gamma-\gamma_{+},\rho-\rho_{+},\frac{1}{\Delta},\frac{1}{\Lambda},\frac{1}{\log\frac{1}{\varepsilon}}\right)^{\exp_2}, \end{equation*} then for all $w\in U'$, there is a real analytic symplectic map $\Phi_{j}$ such that \begin{equation*} (h+h_1+\cdots+h_{j}+f_{j})\circ\Phi_{j}=h+h_1+\cdots+h_{j+1}+f_{j+1}, \end{equation*} with the estimates \begin{equation}\label{fi3} |||X_{f^{low}_{j+1}}|||^{T}_{p,D(\rho_{j+1},\mu_{j+1},\sigma_{j+1})\times U'} \preceq\beta^{j+1}\varepsilon_{j+1}, \end{equation} \begin{equation}\label{fi4} |||X_{f^{high}_{j+1}}|||^{T}_{p,D(\rho_{j+1},\mu_{j+1},\sigma_{j+1})\times U'} \preceq 1+\frac{1}{\kappa^{6}}\beta^{j+1}\varepsilon_{j}+\beta^{j+1}\varepsilon_{j+1}, \end{equation} \begin{equation}\label{fi5} [f^{low}_{j+1}]_{\Lambda_{j+1},\gamma_{j+1},\sigma_{j+1};U',\rho_{j+1},\mu_{j+1}} \preceq\beta^{j+1}\varepsilon_{j+1}, \end{equation} \begin{equation}\label{fi6} [f^{high}_{j+1}]_{\Lambda_{j+1},\gamma_{j+1},\sigma_{j+1};U',\rho_{j+1},\mu_{j+1}} \preceq 1+\frac{1}{\kappa^{7}}\beta^{j+1}\varepsilon_{j}+\beta^{j+1}\varepsilon_{j+1}, \end{equation} where the exponents $\exp_1$, $\exp_2$ depend on $d, \# \mathcal{A}, p$. 
\end{lem} \noindent\textbf{Proof.}~ By Proposition \ref{p1}, we can solve the homological equation \begin{equation}\label{a1} \{h,s_{j}\}=-\mathcal{T}_{\Delta'}f_{j}^{low}-\mathcal{T}_{\Delta'}\{f_{j}^{high},s_{j}\}^{low}+h_{j+1} \end{equation} with the estimates \begin{equation}\label{a2} \begin{aligned} & [s_{j}]_{\left\{\substack{\Lambda_{j}+d_\Delta+2,\gamma_{j},\sigma_{j}^{(1)};\\ U',\rho_{j}^{(1)},\mu_{j}^{(1)}}\right\}} \preceq\frac{1}{\kappa^{7}}(\Delta\Delta')^{\exp}\bm{\delta_{1}}\varepsilon_{j},\\ & [h_{j+1}]_{\left\{\substack{\Lambda_{j}+d_\Delta+2,\gamma_{j},\sigma_{j}^{(1)};\\U',\rho_{j}^{(1)},\mu_{j}^{(1)}}\right\}} \preceq\frac{1}{\kappa^{4}}(\Delta\Delta')^{\exp}\bm{\delta_{1}}\varepsilon_{j}, \end{aligned} \end{equation} where \begin{equation*} \bm{\delta_{1}}= \frac{1}{\rho_{j}-\rho_{j}^{(1)}} \left(\frac{1}{\sigma_{j}-\sigma_{j}^{(1)}}\frac{1}{\sigma_{j}}+\frac{1}{\rho_{j}-\rho_{j}^{(1)}}\frac{1}{\mu_{j}-\mu_{j}^{(1)}}\right)\frac{1}{\mu_{j}} \beta^{j}. \end{equation*} \bigskip \textbf{Step 1: computation of $f_{j+1}$.} In this step, we compute the new perturbation $f_{j+1}= f_{j+1}^{low}+f_{j+1}^{high}$. Using Taylor's formula, by the homological equation (\ref{a1}), we obtain \begin{equation}\label{a4} \begin{aligned} (h+h_1+\cdots&+h_{j}+f_{j})\circ X^{t}_{s_{j}}\mid_{t=1} =h+h_{1}+\cdots+h_{j+1}\\ &+(1-\mathcal{T}_{\Delta'})f_{j}^{low}+f_{j}^{high}+(1-\mathcal{T}_{\Delta'})\{f_{j}^{high},s_{j}\}^{low}+\{f_{j}^{high},s_{j}\}^{high} \\ &+\int_{0}^{1}(1-t)\{\{h,s_{j}\},s_{j}\}\circ X^{t}_{s_{j}}dt+\int_{0}^{1}\{h_1+\cdots+h_{j},s_{j}\}\circ X^{t}_{s_{j}}dt \\ &+\int_{0}^{1}\{f_{j}^{low},s_{j}\}\circ X^{t}_{s_{j}}dt+\int_{0}^{1}(1-t)\{\{f_{j}^{high},s_{j}\},s_{j}\}\circ X^{t}_{s_{j}}dt. \end{aligned} \end{equation} Let $\Phi_{j}= X^{t}_{s_{j}}\mid_{t=1}$ and \begin{equation*} (h+h_1+\cdots+h_{j}+f_{j})\circ X^{t}_{s_{j}}\mid_{t=1} =h+h_{1}+\cdots+h_{j+1}+f_{j+1}. 
\end{equation*} Then we get from (\ref{a1}) and (\ref{a4}) that \begin{equation*}\label{a20} \begin{aligned} f_{j+1}=& (1-\mathcal{T}_{\Delta'})f_{j}^{low}+f_{j}^{high}+(1-\mathcal{T}_{\Delta'})\{f_{j}^{high},s_{j}\}^{low}+\{f_{j}^{high},s_{j}\}^{high}\\ &+\int_{0}^{1}(1-t)\{-\mathcal{T}_{\Delta'}f_{j}^{low},s_{j}\}\circ X^{t}_{s_{j}}dt +\int_{0}^{1}(1-t)\{-\mathcal{T}_{\Delta'}\{f_{j}^{high},s_{j}\}^{low},s_{j}\}\circ X^{t}_{s_{j}}dt\\ &+\int_{0}^{1}\{h_1+\cdots+h_{j}+(1-t)h_{j+1},s_{j}\}\circ X^{t}_{s_{j}}dt +\int_{0}^{1}\{f_{j}^{low},s_{j}\}\circ X^{t}_{s_{j}}dt\\ &+\int_{0}^{1}(1-t)\{\{f_{j}^{high},s_{j}\},s_{j}\}\circ X^{t}_{s_{j}}dt. \end{aligned} \end{equation*} As a result, there is \begin{align*} f^{low}_{j+1}=&(1-\mathcal{T}_{\Delta'})f_{j}^{low}+(1-\mathcal{T}_{\Delta'}) \{f_{j}^{high},s_{j}\}^{low}+\left(\int_{0}^{1}\{f_{j}^{low},s_{j}\}\circ X^{t}_{s_{j}}dt\right)^{low}\\ &+\left(\int_{0}^{1}(1-t)\{-\mathcal{T}_{\Delta'}f_{j}^{low},s_{j}\}\circ X^{t}_{s_{j}}dt\right)^{low}\\ &+\left(\int_{0}^{1}(1-t)\{-\mathcal{T}_{\Delta'}\{f_{j}^{high},s_{j}\}^{low},s_{j}\}\circ X^{t}_{s_{j}}dt\right)^{low} \\ &+\left(\int_{0}^{1}(1-t)\{\{f_{j}^{high},s_{j}\},s_{j}\}\circ X^{t}_{s_{j}}dt\right)^{low}\\ &+\left(\int_{0}^{1}\{h_1+\cdots+h_{j}+(1-t)h_{j+1},s_{j}\}\circ X^{t}_{s_{j}}dt\right)^{low}, \end{align*} and \begin{align*} f^{high}_{j+1}=&f_{j}^{high}+\{f_{j}^{high},s_{j}\}^{high} +\left(\int_{0}^{1}(1-t)\{-\mathcal{T}_{\Delta'}f_{j}^{low},s_{j}\}\circ X^{t}_{s_{j}}dt\right)^{high}\\ &+\left(\int_{0}^{1}(1-t)\{-\mathcal{T}_{\Delta'}\{f_{j}^{high},s_{j}\}^{low},s_{j}\}\circ X^{t}_{s_{j}}dt\right)^{high}\\ &+\left(\int_{0}^{1}(1-t)\{\{f_{j}^{high},s_{j}\},s_{j}\}\circ X^{t}_{s_{j}}dt\right)^{high}\\ &+\left(\int_{0}^{1}\{f_{j}^{low},s_{j}\}\circ X^{t}_{s_{j}}dt\right)^{high}\\ &+\left(\int_{0}^{1}\{h_1+\cdots+h_{j}+(1-t)h_{j+1},s_{j}\}\circ X^{t}_{s_{j}}dt\right)^{high}. 
\end{align*} \bigskip \textbf{Step 2: estimates of $f_{j+1}^{low}$ and $f_{j+1}^{high}$.} In this part, we verify the estimates \eqref{fi5} and \eqref{fi6}. The various estimates of the Poisson brackets are based on \eqref{a2} and \cite[Proposition 3.3]{EK}. To begin with, we see from (\ref{fi2}) that \begin{equation}\label{a5} [(1-\mathcal{T}_{\Delta'})f_{j}^{low}]_{\left\{\substack{\Lambda_{j},\gamma_{j}^{(1)},\sigma_{j};\\U',\rho_{j}^{(1)},\mu_{j}}\right\}}\preceq \left[\left(\frac{1}{\rho_{j}-\rho_{j}^{(1)}}\right)^{\#\mathcal{A}}e^{-\frac{1}{2}(\rho_{j}-\rho_{j}^{(1)})\Delta'}+e^{-(\gamma_{j}-\gamma_{j}^{(1)})\Delta'}\right]\beta^{j}\varepsilon_{j}. \end{equation} By (\ref{es101}), there is \begin{equation}\label{a6} [\{f_{j}^{high},s_{j}\}^{low}]_{\left\{\substack{\Lambda_{j},\gamma_{j},\sigma_{j}^{(1)};\\U',\rho_{j}^{(1)},\mu_{j}^{(1)}}\right\}} \preceq\frac{1}{\kappa^{4}}(\Delta\Delta')^{\exp}\bm{\delta_{1}}\varepsilon_{j}, \end{equation} and thus \begin{equation}\label{a7} \begin{aligned} &\quad [(1-\mathcal{T}_{\Delta'}) \{f_{j}^{high},s_{j}\}^{low}]_{\left\{\substack{\Lambda_{j},\gamma^{(1)}_{j},\sigma_{j}^{(1)};\\U',\rho_{j}^{(2)},\mu_{j}^{(1)}}\right\}}\\ \preceq & \left[\left(\frac{1}{\rho_{j}^{(1)}-\rho_{j}^{(2)}}\right)^{\#\mathcal{A}}e^{-\frac{1}{2}(\rho_{j}^{(1)}-\rho_{j}^{(2)})\Delta'}+e^{-(\gamma_{j}-\gamma_{j}^{(1)})\Delta'}\right] \frac{1}{\kappa^{4}}(\Delta\Delta')^{\exp}\bm{\delta_{1}} \varepsilon_{j}. \end{aligned} \end{equation} Let \begin{equation*} \bm{\delta_{2}}= (\Lambda_{j}+d_\Delta+2)^{2}\left(\frac{1}{\gamma_{j}-\gamma_{j}^{(1)}}\right)^{d+1}\frac{1}{\sigma_{j}^{(1)}-\sigma_{j}^{(2)}}\frac{1}{\sigma_{j}^{(1)}} +\frac{1}{\rho_{j}^{(1)}-\rho_{j}^{(2)}}\frac{1}{\mu_{j}^{(1)}-\mu_{j}^{(2)}}.
\end{equation*} By (\ref{fi2}), (\ref{a2}) and \cite[ Proposition 3.3]{EK}, we have \begin{equation}\label{a8} [\{f_{j}^{high},s_{j}\}]_{\left\{\substack{\Lambda_{j}+d_\Delta+5,\gamma_{j}^{(1)},\sigma_{j}^{(2)};\\ U',\rho_{j}^{(2)},\mu_{j}^{(2)}}\right\}}\preceq \bm{\delta_{2}} \frac{1}{\kappa^{7}}(\Delta\Delta')^{\exp} \bm{\delta_{1}} \varepsilon_{j}, \end{equation} and \begin{equation*}\label{a9} [\{f_{j}^{low},s_{j}\}]_{\left\{\substack{\Lambda_{j}+d_\Delta+5,\gamma_{j}^{(1)},\sigma_{j}^{(2)};\\U',\rho_{j}^{(2)},\mu_{j}^{(2)}}\right\}}\preceq \bm{\delta_{2}} \frac{1}{\kappa^{7}}(\Delta\Delta')^{\exp} \bm{\delta_{1}} \beta^{j} \varepsilon_{j}^{2}. \end{equation*} Moreover, we obtain from (\ref{a2}), (\ref{a6}) and \cite[Proposition 3.3]{EK} that \begin{equation}\label{a10} [\{\{f_{j}^{high},s_{j}\}^{low},s_{j}\}]_{\left\{\substack{\Lambda_{j}+d_\Delta+5,\gamma_{j}^{(1)},\sigma_{j}^{(2)}; \\U',\rho_{j}^{(2)},\mu_{j}^{(2)} }\right\}}\preceq \bm{\delta_{2}} \frac{1}{\kappa^{11}}(\Delta\Delta')^{\exp} \bm{\delta_{1}}^{2} \varepsilon_{j}^{2}. \end{equation} Let \begin{equation*} \bm{\delta_{3}}= (\Lambda_{j}+d_\Delta+5)^{2}\left(\frac{1}{\gamma_{j}^{(1)}-\gamma_{j}^{(2)}}\right)^{d+1}\frac{1}{\sigma_{j}^{(2)}-\sigma_{j}^{(3)}}\frac{1}{\sigma_{j}^{(2)}} +\frac{1}{\rho_{j}^{(2)}-\rho_{j}^{(3)}}\frac{1}{\mu_{j}^{(2)}-\mu_{j}^{(3)}}. 
\end{equation*} By (\ref{a2}), (\ref{a8}), using Proposition 3.3 in \cite{EK}, we have \begin{equation*}\label{a11} [\{\{f_{j}^{high},s_{j}\},s_{j}\}]_{\left\{\substack{\Lambda_{j}+d_\Delta+8,\gamma_{j}^{(2)},\sigma_{j}^{(3)};\\ U',\rho_{j}^{(3)},\mu_{j}^{(3)}}\right\}} \preceq \bm{\delta_{3} \delta_{2}} \frac{1}{\kappa^{14}} \bm{\delta_{1}}^{2} \varepsilon_{j}^{2}, \end{equation*} and \begin{equation*} \begin{aligned} & [ \{h_{i+1},s_{j}\}]_{\left\{\substack{\Lambda_{j}+d_\Delta+5,\gamma_{j}^{(1)},\sigma_{j}^{(2)};\\U',\rho_{j}^{(2)},\mu_{j}^{(2)}}\right\}}\\ \preceq&\left[(\Lambda_{j}+d_\Delta+2)^{2}\left(\frac{1}{\gamma_{j}-\gamma_{j}^{(1)}}\right)^{d+p}\frac{\sigma_{j}^{(1)}}{\sigma_{j}^{(1)}-\sigma_{j}^{(2)}}\frac{1}{(\sigma_{i}^{(1)})^{2}} +\frac{1}{\rho_{j}^{(1)}-\rho_{j}^{(2)}}\frac{1}{\mu_{i}^{(1)}}\right]\\ &\times\frac{1}{\kappa^{11}}(\Delta\Delta')^{\exp}\bm{\delta_{1}} \cdot \frac{1}{\rho_{i}-\rho_{i}^{(1)}}\left(\frac{1}{\sigma_{i}-\sigma_{i}^{(1)}}\frac{1}{\sigma_{i}}+\frac{1}{\rho_{i}-\rho_{i}^{(1)}}\frac{1}{\mu_{i}-\mu_{i}^{(1)}}\right)\frac{1}{\mu_{i}} \beta^{i}\varepsilon_{i}\varepsilon_{j}. \end{aligned} \end{equation*} Take \begin{equation*} \begin{aligned} &\rho_{j}^{(l)}=\rho_{j}-\frac{l}{4}(\rho_{j}-\rho_{j+1}), \quad \gamma_{j}^{(l)}=\gamma_{j}-\frac{l}{4}(\gamma_{j}-\gamma_{j+1}), \\ &\sigma_{j}^{(l)}=\sigma_{j}-\frac{l}{4}(\sigma_{j}-\sigma_{j+1}),\quad \mu_{j}^{(l)}=(\sigma_{j}^{(l)})^{2}, \end{aligned} \end{equation*} where $l=1,2,3,4$. By (\ref{a5}), we have \begin{equation}\label{a13} [(1-\mathcal{T}_{\Delta'})f_{j}^{low}]_{\Lambda_{j},\gamma_{j}^{(1)},\sigma_{j};U',\rho_{j}^{(1)},\mu_{j}}\preceq \beta\varepsilon\beta^{j}\varepsilon_{j}\preceq\beta^{j+1}\varepsilon_{j+1}. 
\end{equation} By (\ref{a7}), we have \begin{equation*}\label{a14} \begin{aligned} & [(1-\mathcal{T}_{\Delta'})\{f_{j}^{high},s_{j}\}^{low}]_{\Lambda_{j},\gamma^{(1)}_{j},\sigma_{j}^{(1)};U',\rho_{j}^{(2)},\mu_{j}^{(1)}} \preceq\frac{1}{\kappa^{4}}\beta\varepsilon\beta^{j}\varepsilon_{j}\preceq\beta^{j+1}\varepsilon_{j+1},\\ & [\{f_{j}^{high},s_{j}\}]_{\Lambda_{j}+d_\Delta+5,\gamma_{j}^{(1)},\sigma_{j}^{(2)};U',\rho_{j}^{(2)},\mu_{j}^{(2)}} \preceq\frac{1}{\kappa^{7}}\beta\beta^{j}\varepsilon_{j}\preceq\frac{1}{\kappa^{7}}\beta^{j+1}\varepsilon_{j},\\ & [\{f_{j}^{low},s_{j}\}]_{\Lambda_{j}+d_\Delta+5,\gamma_{j}^{(1)},\sigma_{j}^{(2)};U',\rho_{j}^{(2)},\mu_{j}^{(2)}} \preceq\frac{1}{(\Lambda_{j}+d_\Delta+5)^{14}}\beta^{j+1}\varepsilon_{j+1},\\ & [\{\{f_{j}^{high},s_{j}\}^{low},s_{j}\}]_{\Lambda_{j}+d_\Delta+5,\gamma_{j}^{(1)},\sigma_{j}^{(2)};U',\rho_{j}^{(2)},\mu_{j}^{(2)}} \preceq\frac{1}{(\Lambda_{j}+d_\Delta+5)^{14}}\beta^{j+1}\varepsilon_{j+1},\\ & [\{\{f_{j}^{high},s_{j}\},s_{j}\}]_{\Lambda_{j}+d_\Delta+8,\gamma_{j}^{(2)},\sigma_{j}^{(3)};U',\rho_{j}^{(3)},\mu_{j}^{(3)}} \preceq\frac{1}{(\Lambda_{j}+d_\Delta+8)^{14}}\beta^{j+1}\varepsilon_{j+1},\\ & [\{h_1+\cdots+h_{j}+(1-t)h_{j+1},s_{j}\}]_{\Lambda_{j}+d_\Delta+5,\gamma_{j}^{(1)},\sigma_{j}^{(2)};U',\rho_{j}^{(2)},\mu_{j}^{(2)}} \preceq\frac{1}{(\Lambda_{j}+d_\Delta+5)^{14}}\beta^{j+1}\varepsilon_{j+1}, \end{aligned} \end{equation*} which imply the following properties \begin{equation}\label{a23} \begin{aligned} &\left[\int_{0}^{1}(1-t)\{-\mathcal{T}_{\Delta'}f_{j}^{low},s_{j}\}\circ X^{t}_{s_{j}}dt\right]_{\Lambda_{j+1},\gamma_{j+1},\sigma_{j+1};U',\rho_{j+1},\mu_{j+1}} \preceq\beta^{j+1}\varepsilon_{j+1},\\ &\left[\int_{0}^{1}\{f_{j}^{low},s_{j}\}\circ X^{t}_{s_{j}}dt\right]_{\Lambda_{j+1},\gamma_{j+1},\sigma_{j+1};U',\rho_{j+1},\mu_{j+1}} \preceq\beta^{j+1}\varepsilon_{j+1},\\ &\left[\int_{0}^{1}(1-t)\{-\mathcal{T}_{\Delta'}\{f_{j}^{high},s_{j}\}^{low},s_{j}\}\circ 
X^{t}_{s_{j}}dt\right]_{\Lambda_{j+1},\gamma_{j+1},\sigma_{j+1};U',\rho_{j+1},\mu_{j+1}} \preceq\beta^{j+1}\varepsilon_{j+1},\\ &\left[\int_{0}^{1}\{h_1+\cdots+h_{j}+(1-t)h_{j+1},s_{j}\}\circ X^{t}_{s_{j}}dt\right]_{\Lambda_{j+1},\gamma_{j+1},\sigma_{j+1};U',\rho_{j+1},\mu_{j+1}} \preceq\beta^{j+1}\varepsilon_{j+1},\\ &\left[\int_{0}^{1}(1-t)\{\{f_{j}^{high},s_{j}\},s_{j}\}\circ X^{t}_{s_{j}}dt\right]_{\Lambda_{j+1},\gamma_{j+1},\sigma_{j+1};U',\rho_{j+1},\mu_{j+1}} \preceq\beta^{j+1}\varepsilon_{j+1}. \end{aligned} \end{equation} Then from the definition of $f_{j+1}^{low}$ and $f_{j+1}^{high}$, we get the desired estimates \begin{equation}\label{a28} [f^{low}_{j+1}]_{\Lambda_{j+1},\gamma_{j+1},\sigma_{j+1};U',\rho_{j+1},\mu_{j+1}} \preceq\beta^{j+1}\varepsilon_{j+1}, \end{equation} \begin{equation*}\label{a29} [f^{high}_{j+1}]_{\Lambda_{j+1},\gamma_{j+1},\sigma_{j+1};U',\rho_{j+1},\mu_{j+1}} \preceq 1+\frac{1}{\kappa^{7}}\beta^{j+1}\varepsilon_{j}+\beta^{j+1}\varepsilon_{j+1}. \end{equation*} \bigskip \textbf{Step 3: estimates of the vector fields $X_{f_{j+1}^{low}}$ and $X_{f_{j+1}^{high}}$.} By (\ref{es75}) and (\ref{es76}), we have \begin{equation}\label{a30} \begin{aligned} & |||X_{s_{j}}|||^{T}_{p,D(\rho_{j}^{(1)},\mu_{j}^{(1)},\sigma_{j}^{(1)})\times U'}\preceq\frac{1}{\kappa^{6}}\Delta^{\exp}\left(\frac{1}{\rho_{j}-\rho_{j}^{(1)}}\right)^{5}\beta^{j}\varepsilon_{j},\\ & |||X_{h_{j+1}}|||^{T}_{p,D(\rho_{j}^{(1)},\mu_{j}^{(1)},\sigma_{j}^{(1)})\times U'}\preceq\frac{1}{\kappa^{4}}\Delta^{\exp}\left(\frac{1}{\rho_{j}-\rho_{j}^{(1)}}\right)^{4}\beta^{j}\varepsilon_{j}. 
\end{aligned} \end{equation} Using (\ref{es90}), we have \begin{equation}\label{a32} \begin{aligned} |||X_{(1-\mathcal{T}_{\Delta'})f_{j}^{low}}&|||^{T}_{p,D(\rho_{j}^{(1)},\mu_{j}^{(1)},\sigma_{j}^{(1)})\times U'} \preceq \left(\frac{1}{\rho_{j}-\rho_{j}^{(1)}}\right)^{\# \mathcal{A}}e^{-\frac{1}{2}(\rho_{j}-\rho_{j}^{(1)})\Delta'} \beta^{j}\varepsilon_{j}\\ &\qquad+\frac{1}{\gamma_{j}^{d+p}}\left(\frac{1}{\rho_{j}-\rho_{j}^{(1)}}\right)^{\# \mathcal{A}+1}\frac{1}{\sigma_{j}^{2}}e^{-\frac{1}{2}\gamma_{j}\Delta'}\beta^{j}\varepsilon_{j}\preceq\beta^{j+1}\varepsilon_{j+1}. \end{aligned} \end{equation} By (\ref{es103}) and (\ref{es104}), we have \begin{equation}\label{a33} |||X_{\{f_{j}^{high},s_{j}\}^{low}}|||^{T}_{p,D(\rho_{j}^{(1)},\mu_{j}^{(1)},\sigma_{j}^{(1)})\times U'} \preceq\frac{1}{\kappa^{4}}\Delta^{\exp}\left(\frac{1}{\rho_{j}-\rho_{j}^{(1)}}\right)^{4}\beta^{j}\varepsilon_{j}, \end{equation} \begin{equation}\label{a34} \textrm{and}\qquad |||X_{(1-\mathcal{T}_{\Delta'})\{f_{j}^{high},s_{j}\}^{low}}|||^{T}_{p,D(\rho_{j}^{(1)},\mu_{j}^{(1)},\sigma_{j}^{(1)})\times U'} \end{equation} \begin{equation*} \preceq\frac{1}{\kappa^{4}}(\Delta\Delta')^{\exp}\left(\frac{1}{\rho_{j}-\rho_{j}^{(1)}}\right)^{\# \mathcal{A}+4}\left[e^{-\frac{1}{2}(\rho_{j}-\rho_{j}^{(1)})\Delta'} +\frac{1}{\gamma_{j}^{d+p}}\frac{1}{\sigma_{j}^{6}}e^{-\frac{1}{2}\gamma_{j}\Delta'}\right]\beta^{j}\varepsilon_{j}\preceq\beta^{j+1}\varepsilon_{j+1}. 
\end{equation*} By (\ref{fi1}), (\ref{a30}) and Proposition \ref{p}, we have \begin{equation}\label{a35} |||X_{\{f_{j}^{low},s_{j}\}}|||^{T}_{p,D(\rho_{j}^{(2)},\mu_{j}^{(2)},\sigma_{j}^{(2)})\times U'} \preceq\frac{1}{\rho_{j}^{(1)}-\rho_{j}^{(2)}}\frac{1}{\kappa^{6}}\Delta^{\exp}\left(\frac{1}{\rho_{j}-\rho_{j}^{(1)}}\right)^{5}\beta^{2j}\varepsilon_{j}^{2}, \end{equation} \begin{equation}\label{a36} |||X_{\{f_{j}^{high},s_{j}\}}|||^{T}_{p,D(\rho_{j}^{(2)},\mu_{j}^{(2)},\sigma_{j}^{(2)})\times U'} \preceq\frac{1}{\rho_{j}^{(1)}-\rho_{j}^{(2)}}\frac{1}{\kappa^{6}}\Delta^{\exp}\left(\frac{1}{\rho_{j}-\rho_{j}^{(1)}}\right)^{5}\beta^{j}\varepsilon_{j}. \end{equation} Using (\ref{es105}), we have \begin{equation}\label{a37} |||X_{\{h,s_{j}\}}|||^{T}_{p,D(\rho_{j}^{(1)},\mu_{j}^{(1)},\sigma_{j}^{(1)})\times U'} \preceq\frac{1}{\kappa^{4}}\Delta^{\exp}\left(\frac{1}{\rho_{j}-\rho_{j}^{(1)}}\right)^{4}\beta^{j}\varepsilon_{j}. \end{equation} By (\ref{a30}), (\ref{a36}), (\ref{a37}), using Proposition \ref{p}, we have \begin{equation}\label{a38} \begin{aligned} & |||X_{\{\{h,s_{j}\},s_j\}}|||^{T}_{p,D(\rho_{j}^{(2)},\mu_{j}^{(2)},\sigma_{j}^{(2)})\times U'} \preceq\frac{\Delta^{\exp} \beta^{2j}\varepsilon_{j}^{2}}{(\rho_{j}^{(1)}-\rho_{j}^{(2)})\cdot \kappa^{10}} \left(\frac{1}{\rho_{j}-\rho_{j}^{(1)}}\right)^{9},\\ & |||X_{\{\{f_{j}^{high},s_{j}\},s_j\}}|||^{T}_{p,D(\rho_{j}^{(3)},\mu_{j}^{(3)},\sigma_{j}^{(3)})\times U'} \preceq \frac{\Delta^{\exp} \beta^{2j} \varepsilon_{j}^{2}}{(\rho_{j}^{(2)}-\rho_{j}^{(3)})(\rho_{j}^{(1)}-\rho_{j}^{(2)}) \kappa^{12}} \left(\frac{1}{\rho_{j}-\rho_{j}^{(1)}}\right)^{10},\\ &|||X_{\{h_{i+1},s_j\}}|||^{T}_{p,D(\rho_{j}^{(2)},\mu_{j}^{(2)},\sigma_{j}^{(2)})\times U'} \preceq\frac{\Delta^{\exp} \beta^{i+j}\varepsilon_{i}\varepsilon_{j}}{(\rho_{j}^{(1)}-\rho_{j}^{(2)})\kappa^{10}} \left(\frac{1}{\rho_{j}-\rho_{j}^{(1)}}\right)^{5} \left(\frac{1}{\rho_{i}-\rho_{i}^{(1)}}\right)^{4}. 
\end{aligned} \end{equation} By (\ref{a35}) and \eqref{a38}, we see from \cite[Theorem 3.3]{CLY} that \begin{equation}\label{a44} \begin{aligned} & |||X_{\int_{0}^{1}(1-t)\{\{h,s_{j}\},s_{j}\}\circ X^{t}_{s_{j}}dt}|||^{T}_{p,D(\rho_{j+1},\mu_{j+1},\sigma_{j+1})\times U'} \preceq\beta^{j+1}\varepsilon_{j+1},\\ & |||X_{\int_{0}^{1}\{h_1+\cdots+h_{j},s_{j}\}\circ X^{t}_{s_{j}}dt}|||^{T}_{p,D(\rho_{j+1},\mu_{j+1},\sigma_{j+1})\times U'} \preceq\beta^{j+1}\varepsilon_{j+1},\\ & |||X_{\int_{0}^{1}\{f_{j}^{low},s_{j}\}\circ X^{t}_{s_{j}}dt}|||^{T}_{p,D(\rho_{j+1},\mu_{j+1},\sigma_{j+1})\times U'} \preceq\beta^{j+1}\varepsilon_{j+1},\\ & |||X_{\int_{0}^{1}(1-t)\{\{f_{j}^{high},s_{j}\},s_{j}\}\circ X^{t}_{s_{j}}dt}|||^{T}_{p,D(\rho_{j+1},\mu_{j+1},\sigma_{j+1})\times U'} \preceq\beta^{j+1}\varepsilon_{j+1}. \end{aligned} \end{equation} Finally, we obtain from the definitions of $f^{low}_{j+1}$ and $f^{high}_{j+1}$ that \begin{equation*}\label{a48} |||X_{f^{low}_{j+1}}|||^{T}_{p,D(\rho_{j+1},\mu_{j+1},\sigma_{j+1})\times U'} \preceq\beta^{j+1}\varepsilon_{j+1}, \end{equation*} \begin{equation*}\label{a49} |||X_{f^{high}_{j+1}}|||^{T}_{p,D(\rho_{j+1},\mu_{j+1},\sigma_{j+1})\times U'} \preceq 1+\frac{1}{\kappa^{6}}\beta^{j+1}\varepsilon_{j}+\beta^{j+1}\varepsilon_{j+1}. \end{equation*} This completes the proof of Lemma \ref{l1}. \qed \subsection{The KAM iterative lemma}\label{sec 4.2} Assume $\rho=\sigma$, $\mu=\sigma^{2}$, $d_{\Delta}\gamma\leq1$. 
For $m\geq 0$, let \begin{equation*} \varepsilon_{m}=e^{-\frac{1}{20}(\log\frac{1}{\varepsilon_{m-1}})^{2}}, \ \varepsilon_{0}=\varepsilon, \end{equation*} \begin{equation*} \vartheta_{m}=\frac{\sum_{j=1}^{m}\frac{1}{j^{2}}}{2\sum_{j=1}^{\infty}\frac{1}{j^{2}}}, \ \vartheta_{0}=0, \end{equation*} \begin{equation*} \rho_{m}=(1-\vartheta_{m})\rho, \ \rho_{0}=\rho, \end{equation*} \begin{equation*} \sigma_{m}=(1-\vartheta_{m})\sigma, \ \sigma_{0}=\sigma, \end{equation*} \begin{equation*} \mu_{m}=\sigma_{m}^{2}, \ \mu_{0}=\mu, \end{equation*} \begin{equation*} \gamma_{m}=d^{-1}_{\Delta_{m}}, \ \gamma_{0}=\min(\gamma,d^{-1}_{\Delta}), \end{equation*} \begin{equation*} \Delta_{m}=80(\log\frac{1}{\varepsilon_{m-1}})^{2}\frac{1}{\min(\gamma_{m-1},\rho_{m-1}-\rho_{m})}, \ \Delta_{0}=\Delta, \end{equation*} \begin{equation*} \Lambda_{m}=\mathrm{cte}.d^{2}_{\Delta_{m}}, \end{equation*} where the constant $\mathrm{cte}$. is the one in Proposition 6.7 in \cite{EK}. We have the following KAM iterative lemma. \begin{lem}\label{l2} For $m\geq0$, consider the Hamiltonian $h_{m}+f_{m}$, where \begin{equation*} h_{m}=\langle \omega_{m}(w),r\rangle+\frac{1}{2}\langle\zeta,(\Omega(w)+H_{m}(w))\zeta\rangle, \end{equation*} $H_{m}(w),\partial_{w}H_{m}(w)$ are T\"{o}plitz at $\infty$ and $\mathcal{NF}_{\Delta_{m}}$ for all $w\in U_{m}$, \begin{equation*} f_{m}=f_{m}^{low}+f_{m}^{high} \end{equation*} satisfy \begin{equation*}\label{ii1} |||X_{f_{m}^{low}}|||^{T}_{p,D(\rho_{m},\mu_{m},\sigma_{m})\times U_{m}}\leq\varepsilon_{m}, \ |||X_{f_{m}^{high}}|||^{T}_{p,D(\rho_{m},\mu_{m},\sigma_{m})\times U_{m}}\leq \varepsilon+\sum_{j=1}^{m}\varepsilon_{j-1}^{\frac{2}{3}}, \end{equation*} \begin{equation*}\label{ii2} [f_{m}^{low}]_{\Lambda_{m},\gamma_{m},\sigma_{m};U_{m},\rho_{m},\mu_{m}}\leq\varepsilon_{m}, \ [f_{m}^{high}]_{\Lambda_{m},\gamma_{m},\sigma_{m};U_{m},\rho_{m},\mu_{m}}\leq \varepsilon+\sum_{j=1}^{m}\varepsilon_{j-1}^{\frac{2}{3}}. 
\end{equation*} Assume for all $w\in U_{m}$, \begin{equation*}\label{ii3} |\omega_{m}(w)-\omega(w)|+|\partial_{w}(\omega_{m}(w)-\omega(w))|\leq\sum_{j=1}^{m}\varepsilon_{j-1}^{\frac{2}{3}}, \end{equation*} \begin{equation*}\label{ii4} \|H_{m}-H\|_{U_{m}}+\langle H_{m}-H\rangle_{\Lambda_{m};U_{m}}\leq\sum_{j=1}^{m}\varepsilon_{j-1}^{\frac{2}{3}}. \end{equation*} Then there is a subset $U_{m+1}\subset U_{m}$ such that if \begin{equation*} \varepsilon\preceq\min\left(\gamma,\rho,\frac{1}{\Delta},\frac{1}{\Lambda}\right)^{\exp}, \end{equation*} then for all $w\in U_{m+1}$, there is a real analytic symplectic map $\Phi_{m}$ such that \begin{equation*} (h_{m}+f_{m})\circ\Phi_{m}=h_{m+1}+f_{m+1} \end{equation*} with the estimates \begin{equation*}\label{ii5} |||X_{f^{low}_{m+1}}|||^{T}_{p,D(\rho_{m+1},\mu_{m+1},\sigma_{m+1})\times U_{m+1}} \leq\varepsilon_{m+1}, \end{equation*} \begin{equation*}\label{ii6} |||X_{f^{high}_{m+1}}|||^{T}_{p,D(\rho_{m+1},\mu_{m+1},\sigma_{m+1})\times U_{m+1}} \leq \varepsilon+\sum_{j=1}^{m+1}\varepsilon_{j-1}^{\frac{2}{3}}, \end{equation*} \begin{equation*}\label{ii7} [f^{low}_{m+1}]_{\Lambda_{m+1},\gamma_{m+1},\sigma_{m+1};U_{m+1},\rho_{m+1},\mu_{m+1}} \leq\varepsilon_{m+1}, \end{equation*} \begin{equation*}\label{ii8} [f^{high}_{m+1}]_{\Lambda_{m+1},\gamma_{m+1},\sigma_{m+1};U_{m+1},\rho_{m+1},\mu_{m+1}} \leq \varepsilon+\sum_{j=1}^{m+1}\varepsilon_{j-1}^{\frac{2}{3}}, \end{equation*} \begin{equation*}\label{ii9} |\omega_{m+1}(w)-\omega(w)|+|\partial_{w}(\omega_{m+1}(w)-\omega(w))|\leq\sum_{j=1}^{m+1}\varepsilon_{j-1}^{\frac{2}{3}}, \end{equation*} \begin{equation*}\label{ii10} \|H_{m+1}-H\|_{U_{m+1}}+\langle H_{m+1}-H\rangle_{\Lambda_{m+1};U_{m+1}}\leq\sum_{j=1}^{m+1}\varepsilon_{j-1}^{\frac{2}{3}}, \end{equation*} \begin{equation*}\label{ii11} \mathrm{meas}(U_{m}\setminus U_{m+1})\preceq \varepsilon_{m}^{\exp'}, \end{equation*} where the exponents $\exp$, $\exp'$ depend on $d, \# \mathcal{A}, p$. 
\end{lem} \noindent\textbf{Proof.}~ Taking $\kappa^{20}=\varepsilon^{\frac{1}{20}}$ in Lemma \ref{l1}, we obtain a real analytic symplectic map $\Phi$ such that \begin{equation*} (h+f)\circ\Phi=h+h_1+\cdots+h_{n}+f_{n}. \end{equation*} Let $h_{+}=h+h_1+\cdots+h_{n}$, $f_{+}=f_{n}$. The KAM iterative lemma follows immediately from Lemma \ref{l1}. \qed \section{Long time stability of the KAM tori}\label{sect 5} In this section, we prove Theorem \ref{t2} on the long time stability of the KAM tori. By momentum conservation, the frequency shift is diagonal. We will construct a partial normal form of order $M+2$ based on $h_{\tilde{m}}+f^{high}_{\tilde{m}}$, where \begin{equation*} h_{\tilde{m}}=\langle \omega_{\tilde{m}}(w),r\rangle+\frac{1}{2}\langle\zeta,(\Omega(w)+H_{\tilde{m}}(w))\zeta\rangle, \end{equation*} $H_{\tilde{m}}(w),\partial_{w}H_{\tilde{m}}(w)$ are T\"{o}plitz at $\infty$ and $\mathcal{NF}_{\Delta_{\tilde{m}}}$ for all $w\in U_{\infty}$. We change to complex coordinates \begin{equation*} z=\left( \begin{array}{c} u \\ v \\ \end{array} \right) =C^{-1}\left( \begin{array}{c} \xi \\ \eta \\ \end{array} \right), \ C=\frac{1}{\sqrt{2}}\left( \begin{array}{cc} 1 & 1 \\ -\mathrm{i} & \mathrm{i} \\ \end{array} \right), \end{equation*} so that \begin{equation*} h_{\tilde{m}}=\langle \omega_{\tilde{m}}(w),r\rangle+\langle u,(\Omega(w)+H_{\tilde{m}}(w))v\rangle. \end{equation*} Let $\mathcal{B}=\{a\in \mathcal{L}:|a|\leq N\}$, $\tilde{u}=(u_{a})_{a\in\mathcal{B}}$, $\tilde{v}=(v_{a})_{a\in\mathcal{B}}$, $\check{u}=(u_{a})_{a\in\mathcal{L}\backslash\mathcal{B}}$, $\check{v}=(v_{a})_{a\in\mathcal{L}\backslash\mathcal{B}}$. Write \begin{equation*} h_{\tilde{m}}=\langle \omega_{\tilde{m}}(w),r\rangle+\sum_{a\in\mathcal{B}}\tilde{\lambda}_{a}(w)u_{a}v_{a}+\langle \check{u},(\Omega(w)+H_{\tilde{m}}(w))\check{v}\rangle, \end{equation*} where $\tilde{\lambda}_{a}(w)\in\mathrm{spec}(\Omega(w)+H_{\tilde{m}}(w))_{\mathcal{B}}$. Let $\tilde{\Delta}>1$ and $0<\tilde{\kappa}<1$.
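The passage above from real to complex coordinates can be checked directly; as a sketch, assuming $A=\Omega(w)+H_{\tilde{m}}(w)$ is symmetric, acts diagonally on the $(\xi,\eta)$ blocks, and the pairing $\langle\cdot,\cdot\rangle$ is bilinear:

```latex
% With \zeta=(\xi,\eta) and the matrix C above, one has
% \xi=\tfrac{1}{\sqrt{2}}(u+v) and \eta=\tfrac{\mathrm{i}}{\sqrt{2}}(v-u), so
\begin{equation*}
\frac{1}{2}\langle\zeta,A\zeta\rangle
=\frac{1}{4}\langle u+v,A(u+v)\rangle-\frac{1}{4}\langle v-u,A(v-u)\rangle
=\langle u,Av\rangle,
\end{equation*}
% the mixed terms \langle u,Au\rangle and \langle v,Av\rangle cancelling
% between the two brackets.
```

This is exactly the complex form of $h_{\tilde{m}}$ displayed above.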
Assume there exists $\tilde{U} \subset U_{\infty}$ such that for all $w\in \tilde{U}$, $|k|\leq\tilde{\Delta}$, $|\tilde{l}|\leq M+2$, $|k|+|\tilde{l}|\neq 0$, the following conditions hold: \begin{itemize} \item Diophantine condition: \begin{equation}\label{sda1} |\langle k,\omega_{\tilde{m}}(w)\rangle+\langle \tilde{l},\tilde{\lambda}(w)\rangle|\geq \frac{\tilde{\kappa}}{4^{M}N^{(4d)^{4d}(|\tilde{l}|+4)^{2}}}; \end{equation} \item The first Melnikov condition: \begin{equation}\label{sda2} |\langle k,\omega_{\tilde{m}}(w)\rangle+\langle \tilde{l},\tilde{\lambda}(w)\rangle+\alpha(w)|\geq \frac{\tilde{\kappa}}{4^{M}N^{(4d)^{4d}(|\tilde{l}|+4)^{2}}} \end{equation} for any $\alpha(w)\in\mathrm{spec}(((\Omega+H_{\tilde{m}})(w))_{[a]_{\Delta_{\tilde{m}}}})$ and any $[a]_{\Delta_{\tilde{m}}}$; \item The second Melnikov condition with the same sign: \begin{equation}\label{sda3} |\langle k,\omega_{\tilde{m}}(w)\rangle+\langle \tilde{l},\tilde{\lambda}(w)\rangle+\alpha(w)+\beta(w)|\geq \frac{\tilde{\kappa}}{4^{M}N^{(4d)^{4d}(|\tilde{l}|+4)^{2}}} \end{equation} for any $\alpha(w)\in\mathrm{spec}(((\Omega+H_{\tilde{m}})(w))_{[a]_{\Delta_{\tilde{m}}}}), \beta(w)\in\mathrm{spec}(((\Omega+H_{\tilde{m}})(w))_{[b]_{\Delta_{\tilde{m}}}})$ and any $[a]_{\Delta_{\tilde{m}}}, [b]_{\Delta_{\tilde{m}}}$; \item The second Melnikov condition with the opposite signs: \begin{equation}\label{sda4} |\langle k,\omega_{\tilde{m}}(w)\rangle+\langle \tilde{l},\tilde{\lambda}(w)\rangle+\alpha(w)-\beta(w)|\geq \frac{\tilde{\kappa}}{4^{M}N^{(4d)^{4d}(|\tilde{l}|+4)^{2}}} \end{equation} for any $ \alpha(w)\in\mathrm{spec}(((\Omega+H_{\tilde{m}})(w))_{[a]_{\Delta_{\tilde{m}}}}), \beta(w)\in\mathrm{spec}(((\Omega+H_{\tilde{m}})(w))_{[b]_{\Delta_{\tilde{m}}}}), $ and any $\mathrm{dist}([a]_{\Delta_{\tilde{m}}}, [b]_{\Delta_{\tilde{m}}})\leq \tilde{\Delta}+2d_{\Delta_{\tilde{m}}}$. \end{itemize} We have the following lemma. 
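Before stating it, we sketch heuristically how the four conditions enter (the precise bookkeeping is carried out in the proof below): solving the homological equation \eqref{n1} coefficientwise, the monomial $e^{\mathrm{i}\langle k,\varphi\rangle}r^{\alpha}\tilde{u}^{\beta}\tilde{v}^{\upsilon}$ produces, up to sign conventions, the divisor

```latex
\begin{equation*}
\langle k,\omega_{\tilde{m}}(w)\rangle+\langle \beta-\upsilon,\tilde{\lambda}(w)\rangle,
\qquad |k|\leq\tilde{\Delta}, \ |\beta-\upsilon|\leq M+2,
\end{equation*}
% which the Diophantine condition controls with \tilde{l}=\beta-\upsilon.
```

Monomials carrying one factor $\check{u}_{a}$ or $\check{v}_{a}$ shift this divisor by an eigenvalue $\alpha(w)$ of the block $((\Omega+H_{\tilde{m}})(w))_{[a]_{\Delta_{\tilde{m}}}}$ (first Melnikov condition), and monomials quadratic in $\check{u},\check{v}$ shift it by $\alpha(w)+\beta(w)$ or $\alpha(w)-\beta(w)$ (the two second Melnikov conditions).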
To simplify notation, we write \begin{equation*} \bm{\sum_{(1)}} = \sum_{2|\alpha|+|\beta|+|\upsilon|=j},\quad \bm{\sum_{(2)}}= \sum_{2|\alpha|+|\beta|+|\upsilon|=j-1},\quad \bm{\sum_{(3)}}= \sum_{2|\alpha|+|\beta|+|\upsilon|=j-2}. \end{equation*} \begin{lem}\label{l3} For $2\leq j_0\leq M+1$, consider the partial normal form of order $j_0$ \begin{equation*} T_{j_{0}}=h_{\tilde{m}}+Z_{j_{0}}+P_{j_{0}}+R_{j_{0}}+Q_{j_{0}}, \end{equation*} with \begin{equation*} Z_{j_{0}}=\sum_{3\leq j\leq j_{0}}Z_{j_{0}j},~ P_{j_{0}}=\sum_{j\geq j_{0}+1}P_{j_{0}j},~ R_{j_{0}}=\sum_{3\leq j\leq j_{0}}R_{j_{0}j},~ Q_{j_{0}}=\sum_{j\geq 3}Q_{j_{0}j}, \end{equation*} where \begin{itemize} \item $Z_{j_{0}j}=Z_{j_{0}j}(r,u,v;w)$ equals \begin{equation*} \sum_{2|\alpha|+2|\beta|=j}Z_{j_{0}}^{\alpha\beta\beta0}(w)r^{\alpha}\tilde{u}^{\beta}\tilde{v}^{\beta}+ \sum_{2|\alpha|+2|\beta|=j-2,|a|=|b|}Z_{j_{0}}^{\alpha\beta\beta ab}(w)r^{\alpha}\tilde{u}^{\beta}\tilde{v}^{\beta}\check{u}_{a}\check{v}_{b}, \end{equation*} \item $P_{j_{0}j}=P_{j_{0}j}(\varphi,r,u,v;w)$ equals \begin{align*} &\bm{\sum_{(1)}} P_{j_{0}}^{\alpha\beta\upsilon0}(\varphi;w)r^{\alpha}\tilde{u}^{\beta}\tilde{v}^{\upsilon} + \bm{\sum_{(2)}} \langle \check{u},P_{j_{0}}^{\alpha\beta\upsilon\check{u}}(\varphi;w)\rangle r^{\alpha}\tilde{u}^{\beta}\tilde{v}^{\upsilon}\\ +&\bm{\sum_{(2)}} \langle \check{v},P_{j_{0}}^{\alpha\beta\upsilon\check{v}}(\varphi;w)\rangle r^{\alpha}\tilde{u}^{\beta}\tilde{v}^{\upsilon} +\bm{\sum_{(3)}} \frac{1}{2}\langle \check{u},P_{j_{0}}^{\alpha\beta\upsilon\check{u}\check{u}}(\varphi;w)\check{u}\rangle r^{\alpha}\tilde{u}^{\beta}\tilde{v}^{\upsilon}\\ +&\bm{\sum_{(3)}} \langle \check{u},P_{j_{0}}^{\alpha\beta\upsilon\check{u}\check{v}}(\varphi;w)\check{v}\rangle r^{\alpha}\tilde{u}^{\beta}\tilde{v}^{\upsilon} +\bm{\sum_{(3)}} \frac{1}{2}\langle \check{v},P_{j_{0}}^{\alpha\beta\upsilon\check{v}\check{v}}(\varphi;w)\check{v}\rangle r^{\alpha}\tilde{u}^{\beta}\tilde{v}^{\upsilon},
\end{align*} \item $R_{j_{0}j}=R_{j_{0}j}(\varphi,r,u,v;w)$ equals \begin{align*} &\bm{\sum_{(1)}}R_{j_{0}}^{\alpha\beta\upsilon0}(\varphi;w)r^{\alpha}\tilde{u}^{\beta}\tilde{v}^{\upsilon}+ \bm{\sum_{(2)}}\langle \check{u},R_{j_{0}}^{\alpha\beta\upsilon\check{u}}(\varphi;w)\rangle r^{\alpha}\tilde{u}^{\beta}\tilde{v}^{\upsilon} \\ +&\bm{\sum_{(2)}}\langle \check{v},R_{j_{0}}^{\alpha\beta\upsilon\check{v}}(\varphi;w)\rangle r^{\alpha}\tilde{u}^{\beta}\tilde{v}^{\upsilon} +\bm{\sum_{(3)}}\frac{1}{2}\langle \check{u},R_{j_{0}}^{\alpha\beta\upsilon\check{u}\check{u}}(\varphi;w)\check{u}\rangle r^{\alpha}\tilde{u}^{\beta}\tilde{v}^{\upsilon} \\ +&\bm{\sum_{(3)}}\langle \check{u},R_{j_{0}}^{\alpha\beta\upsilon\check{u}\check{v}}(\varphi;w)\check{v}\rangle r^{\alpha}\tilde{u}^{\beta}\tilde{v}^{\upsilon} +\bm{\sum_{(3)}}\frac{1}{2}\langle \check{v},R_{j_{0}}^{\alpha\beta\upsilon\check{v}\check{v}}(\varphi;w)\check{v}\rangle r^{\alpha}\tilde{u}^{\beta}\tilde{v}^{\upsilon}, \end{align*} \item $Q_{j_{0}j}=Q_{j_{0}j}(\varphi,r,u,v;w)$ equals \begin{equation*} \sum_{2|\alpha|+|\beta|+|\upsilon|+|\mu|+|\nu|=j,|\mu|+|\nu|\geq3}Q_{j_{0}}^{\alpha\beta\upsilon\mu\nu}(\varphi;w)r^{\alpha}\tilde{u}^{\beta}\tilde{v}^{\upsilon}\check{u}^{\mu}\check{v}^{\nu}. \end{equation*} \end{itemize} Let $\rho'=\frac{\rho}{12M}, \delta'=\frac{\delta}{2M}$ and \begin{equation*} D_{j_{0}}=D\left(\frac{\rho}{2}-3(j_{0}-2)\rho', (5\delta-2(j_{0}-2)\delta')^{2},5\delta-2(j_{0}-2)\delta'\right), \end{equation*} \begin{equation*} D'_{j_{0}}=D\left(\frac{\rho}{2}-(3(j_{0}-2)+1)\rho', (5\delta-2(j_{0}-2)\delta')^{2},5\delta-2(j_{0}-2)\delta'\right).
\end{equation*} Assume \begin{equation}\label{nf1} |||X_{Z_{j_{0}j}}|||^{T}_{p,D_{j_{0}}\times\tilde{U}}\preceq\delta\left(\frac{1}{\tilde{\kappa}^{2}}\Delta_{\tilde{m}}^{\exp}N^{(4d)^{4d}(j_{0}+4)^{2}}\delta\right)^{j-3}, \end{equation} \begin{equation}\label{nf2} |||X_{R_{j_{0}j}}|||^{T}_{p,D_{j_{0}}\times\tilde{U}}\preceq\delta\left(\frac{1}{\tilde{\kappa}^{2}}\Delta_{\tilde{m}}^{\exp}N^{(4d)^{4d}(j_{0}+4)^{2}}\delta\right)^{j-3}, \end{equation} \begin{equation}\label{nf3} |||X_{P_{j_{0}j}}|||^{T}_{p,D_{j_{0}}\times\tilde{U}}\preceq\delta\left(\frac{1}{\tilde{\kappa}^{2}}\Delta_{\tilde{m}}^{\exp}N^{(4d)^{4d}(j_{0}+5)^{2}}\delta\right)^{j-3}, \end{equation} \begin{equation}\label{nf4} |||X_{Q_{j_{0}j}}|||^{T}_{p,D_{j_{0}}\times\tilde{U}}\preceq\delta\left(\frac{1}{\tilde{\kappa}^{2}}\Delta_{\tilde{m}}^{\exp}N^{(4d)^{4d}(j_{0}+5)^{2}}\delta\right)^{j-3}, \end{equation} where $a\preceq b$ means there is a constant $c>0$ depending on $\rho,M,d,\# \mathcal{A}, p, c_1,c_2,c_3,c_4,c_5$ such that $a\leq c b$, the exponent $\exp$ depends on $d, \# \mathcal{A}, p$. Then there is a symplectic map $\Psi_{j_{0}}$ such that \begin{equation*} T_{j_{0}+1}=T_{j_{0}}\circ\Psi_{j_{0}}=h_{\tilde{m}}+Z_{j_{0}+1}+P_{j_{0}+1}+R_{j_{0}+1}+Q_{j_{0}+1}, \end{equation*} which is given exactly by the formula of $T_{j_{0}}$ but with $j_{0}+1$ in place of $j_{0}$. 
Moreover, the estimates \eqref{nf1}-\eqref{nf4} also hold with $j_{0}+1$ in place of $j_{0}$. \end{lem} \noindent\textbf{Proof.}~ Consider the homological equation \begin{equation}\label{n1} \{h_{\tilde{m}},F_{j_{0}}\}=-\mathcal{T}_{\tilde{\Delta}}P_{j_{0}(j_{0}+1)}+\hat{Z}_{j_{0}}, \end{equation} where \begin{align*} \mathcal{T}_{\tilde{\Delta}}P_{j_{0}(j_{0}+1)} = & \sum_{|k|\leq\tilde{\Delta}}\Big[\sum_{2|\alpha|+|\beta|+|\upsilon|=j_{0}+1} \hat{P}_{j_{0}}^{\alpha\beta\upsilon0}(k;w)r^{\alpha}\tilde{u}^{\beta}\tilde{v}^{\upsilon}\\ +& \sum_{2|\alpha|+|\beta|+|\upsilon|=j_{0}} \left( \langle \check{u},\hat{P}_{j_{0}}^{\alpha\beta\upsilon\check{u}}(k;w)\rangle r^{\alpha}\tilde{u}^{\beta}\tilde{v}^{\upsilon} +\langle \check{v},\hat{P}_{j_{0}}^{\alpha\beta\upsilon\check{v}}(k;w)\rangle r^{\alpha}\tilde{u}^{\beta}\tilde{v}^{\upsilon}\right)\\ +&\sum_{2|\alpha|+|\beta|+|\upsilon|=j_{0}-1}\left(\frac{1}{2}\langle \check{u},\hat{P}_{j_{0}}^{\alpha\beta\upsilon\check{u}\check{u}}(k;w)\check{u}\rangle r^{\alpha}\tilde{u}^{\beta}\tilde{v}^{\upsilon} +\frac{1}{2}\langle \check{v},\hat{P}_{j_{0}}^{\alpha\beta\upsilon\check{v}\check{v}}(k;w)\check{v}\rangle r^{\alpha}\tilde{u}^{\beta}\tilde{v}^{\upsilon}\right.\\ &\left.+\langle \check{u},\mathcal{T}_{\tilde{\Delta}}\hat{P}_{j_{0}}^{\alpha\beta\upsilon\check{u}\check{v}} (k;w)\check{v}\rangle\right) r^{\alpha}\tilde{u}^{\beta}\tilde{v}^{\upsilon}\Big] e^{\mathrm{i}\langle k,\varphi\rangle}.
\end{align*} Let \begin{align*} F_{j_{0}}(\varphi,r,&u,v;w)=\sum_{2|\alpha|+|\beta|+|\upsilon|=j_{0}+1} F_{j_{0}}^{\alpha\beta\upsilon0}(\varphi;w)r^{\alpha}\tilde{u}^{\beta}\tilde{v}^{\upsilon}\\ & +\sum_{2|\alpha|+|\beta|+|\upsilon|=j_{0}}\langle \check{u},F_{j_{0}}^{\alpha\beta\upsilon\check{u}}(\varphi;w)\rangle r^{\alpha}\tilde{u}^{\beta}\tilde{v}^{\upsilon} +\langle \check{v},F_{j_{0}}^{\alpha\beta\upsilon\check{v}}(\varphi;w)\rangle r^{\alpha}\tilde{u}^{\beta}\tilde{v}^{\upsilon}\\ &+\sum_{2|\alpha|+|\beta|+|\upsilon|=j_{0}-1}\frac{1}{2}\langle \check{u},F_{j_{0}}^{\alpha\beta\upsilon\check{u}\check{u}}(\varphi;w)\check{u}\rangle r^{\alpha}\tilde{u}^{\beta}\tilde{v}^{\upsilon} +\langle \check{u},F_{j_{0}}^{\alpha\beta\upsilon\check{u}\check{v}}(\varphi;w)\check{v}\rangle r^{\alpha}\tilde{u}^{\beta}\tilde{v}^{\upsilon}\\ &+\sum_{2|\alpha|+|\beta|+|\upsilon|=j_{0}-1}\frac{1}{2}\langle \check{v},F_{j_{0}}^{\alpha\beta\upsilon\check{v}\check{v}}(\varphi;w)\check{v}\rangle r^{\alpha}\tilde{u}^{\beta}\tilde{v}^{\upsilon}. 
\end{align*} In Fourier modes, we have \begin{equation}\label{n2} -\mathrm{i}(\langle k,\omega_{\tilde{m}}\rangle+\langle \beta-\upsilon,\tilde{\lambda}\rangle)\hat{F}_{j_{0}}^{\alpha\beta\upsilon0}(k) =-\hat{P}_{j_{0}}^{\alpha\beta\upsilon0}(k)+\delta_{0}^{k}\delta_{\beta}^{\upsilon} \hat{Z}_{j_{0}}^{\alpha\beta\upsilon0},\quad \end{equation} \begin{equation}\label{n3} -\mathrm{i}(\langle k,\omega_{\tilde{m}}\rangle+\langle \beta-\upsilon,\tilde{\lambda}\rangle)\hat{F}_{j_{0}}^{\alpha\beta\upsilon\check{u}}(k) -\mathrm{i}(\Omega+H_{\tilde{m}})\hat{F}_{j_{0}}^{\alpha\beta\upsilon\check{u}}(k) =-\hat{P}_{j_{0}}^{\alpha\beta\upsilon\check{u}}(k), \end{equation} \begin{equation}\label{n4} -\mathrm{i}(\langle k,\omega_{\tilde{m}}\rangle+\langle \beta-\upsilon,\tilde{\lambda}\rangle)\hat{F}_{j_{0}}^{\alpha\beta\upsilon\check{v}}(k) +\mathrm{i}(\Omega+H_{\tilde{m}}^{T})\hat{F}_{j_{0}}^{\alpha\beta\upsilon\check{v}}(k) =-\hat{P}_{j_{0}}^{\alpha\beta\upsilon\check{v}}(k), \end{equation} \begin{equation}\label{n5} \begin{aligned} -\mathrm{i}(\langle k,\omega_{\tilde{m}}\rangle+\langle \beta-\upsilon,\tilde{\lambda}\rangle) &\hat{F}_{j_{0}}^{\alpha\beta\upsilon\check{u}\check{u}}(k) -\mathrm{i}(\Omega+H_{\tilde{m}}) \hat{F}_{j_{0}}^{\alpha\beta\upsilon\check{u}\check{u}}(k)\\ -\mathrm{i}&\hat{F}_{j_{0}}^{\alpha\beta\upsilon\check{u}\check{u}}(k) (\Omega+H_{\tilde{m}}^{T}) =-\hat{P}_{j_{0}}^{\alpha\beta\upsilon\check{u}\check{u}}(k), \end{aligned} \end{equation} \begin{equation}\label{n6} \begin{aligned} -\mathrm{i}(\langle k,\omega_{\tilde{m}}\rangle+\langle \beta-\upsilon,\tilde{\lambda}\rangle) &\hat{F}_{j_{0}}^{\alpha\beta\upsilon\check{v}\check{v}}(k) +\mathrm{i}(\Omega+H_{\tilde{m}}^{T}) \hat{F}_{j_{0}}^{\alpha\beta\upsilon\check{v}\check{v}}(k)\\ +\mathrm{i}&\hat{F}_{j_{0}}^{\alpha\beta\upsilon\check{v}\check{v}}(k)(\Omega+H_{\tilde{m}}) =-\hat{P}_{j_{0}}^{\alpha\beta\upsilon\check{v}\check{v}}(k), \end{aligned} \end{equation} \begin{equation}\label{n7} \begin{aligned} 
-\mathrm{i}(\langle k,\omega_{\tilde{m}}\rangle+\langle \beta-\upsilon,\tilde{\lambda}\rangle) &\hat{F}_{j_{0}}^{\alpha\beta\upsilon\check{u}\check{v}}(k) +\mathrm{i}\hat{F}_{j_{0}}^{\alpha\beta\upsilon\check{u}\check{v}}(k)(\Omega +H_{\tilde{m}})\\ -\mathrm{i}(\Omega+H_{\tilde{m}})&\hat{F}_{j_{0}}^{\alpha\beta\upsilon\check{u}\check{v}}(k) =-\hat{P}_{j_{0}}^{\alpha\beta\upsilon\check{u}\check{v}}(k) +\delta_{0}^{k}\delta_{\beta}^{\upsilon} \hat{Z}_{j_{0}}^{\alpha\beta\upsilon\check{u}\check{v}}. \end{aligned} \end{equation} We solve equations (\ref{n2})-(\ref{n7}) as in Proposition \ref{p1}. We have \begin{equation}\label{n8} \begin{aligned} \hat{Z}_{j_{0}}(r,u,v;w)=&\sum_{2|\alpha|+2|\beta|=j_{0}+1} \hat{P}_{j_{0}}^{\alpha\beta\beta0}(0;w)r^{\alpha}\tilde{u}^{\beta}\tilde{v}^{\beta}\\ +& \sum_{2|\alpha|+2|\beta|=j_{0}-1,|a|=|b|}\hat{P}_{j_{0}}^{\alpha\beta\beta ab}(0;w)r^{\alpha}\tilde{u}^{\beta}\tilde{v}^{\beta}\check{u}_{a}\check{v}_{b}. \end{aligned} \end{equation} Let $\Psi_{j_{0}}=X_{F_{j_{0}}}^{t}|_{t=1}$. Using Taylor's formula, there is \begin{equation}\label{n9} \begin{aligned} T_{j_{0}+1}=&T_{j_{0}}\circ\Psi_{j_{0}}=(h_{\tilde{m}}+Z_{j_{0}}+P_{j_{0}}+R_{j_{0}}+Q_{j_{0}}) \circ X_{F_{j_{0}}}^{t}|_{t=1}\\ =&h_{\tilde{m}}+\{h_{\tilde{m}},F_{j_{0}}\}+\int_{0}^{1}(1-t) \{\{h_{\tilde{m}},F_{j_{0}}\},F_{j_{0}}\}\circ X_{F_{j_{0}}}^{t}dt\\ &+Z_{j_{0}}+\int_{0}^{1}\{Z_{j_{0}},F_{j_{0}}\}\circ X_{F_{j_{0}}}^{t}dt +P_{j_{0}(j_{0}+1)}+\int_{0}^{1}\{P_{j_{0}(j_{0}+1)},F_{j_{0}}\}\circ X_{F_{j_{0}}}^{t}dt\\ &+P_{j_{0}}-P_{j_{0}(j_{0}+1)}+\int_{0}^{1}\{P_{j_{0}}-P_{j_{0}(j_{0}+1)},F_{j_{0}}\}\circ X_{F_{j_{0}}}^{t}dt\\ &+R_{j_{0}}+\int_{0}^{1}\{R_{j_{0}},F_{j_{0}}\}\circ X_{F_{j_{0}}}^{t}dt+Q_{j_{0}}+\int_{0}^{1}\{Q_{j_{0}},F_{j_{0}}\}\circ X_{F_{j_{0}}}^{t}dt. 
\end{aligned} \end{equation} By (\ref{n1}) and (\ref{n9}), we have \begin{equation}\label{n10} \begin{aligned} T_{j_{0}+1}=&h_{\tilde{m}}+Z_{j_{0}}+\hat{Z}_{j_{0}} +\int_{0}^{1}(1-t)\{\{h_{\tilde{m}},F_{j_{0}}\},F_{j_{0}}\}\circ X_{F_{j_{0}}}^{t}dt\\ &+\int_{0}^{1}\{Z_{j_{0}},F_{j_{0}}\}\circ X_{F_{j_{0}}}^{t}dt +\int_{0}^{1}\{P_{j_{0}(j_{0}+1)},F_{j_{0}}\}\circ X_{F_{j_{0}}}^{t}dt\\ &+P_{j_{0}}-P_{j_{0}(j_{0}+1)}+\int_{0}^{1}\{P_{j_{0}}-P_{j_{0}(j_{0}+1)},F_{j_{0}}\}\circ X_{F_{j_{0}}}^{t}dt\\ &+R_{j_{0}}+(1-\mathcal{T}_{\tilde{\Delta}})P_{j_{0}(j_{0}+1)} +\int_{0}^{1}\{R_{j_{0}},F_{j_{0}}\}\circ X_{F_{j_{0}}}^{t}dt+Q_{j_{0}}\\ &+\int_{0}^{1}\{Q_{j_{0}},F_{j_{0}}\}\circ X_{F_{j_{0}}}^{t}dt. \end{aligned} \end{equation} Hence \begin{equation}\label{n11} Z_{j_{0}+1}=Z_{j_{0}}+\hat{Z}_{j_{0}}, \end{equation} \begin{equation}\label{n12} R_{j_{0}+1}=R_{j_{0}}+(1-\mathcal{T}_{\tilde{\Delta}})P_{j_{0}(j_{0}+1)}, \end{equation} and \begin{equation}\label{n13} \begin{aligned} & P_{j_{0}+1}+Q_{j_{0}+1}=\int_{0}^{1}(1-t)\{\{h_{\tilde{m}},F_{j_{0}}\},F_{j_{0}}\}\circ X_{F_{j_{0}}}^{t}dt +\int_{0}^{1}\{Z_{j_{0}}+P_{j_{0}},F_{j_{0}}\}\circ X_{F_{j_{0}}}^{t}dt\\ &+\int_{0}^{1}\{R_{j_{0}},F_{j_{0}}\}\circ X_{F_{j_{0}}}^{t}dt+P_{j_{0}}-P_{j_{0}(j_{0}+1)} +Q_{j_{0}} +\int_{0}^{1}\{Q_{j_{0}},F_{j_{0}}\}\circ X_{F_{j_{0}}}^{t}dt. \end{aligned} \end{equation} By (\ref{sda1})-(\ref{sda4}) and (\ref{nf3}), following the proof of Proposition \ref{p1}, we have \begin{equation}\label{n14} \begin{aligned} |||X_{F_{j_{0}}}|||^{T}_{p,D'_{j_{0}}\times\tilde{U}}\preceq & \frac{1}{\tilde{\kappa}^{2}} \Delta_{\tilde{m}}^{\exp}N^{(4d)^{4d}(j_{0}+5)^{2}} |||X_{P_{j_{0}(j_{0}+1)}}|||^{T}_{p,D_{j_{0}}\times\tilde{U}}\\ \preceq & \left(\frac{1}{\tilde{\kappa}^{2}}\Delta_{\tilde{m}}^{\exp}N^{(4d)^{4d}(j_{0}+5)^{2}}\delta\right)^{j_{0}-1}. 
\end{aligned} \end{equation} Write \begin{equation*} Z_{j_{0}+1}=\sum_{3\leq j\leq j_{0}+1}Z_{(j_{0}+1)j}, \end{equation*} where \begin{equation*} Z_{(j_{0}+1)j}=Z_{j_{0}j}, \ 3\leq j\leq j_{0}, \end{equation*} \begin{equation*} Z_{(j_{0}+1)(j_{0}+1)}=\hat{Z}_{j_{0}}. \end{equation*} For $3\leq j\leq j_{0}$, using (\ref{nf1}), we have \begin{equation}\label{n15} |||X_{Z_{(j_{0}+1)j}}|||^{T}_{p,D_{j_{0}+1}\times\tilde{U}}\preceq\delta\left(\frac{1}{\tilde{\kappa}^{2}}\Delta_{\tilde{m}}^{\exp}N^{(4d)^{4d}(j_{0}+5)^{2}}\delta\right)^{j-3}. \end{equation} By (\ref{nf3}) and (\ref{n8}), we have \begin{equation}\label{n16} |||X_{\hat{Z}_{j_{0}}}|||^{T}_{p,D_{j_{0}+1}\times\tilde{U}}\preceq\delta\left(\frac{1}{\tilde{\kappa}^{2}}\Delta_{\tilde{m}}^{\exp}N^{(4d)^{4d}(j_{0}+5)^{2}}\delta\right)^{j_{0}-2}. \end{equation} Then the estimate of (\ref{n11}) follows from (\ref{n15}) and (\ref{n16}). Write \begin{equation*} R_{j_{0}+1}=\sum_{3\leq j\leq j_{0}+1}R_{(j_{0}+1)j}, \end{equation*} where \begin{equation*} R_{(j_{0}+1)j}=R_{j_{0}j}, \ 3\leq j\leq j_{0}, \end{equation*} \begin{equation*} R_{(j_{0}+1)(j_{0}+1)}=(1-\mathcal{T}_{\tilde{\Delta}})P_{j_{0}(j_{0}+1)}. \end{equation*} For $3\leq j\leq j_{0}$, using (\ref{nf2}), we have \begin{equation}\label{n17} |||X_{R_{(j_{0}+1)j}}|||^{T}_{p,D_{j_{0}+1}\times\tilde{U}}\preceq\delta\left(\frac{1}{\tilde{\kappa}^{2}}\Delta_{\tilde{m}}^{\exp}N^{(4d)^{4d}(j_{0}+5)^{2}}\delta\right)^{j-3}. \end{equation} By (\ref{nf3}), we have \begin{equation}\label{n18} |||X_{R_{(j_{0}+1)(j_{0}+1)}}|||^{T}_{p,D_{j_{0}+1}\times\tilde{U}}\preceq\delta\left(\frac{1}{\tilde{\kappa}^{2}}\Delta_{\tilde{m}}^{\exp}N^{(4d)^{4d}(j_{0}+5)^{2}}\delta\right)^{j_{0}-2}. \end{equation} By (\ref{n17}) and (\ref{n18}), we obtain the estimate of (\ref{n12}). Let \begin{equation*} h_{\tilde{m}}^{(0)}=h_{\tilde{m}}, \ h_{\tilde{m}}^{(j)}=\{h_{\tilde{m}}^{(j-1)},F_{j_{0}}\}, ~ j\geq1. 
\end{equation*} We have \begin{equation*} h_{\tilde{m}}^{(j)}(\varphi,r,u,v;w)=\sum_{\substack{2|\alpha|+|\beta|+|\upsilon|+|\mu|+|\nu|\\ =j(j_{0}-1)+2}} h_{\tilde{m}}^{(j)\alpha\beta\upsilon\mu\nu}(\varphi;w)r^{\alpha}\tilde{u}^{\beta}\tilde{v}^{\upsilon}\check{u}^{\mu}\check{v}^{\nu}, \end{equation*} and \begin{equation}\label{n19} \int_{0}^{1}(1-t)\{\{h_{\tilde{m}},F_{j_{0}}\},F_{j_{0}}\}\circ X_{F_{j_{0}}}^{t}dt=\sum_{j\geq2}\frac{1}{j!}h_{\tilde{m}}^{(j)}. \end{equation} By (\ref{nf3}), (\ref{n1}) and (\ref{n8}), we have \begin{equation}\label{n20} |||X_{h_{\tilde{m}}^{(1)}}|||^{T}_{p,D_{j_{0}}\times\tilde{U}}\preceq\delta\left(\frac{1}{\tilde{\kappa}^{2}}\Delta_{\tilde{m}}^{\exp}N^{(4d)^{4d}(j_{0}+5)^{2}}\delta\right)^{j_{0}-2}. \end{equation} For $j\geq2$, let $\rho_{j}=\frac{\rho}{6jM}$, $\delta_{j}=\frac{\delta}{2jM}$. By (\ref{n14}), (\ref{n20}) and Proposition \ref{p}, we have \begin{equation}\label{n21} \begin{aligned} &~ \frac{1}{j!}|||X_{h_{\tilde{m}}^{(j)}}|||^{T}_{p,D_{j_{0}+1}\times\tilde{U}} \\ \preceq & \frac{1}{j!}\left(C\max\left(\frac{1}{\rho_{j}},\frac{\delta}{\delta_{j}}\right)\right)^{j-1} \left(|||X_{F_{j_{0}}}|||^{T}_{p,D'_{j_{0}}\times\tilde{U}}\right)^{j-1}|||X_{h_{\tilde{m}}^{(1)}}|||^{T}_{p,D_{j_{0}}\times\tilde{U}}\\ \preceq & \frac{j^{j-1}}{j!}\delta\left(\frac{1}{\tilde{\kappa}^{2}}\Delta_{\tilde{m}}^{\exp}N^{(4d)^{4d}(j_{0}+5)^{2}}\delta\right)^{j_{0}-2} \left(C\left(\frac{1}{\tilde{\kappa}^{2}}\Delta_{\tilde{m}}^{\exp}N^{(4d)^{4d}(j_{0}+5)^{2}}\delta\right)^{j_{0}-1}\right)^{j-1}\\ \preceq & \delta\left(\frac{1}{\tilde{\kappa}^{2}}\Delta_{\tilde{m}}^{\exp} N^{(4d)^{4d}(j_{0}+6)^{2}}\delta\right)^{j(j_{0}-1)-1}, \end{aligned} \end{equation} since $j, j_{0}\geq2$, $j(j_{0}-1)+2\geq2j_{0}\geq j_{0}+2$. By (\ref{n21}), we obtain the estimate of (\ref{n19}). Let \begin{equation*} W_{i}=Z_{j_{0}i}, \ 3\leq i\leq j_{0}, \ W_{i}=P_{j_{0}i} \ i\geq j_{0}+1. \end{equation*} We have \begin{equation*} Z_{j_{0}}+P_{j_{0}}=\sum_{i\geq3}W_{i}. 
\end{equation*} Let \begin{equation*} W_{i}^{(0)}=W_{i},~ \ W_{i}^{(j)}=\{W_{i}^{(j-1)},F_{j_{0}}\},~ j\geq1. \end{equation*} We have \begin{equation*} W_{i}^{(j)}(\varphi,r,u,v;w)=\sum_{2|\alpha|+|\beta|+|\upsilon|+|\mu|+|\nu|=j(j_{0}-1)+i} W_{i}^{(j)\alpha\beta\upsilon\mu\nu}(\varphi;w)r^{\alpha}\tilde{u}^{\beta}\tilde{v}^{\upsilon}\check{u}^{\mu}\check{v}^{\nu}, \end{equation*} and \begin{equation}\label{n22} \int_{0}^{1}\{Z_{j_{0}}+P_{j_{0}},F_{j_{0}}\}\circ X_{F_{j_{0}}}^{t}dt=\sum_{i\geq3}\sum_{j\geq1}\frac{1}{j!}W_{i}^{(j)}. \end{equation} Following the proof of (\ref{n21}), we have \begin{equation}\label{n23} \frac{1}{j!}|||X_{W_{i}^{(j)}}|||^{T}_{p,D_{j_{0}+1}\times\tilde{U}}\preceq \delta\left(\frac{1}{\tilde{\kappa}^{2}} \Delta_{\tilde{m}}^{\exp}N^{(4d)^{4d}(j_{0}+6)^{2}}\delta\right)^{j(j_{0}-1)+i-3}, \end{equation} since $j\geq1, j_{0}\geq2, i\geq3$, $j(j_{0}-1)+i\geq j_{0}+2$. By (\ref{n23}), we obtain the estimate of (\ref{n22}). Using the same method, we can estimate \begin{equation*} \int_{0}^{1}\{R_{j_{0}},F_{j_{0}}\}\circ X_{F_{j_{0}}}^{t}dt +\int_{0}^{1}\{Q_{j_{0}},F_{j_{0}}\}\circ X_{F_{j_{0}}}^{t}dt. \end{equation*} Hence, we obtain the estimate of (\ref{n13}). \qed Now we prove Theorem \ref{t2} concerning the long time stability of the KAM tori of the infinitely dimensional Hamiltonian system. \noindent\textbf{Proof of Theorem \ref{t2}.}~ Since the difference between $h_{\infty}+f_{\infty}$ and $h_{\tilde{m}}+f^{high}_{\tilde{m}}$ is $\varepsilon^{\frac{2}{3}}_{\tilde{m}}$, if we choose $\tilde{m}$ such that $\varepsilon^{\frac{2}{3}}_{\tilde{m}}\sim \delta^{M+1}$, then we can construct a partial normal form of order $M+2$ based on $h_{\tilde{m}}+f^{high}_{\tilde{m}}$. By Lemma \ref{l3}, there is a symplectic map $\Psi$ such that \begin{equation*} (h_{\tilde{m}}+f^{high}_{\tilde{m}})\circ\Psi=h_{\tilde{m}}+Z_{M+2}+P_{M+2}+R_{M+2}+Q_{M+2}. 
\end{equation*} Taking $N=\delta^{-\frac{M+1}{p-1}}$, $\tilde{\kappa}=\delta^{\frac{1}{900M}}$, we have \begin{equation*}\label{n24} |||X_{P_{M+2}}|||^{T}_{p,D(\frac{\rho}{4},(4\delta)^{2},4\delta)\times\tilde{U}}\preceq\delta\left(\frac{1}{\tilde{\kappa}^{2}}\Delta_{\tilde{m}}^{\exp}N^{(4d)^{4d}(M+7)^{2}}\delta\right)^{M}\leq\delta^{M+\frac{1}{2}}. \end{equation*} Taking $\tilde{\Delta}=800M(\log\frac{1}{\delta})^{2}\frac{1}{\min(\gamma_{\tilde{m}},\rho)}$, we have \begin{equation*}\label{n25} |||X_{R_{M+2}}|||^{T}_{p,D(\frac{\rho}{4},(4\delta)^{2},4\delta)\times\tilde{U}}\leq\delta^{M+\frac{1}{2}}. \end{equation*} Since \begin{equation*} \|\check{z}\|_{1}=\left(\sum_{|a|>N}|z_{a}|^{2}|a|^{2}\right)^{\frac{1}{2}}=\left(\sum_{|a|>N}|z_{a}|^{2}\frac{|a|^{2p}}{|a|^{2p-2}}\right)^{\frac{1}{2}}\leq\frac{\|\check{z}\|_{p}}{N^{p-1}}\leq \delta^{M+1}\|\check{z}\|_{p}, \end{equation*} we have \begin{equation*}\label{n26} |||X_{Q_{M+2}}|||_{\mathcal{P}^{p},D(\frac{\rho}{4},(4\delta)^{2},4\delta)\times\tilde{U}}\leq\delta^{M+\frac{1}{2}}. \end{equation*} As done in Section 5.3 of \cite{CLY}, we can prove the long time stability for the KAM tori obtained in Theorem \ref{t1}. The measure estimate can be done as in Theorem \ref{t1}. \qed
\section{Conclusions} \label{sec:conclusions} We study the interpretability of pruned networks. Specifically, we use network dissection \citep{netdissect} to examine the number of units that learn to recognize disentangled, human-identifiable concepts in networks whose weights have been removed using lottery ticket pruning \citep{lth, lthas}. We find that this sparse pruning has no impact on the interpretability of the Resnet-50 model (as trained on ImageNet) until so many parameters are pruned that accuracy begins to decline. We conclude that parameters considered unnecessary by magnitude pruning are also unnecessary to maintain the level of interpretability of the unpruned model. However, pruning does not cause interpretability to improve either. \section{Discussion and Future Work} \label{sec:future} In this short paper, we only consider a sparse pruning technique that preserves the number of units in the network. It is possible that, if entire convolutional filters were pruned as in \citep{pruning-filters}, a completely different set of behaviors might result. For example, the network might become less interpretable with pruning as it has less capacity with which to develop intermediate representations. Or alternatively, if---as \citet{rethinking-pruning} argue---pruned convolutional filters were never necessary to begin with, then the network would remain equally interpretable but with a higher percentage of interpretable units (convolutional filters are units with respect to network dissection). It is also possible that another fine-tuning strategy might produce different results. The lottery ticket strategy allows the network to retrain nearly from the start after each round of pruning, meaning that the network has the opportunity to learn entirely new representations.
In contrast, standard pruning techniques retain the trained weights of unpruned connections and fine-tune for a small number of iterations at a low learning rate, severely limiting the network's ability to learn new representations. It would be interesting to compare the interpretability of the networks produced by each approach. It is possible that lottery ticket fine-tuning makes it possible to learn new, disentangled representations for the smaller network size, or, alternatively, that limited fine-tuning more effectively sustains the interpretability of the unpruned networks. For this workshop paper, we only consider Resnet-50. It would be valuable to study the extent to which the behavior we observe extends to other networks as in \citet{netdissect}. \section{Introduction} \label{sec:intro} Neural network {pruning}~(e.g., \citet{brain-damage, han-pruning, pruning-filters}) is a standard set of techniques for removing unnecessary structure from networks in order to reduce storage requirements, improve computational performance, or diminish energy demands. In practice, techniques for pruning individual connections from neural networks can reduce parameter-counts of state-of-the-art models by an order of magnitude \citep{han-pruning, gale-pruning} without reducing accuracy. In other words, only a small portion of the model is necessary to represent the function that it eventually learns, meaning that---at the end of training---the vast majority of parameters are superfluous. In this paper, we seek to understand the relationship between these superfluous parameters and the interpretability of the underlying model. To do so, we study the effect of pruning a neural network on its interpretability. We consider three possible hypotheses about this relationship: \textit{Hypothesis A: No relationship.} Pruning does not substantially alter the interpretability of a neural network model (until the model has been pruned to the extent that it loses accuracy). 
\textit{Hypothesis B: Pruning improves interpretability.} Unnecessary parameters only obscure the underlying, simpler function learned by the network. By removing unnecessary parameters, we focus attention on the most important components of the neural network, thereby improving interpretability. \textit{Hypothesis C: Pruning reduces interpretability.} A large neural network has the capacity to represent many human-recognizable concepts in a detectable fashion. As the network loses parameters, it must learn compressed representations that obscure these concepts, reducing interpretability. \paragraph{Interpretability methodology.} We measure the interpretability of pruned neural networks using the \emph{network dissection} technique \citep{netdissect}. Network dissection aims to identify convolutional units that recognize particular human-interpretable concepts. It does so by measuring the extent to which each unit serves as binary segmenter for that concept on a series of input images. The particular images considered are from a dataset called Broden assembled by \citeauthor{netdissect}; this dataset contains pixel-level labels for a wide range of hierarchical concepts, including colors, textures, objects, and scenes. 
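To make the unit-as-binary-segmenter idea concrete, the following sketch scores one unit against one concept. This is not code from the network dissection release: the function names are invented, and the per-image thresholding is a simplification (the actual procedure fixes the activation threshold $T_k$ over the whole dataset).

```python
def unit_concept_iou(activations, concept_mask, top_fraction=0.005):
    """Score one unit against one concept, network-dissection style.

    activations:  unit activation map upsampled to image resolution,
                  as a list of rows of floats.
    concept_mask: boolean pixel labels for the concept, same shape.
    The unit's binary mask keeps its top 0.5% of activations
    (P(a_k > T_k) = 0.005); the score is intersection over union.
    Sketch only: the real procedure fixes T_k over the whole dataset,
    not per image.
    """
    flat = sorted((a for row in activations for a in row), reverse=True)
    keep = max(1, int(top_fraction * len(flat)))
    threshold = flat[keep - 1]
    inter = union = 0
    for arow, crow in zip(activations, concept_mask):
        for a, c in zip(arow, crow):
            on = a >= threshold
            inter += on and c
            union += on or c
    return inter / union if union else 0.0

# A unit with IoU above the cutoff counts as a disentangled detector.
def is_interpretable(iou, cutoff=0.05):
    return iou > cutoff
```

A unit is then reported as interpretable for the concept on which it achieves its best IoU, provided that score clears the cutoff.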
For each image in Broden, network dissection computes a convolutional unit's activation map, interpolates to expand it to the size of the input image, and segments the image based on the pixels for which the unit has a high activation according to its typical distribution of activations.% \footnote{A high activation is determined by a threshold ``$T_k$ such that $P(a_k > T_k) = 0.005$ over every spatial location of the activation map in the data set.'' \citep{netdissect}} Network dissection then computes the size of the intersection of the mask pixels and pixels labeled for particular concepts and divides this quantity by the size of the union; the technique considers units for which this ratio is larger than 0.05 to be interpretable, having learned a disentangled representation of this concept. \paragraph{Pruning methodology.} The neural networks that we dissect are Resnet-50 \citep{resnet} models trained on the ImageNet \citep{imagenet} dataset. We apply a sparse pruning technique, removing weights with the lowest magnitudes at the end of training (as in \citet{han-pruning}, \citet{gale-pruning}, and \citet{lth}). Doing so produces pruned networks that have fewer parameters but the same number of neurons, meaning these pruned networks retain the capacity to contain as many interpretable neurons as the original network. Immediately after pruning, a neural network's accuracy decreases because part of the model has been removed; pruned networks are typically \emph{fine-tuned} for a small number of training steps to recover accuracy. We use the lottery ticket fine-tuning procedure \citep{lth}, where the weights of a network are reset back to their values at an iteration early in training. \citeauthor{lth} show (and we confirm for our models) that networks trained in this way can learn to match the accuracy of the original network.
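A minimal sketch of one prune-and-reset round may help fix ideas. The flat-list representation and function names are invented for illustration, not taken from the paper's code; as is standard for magnitude pruning, it is the smallest-magnitude weights that are removed.

```python
def magnitude_prune(weights, mask, fraction=0.2):
    """One round of sparse magnitude pruning (illustrative sketch).

    weights, mask: flat lists; pruned positions are False in the mask.
    Removes the smallest-magnitude `fraction` of the surviving weights,
    keeping the largest ones.
    """
    surviving = sorted(abs(w) for w, m in zip(weights, mask) if m)
    n_remove = int(fraction * len(surviving))
    cutoff = surviving[n_remove - 1] if n_remove else None
    return [m and (cutoff is None or abs(w) > cutoff)
            for w, m in zip(weights, mask)]

def lottery_reset(early_weights, mask):
    """Lottery-ticket fine-tuning: surviving weights are reset to their
    values from early in training, pruned ones zeroed, then the masked
    network is retrained."""
    return [w if m else 0.0 for w, m in zip(early_weights, mask)]
```

Because the mask is boolean and the unit count is unchanged, the pruned network can in principle host as many interpretable units as the dense one.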
We choose this fine-tuning approach to allow pruned networks to learn from an early stage of training, potentially learning different functions better adapted to the smaller model. The Resnet-50 networks for ImageNet studied in this paper were uncovered using a modified version of this technique described by \citet{lthas}. We prune Resnet-50 \emph{iteratively}, removing 20\% of weights, fine-tuning, and then pruning again. This process produces pruned networks at increments of 20\%, making it possible to evaluate the effect of pruning on interpretability as a network is gradually reduced in size. Figure \ref{fig:lth-imagenet} shows the top-1 accuracy of this network as a function of the number of parameters remaining. When 16.8\% of parameters or more remain, accuracy matches that of the original network. When 10\% of parameters remain, accuracy drops by a percentage point, followed by a steeper decline under further pruning. \begin{figure} \centering \includegraphics[width=.5\textwidth]{figures/imagenet/accuracy} \caption{The top-1 accuracy of Resnet-50 on ImageNet when pruned to the specified size.} \label{fig:lth-imagenet} \end{figure} \paragraph{Findings.} We find that sparse pruning does not reduce the interpretability of Resnet-50 until so many parameters are pruned that accuracy declines, supporting Hypothesis A. We conclude that the parameters that pruning considers to be superfluous for accuracy are also superfluous for interpretability. \section{Methodology} \textbf{Network.} We study a ResNet-20~\citep{he_resnet_2015} trained on CIFAR-10~\citep{krizhevsky_cifar_2009}. We use a standard implementation provided with TensorFlow, available online~\footnote{\url{https://github.com/tensorflow/models/tree/v1.13.0/official/resnet}}. This implementation trains for $182$ epochs, using an initial learning rate of $0.1$, decaying by a factor of $10$ at epochs $91$ and $136$. The network trains to \NA{XX.X\%} accuracy on CIFAR10. 
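The parameter counts quoted above for Resnet-50 follow directly from the schedule: pruning 20\% of the remaining weights per round leaves a fraction $0.8^{k}$ after $k$ rounds, so the 16.8\% and $\sim$10\% points correspond to 8 and 10 rounds respectively. A one-line check:

```python
# Fraction of weights surviving after k rounds of 20%-per-round pruning.
surviving = {k: round(0.8 ** k, 3) for k in range(12)}
print(surviving[8], surviving[10])  # 0.168 0.107
```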
\textbf{Fine-tuning experiments.} There are several hyperparameters that must be chosen when pruning with fine-tuning, namely the number of epochs $x$ for which fine-tuning will occur and the learning rate schedule during that phase. In our experiments, we explore many values of $x$, which controls the cost of obtaining the final, pruned network (Criterion 1). We follow the standard practice from \NA{CITE MANY PAPERS} to fine-tune using the final learning rate from the original training phase. \textbf{Rewinding/replaying experiments.} Rewinding has only one hyperparameter: the number of epochs $x$ by which the network should be rewound. For example, consider a network that is normally trained for $T$ epochs. We would train the network for the original $T$ epochs, prune, rewind the weights to their values at epoch $T - x$, and replay the remaining $x$ epochs of training on the pruned network. When rewinding, the learning rate schedule is accordingly rewound to the same epoch. We also consider rewinding with \emph{abbreviated replaying}, in which we reduce the number of training epochs that occur during the replaying phase. \citet{frankle_lottery_2019} observe that the subnetworks found using Algorithm \ref{alg:rewind} require fewer training steps than the original network to reach accuracy milestones. We hypothesize that we can exploit this effect by replaying for fewer iterations than the original network was trained, potentially reducing the overall cost of finding a pruned network. \section{Results} \label{sec:results} Network dissection considers both the number of units that learn disentangled concepts and the overall number of Broden concepts learned by any unit. We study these quantities for the final four layers of Resnet-50, comprising 2048 units in total. Based on the analysis of \citeauthor{netdissect}, we expect these layers to learn higher-level concepts like objects and scenes.
\paragraph{Interpretability.} Figure \ref{fig:lth-dissection} plots the number of units that learn disentangled concepts (left) and the overall number of concepts learned (right). Each line represents a separate trained Resnet-50 model starting with a different random initialization. All three trials show similar behavior: until 16.8\% of parameters remain, the network remains as interpretable as it was before it was pruned; after this point, interpretability begins to gradually decline. This pattern indicates that sparse pruning has little relationship with interpretability (Hypothesis A)---interpretability barely suffers until more than 90\% of parameters have been pruned. Instead, this pattern seems to follow the trend of network accuracy (Figure \ref{fig:lth-imagenet}). Interpretability begins to decline at the same parameter-count that the network becomes less accurate as a product of over-pruning. \begin{figure} \centering \includegraphics[width=.3\textwidth]{figures/units-by-purpose/legend}% \includegraphics[width=.5\textwidth]{figures/units-by-purpose/accuracy} \caption{The number of disentangled concepts learned by any unit. The categories are sorted into higher-level Broden categories representing the granularity of each concept. This graph breaks down these concepts for a single trial from Figures \ref{fig:lth-imagenet} and \ref{fig:lth-dissection}.} \label{fig:lth-concepts} \end{figure} Figure \ref{fig:lth-concepts} separates a single trial from the right plot of Figure \ref{fig:lth-dissection} into a taxonomy of concepts according to their level of granularity. Higher-level concepts like scenes and objects seem to be more volatile in the face of pruning. Higher-level concepts are also more likely to disappear as interpretability and accuracy drop at extreme levels of pruning. It is possible that the network's failure to learn as many disentangled, higher-level concepts diminishes its overall accuracy. 
\begin{figure} \centering \includegraphics[width=.5\textwidth]{figures/still-interpretable/accuracy}% \includegraphics[width=.5\textwidth]{figures/consistency/accuracy} \caption{Of the units that are interpretable in the original network, (left) the percent that remain interpretable in the pruned network and (right) the percent of those that recognize the same concept as they did in the original network. Each line represents a model trained with a different initialization.} \label{fig:lth-consistency} \end{figure} \paragraph{Consistency.} We use the lottery ticket procedure \citep{lth} to fine-tune after pruning, meaning that the pruned networks are re-trained nearly from initialization. In comparison to other fine-tuning strategies, we believe this configuration offers pruned networks more leeway to learn new representations. We are therefore interested in understanding the extent to which the same units are interpretable and learn to recognize the same concepts in the original and pruned networks. The left graph in Figure \ref{fig:lth-consistency} shows the percentage of interpretable units in the original network that were also interpretable in the pruned network. Although this figure declines as the networks are pruned, nearly 80\% of the originally interpretable units remain interpretable even after 89\% of parameters have been pruned. Of these units that were interpretable in both the original and pruned networks, the right graph in Figure \ref{fig:lth-consistency} explores the consistency of the concepts these units learn. For each pruned network, it plots the percentage of interpretable units that recognize the same concept as they did when they were in the original network, considering only those units that were interpretable both in the original network and in the particular pruned network. As the network is pruned, the fraction of such \emph{consistent} units declines.
However, it remains relatively high: about 70\% of such units learn to recognize the same concept even after 89\% of parameters are pruned.
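Both consistency measures reduce to simple set arithmetic over per-unit concept labels. In the sketch below the unit labels are invented toy data, with `None` marking a unit that is not interpretable in that network:

```python
def consistency(original, pruned):
    """original, pruned: dict of unit id -> concept label, or None when
    the unit is not interpretable in that network (toy data, not Broden).

    Returns (share of originally interpretable units still interpretable,
             share of those recognizing the same concept).
    """
    was = [u for u, c in original.items() if c is not None]
    still = [u for u in was if pruned.get(u) is not None]
    same = [u for u in still if pruned[u] == original[u]]
    return (len(still) / len(was),
            len(same) / len(still) if still else 0.0)

original = {1: "dog", 2: "sky", 3: "grass", 4: None, 5: "car"}
pruned = {1: "dog", 2: "water", 3: None, 4: None, 5: "car"}
print(consistency(original, pruned))  # (0.75, 0.666...)
```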
\section{Radiowave Detection of Neutrinos} The RICE array is designed to detect events in which a neutrino-nucleon scattering initiates a compact electromagnetic cascade in ice. Such cascades carry a net negative electric charge of magnitude 0.25$e$/GeV, resulting in a pulse of radio Cherenkov emission via the ``Askaryan effect''\cite{Askaryan}, with power peaked at wavelengths comparable to the lateral dimensions of the cascade, i.e., the Moliere radius ($\sim$10 cm). This paper reports new limits from the RICE array, updating the previous limits from 2006\cite{rice06}, and also discusses how recent radio frequency (RF) studies of ice properties will impact new initiatives at South Pole. Development of radio frequency detectors to measure ultra-high energy cosmic ray interactions has recently intensified. Projects use several methods and target materials including salt\cite{SALSA}, ice\cite{RAMAND,ARIANNA,ANITAinstr,FORTE,RITA} and lunar regolith\cite{GLUE,LUNASKA}. Complementary efforts seek to measure the RF signals in cosmic ray-induced extensive air showers\cite{LOPES,CODALEMA,AERA,ANITAcr}. While calculations of RF signals from cosmic rays first appeared nearly 70 years ago\cite{BlackwellLovell_1941}, the new technologies of nanosecond-scale digitizers and massive multi-channel data analysis are now bringing the potential of radio detection to fruition. \subsection{Signal Strength} Radio-wavelength detection of electromagnetic showers in ice relies on two experimentally established phenomena: long attenuation lengths exceeding 1 km, and coherence extending up to 1 GHz for Cherenkov emission.
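These quantities set the scale of the technique, and both translate into round numbers. In the sketch below, the radio index of refraction $n\approx1.78$ for deep ice is an assumed representative value, not a number taken from this paper:

```python
# Back-of-envelope Askaryan scales (sketch; n_ice ~ 1.78 is an assumption).
C = 3.0e8          # speed of light in vacuum, m/s
N_ICE = 1.78       # radio index of refraction of deep ice (assumed)
R_MOLIERE = 0.10   # lateral cascade scale (Moliere radius), m

# Net excess charge of 0.25 e/GeV, so a 1 PeV (1e6 GeV) cascade carries:
excess_electrons = 0.25 * 1.0e6          # ~2.5e5 electrons

# Coherence holds while the in-ice wavelength exceeds the Moliere radius:
coherence_limit_hz = C / (N_ICE * R_MOLIERE)   # ~1.7e9 Hz, i.e. GHz scale
print(excess_electrons, coherence_limit_hz / 1e9)
```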
Discussions of the Askaryan effect\cite{Askaryan} upon which the radiowave detection technique is founded, its experimental verification in a testbeam environment\cite{slac-testbeam,slac-salt04}, calculations of the expected radio-frequency signal from a purely electromagnetic shower\cite{ZHS,Alvarez-papers,SoebPRD,addendum,RomanJohn,AndresJaime}, as well as hadronic showers\cite{Shahid-hadronic}, and modifications due to the LPM effect\cite{LPM,spencer04} can be found in the literature. All estimates give the same qualitative conclusion: at large distances, the signal at the antenna inputs is a symmetric pulse, approximately 1-2 ns wide in the time domain. On the Cherenkov cone, the power spectrum rises monotonically with frequency, as expected in the coherent long-wavelength limit. In that limit, the excess negative charge in the shower front (roughly one electron per 4 GeV of shower energy) can be treated as a single (``coherent'') source charge. For perfect signal transmission (no cable signal losses) through an electrically matched system, calculations estimate the Cherenkov-cone signal strength due to a 1 PeV neutrino initiating a shower at R=1 km from an antenna to be $\sim 10 \mu V\sqrt{\tt B}$, with {\tt B} the system bandwidth in GHz. This is comparable to the 300 K thermal noise over that bandwidth in the same antenna, prior to amplification. \section{The RICE Experiment} Previous RICE publications described initial limits on the incident neutrino flux\cite{rice03a}, calibration procedures\cite{rice03b}, ice properties' measurements\cite{RICEnZ,RICEfaraday,RICEbiref10}, and successor analyses of micro-black hole production\cite{Shahid-hadronic}, gamma-ray burst production of UHE neutrinos\cite{grb06}, tightened limits on the diffuse neutrino flux\cite{rice06} and a search for magnetic monopoles\cite{daniel}.
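The signal-versus-thermal-noise comparison quoted in the Signal Strength discussion above can be checked with a Johnson--Nyquist estimate; the matched 50 $\Omega$ impedance used below is an assumption for illustration, not a RICE specification:

```python
import math

# RMS Johnson-Nyquist noise voltage across a matched load:
# V = sqrt(k_B * T * B * R). Compare with the quoted ~10 microvolt
# on-cone signal for a 1 PeV shower at 1 km, over B = 1 GHz.
K_B = 1.380649e-23             # Boltzmann constant, J/K
T, B, R = 300.0, 1.0e9, 50.0   # K, Hz, ohm (R = 50 ohm is assumed)
v_noise_uV = math.sqrt(K_B * T * B * R) * 1e6
print(v_noise_uV)  # ~14 microvolts, the same order as the signal
```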
Herein we update our diffuse neutrino search, based on the entire data sample and using algorithms which differ minimally from those of our preceding search, and also provide new information derived from radioglaciological studies of the ice dielectric permittivity at radio wavelengths. \subsection{Experimental Layout} Figure \ref{fig:viewer.eps} \begin{figure}[htpb]\centerline{\includegraphics[width=9cm]{viewer.eps}}\caption{\it Cutaway view of RICE experimental hardware. Fat-dipole antennas (shown in blue) are connected by coaxial cables (yellow) to a data acquisition system housed in the MAPO building (shown as a rectangular solid). Locations are drawn to scale relative to MAPO. As indicated in the Figure, the deepest antenna is approximately 350 meters below the surface.}\label{fig:viewer.eps}\end{figure} shows the detector geometry (essentially unchanged since 2000) in relation to the Martin A. Pomerantz Observatory (MAPO) at South Pole Station. The sensitive detector elements, radio receivers, are submerged at depths of several hundred meters close to the Geographic South Pole, in holes primarily drilled for the AMANDA experiment. Six of the RICE receivers are deployed in `dry' holes drilled specifically for RICE in 1998-99. Despite bulk motion of the ice sheet, and the closing of those dry holes under the ambient hydrostatic pressure over $\sim$5 years, we continue to receive signals from all successfully deployed antennas. A block diagram of the experiment, showing the signal path from in-ice to the surface electronics, is shown in Figure \ref{fig:RICE-block-diagram.xfig.eps}. \begin{figure}[htpb] \centerline{\includegraphics[width=15cm]{RICE-block-diagram.xfig.eps}} \caption{\it Block diagram, showing primary experimental components. Beginning with the in-ice `fat dipole' antenna, the signal is initially amplified, then conveyed by coaxial cable to the surface, where it is high-pass filtered, and then undergoes a second stage of amplification.
Signals are then split into a `trigger' path and a `digitization' path; the latter of these brings signals into one channel of an HP5454 digital oscilloscope, which holds waveform data until the trigger decision is made ($\sim$1.5 microseconds). The trigger latch initiates readout of an 8.192 microsecond waveform sample, digitized at 1 GSa/s.} \label{fig:RICE-block-diagram.xfig.eps} \end{figure} The primary sensors are the in-ice `fat-dipoles', which have good bandwidth over the frequency interval 200-1000 MHz, and a beam pattern consistent with the expected $\cos^2\theta$ dependence for wavelengths larger than the physical scale of the dipole antenna, as verified in transmitter tests (Figure \ref{fig:Rxcostheta}) performed while lowering a transmitter dipole antenna into a dry borehole. \begin{figure}[htpb]\centerline{\includegraphics[width=13cm]{IOcostheta_ch12_highgain.eps}}\caption{\it Amplitude of received signal as a function of viewing angle between transmitter and receiver, for receiver channels 3 and 5 (selected on the basis of their ability to sample the largest range in $\sin\theta$). Dipole antennas, which are expected to follow a $\cos\theta$ field beam pattern, should show a reduction by a factor of 2 ($\cos^2\theta$) for broadcasts between two vertically aligned antennas, between $\sin\theta=0$ and $\sin\theta=\sqrt{2}/2$, roughly consistent with observation.}\label{fig:Rxcostheta}\end{figure} \subsection{Full Data Set \label{s:CDS}} The statistics of the complete data taken thus far with the RICE array are summarized in Table \ref{tab:datasum}.
\begin{table} \begin{center} \begin{tabular}{c|cc|ccc} \hline Year & RunTime & LiveTime & 4-hit Trigs & Unbiased & Vetoes \\ & ($10^6$ s) & ($10^6$ s) & ($\times 10^4$) & ($\times 10^4$) & ($\times 10^4$) (prescale) \\ \hline 1999 & 0.18 & 0.10 & 0.26 & - & 1.2 (1) \\ 2000 & 22.3 & 15.7 & 30.6 & 3.3 & 11182.8 (10000) \\ 2001 & 4.6 & 3.3 & 6.0 & 1.3 & 317.4 (10000) \\ 2002 & 19.9 & 13.6 & 16.9 & 3.5 & 12973.9 (10000) \\ 2003 & 24.5 & 17.1 & 13.8 & 4.4 & 3153.9 (10000) \\ 2004 & 11.6 & 9.4 & 9.4 & 2.5 & 142.5 (10000) \\ 2005 & 18.3 & 15.5 & 26.5 & 4.0 & 471.0 (10000) \\ 2006 & 19.3 & 16.5 & 8.9 & 4.2 & 20560.5 (10000) \\ 2007 & 14.6 & 11.8 & 25.8 & 4.3 & 866.3 (10000) \\ 2008 & 20.1 & 17.2 & 21.1 & 5.0 & 186.2 (10000) \\ 2009 & 26.6 & 23.8 & 10.1 & 8.3 & 488.2 (10000) \\ 2010 & 23.1 & 21.9 & 6.1 & 5.5 & 224.4 (10000) \\ \hline \end{tabular} \caption{\it Summary of RICE data taken through December 2010. ``4-hit Triggers'' refer to all events for which there are at least four RICE antennas registering voltages exceeding a pre-set discriminator threshold in a coincidence time comparable to the light transit time across the array ($1.25\mu$s); ``Unbiased Triggers'' correspond to the total number of events taken at pre-specified intervals and are intended to capture background conditions within the array; ``Veto Triggers'' are events tagged online by a fast ($\sim$10 ms/event) software algorithm as consistent with having a surface origin. With the cessation of AMANDA operations in March 2009, the ``AMANDA'' trigger line was replaced, beginning in February 2010, by a 3-fold surface-antenna multiplicity trigger.
Variations in the veto rate are attributed to the commissioning of new experiments, with associated anthropogenic electromagnetic interference, and also the decommissioning of other experiments, as well as communications streams such as the GOES satellite in 2006.} \label{tab:datasum} \end{center} \end{table} Over a typical 24-hour period, roughly 1500 data event triggers pass a fast online hardware surface-background veto (``HSV''; with a decision time $\sim$5$\mu$s/event) and an online software surface-background veto ($\sim$10 ms/event). To these data we have applied a sequence of offline cuts to remove background, as detailed below. \section{Trigger and Data Collection} Our basic online procedures are essentially unchanged from our previous publication. The first three tiers, or trigger levels (Table \ref{tab:trigsum}) are applied in either hardware (H) or software (S) online, as follows: \begin{enumerate} \item L0: Passes Hardware Surface Veto, with one antenna exceeding a threshold approximately equal to six times the ambient background noise level. \item L1: Four antennas satisfying an L0 requirement within a coincidence time window equal to the light transit time across the array (1.25 microseconds). \item L2: Events are deemed to be inconsistent with originating at the surface, using a software veto. \end{enumerate} Events passing all tiers are transferred daily from the South Pole for permanent storage on disk at the University of Wisconsin. \begin{table}[htpb] \begin{center} \begin{tabular}{c|c|c|c} \hline Trigger Level & Requirement & Maximum Rate & Typical Winter Rate \\ \hline L0 (H) & Passes HSV veto & 200 kHz & 1 kHz \\ L1 (H) & Four antennas exceed threshold within 1.25 $\mu$s & 100 Hz & 1 Hz \\ L2 (S) & Passes surface-source veto & 0.1 Hz & 0.02 Hz \\ \hline \end{tabular} \caption{\it Summary of trigger rates at three levels. ``HSV'' refers to the online hardware surface veto of down-coming, anthropogenic noise. 
The third column represents the maximum trigger rate at which events exceeding the online event threshold can be processed; the final column gives typical rates during austral winter data-taking.} \label{tab:trigsum} \end{center} \end{table} \section{Hit Finding and Event Reconstruction} Our previous analysis assigned the hit time to the first $6\sigma_V$ excursion in a waveform, with $\sigma_V$ defined as the rms of the voltages recorded in the first 1000 ns of an event capture (i.e., prior to the possible onset of any signals). To improve our hit-finding, we have performed a study of time-domain signal characteristics. An ideal narrow-band antenna has a response to an impulse that follows the form: $$V_0(t)\sim\cos(\omega_0 t)\exp(-t/\tau),$$ with $\tau\sim1/{\tt B}$. For a first approximation to what these signals might look like, we used `thermal noise hits', defined as sequences of waveform samples in unbiased events consistent with band-limited transients, drawn from the data itself. Figure \ref{fig:sigwvfm0} qualitatively indicates the reproducibility of such short-duration transient responses from channel to channel (2002 data), with the response to a sharp transmitter signal shown in Figure \ref{fig:sigwvfm6} for comparison.
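As an illustration of hit-finding against such a damped-ring template, the following sketch cross-correlates a noisy record with the template and takes the correlation peak; the ring frequency, damping time, amplitudes, and noise level are all illustrative choices, not RICE production values:

```python
import numpy as np

FS = 1.0e9                 # 1 GSa/s, matching the RICE digitization rate
F0, TAU = 250.0e6, 5.0e-9  # illustrative ring frequency and damping time

def ring_template(n_samples=64, fs=FS, f0=F0, tau=TAU):
    """Damped ring V0(t) ~ cos(w0 t) exp(-t / tau), sampled at fs."""
    t = np.arange(n_samples) / fs
    return np.cos(2.0 * np.pi * f0 * t) * np.exp(-t / tau)

def find_hit(waveform, template):
    """Index of the peak absolute cross-correlation with the template."""
    corr = np.correlate(waveform, template, mode="valid")
    return int(np.argmax(np.abs(corr)))

rng = np.random.default_rng(0)
wf = 0.2 * rng.standard_normal(2048)      # stand-in for ambient noise
true_hit = 700
tmpl = ring_template()
wf[true_hit:true_hit + tmpl.size] += 1.5 * tmpl   # inject a damped ring
found = find_hit(wf, tmpl)                # recovers true_hit to ~ns accuracy
```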
\begin{figure}[htpb] \centerline{\includegraphics[width=10cm,angle=0]{sigwvfm0.ps}} \caption{\it ``Short-duration'' waveforms, selecting cases with fast impulsive responses (designated as ``thermal hits''), for the indicated channels.} \label{fig:sigwvfm0} \end{figure} \begin{figure}[htpb] \centerline{\includegraphics[width=10cm,angle=0]{Tx-Rx-2002365201358-97tx3-general-ch1.eps}} \vspace{0.5cm} \caption{\it Receiver waveforms captured when the transmitter is active, for comparison with the `thermal' hits in the previous Figure.} \label{fig:sigwvfm6} \end{figure} We have performed an embedding study to evaluate contributions to the hit time resolution and the efficacy of the damped exponential parametrization relative to previous hit time definitions. Monte Carlo simulations of neutrino-induced hits are embedded into unbiased data events, and the extracted hit times are then compared with the known (true) embedded times. \begin{figure}[htpb]\centerline{\includegraphics[width=10cm,angle=0]{TimingAccuracyRecon.eps}} \caption{\it Difference between the true (embedded) inter-antenna hit time and the reconstructed hit time (after embedding) for two reconstruction algorithms. The indicated timing resolution due to pattern recognition uncertainties is approximately 2 ns.} \label{fig:dt-embedding}\end{figure} Our previous analysis employed the `first 6$\sigma_V$ excursion' hit criterion; based on Figure \ref{fig:dt-embedding} and the signal shapes displayed previously, we have now applied the more general `exponential ring' hit criterion, which records the times of waveform excursions which exceed $5.5\sigma_V$ in voltage and also have the shape of a damped exponential `ring'. Monte Carlo simulations indicate that the improved timing resolution sharpens the azimuthal directional resolution of RICE by approximately 10\%. Such an algorithm is also designed to reject events where the signal persists for hundreds of nanoseconds in each channel.
Figure \ref{fig:ToTToT.eps} shows the time-over-threshold distribution, defined as the number of samples exceeding $\pm6\sigma_V$, with $\sigma_V$ the rms voltage, for various triggers. The contamination of our `general trigger' data sample with large time-over-threshold events, which nearly uniformly trace back to the surface, is evident from Figure \ref{fig:ToTToT.eps}. Such waveforms are immediately rejected by requiring the expected damped exponential signal form. \begin{figure}[htpb]\centerline{\includegraphics[width=10cm]{ToTToT.eps}}\caption{\it ``Time-over-threshold'' for 2010 event triggers, by trigger type, as described in the text.}\label{fig:ToTToT.eps}\end{figure} Also overlaid (green) is the distribution expected from Monte Carlo simulations, which is considerably narrower and clustered towards zero time-over-threshold. \section{Backgrounds\label{s:bkgnds}} Our previous publication\cite{rice06} presented detailed consideration of anthropogenic transients, thermal noise backgrounds, and possible backgrounds from atmospheric muons, atmospheric neutrinos, air showers, and RF emissions due to solar flares. We herein briefly review techniques for background suppression. We generally distinguish different backgrounds to the neutrino search according to the following criteria: \begin{itemize} \item vertex location of reconstructed source \item waveform shape characteristics of hit channels (including, e.g., time-over-threshold [Fig.
\ref{fig:ToTToT.eps}]) \item goodness-of-fit to a well-constrained single vertex as evidenced by timing residual characteristics (defined as the inconsistency of the recorded hit channels with origination from a single source point) \item RF conditions during data-taking \item Fourier spectrum of hit channels \item cleanliness of hits (e.g., presence of multiple pulses in an 8.192 microsecond waveform capture) \item multiplicity of receiver antennas registering hits for a particular event \item time-since-last-trigger ($\delta t_{ij}\equiv t_j-t_i$, where $t_i$ is the time of the $i^{th}$ trigger and $t_j$ is the time of the next trigger). In high-background, low-livetime instances, we expect $\delta t_{ij}\to \delta t_{min}$, where $\delta t_{min}$ is the $\sim$10 s/event readout time of the DAQ. In low-background, high-livetime instances, we expect $\delta t_{ij}\to \delta t_{max}$, where $\delta t_{max}$ is the ten-minute interval between successive unbiased triggers. \end{itemize} We can coarsely characterize three general classes of backgrounds according to the above scheme, as follows. 1) Continuous wave backgrounds (CW) have a) a long time-over-threshold for channels with amplitudes well above the discriminator threshold, b) large timing residuals (since the threshold crossing times will be ambiguous), c) small values of $\delta t_{ij}$ for the case where the discriminator threshold is far below the CW amplitude, d) a Fourier spectrum dominated by one frequency (plus overtones), e) a hit multiplicity which is on average roughly constant, and determined by the number of channels which exceed threshold when their noise voltage is added to the underlying CW voltage. Such backgrounds may cluster in time and are generally easily recognized on-line.
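The single-frequency Fourier signature of CW backgrounds suggests a simple spectral discriminant, sketched below with synthetic waveforms (an illustration only, not the RICE online code):

```python
import numpy as np

def cw_fraction(waveform):
    """Fraction of non-DC spectral power in the single strongest Fourier
    bin: near 1 for continuous-wave pickup, of order 1/N for broadband
    thermal-like noise."""
    power = np.abs(np.fft.rfft(waveform)) ** 2
    power[0] = 0.0               # ignore the DC bin
    return power.max() / power.sum()

rng = np.random.default_rng(1)
n = 4096
t = np.arange(n)
# A tone (200 cycles per record) plus weak noise, vs. pure broadband noise.
cw_like = np.sin(2.0 * np.pi * 200.0 * t / n) + 0.1 * rng.standard_normal(n)
broadband = rng.standard_normal(n)
# cw_fraction(cw_like) is close to 1; cw_fraction(broadband) is tiny.
```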
2) True thermal noise backgrounds should have a) three-dimensional vertex locations which are spatially distributed as Gaussians peaked at the centroid of the array ($x=0$, $y=0$, $z=-120$~m), as demonstrated by Monte Carlo simulations (by simulating four hits at random times within the 1.25~$\mu$s discriminator window; see Fig. \ref{fig:plot_thermal_2000.eps}), b) very small time-over-thresholds, c) large timing residuals, d) successive trigger time difference characteristics which depend in a statistically predictable way on the ratio of discriminator thresholds to rms background noise voltages, e) a ratio of general/unbiased triggers which, in principle, can be statistically derived from the background noise distribution observed in unbiased events, f) a Fourier spectrum determined by the intrinsic band response of the various components of a RICE receiver circuit, g) no double pulse characteristics, h) no correlation with date or time. 3) ``Loud'' transients are observed to constitute the dominant background. We sub-divide possible transient sources into two categories: those sources which originate within the ice itself, primarily due to AMANDA and/or IceCube photomultiplier tube electronics, and those sources which originate on, or above, the surface. After our initial deployment of three test antennas in 1996-97, highpass ($>$250 MHz) filters were inserted to suppress the former backgrounds, leaving more sporadic anthropogenic surface-generated noise as the dominant transient background. Such triggers are characterized by: a) typically, large time-over-thresholds, b) $\delta t_{ij}$ distributions which reflect saturation of the DAQ data throughput, or show time structure if the source is periodic, c) Fourier spectra which are likely to depart from thermal ``white'' noise in the frequency domain.
\subsection{Vertex Suppression of Transient Anthropogenic Backgrounds \label{s:TAB}} Vertex distributions give perhaps the most direct characterization of surface-generated (z$\sim$0) vs. sub-surface (and therefore, candidates for more interesting processes) events. Consistency between various source reconstruction algorithms gives confidence that the true source has been located. Due to ray tracing effects, it is difficult to identify surface sources at large polar angles, which increasingly fold into the region around the critical angle. We implement both a ``grid''-based $\chi^2$ vertex search algorithm and an analytic, 4-hit vertex reconstruction algorithm, as detailed previously\cite{rice03a}. We have additionally cross-checked our vertex-finding against results obtained using the CERN Minuit package. In our offline analysis, we require that the reconstructed vertex depth be greater than 200 meters to suppress anthropogenic surface noise. \subsection{Vertex Quality Requirements} We impose a maximum time residual (defined as the time deviation from consistency of the recorded antenna hit times with a single in-ice source point) requirement of less than 50 ns, per antenna hit. Since four antennas will necessarily allow a solution of the equations $|{\bf r}_i-{\bf r}_0|=(c/n)(t_i-t_0)$, with ${\bf r}_0$ and $t_0$ the location and time of source emission, imposition of this requirement necessitates a minimum antenna hit multiplicity of five. This requirement is particularly effective at removing events where there may be a thermal noise fluctuation superimposed on anthropogenic noise, or multiple anthropogenic events which overlay upon each other. \subsection{Rejection of repetitive patterns} Inconsistency of the recorded hit time sequence with a previously logged hit time sequence, irrespective of the previous two requirements, can also be used to identify backgrounds.
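The single-source consistency condition above can be sketched as a coarse grid-based $\chi^2$ vertex search in the spirit of the algorithms just described; the receiver coordinates and the uniform index $n=1.78$ below are our own stand-ins, and ray bending through the firn is ignored:

```python
import numpy as np

C_ICE = 3.0e8 / 1.78   # radio wave speed in deep ice, m/s (n = 1.78 assumed)

# Hypothetical receiver coordinates (x, y, z), meters; not the real array.
RX = np.array([[0.0, 0.0, -150.0], [50.0, 0.0, -250.0], [0.0, 60.0, -350.0],
               [40.0, 50.0, -200.0], [-30.0, 40.0, -300.0]])

def hit_times(src, t0=0.0):
    """Arrival times for a source at src: |r_i - r_0| = (c/n)(t_i - t_0)."""
    return t0 + np.linalg.norm(RX - src, axis=1) / C_ICE

def grid_vertex(times, step=20.0):
    """Coarse chi^2 grid search over candidate vertices; the unknown
    emission time t0 is profiled out, since the best t0 simply shifts
    all residuals by their mean."""
    best, best_chi2 = None, np.inf
    for x in np.arange(-100.0, 101.0, step):
        for y in np.arange(-100.0, 101.0, step):
            for z in np.arange(-400.0, 1.0, step):
                pred = np.linalg.norm(RX - [x, y, z], axis=1) / C_ICE
                resid = times - pred
                chi2 = np.sum((resid - resid.mean()) ** 2)
                if chi2 < best_chi2:
                    best, best_chi2 = np.array([x, y, z]), chi2
    return best

true_src = np.array([20.0, -40.0, -280.0])
vtx = grid_vertex(hit_times(true_src))   # recovers true_src on this grid
```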
This final criterion requires one full pass of each year's data to create a `library' of identified background hit time patterns, as follows. As each new event is processed, if the sequence of hit antennas for that event matches a previously recorded pattern to within 10 ns per antenna, then: a) the event is considered to be `repetitive' and is discarded from further signal candidacy, and b) the pattern itself is updated with a statistical weighting of the new event with all the previous events identified as consistent with that pattern. As a concrete example, an event with hit times in the first four antennas (exclusively) of 100, 250, 400, and 600 ns would be `clustered' with a previously logged pattern (with statistical weight 1) in the exact same channels with hit times of 92, 258, 403, and 605 ns, resulting in a modified `clustered' pattern, with statistical weight of 2, and re-weighted hit times 96, 254, 401.5, and 602.5 ns. The loss in neutrino efficiency incurred by this `template cut' algorithm is assessed by simulation to be of order 1\%. \begin{figure}[htpb]\centerline{\includegraphics[width=10cm]{plot_thermal_2000.eps}}\caption{\it Reconstructed source depth for primary neutrino search triggers (``Physics triggers'') compared to events identified as surface sources based on Hardware Surface Veto information, as well as transmitter calibration events (green) and simulated thermal noise (points). When anthropogenic backgrounds are low and the experiment is operating close to the thermal limit, the reconstructed vertex distribution for thermal noise events is expected to peak close to the center of the array, with a width given by the light transit distance across the 1.25 $\mu$s coincidence window defined by the RICE general event trigger. During the winter months, when station noise is typically lowest, approximately 50\% of recorded events are thermal noise backgrounds.
During the austral summer months, when human activity at South Pole Station is largest, this fraction typically decreases to less than 10\%.}\label{fig:plot_thermal_2000.eps}\end{figure} \subsection{Air Shower Backgrounds} There are possible radio signals associated directly with cosmic ray air showers. These include the production of geo-synchrotron radiation in the atmosphere, as well as transition radiation and Cherenkov signals produced as the shower impacts and evolves into the ice. These three mechanisms all require coherent radiation from all or part of the shower. In all three cases, the transverse profile of the shower dictates a fundamental frequency response, whereas for the geo-synchrotron and Cherenkov signals the shower/observer geometry must also be favorable to have coherent emission from the full longitudinal development of the shower. Coherent production of synchrotron radiation in the geomagnetic field has recently been observed by the LOPES\cite{LOPES}, CODALEMA\cite{CODALEMA}, AERA\cite{AERA}, and ANITA\cite{ANITAcr} collaborations. The coherent air shower signal is most intense below 100 MHz\cite{HuegeRefs}, but, as demonstrated by ANITA, still detectable in the RICE bandpass, which may attest to the observability of the air Cherenkov pulse that accompanies the geosynchrotron signal. We have not studied this mechanism in detail, but note that the frequency response is ultimately related to the geometry of the air shower -- the signal rolls over at $f\sim cR/r_M^2$ where R$\sim$1 km is the height of shower max and $r_M \sim$100--200~m is the Moliere radius for the shower. Transition radiation results when the shower impacts the ice\cite{gazazian}. In this case, R$\sim$200~m for RICE, f$\sim$200~MHz, and the region for coherent emission is a disk of order 10~m radius. Only a fraction of the excess shower charge, typically 10\%, is contained within that distance of the shower axis. 
Further, transition radiation is forward peaked, so illumination of more than one antenna string is rather unlikely. We have not seriously modeled transition radiation from air shower impacts as a background for RICE. The most interesting signal for RICE is the Askaryan pulse produced when the air shower core hits the ice. At RICE frequencies, the Askaryan pulse must originate from a transverse dimension comparable to that for a shower initiated in-ice, a few tens of cm at most. This length scale is compatible with the core of the shower where the highest energy particles reside. Particles have their last interactions of order 1 km above the ice, so the required relativistic-$\gamma$ factor is of order $10^4$, corresponding to surface particle energies $\sim 10$~GeV for $e^-$, $e^+$ and bremsstrahlung $\gamma$'s. We have run the standard RICE Monte Carlo simulation to assess the acceptance to impacting air shower cores. Simulated events illuminate the surface isotropically from the upper hemisphere over a distance within 500 meters of the center of the RICE array. At large zenith angles, the likelihood of four antennas being within some portion of the Cherenkov cone becomes large; however, the practical ability to separate such signals from surface background near the horizon is vanishingly small. Figure \ref{fig:EffAreaEAS.eps} displays the corresponding effective area, as a function of shower core energy, assuming the 10\% shower core containment cited above. Given that the charged cosmic ray flux is approximately 1/5000~$m^2$/yr at the nominal RICE event detection threshold of 100 PeV, and falling with an $E^{-2.7}$ power law (so that the integral flux falls as $E^{-1.7}$), Figure \ref{fig:EffAreaEAS.eps} indicates that the expected detection rate per year for RICE is likely to be negligibly small.
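The rate estimate above follows from the quoted integral-flux scaling; a minimal sketch, in which the effective area passed in is a placeholder (the simulated acceptance is energy dependent):

```python
def integral_flux_per_m2_yr(energy_pev):
    """Integral charged cosmic ray flux above energy_pev, anchored at
    ~1/5000 per m^2 per yr for E > 100 PeV and falling as E^-1.7,
    as quoted in the text."""
    return (1.0 / 5000.0) * (energy_pev / 100.0) ** -1.7

def expected_events_per_yr(energy_pev, eff_area_m2):
    """Naive event rate for an assumed, energy-independent effective
    area (a placeholder for the simulated, energy-dependent acceptance)."""
    return integral_flux_per_m2_yr(energy_pev) * eff_area_m2

# E.g., a hypothetical 100 m^2 effective area at the 100 PeV threshold
# yields only ~0.02 expected events per year.
```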
\begin{figure}[htpb]\centerline{\includegraphics[width=14cm]{EffAreaEAS.eps}}\caption{\it Monte Carlo simulation results for RICE effective area for air showers impacting the South Polar surface.}\label{fig:EffAreaEAS.eps}\end{figure} Such a possible signal was, in fact, explicitly rejected as consistent with down-coming anthropogenic noise, prior to 2009. In 2009, that veto was somewhat loosened, specifically to admit such possible signals. For this search, we required a `direct hit' corresponding to a vertex within 20 m of the center of the RICE array in order to have any opportunity of imaging the down-coming Cherenkov ring itself and thereby unambiguously discriminate against above-surface backgrounds; such backgrounds, at large zenith angles, will fold into a tight polar angle region around the critical angle $\theta_{crit}$. Imposition of this fiducial requirement, of course, limits the effective area to a maximum of $\sim$1000 $m^2$. All data were then processed through a separate analysis chain with minimal initial event selection requirements consisting of: i) a reconstructed impact point within that allowed fiducial area, ii) good agreement between the two vertex finders, and iii) a very loose ``cut'' on the goodness-of-fit to a Cherenkov cone, requiring that the event $\chi^2$ be less than 100. These requirements leave only six candidate events in all of the 2009 data. Unfortunately, all six events fail subsequent time-over-threshold requirements on the waveform shape, resulting in no down-coming impacting air shower candidates.
Rapid discharges can subsequently produce measurable radio frequency signal. As demonstrated in Fig. \ref{PlotWindSpeedCorr.eps}, we observe an apparent correlation of trigger rate with wind velocity, as expected in a surface discharge model. Fortunately, these events typically trace back to the surface and do not pose an in-ice neutrino background. \begin{figure}[htpb]\centerline{\includegraphics[width=1.1\textwidth]{PlotWindSpeedCorr}}\caption{\it Tabulated windspeed at South Pole (m/s; green) vs. RICE livetime (red). High windspeeds apparently result in large electrostatic discharge events from local above-surface structures. Such events are flagged offline as of non-neutrino origin.}\label{PlotWindSpeedCorr.eps}\end{figure} \subsection{Ambient (non-episodic) Radio Frequency Backgrounds at Pole} The Very Low Frequency (VLF)\cite{VLF} receiver array at South Pole is intended to monitor the ionosphere using a large set of buried antennas, at frequencies well below the RICE sensitivity. Nevertheless, the high power of the signal broadcast by this array can evidently couple into the electronics of the RICE data acquisition system, resulting in a measurable number of triggers (10.6\% of all our physics triggers in 2010, e.g. [Figure \ref{fig:VLF.eps}]). \begin{figure}[htpb]\centerline{\includegraphics[width=10cm]{VLF.eps}}\caption{\it Trigger time (second vs. minute) showing the 15 minute periodicity of the Very Low Frequency radar system, operating at 19.5 kHz, at South Pole.}\label{fig:VLF.eps}\end{figure} The waveforms in such events, however, immediately fail our exponential ring criterion in the offline analysis. The VLF background is by far the most pernicious of the periodic backgrounds observed to contaminate the RICE data sample. 
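The 15-minute periodicity visible in Figure \ref{fig:VLF.eps} can be flagged by phase-folding trigger timestamps against the candidate period, sketched here with synthetic trigger times (illustrative only):

```python
import numpy as np

def folded_concentration(trigger_times_s, period_s=900.0, n_bins=60):
    """Fold trigger timestamps modulo a candidate period and return the
    fraction of triggers in the most populated phase bin. A flat
    (non-periodic) trigger stream gives roughly 1/n_bins; a periodic
    source concentrates near one phase."""
    phases = np.mod(trigger_times_s, period_s) / period_s
    counts, _ = np.histogram(phases, bins=n_bins, range=(0.0, 1.0))
    return counts.max() / counts.sum()

rng = np.random.default_rng(2)
# Synthetic periodic triggers: bunched near a fixed phase of a 900 s cycle.
periodic = 900.0 * np.arange(200) + rng.normal(5.0, 1.0, 200)
# Synthetic non-periodic triggers spread over the same total span.
random_times = rng.uniform(0.0, 900.0 * 200, 200)
```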
\section{Monte Carlo simulations} We determine the neutrino detection efficiency of our event selection criteria using simulations of showers, both electromagnetic and hadronic, resulting from neutrino collisions, superimposed on environmental characterization drawn from the data itself at random times (unbiased events). Our basic Monte Carlo simulation signal codes are unchanged since 2005, save for updates to the modeling of the radio frequency dielectric response of the ice (detailed below). In practice, due to the LPM effect, our sensitivity to neutrino interactions is dominated by the response to hadronic showers, and is therefore approximately uniform for all three neutrino flavors. \subsection{Event-finding efficiency} Our overall event-finding efficiency is approximately unchanged from our prior estimate. Of the simulated events which trigger the RICE detector, 74.2\% are expected to pass our primary event selection requirements. Cut-by-cut details are presented in Table \ref{tab:MCeff}. Note that the definition of the cuts generally follows our previous analysis, with only slight differences.
\begin{table}[htpb] \begin{center} \begin{tabular}{c|c|c} \hline Requirement & Efficiency (\%) & Data Events \\ \hline Starting sample & 100.0 & 2298921 \\ $\ge$4 6-sigma hits & 99.9 & 1754982 \\ Maximum time-over-threshold cut & 99.7 & 565891 \\ $\le$two channels with high time residual & 98.0 & 145723 \\ Acceptable total time residual & 95.5 & 38922 \\ Passes amplitude template cut & 92.1 & 8035 \\ Passes time template cut & 89.1 & 1043 \\ Vertex of at least one algorithm below firn & 88.3 & 279 \\ Agreement between two vertex-finding algorithms & 85.1 & 140 \\ Passes Cherenkov cone geometry cut & 81.8 & 36 \\ 5 high quality 6$\sigma_V$ hits & 77.4 & 8 \\ Deepest channel hit first & 77.2 & 0 \\ No surface antennas with good `early' hits & 74.2 & 0 \\ \hline \end{tabular} \caption{\it Cumulative Monte Carlo efficiency, using simulated neutrino events embedded into forced trigger events. Fractional efficiencies are measured relative to a total of 2500 simulated events, assuming 1:1:1 mix of $\nu_e:\nu_\mu:\nu_\tau$, which passed our simulated trigger criteria. Also shown are event survival statistics for RICE data. Note that these event selection criteria are designed to encompass, and reject, all possible backgrounds itemized in the text, without necessarily targeting just one type.} \label{tab:MCeff} \end{center} \end{table} \subsection{Effective Volume} As can be seen from Figure \ref{fig:Veff0.eps}, the overall neutrino response, as measured by effective volume, is essentially unchanged relative to our previous analysis. 
\begin{figure}[htpb]\centerline{\includegraphics[width=13cm]{Veff_comp_2005_2010.eps}}\caption{\it Comparison of the effective volume calculated with current Monte Carlo simulations to that calculated for the results reported in 2005.}\label{fig:Veff0.eps}\end{figure} Systematic errors in effective volume, as indicated by the shaded band in Figure \ref{fig:Veff0.eps}, result in roughly a factor of two possible variation in the expected overall neutrino event yield. The breakdown of the contribution of various systematic uncertainties to our total systematic error is very similar to our previous analysis, with the exception of improvements in our understanding of ice properties, as outlined elsewhere in this document. These result in a net improvement of approximately 15\% in total systematic error compared to our previous publication. Dominant systematic errors remain uncertainties in the attenuation length as well as uncertainties in the index-of-refraction profile through the firn. Figure \ref{fig:SysErr} shows the relative contribution of various parameters to the overall systematic error. Shown are components due to uncertainties in the effective height (green), radiofrequency attenuation length of the ice (cyan), uncertainties in the index-of-refraction (magenta, and dominant in the upper 200 m of the ice sheet), and also uncertainties due to the possibility of Cherenkov signals generated in-ice, which reflect back down off the ice-air interface, and intercept the RICE array from above (yellow). Note that the latter effect, which can lead to so-called `double' hits, only increases the estimated effective volume, and is (conservatively) excluded from our calculation of the flux upper limit in the current analysis.
\begin{figure}[htpb]\centerline{\includegraphics[width=15cm]{RICESysErr.eps}}\caption{\it Contribution to RICE systematic uncertainties in effective volume, as a function of energy, as detailed in the text.}\label{fig:SysErr}\end{figure} \section{Search for in-ice Neutrino interactions and Discussion} Imposing the event selection requirements enumerated in Table \ref{tab:MCeff}, we find that no events survive as in-ice shower candidates. One of the few events which satisfied all the waveform characteristic requirements, but which was flagged as having a surface origin, is shown in Figure \ref{fig:disp}. \begin{figure}[htpb]\centerline{\includegraphics[width=10cm]{disp.eps}}\caption{\it Waveform display from event taken on Julian Day 93, UTC 02:45:41.4078346 (2005). This event satisfies all event criteria listed in Table \ref{tab:MCeff}, save for the requirement that the deepest channel (Ch. 15) have a hit time preceding, rather than following, the other hits in the array. As can be seen from the Figure, the late impulse observed on Channel 15 marks this event as originating from the surface.}\label{fig:disp}\end{figure} \subsection{Neutrino Flux Limit Results \label{s:NFLR}} Our flux limit is derived directly from the effective volume $V_{eff}$, the livetime ${\cal L}$, and the event-finding efficiency $\epsilon(\sim$0.68), which is the product of the online software veto efficiency ($\epsilon_{online}\sim$0.91) and the offline analysis efficiency ($\epsilon_{offline}\sim$0.742). Our 95\% C.L. flux bounds are shown in Fig. \ref{fig:revised_UL}. Compared to our previous result, we have slightly more than doubled our sensitivity. The dominant factor in our sensitivity gain is the extended livetime. \begin{figure}[htpb]\centerline{\includegraphics[width=17cm]{all_fluxes_UL.eps}}\caption{\it Compilation of existing neutrino flux limits, including updates reported herein.
Factors of 3 or 3/2 shown in the plot are needed to translate predictions and experiments sensitive to only one or two neutrino flavors to the three flavors of neutrinos to which the RICE experiment is sensitive. Model predictions shown are from calculations of the neutrino fluence from blazars by Stecker\cite{Stecker2005}, BL Lac galaxies\cite{Mucke2003}, GRBs\cite{Soeb2003}, photonuclear production of neutrinos by cosmic-ray interactions with the Cosmic Microwave Background\cite{ESS2001}, models of neutrinos generated locally to Earth\cite{Ina2008,Naumov2001,Bartol2004,Honda2006} and Active Galactic Nuclei fluence predictions\cite{Julia2005}. Other presented experimental limits are those from AMANDA-II\cite{JohnKelly2009,AMII09,AMII10,AM08}, the previous RICE result\cite{rice06}, the HiRes experiment\cite{HiRes}, based on electron neutrinos only (and extrapolated to three flavors), ANITAII\cite{ANITAII}, and a result from the Auger experiment\cite{Auger}, based on tau neutrinos only (and extrapolated to three flavors). Models are shown as dashed or solid lines; experimental results as lines/points.}\label{fig:revised_UL}\end{figure} As can be immediately seen from inspection of Figure \ref{fig:revised_UL}, RICE is still well below the sensitivity required to conclusively probe the ``cosmogenic'' neutrino flux, expected from interactions of the ultra-high energy cosmic baryonic flux (protons, neutrons, or nuclei) with the cosmic microwave background (CMB). In brief, that flux is calculable by bootstrapping from the ultra-high energy charged cosmic ray particle flux at Earth, assuming some source composition (at the extremes, either proton or iron nuclei) for those measured charged cosmic rays, then integrating over redshift using some evolution model to obtain the anticipated rate in the current epoch.
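The final step of such a rate estimate, folding a model flux with an energy-dependent exposure, can be sketched as follows. All numbers below (flux, exposure, livetime) are hypothetical placeholders, not RICE values:

```python
import math

# Illustrative sketch only (not the collaboration's actual calculation):
# fold a hypothetical cosmogenic flux with a hypothetical energy-dependent
# exposure to estimate the expected number of events,
#     N = T * integral( Phi(E) * A_eff(E) dE ),
# using a trapezoidal rule on a log10(E) grid.

logE = [8.0, 8.5, 9.0, 9.5, 10.0]        # log10(E/GeV) grid (hypothetical)
e2phi = [1e-8, 2e-8, 3e-8, 2e-8, 1e-8]   # E^2*Phi(E) [GeV cm^-2 s^-1 sr^-1] (made up)
aeff = [1e3, 1e5, 1e6, 5e6, 1e7]         # exposure A*Omega [cm^2 sr] (made up)
T = 7.5e7                                # livetime [s] (made up)

def expected_events(logE, e2phi, aeff, T):
    # Phi(E) dE = (E^2 Phi) / E * ln(10) * dlog10(E)
    ln10 = math.log(10.0)
    total = 0.0
    for i in range(len(logE) - 1):
        f0 = e2phi[i] * aeff[i] * ln10 / 10.0 ** logE[i]
        f1 = e2phi[i + 1] * aeff[i + 1] * ln10 / 10.0 ** logE[i + 1]
        total += 0.5 * (f0 + f1) * (logE[i + 1] - logE[i])
    return T * total

print(f"expected events: {expected_events(logE, e2phi, aeff, T):.3g}")
```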
Using the parameters from the first such complete model\cite{ESS2001}, assuming an all-proton composition, and integrating over the RICE sensitivity and livetime, we obtain an estimated 0.084 neutrino detections over the livetime quoted herein. Other recent estimates, which assume large admixtures of iron in the cosmic all-charged spectrum, result in estimated rates an order of magnitude smaller (0.0063 events\cite{Kotera}). \section{The Future of in-ice UHE Neutrino Detection at South Pole} Clearly, larger effective volumes are needed to definitively confront the entire suite of extant neutrino flux models. Synoptic strategies (ANITA, e.g.) afford sensitive volumes of $3\times 10^6$ ${\rm km}^3$, albeit viewed at typical distances of order 100-400 km from the event vertex, resulting in high neutrino detection thresholds due to the 1/R signal strength losses. In the embedded signal detection scheme (RICE, e.g.), the neutrino interaction vertex is typically `close', but the sensitive volume is limited by the radio frequency ice attenuation length and the $\sim$2 km thickness of cold, highly RF-transparent ice, suggesting an ``ideal'' geometry of multiple `stations' of antennas deployed at shallow depths and separated by distances of order the radio attenuation length, each capable of independently imaging a neutrino interaction. Central to the in-ice detection scheme are favorable radio frequency ice properties and transparency. By now, several measurements have redundantly established bulk ice attenuation lengths of order 1--2 km in the frequency range of interest. Within the last 2--3 years, as relative gains in neutrino sensitivity diminished, and in anticipation of a next-generation successor experiment, the RICE mission has begun to focus on precise characterization of asymmetries in ice properties, particularly effects of internal scattering layers and inherent asymmetries in the single-ice-crystal dielectric tensor.
Both of these can be probed using bistatic radar echo sounding techniques. In this approach, a high-gain transmitter horn antenna is placed at one location on the snow surface, and the internal reflections both from within the snow and from the bedrock are recorded by a second high-gain receiver horn antenna. Geometric asymmetries in the ice response can be studied by rotating the azimuthal plane of polarization of the horn antennas. Figures \ref{fig:InternalLayers} and \ref{fig:BedRefl} show the measured reflections for times prior to (Fig. \ref{fig:InternalLayers}) and coincident with (Fig. \ref{fig:BedRefl}) the expected time of the bedrock echo, at a depth of 2850 m. Both show a strong dependence of the received signal on broadcast azimuthal angle, although only the latter shows the time delay between two polarizations indicative of a difference in index-of-refraction with orientation, i.e., birefringence. Quantitatively, the echo amplitudes observed from internal reflections are typically 20--40 dB reduced compared to those expected from a ``perfect mirror'', consistent with the characteristics of reflections from acid layers embedded within the ice itself. Frequency analysis of those reflections additionally corroborates the expected 1/f amplitude dependence of acid layer reflections. Note that, for both these Figures, we have averaged over 10K--40K waveform captures to enhance the signal-to-noise ratio, corresponding to a reduction in the incoherent noise by (typically) at least two orders of magnitude. None of these reflections would therefore be visible in a single ``event'', such as an in-ice neutrino interaction. \begin{figure}[htpb]\centerline{\includegraphics[width=15cm]{5us-25us-all-vertically-shifted.eps}}\caption{\it Ensemble of internal layer radar reflections observed, as a function of E-field polarization plane of vertically broadcast radio signals.
In this Figure, 40000 waveform captures have been averaged; the coordinate system used is a local coordinate system for which the ice flow axis makes an angle of $153^\circ$ with respect to our zero degree convention. Azimuthal polarization angle is shown in the key; also included are `cross-polarized' (+60$\times$+150) orientation results, for which transmitter and receiver horn are orthogonal to each other. Echo time is shown horizontally, and approximately translates to depth via: depth [km]$\approx$ t[ns]/12000. For visual clarity, successive vertical traces have been offset by $\pm$100 ns; reflection structure is actually synchronous to within 1 ns.}\label{fig:InternalLayers}\end{figure} \begin{figure}[htpb]\centerline{\includegraphics[width=15cm]{RT090.eps}}\caption{\it Ensemble of bedrock radar reflections observed, as a function of E-field polarization plane of vertically broadcast radio signals. }\label{fig:BedRefl}\end{figure} Lab studies have shown that completely aligned ice crystals convey radio waves with approximately 1.7\% reduced speeds for propagation transverse to the plane containing that crystal (the $\hat{c}$-axis). The time lag between the top three traces vs. the bottom three traces shown in Figure \ref{fig:BedRefl} corresponds to approximately 50 ns, over a total propagation time of 34000 ns, i.e., a birefringent asymmetry of order 0.15\% between wavespeed propagation along the ordinary (fast-) vs. extraordinary (slow-) axes. We can additionally use the {\it lack} of any asymmetry observed in Fig. \ref{fig:InternalLayers} to conclude that birefringence is a feature only of the lower (warmer) half of the ice sheet at South Pole, at a level of 0.25\% asymmetry. 
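The birefringence figures quoted above follow from simple arithmetic on the measured time lag; a quick numerical check, using only values stated in the text:

```python
# Quick check of the birefringence figures quoted in the text.
lag_ns = 50.0        # time lag between polarization groups (bedrock echo)
trip_ns = 34000.0    # total propagation time of the bedrock echo
asym = lag_ns / trip_ns
print(f"birefringent asymmetry: {asym:.2%}")

# Consistency of the caption's rule of thumb, depth[km] ~ t[ns]/12000,
# with the stated bedrock depth of 2850 m:
depth_km = trip_ns / 12000.0
print(f"implied bedrock depth: {depth_km:.2f} km")
```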
Taken together, these results indicate that losses due to internal radio layers will not result in appreciable loss of signal for neutrino-induced radio signals received in future experiments, and also that diminution of peak signal strength due to birefringence will similarly be noticeable in only $\sim$5\% of all neutrino detection geometries. Somewhat interestingly, Figure \ref{fig:InternalLayers} shows a marked dependence of the peak measured reflected amplitude on azimuth. Given that the only `preferred' horizontal direction is defined by the ice flow axis, it is natural to consider correlations between the amplitude variations observed in both the internal layer and also bedrock reflections, with the known bulk motion of the ice sheet. Fig. \ref{fig:A_v_phi} shows a strong correlation of three of the five most prominent observed internal layer echoes with the ice sheet flow direction and suggests that the internal acid layers are likely aligned (similar to a diffraction grating) by the local ice flow. \begin{figure}[htpb]\centerline{\includegraphics[width=15cm]{Amplitude_v_phi.eps}}\caption{\it Peak amplitude dependence of internal layer reflection, indexed by echo time, as a function of signal polarization. Note the correlation of phase with the local ice flow direction, indicated by the dashed blue arrow.}\label{fig:A_v_phi}\end{figure} \subsection{Dependence of attenuation length with depth} The observed echo amplitudes shown in Figure \ref{fig:InternalLayers} are largely determined by three factors: the intrinsic reflectivity of each layer, the diminution of signal power $P_{signal}$ with distance, and attenuation of the signal due to ice absorption. For the directional horn antennas used in this experiment, $P_{signal}\propto r^{-\alpha}$, with 1$<\alpha<$2.
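Given this geometric model, an amplitude attenuation length between two reflecting layers can be extracted from their echo amplitudes, assuming equal layer reflectivity. The sketch below is illustrative only: the echo amplitudes are hypothetical, and $\alpha$=2 is assumed.

```python
import math

def local_attenuation_length(t1_us, t2_us, A1, A2, alpha=2.0):
    """Amplitude attenuation length between two internal layers.

    Assumes equal layer reflectivity, power ~ r^-alpha geometric spreading
    (amplitude ~ r^-(alpha/2)), and the echo-time-to-depth rule of thumb
    depth[m] ~ t[ns]/12, so that the round-trip path is r = 2 * depth.
    Solves A1/A2 = (r2/r1)^(alpha/2) * exp((r2 - r1)/L) for L.
    """
    r1 = 2.0 * (t1_us * 1000.0) / 12.0   # round-trip path to layer 1 [m]
    r2 = 2.0 * (t2_us * 1000.0) / 12.0   # round-trip path to layer 2 [m]
    denom = math.log(A1 / A2) - (alpha / 2.0) * math.log(r2 / r1)
    return (r2 - r1) / denom

# Hypothetical echo amplitudes (arbitrary units) for the 6 us and 13.9 us layers:
print(f"L_atten ~ {local_attenuation_length(6.0, 13.9, 1.0, 0.25):.0f} m")
```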
If we assume approximately equivalent reflection coefficients for all observed internal layers, we can determine a `local' amplitude attenuation length between the first three and last three layers, as shown in Table \ref{tab:localLatten}, by direct application of the Friis equation\cite{Friis}, and using the azimuth-averaged values of amplitude. For this calculation, we take $\alpha$=2; assuming a cylindrical flux tube with no transverse spreading ($\alpha$=1) gives values approximately 20\% smaller than those presented in Table \ref{tab:localLatten}. Our calculations are consistent with the expected warming of the ice sheet from the bedrock below, and the corresponding reduction in attenuation length with increasing temperature. \begin{table}[htpb] \begin{center} \begin{tabular}{c|c|c|c} & 13.9$\mu$s& 17.2$\mu$s& 19.6$\mu$s\\ \hline 6$\mu$s& 3348 m & 1521 m & 1514 m \\ 9.6$\mu$s& 1170 m & 867 m & 964 m \\ 13.9$\mu$s& & 643 m & 849 m \\ \hline \end{tabular} \end{center} \caption{\it Inter-layer attenuation lengths, calculated from amplitudes measured for returns, and assuming uniform reflectivity of all layers, as discussed in text. Estimated systematic errors are of order 25--30\%. First column indicates first reflecting layer; successive columns indicate second reflecting layer used to calculate attenuation length via Friis Equation. These results affirm the expectation that primary neutrino sensitivity is poorest in the warm ice near the bedrock.} \label{tab:localLatten} \end{table} \subsection{Future Plans for Radioglaciology} Thus far, virtually all information on the radio frequency response of ice sheets has been derived using vertically broadcast signals. In the austral summer of 2011-12, RICE hardware will be used to broadcast RF at largely oblique angles, which will provide information on the ice response away from the ${\hat c}$-axis, and more typical of the geometry of the neutrino signals to be detected by in-ice experiments such as RICE.
As illustrated above, all RICE studies done thus far, however, substantiate the basic premise of such detectors: that the excellent RF properties of cold polar ice imply that englacial neutrino detectors provide the most cost-effective technique for confronting the cosmogenic neutrino flux. \subsection{The ARA Experimental Initiative} During the austral summer of 2010-11, initial deployments of the next generation of neutrino detection hardware, realized as the recently funded Askaryan Radio Array (ARA\cite{ARA}), were made at the South Pole. The first ARA deployment, in the form of a relatively shallow (20--30 m) ``testbed'' prototype, has already demonstrated 20 arc-minute angular reconstruction of calibration antennas, as well as sensitivity to variations in the received galactic noise and RF emissions from solar flares. An ambitious proposal, including 14 institutions from eight countries, has been submitted to develop an autonomously-powered (and therefore arbitrarily scalable) experiment capable of initially defining the cosmogenic neutrino flux, and, over the timescale of a decade, eventually performing statistical characterization of that flux. Comprising 37 stations, each individually with an energy reach approximately an order of magnitude below that of RICE, ARA will achieve a nearly 100-fold improvement in total effective volume via: \begin{enumerate} \item Direct digitization at the sensor (antenna) rather than on the surface, eliminating the $\sim$15 dB signal losses typically incurred by conveyance through coaxial cable. \item Order-of-magnitude reduction of the geometric scale of each ``station'' from the 200-m typical of RICE such that the temporal signal coincidence window can similarly be narrowed by a comparable factor of 10. \item Siting of the experiment several km from the main South Pole station itself, resulting in considerably lower ambient noise rates.
The remote ARA deployment site exhibits virtually none of the anthropogenic or wind-generated RFI that plagued the RICE data sample. \item Extension of the lower-frequency limit of the antenna response from the current 250 MHz to $\sim$150 MHz, resulting in improved response to off-Cherenkov-peak signals. \item ``Optimized'' antenna receiver placement, as opposed to the requirement that RICE co-deploy in boreholes being drilled for the AMANDA experiment. \end{enumerate} The 2011-12 austral season will include the first deployment of a full-fledged ARA station (``ARA-1''); that single station will, in one year, have equivalent neutrino sensitivity to the ten years of RICE data accumulated thus far and reported herein. The first results on neutrino searches from the ARA testbed should be forthcoming within the next few months. \section*{Acknowledgments} The authors particularly thank Chris Allen (U. of Kansas) for very helpful discussions, as well as our colleagues on the RICE and ANITA experiments. We also thank Andy Bricker of Lawrence High School (Lawrence, KS) for his assistance working with the Lawrence and Free State High School students. We also thank the winterovers at South Pole Station (Xinhua Bai, The Most Rev. Allan Baker, Philip Braughton, Christina Hammock, Michael Offenbacher, Mark Noske, Nicolas Hart-Michel, Robert Fuhrman, Flint Hamblin, and Nick Strehl) whose efforts were essential to the operation of this experiment. Sean Grullon contributed the ROOT software code used to produce the upper limit compilation presented in this document. This work was supported by the National Science Foundation's Office of Polar Programs (grant OPP-0826747) and QuarkNet programs.
\section{\bf Introduction} \vskip 0.4 true cm The original motivation for the present work concerns the open question of the regularity of hydrodynamical parameters of fluid flows. It is still not known whether, starting from smooth initial conditions in a three-dimensional fluid, any kind of blow-up or singularity will occur, and if so, when and how. A large number of works consider this problem in various special cases and obtain many results. It is known that the type of singularity is so strong that many kinds of integral norms of hydrodynamical quantities are also singular. However, almost all of these integral norms are defined via Lebesgue integration, while there are other types of integration that generalize the usual Riemann integral and do not coincide with the Lebesgue integral. A natural question thus arises: what can we say when our functions are not Lebesgue integrable? How should one replace (absolute) continuity and regularity in these new settings? As a first step it seems necessary to examine and generalize a direct relation between integration and continuity, and the Banach--Zarecki theorem provides perhaps the most visible instance of such a relation. It was therefore desirable to exhibit a more direct and closer relation between absolute continuity and the Lebesgue integral, to serve as a guide for further work. The Banach--Zarecki theorem is a classical theorem in real analysis with many applications, mostly in geometric and functional analysis as well as in some physical and engineering subjects. The original form of this theorem was stated and proved by Banach, and independently by Zarecki, for a real--valued function on an interval \cite{N}. For functions of a real variable with values in reflexive Banach spaces, the result is contained in \cite{F}, Theorem 2.10.13, where the codomain space has the Radon-Nikodym property.
There also exists another version of the theorem, initiated by an old result of Lusin \cite{L} and later extended to functions of a real variable with values in a metric space \cite{Du,DZ}. It is not surprising that there is a variety of extensions of this theorem to more variables, in many ways, and also by natural modifications of properties well known in the one-dimensional case, such as almost everywhere continuity and differentiability, integration by parts, and so on \cite{DZa,Ji,Ma}. In fact this theorem can be generalized to the concept of approximate continuity, which plays an important role in understanding the relationship between Riemann integrability (for almost everywhere continuous functions) and continuity on the one hand, and the relationship between approximate continuity and Lebesgue integrability (for almost everywhere approximately continuous functions) on the other \cite{DZ}. There exist alternative proofs of this theorem; although these differ in appearance, they are constructed from a common root (see e.g. \cite{Bru,Car,Roy,Yeh}). In the present work the classical form of the theorem is considered, since it seems possible to extend the results naturally to the more general cases mentioned above. The most convenient statement of the Banach--Zarecki theorem is \cite{Bru}: \begin{thm}{\label{th1}} {\it Let $F$ be a real--valued function defined on a real bounded closed interval $[a, b]$. A necessary and sufficient condition for $F$ to be absolutely continuous is that\\ $(i)$ $F$ is continuous and of bounded variation on $[a, b]$,\\ $(ii)$ $F$ satisfies Lusin's condition, i.e. it maps sets of Lebesgue measure zero into sets of Lebesgue measure zero.} \end{thm} The necessary condition is straightforward and will not be discussed here. Its proof is given in almost any textbook of real analysis \cite{Bru,JZ}. However, the sufficient condition is rather technical, requires some non--trivial effort, and may rarely be found in common references.
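A standard example showing that condition $(ii)$ cannot be dropped from Theorem \ref{th1} is the Cantor function $c:[0,1]\longrightarrow[0,1]$: it is continuous and non--decreasing (hence of bounded variation with $\bigvee^1_0(c)=1$), yet it maps the Cantor set, a set of Lebesgue measure zero, onto a set of measure one, so Lusin's condition fails. Accordingly $c$ is not absolutely continuous; indeed $c'=0$ almost everywhere, so that \begin{align*} \int_{[0,1]}c'\,d\lambda=0\neq 1=c(1)-c(0). \end{align*}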
Thus, our attempt is concentrated on providing an alternative proof of the sufficient condition, that is: if a real--valued function is continuous and of bounded variation and also satisfies Lusin's condition, then it is absolutely continuous. In \cite{Bru}, there is a proof of the sufficient condition employing an inequality also proved in that reference. The main tools of that approach are almost everywhere differentiability and the Vitali covering theorem. The present proof, by contrast, is based on the close relation between the Lebesgue integral and the properties of a measure space, which manifests itself essentially through the Radon-Nikodym theorem. Thus, the main tools used here are the Radon-Nikodym theorem and the properties of variations of functions. This new proof may nevertheless be worth considering for several reasons, such as the following. Here a slightly more general result is proven, namely Lemma \ref{4}, while only Corollary \ref{5} is needed for our proof. The concept of almost everywhere differentiability, and thus the Vitali covering lemma, is not used. The methods and techniques employed here seem applicable, and naturally generalizable, to a class of similar problems. There is hope to generalize this method to obtain an analogous version of absolute continuity in relation with other types of integration besides the Lebesgue integral. Finally, it is seen that some statements here are proven employing only conditions $(i)$ and $(ii)$ of the Banach--Zarecki theorem, without using the absolute continuity condition, while these statements are usually proved through a direct application of the absolute continuity condition in the common literature. In order to prove Theorem \ref{th1}, our strategy is to establish the following theorem, which illustrates more clearly the relation between the absolute continuity and the Lebesgue integral.
\begin{thm}\label{main} Suppose that $F:[a,b]\longrightarrow{\Bbb R}$ is continuous, of bounded variation, and satisfies Lusin's condition. Then there exists an integrable, and in fact Borel--measurable, function $f:[a,b]\longrightarrow{\Bbb R}$ such that \begin{align*} F(x)=F(a)+\int_{[a,x]}\,f\,d\lambda~~:~~~~\forall x\in [a,b], \end{align*} where $d\lambda$ in the integral comes from the Lebesgue measure $\lambda$. \end{thm} This theorem will immediately yield Theorem \ref{th1} through the application of the well-known statement \cite{Bru,JZ}: \begin{quote} {\em Let $f:[a,b]\longrightarrow{\Bbb R}$ be a Lebesgue integrable function and let $F(x)=F(a) + \int_{[a,x]}f\,d\lambda$; then $F$ is absolutely continuous on $[a,b]$.} \end{quote} In the next section, we prove Theorem \ref{main} in three steps, the first of which is well known from textbooks \cite{JZ}, while Step 2 and especially Step 3 are of our main interest. Throughout this paper the notation $\lambda$ denotes the Lebesgue measure, unless stated otherwise. \section{\bf The main result: new proof of Theorem 1.2} \vskip 0.4 true cm The proof is divided into three interconnected steps.\\ {\em Step 1.} We first prove the theorem assuming that $F$ is strictly increasing. In this case, the proof coincides with the standard proof given in common textbooks (see e.g. Theorem 4.3.8 of \cite{JZ}), which employs the Radon--Nikodym theorem. For completeness, let us briefly review the proof here. Since $F$ is strictly increasing, $F$ is a homeomorphism from $I=[a,b]$ to $J=F(I)=[F(a),F(b)]$ and so $F$ preserves Borel sets between $I$ and $J$. Let ${\mathcal B}$ be the collection of Borel measurable subsets of $I$; then we can define the new measure $\nu:{\mathcal B}\longrightarrow [0,\infty)$ by $\nu(E)=\lambda(F(E))$. It is clear that $\nu$ is a finite measure and is absolutely continuous relative to $\lambda$ (since $F$ satisfies Lusin's condition).
Therefore, according to the Radon--Nikodym theorem, there exists a (Borel) measurable and Lebesgue integrable function $f:I\longrightarrow {\Bbb R}$ such that \begin{eqnarray}\label{eq:5} \nu(E)=\int_E\,f\,d\lambda, \hspace{0.5cm} E\in {\mathcal B}. \end{eqnarray} In particular, if $E=[a,x]$ for $x\in I$, then $F(E)=[F(a),F(x)]$ and Eq.~\eqref{eq:5} immediately implies that \begin{align*} F(x)=F(a)+\int_{[a,x]}\,f\,d\lambda, \hspace{0.5cm} x\in I. \end{align*} This completes the proof of this step.\\ {\em Step 2.} Let $F$ be non--decreasing (i.e. increasing but not strictly increasing). Then the function $G(x)=F(x)+x$ is continuous, of bounded variation, and strictly increasing. The proof will be complete if we show that Lusin's property is fulfilled by $G$, i.e. for $N\subset[a,b]$, if $\lambda(N)=0$ then $\lambda(G(N))=0$. Since $F$ is non--decreasing, one easily observes that $F$ attains its constant values on disjoint intervals $S_k$, and the continuity of $F$ implies that the $S_k$~s are closed intervals, say $[a_k,b_k]$. Hence, in general, on $S=\bigcup_{k=1}^{+\infty} S_k$, $F$ takes the values $F(S)=\Big\{\mu_k\Big\}_{k=1}^{+\infty}$ where $\mu_k$ is the value of $F$ on $S_k$. The intervals $S_k$ may be arbitrarily small and their union $S$ is not necessarily closed. Now, since the $S_k$~s are disjoint, we can write \begin{align*} N_1=N\cap S, \hspace{1cm} N_2=N-N_1. \end{align*} Therefore, we have \begin{align*} \lambda(G(N))\leq \lambda(G(N_1))+\lambda(G(N_2)), \end{align*} while \begin{align*} \lambda(G(N_1))= \lambda\Big( \bigcup_{k=1}^{+\infty}G(N\cap S_k) \Big) \leq \sum_{k=1}^{+\infty}\lambda (G(N\cap S_k)). \end{align*} On the other hand, $G(N\cap S_k)=\{ \mu_k + x~|~x\in N\cap S_k\}$ and thus $\lambda(G(N\cap S_k))=\lambda(N\cap S_k)$, so \begin{align*} \lambda(G(N_1)) &\leq \sum_{k=1}^{+\infty}\lambda (G(N\cap S_k)) \nonumber\\ &= \lambda(\bigcup_{k=1}^{+\infty}(N\cap S_k))=\lambda(N_1)\leq \lambda(N)=0. \end{align*} Therefore $\lambda(G(N_1))=0$.
To prove $\lambda(G(N_2))=0$, we note that $F$ satisfies Lusin's condition, i.e. $\lambda(N_2)=0$ results in $\lambda(F(N_2))=0$, so for each $\epsilon>0$ we can find an open set $U$ such that $F(N_2)\subset U$ with $\lambda(U)<\epsilon$. In addition, since $\lambda(N_2)=0$, one can find an open set $U'$ including $N_2$ such that $\lambda(U')<\epsilon$. The open set $V:= U'\cap F^{-1}(U)$ contains $N_2$ and satisfies $\lambda(V)<\epsilon$ and $\lambda(F(V))<\epsilon$. Suppose $V=\bigcup_{k=1}^{+\infty}I_k$ where the $I_k$~s are disjoint open intervals. For each $I_k$, consider the two closed intervals (if they exist) $S_i$ and $S_j$ intersecting $I_k$ from the left and right and containing the left and right boundary points of $I_k$, respectively. Define $I'_k = I_k-(S_i\cup S_j)$ (it is possible that $I'_k$ is empty). Thus $I'_k\subset I_k$ and the $I'_k$~s are mutually disjoint. Let $V':=\bigcup_{k=1}^{+\infty}I'_k$; since the $S_k$~s are all outside $N_2$, we have $N_2\subset V'$ and thus $F(N_2)\subset F(V')$. It is important to note that for each $l$ and $k$, $S_l$ is either completely contained in $I'_k$ or disjoint from it. According to the conditions on $F$, i.e. being non--decreasing and continuous, one can deduce that $F(I'_k)$ is an interval (not necessarily closed or open), which we denote by $J_k$. Now we claim that the $J_k$~s are mutually disjoint. Otherwise, if $y\in J_k\cap J_l$ for some $k$ and $l$, then there exist at least two points $x_k\in I'_k$ and $x_l\in I'_l$ such that $F(x_k)=F(x_l)$. Hence there exists an $S_i$ such that $[x_k,x_l]\subset S_i$, but then $S_i$ is not completely contained in $I'_k$ or $I'_l$, which is a contradiction. The relations \begin{align*} F(N_2)\subset F(V')=\bigcup_{k=1}^{+\infty}F(I'_k)\subset U, \end{align*} imply that \begin{align*} \lambda\Big(\bigcup_{k=1}^{+\infty}J_k\Big)\leq \lambda(U)< \epsilon, \end{align*} and since the $J_k$~s are disjoint sets, \begin{align*} \sum_{k=1}^{+\infty}\lambda(J_k)< \epsilon.
\end{align*} It remains to determine the sets $G(I'_k)$ and estimate their measure. For each $k$, we have \begin{align*} G(I'_k)&=\Big\{F(x)+x~|~x\in I'_k \Big\}\\ \nonumber &\subseteq \Big\{y+x~|~y\in J_k,~x\in I'_k \Big\}\\ &\subseteq \Big(\inf(I'_k)+\inf(J_k)~~,~\sup(I'_k)+\sup(J_k)\Big), \end{align*} which results in \begin{align*} \lambda(G(I'_k))\leq \lambda(I'_k)+\lambda(J_k). \end{align*} So we obtain \begin{align*} \lambda(G(N_2)) \leq \lambda\Big( \bigcup_{k=1}^{+\infty}G(I'_k)\Big)\leq \sum_{k=1}^{+\infty}\lambda(G(I'_k)). \end{align*} The latter relations show that \begin{align*} \lambda(G(N_2))\leq \sum_{k=1}^{+\infty}\lambda(I'_k) + \sum_{k=1}^{+\infty}\lambda(J_k)< \epsilon + \lambda(U) < 2\epsilon. \end{align*} Thus $\lambda(G(N_2))=0$. This shows that Lusin's condition is fulfilled by $G(x)=F(x)+x$. Now, by Step 1, there is an integrable and Borel-measurable function $f_1:[a,b]\longrightarrow{\Bbb R}$ such that \begin{align*} G(x)-G(a)=\int_{[a,x]}f_1\,d\lambda, \end{align*} hence \begin{align*} F(x)-F(a)=\int_{[a,x]}f\,d\lambda, \end{align*} where $f=f_1-1$, and this completes the proof of Step 2.\\ {\em Step 3.} Finally, we assume that $F$ is continuous, of bounded variation, and satisfies Lusin's condition, and show that the theorem holds. To accomplish this, we make use of the following two lemmas. \begin{lem}{\label{3}} Let $F:[a,b]\longrightarrow{\Bbb R}$ be a continuous function of bounded variation. If $F=p-n$ is the Jordan decomposition of $F$, then $p$ and $n$ are continuous. \end{lem} \begin{lem}{\label{4}} In the situation of Lemma \ref{3}, let $N\subset [a,b]$ be such that $F(N)$ has Lebesgue measure zero. Then $p(N)$ and $n(N)$ are also of Lebesgue measure zero. \end{lem} Lemma \ref{4} immediately yields the following result. \begin{cor}{\label{5}} In the situation of Lemma \ref{3}, let $F$ satisfy Lusin's condition; then $p$ and $n$ also satisfy Lusin's condition.
\end{cor} It follows from Lemma \ref{3} and Corollary \ref{5} that both $p$ and $n$ are continuous, of bounded variation, and satisfy Lusin's condition. Then, since they are non--decreasing, by Step 2 there exist integrable and Borel--measurable real--valued functions $g$ and $h$ on $[a,b]$ such that $p(x)=p(a)+\int_{[a,x]}g\,d\lambda$ and $n(x)=n(a)+\int_{[a,x]}h\,d\lambda$, and the proof is completed by taking $f=g-h$. \section{\bf Proof of Lemma 2.1} \vskip 0.4 true cm It is sufficient to prove that $p$ is continuous. The continuity of $p$ can be achieved directly by the ($\epsilon$--$\delta$) method; however, an alternative proof is presented here because of its easier application in the proof of Lemma \ref{4}. By definition, \begin{eqnarray}\label{eq:6} p(x)= \bigvee^x_a(F)=\sup_P |F(P)|= \sup_P \sum_{k=1}^{n(P)} |F(x_k)- F(x_{k-1})| \end{eqnarray} is the variation of $F$ from $a$ to $x$, where the supremum is taken over all partitions $$P \,:\, {a = x_0 < x_1 < \cdots < x_n = x}$$ of $[a,x]$ and $n = n(P) = \#P - 1$. Therefore for arbitrary $\epsilon > 0$ there is a partition $P$ such that \begin{eqnarray}\label{eq:7} 0\leq \bigvee^x_a(F)- |F(P)|<\epsilon. \end{eqnarray} \begin{defn} For the given partition $P \,:\, a = x_0 < x_1 < \cdots < x_n = b$, let $x\in [x_{i-1}, x_i]$. Two adjacent partitions $P_1(x)$ and $P_2(x)$ are defined as \begin{align*} &&P_1(x) \,:\, {a=x_0 < \cdots < x_{i-1} \leq x},\\ &&P_2(x) \,:\, {x \leq x_i < \cdots < x_n=b}, \end{align*} and the partition $P'(x)$, considered as a refinement of $P$, is \begin{align*} P'(x):~a=x_0<\cdots<x_{i-1} \leq x \leq x_i<\cdots<x_n=b. \end{align*} \end{defn} For $\epsilon>0$ and its corresponding partition $P$ considered in Eq.~\eqref{eq:7}, one can define continuous functions ${\rm w}_i:[x_{i-1},x_i]\longrightarrow {\Bbb R}$ as \begin{eqnarray}\label{eq:8} {\rm w}_i(x)= |F(P_1(x))|.
\end{eqnarray} An application of the pasting lemma implies the existence of a continuous function ${\rm u}_{\epsilon}: [a,b]\longrightarrow {\Bbb R}$ such that on each $[x_{i-1},x_i]$, ${\rm u}_{\epsilon}$ is equal to ${\rm w}_i$. Therefore \begin{align*} \Big(\bigvee^x_a(F)- |F(P_1(x))|\;\Big)+\Big(\bigvee^b_x(F)- |F(P_2(x))|\;\Big)&=\bigvee^b_a(F)-|F(P'(x))|\\ &< \epsilon. \end{align*} The two terms on the left hand side of the above relation are nonnegative; considering in particular the first term, one finds that \begin{eqnarray}\label{eq:9} 0\leq p(x)- {\rm u}_{\epsilon}(x)<\epsilon, \end{eqnarray} in which Eqs.~\eqref{eq:7} and \eqref{eq:8} were applied. Now consider $\Big\{{\rm u}_{_{2^{-k}}}\Big\}_{k=1}^{\infty}$ as a sequence of continuous functions. Equation \eqref{eq:9} with $\epsilon=2^{-k}$ shows that this sequence converges uniformly to $p$; thus $p$ is continuous. \section{\bf Proof of Lemma 2.2} \vskip 0.4 true cm Let $N\subset [a,b]$ be such that $\lambda(F(N))=0$. For arbitrary $\epsilon>0$, consider its corresponding partition $P$ as introduced in Eq.~\eqref{eq:7}. It is sufficient to prove that $\lambda(p(N_i))=\lambda(n(N_i))=0$, where $N_i=N\cap [x_{i-1},x_i]$ ($1\leq i\leq n$). Since $F(N_i)$ has Lebesgue measure zero, there exists a sequence of disjoint open intervals $\{J_k\}_{k=1}^{\infty}$ such that $F(N_i)\subset \bigcup_{k=1}^{\infty} J_k$ and \begin{eqnarray}\label{eq:10} \sum_{k=1}^{\infty} \lambda(J_k)<\epsilon. \end{eqnarray} At most one of the $J_k$~s contains the point $F(x_{i-1})$ and at most one of them contains $F(x_i)$. If so, we exclude these two points from the $J_k$~s and split the interval(s) containing the points into two adjacent open intervals. This process clearly leaves relation \eqref{eq:10} unchanged. For each $J_k$ we have $F^{-1}(J_k)=\bigcup_{l=1}^{\infty} I_{kl}$, where the intervals $I_{kl}=(a_{kl}, b_{kl})$ are disjoint.
According to our hypothesis, one can easily observe that \begin{eqnarray}\label{eq:11} \lambda(p(N_i))\leq \sum_{k,l=1}^{\infty} \lambda(p(I_{kl})). \end{eqnarray} Choose any finite number of the intervals $I_{kl}$~s and call them $(a_1, b_1)$, $\cdots$, $(a_m, b_m)$, in such an order that we have the partition \begin{eqnarray}\label{eq:12} &\hspace{1cm} Q:~b_0=x_{i-1}\leq a_1<b_1<a_2<\cdots<a_m<b_m\leq a_{m+1}=x_i. \end{eqnarray} Thus \begin{align*} \bigvee^{x_i}_{x_{i-1}}(F)-|F(Q)|<\epsilon, \end{align*} which means that \begin{align*} \sum^m_{j=1}\!\!\Big(\!\bigvee^{b_j}_{a_j}(F)-|F(b_j)-F(a_j)|\!\Big) +\sum^m_{j=0}\!\!\Big(\!\bigvee^{a_{j+1}}_{b_j}(F)-|F(a_{j+1})-F(b_j)|\!\Big)<\epsilon. \end{align*} Each term on the left side is nonnegative; focusing on the first term and recalling the definition of $p$ through Eq.~\eqref{eq:6} together with its non-decreasing property, one concludes that \begin{align*} \sum^m_{j=1} \lambda(p(a_j,b_j))<\epsilon + \sum^m_{j=1} |F(b_j)-F(a_j)|. \end{align*} The above inequality holds for any finite number of $I_{kl}$~s, thus \begin{eqnarray}\label{eq:13} \sum^{\infty}_{k,l=1} \lambda(p(I_{kl}))<\epsilon + \sum^{\infty}_{k,l=1} |F(b_{kl})-F(a_{kl})|. \end{eqnarray} Our next task is to find an upper bound proportional to $\epsilon$ for the second term of the last equation. To do this we consider two separate cases. The first case is when $F(x_{i-1})=F(x_i)$. Choose again a finite number of $I_{kl}$~s, say $(a_j, b_j)$~s for $1\leq j\leq m$, and construct the partition $Q$ as introduced in Eq.~\eqref{eq:12}. This partition is a refinement of $x_{i-1}<x_i$ and so $|F(Q)|-|F(x_i)-F(x_{i-1})|<\epsilon$ and thus \begin{align*} \sum^{m}_{j=1} |F(b_j)-F(a_j)|\leq |F(Q)|<\epsilon. \end{align*} The last inequality holds for any finite number of $I_{kl}$~s and so is also valid for all of them. 
Therefore when $F(x_{i-1})=F(x_i)$, by the use of Eq.~\eqref{eq:13} we have \begin{eqnarray}\label{eq:14} \lambda(p(N_i))\leq \sum^{\infty}_{k,l=1} \lambda(p(I_{kl}))<2\,\epsilon. \end{eqnarray} The second case is related to the condition $F(x_{i-1})<F(x_i)$ (the opposite case is similar). Recall that $J_k$~s were disjoint open intervals containing $F(N_i)$ (except possibly the two points $F(x_{i-1})$ and $F(x_i)$) with total measure less than $\epsilon$. Thus we are able to divide them into three types: $J_k^+$~s whose points are greater than $F(x_i)$, $J_k^-$~s whose points are less than $F(x_{i-1})$, and $J_k^{\circ}$~s whose points are between $F(x_{i-1})$ and $F(x_i)$. First, attend to the $J_k^+$~s. In this case, take any finite number of $I_{kl}^+$~s (whose images are inside $J_k^+$~s), say $(a^+_j, b^+_j)$~s for $1\leq j\leq m$, such that $x_{i-1}\leq a^+_1<b^+_1<a^+_2<\cdots<a^+_m<b^+_m\leq x_i$. Images of these intervals lie inside a finite (say $s$) number of $J_k^+$~s, namely $J^+_{k_r}=(c^+_r, d^+_r)$ for $1\leq r\leq s$ where obviously $s\leq m$. Suppose that, in addition, $(c^+_r, d^+_r)$~s are arranged increasingly such that $F(x_i)\leq c^+_1<d^+_1 <c^+_2<\cdots<c^+_s<d^+_s$. The compact set $[x_{i-1},x_i]\cap F^{-1}(c^+_1)$ has a minimum and a maximum, respectively $\alpha^+$ and $\beta^+$. Since the images of all $(a^+_j, b^+_j)$~s are greater than $c^+_1\geq F(x_i)$, the intermediate value theorem implies that they all lie between $\alpha^+$ and $\beta^+$. Thus there exist the partition $R_1:~x_{i-1}<\alpha^+ < \beta^+ <x_i$ and its refinement $R_2:~x_{i-1}<\alpha^+ <a_1^+ < b_1^+ < \cdots < a_m^+ < b_m^+ < \beta^+ <x_i$. 
The relation $|F(R_2)|-|F(R_1)|<\epsilon$ regarding the fact that $F(\alpha^+)=F(\beta^+)=c^+_1$ implies that \begin{align*} \sum_{j=1}^m |F(b^+_j)-F(a^+_j)|<\epsilon, \end{align*} but since this is true for any finite number of considered intervals, so for $I_{kl}^+=(a_{kl}^+ , b_{kl}^+)$~s we have \begin{eqnarray}\label{eq:15} \sum_{k,l} |F(b_{kl}^+)-F(a_{kl}^+)|<\epsilon. \end{eqnarray} Quite similarly, for $I_{kl}^-=(a_{kl}^- , b_{kl}^-)$~s we have \begin{eqnarray*}\label{eq:16} \sum_{k,l} |F(b_{kl}^-)-F(a_{kl}^-)|<\epsilon. \end{eqnarray*} Finally consider $J_k^{\circ}$~s whose points are between $F(x_{i-1})$ and $F(x_i)$ where for each $k$, $F^{-1}(J_k^{\circ})=\bigcup_{l=1}^{\infty} I^{\circ}_{kl}$. Similar to the previous case choose a finite number of $I^{\circ}_{kl}$~s such as $(a^{\circ}_j, b^{\circ}_j)$~s for $1\leq j\leq m$ such that $x_{i-1}\leq a^{\circ}_1<b^{\circ}_1<a^{\circ}_2<\cdots< a^{\circ}_m<b^{\circ}_m\leq x_i$ and assume their images lie in $J^{\circ}_{k_r}=(c^{\circ}_r, d^{\circ}_r)$ for $1\leq r\leq s$ where clearly $s\leq m$. Again suppose $(c^{\circ}_r, d^{\circ}_r)$~s are arranged increasingly such that \begin{eqnarray}\label{eq:17} F(x_{i-1})\leq c^{\circ}_1<d^{\circ}_1<c^{\circ}_2 <\cdots<c^{\circ}_s<d^{\circ}_s\leq F(x_i). \end{eqnarray} Now define $\alpha^{\circ}_r=\min\Big( [x_{i-1},x_i]\cap F^{-1}(c^{\circ}_r)\Big)$ for $1\leq r\leq s$. Relation \eqref{eq:17} and the intermediate value theorem establish that \begin{eqnarray}\label{eq:18} x_{i-1}\leq \alpha^{\circ}_1<\alpha^{\circ}_2 <\cdots<\alpha^{\circ}_s<\alpha^{\circ}_{s+1}=x_i. \end{eqnarray} Note that in the above relation $\alpha^{\circ}_{s+1}$ is defined to be $x_i$. In addition, define $\beta^{\circ}_r=\max\Big( [x_{i-1}, \alpha^{\circ}_{r+1}]\\ \cap F^{-1}(d^{\circ}_r)\Big)$ for $1\leq r\leq s$ and also define $\beta^{\circ}_0=x_{i-1}$. 
This definition immediately yields that for each $r=1,\cdots, s-1$ we have $\alpha^{\circ}_r<\beta^{\circ}_r<\alpha^{\circ}_{r+1}$, while for $r=0$ we have $x_{i-1}=\beta^{\circ}_0\leq \alpha^{\circ}_1$ and for $r=s$ we have $\alpha^{\circ}_s<\beta^{\circ}_s\leq \alpha^{\circ}_{s+1}=x_i$. Thus, relation \eqref{eq:18} is finally refined so that one may define the partition \begin{eqnarray}\label{eq:19} &\hspace{1cm} S_1:~x_{i-1}=\beta^{\circ}_0\leq \alpha^{\circ}_1<\beta^{\circ}_1< \alpha^{\circ}_2<\cdots<\alpha^{\circ}_s<\beta^{\circ}_s\leq \alpha^{\circ}_{s+1}=x_i. \end{eqnarray} At this point we claim that for each $j, r$ ($1\leq j\leq m, 0\leq r\leq s$) we have $(a^{\circ}_j, b^{\circ}_j)\cap(\beta^{\circ}_r,\alpha^{\circ}_{r+1})=\varnothing$. If not, assume $y$ belongs to this set; then only two cases may occur. In the first case we have $F(y)<d^{\circ}_r<c^{\circ}_{r+1}$ for $1\leq r\leq s-1$, $F(y)<c^{\circ}_1$ for $r=0$ and $F(y)<d^{\circ}_s$ for $r=s$. The case $r=0$ cannot occur because $y\in (a^{\circ}_j, b^{\circ}_j)$ and the images of all $(a^{\circ}_j, b^{\circ}_j)$~s are greater than $c^{\circ}_1$. When $r=s$, since $F(y)<d^{\circ}_s\leq F(\alpha^{\circ}_{s+1})=F(x_i)$, the intermediate value theorem implies that there exists a point $z\in (y,x_i]$ such that $F(z)=d^{\circ}_s$. But according to the definition of $\beta^{\circ}_s$ we must have $z\leq \beta^{\circ}_s$, which contradicts the position of $y$. Finally, when $1\leq r\leq s-1$, since $F(y)<d^{\circ}_r<F(\alpha^{\circ}_{r+1}) =c^{\circ}_{r+1}$, the intermediate value theorem implies that there exists a point $z'\in (y,\alpha^{\circ}_{r+1})$ such that $F(z')=d^{\circ}_r$, but according to the definition of $\beta^{\circ}_r$ we must have $z'\leq \beta^{\circ}_r$, which is a contradiction. On the other hand, in the second case we may have $d^{\circ}_r<c^{\circ}_{r+1}<F(y)$ for $1\leq r\leq s-1$, $c^{\circ}_1<F(y)$ for $r=0$ and $d^{\circ}_s<F(y)$ for $r=s$. 
The case $r=s$ cannot occur because the images of all $(a^{\circ}_j, b^{\circ}_j)$~s are less than $d^{\circ}_s$. When $r=0$, since $F(\beta^{\circ}_0)=F(x_{i-1})\leq c^{\circ}_1<F(y)$, the intermediate value theorem implies that there exists a point $t\in [x_{i-1},y)$ such that $F(t)=c^{\circ}_1$. But according to the definition of $\alpha^{\circ}_1$ we must have $\alpha^{\circ}_1\leq t$, which contradicts the position of $y$. Finally, when $1\leq r\leq s-1$, since $F(\beta^{\circ}_r)=d^{\circ}_r<c^{\circ}_{r+1}<F(y)$, the intermediate value theorem implies that there exists a point $t'\in (\beta^{\circ}_r,y)$ such that $F(t')=c^{\circ}_{r+1}$, but according to the definition of $\alpha^{\circ}_{r+1}$ we must have $\alpha^{\circ}_{r+1}\leq t'$, which is a contradiction. Thus our claim is proved, that is, none of the points $a^{\circ}_j$ or $b^{\circ}_j$ lie inside the intervals $(\beta^{\circ}_r,\alpha^{\circ}_{r+1})$ or, in other words, all points $a^{\circ}_j$ and $b^{\circ}_j$ lie only inside the intervals $[\alpha^{\circ}_r,\beta^{\circ}_r]$. The above fact permits the definition of the partition $S_2$ as \begin{eqnarray}\label{eq:20} \hspace{1cm} S_2:~x_{i-1}&\!=\!&\beta^{\circ}_0\leq \alpha^{\circ}_1\leq a^{\circ}_1< b^{\circ}_1 <\cdots<a^{\circ}_{j_1}< b^{\circ}_{j_1}\leq \beta^{\circ}_1 \nonumber\\ &\!<\!& \alpha^{\circ}_2\leq a^{\circ}_{j_1+1}<b^{\circ}_{j_1+1} <\cdots<a^{\circ}_{j_{2}}< b^{\circ}_{j_{2}}\leq\beta^{\circ}_2 \nonumber\\ &\!<\!&\alpha^{\circ}_3 <\cdots <\alpha^{\circ}_s\leq\cdots<a^{\circ}_m<b^{\circ}_m\leq\beta^{\circ}_s\leq \alpha^{\circ}_{s+1}=x_i, \end{eqnarray} which is clearly a refinement of the partition $S_1$ defined in \eqref{eq:19}. Thus, according to our hypothesis, we see that $|F(S_2)|-|F(S_1)|<\epsilon$, which, by a simple but careful observation, results in the following relation \begin{align*} \sum_{j=1}^m |F(b^{\circ}_j)-F(a^{\circ}_j)|<\epsilon + \sum_{r=1}^s|F(\beta^{\circ}_r)-F(\alpha^{\circ}_r)|. 
\end{align*} Recalling the definitions of $\alpha^{\circ}_r$ and $\beta^{\circ}_r$, and since $J^{\circ}_{k_r}=(c^{\circ}_r, d^{\circ}_r)$, the above relation becomes \begin{align*} \sum_{j=1}^m |F(b^{\circ}_j)-F(a^{\circ}_j)|<\epsilon + \sum_{r=1}^s\lambda(J^{\circ}_{k_r}), \end{align*} and due to relation \eqref{eq:10} one obtains \begin{align*} \sum_{j=1}^m |F(b^{\circ}_j)-F(a^{\circ}_j)|<2\,\epsilon. \end{align*} Since the above relation is true for the end points of any finite number (here $m$) of $I^{\circ}_{kl}$~s, it is also valid for all of them, that is \begin{eqnarray} \label{eq:21} \sum_{k,l} |F(b^{\circ}_{kl})-F(a^{\circ}_{kl})|<2\,\epsilon. \end{eqnarray} Now, by gathering the relations \eqref{eq:15}, \eqref{eq:16} and \eqref{eq:21}, it is found that \begin{eqnarray}\label{eq:22} \sum^{\infty}_{k,l=1} |F(b_{kl})-F(a_{kl})|<4\,\epsilon. \end{eqnarray} Inequalities \eqref{eq:11}, \eqref{eq:13} and \eqref{eq:22} yield \begin{eqnarray}\label{eq:23} \lambda(p(N_i))\leq\sum^{\infty}_{k,l=1} \lambda(p(I_{kl}))<5\,\epsilon. \end{eqnarray} This establishes the zero measure of $p(N_i)$ when $F(x_{i-1})\neq F(x_i)$. It only remains to show that $\lambda(n(N_i))=0$ for the non-decreasing function $n=p-F$. In exactly the same way as relation \eqref{eq:11} was obtained, one easily finds that \begin{align*} \lambda(n(N_i))\leq \sum_{k,l=1}^{\infty} \lambda(n(I_{kl})), \end{align*} where still $I_{kl}=(a_{kl},b_{kl})$ and thus $n(I_{kl})\subset[n(a_{kl}),n(b_{kl})]$. Then we notice that for any two points $x,y\in[x_{i-1},x_i]$, since $n=p-F$, we have \begin{align*} |n(y)-n(x)|\leq |p(y)-p(x)|+|F(y)-F(x)|. \end{align*} Substituting $a_{kl}$ and $b_{kl}$ for $x$ and $y$ respectively, the latter relation yields \begin{align*} \lambda(n(N_i))\leq\sum_{k,l=1}^{\infty} \lambda(n(I_{kl}))\leq \sum_{k,l=1}^{\infty} \lambda(p(I_{kl}))+\sum^{\infty}_{k,l=1} |F(b_{kl})-F(a_{kl})|. 
\end{align*} The upper bounds for the first and second terms on the right hand side of the above relation, due to \eqref{eq:23} and \eqref{eq:22} respectively, prove the zero measure of $n(N_i)$. \section{\bf Conclusion} \vskip 0.4 true cm As one small step towards understanding the regularity of hydrodynamical quantities, we attempted to exhibit a more direct and clear dependence between continuity and integrability through the Lebesgue integral, with the hope of generalising the method to determine the situation for other types of integration. Indeed, there probably exists an alternative kind of absolute continuity in connection with types of integration other than the Lebesgue one. Furthermore, since the method used here essentially employed general measure-theoretic information, it seems meaningful to consider the measurability of fluid functions under the mechanism of singularity. In other words, the problem of blow up usually deals with singularities and therefore infinite integrals, while it is not yet known whether this dynamics can change even the measurability of solutions. It was seen here that absolute continuity can be extracted directly as a consequence of measure-type properties of functions. The idea of differentiability, which is a consequence of the Vitali covering lemma, was not used anywhere. Instead, the Radon--Nikodym theorem was the main tool, which relies solely on the excellent consistency between the Lebesgue integral and a measure space. In addition, Lemma \ref{4} was proven in a slightly more general form than needed for the proof of the Banach--Zarecki theorem. Although the classical version of this theorem was proven here, it would not be surprising if this proof could be generalised to more general spaces and even higher dimensions. \vskip 0.4 true cm \begin{center}{\textbf{Acknowledgments}} \end{center} The authors wish to sincerely thank the referee for useful comments and suggestions which definitely improved the paper. 
\\ \\ \vskip 0.4 true cm
\section{Introduction} Despite the recent successes of Reinforcement Learning \citep[e.g.][]{mnih2015humanlevel,Silver1140}, it has hardly been applied to real industrial problems. This could be attributed to two undesirable properties which limit its practical applications. First, it depends on a tremendous amount of interaction data that cannot always be simulated. This issue can be alleviated by model-based methods -- which we consider in this work -- that often benefit from better sample efficiencies than their model-free counterparts. Second, it relies on trial-and-error and random exploration. In order to overcome these shortcomings, and motivated by the path planning problem for a self-driving car, in this paper we consider the problem of controlling an unknown linear system $x(t)$ so as to maximise an \emph{arbitrary} bounded reward function $R$, in a critical setting where mistakes are costly and must be avoided at all times. This choice of rich reward space is crucial to have sufficient flexibility to model non-convex and non-smooth functions that naturally arise in many practical problems involving combinatorial optimisation, branching decisions, etc., while quadratic costs are mostly suited for tracking a fixed reference trajectory \citep[e.g.][]{Kumar2013}. Since experiencing failures is out of question, the only way to prevent them from the outset is to rely on some sort of prior knowledge. In this work, we assume that the system dynamics are partially known, in the form of a linear differential equation with unknown parameters and inputs. 
More precisely, we consider a linear system with state $x\in\mathbb{R}^p$, acted on by controls $u\in\mathbb{R}^q$ and disturbances $\omega\in\mathbb{R}^r$, and following dynamics in the form: \begin{equation} \label{eq:dynamics} \dot{x}(t)=A(\theta)x(t) + B u(t) + D \omega(t),\;t\geq0, \end{equation} where the parameter vector $\theta$ in the state matrix $A(\theta)\in\mathbb{R}^{p\times p}$ belongs to a compact set $\Theta \subset \mathbb{R}^d$. The control matrix $B\in\mathbb{R}^{p\times q}$ and disturbance matrix $D\in\mathbb{R}^{p\times r}$ are known. We also assume having access to the observation of $x(t)$ and to a noisy measurement of $\dot{x}(t)$ in the form $y(t)=\dot{x}(t) + C\nu(t)$, where $\nu(t)\in\mathbb{R}^s$ is a measurement noise and $C\in\mathbb{R}^{p\times s}$ is known. Assumptions over the disturbance $\omega$ and noise $\nu$ will be detailed further, and we denote $\eta(t) = C\nu(t) + D\omega(t)$. We argue that this structure assumption is realistic given that most industrial applications to date have been relying on physical models to describe their processes and well-engineered controllers to operate them, rather than machine learning. Our framework relaxes this modelling effort by allowing some \emph{structured uncertainty} around the nominal model. We adopt a data-driven scheme to estimate the parameters more accurately as we interact with the true system. Many model-based reinforcement learning algorithms rely on the estimated dynamics to derive the corresponding optimal controls \citep[e.g.][]{Lenz2015,Levine2015}, but suffer from \emph{model bias}: they ignore the error between the learned and true dynamics, which can dramatically degrade control performances \citep{Schneider1997}. 
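As a minimal simulation sketch of this setting (all dimensions, matrices and numerical values below are illustrative assumptions, with $d=1$ and Euler discretisation), the following code generates the kind of transition history used later for estimation:

```python
import numpy as np

# Euler-discretised simulation of dx/dt = A(theta) x + B u + D omega,
# with noisy derivative measurement y = dx/dt + C nu (Eq. dynamics).
rng = np.random.default_rng(0)
A = np.array([[0.0, 1.0], [0.0, 0.0]])       # known nominal part
phi_1 = np.array([[0.0, 0.0], [-1.0, 0.0]])  # known feature matrix, d = 1
B = np.array([[0.0], [1.0]])
C = np.array([[0.1], [0.1]])
D = np.array([[0.0], [1.0]])
theta = np.array([0.5])                      # true parameter, unknown in practice

A_theta = A + theta[0] * phi_1               # A(theta) = A + sum_i theta_i phi_i
dt, x = 0.01, np.array([1.0, 0.0])
data = []                                    # the history D_N of transitions
for _ in range(100):
    u = np.array([-0.1])
    w = rng.uniform(-0.05, 0.05, size=1)     # bounded disturbance omega(t)
    nu = rng.normal(size=1)                  # measurement noise nu(t)
    x_dot = A_theta @ x + B @ u + D @ w
    y = x_dot + C @ nu                       # noisy measurement of dx/dt
    data.append((x.copy(), y, u))
    x = x + dt * x_dot
assert len(data) == 100 and np.isfinite(x).all()
```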
To address this issue, we turn to the framework of \emph{robust} decision-making: instead of merely considering a point estimate of the dynamics, for any $N\in\mathbb{N}$, we build an entire \emph{confidence region} $\mathcal{C}_{N,\delta}\subset\Theta$, illustrated in \Cref{fig:estimation}, that contains the true dynamics parameter with high probability: \begin{align} \probability{\theta\in \mathcal{C}_{N,\delta}} \geq 1-\delta, \label{eq:confidence} \end{align} where $\delta\in(0,1)$. In \Cref{sec:estimation}, having observed a history $\mathcal{D}_{N} = \{(x_n, y_n,u_n)\}_{n\in\left[N\right]}$ of transitions, our first contribution extends the work of \citet{Abbasi2011}, who provide a confidence ellipsoid for the least-squares estimator, to our setting of feature matrices rather than feature vectors. The \emph{robust control objective} $V^r$ \citep{Bental2009,Bertsimas2011,Gorissen2015} aims to maximise the worst-case outcome with respect to this confidence region $\mathcal{C}_{N,\delta}$: \begin{equation} \label{eq:robust-control} \sup_{\mathbf{u}\in{(\mathbb{R}^q)}^\mathbb{N}} {V^r(\mathbf{u})}, \qquad \text{ where }\qquad {V^r(\mathbf{u})} \stackrel{def}{=} \inf_{\substack{\theta \in \mathcal{C}_{N,\delta} \\ \omega\in[\underline{\omega},\overline{\omega}]^\mathbb{R}}} \left[\sum_{n=N+1}^\infty \gamma^n R(x_n(\mathbf{u},\omega))\right], \end{equation} where $\gamma\in(0,1)$ is a discount factor, and $x_n(\mathbf{u},\bm{\omega})$ is the state reached at step $n$ under controls $\mathbf{u}$ and disturbances $\omega(t)$ within the given admissible bounds $[\underline\omega(t),\overline\omega(t)]$. Maximin problems such as \eqref{eq:robust-control} are notoriously hard, even when the reward $R$ has a simple form. Moreover, without a restriction on the shape of the functions $R$, we cannot hope to derive an explicit solution. In our second contribution, we propose a robust MPC algorithm for solving \eqref{eq:robust-control} numerically. 
In \Cref{sec:prediction}, we leverage recent results from the uncertain system simulation literature to derive an \emph{interval predictor} $[\underline{x}(t),\overline{x}(t)]$ for the system \eqref{eq:dynamics}, illustrated in \Cref{fig:prediction}. For any $N\in\mathbb{N}$, this predictor takes the information on the current state ${x}_N$, the confidence region $\mathcal{C}_{N,\delta}$, the planned control sequence $\mathbf{u}$ and the admissible disturbance bounds $[\underline{\omega}(t),\overline{\omega}(t)]$, and must verify the \emph{inclusion property}: \begin{equation} \label{eq:inclusion-property} \underline{x}(t)\leq x(t)\leq\overline{x}(t), \forall t\geq t_N. \end{equation} Since $R$ is generic, potentially non-smooth and non-convex, solving the optimal -- not to mention the robust -- control objective is intractable. In \Cref{sec:control}, facing a sequential decision problem with continuous states, we turn to the literature on tree-based planning algorithms. Although there exist works addressing continuous actions \citep{Busoniu2018,Weinstein2012}, we resort to a first approximation and discretise the continuous decision space $(\mathbb{R}^q)^\mathbb{N}$ by adopting a hierarchical control architecture: at each time, the agent can select a high-level \emph{action} $a$ from a finite space $\mathcal{A}$. Each action $a\in\mathcal{A}$ corresponds to the selection of a low-level controller $\pi_a$, which we take affine: $u(t) = \pi_a(x(t)) \stackrel{def}{=} -K_a x(t) + u_a.$ For instance, tracking a subgoal $x_g$ can be achieved with $\pi_g(x) = K(x_g - x)$. This discretisation induces a suboptimality, but it can be mitigated by diversifying the controller basis. The robust objective \eqref{eq:robust-control} becomes $\sup_{\mathbf{a}\in{\mathcal{A}}^\mathbb{N}} V^r(\mathbf{a})$, where $x_n(\mathbf{a}, \omega)$ stems from \eqref{eq:dynamics} with $u_n = \pi_{a_n}(x_n)$. 
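The hierarchical architecture of affine low-level controllers described above can be sketched as follows (the gain $K$ and the subgoal set are illustrative assumptions):

```python
import numpy as np

# Each high-level action a selects an affine low-level controller
# u = -K_a x + u_a; here the actions are subgoal-tracking controllers
# pi_g(x) = K (x_g - x), i.e. K_a = K and u_a = K x_g.
K = np.array([[1.0, 2.0]])                   # assumed feedback gain (q = 1, p = 2)

def make_subgoal_controller(x_g):
    """Affine controller tracking the subgoal x_g."""
    return lambda x: K @ (x_g - x)

# Discretised action space: a finite set of subgoals.
subgoals = [np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.0, 1.0])]
controllers = {a: make_subgoal_controller(g) for a, g in enumerate(subgoals)}

x = np.array([0.5, 0.5])
u = controllers[1](x)                        # low-level control for action a = 1
assert np.allclose(u, K @ (subgoals[1] - x))
```

Diversifying this controller basis (more subgoals, several gains) reduces the suboptimality induced by the discretisation.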
However, tree-based planning algorithms are designed for a single known generative model rather than a confidence region for the system dynamics. Our third contribution adapts them to the robust objective \eqref{eq:robust-control} by approximating it with a tractable surrogate $\hat{V}^r$ that exploits the interval predictions \eqref{eq:inclusion-property} to define a pessimistic reward. In our main result, we show that the best surrogate performance achieved during planning is guaranteed to be attained on the true system, and we provide an upper bound for the approximation gap and the suboptimality of our framework in \Cref{thm:minimax-regret-bound}. To the best of our knowledge, this is the first result of this kind for maximin control with generic costs. \Cref{alg:full} shows the full integration of the three procedures of estimation, prediction and control. In \Cref{sec:multi-model}, our fourth contribution extends the proposed framework to consider multiple modelling assumptions, while narrowing uncertainty through data-driven model rejection and still ensuring safety via robust model-selection during planning. Finally, in \Cref{sec:experiments} we demonstrate the applicability of \Cref{alg:full} in two numerical experiments: a simple illustrative example and a more challenging simulation for safe autonomous driving. \paragraph{Notation} The system dynamics are described in continuous time, but sensing and control happen in discrete time with time-step $\mathrm{d} t>0$. For any variable $z$, we use a subscript to refer to these discrete times: $z_n = z(t_n)$ with $t_n = n\mathrm{d} t$ and $n\in\mathbb{N}$. We use bold symbols to denote temporal sequences $\mathbf{z} = (z_n)_{n\in\mathbb{N}}$. We denote $z^+ = \max(z,0)$, $z^- = z^+-z$, $|z| = z^++z^-$ and $[n]=\{1,\dots, n\}$. 
\begin{figure} \begin{minipage}[b]{0.49\linewidth} \centering \includegraphics[trim={1cm 0 0 0}, clip, width=0.6\linewidth]{img/ellipsoid} \caption{The model estimation procedure, running on the obstacle avoidance problem of \Cref{sec:experiments}. The confidence region $C_{N,\delta}$ shrinks with the number of samples $N$.} \label{fig:estimation} \end{minipage} \hfill \begin{minipage}[b]{0.49\linewidth} \centering \includegraphics[trim={4cm 0 0 0}, clip, width=0.8\linewidth]{img/obstacle_small} \caption{The state prediction procedure running on the obstacle avoidance problem of \Cref{sec:experiments}. At each time step (red to green), we bound the set of reachable states under model uncertainty \eqref{eq:confidence}} \label{fig:prediction} \end{minipage}% \end{figure} \begin{algorithm}[tb] \caption{Robust Estimation, Prediction and Control} \label{alg:full} \begin{algorithmic} \STATE {\bfseries Input:} confidence level $\delta$, structure $(A,\phi)$, reward $R$, $\mathcal{D}_{[0]}\gets\emptyset,\, \mathbf{a}_1\gets\emptyset$ \FOR{$N = 0,1,2,\dots$} \STATE $\mathcal{C}_{N,\delta} \gets$\textsc{Model Estimation}$(\mathcal{D}_{N})$. \eqref{eq:polytope} \FOR{each planning step $k\in\{N,\dots,N+K\}=N+[K]$} \STATE $[\underline{x}_{k+1}, \overline{x}_{k+1}]\gets$ \textsc{Interval Prediction}($\mathcal{C}_{N,\delta}, \mathbf{a}_kb$) for each action $b\in \mathcal{A}$. \eqref{eq:interval-predictor} \STATE $\mathbf{a}_{k+1}$ $\gets$\textsc{Pessimistic Planning}$(\underline{R_{k+1}}([\underline{x}_{k+1}, \overline{x}_{k+1}]))$. \eqref{eq:opd} \ENDFOR \STATE Execute the recommended control $u_{N+1}$, and add the transition $(x_{N+1}, y_{N+1}, u_{N+1})$ to $\mathcal{D}_{[N+1]}$. \ENDFOR \end{algorithmic} \end{algorithm} \subsection{Related Work} The control of uncertain systems is a long-standing problem, to which a vast body of literature is dedicated. 
Existing work is mostly concerned with the problem of \emph{stabilisation} around a fixed reference state or trajectory, including approaches such as $\mathcal{H}_\infty$ control \citep[][]{Basar1996}, sliding-mode control \citep{Lu1997} or system-level synthesis \citep{Dean2017,Dean2018}. This paper fits in the popular MPC framework, for which adaptive data-driven schemes have been developed to deal with model uncertainty \citep{Sastry1990,Tanaskovic2014,Amos2018}, but lack guarantees. The family of tube-MPC algorithms seeks to derive theoretical guarantees of \emph{robust constraint satisfaction}: the state $x$ is constrained in a safe region $\mathbb{X}$ around the origin, often chosen convex \citep{Fukushima2007,Adetola2009,Aswani2013,Turchetta2016,Lorenzen2017,Kohler2019,Lu2019,Leurent2020robust}. Yet, many tasks cannot be framed as stabilisation problems (e.g. obstacle avoidance) and are better addressed with the minimax control objective, which allows more flexible goal formulations. Minimax control has mostly been studied in two particular instances. \paragraph{Finite states} Minimax control of finite Markov Decision Processes with uncertain parameters was studied in \citep{Iyengar2005,Nilim2005,Wiesemann2013}, who showed that the main results of Dynamic Programming can be extended to their robust counterparts only when the dynamics ambiguity set verifies a certain rectangularity property. Since we consider continuous states, these methods do not apply. \paragraph{Linear dynamics and quadratic costs} Several approaches have been proposed for cumulative regret minimisation in the LQ problem. In the \emph{Optimism in the Face of Uncertainty} paradigm, the best possible dynamics within a high-confidence region is selected under a controllability constraint, to compute the corresponding optimal control in closed-form by solving a Riccati equation. 
The results of \citep{abbasi-yadkori11a,Ibrahimi2013,Faradonbeh2017} show that this procedure achieves a $\tilde{\mathcal{O}}\left(N^{1/2}\right)$ regret. Posterior sampling algorithms \citep{Ouyang2017,abeille18a} select candidate dynamics randomly instead, and obtain the same result. Other works use noise injection for exploration, such as \citep{Dean2017,Dean2018}. However, neither optimism nor random exploration fits a critical setting, where ensuring safety requires instead considering pessimistic outcomes. The work of \citet{Dean2017} is close to our setting: after an offline estimation phase, they estimate the suboptimality gap between a minimax controller and the optimal performance. Our work differs in that it addresses costs of generic shape. Another work of interest is \citep{Rosolia2019}, where worst-case generic costs are considered. However, they assume knowledge of the dynamics, and their rollout-based solution only produces inner approximations and does not yield any guarantee. In this paper, interval prediction is used to produce outer approximations, while a near-optimal control is found using a tree-based planning procedure. \section{Model Estimation} \label{sec:estimation} To derive a confidence region \eqref{eq:confidence} for $\theta$, the functional relationship $A(\theta)$ must be specified. \begin{assumption}[Structure] \label{assumpt:structure} There exists a known feature tensor $\phi\in \mathbb{R}^{d \times p \times p}$ such that for all $\theta\in\Theta$, \begin{equation} A(\theta) = A + \sum_{i=1}^d \theta_i\phi_i, \end{equation} where $A,\phi_1,\dots,\phi_d\in\mathbb{R}^{p\times p}$ are known. For all $n$, we denote $\Phi_n = [\phi_1 x_n \dots \phi_d x_n]\in\mathbb{R}^{p\times d}$. We also assume to know a bound $S$ such that $\theta\in[-S,S]^d$. 
\end{assumption} We slightly abuse notation and include additional known terms in the measurement signal $ y(t) = \dot{x}(t) + C\nu(t) - A x(t) - Bu(t), $ to obtain a linear regression system $ y_n = \Phi_n\theta + \eta_n. $ \paragraph{Regularised least square} To derive an estimate of $\theta$, we consider the $L_2$-regularised regression problem with weights $\Sigma_p\in\mathbb{R}^{p\times p}$ and $\lambda\in\mathbb{R}^+_*$:\hfill $$ \min_{\theta\in\mathbb{R}^d} \sum_{n=1}^N \|y_n -\Phi_n\theta\|_{\Sigma_p^{-1}}^2 + \lambda\|\theta\|_{}^2. \refstepcounter{equation}~(\theequation)\label{eq:regression_min} $$ \begin{proposition}[Regularised solution] \label{prop:regularized_solution} The solution to \eqref{eq:regression_min} is \begin{align} \label{eq:vector_rls} \theta_{N,\lambda} = G_{N, \lambda}^{-1} \sum_{n=1}^N \Phi_n^\mathsf{\scriptscriptstyle T} \Sigma_p^{-1} y_n,\qquad \text{where }\quad G_{N, \lambda} = \sum_{n=1}^N \Phi_{n}^\mathsf{\scriptscriptstyle T}\Sigma_p^{-1}\Phi_{n} + \lambda I_d \in \mathbb{R}^{d\times d}. \end{align} \end{proposition} Substituting $y_n$ into \eqref{eq:vector_rls} yields the regression error: $ \theta_{N,\lambda} - \theta = G_{N, \lambda}^{-1}\sum_{n=1}^N \Phi_n^\mathsf{\scriptscriptstyle T} \Sigma_p^{-1}\eta_n - \lambda G_{N, \lambda}^{-1}\theta. $ To bound this error, we need the noise $\eta_n$ to concentrate. 
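The closed-form estimator of \Cref{prop:regularized_solution} can be sketched numerically on simulated regression data (the dimensions, noise level and regularisation weight below are illustrative assumptions):

```python
import numpy as np

# Regularised least squares with matrix features, Eq. (vector_rls):
#   G = sum_n Phi_n^T Sigma_p^{-1} Phi_n + lam I_d,
#   theta_hat = G^{-1} sum_n Phi_n^T Sigma_p^{-1} y_n.
rng = np.random.default_rng(0)
p, d, N, lam = 3, 2, 200, 1.0
Sigma_p_inv = np.linalg.inv(np.eye(p))       # noise covariance proxy (assumed known)
theta_true = np.array([0.5, -0.3])

# Simulated data y_n = Phi_n theta + eta_n with Phi_n in R^{p x d}.
Phis = rng.normal(size=(N, p, d))
ys = np.einsum('npd,d->np', Phis, theta_true) + 0.01 * rng.normal(size=(N, p))

G = lam * np.eye(d) + np.einsum('npd,pq,nqe->de', Phis, Sigma_p_inv, Phis)
rhs = np.einsum('npd,pq,nq->d', Phis, Sigma_p_inv, ys)
theta_hat = np.linalg.solve(G, rhs)
assert np.allclose(theta_hat, theta_true, atol=0.05)
```

With informative features, the estimate converges to the true parameter as $N$ grows, up to a small regularisation bias $\lambda G_{N,\lambda}^{-1}\theta$.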
\begin{assumption}[Noise Model] \label{assumpt:gaussian-noise} We assume that \begin{enumerate} \item at each time $t\geq0$, the combined noise $\eta(t)$ is an independent sub-Gaussian noise with covariance proxy $\Sigma_p \in \mathbb{R}^{p\times p}$: $$ \forall u\in\mathbb{R}^p,\; \expectedvalue \left[ \exp{\left( u^\mathsf{\scriptscriptstyle T} \eta(t)\right)}\right] \leq \exp{\left( \frac{1}{2} u^\mathsf{\scriptscriptstyle T} \Sigma_p u\right)}; $$ \item at each time $t\geq0$, the disturbance $\omega(t)$ is enclosed by \emph{known} bounds $\underline{\omega}(t) \leq \omega(t) \leq \overline{\omega}(t)$, whose amplitude verifies $\sum_{n=0}^\infty \gamma^n C_\omega(t_n) < \infty$, where $$C_\omega(t) \stackrel{def}{=} \sup_{\tau\in[0,t]} \|\overline{\omega}(\tau) - \underline{\omega}(\tau)\|_2.$$ \end{enumerate} \end{assumption} \begin{theorem}[Confidence ellipsoid, a matricial version of \citealt{Abbasi2011}] \label{thm:confidence_ellipsoid} Under \Cref{assumpt:gaussian-noise}, it holds with probability at least $1-\delta$ that \begin{align} \label{eq:confidence-ellipsoid} \| \theta_{N,\lambda} - \theta\|_{G_{N,\lambda}} \leq \beta_N(\delta), \quad \text{with}\quad \beta_N(\delta)\stackrel{def}{=} \sqrt{2\ln \left(\frac{\det(G_{N,\lambda})^{1/2}}{\delta\det(\lambda I_d)^{1/2}}\right)} + (\lambda d)^{1/2}S. \end{align} \end{theorem} We convert this confidence ellipsoid $\mathcal{C}_{N,\delta}$ from \eqref{eq:confidence-ellipsoid} into a polytope for $A(\theta)$. For simplicity, we present here a simple but coarse strategy: bound the ellipsoid by its enclosing axis-aligned hypercube: \begin{align} \label{eq:polytope} A(\theta)\in \left\{ A_N +\sum_{i=1}^{2^d}\alpha_{i}\Delta A_{N,i}: \alpha\geq 0, \sum_{i=1}^{2^d}\alpha_{i}=1\right\} \end{align} where $A_N = A(\theta_{N,\lambda}),\; h_i\in\{-1,1\}^d,\; \Delta A_{N,i} = {h_i} \sqrt{\frac{\beta_N(\delta)}{\lambda_{\max}(G_{N,\lambda})}}$. A tighter polytope derivation is presented in the Supplementary Material. 
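The conversion of the confidence ellipsoid into the $2^d$ vertex matrices of an enclosing axis-aligned box can be sketched as follows. Instead of the coarse radius above, this sketch uses the per-axis half-widths $\beta_N(\delta)\sqrt{(G_{N,\lambda}^{-1})_{ii}}$ of the ellipsoid, which also yield an enclosing box; all numerical values are illustrative assumptions.

```python
import numpy as np
from itertools import product

# Enclose the ellipsoid {theta : ||theta - theta_hat||_G <= beta} in an
# axis-aligned box and enumerate the 2^d corner matrices A_N + Delta A_i.
def enclosing_box_vertices(theta_hat, G, beta, phis, A0):
    d = len(theta_hat)
    half_widths = beta * np.sqrt(np.diag(np.linalg.inv(G)))  # per-axis extent
    A_N = A0 + sum(theta_hat[i] * phis[i] for i in range(d))
    vertices = []
    for h in product([-1.0, 1.0], repeat=d):                 # the 2^d corners h_i
        dA = sum(h[i] * half_widths[i] * phis[i] for i in range(d))
        vertices.append(A_N + dA)
    return A_N, vertices

A0 = np.zeros((2, 2))
phis = [np.eye(2), np.array([[0.0, 1.0], [0.0, 0.0]])]
A_N, verts = enclosing_box_vertices(np.array([0.5, -0.2]), 10 * np.eye(2), 0.3,
                                    phis, A0)
assert len(verts) == 4                                       # 2^d vertices, d = 2
```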
\section{State Prediction} \label{sec:prediction} A simple solution to \eqref{eq:inclusion-property} is proposed in \citep{Efimov2012}, where, given bounds $\underline{A}\leq A(\theta)\leq\overline{A}$ from $\mathcal{C}_{N,\delta}$ they use matrix interval arithmetic to derive the predictor: \begin{proposition}[Simple predictor of \citealt{Efimov2012}] Assuming that \eqref{eq:confidence} is satisfied for the system \eqref{eq:dynamics}, then the interval predictor following $\underline{x}(t_N)=\overline{x}(t_N)={x}(t_N)$ and: \begin{eqnarray} \dot{\underline{x}}(t) = \underline{A}^{+}\underline{x}^{+}(t)-\overline{A}^{+}\underline{x}^{-}(t)-\underline{A}^{-}\overline{x}^{+}(t) +\overline{A}^{-}\overline{x}^{-}(t) +Bu(t) + D^{+}\underline{\omega}(t)-D^{-}\overline{\omega}(t),\label{eq:predictor-naive}\\ \dot{\overline{x}}(t) = \overline{A}^{+}\overline{x}^{+}(t)-\underline{A}^{+}\overline{x}^{-}(t)-\overline{A}^{-}\underline{x}^{+}(t)+\underline{A}^{-}\underline{x}^{-}(t)\nonumber+Bu(t) + D^{+}\overline{\omega}(t)-D^{-}\underline{\omega}(t),\nonumber \end{eqnarray} ensures the inclusion property \eqref{eq:inclusion-property} with confidence level $\delta$. \end{proposition} However, \citet{leurent2019interval} showed that this predictor can have unstable dynamics, even for stable systems, which causes a fast build-up of uncertainty. They proposed an enhanced predictor which exploits the polytopic structure \eqref{eq:polytope} to produce more stable predictions, at the price of a requirement: \begin{assumption} \label{assumpt:metzler} There exists an orthogonal matrix $Z\in\mathbb{R}^{p\times p}$ such that $Z^\mathsf{\scriptscriptstyle T} A_N Z$ is Metzler\footnote{We say that a matrix is Metzler when all its non-diagonal coefficients are non-negative.}. \end{assumption} In practice, this assumption is often verified: it is for instance the case whenever $A_N$ is diagonalisable. 
The similarity transformation of \citet{Efimov2013} provides a method to compute such a $Z$ when the system is observable. To simplify the notation, we will further assume that $Z = I_p$. Denote $ \Delta A_{+}=\sum_{i=1}^{2^d}\Delta A_{N,i}^{+},\;\Delta A_{-}=\sum_{i=1}^{2^d}\Delta A_{N,i}^{-}$. \begin{proposition}[Enhanced predictor of \citealt{leurent2019interval}] \label{prop:predictor} If \eqref{eq:polytope} and \Cref{assumpt:metzler} are satisfied for the system \eqref{eq:dynamics}, then the interval predictor initialised with $ \underline{x}(t_N)=\overline{x}(t_N)={x}(t_N)$ and following: \begin{eqnarray} \dot{\underline{x}}(t) & = & A_{N}\underline{x}(t)-\Delta A_{+}\underline{x}^{-}(t)-\Delta A_{-}\overline{x}^{+}(t) +Bu(t)+D^{+}\underline{\omega}(t)-D^{-}\overline{\omega}(t),\label{eq:interval-predictor}\\ \dot{\overline{x}}(t) & = & A_{N}\overline{x}(t)+\Delta A_{+}\overline{x}^{+}(t)+\Delta A_{-}\underline{x}^{-}(t) +Bu(t)+D^{+}\overline{\omega}(t)-D^{-}\underline{\omega}(t),\nonumber \end{eqnarray} ensures the inclusion property \eqref{eq:inclusion-property} with confidence level $\delta$. \end{proposition} \begin{figure}[tp] \centering {\includegraphics[trim={0 0.6cm 0 0.4cm}, clip, width=0.6\linewidth]{img/interval-predictor}} \caption{Comparison of \eqref{eq:predictor-naive} and \eqref{eq:interval-predictor} for a simple system $\dot{x}(t)=-\theta x(t)+\omega(t)$, with $\theta\in[1, 2]$ and $\omega(t) \in [-0.05, 0.05]$.} \label{fig:predictor_example} \end{figure} \Cref{fig:predictor_example} compares the performance of the predictors \eqref{eq:predictor-naive} and \eqref{eq:interval-predictor} on a simple example. It suggests that \eqref{eq:interval-predictor} should always be preferred whenever \Cref{assumpt:metzler} is verified, falling back to \eqref{eq:predictor-naive} only as a last resort.
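To make the enhanced predictor concrete, here is a minimal Python sketch (ours, not the authors' code) that integrates \eqref{eq:interval-predictor} with a forward-Euler scheme on the scalar example of \Cref{fig:predictor_example}, where the Metzler condition holds trivially; $x^+ = \max(x,0)$ and $x^- = \max(-x,0)$ denote the positive and negative parts.

```python
def enhanced_interval_predictor(x0, a_n, da_plus, da_minus,
                                w_low, w_up, dt=1e-3, horizon=10.0):
    """Scalar version of the enhanced predictor (eq:interval-predictor) for
    dx/dt = a x + w, with a in [a_n - da, a_n + da]; returns the final interval."""
    pos = lambda z: max(z, 0.0)   # z^+
    neg = lambda z: max(-z, 0.0)  # z^-
    x_low = x_up = x0
    for _ in range(int(horizon / dt)):
        dx_low = a_n * x_low - da_plus * neg(x_low) - da_minus * pos(x_up) + w_low
        dx_up = a_n * x_up + da_plus * pos(x_up) + da_minus * neg(x_low) + w_up
        x_low += dt * dx_low
        x_up += dt * dx_up
    return x_low, x_up

# Figure example: theta in [1, 2] (so a = -theta, a_n = -1.5, da = 0.5),
# omega in [-0.05, 0.05], x(0) = 1
lo, up = enhanced_interval_predictor(1.0, a_n=-1.5, da_plus=0.5, da_minus=0.5,
                                     w_low=-0.05, w_up=0.05)
```

Unlike the naive predictor \eqref{eq:predictor-naive}, which diverges on this example, the resulting interval stays bounded.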
\section{Robust Control} \label{sec:control} To evaluate the robust objective $V^r$ \eqref{eq:robust-control}, we approximate it thanks to the interval prediction $[\underline{x}(t), \overline{x}(t)]$. \begin{definition}[Surrogate objective] Let $\underline{x}_n(\mathbf{u}), \overline{x}_n(\mathbf{u})$ follow the dynamics defined in \eqref{eq:interval-predictor} and \begin{equation} \label{eq:surrogate-objective} \hat{V}^r(\mathbf{u}) \stackrel{def}{=} \sum_{n=N+1}^\infty \gamma^n \underline{R}_n(\mathbf{u})\quad\text{where}\quad\underline{R}_n(\mathbf{u}) \stackrel{def}{=} \min_{x\in[\underline{x}_n(\mathbf{u}), \overline{x}_n(\mathbf{u})]} R(x). \end{equation} \end{definition} Such a substitution makes the pessimistic reward $\underline{R}_n$ \emph{not Markovian}, since the worst case is assessed over the whole past trajectory. \begin{theorem}[Lower bound] \label{prop:lower-bound} The surrogate objective \eqref{eq:surrogate-objective} is a lower bound of the objective \eqref{eq:robust-control}: \begin{equation*} \hat{V}^r(\mathbf{u}) \leq V^r(\mathbf{u}) \leq V(\mathbf{u}). \end{equation*} \end{theorem} Consequently, since all our approximations are conservative, if we manage to find a control sequence such that no \textit{``bad event''} (e.g. a collision) happens according to the surrogate objective $\hat{V}^r$, such events are \emph{guaranteed} not to happen either when the controls are executed on the true system. To maximise $\hat{V}^r$, we cannot use DP algorithms, since the state space is continuous and the pessimistic rewards are non-Markovian. Rather, we turn to tree-based planning algorithms, which optimise a sequence of actions based on the corresponding sequence of rewards, requiring neither Markovity nor state enumeration. In particular, we consider the \emph{Optimistic Planning of Deterministic Systems} (\texttt{OPD}) algorithm \citep{Hren2008}, tailored for the case when the relationship between actions and rewards is deterministic.
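Computing $\underline{R}_n$ requires minimising $R$ over a box. When $R$ is concave or quasi-concave (as is the distance-based reward used in our experiments), the minimum over a convex polytope is attained at a vertex, so enumerating the $2^p$ corners is exact; the Python sketch below (an illustration of ours, practical only for small $p$) implements this.

```python
import numpy as np
from itertools import product

def pessimistic_reward(reward_fn, x_low, x_up):
    """Worst-case reward over the predicted box [x_low, x_up].
    Exact for (quasi-)concave rewards, whose minimum over a box is
    attained at one of the 2^p vertices; a heuristic otherwise."""
    corners = (np.where(mask, x_up, x_low)
               for mask in product([False, True], repeat=len(x_low)))
    return min(reward_fn(c) for c in corners)
```

For instance, with the goal-reaching reward $R(x)=1/(1+\|x-x_g\|_2)$, the worst case over a box is attained at the corner farthest from $x_g$.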
Indeed, the stochasticity of the disturbances and measurements is entirely encapsulated in $\hat{V}^r$: given the observations up to time $N$, both the predictor dynamics \eqref{eq:interval-predictor} and the pessimistic rewards in \eqref{eq:surrogate-objective} are deterministic. At each planning iteration $k\in[K]$, \texttt{OPD} progressively builds a tree $\mathcal{T}_{k+1}$ by forming upper-bounds $B_a(k)$ over the value of sequences of actions $a$, and expanding\footnote{The expansion of a leaf node $a$ refers to the simulation of its children transitions $aA = \{ab, b\in A\}$.} the leaf $a_k$ with the highest upper-bound: \begin{equation} \label{eq:opd} a_k = \argmax_{a\in\mathcal{L}_k} B_a(k), \quad B_a(k) = \sum_{n=0}^{h(a)-1} \gamma^n \underline{R}_n(a) + \frac{\gamma^{h(a)}}{1-\gamma} \end{equation} where $\mathcal{L}_k$ is the set of leaves of $\mathcal{T}_k$, $h(a)$ is the length of the sequence $a$, and $\underline{R}_n(a)$ the pessimistic reward \eqref{eq:surrogate-objective} obtained at time $n$ by following the controls $u_n = \pi_{a_n}(x_n)$. \begin{lemma}[Planning performance of \citealt{Hren2008}] \label{theorem:opd-regret} The suboptimality of the \texttt{OPD} algorithm \eqref{eq:opd} applied to the surrogate objective \eqref{eq:surrogate-objective} after $K$ planning iterations is: $$ \hat{V}^r(a_{\star}) - \hat{V}^r({a_K}) = \mathcal{O}\left(K^{-\frac{\log 1/\gamma}{\log \kappa}}\right); $$ where $\kappa \stackrel{def}{=} \limsup_{h\rightarrow\infty} \left|\left\{a\in A^h: \hat{V}^r(a)\geq \hat{V}^r(a^{\star}) - \frac{\gamma^{h+1}}{1-\gamma}\right\}\right|^{1/h}$ is a problem-dependent measure of the proportion of near-optimal paths. \end{lemma} Hence, by using enough computational budget $K$ for planning, we can get as close as we want to the optimal surrogate value $\hat{V}^r(a^{\star})$, at a polynomial rate.
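The core \texttt{OPD} loop can be sketched in a few lines of Python (our own simplified rendition of the algorithm of \citet{Hren2008}, not the authors' implementation); here \texttt{rewards\_fn} maps an action sequence to the (pessimistic) reward of its last transition.

```python
def opd(rewards_fn, actions, gamma, budget):
    """Optimistic Planning of Deterministic systems: repeatedly expand the
    leaf with the highest upper bound B_a = sum_n gamma^n r_n + gamma^h/(1-gamma)."""
    leaves = {(): (0.0, 1.0)}  # sequence -> (discounted return u, gamma^h)
    for _ in range(budget):
        # select the leaf with the highest B-value
        a = max(leaves, key=lambda s: leaves[s][0] + leaves[s][1] / (1 - gamma))
        u, g = leaves.pop(a)
        for b in actions:  # expand: simulate each child transition
            leaves[a + (b,)] = (u + g * rewards_fn(a + (b,)), g * gamma)
    # recommend the leaf with the highest lower bound (discounted return)
    return max(leaves, key=lambda s: leaves[s][0])
```

On a toy problem where action $1$ always yields reward $1$ and action $0$ yields $0$, the expansions concentrate on the all-ones path, as the analysis of near-optimal paths predicts.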
Unfortunately, there exists a gap between $\hat{V}^r$ and the true robust objective $V^r$, which stems from three approximations: (i) the true reachable set was approximated by an enclosing interval in \eqref{eq:inclusion-property}; (ii) the time-invariance of the dynamics uncertainty $A(\theta)\in\mathcal{C}_{N,\delta}$ was handled by the interval predictor \eqref{eq:interval-predictor} as if it were a time-varying uncertainty $A(\theta(t))\in\mathcal{C}_{N,\delta},\forall t$; and (iii) the lower-bound $\sum\min\leq \min\sum$ used to define the surrogate objective \eqref{eq:surrogate-objective} is not tight. However, this gap can be bounded under additional assumptions. \begin{theorem}[Suboptimality bound] \label{thm:minimax-regret-bound} Under two conditions: \begin{enumerate} \item a Lipschitz regularity assumption for the reward function $R$; \item a stability condition: there exist $P>0,Q_0\in\mathbb{R}^{p\times p}$, $\rho>0$, and $N_0\in\mathbb{N}$ such that $$\forall N>N_0,\quad \begin{bmatrix} A_N^\mathsf{\scriptscriptstyle T} P + P A_N + Q_0 & P|D| \\ |D|^\mathsf{\scriptscriptstyle T} P & -\rho I_r \\ \end{bmatrix}< 0;$$ \end{enumerate} we can bound the suboptimality of \Cref{alg:full} with planning budget $K$ as: \begin{equation*} V(a_\star) - \hat{V}^r(a_K) \leq \hlr{\underbrace{\Delta_\omega}_{\substack{\text{robustness to}\\ \text{disturbances}}}} + \hlb{\underbrace{\mathcal{O}\left(\frac{\beta_N(\delta)^2}{\lambda_{\min}(G_{N,\lambda})}\right)}_{\text{estimation error}}} + \hlg{\underbrace{\mathcal{O}\left(K^{-\frac{\log 1/\gamma}{\log \kappa}}\right)}_{\text{planning error}}} \end{equation*} with probability at least $1-\delta$, where $V(a)$ is the optimal expected return when executing an action $a\in\mathcal{A}$, $a_\star$ is an optimal action, and $\hlr{\Delta_\omega}$ is a constant which corresponds to an irreducible suboptimality suffered from being robust to instantaneous disturbances $\omega(t)$.
\end{theorem} It is difficult to check the validity of stability condition 2, since it applies to matrices $A_N$ produced by the algorithm rather than to the system parameters. A stronger but easier-to-check condition is that the polytope \eqref{eq:polytope} at some iteration becomes included in a set where this property is uniformly satisfied. For instance, if the features are sufficiently excited, the estimation converges to a neighbourhood of the true dynamics $A(\theta)$. This also allows us to further bound the input-dependent \hlb{estimation error} term. \begin{corollary}[Asymptotic near-optimality] \label{cor:pe} Under an additional persistent excitation (PE) assumption \begin{align} \label{eq:pe} \exists \underline{\phi},\overline{\phi}>0: \forall n\geq n_0,\quad \underline{\phi}^2 \leq \lambda_{\min}(\Phi_{n}^\mathsf{\scriptscriptstyle T}\Sigma_{p}^{-1}\Phi_{n}) \leq \overline{\phi}^2, \end{align} the stability condition 2 of \Cref{thm:minimax-regret-bound} can be relaxed to apply to the true system: there exist $P,Q_0,\rho$ such that $$\begin{bmatrix} A(\theta)^\mathsf{\scriptscriptstyle T} P + P A(\theta) + Q_0 & P|D| \\ |D|^\mathsf{\scriptscriptstyle T} P & -\rho I_r \\ \end{bmatrix}< 0;$$ and the suboptimality bound takes the more explicit form \begin{equation*} V(a_\star) - \hat{V}^r(a_K) \leq \hlr{{\Delta_\omega}} + \hlb{{\mathcal{O}\left(\frac{\log\left(N^{d/2}/\delta\right)}{N}\right)}} + \hlg{{\mathcal{O}\left(K^{-\frac{\log 1/\gamma}{\log \kappa}}\right)}} \end{equation*} which ensures asymptotic near-optimality when $N\to\infty$ and $K\to\infty$. \end{corollary} \section{Multi-model Selection} \label{sec:multi-model} The procedure we developed in \Cref{sec:estimation,sec:prediction,sec:control} relies on strong modelling assumptions, such as the linear dynamics \eqref{eq:dynamics} and \Cref{assumpt:structure}. But what if they are wrong?
\paragraph{Model adequacy} One of the major benefits of using the family of linear models, compared to richer model classes, is that they provide strict conditions allowing us to quantify the adequacy of the modelling assumptions to the observations. Given $N-1$ observations, \Cref{sec:estimation} provides a polytopic confidence region \eqref{eq:polytope} that contains $A(\theta)$ with probability at least $1-\delta$. Since the dynamics are linear, we can propagate this confidence region to the next observation: $y_{N}$ must belong to the Minkowski sum of a polytope representing model uncertainty $\mathcal{P}(A_{0} x_N + Bu_N, \Delta A_{1}x_N,\dots, \Delta A_{2^d}x_N)$ and a polytope $\mathcal{P}(0_p, \underline{\eta}, \overline{\eta})$ bounding the disturbance and measurement noises. \citet{delos2015} provide a way to test this membership in polynomial time using linear programming. Whenever it is not verified, we can confidently reject the $(A,\phi)$-modelling \cref{assumpt:structure}. This enables us to consider a rich set of potential features $\left((A^1, \phi^1), \dots, (A^M, \phi^M)\right)$ rather than relying on a single assumption, and to retain only those that are consistent with the observations so far. Then, every remaining hypothesis must be considered during planning. \paragraph{Robust selection} We temporarily ignore the parametric uncertainty on $\theta$ and simply consider several candidate dynamics models, which all correspond to different modelling assumptions. We also restrict ourselves to deterministic dynamics, which is the case of \eqref{eq:interval-predictor}. \begin{assumption}[Multi-model ambiguity] \label{assumpt:multi-model-ambiguity} The dynamics $f$ lie in a finite set of candidates $(f^m)_{m\in[M]}$. \end{assumption} We adapt our planning algorithm to balance these concurrent hypotheses in a robust fashion, i.e.
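As an illustration, the consistency test can be cast as a small linear program: the observation is consistent iff there exist convex weights $\alpha$ and a noise $\eta$ in its box such that the polytope point plus noise equals $y_N$. The sketch below is our own formulation using \texttt{scipy}, not the implementation of \citet{delos2015}.

```python
import numpy as np
from scipy.optimize import linprog

def is_consistent(y, center, vertex_offsets, eta_low, eta_up):
    """LP feasibility: does y lie in the Minkowski sum of the model polytope
    conv{center + v_i} and the noise box [eta_low, eta_up]?
    Decision variables: convex weights alpha (one per vertex) and noise eta."""
    p, m = len(y), len(vertex_offsets)
    V = np.column_stack(vertex_offsets)                       # p x m
    A_eq = np.vstack([np.hstack([V, np.eye(p)]),              # V @ alpha + eta = y - center
                      np.hstack([np.ones((1, m)), np.zeros((1, p))])])  # sum(alpha) = 1
    b_eq = np.concatenate([y - center, [1.0]])
    bounds = [(0, None)] * m + list(zip(eta_low, eta_up))
    res = linprog(np.zeros(m + p), A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.status == 0  # feasible: the model cannot be rejected
```

Whenever the LP is infeasible, the corresponding modelling hypothesis is rejected with confidence at least $1-\delta$.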
maximise a robust objective with discrete ambiguity: \begin{equation} \label{eq:robust-objective-discrete} V^r = \sup_{a\in\mathcal{A}^\mathbb{N}}\min_{m\in[M]} \sum_{n=N+1}^\infty \gamma^n R_n^m, \end{equation} where $R_n^m$ is the reward obtained by following the action sequence $a$ up to step $n$ under the dynamics $f^m$. This objective could be optimised in the same way as in \Cref{sec:control}, but this would result in a coarse and lossy approximation. Instead, we exploit the finite uncertainty structure of \Cref{assumpt:multi-model-ambiguity} to asymptotically recover the true $V^r$ by modifying the \texttt{OPD} algorithm in the following way: \begin{definition}[Robust UCB] We replace the upper-bound \eqref{eq:opd} on sequence values in \texttt{OPD} by: \begin{equation} \label{eq:robust-b-values} B_a^r(k) \stackrel{def}{=} \min_{m\in[M]} \sum_{n=0}^{h-1} \gamma^n R_n^m + \frac{\gamma^h}{1-\gamma}. \end{equation} \end{definition} Note that it is not equivalent to solving each control problem independently and following the action with highest worst-case value, as we show in the Supplementary Material. We analyse the sample complexity of the corresponding robust planning algorithm. \begin{proposition}[Robust planning performance] \label{theorem:drop-regret} The robust version of \texttt{OPD} \eqref{eq:robust-b-values} enjoys the same regret bound as \texttt{OPD} in \Cref{theorem:opd-regret}, with respect to the multi-model objective \eqref{eq:robust-objective-discrete}. \end{proposition} This result is of independent interest: the solution of a robust objective \eqref{eq:robust-objective-discrete} with discrete ambiguity $f\in\{f^m\}_{m\in[M]}$ can be recovered exactly, asymptotically when the planning budget $K$ goes to infinity, which Robust DP algorithms do not allow. 
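In code, the robust B-value is a one-line change to \eqref{eq:opd}. A Python sketch of ours, where each candidate model $m$ is represented by a hypothetical function mapping an action prefix to its reward at that step:

```python
def robust_b_value(reward_models, sequence, gamma):
    """Robust UCB (eq:robust-b-values): compute the optimistic B-value under
    every candidate model and keep the minimum, so that planning is optimistic
    about the future but pessimistic about the model ambiguity."""
    h = len(sequence)
    return min(sum(gamma ** n * r_m(sequence[:n + 1]) for n in range(h))
               + gamma ** h / (1 - gamma)
               for r_m in reward_models)
```

For example, with $\gamma=0.5$ and two models yielding constant rewards $1$ and $0.5$, the robust B-value of a length-2 sequence is $0.5+0.25+0.5^2/(1-0.5)=1.25$, the bound under the worse model.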
This also contrasts with the results obtained in \Cref{sec:control} for the robust objective \eqref{eq:robust-control} with continuous ambiguity $A(\theta)\in\mathcal{C}_{N,\delta}$, for which \texttt{OPD} only recovers the surrogate approximation $\hat{V}^r$, as discussed in \Cref{thm:minimax-regret-bound}. Note that here the regret depends on the number $K$ of node expansions, but each expansion now requires $M$ times more simulations than in the single-model setting. Finally, the two approaches of Sections \ref{sec:control} and \ref{sec:multi-model} can be merged by using the pessimistic reward \eqref{eq:surrogate-objective} in \eqref{eq:robust-b-values}. \section{Experiments} \label{sec:experiments} Videos and code are available at \url{https://eleurent.github.io/robust-beyond-quadratic/}. \paragraph{Obstacle avoidance with unknown friction} We first consider a simple illustrative example, shown in \Cref{fig:prediction}: the control of a 2D system moving by means of a force $(u_x, u_y)$ in a medium with anisotropic linear friction with unknown coefficients $(\theta_x, \theta_y)$. The reward encodes the task of navigating to reach a goal state $x_g$ while avoiding collisions with obstacles: $R(x) = \delta(x)/(1 + \|x - x_g\|_2)$, where $\delta(x)$ is $0$ whenever $x$ collides with an obstacle and $1$ otherwise. The actions $\mathcal{A}$ are constant controls in the up, down, left and right directions. For the reasons mentioned above, no existing robust baseline applies to our setting. We compare \Cref{alg:full} to the non-robust adaptive control approach that plans with the estimated dynamics $\theta_{N,\lambda}$, and thus enjoys the same prior knowledge of dynamics structure and reward. This isolates the benefits of the robust formulation itself, rather than gains stemming from other aspects of algorithm design.
We show in \Cref{tab:obstacle} the results of 100 simulations of a single episode: the robust agent performs worse than the nominal agent on average, but manages to ensure safety while the nominal agent collides with obstacles in $4\%$ of simulations. We also compare to a standard model-free approach, DQN, which does not benefit from the prior knowledge on the system dynamics, and is instead trained over multiple episodes. The reported performance is that of the final policy obtained after training for 3000 episodes, during which $897\pm64$ collisions occurred ($29.9\pm2.1$\%). We study the evolution of the suboptimality $V(x_N) - \sum_{n>N}\gamma^{n-N}R(x_n)$ with respect to the number of samples $N$, by comparing the empirical returns from a state $x_N$ to the value $V(x_N)$ that the agent would get by acting optimally from $x_N$ with knowledge of the dynamics. Although the assumptions of \Cref{thm:minimax-regret-bound} are not satisfied (e.g. non-smooth reward), the mean suboptimality of the robust agent, shown in \Cref{fig:regret}, still decreases polynomially with $N$: \Cref{alg:full} gets \emph{more efficient} as it is \emph{more confident} while \emph{ensuring safety} at all times. In comparison, the nominal agent enjoys a smaller suboptimality on average, but higher in the worst-case. \begin{figure}[!tbp] \centering \begin{minipage}[t]{0.43\textwidth} \vspace{0pt} \centering \includegraphics[trim={0 0.2cm 0 0.2cm}, clip, width=0.9\linewidth]{img/regret.pdf} \caption{The mean (solid), $95\%$ CI for the mean (shaded) and maximum (dashed) suboptimality with respect to $N$.} \label{fig:regret} \end{minipage} \hfill \begin{minipage}[t]{0.55\textwidth} \vspace{0pt} \centering \includegraphics[width=0.68\linewidth]{img/highway-small} \caption{The intersection crossing task. 
Trajectory intervals show behavioural uncertainty for each vehicle, with a multi-model assumption over their route.} \end{minipage} \end{figure} \begin{table}[tbp] \caption{Frequency of collision, minimum and average return achieved on a single episode, repeated with 100 random seeds. In both tasks, the robust agent performs worse than the nominal agent on average, but manages to ensure safety and attains a better worst-case performance.} \subfigure[Performances on the obstacle task]{ \centering \label{tab:obstacle} \begin{tabular}{lccc} \toprule Performance & failures & min & avg $\pm$ std \\ \midrule Oracle & 0\% & {11.6} & {$14.2 \pm 1.3$} \\ \midrule {Nominal} & {4\%} & {2.8} & \textbf{$\mathbf{13.8} \pm 2.0$} \\ \Cref{alg:full} & \textbf{0\%} & \textbf{10.4} & {$13.0 \pm 1.5$} \\ \midrule DQN (trained) & 6\% & 1.7 & $12.3\pm2.5$ \\ \bottomrule \end{tabular} } \subfigure[Performances on the driving task]{ \label{tab:driving} \centering \begin{tabular}{lccc} \toprule Performance & failures & min & avg $\pm$ std \\ \midrule Oracle & 0\% & {6.9} & $7.4 \pm 0.5$ \\ \midrule {Nominal 1} & 4\% & {5.2} & $\mathbf{7.3} \pm 1.5$ \\ {Nominal 2} & 33\% & {3.5} & $6.4 \pm 0.3$ \\ \Cref{alg:full} & \textbf{0\%} & \textbf{6.8} & $7.1 \pm 0.3$ \\ \midrule DQN (trained) & 3\% & 5.4 & $6.3\pm0.6$ \\ \bottomrule \end{tabular} } \vspace*{-0.5cm} \end{table} \textbf{Motion planning for an autonomous vehicle}~~ We consider the \href{https://github.com/eleurent/highway-env}{highway-env} environment \citep{highway-env} for simulated driving decision problems. An autonomous vehicle with state $\chi_0\in\mathbb{R}^4$ is approaching an intersection among $V$ other vehicles with states $\chi_i\in\mathbb{R}^4$, resulting in a joint traffic state $x = [\chi_0, \dots,\chi_V]^\top\in\mathbb{R}^{4V+4}$. These vehicles follow parametrized behaviours $\dot{\chi}_i=f_i(x,\theta_i)$ with unknown parameters $\theta_i\in\mathbb{R}^5$. 
Here we see a first advantage of the structure imposed in \Cref{assumpt:structure}: the uncertainty space of $\theta$ is $\mathbb{R}^{5V}$. In comparison, the traditional LQ setting where the whole state matrix $A$ is estimated would have resulted in a much larger parameter space $\theta\in\mathbb{R}^{16V^2}$. The system dynamics $f$, which describes the interactions between vehicles, can only be expressed in the form of \Cref{assumpt:structure} given the knowledge of the desired route for each vehicle, with features $\phi$ expressing deviations to the centerline of the followed lane. Since these intentions are unknown to the agent, we adopt the multi-model perspective of \Cref{sec:multi-model} and consider one model per possible route for every observed vehicle before an intersection. In \Cref{tab:driving}, we compare \Cref{alg:full} to a nominal agent planning with two different modelling assumptions: Nominal 1 has access to the true followed route for each vehicle, while Nominal 2 does not and picks the model with minimal prediction error. Again, we also compare to a DQN baseline trained over 3000 episodes, causing $1058\pm113$ collisions while training ($35\pm4\%$). As before, the robust agent has a higher worst-case performance and avoids collisions at all times, at the price of a decreased average performance. \section*{Conclusion} We present a framework for the robust estimation, prediction and control of a partially known linear system with generic costs. Leveraging tools from linear regression, interval prediction, and tree-based planning, we guarantee the predicted performance and provide a suboptimality bound. The applicability of the method is further improved by a multi-model extension, and demonstrated on two simulated tasks. \clearpage \section*{Broader Impact} The motivation behind this work is to enable the development of Reinforcement Learning solutions for industrial applications, whereas it has so far been mainly limited to simulated games.
In particular, many industries already rely on non-adaptive control systems and could benefit from increased efficiency, including Oil and Gas, robotics for industrial automation, Data Center cooling, etc. But more often than not, safety-critical constraints proscribe the use of exploration, and industrial actors are reluctant to turn to learning-based methods that lack accountability. This work addresses these concerns by focusing on risk-averse decisions and by providing worst-case guarantees. Note however that these guarantees are only as good as the validity of the underlying hypotheses, and \Cref{assumpt:structure} in particular should be submitted to a comprehensive validation procedure; otherwise, decisions formed on a wrong basis could easily lead to dramatic consequences in such critical settings. Beyond industrial perspectives, this work could be of general interest for risk-averse decision-making. For instance, parametrized epidemiological models have been used to represent the propagation of Covid\nobreakdash-19 and study the impact of lockdown policies. These model parameters are estimated from observational data and corresponding confidence intervals are often available, but rarely used in the decision-making loop. In contrast, our approach would enable evaluating and optimising the worst-case outcome of such public policies. \begin{ack} This work was supported by the French Ministry of Higher Education and Research, and CPER Nord-Pas de Calais/FEDER DATA Advanced data science and technologies 2015-2020. \end{ack}
\section{Introduction} \label{sec:intro} The IceCube experiment has recently detected neutrinos of astrophysical origin with energies up to $\sim$2~PeV~\cite{bigBird}. This is a breakthrough in high energy astrophysics, and represents the first observation of astrophysical neutrinos (except those seen from the sun and supernova 1987a~\cite{kamiokande1987,imb}). There are several candidate source classes that could make neutrinos up to 100 PeV, including active galactic nuclei, gamma-ray bursts, and starburst galaxies (for reviews, see~\cite{waxman_2013,anchordoqui_2014,murase_2014}). In the coming years, we seek to identify the currently unknown sources of PeV astrophysical neutrinos and determine the energy spectrum of those sources to learn about the physics that drives their central engines. In addition, the detection of ultra-high energy (UHE) neutrinos ($>100$~PeV) would open a new window into the universe, allowing us to study distant energetic astrophysical sources that are otherwise inaccessible. UHE neutrinos are created as a byproduct of the so-called GZK process~\cite{beresinsky_1969_cosmogenic}, the interaction of a cosmic ray (E $>10^{19.5}$~eV) with a cosmic microwave background photon~\cite{g,zk}. The current best limits on the flux of UHE neutrinos come from IceCube below $10^{19.5}$~eV~\cite{icecube2015} and ANITA above $10^{19.5}$~eV~\cite{anita2}. Detection of these neutrinos, often called cosmogenic neutrinos, would shed light on the source of the highest energy cosmic rays, tell us about the evolution of high energy sources in our universe, and give us information about the composition of high energy cosmic rays. We expect neutrinos from the GZK process to have extremely high energies, above 100~PeV~\cite{ess, kalashev, ABOWY, barger, stanev, kotera}.
Detection of UHE neutrinos would also allow us to study weak interactions at center of mass energies $\gtrsim100$~TeV, much greater than those available at particle colliders on Earth, such as the LHC~\cite{hooper_2002_cross_section,Connolly:2011vc,Klein:2013xoa}. As extremely relativistic particles, UHE neutrinos would enable sensitive tests of Lorentz invariance~\cite{gorham_2012_liv} and other beyond Standard Model processes. There are clear requirements for an ideal high energy neutrino observatory driven by the twin science goals of determining the origin and spectrum of the observed astrophysical IceCube signal, and discovering cosmogenic neutrinos to determine the origin of the highest energy cosmic rays while also probing particle physics at extremely high energies. The requirements for a high energy neutrino observatory are to 1) have the sensitivity in the PeV energy range to measure the observed IceCube astrophysical neutrino spectrum and extend the measurement to higher energies, 2) have the geometric acceptance at extremely high energies (E $>10^{17}$ eV) to detect cosmogenic neutrinos, even in the most pessimistic of reasonable neutrino flux models, 3) have the pointing resolution required (sub-degree) to determine the origin of the observed neutrinos, and 4) have the energy resolution (factor $\sim2$) required to measure the neutrino energy spectrum at PeV energies and above to learn about the physics that drives the sources of high energy cosmic rays and neutrinos. One promising way to detect the highest energy neutrinos is through the coherent, impulsive radio emission from electromagnetic showers induced by neutrino interactions in a dielectric --- the Askaryan effect~\cite{askaryan}. When a neutrino interacts with a dielectric such as ice, an electromagnetic cascade is initiated, and a net negative charge excess develops in the medium. This net charge excess moving faster than the speed of light in the medium yields Cherenkov radiation. 
For long wavelength, low frequency emission (frequency $<$1 GHz), the emission is coherent, and for high energy showers, the radio emission dominates. Beam test measurements~\cite{saltzberg_2001_sand,gorham_2005_salt,gorham_2007_ice} confirm that the emitted radio power scales as the square of the particle cascade energy and validate the expected angular emission pattern and frequency dependence from detailed numerical simulations~\cite{zas_1992_zhs}. A good medium for detection of high energy neutrinos is a large volume of a dense dielectric with a long radio attenuation length, such as glacial ice, which has an attenuation length $L_{\alpha} \sim 1$~km~\cite{avva,southpoleice}. We introduce here a new type of radio detector for high energy neutrinos that will meet these requirements. In Section \ref{sec:concept}, we review current radio detector techniques and describe the phased radio array concept. The projected gains in sensitivity using a phased array are presented in Section \ref{sec:results}. We conclude in Section \ref{sec:conclusion}. \section{A new experimental approach: an in-ice phased radio array} \label{sec:concept} \subsection{Defining a general approach} A high energy neutrino observatory that achieves the goals outlined in Section~\ref{sec:intro} must have sensitivity over a broad energy range from PeV to $10^{5}$ PeV scales. To reach the lowest possible energy threshold, the instrument should be located as close to the neutrino interactions as possible. The electric field strength falls as $1/r$ (where $r$ is the distance from the detector to the shower induced by the neutrino interaction) and also suffers attenuation in the detection medium. The lowest threshold will be achieved by a set of detectors that is directly embedded in a detection medium with a long radio attenuation length, such as glacial ice. 
To achieve a low energy threshold while also increasing the effective volume, the signals used to trigger the detector should have as high an effective gain as possible. High-gain antennas produce a higher signal-to-noise ratio (SNR) per antenna compared to low-gain antennas for radio signals aligned with the boresight, but have reduced angular coverage and may be impractical to deploy down a narrow borehole. Combining signals from multiple low-gain antennas in a phased array provides another way to achieve high gain, and therefore allows weaker neutrino signals to be detected while preserving full angular coverage. The phased array approach is the topic of this work. \subsection{Current radio detector techniques} RICE was a pathfinder experiment for the radio detection of UHE neutrinos and demonstrated the feasibility of many operation-critical technologies~\cite{kravchenko_2012_rice}. The extremely high energy range, above $10^{19}$~eV, is currently probed by the ANITA high altitude balloon experiment~\cite{instrument}. The proposed balloon-borne EVA experiment~\cite{eva} would be a novel way to reach the highest energy neutrinos, above $10^{19}$~eV. The ARA~\cite{araWhitepaper} and ARIANNA~\cite{arianna} experiments, ground-based radio arrays in early stages of development with a small number of stations deployed in Antarctica, have energy thresholds $\gtrsim 50$~PeV, reaching the heart of the cosmogenic neutrino regime. This is achieved by placing the detectors in the ice. ANITA, ARA, and ARIANNA all use a similar fundamental experimental design: an array of antennas (16 in the case of ARA) and a data acquisition system comprise a single station. The stations are quasi-independent in that each individual station can reconstruct a neutrino event. ANITA can be viewed as a single-station experiment in this context.
For ground-based experiments, multiple stations can be positioned several kilometers apart to cover large volumes of ice, and the neutrino event rate increases linearly with the number of stations. In current and previously-deployed experiments, a threshold-crossing trigger is used to determine when individual antennas receive an excess in power above typical thermal noise~\cite{instrument,ara,arianna_2015}. If a sufficient number of coincident antenna-level triggers occur within a short time window, a station-level trigger is formed, and the antenna waveforms of the candidate neutrino event are digitized and recorded. Essentially, this type of combinatoric threshold-crossing trigger is only sensitive to the amount of power seen by individual antennas as a function of time. With this triggering approach, the smallest signal that a station can see is determined by the gain of each individual antenna (i.e., how much power the antenna sees from signal compared to thermal noise). \subsection{An in-ice phased array concept} \label{sec:calc} We present here a new concept for radio detection of high energy neutrinos that will for the first time have sensitivity in the 1~PeV energy range, allowing us to characterize the measured IceCube astrophysical neutrino spectrum, extend the measurement to higher energies, and achieve meaningful overlap with IceCube in energy for energy calibration. This new radio detector would also achieve improved sensitivity in the UHE regime per station, provide superior pointing resolution at all energies, and provide stronger rejection against anthropogenic radio frequency interference compared to currently-implemented radio techniques. The key to lowering the energy threshold of a radio experiment and increasing sensitivity at higher energies is the ability to distinguish weak neutrino-induced impulsive signals from thermal noise.
For antennas triggering independently, the amplitude of the thermal noise is solely determined by the temperature of the ice and the noise temperature of the amplifiers (a smaller effect than the ice itself). Combining signals from many antennas in a phased array configuration averages down the uncorrelated thermal noise from each antenna while maintaining the same signal strength for real plane-wave signals (such as neutrinos). If we combine the signals from multiple antennas with the proper time delays to account for the distance between antennas, we can effectively increase the gain of the system of antennas for incoming plane waves from a given direction. Many different sets of delays with the same antennas can create multiple effective antenna beam patterns that would together cover the same solid angle as each individual antenna but with much higher gain. This procedure is called beamforming, and is an economical and efficient way to achieve the extremely high effective gain needed to push the energy threshold down to 1~PeV. Such interferometric techniques have been extensively used in radio astronomy~(for a review, see~\cite{thompson}). The effective gain $\mathrm{G}_{\mathrm{eff}}$ in dBi or dBd of the system is determined by the gain of each individual antenna in dBi or dBd, G, and the number of antennas, N, by: \begin{equation} \mathrm{G}_{\mathrm{eff}} = 10 \log_{10}(\mathrm{N}\times 10^{\mathrm{G}/10}). \label{eqn:gain} \end{equation} The trigger threshold of radio detectors is typically set by the rate at which antenna waveforms can be digitized and recorded while maintaining a high livetime fraction. Lower thresholds on the electric field at the antennas correspond to both increased efficiency for neutrino signals and higher trigger rates. 
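As a quick numerical check of Equation~\ref{eqn:gain} (the helper name below is ours), doubling the number of phased antennas adds $10\log_{10}2 \approx 3$~dB of effective gain:

```python
import math

def effective_gain_dbi(n_antennas, antenna_gain_dbi):
    # G_eff = 10 log10(N x 10^(G/10)): N antennas of gain G phased coherently
    return 10.0 * math.log10(n_antennas * 10.0 ** (antenna_gain_dbi / 10.0))
```

For example, phasing 16 antennas of 4~dBi each yields an effective gain of about 16~dBi.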
For a phased array, each trigger channel corresponds to a single effective beam, and the station-level trigger is the union of simple threshold-crossing triggers on the individual effective beams of the phased array, rather than individual antennas. In this configuration, the threshold on the electric field could be reduced by roughly the square root of the number of antennas that are phased together while maintaining the same overall trigger rate per trigger channel. Since the electric field produced at the antenna from Askaryan emission scales linearly with the energy in the particle cascade, this reduction in the effective electric field threshold directly translates into a lower energy threshold for finding neutrinos. Equivalently, the effective volume of the detector is increased at fixed neutrino energy since events could be detected from farther away. To minimize the number of trigger channels, thus minimizing complexity and cost, the phased array that provides the trigger needs to be as closely packed as possible. Since the overall physical size of the array compared to the wavelength of radiation determines the angular resolution of the instrument, the closer the spacing between antennas, the larger each effective beam is for a given frequency of radiation, and the fewer channels are required to cover the same solid angle of ice~\cite{thompson}. Further sensitivity gains may be possible by recognizing that the antennas that form the trigger do not have to be the same antennas used for detailed event analysis. Indeed, it is advantageous to construct two distinct, co-located antenna arrays. The first is the phased array that is as closely packed as possible and provides the most sensitive trigger possible (the ``trigger array''). The second array is a set of antennas that are spaced as far apart as is reasonably possible (many tens of meters) to provide the best pointing resolution and energy resolution possible for neutrino events (the ``pointing array''). 
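The square-root scaling of the field threshold follows from the fact that a coherent plane-wave signal adds in amplitude across antennas while uncorrelated thermal noise adds in power. A small Monte Carlo sketch (idealized assumptions: propagation delays perfectly compensated, unit-power noise on each antenna) illustrates this:

```python
import numpy as np

rng = np.random.default_rng(42)
N, T = 16, 200_000                   # number of antennas, time samples

# Plane-wave signal common to all antennas (delays already compensated)
signal = np.sin(2 * np.pi * 0.05 * np.arange(T))
noise = rng.standard_normal((N, T))  # uncorrelated unit-power thermal noise

beamformed = (signal[None, :] + noise).sum(axis=0)  # delay-and-sum output

# Signal amplitude adds coherently (x N), noise power adds incoherently (x N),
# so the beamformed SNR improves by ~N, i.e., the field threshold by ~sqrt(N).
snr_single = signal.var() / noise[0].var()
snr_beam = (N * signal).var() / noise.sum(axis=0).var()
```

For these parameters `snr_beam / snr_single` comes out close to `N = 16`, corresponding to a factor of 4 in electric field threshold.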
Although the station triggers on the compact trigger array, we do not need to digitize and record signals from those antennas -- instead, we digitize and record signals from the much farther spaced pointing array based on the timing given by the trigger. The compact trigger array also has the benefit that the position of the antennas only needs to be determined to within a few inches in order to phase the antennas properly. The distinct sets of antennas that comprise the pointing array and the trigger array enable both optimal sensitivity and optimal directional pointing and energy reconstruction compared to the scenario where one compromises both technical goals in order to find middle ground that uses the same antennas for the trigger and pointing. This two-component station design represents a departure from all previous radio detector neutrino experiments. The phased trigger array also has the benefit that it can help reject anthropogenic radio frequency interference. A low-gain antenna has a very wide beam pattern, and is sensitive to many incoming directions of radiation. However, a high-gain antenna, such as is effectively created by the phased trigger array, is highly directional. Since this configuration would have many effective high-gain beams, each pointed in a different direction, at any given time we can easily mask out directions where there is man-made interference from the trigger. Since man-made noise tends to come from specific incident directions corresponding to the location of sources of noise, the ability to mask out directions from the trigger improves background rejection and allows the threshold of the trigger to be set by thermal noise for a larger fraction of time, rather than the threshold increasing when more man-made radio frequency interference is present. 
Equation~\ref{eqn:gain} holds for the case where the signal is amplified with a low-noise amplifier for each antenna before the beams are formed (i.e., the noise contribution from each channel is the same and no significant noise is introduced after the beams are formed). If the loss in the system is low enough, the first stage of amplification could be performed after the beams have been formed, so each beam has a noise contribution from only one amplifier. This improvement would come at the cost of including any noise from loss in the cables and the beamforming components. In a typical case, the array views 250~K ice and the front-end system has a temperature of 70~K. Even if the beamforming hardware were lossless, the improvement in overall system temperature is small, approaching 10\% improvement only after phasing 400~antennas, since the noise from the ice dominates the noise from the amplifier system. Despite only modest potential improvements, this technique may ultimately allow for fewer front-end amplifiers to be used, which would reduce the cost of the array. An interferometric technique has been previously developed and used in post-processing analysis for fast, impulsive radio signals from neutrino interactions. This interferometric technique was first developed for and applied to the ANITA experiment~\cite{interferometry}, and has since been applied to other experiments~\cite{ara,ara2015}. Interferometric techniques have also been used to search for extremely high energy neutrinos, above $10^{22}$~eV, using the Westerbork Radio Synthesis Telescope and the moon as a target~\cite{lofar}. There have been efforts directed toward reconfiguring the triggering scheme of currently-deployed or soon-to-be deployed experiments, such as ANITA and ARA, to instead use real-time correlation triggering after 3-bit digitization of the Nyquist-sampled waveforms~\cite{ritc}. 
This technique is under development, and is expected to be used in the upcoming fourth flight of ANITA. There are two reasons why one should consider doing the beamforming in hardware. The first is that it leaves open the possibility of doing the signal amplification after the beams are formed to reduce the noise contribution from the front-end electronics. The second is that information is lost in the 3-bit digitization that is preserved by doing the full phasing in hardware. As described above, the optimization of the geometry of a station should be different for a phased radio array compared to currently-deployed instruments to fully exploit the power of the beamforming technique. We explore here two scenarios: the first is a 16-channel station much like the deployed stations for the ARA experiment but with a different geometry and with the phased array trigger implemented, and the second is a 400-channel station that would achieve a 1~PeV threshold. \subsection{Example: a 16-channel station} \begin{figure} \centering \includegraphics[width=12cm]{stationLayout.pdf} \caption{An example station layout for a 16-antenna phased trigger array and accompanying pointing array.} \label{fig:station} \end{figure} \begin{figure} \centering \includegraphics[width=12cm]{electronicsLayout.pdf} \caption{An example layout of the radio frequency chain of a 16-antenna phased trigger array and accompanying pointing array. For simplicity, not all channel paths are depicted.} \label{fig:electronics} \end{figure} \begin{figure} \centering \includegraphics[width=11cm]{beamPattern.pdf} \caption{The beam pattern for one trigger channel at 200 MHz for the configuration shown in Figure~\ref{fig:station}.} \label{fig:pattern} \end{figure} We introduce here the example of a 16-channel station that uses dipole-like antennas and is deployed down boreholes beneath the firn layer of a deep glacier. 
Possible sites for such a station, or set of stations, include South Pole, where the array would be $\sim200$~m below the surface and the ice is $\sim2.8$~km deep~\cite{koci,price}, and Summit Station in Greenland, where the array would be $\sim100$~m below the surface and the ice is $\sim3.0$~km deep~\cite{gow_1997_gisp2}. We simulate a configuration where the antennas are deployed down boreholes below the firn. Our simulations confirm the finding from ARA~\cite{araWhitepaper,ara} that due to the changing index of refraction in the firn as the glacial ice transitions to snow, incident radio emission is often deflected downwards away from a detector on the surface and is therefore undetected (see~\ref{sec:ray-tracing} for details). In contrast, receivers deployed below the firn layer are less affected by refraction and the increase in effective volume compared to a surface configuration is large (factors of 3 to 10 depending on the firn profile). The benefits of borehole-deployed antennas make them a cost effective approach. One possible station layout is shown in Figure~\ref{fig:station}. A trigger array is constructed of 16 dipole antennas strung vertically down one borehole as close together as possible. This configuration would naturally be sensitive to vertically-polarized signals, although one could combine signals from orthogonally-polarized antennas to create an unpolarized trigger. The pointing array would be constructed of additional antennas, with both horizontal and vertical polarization sensitivity, and would require at least two additional boreholes to uniquely determine the incident direction, timing, and polarization of the radio emission and thus the incoming direction of the neutrino. We would only need to digitize the signals from the pointing array antennas. An example of the radio frequency chain for a 16-antenna station is shown in Figure~\ref{fig:electronics}. 
The effective gain of such a 16-channel trigger array of dipole antennas calculated using Equation~\ref{eqn:gain} is 14.2~dBi (compared to 2.15~dBi for a single dipole), which corresponds to a factor of $\sim$4 in electric field threshold. The effective beam at 200~MHz for one trigger channel is shown in Figure~\ref{fig:pattern} as a function of elevation. The FWHM of the beam of a single trigger channel is $5.4^\circ$ in elevation with complete azimuthal coverage for this configuration. By adjusting the delays among antennas in different trigger channels, we can cover the relevant range of solid angle for incoming emission from visible neutrino events with only $\sim$15 beams. Antennas with a moderately higher gain that still cover the relevant range of solid angles would lead to a higher effective gain of the system. \subsection{Achieving An Energy Threshold of 1 PeV} The phased array design is scalable and can be configured to achieve an even lower energy threshold. For example, a phased array with 400 dipole antennas would have an effective gain of 28.2~dBi, and would push the electric field threshold down by a factor of $\sim$20 compared to currently-implemented techniques. For large numbers of antennas, the single-borehole configuration for the trigger array is no longer optimal. To keep the size of the trigger array compact, trigger antennas would be deployed down multiple boreholes. Phased arrays with thousands of channels are possible. Since the spectrum and angular distribution of Askaryan emission is largely independent of the neutrino energy~\cite{alvarez-muniz_2012_hadronic}, the same experimental design principles and simulation methods are valid over the large range of neutrino energies we consider. \section{Comparison with current experimental techniques using an independent Monte Carlo simulation} \label{sec:results} \begin{figure}[h] \begin{center} \includegraphics[width=12cm]{acceptance_comparison.pdf} \end{center} \caption{ Effective Volume vs. 
Energy for 10 stations installed 100~m below the surface at Summit Station, Greenland. The yellow line is for 16-channel stations with no phasing, the orange line is for similar stations but with phasing, and the red line is for 400-antenna phased array stations. For each radio array configuration, the volumetric acceptance is presented at the trigger level. Black curves indicate the volumetric acceptance for two different IceCube analyses optimized for different energy ranges \cite{icecubeContained,icecubeEHE}.} \label{fig:simulation} \end{figure} \begin{figure*} \centering \includegraphics[width=7cm]{powerlaw.pdf} \includegraphics[width=7cm]{cutoff.pdf} \includegraphics[width=7cm]{optimistic.pdf} \includegraphics[width=7cm]{pessimistic.pdf} \caption{Triggered Event Rates vs. Energy for a variety of neutrino models. Triggered event rates for three years of observation for 10 stations installed 100~m below the surface at Summit Station, Greenland. 16-channel stations with no phasing are shown in yellow, 16-channel stations with phasing are shown in orange, and stations with 400 phased antennas are shown in red. The top two panels show event rates for two possible neutrino spectra based on the IceCube observed neutrino flux~\cite{bigBird}. The top left panel is an $\mathrm{E}^{-2.3}$ power law, and the top right panel is an $\mathrm{E}^{-2.3}$ power law with an exponential cutoff at 1~PeV. The bottom two panels show event rates based on optimistic and pessimistic cosmogenic fluxes~\cite{kotera}.} \label{fig:events} \end{figure*} \begin{figure}[h] \begin{center} \includegraphics[width=12cm]{model_comparison.pdf} \end{center} \caption{ Flux models used for the predictions shown in Figure~\ref{fig:events}. Shown are two possible neutrino spectra based on the IceCube observed neutrino flux~\cite{bigBird}: an $\mathrm{E}^{-2.3}$ power law and an $\mathrm{E}^{-2.3}$ power law with an exponential cutoff at 1~PeV.
Also shown are optimistic and pessimistic cosmogenic fluxes~\cite{kotera}.} \label{fig:models} \end{figure} We have developed a Monte Carlo simulation package to quantify the acceptance of various radio detector configurations, and more specifically, investigate the advantages of a phased array design. The simulation formalism and assumed physics input are described in~\ref{sec:formalism}, and validation studies are presented in~\ref{sec:validation}. Rather than focusing on specific antenna designs, signal processing chains, and analysis algorithms, we have kept the simulations general by defining signal detection thresholds in terms of the electric field strength arriving at the antennas. In this case, different station configurations correspond to different electric field thresholds, as described above. The parameters chosen to define the electric field threshold of our baseline station configuration are discussed in~\ref{sec:threshold}. Depending on the particular system deployed, the overall results could shift up or down (by less than a factor of two), but the relative comparisons between configurations are valid. Figure~\ref{fig:simulation} shows the volumetric acceptance of a 10 station detector with antennas 100~m below the surface at Summit Station as a function of energy for three different configurations: 16-channel stations with no phasing (yellow), 16-channel stations with phasing (orange), and 400-antenna phased array stations (red). See~\ref{sec:formalism} for a description of how effective volume is calculated. We also show for comparison the acceptance corresponding to two IceCube analyses optimized for different energy ranges, calculated using the effective area given in~\cite{icecubeContained,icecubeEHE} and the neutrino interaction cross section given in~\ref{sec:formalism}. The IceCube curves in the figure include analysis efficiency, whereas the results of our simulation do not, so the curves are not directly comparable. 
The standard method, shown with the yellow line in Figure~\ref{fig:simulation}, is comparable to the approach of the ARA experiment but with only 10 stations~\cite{araWhitepaper}. For this study, we have chosen to simulate the experiment at Summit Station in Greenland. Moving the detectors to South Pole at a depth of 200~m below the surface would increase the acceptance by $<20$\% due to a longer radio attenuation length in ice at the South Pole~\cite{avva}. We have chosen to simulate 10~stations for this study, but we note that the acceptance changes linearly with the number of stations, since stations trigger independently. Figure~\ref{fig:events} shows the number of events detected as a function of energy for the same three detector configurations shown in Figure~\ref{fig:simulation} for a variety of astrophysical and cosmogenic models. We show event rates for an $\mathrm{E}^{-2.3}$ power law based on IceCube observations~\cite{bigBird}, an $\mathrm{E}^{-2.3}$ power law with a 1~PeV exponential cutoff~\cite{bigBird}, and optimistic and pessimistic cosmogenic models~\cite{kotera}. The flux models used for Figure~\ref{fig:events} are shown in Figure~\ref{fig:models}. Table~\ref{tab:events} summarizes the expectation values for the total number of events detected with each detector configuration for each model. A harder spectrum for PeV-scale neutrinos would yield a higher event rate. For the most pessimistic cosmogenic neutrino flux models (not shown in Figures~\ref{fig:events},~\ref{fig:models}, or Table~\ref{tab:events}) with no source evolution and a pure iron composition for UHE cosmic rays~\cite{kotera}, the expected event rate with 10 stations of 400 phased antennas each is $\sim0.1$ event per year. 
However, recent measurements with Pierre Auger Observatory and the Telescope Array disfavor a significant iron fraction in the cosmic ray composition at energies up to and even exceeding $10^{19.5}$~eV~\cite{auger_2014_composition,telescopeArray}, so we do not discuss these cosmogenic models further in this work. By phasing the antennas in a 16-antenna array, we have improved the UHE neutrino event rate by more than a factor of two over the non-phased case, and extended the sensitivity to lower energies. The 16-antenna phased configuration achieves a low enough energy threshold to distinguish an $\mathrm{E}^{-2.3}$ power law extrapolation of the observed IceCube spectrum from one that has a cutoff at the PeV scale. Phasing more antennas lowers the threshold even further, and makes marked improvements in event rates, especially at low energies (see the 400-antenna configuration in Table~\ref{tab:events} and Figure~\ref{fig:events}). Increasing the number of stations results in a linear increase in the expected number of events. As is evident in Figure~\ref{fig:simulation}, 10 stations of 16-antenna phased arrays have a larger acceptance than IceCube above 30~PeV, and the acceptance grows faster with energy than IceCube, so that by $10^{18}$~eV the acceptance is an order of magnitude more than IceCube. 10 stations of the 400-antenna phased arrays have a larger acceptance than IceCube above 1~PeV. 
\begin{table*} \begin{center} \begin{tabular}[c]{|l|c|c|c|c|} \hline Station Configuration & Power Law & Power Law & Optimistic & Pessimistic\\ & & with Cutoff & Cosmogenic & Cosmogenic \\ \hline 16-antenna & 0.9 & 0.0 & 7.7 & 2.3 \\ 16-antenna, phased & 3.8 & 0.1 & 19.6 & 6.0 \\ 400-antenna, phased & 18.4 & 2.2 & 52.9 & 15.6 \\ \hline \end{tabular} \end{center} \caption[]{\label{tab:events}Expectation values for the total number of triggered events in 3 years for 10 stations in different configurations for spectra based on IceCube observations~\cite{bigBird} and for cosmogenic models~\cite{kotera}. } \end{table*} \section{Conclusions} \label{sec:conclusion} We have described a new concept for an in-ice phased radio array that is designed to achieve sensitivity to the astrophysical neutrino flux at 1~PeV and above, provide energy overlap with IceCube for calibration, and discover cosmogenic neutrinos in an efficient way. It is worth noting the scalability of the radio technique for increasing acceptance at all energies. The acceptance increases linearly with the number of stations, and further gains can be realized by phasing more antennas in each station, particularly at PeV neutrino energies. An array of 100 stations each with 400-antenna phased arrays could detect hundreds of neutrinos at PeV energies and above each year. Such an experiment would revolutionize our view of the high-energy universe. \acknowledgments This work was supported by the Kavli Institute for Cosmological Physics at the University of Chicago. Computing resources were provided by the University of Chicago Research Computing Center. Part of this research was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration. We would like to thank A. Connolly, P. Gorham, D. Saltzberg, and S. Wissel for useful conversations and guidance.
\section{Introduction} \IEEEPARstart{P}{ower} efficiency is one of the driving forces behind the development of current communication technologies. Unfortunately, one of the main sources of power consumption is amplifiers operating at low efficiency. This holds even for state-of-the-art amplifiers. They cannot be operated at their optimal power level, because signals with suboptimal Peak-To-Average-Power-Ratio (PAPR), i.e., PAPR $> 1$, require backing off the amplifier to avoid serious clipping. Constant envelope modulation schemes such as continuous phase modulation (CPM) provide an optimal PAPR and reduce the backoff compared to other modulation schemes, such as the widely used QAM~\cite{Amp}. While this improves the power efficiency of a transmission system, the bandwidth efficiency suffers: Since the radius in the complex plane is fixed for constant envelope transmission, phase remains the only degree of freedom to represent information. This results in a rate loss compared to QAM, which is the reason why constant envelope modulation has received little attention since the development of GSM mobile communication, a system employing Gaussian Minimum Shift Keying (GMSK), a variant of CPM. The use of MIMO systems allows for another method to increase power efficiency: Not the PAPR, but the Peak-to-Average-Sum-Power-Ratio (PASPR) of a vector-valued signal can determine the required amplifier backoff. For an arbitrary vector-valued function $\vec{x}(t) \in \mathbb{C}^n$, this quantity is defined as \begin{equation} \mathrm{PASPR}(\vec{x}(t)) = \frac{\max_t \|\vec{x}(t)\|^2}{\mathcal{E}\{\|\vec{x}(t)\|^2\}}. \end{equation} The PASPR is a decisive factor when recently proposed load-modulated MIMO amplifiers are used~\cite{loadMod1, loadMod2}. Since the degrees of freedom are reduced by only one for all antennas, the relative rate loss is smaller the more antennas are used~\cite{MPSK}.
For massive MIMO systems, the central limit theorem (CLT) guarantees that the PASPR of the continuous-time signal becomes optimal as long as the data points are distributed on a multidimensional hypersphere. We call these constellations \emph{Phase Shift Keying on the Hypersphere} (PSKH), because it is a natural extension of ordinary PSK. In conventional MIMO with only a handful of antennas, large fluctuations of the continuous transmit signal are still possible and therefore the PASPR is far from being optimal. Thus some more adaptations are necessary in order to reduce the PASPR of the transmission signal. The rest of this paper is organized as follows: In Sec.~\ref{sec:sysModel} we introduce our system model. Unlike in PSK, there are multiple ways to construct constellations on the hypersphere. Thus we discuss several algorithms to generate PSKH constellations and their advantages and disadvantages in Sec.~\ref{sec:constellations}. Secs.~\ref{sec:sincSq} and \ref{sec:sphInt} present two approaches to reduce the PASPR in detail. This includes receiver structures for the corresponding signals as well as numerical results for their performance. Sec.~\ref{sec:paspr} discusses how much the previously introduced approaches reduce the PASPR of the transmitter output signal and how they affect the spectrum and thus the bandwidth efficiency. The paper ends with conclusions in Sec.~\ref{sec:conclusion}. \section{System Model} \label{sec:sysModel} We define a PSKH constellation as a set of $M = 2^{R_m}$ data points $\mathcal{A} = \{\vec{a}_0, \ldots, \vec{a}_{M-1} \; | \; \vec{a}_i \in \mathbb{C}^{n_T}, \, \|\vec{a}_i\| = \sqrt E_s\}$ where $E_s$ is the energy per symbol, $n_T$ is the number of transmit antennas and $R_m$ the rate per modulation interval.
Unless otherwise mentioned, these constellations are modulated using conventional PAM with a pulse shaping filter $h(t)$ with normalized symbol period $T = 1$ to generate the continuous-time transmitter output signal in the equivalent complex baseband (ECB) domain \begin{equation} \label{eq:contmodel} \vec{s}(t) = \sum_{k = -\infty}^{\infty} \vec{x}[k] h(t - k), \quad \vec x[k] \in \mathcal{A} \end{equation} for a given data sequence $\langle \vec x[k] \rangle$. If $h(t)$ is a $\sqrt{\mathrm{Nyquist}}$ filter, the channel is not frequency selective, and the corresponding matched filter is applied at the receiver, the overall model is the well known discrete-time MIMO ECB channel model \begin{equation} \label{eq:discmodel} \vec{y}[k] = \vec{Hx}[k] + \vec{n}[k] \end{equation} where $\vec{x}[k] \in \mathbb{C}^{n_T}, \vec{y}[k] \in \mathbb{C}^{n_R}$ are transmit and receive vector at time $k$, respectively, $\vec{H} \in \mathbb{C}^{n_R \times n_T}$ is the channel matrix and $\vec{n}[k] \in \mathbb{C}^{n_R}$ is complex i.i.d. additive white Gaussian noise with variance $\sigma^2 = \frac{N_0}{T} = N_0$ per complex component. $n_T$ and $n_R$ denote the number of transmit and receive antennas, respectively, but for the remainder of this paper we assume that $n = n_R = n_T$ and omit the time index $k$ unless confusion is possible. If other pulse shaping filters than $\sqrt{\mathrm{Nyquist}}$ are used, this will be discussed in detail. For this work, both the continuous and the discrete time models (eqs.~\eqref{eq:contmodel} and \eqref{eq:discmodel}) are important, because the first one determines the PASPR and bandwidth of the signal whereas the latter can be used for the detection of the transmitted sequence in the receiver. Because every point in a PSKH constellation in $n$ dimensions has energy $E_s$ and uses $\sqrt{\mathrm{Nyquist}}$ impulses, the equivalent energy per bit for uncoded transmission is given as $E_s / R_m$ at the transmitter side.
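A minimal simulation of the discrete-time model~\eqref{eq:discmodel} with maximum-likelihood detection might look as follows. This is a sketch: the constellation here is a random set of points on the hypersphere rather than one of the constructions discussed in the next section, and the noise level is chosen low for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n, M, Es, N0 = 3, 64, 1.0, 0.001   # antennas, points, symbol energy, N0

# Toy PSKH constellation: M random points on the hypersphere of radius sqrt(Es)
A = rng.standard_normal((M, n)) + 1j * rng.standard_normal((M, n))
A *= np.sqrt(Es) / np.linalg.norm(A, axis=1, keepdims=True)

x = A[17]                          # transmit one constellation point
H = np.eye(n)                      # unitary H: equalized vector AWGN channel
w = np.sqrt(N0 / 2) * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
y = H @ x + w                      # y = H x + n

# ML detection on the AWGN channel: nearest constellation point
x_hat = np.argmin(np.linalg.norm(y - A, axis=1))
# at this low noise level x_hat recovers the transmitted index (17)
```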
In this paper, we use two different channel models: $\vec{H}$ can be unitary, which corresponds to a vector AWGN channel after equalization. In this case, transmitter and receiver energy are equal and the received energy per bit over the one-sided noise-spectral density is given as \begin{equation} \frac{E_b}{N_0}_{\mathrm{AWGN}} = \frac{E_s}{R_m\sigma^2}. \end{equation} Our second channel model is the flat-fading Rayleigh model, i.e., the entries of $\vec{H}$ are i.i.d. complex Gaussian random variables with unit variance. Thus each antenna receives an average signal energy of $E_s$ per symbol and hence the total received energy over $n$ receive antennas is $nE_s$. The \emph{average} received energy per bit over the one-sided noise-spectral density is then given as \begin{equation} \frac{E_b}{N_0}_{\mathrm{Fading}} = n \frac{E_b}{N_0}_{\mathrm{AWGN}} = \frac{nE_s}{R_m\sigma^2}. \end{equation} We omit the subindices AWGN or Fading and instead specify the channel we use in a given scenario. \section{PSKH Constellations} \label{sec:constellations} \subsection{Constellation Construction} As explained in Sec.~\ref{sec:sysModel}, a PSKH constellation is a set of $M = 2^{R_m}$ points on the hypersphere with radius $\sqrt{E_s}$ representing $R_m$ bits. The vectors $\vec{a}_i \in \mathcal{A} \subset \mathbb{C}^n$ are $n$-dimensional corresponding to $n$ transmit antennas in a MIMO system. We note that PSKH constellations are also known as \emph{spherical codes} in literature, but to our knowledge they have never been used to improve power efficiency of communication systems by means of PASPR reduction. Without further restrictions than the radius, there are many possible ways to create constellations, which might differ quite vastly in terms of quality. A reasonable measure for quality, as in all PAM schemes, is the minimum distance between signal points. 
Optimal codes in this sense and their analytic description, however, are known only for some restricted constellation sizes and dimensions~\cite{SpheresBook}.\footnote{Some examples of such optimal packings can be found in~\cite{CodeTable}.} Therefore, we compare four different algorithms to generate PSKH constellations: \begin{itemize} \item \emph{Equal Area Partitioning Algorithm} (EQPA) from~\cite{EQAlg}: Generates a constellation with equally sized areas, which are usually not the Voronoi regions of a data point. \item \emph{k-means Clustering} (kMC) using the spherical k-means algorithm~\cite{sphkmeans}: Generates a large number of uniformly distributed points on the sphere, clusters them using the spherical k-means algorithm, and uses the resulting cluster centroids as constellation points. \item \emph{Potential Minimization} (PM): Generates particles on a sphere and minimizes the potential energy between particles. This can be done via a molecular dynamics simulation~\cite{MolDyn}. \item \emph{Per-Antenna PSK} (PA-PSK): Generates independent PSK constellations on each antenna, then scales them to fit the power constraint. \end{itemize} The algorithms can be distinguished in terms of construction complexity: EQPA and PA-PSK are analytic constructions, whereas kMC and PM rely on numerical methods and are therefore more expensive to construct. Of course, such a construction needs to be done only once and can be computed offline. If the construction is nonanalytic, it is further necessary to store the constellation in memory, which we think is reasonable to implement for $R_m \lesssim 16$. Additionally, it is possible to construct a constellation for only half the number of antennas and duplicate it, which results in a small degradation of quality.
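As an illustration of the PM idea, the following toy sketch (our own simplified gradient-descent variant with a $1/d$ pair potential and a heuristic step-size cap, not the molecular dynamics simulation of~\cite{MolDyn}) repels points on the sphere and, in the simplest case $n = 1$, recovers ordinary PSK:

```python
import numpy as np

def pskh_pm(M, n, steps=5000, lr=0.01, seed=0):
    """Toy PM construction: repel M points on the unit sphere in R^{2n}
    (C^n viewed as real) under a 1/d pair potential, with gradient descent
    and re-projection onto the sphere after every step."""
    rng = np.random.default_rng(seed)
    p = rng.standard_normal((M, 2 * n))
    p /= np.linalg.norm(p, axis=1, keepdims=True)
    for _ in range(steps):
        diff = p[:, None, :] - p[None, :, :]           # pairwise differences
        d = np.linalg.norm(diff, axis=2)
        np.fill_diagonal(d, np.inf)
        step = lr * (diff / d[..., None] ** 3).sum(axis=1)  # -grad of sum 1/d
        norms = np.linalg.norm(step, axis=1, keepdims=True)
        step *= np.minimum(1.0, 0.1 / np.maximum(norms, 1e-12))  # cap steps
        p = p + step
        p /= np.linalg.norm(p, axis=1, keepdims=True)  # back onto the sphere
    return p

# Sanity check: in C^1 the construction should approach ordinary 8-PSK,
# whose minimum distance is 2*sin(pi/8) ~ 0.765.
A = pskh_pm(M=8, n=1)
```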
In order to compare constellations with respect to their performance, we take a look at three different properties: Constellation-constrained capacities, minimum distance of the constellation (also known as \emph{packing radius} in the context of spherical codes and packings) and error probabilities. \subsection{Capacity of PSKH} \label{sec:capacity} \begin{figure}[!t] \setlength\figureheight{4.5cm} \setlength\figurewidth{7.5cm} \input{cap_ebn0.tex} \caption{Capacity of different constellations for constellation sizes $M = 64$ and $M = 512$ and $n = 3$ antennas over a vector AWGN channel.} \label{fig:capacity} \end{figure} In~\cite{MPSK}, it is proven that the capacity of PSKH for unitary $\vec H$ and continuous input is achieved for a uniform distribution on the hypersphere. In this section, we compare the capacities if the input is discrete and constrained to a certain size, but $\vec{H}$ is still unitary. Fig.~\ref{fig:capacity} shows the constellation constrained capacities for $n = 3$ antennas and $M = 64$ as well as $M = 512$ points. The results are similar for different constellation and antenna array sizes, which is why we restrict ourselves to one exemplary case. As a baseline, we also plot the AWGN capacity for continuous input. For $n = 3$ antennas, this capacity is $C = 3 \log(1+\mathrm{SNR_{ch}})$ with $\mathrm{SNR}_{ch} = \frac{E_s}{n \sigma^2}$ being the SNR on each individual AWGN channel. For Fig.~\ref{fig:capacity}, we use the standard representation over $\frac{E_b}{N_0}$. For the vector AWGN channel, we have $\frac{E_b}{N_0} = \frac{\mathrm{SNR_{ch}}}{C/n}$. The general result regarding capacities can be summarized as follows: PM and kMC have the best capacities, with PM outperforming kMC by only approx. 0.01 dB. For a constellation size of 1 bit per real dimension, PA-PSK and EQPA lose up to 0.5 dB at a capacity of $C = 5.5$ bit/use.
If we increase the constellation size, PM and kMC still remain on top, EQPA reduces the loss to about 0.3 dB, whereas PA-PSK loses up to 2 dB compared to PM at $C = 8.5$ bit/use. The reason for this big loss when increasing the constellation size is that PA-PSK is the only constellation where no form of global optimization, i.e., over all 6 dimensions, takes place. While a PSK constellation might be perfect on an individual antenna, increasing the total constellation size requires using different constellations on each antenna. This can have a devastating influence on the overall distance properties of the constellation. On the other hand, EQPA works in such a way that the distribution of points becomes more and more uniform as the constellation size increases. This algorithm profits from packing the hypersphere more densely. Fig.~\ref{fig:capacity} also shows that for coded transmission with a target rate well below $R_m$, the larger constellation can get very close to the AWGN capacity: Using $M = 512$ points per constellation and assuming a code of rate $R_c = \frac 1 2$, i.e., a total rate of $4.5$ bit, the gap to AWGN capacity is only approx. 0.1 dB. The capacities fit nicely with the coded modulation rule of thumb that 0.5 bit redundancy per real dimension for coding~\cite{ungerboeck} and another 0.5 bit redundancy for shaping~\cite{Wachsmann} are sufficient to close most of the gap to capacity.
\begin{table}[!t] \renewcommand{\arraystretch}{1.3} \caption{Average and total minimum distance of constellations} \label{tbl:distances} \centering \begin{tabular}{l||c|c} \hline & $d_{\mathrm{min}}$ & $d_{\mathrm{nb, avg}}$ \\ \hline\hline EQPA, $M = 64$ & 0.6611 & 0.7282 \\ kMC, $M = 64$ & 0.8674 & 0.9207 \\ PM, $M = 64$ & 0.9139 & 0.9474 \\ PA-PSK, $M = 64$ & 0.8165 & 0.8165 \\ \hline EQPA, $M = 512$ & 0.4654 & 0.5350 \\ kMC, $M = 512$ & 0.5235 & 0.5767 \\ PM, $M = 512$ & 0.5894 & 0.6217\\ PA-PSK, $M = 512$ & 0.4419 & 0.4419 \\ \hline\hline \end{tabular} \end{table} \subsection{Distance Properties} In order to compare the distance properties of the individual algorithms, Table~\ref{tbl:distances} lists the \emph{minimum distance} and \emph{average neighbor distance} of a constellation defined as \begin{equation} d_{\mathrm{min}}(\mathcal{A}) = \min_{\vec{a}_{i}, \vec{a}_{j} \in \mathcal{A} \atop i \neq j} ||\vec{a}_i - \vec{a}_j|| \end{equation} and \begin{equation} d_{\mathrm{nb, avg}}(\mathcal{A}) = \frac 1 M \sum_{i = 0}^{M-1} \min_{\vec{a}_j \in \mathcal{A} \atop \vec{a}_i \neq \vec{a}_j} ||\vec{a}_i - \vec{a}_j||. \end{equation} \begin{figure*}[!t] \centering \hspace*{-1cm} \begin{tabular}{ll} \captionsetup[subfloat]{captionskip=0pt, nearskip=0.2cm, margin=0.5cm, justification=centering} \subfloat[Vector AWGN]{ \setlength\figureheight{4.5cm} \setlength\figurewidth{7.5cm} \input{mpsk_awgn_all.tex} \label{fig:subSER1}} & \captionsetup[subfloat]{captionskip=0pt, nearskip=0.2cm, margin=0.5cm, justification=centering} \subfloat[Rayleigh Fading]{ \setlength\figureheight{4.5cm} \setlength\figurewidth{7.5cm} \hspace*{-.5cm} \input{mpsk_rayleigh_all.tex} \label{fig:subSER2}} \end{tabular} \caption{Symbol error rate (SER) for transmission of $R_m = 6$ (solid) or $R_m = 9$ (dashed) bits per constellation point over a vector AWGN channel (a), and a Rayleigh fading channel (b). 
The system has $n = 3$ antennas.} \label{fig:constSER} \end{figure*} PSKH constellations, especially the ones which were generated numerically, often have asymmetric distance profiles: In conventional modulation schemes like PSK or QAM, every point has at least one neighbor which is the minimum distance apart. Many points (in PSK, even all points) have the same distance profile to neighboring points. In PSKH, there may be only two neighboring points which are the minimum distance apart, whereas all other points in the constellation only have neighbors which are further apart. This is the reason why we also include the average neighbor distance $d_{\mathrm{nb, avg}}$. The results in Table~\ref{tbl:distances} show, for example, that EQPA and PA-PSK differ only slightly in terms of minimum distance for $M=512$. Nevertheless, their capacities are quite different. The qualitative result of this is that the distance profile has an effect on the capacity, but it is not possible to estimate capacities from these values alone. Various PSKH constellations with similar minimum distance might have significantly different overall distance profiles, resulting in varying capacities and performances. \subsection{Power Efficiency} In order to elaborate how the distance profile affects the power efficiency of PSKH constellations, Fig.~\ref{fig:constSER} shows the symbol error rate (SER) for constellations when 3 antennas are used with constellation sizes $M = 64$ and $M = 512$. Fig.~\ref{fig:subSER1} shows transmission over a vector AWGN channel, i.e., $\vec{H} = \vec{I}$. This corresponds to the discussion of the capacity on a channel with unitary channel matrix in Sec.~\ref{sec:capacity}. For Fig.~\ref{fig:subSER2} we use the standard Rayleigh fading model, i.e., every element of $\vec{H}$ is a complex i.i.d. Gaussian random variable with unit variance, and we average over several thousand realizations.
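As a cross-check of the PA-PSK rows in Table~\ref{tbl:distances}, both distance measures can be evaluated directly from their definitions. A small Python sketch (our own illustration, not code from the paper) rebuilds PA-PSK for $M = 64$ and $n = 3$, i.e., 4-PSK of radius $1/\sqrt{3}$ on each antenna under a unit total-energy constraint:

```python
import numpy as np
from itertools import product

def d_min(A):
    """Minimum distance over all point pairs of a constellation (rows of A)."""
    d = np.linalg.norm(A[:, None, :] - A[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    return d.min()

def d_nb_avg(A):
    """Average distance of each point to its nearest neighbor."""
    d = np.linalg.norm(A[:, None, :] - A[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    return d.min(axis=1).mean()

# PA-PSK, M = 64, n = 3: independent 4-PSK of radius 1/sqrt(3) per antenna,
# written as real 6-dimensional unit-norm vectors.
qpsk = np.exp(2j * np.pi * np.arange(4) / 4) / np.sqrt(3)
combos = np.array(list(product(qpsk, repeat=3)))   # 64 complex 3-vectors
A = np.column_stack([combos.real, combos.imag])    # real 6-dim points
# d_min(A) and d_nb_avg(A) both come out as sqrt(2/3) ~ 0.8165.
```

The two measures coincide for PA-PSK because every point has, by symmetry, the same distance profile, which is exactly why the two PA-PSK columns in Table~\ref{tbl:distances} agree.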
In both cases, one can observe that the error performance of the constellations is ordered according to their minimum distance (as is the capacity), but due to the size of the constellations and the asymmetry of their distance profiles (see the last section), a simple quantitative estimate is not yet available. \emph{Remark}: We note that PA-PSK with $M = 64$ for 3 antennas is regular 4-PSK on each antenna. Without further modification, an optimized constellation such as PM can give a substantial gain of almost 1.5 dB over 4-PSK/antenna on the vector AWGN channel. Such a channel is equivalent to subsequent transmissions over a regular AWGN channel. Constellations which achieve a coding gain by being spread over subsequent transmissions on a Single-Input Single-Output (SISO) channel are also known as \emph{multidimensional constellations}~\cite{MultidimConst}. Multidimensional constellations can be used to combat fading~\cite{MultidimConst2, MultidimConst3}, to exploit the four available dimensions in optical communications~\cite{OpticMultidimConst1, OpticMultidimConst2} and to introduce a more flexible trade-off between bandwidth and power efficiency of trellis-coded signals~\cite{CodedMultiConst}. Such constellations are usually based on conventional 2-D modulation schemes or on some lattices, which generally does not result in fixed-radius constellations. To our knowledge, no work has dealt with multidimensional constellations with fixed radius in MIMO systems to exploit load-modulation transmitters. \section{Sinc$^2$ Pulse Shaping} \label{sec:sincSq} The simplest method to reduce the PASPR is to use pulse shaping filters which show better PASPR properties. PAM employing a $\mathrm{sinc}^2(t)$-function\footnote{We define $\mathrm{sinc}(x) = \sin(\pi x) / (\pi x)$ for $x \neq 0$ and 1 otherwise.} for pulse shaping shows very good properties even for very few antennas (see Sec.~\ref{sec:paspr}).
This means that the continuous transmission signal is \begin{equation} \vec{s}(t) = \sum_{k = -\infty}^{\infty} \vec{x}[k] \, \mathrm{sinc}^2\left(\frac{t-kT}{T}\right). \end{equation} Since $\mathrm{sinc}^2$ is not a $\sqrt{\mathrm{Nyquist}}$-function, some ISI has to be equalized at the receiver. This ISI is not generated by the channel, but only by the pulse shaping filter and its corresponding matched filter, i.e., there is no ISI between different receive antennas. Thus there is no need to make use of equalization techniques developed for MIMO ISI channels. Instead, we filter the received signal of each antenna using Forney's \emph{Whitened Matched Filter} (WMF)~\cite{WMF} to get a minimum phase impulse. ISI can then be expressed by a one-dimensional, causal minimum-phase impulse $h_W[i]$ and the resulting discrete time transmission model becomes \begin{equation} \vec{y}[k] = \vec{H}\sum_{i = 0}^{L} h_W[i] \vec{x}[k-i] + \vec{n}[k] \end{equation} with ISI-length $L$. ISI can be equalized with Maximum Likelihood Sequence Estimation (MLSE) using a vector-valued Viterbi Algorithm (VA)~\cite{Viterbi}, Decision Feedback Equalization (DFE) or Delayed Decision Feedback Sequence Estimation (DDFSE)~\cite{DDFSE}, which allows a performance trade-off between DFE and MLSE. In this specific case, almost all energy of $h_W$ is stored in the very first coefficient $h_W[0]$, such that there is only a minimal loss in terms of error probability when using DFE. Results of numerical simulations can be found in Fig.~\ref{fig:sincSq}. There is virtually no loss between $\mathrm{sinc}^2$ pulse shaping with DFE (the simplest equalization method in this scenario) and ISI-free transmission by means of a $\sqrt{\mathrm{Nyquist}}$ pulse shaping in terms of power efficiency. 
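The statement that almost all energy lies in the leading coefficient can be illustrated numerically: the $T$-spaced taps of the cascade of the $\mathrm{sinc}^2$ pulse and its matched filter decay as $3/(2\pi^2 k^2)$ relative to the center tap. The following is a rough sketch of these autocorrelation taps (our own illustration; it approximates the overall pulse response, not the exact whitened filter $h_W[i]$):

```python
import numpy as np

T, dt = 1.0, 0.005
t = np.arange(-40.0, 40.0, dt)
g = np.sinc(t / T) ** 2                 # np.sinc(x) = sin(pi x)/(pi x)

# T-spaced samples phi(kT) of the pulse/matched-filter autocorrelation
taps = np.array([np.sum(g * np.sinc((t - k * T) / T) ** 2) * dt
                 for k in range(4)])
rel = taps / taps[0]
# Analytically phi(kT)/phi(0) = 3/(2 pi^2 k^2): approx [1, 0.152, 0.038, 0.017],
# i.e., the center tap carries the bulk of the energy.
```

The rapid $1/k^2$ decay is consistent with the observation that DFE loses almost nothing here.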
\begin{figure}[t] \setlength\figureheight{4.5cm} \setlength\figurewidth{7.5cm} \input{sincSq_ebn0.tex} \caption{Comparison of an ISI-free transmission using a $\sqrt{\mathrm{Nyquist}}$ impulse and $\mathrm{sinc}^2$ pulse shaping. Transmission is over $n = 2$ (solid) and $n = 3$ (dashed) antennas with one bit per real dimension ($M = 16$ and $M = 64$). The VA uses $\nu = 2$ memory elements. Simulations were performed over several thousand Rayleigh fading channels and averaged.} \label{fig:sincSq} \end{figure} \section{Spherical Interpolation Signaling} \label{sec:sphInt} Spherical Interpolation (SI) signaling smooths the transmission signal by forcing it onto the hypersphere also in-between data samples. This is achieved by inserting interpolation points at a certain oversampling rate. The positive effect is a significantly reduced PASPR compared to conventional PAM because the signal becomes smoother and deviations from the hypersphere, especially zero-crossings, are reduced. The disadvantage is ISI introduced by the interpolation points and thus an increased receiver complexity. Before presenting two different approaches, we define spherical interpolation, also known as SLERP (SphericaL intERPolation)~\cite{SLERP}: Given two points $\vec{x}_1, \vec{x}_2 \in \mathbb{R}^N$ with $||\vec{x}_1|| = ||\vec{x}_2|| = 1$ and $\cos(\theta) = \langle \vec{x}_1, \vec{x}_2 \rangle$, for any $0 \leq \tau \leq 1$, the spherical interpolation of $\vec{x}_1$ and $\vec{x}_2$ is given as \begin{equation} \vec{SI} (\vec{x}_1, \vec{x}_2, \tau) = \frac{\sin\left((1-\tau)\theta\right)}{\sin\theta}\vec{x}_1 + \frac{\sin\left(\tau\theta\right)}{\sin\theta} \vec{x}_2. \end{equation} \subsection{$\frac{T}{2}$-Pulse Shaping} \label{subsec:t2rrc} Generating a signal using spherical interpolation values in-between $T$-spaced data symbols introduces ISI.
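The SLERP formula defined above translates directly into code. A minimal sketch (our own illustration; the antipodal case $\theta = \pi$, where the interpolant is not unique, is deliberately not handled):

```python
import numpy as np

def slerp(x1, x2, tau):
    """Spherical interpolation between unit-norm vectors x1, x2 for 0 <= tau <= 1."""
    x1, x2 = np.asarray(x1, dtype=float), np.asarray(x2, dtype=float)
    theta = np.arccos(np.clip(np.dot(x1, x2), -1.0, 1.0))
    if np.isclose(theta, 0.0):   # coincident points: nothing to interpolate
        return x1.copy()
    return (np.sin((1.0 - tau) * theta) * x1
            + np.sin(tau * theta) * x2) / np.sin(theta)
```

By construction the result stays on the unit hypersphere for every $\tau$, which is exactly the property exploited to keep the transmit signal close to the sphere in-between data samples.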
In order to simplify equalization, our first approach is to use a $\sqrt{\mathrm{Nyquist}}$-filter with respect to $T/2$, i.e., a pulse shaping filter $h(2t)$. The corresponding matched filter is $h^\star(-2t)$. In-between data symbols, the interpolation of two adjacent points is transmitted. This way, no two consecutive transmitted symbols lie on opposite sides of the hypersphere and hence the PASPR is reduced. The resulting output signal of the transmitter is \begin{align} \vec{s}(t) &= \sum_{k = -\infty}^{\infty} \left( \vphantom{\frac 12} \vec{x}[k] \, h\left(2 (t - kT)\right) \right. \nonumber \\ &+ \left. \vec{SI} \left(\vec{x}[k], \vec{x}[k+1], \frac 12\right) \, h\left(2 \left( t - \left(k + \frac 1 2 \right)T\right)\right) \right) \end{align} which is $\sqrt{\mathrm{Nyquist}}$ with respect to half the symbol rate. Filtering with the corresponding matched filter and sampling at intervals of $\frac{T}{2}$ at the receiver gives a sequence consisting of alternating data points and interpolation values. Because we chose a filter which is $\sqrt{\mathrm{Nyquist}}$ with respect to $\frac{T}{2}$, every sample at the receiver is ISI-free. To keep this system comparable with conventional PAM, both data and interpolation symbols contain only half the original symbol energy. Therefore, it is necessary to use all points at the receiver to estimate the data sequence; otherwise, half of the energy would be wasted. Data estimation for $\frac{T}{2}$-pulse shaping is done via the Viterbi algorithm. Due to the interpolation, every metric in the receiver depends on the current and the previous received value, i.e., the VA requires exactly $\nu = 1$ memory element. \begin{figure}[t] \setlength\figureheight{4.5cm} \setlength\figurewidth{7.5cm} \input{rrcT2_ebn0.tex} \caption{Comparison of conventional ISI-free PAM transmission and $\frac{T}{2}$ pulse shaping. Transmission is over $n = 2$ (solid) and $n = 3$ (dashed) antennas with one bit per real dimension ($M = 16$ and $M = 64$).
Simulations were performed over several thousand Rayleigh fading channels and averaged.} \label{fig:rrcT2} \end{figure} This method has an obvious disadvantage: it occupies twice the spectrum. The reason why we nevertheless include it in this comparison is that $\frac{T}{2}$-pulse shaping increases the slope of the error curve such that in the medium- to high-SNR regime it might still be a valid alternative given the greatly reduced PASPR compared to conventional PAM. Results for this pulse shaping method are plotted in Fig.~\ref{fig:rrcT2}. The increased slope can be explained by the linear transformation of the hypersphere induced by $\vec H$: At high SNRs, symbol errors will usually occur because the noise moves the data symbol into the decision region of a neighboring symbol. Symbol errors to the opposite side of the hypersphere occur only rarely, because such points are farthest apart. If the receiver constellation $\vec{H}\mathcal{A} = \left\{\vec{Hx} \; | \; \vec{x} \in \mathcal{A} \right\}$ is distorted enough, such errors may be much more likely because the distance between opposing points might be drastically reduced. In the case of $\frac{T}{2}$-pulse shaping, not only the distance between constellation points, but also the distance between interpolation points affects the performance of the system. The spherical interpolation thus corresponds to a nonlinear convolutional code. For data points on opposing sides of the hypersphere, $\frac{T}{2}$-pulse shaping generates interpolation points which are usually far away from each other. This increases the total minimum distance and thus improves the performance of the transmission. The magnitude of this effect is dependent on $\vec{H}$.
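How strongly a given $\vec{H}$ distorts distances on the hypersphere can be quantified via the singular values of its real-valued representation. A small sketch using the standard complex-to-real stacking (our own helper names, not code from the paper):

```python
import numpy as np

def real_rep(H):
    """Standard real-valued 2n x 2n representation of a complex n x n matrix H."""
    return np.block([[H.real, -H.imag],
                     [H.imag,  H.real]])

def sv_ratio(H):
    """Ratio sigma_max / sigma_min of the real representation of H."""
    s = np.linalg.svd(real_rep(H), compute_uv=False)
    return s.max() / s.min()

# For unitary H the ratio is 1 (the hypersphere is merely rotated/scaled);
# large ratios indicate that opposing points may be pushed close together.
```

A diagonal phase-only (unitary) channel yields a ratio of exactly 1, whereas strongly unbalanced gains inflate the ratio, matching the intuition of a "compressed" receiver constellation.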
A good measure for this effect is the ratio $\frac{\sigma_{\mathrm{SVD}, \mathrm{max}}}{\sigma_{\mathrm{SVD}, \mathrm{min}}}$ with $\sigma_{\mathrm{SVD}, \mathrm{max}}$ and $\sigma_{\mathrm{SVD}, \mathrm{min}}$ being the maximum and minimum singular values of the real representation of $\vec{H}$, respectively\footnote{Every complex-valued model $\vec{y} = \vec{Hx} + \vec n $ of dimension $n$ can be transformed into an equivalent real-valued model of dimension $2n$, see e.g.~\cite{realChannelModel}.}. The two extreme cases would be $\frac{\sigma_{\mathrm{SVD}, \mathrm{max}}}{\sigma_{\mathrm{SVD}, \mathrm{min}}} = 1$ in which $\vec{H}\mathcal{A}$ would still be a hypersphere (possibly with a different radius) and $\frac{\sigma_{\mathrm{SVD}, \mathrm{max}}}{\sigma_{\mathrm{SVD}, \mathrm{min}}} \rightarrow \infty$. In the latter case, the $n$-dimensional hypersphere would be compressed down to fewer dimensions, effectively reducing the distance between opposing points and possibly making them nearest neighbors. \subsection{Spherical Interpolation} The main problem of $\frac{T}{2}$-pulse shaping is the large bandwidth due to the use of pulse shaping filters at higher frequency. In order to mitigate the problem, we combine SI with a conventional pulse shaping filter being $\sqrt\mathrm{Nyquist}$ with respect to $T$. This method is characterized by the interpolation frequency $f_\mathrm{IP} \in \mathbb{N}$: In each symbol interval, the original data point as well as $f_\mathrm{IP} - 1$ interpolation points are transmitted. $f_\mathrm{IP} = 1$ corresponds to conventional PAM without SI. The resulting output signal of the transmitter is \begin{align} \vec s (t) &= \sum_{k = -\infty}^{\infty} \left( \vphantom{ \sum_{l = 1}^{f_\mathrm{IP}-1}} \vec x [k] \, h(t - kT) \right. \nonumber \\ &\left. + \sum_{l = 1}^{f_\mathrm{IP}-1} \vec{SI}\left( \vec{x}[k], \vec{x}[k+1], \frac{l}{f_\mathrm{IP}} \right) \, h\left(t - \left(k + \frac{l}{f_\mathrm{IP}} \right) T\right) \right). 
\end{align} At the receiver, matched filtering with $h^\star(-t)$ is performed, followed by $T$-spaced sampling. This is slightly suboptimal, but simplifies the receiver structure greatly. Introducing the autocorrelation \begin{equation} \varphi_{hh}(\tau) = \int_{-\infty}^{\infty} h(t+\tau) h^\star(t) \mathrm{d}t, \end{equation} the received discrete-time signal is \begin{align} \vec{y}[k] &= \vec{y}(kT) = \vec{H} \left( \sum_{\bar{k} = -\infty}^{\infty} \vec{x}[\bar{k}] \varphi_{hh}(kT - \bar{k}T) \right. \nonumber \\ &+ \sum_{l = 1}^{f_\mathrm{IP}-1} \vec{SI}\left( \vec{x}[\bar{k}], \vec{x}[\bar{k}+1], \frac{l}{f_\mathrm{IP}} \right) \nonumber \\ &\cdot \left. \varphi_{hh}\left( kT - \left(\bar{k} + \frac{l}{f_\mathrm{IP}}\right) T\right) \right) + \vec{n}[k]. \end{align} By using a $\sqrt{\mathrm{Nyquist}}$-pulse and $T$-spaced sampling, the direct influence of adjacent data symbols is suppressed and the resulting noise at the receiver is white, but the influence of the interpolation values inserted with spacing $T_\mathrm{IP} = T / f_\mathrm{IP}$ remains. Thus, ISI equalization in the form of MLSE via the VA has to be performed at the receiver. Fig.~\ref{fig:FIR_SI} shows the system model used at the receiver to estimate the data sequence. The VA has to consider all $f_\mathrm{IP}$ vectors transmitted during one symbol interval to calculate a branch metric. Since $T$-spaced sampling is used, it is vital to use the contribution from the interpolation values; otherwise, serious ISI would remain unprocessed and its energy would be wasted. For SI pulse shaping, the choice of the filter does have an influence on the error probability, because it determines the shape of $\varphi_{hh}$. For the following results, we used a root-raised cosine (RRC) pulse shaping filter with roll-off factor $\beta = 0.25$.
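The point sequence fed to the pulse shaping filter, i.e., each data vector followed by its $f_\mathrm{IP} - 1$ SLERP values, can be sketched as follows (`si_stream` is a hypothetical helper of our own, not code from the paper; the degenerate antipodal case is not handled):

```python
import numpy as np

def si_stream(x, f_ip):
    """Data vectors x[k] (rows, unit norm) interleaved with f_ip - 1
    spherical interpolation points per symbol interval."""
    out = []
    for k in range(len(x) - 1):
        theta = np.arccos(np.clip(np.dot(x[k], x[k + 1]), -1.0, 1.0))
        out.append(x[k])
        for l in range(1, f_ip):
            tau = l / f_ip
            if np.isclose(theta, 0.0):   # coincident points
                out.append(x[k])
            else:
                out.append((np.sin((1 - tau) * theta) * x[k]
                            + np.sin(tau * theta) * x[k + 1]) / np.sin(theta))
    out.append(x[-1])
    return np.array(out)

# Every emitted point keeps unit norm, so the oversampled signal stays on the
# hypersphere in-between data samples, which is the source of the PASPR gain.
```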
Additionally, the choice of $f_\mathrm{IP}$ provides a trade-off between receiver complexity and smoothness of the output signal (which in turn improves PASPR and bandwidth, see Sec.~\ref{sec:paspr}). The results in Figs.~\ref{fig:paspr} and \ref{fig:spectra} were generated using $f_\mathrm{IP} = 4$. Increasing $f_\mathrm{IP}$ to 16 showed a 0.15 dB improvement in PASPR, whereas the error probability is unaffected. In order to achieve full ML detection, all coefficients of the impulse response have to be considered and MLSE can be performed using the VA. The system model for $\nu = 3$ memory elements is depicted in Fig.~\ref{fig:FIR_SI}: Two adjacent symbols in time generate SI data, which is weighted according to $\varphi_{hh}$ and summed up. Given the size of PSKH constellations, full ML detection is computationally impossible. Thus we need to apply various complexity reduction methods, which are described in the next subsection. \subsection{Complexity Reduction Techniques} In order to make a detector computationally feasible, we only use those intervals of $\varphi_{hh}$ which have the highest energy and use them for sequence estimation in the VA. The leading taps are treated as noise and the remaining taps at the end are equalized using DFE, which makes the overall scheme a Delayed Decision-Feedback Sequence Estimation (DDFSE)~\cite{DDFSE}. Fig.~\ref{fig:SI} shows numerical results for the spherical interpolation shaping using $f_\mathrm{IP} = 4$. Detection was performed using DDFSE employing $\nu = 3$ memory elements for the 16-ary constellation and $\nu = 2$ memory elements for the 64-ary constellation (corresponding to 4096 states in both cases). \begin{figure}[t] \setlength\figureheight{4.5cm} \setlength\figurewidth{7.5cm} \input{fir_si.tex} \caption{System used to model the ISI produced by spherical interpolation transmission.
An SI block calculates $f_\mathrm{IP}$ vectors and $\varphi_{hh}(\cdot)$ weighs them with the autocorrelation of the pulse shaping filter. Thus each block processes all interpolation vectors within one symbol period. This model omits the channel matrix $\vec{H}$ and noise $\vec{n}$. This example employs $\nu = 3$ memory elements.} \label{fig:FIR_SI} \end{figure} \begin{figure}[t] \setlength\figureheight{4.5cm} \setlength\figurewidth{7.5cm} \input{si_std_ebn0.tex} \caption{Comparison of ISI-free transmission and SI pulse shaping with $f_\mathrm{IP} = 4$. Transmission is over $n = 2$ (solid) and $n = 3$ (dashed) antennas with one bit per real dimension ($M = 16$ and $M = 64$). Detection was performed using DDFSE employing $\nu = 2$ memory elements for $n = 3$ antennas and $\nu = 3$ memory elements for $n = 2$ antennas. Simulations are averaged over several thousand Rayleigh fading channels.} \label{fig:SI} \end{figure} For DDFSE, usually a prefilter is applied to make the overall impulse response minimum-phase, see e.g.~\cite{MinPhasePrefilter}. In this case, we have to deal with a number of problems: First of all, SI is a nonlinear operation. Secondly, the overall impulse response has a large linear phase portion. Thirdly, for a given interval width, the minimum phase response captures less of the total energy than the overall impulse response: The original impulse response has a large fraction of its total energy concentrated around the center, whereas the minimum phase part spreads its energy over a wider interval in the beginning of the impulse. Thus, more memory elements in the VA are required to capture the same amount of energy. This is computationally not feasible. Fourthly, given the total length of $\varphi_{hh}$, numerical inaccuracies might occur when calculating the prefilter. 
It is thus advantageous to use the original filter instead of applying any prefilter to make the overall filter minimum phase and simply treat the influence of the first taps as noise. The power efficiency of the SI pulse compared to conventional PAM employing a RRC filter depends on the roll-off factor and the number of states in the VA: For a fixed number of states in the receiver, increasing the roll-off factor $\beta$ of the RRC filter improves power efficiency, because more of the energy of $\varphi_{hh}(t)$ is concentrated around the center. This is in contrast to conventional PAM, where power efficiency is unaffected by the roll-off factor. Our analyses show that for $\beta = 0.1$, a loss of approx. 1.1 dB occurs at a target symbol error rate of $10^{-4}$. This loss shrinks quickly with an increasing roll-off factor: For $\beta = 0.25$, the gap is closed. Increasing the number of memory elements also reduces the loss, because the amount of energy used for sequence estimation is increased. The feasibility of this is restricted by computational complexity. Therefore, we now discuss how the gap between SI RRC and conventional RRC can be closed with reduced computational complexity. It is usually sufficient to use only 2 or 3 delay elements to capture almost all energy of the pulse. The remaining energy at the end of the filter can be equalized by means of DDFSE. But since every delay element is $M$-ary, the number of states can become infeasible even for such a small number of elements. We thus compare system performance and complexity when using two different complexity reduction techniques: The well-known Reduced State Sequence Estimation (RSSE)~\cite{RSSE} and a newly proposed iterative application of the VA. For RSSE, we use a Viterbi algorithm with $\nu = 2$ memory elements and generate hyperstates by using hypersymbols in the second delay element only. 
Combining different input symbols into a hypersymbol in the first element leads to large performance degradation due to two effects: Hyperstates are calculated in advance based on the original constellation $\mathcal{A}$. We did this by numerically optimizing the minimum distance within each hyperstate~\cite{art:spinnler}. The effective constellation at the receiver, however, is $\vec{H}\mathcal{A}$, which might have a drastically different distance profile than the constellation with originally optimal hyperstates. The other negative effect is that neither the impulse response $\varphi_{hh}(t)$ nor its minimum-phase component has monotonically decreasing values. The decision in the first delay element is thus based on only a small fraction of the total pulse energy. Our second approach to reduce the complexity is to apply the Viterbi algorithm iteratively. This works well if the performance gap between the use of $\nu$ and $\nu + 1$ memory elements is not too large. The idea behind it is that if each error pattern is a neighboring symbol of the correct signal point, it is sufficient to consider these neighboring symbols in future steps. This is a valid assumption if the SNR is sufficiently high. Our algorithm works as follows: \begin{enumerate} \item Initialize $\nu = 1$. \item Run the Viterbi algorithm with one memory element. \item For each estimate $\hat{\vec{x}}[k]$, find the $n_\mathrm{NB}$ nearest neighboring points. \item Set $\nu = \nu + 1$. \item Run the Viterbi algorithm with $\nu$ memory elements, only allowing $n_\mathrm{NB} + 1$ points in each time step. \item If $\nu = \nu_\mathrm{max}$, finish; otherwise, go back to step 3. \end{enumerate} The neighboring points can be calculated in advance and stored in a table.
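Steps 3 and 5 rely on a precomputed table of nearest neighbors. A minimal sketch of that table and of the restricted candidate set per time step (our own helper names, not code from the paper):

```python
import numpy as np

def neighbor_table(A, n_nb):
    """Indices of the n_nb nearest neighbors of every constellation point."""
    d = np.linalg.norm(A[:, None, :] - A[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)   # exclude each point itself
    return np.argsort(d, axis=1)[:, :n_nb]

def candidates(est_idx, table):
    """Allowed points in one time step: the previous estimate plus its
    neighbors, i.e., n_nb + 1 points in total (as in step 5)."""
    return np.concatenate(([est_idx], table[est_idx]))
```

For a constellation of size $M$ this shrinks the per-step search space from $M$ to $n_\mathrm{NB} + 1$ in every iteration after the first.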
This works best if the neighboring points are taken from $\vec{H}\mathcal{A}$, but a reasonable performance can also be achieved if they are taken directly from $\mathcal{A}$, which avoids the overhead of recalculating them every time $\vec{H}$ changes. \begin{figure}[t] \setlength\figureheight{4.5cm} \setlength\figurewidth{7.5cm} \input{si_redcomm_ebn0.tex} \caption{Comparison of SI pulse shaping for $n = 3$ antennas with an RRC pulse shape with $\beta = 0.25$. RSSE used a quaternary second delay element and $n_\mathrm{NB} = 4$ for all iterative variants. Simulations were performed over several thousand Rayleigh fading channels and averaged.} \label{fig:complRed} \end{figure} Fig.~\ref{fig:complRed} shows the performance of a 64-ary alphabet transmitted via SI signaling with an RRC pulse with $\beta = 0.25$ employing $n = 3$ antennas. The VA curve using two memory elements ($\nu = 2$) is the same as in Fig.~\ref{fig:SI} and is our baseline. As a measure of complexity, we count the number of trellis branches in each time step \begin{equation} \Xi = \begin{cases} M^{\nu+1},\; \; &\text{Standard VA} \\ M \cdot \prod_{i = 1}^{\nu} M_i, \; \; &\text{VA, RSSE} \\ M^2 + \sum_{i=2}^{\nu_\mathrm{max}} (n_\mathrm{NB, i} + 1)^{\nu_i+1}, \; \; &\text{Iterative VA}. \end{cases} \end{equation} In this expression, $M_i$ is the number of possible values in the $i$-th delay element if RSSE is used (the number of hypersymbols) and $\nu_\mathrm{i}$ is the number of memory elements in the $i$-th iteration of the iterative VA. Table~\ref{tbl:complexity} shows the complexity for the algorithms used to create Fig.~\ref{fig:complRed}. The computational complexity for a VA with $\nu = 2$ is already impractical. The iterative VA, however, provides the same performance as the VA with $\nu = 2$ with only a minor complexity increase compared to the VA with $\nu = 1$. Increasing the number of iterations by one makes it possible to improve the power efficiency further (by approx.
0.5 dB at $\mathrm{SER} = 10^{-4}$) such that the iterative VA outperforms the VA with two delay elements. The exact results for the iterative VA depend on the shape of the overall impulse response, which changes with the roll-off factor. For practical values $\beta > 0.2$, we found the differences to be only marginal. \begin{table}[!t] \renewcommand{\arraystretch}{1.3} \caption{Complexity Comparison of \newline SI Demodulation (M = 64) for Fig.~\ref{fig:complRed}} \label{tbl:complexity} \centering \begin{tabular}{l||l} \hline Algorithm & $\Xi$ \\ \hline\hline Viterbi, $\nu = 1$ & 4096 \\ Viterbi, $\nu = 2$ & 262144 \\ Viterbi, RSSE & 16384 \\ It. Viterbi, $\nu_\mathrm{max} = 2$ & 4221 \\ It. Viterbi, $\nu_\mathrm{max} = 3$ & 4846 \\ \hline\hline \end{tabular} \end{table} \section{PASPR, Spectrum and Bandwidth Efficiency} \label{sec:paspr} \begin{figure}[t] \setlength\figureheight{4.5cm} \setlength\figurewidth{7.5cm} \input{spectrum.tex} \caption{Occupied spectra of different pulse shaping methods. The spectra are calculated for an $n = 4$ antenna system.} \label{fig:spectra} \end{figure} \begin{figure}[t] \setlength\figureheight{4.5cm} \setlength\figurewidth{7.5cm} \input{PASPR.tex} \caption{Resulting PASPRs of different pulse shaping methods. A modulation rate of 1 bit per real dimension and an interpolation frequency $f_\mathrm{IP} = 4$ for SI RRC were used.} \label{fig:paspr} \end{figure} In the previous sections, we introduced several methods to reduce the PASPR of a signal and discussed their power efficiency. Some methods may have a negative impact on the bandwidth, but a wider spectrum may be tolerable if the gain in PASPR is substantial. In this section, we discuss how much PASPR reduction can be achieved and how the corresponding spectrum behaves. Our baseline is an RRC pulse shaping filter with roll-off factor $\beta = 0.25$. The comparison of bandwidth and PASPR is given in Figs.~\ref{fig:spectra} and \ref{fig:paspr}, respectively.
In these plots, RRC and $\mathrm{sinc}^2$ describe conventional pulse shaping (see Sec.~\ref{sec:sincSq} for $\mathrm{sinc}^2$ pulse shaping), T/2-RRC describes pulse shaping at twice the symbol rate and SI-based pulse shaping is named SI RRC (see Sec.~\ref{sec:sphInt}). A simple conclusion is that PASPR reduction can be traded for bandwidth, i.e., the widest spectrum produces the lowest PASPR: T/2-RRC has the lowest PASPR, followed by $\mathrm{sinc}^2$ pulse shaping; SI RRC achieves the least PASPR reduction, but it is also the only method which does not widen the spectrum. One should also take into account the receiver complexity for these techniques: $\mathrm{sinc}^2$ requires almost no additional complexity compared to ISI-free PAM, whereas T/2-RRC and SI RRC require sequence estimation to achieve a reasonable power efficiency. For all RRC-based methods, the well-known trade-off between bandwidth and roll-off still holds. Additionally, increasing $\beta$ also improves the PASPR slightly. \begin{figure*}[!t] \centering \hspace*{-1cm} \begin{tabular}{ll} \captionsetup[subfloat]{captionskip=0pt, nearskip=0.2cm, margin=0.5cm, justification=centering} \subfloat[Plot over PASPR]{ \setlength\figureheight{4.5cm} \setlength\figurewidth{7.5cm} \input{paspr_vs_speceff.tex} \label{fig:spec_eff1}} & \captionsetup[subfloat]{captionskip=0pt, nearskip=0.2cm, margin=0.5cm, justification=centering} \subfloat[Plot over $E_{b, \mathrm{max}}/N_0 = E_b \cdot \mathrm{PASPR} / N_0$]{ \setlength\figureheight{4.5cm} \setlength\figurewidth{7.5cm} \hspace*{-.5cm} \input{spec_eff_max.tex} \label{fig:spec_eff2}} \end{tabular} \caption{Spectral efficiencies for different pulse shaping methods averaged over many Rayleigh fading channels with $n = 3$ antennas and a constellation size of $M = 64$. For all methods except RRC, we plot the spectral efficiency based on the bandwidth $B_x$ for a fraction $x$ of the total energy with $x \in \{99\%, 99.9\%, 99.99\%, 100\%\}$.
A target symbol error rate of $\mathrm{SER} = 10^{-4}$ was used for the power-bandwidth plane in Fig.~\ref{fig:spec_eff2}. Simulations were performed over several thousand Rayleigh fading channels and averaged.} \label{fig:spec_eff} \end{figure*} To summarize the results, we compare PASPR as well as power and spectral efficiencies of the methods presented in this paper. Since some pulse shaping methods have wide spectra, we also consider the bandwidth $B_x$, which includes a fraction $x$ of the total signal energy, e.g., $B_{99\%}$ is the bandwidth which holds $99\%$ of the total energy of a signal. In Fig.~\ref{fig:spec_eff}, the spectral efficiencies are plotted for a transmission system employing $n = 3$ antennas and a constellation size of $M = 64$ over a Rayleigh fading channel. As a baseline, an RRC with $\beta = \{0, 0.25, 0.5\}$ is used. For all other pulse shaping methods, we plot the spectral efficiencies for $B_x$ with $x \in \{99\%, 99.9\%, 99.99\%, 100\%\}$. Fig.~\ref{fig:spec_eff1} plots the spectral efficiencies over the PASPR, whereas Fig.~\ref{fig:spec_eff2} uses the maximum energy per bit over the noise spectral density, i.e., $E_{b, \mathrm{max}}/N_0 = E_b \cdot \mathrm{PASPR} / N_0$. This takes both the maximum instantaneous power and the power efficiency for a given error rate into account and thus provides a fair comparison. These results are especially applicable to load-modulated transmitters. The general result can be summarized as follows: All methods presented in this paper provide reasonable reduction of the PASPR. Some methods, however, do so at the cost of reduced spectral efficiency. SI pulse shaping is the only method to reduce the PASPR without sacrificing spectral efficiency. The cost to be paid in this case is a more complex receiver architecture. In Fig.~\ref{fig:spec_eff2}, the power-bandwidth plane for a target symbol error rate of $\mathrm{SER} = 10^{-4}$ is shown.
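Since $E_{b, \mathrm{max}}/N_0 = E_b \cdot \mathrm{PASPR} / N_0$, the abscissa of Fig.~\ref{fig:spec_eff2} is a plain addition in the dB domain. A trivial numerical sketch (our own helper names):

```python
import math

def to_db(x):
    """Linear power ratio to dB."""
    return 10.0 * math.log10(x)

def eb_max_n0_db(eb_n0_db, paspr_db):
    """E_b,max/N_0 = (E_b/N_0) * PASPR, i.e., an addition in the dB domain."""
    return eb_n0_db + paspr_db

# Example: an operating point of 10 dB E_b/N_0 with a 3 dB PASPR
# corresponds to E_b,max/N_0 = 13 dB.
```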
Because any losses in power efficiency for a given symbol error rate are minor (if present at all), SI RRC in particular is superior to conventional PAM. The gain due to the reduced PASPR generally outweighs the loss due to suboptimal detection of ISI if $\beta > 0.1$. Depending on the roll-off factor, the final gain in $E_{b, \mathrm{max}}/N_0$ is in the range of 1 to 2 dB. Substantial gain can also be achieved by T/2-RRC and $\mathrm{sinc}^2$ pulse shaping. Since these variants have almost no loss in power efficiency, they can realize their whole PASPR gain. Their downside, again, is the reduced spectral efficiency. \section{Conclusion} \label{sec:conclusion} PSKH is a novel modulation scheme for MIMO that is applicable in various scenarios: Because PSKH constellations are optimized over all antennas, both the constellation-constrained capacity as well as the error rate for a given power are improved compared to conventional PSK. This shows that PSKH is an interesting alternative even for MIMO systems that do not employ load modulation. If PSKH is combined with load-modulation amplifiers, additional improvements are possible. The distribution on the hypersphere can be exploited to achieve a transmit signal with a low PASPR. By reducing the PASPR, amplifiers can be driven at a higher efficiency and thus the power loss is reduced. To achieve this, there is a trade-off between three degrees of freedom: power efficiency, bandwidth efficiency, and receiver complexity. It is possible to improve power efficiency at the cost of either bandwidth efficiency or receiver complexity. These results underline that load-modulation transmitters are a valid alternative for power-efficient communication in MIMO systems that employ only a small number of antennas. \ifCLASSOPTIONcaptionsoff \newpage \fi \bibliographystyle{IEEEtran}
\section{Introduction} \label{sec:intro} Variational inference (VI) has emerged as a fast, albeit biased, alternative to Markov chain Monte Carlo for Bayesian inference. VI methods attempt to minimize the KL divergence from a parametrized family of distributions to a true posterior over latent variables. The expressiveness of this family is essential for good performance, with under-expressive models leading to both increased bias and underestimation of posterior variance \citep{yin2018semi}. If the density of the approximate posterior is available in closed-form, then the variational family is said to be \emph{explicit}. Explicit models allow for straightforward estimation of the VI objective, but can often lead to reduced expressiveness, which limits their performance overall. Mean-field VI \citep{blei2017variational}, for example, imposes restrictive independence assumptions between the latent variables of interest. Normalizing flows \citep{tabak2010density, rezende2015variational} provide an alternative family of explicit density models that yield improved expressiveness compared with mean field alternatives. These methods push samples from a simple base distribution (typically Gaussian) through parametrized bijections to produce complex, yet still exact, density models. Normalizing flows have performed well in tasks requiring explicit density models (e.g.\ \citep{louizos2017multiplicative, papamakarios2017masked, ho2019compression}), including VI, where flows have demonstrated the ability to improve the quality of approximate posteriors \citep{rezende2015variational, durkan2019neural}. Although normalizing flows can directly improve the expressiveness of mean-field VI schemes, their inherent bijectivity remains quite restrictive. We can overcome this limitation by instead using continuously-indexed flows (CIFs) \citep{cornish2019relaxing}. 
CIFs relax the bijectivity constraint of standard normalizing flows by augmenting them with continuous index variables, thus parametrizing an \emph{implicit} density model defined as the marginalization over these additional indexing variables. Beyond being well-grounded theoretically, CIFs also have empirically demonstrated the ability to outperform relevant normalizing flow baselines in the context of density estimation, and thus it is sensible to investigate the performance of CIFs in VI. A difficulty in applying CIFs to VI -- and implicit models more generally -- is that their marginal distribution is intractable, precluding evaluation of the standard VI objective. However, conveniently, CIFs still admit a tractable \emph{joint} distribution over the variables of interest (latent variables in VI) and the auxiliary indexing variables. We can therefore appeal to the framework of \emph{auxiliary variational inference} (AVI) \citep{agakov2004auxiliary}, which facilitates the training of implicit models with tractable joint densities as variational inference models. CIFs also \emph{already} prescribe a model for inferring auxiliary variables -- typically required in AVI schemes -- suggesting that CIFs are a natural fit here. AVI methods more generally have shown improved expressiveness over their explicit counterparts in several settings \citep{DBLP:journals/corr/BurdaGS15, yin2018semi}, and are becoming more popular with the rise of implicit models overall \citep{tran2017hierarchical, lawson2019energy, kleinegesse2020sequential}, suggesting that this framework is able to overcome any supposed drawbacks associated with not having access to explicit densities. In this work, we show that these benefits are also realized when CIFs are applied within the AVI framework.
We first describe how CIFs can be used as the variational family in AVI, naturally incorporating the components of CIF models designed for density estimation, and we explain how we can also \emph{amortize} these inference models. We then empirically demonstrate the advantages of using CIFs over standard normalizing flows for modelling posteriors with complicated topologies, and additionally how CIFs can facilitate maximum likelihood estimation of the parameters of complex latent-variable generative models. \section{Continuously-Indexed Flows for Variational Inference} In this section we first review necessary background on variational inference (VI) -- including auxiliary variational inference (AVI) -- and continuously-indexed flows (CIFs). We then describe how CIFs naturally fit in as a class of auxiliary variational posteriors, and extend to include amortization. We summarize the results of this section in \autoref{alg:ELBO}. \subsection{VARIATIONAL INFERENCE} Given a joint probability density $p_{X,Z}$, with observed data $X \in \mathcal X$ and latent variable $Z \in \mathcal Z$, variational inference (VI) provides us with a means to approximate the intractable posterior $p_{Z|X}(\cdot \mid x)$. This is accomplished by introducing a parametrized approximate posterior\footnote{We may also \emph{amortize} $q_Z$ and replace it with the conditional $q_{Z|X}$, especially when using VI to facilitate generative modelling. Further discussion on amortization is deferred to \autoref{sec:amortization-main}.} $q_Z$, and maximizing the evidence lower bound (ELBO) \begin{equation} \label{eq:elbo_1} \mathcal L_1(x) \coloneqq \E_{z \sim q_Z} [ \log p_{X, Z}(x, z) - \log q_Z(z) ] \end{equation} with respect to the parameters of $q_Z$. This is equivalent to minimizing the KL divergence between $q_Z$ and the true posterior $p_{Z|X}(\cdot \mid x)$. 
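The ELBO \eqref{eq:elbo_1} is typically estimated by Monte Carlo over samples from $q_Z$. A minimal sketch for a hypothetical conjugate-Gaussian model (not one used in this paper), where the bound can be checked against the exact log evidence and is tight exactly when $q_Z$ is the true posterior:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_normal(v, mu, var):
    return -0.5 * (np.log(2 * np.pi * var) + (v - mu) ** 2 / var)

def elbo(x, q_mu, q_var, n_samples=100_000):
    """Monte Carlo estimate of L_1(x) = E_{z~q}[log p(x, z) - log q(z)]
    for the toy model Z ~ N(0, 1), X | Z ~ N(Z, 1)."""
    z = q_mu + np.sqrt(q_var) * rng.standard_normal(n_samples)
    log_joint = log_normal(z, 0.0, 1.0) + log_normal(x, z, 1.0)
    return np.mean(log_joint - log_normal(z, q_mu, q_var))

x = 1.3
exact_log_evidence = log_normal(x, 0.0, 2.0)   # marginally, X ~ N(0, 2)
tight = elbo(x, q_mu=x / 2, q_var=0.5)         # q is the exact posterior
loose = elbo(x, q_mu=0.0, q_var=1.0)           # q is the prior
```

When $q_Z$ equals the posterior, the integrand is constant in $z$ and the estimate recovers $\log p_X(x)$ exactly; the mismatched $q_Z$ falls short of it by the KL divergence to the posterior.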
Explicit VI methods, such as mean-field approaches or normalizing flow models, define $q_Z$ in such a way that it can be evaluated pointwise. Although this approach is computationally convenient, the expressiveness of the resulting methods can often be limited. To improve on this, implicit methods define $q_Z$ typically through some type of sampling process with intractable marginal distribution, such as the pushforward of a simple distribution through an unrestricted deep neural network. These methods can be quite powerful but also challenging to optimize, especially in the context of VI \citep{tran2017hierarchical}, as we lose the tractability of \eqref{eq:elbo_1}. \paragraph{Auxiliary Variational Inference} In contexts where $q_Z$ is obtained as $q_Z(z) \coloneqq \int q_{Z,U}(z, u) \, \mathrm{d}u$ for some joint density $q_{Z,U}$ that can be sampled from and evaluated pointwise, its parameters can be learned via \emph{auxiliary variational inference} (AVI) \citep{agakov2004auxiliary}. We refer to $U$ here as an \emph{auxiliary} variable. These approaches introduce an auxiliary inference distribution $r_{U\mid Z}$ and optimize \begin{equation} \label{eq:elbo_2} \mathcal L_2(x) \!\coloneqq\! \E_{(z, u) \sim q_{Z, U}}\!\left[ \log \frac{ p_{X, Z}(x, z) \cdot r_{U|Z}(u \mid z)}{q_{Z, U}(z, u)} \right]\!. \end{equation} Key to this approach is the fact that $\mathcal L_1(x)\! \geq\! \mathcal L_2(x)$, and that this bound is tight when $r_{U\mid Z} = q_{U\mid Z}$, which holds because \begin{equation} \label{eq:elbo_difference} \mathcal L_1 (x) \!= \!\mathcal L_2(x)\! +\! \E_{z \sim q_Z}\! \left[ D_\text{KL}(q_{U|Z}(\cdot | z) || r_{U|Z}(\cdot | z)) \right]. \end{equation} As such, optimizing the parameters of $r_{U \mid Z}$ jointly with those of $q_{Z,U}$ will encourage learning better approximations to the true posterior $p_{Z\mid X}$. 
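A small numerical check of \eqref{eq:elbo_2} and \eqref{eq:elbo_difference}: with a jointly Gaussian auxiliary family (all components below are hypothetical choices), $\mathcal L_2$ matches $\mathcal L_1$ when $r_{U|Z} = q_{U|Z}$, and drops by the expected KL term otherwise:

```python
import numpy as np

rng = np.random.default_rng(1)
N, x = 200_000, 1.3

def log_n(v, mu, var):
    return -0.5 * (np.log(2 * np.pi * var) + (v - mu) ** 2 / var)

def log_joint(z):                # toy target: Z ~ N(0,1), X|Z ~ N(Z,1)
    return log_n(z, 0.0, 1.0) + log_n(x, z, 1.0)

# Auxiliary variational family: U ~ N(0,1), Z|U ~ N(aU + b, s2), whose
# explicit marginal q_Z = N(b, a^2 + s2) lets us compute L_1 directly.
a, b, s2 = 0.6, 0.6, 0.16
u = rng.standard_normal(N)
z = a * u + b + np.sqrt(s2) * rng.standard_normal(N)
v = a * a + s2

L1 = np.mean(log_joint(z) - log_n(z, b, v))                    # eq. (1)

def L2(r_mean, r_var):                                         # eq. (2)
    return np.mean(log_joint(z) + log_n(u, r_mean, r_var)
                   - log_n(u, 0.0, 1.0) - log_n(z, a * u + b, s2))

L2_opt = L2(a * (z - b) / v, 1.0 - a * a / v)   # r equals the true q_{U|Z}
L2_bad = L2(0.0, 1.0)                           # mismatched r = N(0, 1)
```

With the optimal $r_{U|Z}$, the per-sample integrand of $\mathcal L_2$ collapses to that of $\mathcal L_1$, so the two estimates coincide; the mismatched $r_{U|Z}$ loses exactly the expected KL term of \eqref{eq:elbo_difference}.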
Note that, although we now are optimizing a lower bound on $\mathcal L_1$, we are also optimizing over a larger family of approximate posteriors which may end up yielding a better optimum (cf.\ \autoref{prop:cif_vs_baseline} below). \subsection{CONTINUOUSLY-INDEXED FLOWS} We now describe in detail the \emph{continuously-indexed flow} (CIF) model \citep{cornish2019relaxing}, which we intend to incorporate into an AVI scheme. CIFs define a density $q_Z$ over $\mathcal Z$ as the $Z$-marginal of \begin{equation} \label{eq:cif_generative} W \sim q_W, \quad U \sim q_{U | W}(\cdot \mid W), \quad Z = G(W; U), \end{equation} where $q_W$ is a noise distribution over $\mathcal Z$, $q_{U \mid W}$ is a conditional distribution over $\mathcal U$ describing an auxiliary indexing variable, and $G : \mathcal Z \times \mathcal U \rightarrow \mathcal Z$ is a function such that $G(\cdot; u)$ is a bijection for each $u \in \mathcal U$. For all $z \in \mathcal Z$, the density model $q_Z$ is then given by the intractable integral $q_Z(z) \coloneqq \int q_{Z,U}(z, u) \, \mathrm d u$ over the tractable joint density $q_{Z,U}$ given by \begin{align} \label{eq:cif_joint} &q_{Z, U} (z, u) = q_W\!\left(G^{-1}(z; u)\right) \\ &\quad \times q_{U|W}\!\left(u \mid G^{-1}(z; u)\right) \nonumber \left|\det \mathrm D_z G^{-1}(z; u)\right| \end{align} for all $z \in \mathcal Z$ and $u \in \mathcal U$, where $G^{-1}$ denotes the inverse of $G$ (and $\mathrm D_z G^{-1}$ the Jacobian of $G^{-1}$) with respect to its first argument $z$ (see \autoref{sec:app-density-obj} for a derivation). 
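The tractable joint \eqref{eq:cif_joint} can be verified numerically in one dimension. In the sketch below, $q_W$, $q_{U|W}$, $s$, and $t$ are illustrative stand-ins; for this affine $G$, $\left|\det \mathrm D_z G^{-1}(z; u)\right| = e^{-s(u)}$, and the resulting joint density integrates to one:

```python
import numpy as np

# Scalar single-layer CIF with illustrative components:
# q_W = N(0,1), q_{U|W} = N(0.5 w, 1), G(w; u) = exp(s(u)) * (w + t(u)).
s = lambda u: 0.3 * np.tanh(u)
t = lambda u: 0.5 * u

def log_n(v, mu, var=1.0):
    return -0.5 * (np.log(2 * np.pi * var) + (v - mu) ** 2 / var)

def G(w, u):
    return np.exp(s(u)) * (w + t(u))

def G_inv(z, u):
    return z * np.exp(-s(u)) - t(u)

def log_q_zu(z, u):
    """Tractable joint of eq. (5); here |det D_z G^{-1}(z; u)| = e^{-s(u)}."""
    w = G_inv(z, u)
    return log_n(w, 0.0) + log_n(u, 0.5 * w) - s(u)

# Sanity check: the joint density integrates to (approximately) one.
zg, ug = np.meshgrid(np.linspace(-8, 8, 400), np.linspace(-8, 8, 400))
cell = (16 / 399) ** 2
total = np.sum(np.exp(log_q_zu(zg, ug))) * cell
```

The grid integral of $q_{Z,U}$ over $(z, u)$ returns a value close to one, confirming that \eqref{eq:cif_joint} is a valid density despite $q_Z$ itself being intractable.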
Typically, $q_{U|W}$ is chosen to be conditionally Gaussian with mean and covariance as the outputs of neural networks taking the conditioning variables $W$ and $Z$ as input, and \begin{equation} \label{eq:cif_G} G(w; u) \coloneqq e^{s(u)} \odot \left(g(w) + t(u)\right), \end{equation} where $g : \mathcal Z \rightarrow \mathcal Z$ is some base bijection, $s, t : \mathcal U \rightarrow \mathcal Z$ are arbitrary neural networks, and $\odot$ denotes elementwise multiplication. \citet{cornish2019relaxing} used the model \eqref{eq:cif_generative} in the context of density estimation to model the generative process of a set of i.i.d.\ data. \paragraph{Multi-layer CIFs} \citet{cornish2019relaxing} also propose to improve the expressiveness of \eqref{eq:cif_generative} by taking the noise distribution $q_W$ to be a CIF model itself. Applying this recursively $L$ times, we can take $q_Z$ to be the $W_L$-marginal in the following model: \begin{align} \label{eq:cif_generative_multilayer} W_0 \sim q_{W_0}, \quad U_\ell &\sim q_{U_\ell | W_{\ell-1}}(\cdot \mid W_{\ell-1}) \nonumber \\ W_\ell &= G_\ell(W_{\ell-1}; U_\ell), \end{align} where $\ell \in \{1, \ldots, L\}$. Here $q_{W_0}$ is typically a mean-field Gaussian and each $G_\ell : \mathcal Z \times \mathcal U \rightarrow \mathcal Z$ is bijective in its first argument. Practically, multi-layer CIF models have demonstrated far more representational power than single-layer versions, although we note that we can still view this multi-layer model as an instance of \eqref{eq:cif_generative} for certain choices of $q_{U|W}$ and $G$ (as in \autoref{sec:app-stacking}). \paragraph{Auxiliary Inference Distribution} The intractability of $q_Z$ arising from both \eqref{eq:cif_generative} and \eqref{eq:cif_generative_multilayer} precludes direct maximum likelihood estimation.
\citet{cornish2019relaxing} therefore introduce an auxiliary \emph{backward} distribution, either $r_{U|Z}$ or $r_{U_{1:L}|Z}$ respectively, to enable training of CIFs through an amortized ELBO. Particularly noteworthy is the structure of this distribution in the multi-layer case. The optimal choice for $r_{U_{1:L}|Z}$ would be $q_{U_{1:L}|Z}$, which can be shown to factorize as $q_{U_{1:L} | Z}(u_{1:L} \mid z) = \prod_{\ell=1}^L q_{U_\ell | W_\ell}(u_\ell \mid w_\ell),$ where $w_L \coloneqq z$ and $w_\ell \coloneqq G^{-1}_{\ell+1}(w_{\ell+1}; u_{\ell+1})$ recursively for $\ell \in \{1, \ldots, L-1\}$. Although this gives us the form of $q_{U_{1:L} | Z}$, the backward distributions $q_{U_\ell | W_\ell}$ are not generally available in closed form. However this does at least motivate defining $r_{U_{1:L} | Z}$ to have the same form, which can be done by introducing (reparametrizable) densities $r_{U_\ell | W_\ell}$ and setting \begin{equation} \label{eq:multilayer_backward} r_{U_{1:L} | Z}(u_{1:L} \mid z) \coloneqq \prod_{\ell=1}^L r_{U_\ell | W_\ell}(u_\ell \mid w_\ell) \end{equation} with $w_\ell$ defined as above. The densities for $r_{U_\ell | W_\ell}$ are also taken to be parametrized conditional Gaussians. This structured inference procedure induces a natural weight-sharing scheme between the forward and backward directions of the model, as both are defined using $G_\ell$. \subsection{CIF MODELS IN AVI} \label{sec:cif-avi} We can use CIFs as the family of approximate posteriors $q_Z$ in VI by appealing to the framework of AVI. Starting with the single-layer version, we see from \eqref{eq:cif_joint} that CIFs admit a tractable joint distribution $q_{Z,U}$ over latent and auxiliary variables. We can then plug this distribution into \eqref{eq:elbo_2}, noting also that CIFs already prescribe a form for $r_{U|Z}$ and thus are a natural fit within an AVI scheme. 
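The recursive structure of \eqref{eq:multilayer_backward}, which sets $w_L = z$ and inverts layer by layer, can be sketched directly. The per-layer bijections and conditionals below are scalar stand-ins for the learned networks (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(2)
L = 3

# Scalar per-layer bijections G_l(w; u) = exp(s_l u) * (w + t_l u); the
# coefficients stand in for the s and t networks.
s_coef = [0.2, -0.1, 0.3]
t_coef = [0.5, 0.4, -0.2]

def G(l, w, u):
    return np.exp(s_coef[l] * u) * (w + t_coef[l] * u)

def G_inv(l, w_next, u):
    return w_next * np.exp(-s_coef[l] * u) - t_coef[l] * u

# Forward pass of eq. (6): sample W_0, then U_l | W_{l-1} and W_l.
w = rng.standard_normal()
ws, us = [w], []
for l in range(L):
    u = 0.5 * w + rng.standard_normal()   # stand-in for q_{U_l | W_{l-1}}
    w = G(l, w, u)
    ws.append(w)
    us.append(u)
z = ws[-1]

# Backward recursion of eq. (7): w_L = z, then invert layer by layer.
recovered = [z]
for l in reversed(range(L)):
    recovered.append(G_inv(l, recovered[-1], us[l]))
recovered = recovered[::-1]
```

Given $z$ and $u_{1:L}$, the backward pass reproduces every intermediate $w_\ell$ from the forward pass, which is exactly what lets $r_{U_{1:L}|Z}$ share the bijections $G_\ell$ with the forward model.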
However, we must take one additional step to formulate an objective amenable to optimization, as na\"ively substituting $q_{Z,U}$ into \eqref{eq:elbo_2} produces an expectation over a distribution containing the parameters of $G$ itself. To address this, we show in \autoref{sec:app-density-obj} how to rewrite this as an expectation over $q_{W,U}$ rather than $q_{Z,U}$, obtaining the objective \begin{align} \label{eq:elbo_cif} &\E_{(w, u) \sim q_{W,U}}\! \!\left[ \log \frac{p_{X, Z}(x, z)\! \cdot\! r_{U|Z}(u \mid z)}{q_{W,U}(w,u) \!\cdot \!|\det \mathrm D_w G(w; u) |^{\!-\!1}}\right], \end{align} where we write $z \coloneqq G(w; u)$ for readability. We always select $q_{W,U}$ to be reparametrizable \citep{DBLP:journals/corr/KingmaW13}, which makes the objective straightforward to optimize via stochastic gradient descent with respect to the parameters of $q, r,$ and $G$. Note however that $r_{U\mid Z}$ need not necessarily be reparametrizable. This ``direction'' of reparametrization contrasts with CIF models for density estimation which require $r_{U|Z}$ -- not $q_{U|W}$ -- to be reparametrizable. Further discussion demonstrating that CIF models in density estimation and VI can be viewed as ``opposites'' of each other is provided in \autoref{sec:cif_for_de} and \autoref{sec:app-cif_de_vi}. \paragraph{Multi-layer CIFs in AVI} We can also use multi-layer CIFs as part of an AVI scheme. Now, each of the $q_{U_\ell | W_{\ell-1}}$ distributions in \eqref{eq:cif_generative_multilayer} is chosen to be reparametrizable, again contrasting with \citep{cornish2019relaxing} which does not require reparametrizable distributions here. \autoref{fig:sub-first} graphically displays the joint model $q_{Z,U_{1:L}}$. We also adopt the form of $r_{U_{1:L}|Z}$ from \eqref{eq:multilayer_backward} and demonstrate this auxiliary inference procedure in \autoref{fig:sub-second}, although we do not require the individual $r_{U_\ell | W_\ell}$ distributions to be reparametrizable.
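A single-sample estimator of \eqref{eq:elbo_cif} is then straightforward. The sketch below reuses a hypothetical scalar CIF with an arbitrary, untrained Gaussian $r_{U|Z}$; averaged over many reparametrized samples, the estimate lower-bounds the exact log evidence of a toy conjugate-Gaussian target:

```python
import numpy as np

rng = np.random.default_rng(3)
N, x = 200_000, 1.3

def log_n(v, mu, var=1.0):
    return -0.5 * (np.log(2 * np.pi * var) + (v - mu) ** 2 / var)

# Illustrative scalar single-layer CIF in the form of eq. (6).
s = lambda u: 0.3 * np.tanh(u)
t = lambda u: 0.5 * u

# Reparametrized sampling from q_{W,U}: W ~ N(0,1), U|W ~ N(0.5 W, 1).
w = rng.standard_normal(N)
u = 0.5 * w + rng.standard_normal(N)
z = np.exp(s(u)) * (w + t(u))                    # z = G(w; u)

# One-sample terms of eq. (8); r_{U|Z} = N(0.5 Z, 1) is an arbitrary,
# untrained auxiliary inference model.  Here log|det D_w G(w; u)| = s(u).
log_p_xz = log_n(z, 0.0) + log_n(x, z, 1.0)      # Z ~ N(0,1), X|Z ~ N(Z,1)
elbo_cif = np.mean(log_p_xz + log_n(u, 0.5 * z)
                   - log_n(w, 0.0) - log_n(u, 0.5 * w) + s(u))

log_evidence = log_n(x, 0.0, 2.0)                # exact: X ~ N(0, 2)
```

Because neither the CIF nor $r_{U|Z}$ is trained here, the estimate sits strictly below the exact log evidence; optimizing the parameters would close the gap.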
Being able to have $r_{U_{1:L}|Z}$ match the structure of the true auxiliary posterior $q_{U_{1:L}|Z}$ is likely useful in lowering the variance of estimators of the ELBO and gradients thereof. \begin{figure*} \centering \begin{subfigure}{.49\textwidth} \centering \includegraphics[width=.95\linewidth]{figs/generative_multilayer.png} \caption{Sampling $Z \sim q_{Z \textcolor{red}{| X}}$ as defined in \eqref{eq:cif_generative}} \label{fig:sub-first} \end{subfigure} \begin{subfigure}{.49\textwidth} \centering \includegraphics[width=.95\linewidth]{figs/inference_multilayer.png} \caption{Sampling $U_{1:L} \sim r_{U_{1:L} | Z, \textcolor{red}{X}}$ as defined in \eqref{eq:multilayer_backward}} \label{fig:sub-second} \end{subfigure} \caption{Diagrams demonstrating how to sample from the CIF approximate posterior (left) and the auxiliary inference model (right). The \textcolor{red}{red highlighting} corresponds to \textcolor{red}{amortization} -- these can be ignored for models not requiring amortization.} \label{fig:model_schematic} \end{figure*} We can now substitute our definitions for $q_{Z, U_{1:L}}$ (implied by \eqref{eq:cif_generative_multilayer}) and $r_{U_{1:L} \mid Z}$ into \eqref{eq:elbo_2} to derive an optimization objective for training multi-layer CIFs as the approximate posterior in VI. We again must be careful about reparametrization, as we need to write the objective as an expectation over $q_{W_0, U_{1:L}}$ instead of $q_{Z, U_{1:L}}$ to be able to optimize all parameters of $q$, analogously to \eqref{eq:elbo_cif}. Further details on how to do this are provided in \autoref{sec:app-stacking}, with the full objective given in \eqref{eq:cif_multilayer_obj}. \autoref{alg:ELBO} describes how to compute an unbiased estimator of this objective, from which we can then obtain unbiased gradients via automatic differentiation.
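\autoref{alg:ELBO} translates almost line for line into code. The sketch below is a non-amortized scalar instance with fixed Gaussian conditionals and affine bijections standing in for the learned networks (all components are hypothetical); averaged over many draws, the estimate stays below the exact log evidence of a toy conjugate-Gaussian target:

```python
import numpy as np

rng = np.random.default_rng(4)

def log_n(v, mu, var=1.0):
    return -0.5 * (np.log(2 * np.pi * var) + (v - mu) ** 2 / var)

# Two-layer scalar CIF; illustrative coefficients stand in for s, t networks.
L, s_coef, t_coef = 2, [0.2, -0.1], [0.5, 0.4]

def log_p_xz(x, z):              # toy target: Z ~ N(0,1), X|Z ~ N(Z,1)
    return log_n(z, 0.0) + log_n(x, z, 1.0)

def elbo_sample(x):
    """One unbiased draw of the L-layer CIF ELBO (Algorithm 1, non-amortized)."""
    w = rng.standard_normal()                        # w_0 ~ q_{W_0} = N(0,1)
    delta = -log_n(w, 0.0)
    for l in range(L):
        u = 0.5 * w + rng.standard_normal()          # q_{U_l|W_{l-1}} = N(0.5w, 1)
        w_next = np.exp(s_coef[l] * u) * (w + t_coef[l] * u)  # G_l(w; u)
        delta += (log_n(u, 0.5 * w_next)             # r_{U_l|W_l} = N(0.5 w_l, 1)
                  - log_n(u, 0.5 * w)
                  + s_coef[l] * u)                   # log|det D G_l(w; u)|
        w = w_next
    return delta + log_p_xz(x, w)

x = 1.3
est = np.mean([elbo_sample(x) for _ in range(20_000)])
exact = log_n(x, 0.0, 2.0)       # exact log evidence: X ~ N(0, 2)
```

In a real implementation, the loop body would run on batched tensors with gradients flowing through the reparametrized samples.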
\subsection{Amortization} \label{sec:amortization-main} VI methods can also be used to provide a surrogate objective for maximum likelihood estimation of the parameters of latent-variable models, particularly for deep generative models such as the variational auto-encoder (VAE) \citep{DBLP:journals/corr/KingmaW13, rezende2014stochastic}. In these settings, the goal is to maximize the marginal log-likelihood $\sum_i \log p_X(x_i)$ over the observed data $\{x_i\}_i$, with respect to the parameters of $p$, where $\log p_X(x) \coloneqq \log \int p_{X,Z}(x,z) \,\mathrm d z$ is a density model containing parametrized $p_{X,Z}(x,z)$. This integral is often intractable, and thus we resort to maximizing the ELBO \eqref{eq:elbo_1} (or \eqref{eq:elbo_2}) with respect to both the parameters of $p$ and $q$ as it bounds the marginal log-likelihood from below. In this case, we would like to \emph{amortize} the cost of variational inference across an entire dataset, rather than compute a brand new approximate posterior for each datapoint, and so we parametrize our variational distribution as an explicit function of the data. We can readily incorporate amortization into the single-layer CIF by replacing $q_W$ with $q_{W|X}$ in \eqref{eq:cif_generative}, and $r_{U|Z}$ with $r_{U|Z, X}$ in \eqref{eq:elbo_cif} since the true auxiliary posterior $q_{U|Z, X}$ will now carry an explicit dependence on the data $X$. For multi-layer CIFs, it is again straightforward to incorporate amortization into the model for $q$ by replacing $q_{W_0}$ with $q_{W_0 | X}$ in \eqref{eq:cif_generative_multilayer}. Additional care must be taken when constructing the \emph{auxiliary} inference model $r$, however, as the explicit dependence on data will appear in each term of the factorization of the true auxiliary posterior $q_{U_{1:L} | Z, X}$. 
We thus structure $r_{U_{1:L} | Z, X}$ similarly: \[ r_{U_{1:L} | Z, X}(u_{1:L} \mid z, x) \coloneqq \prod_{\ell=1}^L r_{U_\ell | W_\ell, X}(u_\ell \mid w_\ell, x), \] where $w_\ell$ is as defined in \eqref{eq:multilayer_backward}. The full amortized objective is given in \eqref{eq:a_cif_multilayer_obj} in the Appendix. \autoref{fig:model_schematic} graphically demonstrates how to incorporate amortization into both $q$ and $r$, while \autoref{alg:ELBO} includes a provision for this case as well. \begin{algorithm} \caption{Unbiased $L$-layer CIF ELBO estimator} \label{alg:ELBO} \begin{algorithmic} \FUNCTION{ELBO($x$, amortized)} \IF{amortized} \STATE $q_0 \gets q_{W_0 | X}(\cdot \mid x)$ \ELSE \STATE $q_0 \gets q_{W_0}$ \ENDIF \STATE $w_0 \sim q_0$ \STATE $\Delta \gets - \log q_0(w_0)$ \FOR{$\ell=1,\ldots,L$} \STATE $u \sim q_{U_\ell | W_{\ell-1}}(\cdot \mid w_{\ell-1})$ \STATE $w_\ell \gets G_\ell(w_{\ell-1}; u)$ \IF{amortized} \STATE $r_\ell \gets r_{U_\ell | W_\ell, X}(\cdot \mid w_\ell, x)$ \ELSE \STATE $r_\ell \gets r_{U_\ell | W_\ell}(\cdot \mid w_\ell)$ \ENDIF \STATE $\Delta \gets \Delta + \log r_\ell(u) - \log q_{U_\ell | W_{\ell-1}}(u \mid w_{\ell-1})$ \STATE $\qquad \qquad + \log | \det \mathrm D G_\ell(w_{\ell-1}; u)| $ \ENDFOR \STATE {\bf return} $\Delta + \log p_{X,Z}(x, w_L)$ \ENDFUNCTION \end{algorithmic} \end{algorithm} \section{Comparison to Related Work} In this section we first compare against methods using explicit normalizing flow models for variational inference, then move on to a discussion of implicit VI methods, and lastly compare the structure of CIFs in basic density estimation to CIFs in VI. \subsection{Normalizing Flows for VI} Normalizing flows (NFs) originally became popular as a method for increasing the expressiveness of explicit variational inference models \citep{rezende2015variational}.
NF methods define $q_Z$ as the $Z$-marginal of \begin{equation} \label{eq:nf_generative} W \sim q_W, \qquad Z = g(W), \end{equation} where $g : \mathcal Z \rightarrow \mathcal Z$ is a bijection. We can equivalently write $q_Z$ as $q_Z \coloneqq g_\# q_W$, where $g_\# q_W$ denotes the \emph{pushforward} of the distribution $q_W$ under the map $g$. Using the change of variable formula, we can rewrite \eqref{eq:elbo_1} here as \begin{equation} \label{eq:elbo_nf} \mathcal L_1(x) = \E_{w \sim q_W} \left[ \log \frac{p_{X, Z}(x, g(w))}{q_W(w) \cdot |\det \mathrm D g(w)|^{-1}}\right]. \end{equation} This objective is a simplified version of the CIF VI objective \eqref{eq:elbo_cif}. The following proposition, which we adapt here to the VI setting from \citet[Proposition~4.1]{cornish2019relaxing}, shows that generalizing from \eqref{eq:elbo_nf} to \eqref{eq:elbo_cif} is beneficial, as a CIF model trained by this auxiliary bound will perform at least as well in inference as its corresponding baseline flow trained via maximization of \eqref{eq:elbo_nf}. \begin{proposition} \label{prop:cif_vs_baseline} Assume a CIF inference model with components $q^\phi_{U|W}$, $r^\phi_{U|Z}$, and $G_\phi$ is parametrized by $\phi \in \Phi$, with associated objective \eqref{eq:elbo_cif} denoted as $\mathcal L_2^\phi$. Suppose there exists $\psi \in \Phi$ such that for some bijection $g$, $G_\psi(\cdot; u) = g(\cdot)$ for all $u \in \mathcal U$. Similarly, suppose $q_{U|W}^\psi$ and $r_{U|Z}^\psi$ are such that, for some density $\rho$ on $\mathcal U$, $q_{U|W}^\psi(\cdot \mid w) = r_{U|Z}^\psi(\cdot \mid z) = \rho(\cdot)$ for all $w, z \in \mathcal Z$. For a given $x \in \mathcal X$, if $\mathcal L_2^\phi(x) \geq \mathcal L_2^\psi(x)$, \[ \KL{q_Z^\phi}{p_{Z|X}(\cdot \mid x)} \leq \KL{g_\# q_W}{p_{Z|X}(\cdot \mid x)}. 
\] \end{proposition} The proof of this result, from which we also see that $\mathcal L_2^\psi(x) = \mathcal L_1(x)$ (where $\mathcal{L}_1(x)$ is as written in \eqref{eq:elbo_nf}) for all $x \in \mathcal X$, is provided in \autoref{sec:app-generalization}. This shows that optimizing a CIF using the auxiliary ELBO $\mathcal L_2$ will produce at least as good an inference model (as measured by the KL divergence) as a baseline normalizing flow optimized using the marginal ELBO \eqref{eq:elbo_nf}, in the limit of infinite samples from the inference model. Note that our choices of $G$ from \eqref{eq:cif_G} and $q_{U|W}$ and $r_{U|Z}$ as conditionally Gaussian will usually entail the conditions of \autoref{prop:cif_vs_baseline}, since for example we have $G(w; u) = g(w)$ in \eqref{eq:cif_G} if the final layer weights in the $s$ and $t$ networks are zero. We also empirically confirm that \autoref{prop:cif_vs_baseline} holds in the experiments. Beyond the discussion above, we also note that the bijectivity constraint of baseline normalizing flows can lead to problems when modelling a density that is concentrated on a region with complicated topological structure \citep[Corollary~2.2]{cornish2019relaxing}, and may cause flows to become numerically non-invertible in this case \citep{behrmann2020on}. Many models such as neural spline flows (NSFs) \citep{durkan2019neural} and \emph{universal} flows \citep{huang2018neural,jaini2019sum} have been proposed to improve expressiveness within the standard framework based on a single bijection. CIFs, on the other hand, use auxiliary variables to provide a mechanism for circumventing the limitations of using a single bijection, but lose analytical tractability as a result. \subsection{Implicit VI Methods} Several other AVI methods exist that, like our approach, also require the specification of a parametrized auxiliary inference distribution $r_{U|Z}$.
Hierarchical variational models (HVMs) \citep{ranganath2016hierarchical} are one such example, which take $q_{Z,U}(z,u) \coloneqq q_{Z \mid U} (z \mid u) \cdot q_U(u)$ for parametrized distributions $q_{Z \mid U}$ and $q_U$ both analytically tractable. Although both CIFs and HVMs specify tractable $q_{Z,U}$, the CIF joint distribution \eqref{eq:cif_joint} does not admit such a simple factorization, which may therefore increase expressiveness. Furthermore, unlike CIFs, HVMs do not admit a natural mechanism for matching the auxiliary inference model $r_{U|Z}$ to the structure of the true auxiliary posterior $q_{U\mid Z}$ when considering multiple levels of hierarchy. Related to these approaches are Hamiltonian-based VI methods \citep{salimans2015markov, caterini2018hamiltonian}, which build $q_{Z,U}$ by numerically integrating Hamiltonian dynamics, inducing a flow that is bijective now on the extended space $\mathcal Z \times \mathcal U$ instead of just $\mathcal Z$. In contrast, CIFs can be used to augment any type of normalizing flow (not just Hamiltonian dynamics), and are not restricted to a specific family of bijections $G$. Hamiltonian methods also suffer from greatly increasing computational requirements as the number of parameters in $p_{X,Z}$ grows, since they require $\mathrm D_z \log p_{X,Z}(x,z)$ at every flow step. There also exist methods that do not parametrize $r_{U\mid Z}$, but instead build an auxiliary inference distribution in VI by drawing extra samples from the approximate posterior $q_Z$ and re-weighting (as noted in \citet{lawson2019energy}). These methods, including the importance-weighted autoencoder (IWAE) \citep{DBLP:journals/corr/BurdaGS15} and semi-implicit variational inference \citep{yin2018semi}, effectively perform inference over an extended space consisting of $K$ copies of the original latent space \citep{domke2018importance}.
These approaches may thus require far more memory to train than parametrized AVI methods, and often require care to ensure the variance of estimators of the objective (and gradients thereof) is controlled \citep{rainforth2018tighter, DBLP:conf/iclr/TuckerLGM19}. That being said, it may be possible to combine multi-sample bounds with CIF models using a framework such as the one in \citet{sobolev2019importance}, which demonstrates how to use IWAE-like approaches within HVMs. A separate class of implicit VI models proposes expressive but intractable joint densities requiring density ratio estimation to train \citep{huszar2017variational, tran2017hierarchical}. CIFs, along with other AVI methods, avoid density ratio estimation by instead constructing a tractable joint density $q_{Z,U}$. \subsection{CIFs for Density Estimation} \label{sec:cif_for_de} As mentioned earlier, CIFs were originally proposed as a model for density estimation (DE), a setting in which we have access to a set of observed data $\{x_i\}_i$ over which we would like to build a density model $p_X$ maximizing the marginal likelihood. This constitutes the key distinction between this work and \citet{cornish2019relaxing}: here, we only use CIFs for parametrizing an \emph{inference} model $q_Z$, assuming we \emph{already} have access to a forward density model $p_{X,Z}$. However, the inference procedure required to \emph{train} CIFs for DE is actually very closely related to the model \eqref{eq:cif_generative}. In particular, if we relabel the forward CIF model for DE as $r$ (instead of $p$ used by \citet{cornish2019relaxing}), the single-layer CIF density estimation objective is equivalent to \begin{equation} \label{eq:cif_de_vs_vi} \E_{(x,u) \sim q_{X\!, U}}\!\! \left[ \log\! \frac{r_Z(G(x; u))\cdot r_{U|Z}(u \mid G(x; u))}{q_X^*(x) \!\cdot\! q_{U|X}(u \mid x)\! \cdot\! 
|\det \mathrm D_x G(x; u)|^{-\!1\!}} \!\right]\!, \end{equation} where $q_X^*$ is the unknown data-generating distribution from which we have i.i.d.\ samples, and $q_{X, U}(x, u) \coloneqq q^*_X(x) \cdot q_{U|X}(u \mid x)$. See \autoref{sec:app-cif_de_vi} for a derivation. Comparing this with \eqref{eq:elbo_cif}, we see that CIFs for density estimation may be interpreted as performing AVI targeting $r_Z$ with an amortized inference model defined as the $Z$-marginal of \begin{equation} \label{eq:cif_inference_de} X \sim q^*_{X}, \quad U \sim q_{U | X}(\cdot \mid X), \quad Z = G(X; U). \end{equation} Furthermore, despite the aesthetic similarities between (7) of \citet{cornish2019relaxing} defining $p$ for DE, and \eqref{eq:cif_generative} here defining $q$ for VI, it is actually the $q$ models that share a natural correspondence with each other. In both cases, $q$ refers to an inference model that must be reparametrized, whereas neither $p$ in DE nor $r$ here require this. We might even consider using a CIF as the inference distribution for a CIF density model, which may yield additional benefits from added compositionality, although we leave these considerations as future work. 
\begin{figure*} \begin{minipage}{.33\textwidth} \begin{subfigure}{\linewidth} \centering \includegraphics[width=0.9\linewidth]{all_2d_figs_main/baseline_False_sigma0_0.1_Aug21_20-45-08.pdf} \end{subfigure} \begin{subfigure}{\linewidth} \centering \includegraphics[width=0.9\linewidth]{all_2d_figs_main/baseline_True_sigma0_0.1_Aug21_19-22-07.pdf} \end{subfigure} \end{minipage}% \begin{minipage}{.33\textwidth} \begin{subfigure}{\linewidth} \centering \includegraphics[width=0.9\linewidth]{all_2d_figs_main/baseline_False_sigma0_1_Aug22_02-16-49.pdf} \end{subfigure} \begin{subfigure}{\linewidth} \centering \includegraphics[width=0.9\linewidth]{all_2d_figs_main/baseline_True_sigma0_1_Aug21_22-42-31.pdf} \end{subfigure} \end{minipage}% \begin{minipage}{.33\textwidth} \begin{subfigure}{\linewidth} \centering \includegraphics[width=0.9\linewidth]{all_2d_figs_main/baseline_False_sigma0_10_Aug22_06-23-27.pdf} \end{subfigure} \begin{subfigure}{\linewidth} \centering \includegraphics[width=0.9\linewidth]{all_2d_figs_main/baseline_True_sigma0_10_Aug22_04-25-59.pdf} \end{subfigure} \end{minipage}% \put(-430,-111){$\sigma_0 = 0.1$} \put(-265,-111){$\sigma_0 = 1$} \put(-107,-111){$\sigma_0 = 10$} \vspace{-0.5em} \caption{ Samples from the trained inference models visualized using a KDE plot for a range of $\sigma_0$ values. We ran each configuration 3 times, displaying the average case of the three runs in the image, with the average plus/minus standard error of the marginal ELBO across the three runs shown in the title of the plot (higher is better). Models in the top row are CIF-NSFs, and those in the bottom row are baseline NSFs. We can see that when $\sigma_0 = 0.1$, the NSF does not have enough initial noise to consistently cover the target, and when $\sigma_0 = 10$, the NSF has too much noise and cannot locate the target. 
The CIF-NSF at least locates each mode in all cases and provides higher-quality approximations across the board.} \label{fig:mog-9-components} \end{figure*} \section{Experiments} In this section, we investigate using CIFs to build more expressive variational models in posterior sampling and maximum likelihood estimation of generative models. We compare inference models based on the Masked Autoregressive Flow (MAF) \citep{papamakarios2017masked} and the autoregressive variant of the Neural Spline Flow (NSF) \citep{durkan2019neural} to CIF-based extensions. Both of these baseline models empirically provide good performance in general-purpose density estimation. We use the ADAM optimizer \citep{DBLP:journals/corr/KingmaB14} throughout. Hyperparameters for all experiments are available in \autoref{sec:experiment-details}. Code will be made available at \url{https://github.com/anthonycaterini/cif-vi}. \subsection{Toy Mixture of Gaussians} \label{sec:mog} Our first example looks at using VI to sample from a toy mixture of Gaussians. Given component means $\{\mu_k\}_k$ and covariances $\{\Sigma_k\}_k$, we directly define the ``posterior''\footnote{Note that there is no data $x$ in this example -- we define the ``posterior'' directly. Details are in \autoref{sec:experiment-details}.} $p_{Z|X}(z \mid x) \coloneqq \sum_{k=1}^K \mathcal N (z;\mu_k, \Sigma_k) / K$, where $K$ is the total number of components, so that the joint target is $p_{X,Z}(x, z) \propto p_{Z|X}(z \mid x)$. We work in two dimensions with component means adequately spaced out in a square lattice. Although the support of $p_{Z|X}$ is all of $\mathbb R^2$, it is concentrated on a subset of $K$ disconnected components, which is not homeomorphic to $\mathbb R^2$, and thus we anticipate difficulties in using just a normalizing flow as the approximate posterior. We compare baseline NSF models to CIF-based extensions. 
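The toy target above is easy to write down explicitly. The sketch below assumes an illustrative $3 \times 3$ lattice of means with spacing $5$ and identity covariances; the exact values used in the experiments are deferred to the appendix and may differ.

```python
import numpy as np

# Hypothetical settings: K = 9 component means on a 3x3 square lattice with
# spacing 5 and identity covariances in two dimensions.
MEANS = np.array([(i, j) for i in (-5.0, 0.0, 5.0) for j in (-5.0, 0.0, 5.0)])
K = len(MEANS)

def log_posterior(z):
    """log p(z) = logsumexp_k log N(z; mu_k, I) - log K, for z of shape (2,)."""
    diffs = z - MEANS                                    # (K, 2)
    log_comps = -0.5 * np.sum(diffs**2, axis=1) - np.log(2 * np.pi)
    m = log_comps.max()                                  # stable log-sum-exp
    return m + np.log(np.sum(np.exp(log_comps - m))) - np.log(K)
```

With this spacing the density is sharply concentrated on $K$ nearly disconnected components: the log density at a component mean greatly exceeds its value midway between modes, which is what makes a single homeomorphic flow from $\mathbb{R}^2$ a poor fit.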
The initial distribution for both the NSF and CIF models is given by $q_W \coloneqq \mathcal N(0, \sigma_0^2 \mathbf I)$, with $\sigma_0$ taken as either a fixed hyperparameter or a trainable variational parameter. The CIF extension includes an auxiliary variable $u \in \mathbb R$ at each layer, conditional Gaussian distributions for $q_{U_\ell|W_{\ell-1}}$ and $r_{U_\ell | W_\ell}$ parametrized by small neural networks, and a single small two-headed neural network to output $s$ and $t$ in \eqref{eq:cif_G} at each layer, adding only $8.5\%$ more parameters on top of the baseline NSF model. \paragraph{Marginal ELBO Estimator} For all experiments in this section, we will measure the trained models on estimates of the marginal ELBO \eqref{eq:elbo_1}. When using an explicit variational method, such as an NSF, this is readily estimated by basic Monte Carlo (MC) with $N$ i.i.d.\ samples $z^{(i)} \sim q_Z$ for $i \in \{1, \ldots, N\}$: \begin{equation} \label{eq:explicit_mc_estimator} \widehat{\mathcal L}(x) \coloneqq \frac 1 N \sum_{i=1}^N \log \frac{p_{X, Z}(x, z^{(i)})}{q_Z(z^{(i)})}. \end{equation} However, recall that in implicit methods $q_Z$ is not available in closed form, which precludes direct evaluation of \eqref{eq:explicit_mc_estimator}. Thus, we must first build an estimator of $q_Z(z)$ for all $z \in \mathcal Z$ to use within \eqref{eq:explicit_mc_estimator}. We can do this via importance sampling, taking $M$ i.i.d.\ samples $u^{(j)} \sim r_{U|Z}(\cdot \mid z)$ for $j \in \{1,\ldots, M\}$ from our trained auxiliary inference model: \begin{equation} \label{eq:q_estimator} q_Z(z) \approx \frac 1 M \sum_{j=1}^M \frac{q_{Z,U}(z, u^{(j)})}{r_{U|Z}(u^{(j)} \mid z)} \eqqcolon \widehat q_Z(z). \end{equation} The full estimator of the marginal ELBO for auxiliary models is then obtained by substituting \eqref{eq:q_estimator} into \eqref{eq:explicit_mc_estimator}; this is written out in full in \autoref{sec:marginal_elbo_estimator}.
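As a sanity check of \eqref{eq:q_estimator}, the sketch below evaluates the importance-sampling estimator on a hypothetical tractable joint in which the implicit marginal $q_Z$ is available in closed form: when $r_{U|Z}$ equals the true auxiliary posterior $q_{U|Z}$, every importance weight equals $q_Z(z)$ exactly and the estimator has zero variance. This toy joint is our own illustration, not a distribution used in the experiments.

```python
import numpy as np

def log_normal(x, mean, var):
    """Log density of a univariate Gaussian."""
    return -0.5 * ((x - mean) ** 2 / var + np.log(2 * np.pi * var))

# Hypothetical tractable joint: U ~ N(0,1), Z | U ~ N(U,1), so that the
# implicit marginal is q_Z = N(0, 2) and the exact auxiliary posterior is
# q_{U|Z}(u | z) = N(z/2, 1/2).  Taking r_{U|Z} = q_{U|Z} makes the
# importance-sampling estimator of q_Z(z) exact for every sample.
def q_hat(z, M=100, seed=0):
    rng = np.random.default_rng(seed)
    u = rng.normal(z / 2, np.sqrt(0.5), size=M)      # u^(j) ~ r_{U|Z}(. | z)
    log_w = (log_normal(u, 0.0, 1.0)                 # log q_U(u)
             + log_normal(z, u, 1.0)                 # + log q_{Z|U}(z | u)
             - log_normal(u, z / 2, 0.5))            # - log r_{U|Z}(u | z)
    return np.exp(log_w).mean()                      # Monte Carlo average

z = 1.3
exact = np.exp(log_normal(z, 0.0, 2.0))
assert abs(q_hat(z) - exact) < 1e-10   # zero variance when r matches q_{U|Z}
```

In practice $r_{U|Z}$ only approximates $q_{U|Z}$, so the weights are not constant; the training procedure pushing $r_{U|Z}$ towards $q_{U|Z}$ is what keeps the variance (and the resulting bias of the log estimator) small.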
Although this estimator is positively biased (because it includes the negative logarithm of an unbiased MC estimator), it is still \emph{consistent}, and its bias is naturally controlled by the training procedure, which encourages $r_{U|Z}$ to match the intractable $q_{U|Z}$. We can mitigate any further bias by increasing $M$ \citep{rainforth2018nesting}. A table displaying estimates of the marginal ELBO on a single trained model for various choices of $N$ and $M$ is also available in \autoref{sec:marginal_elbo_estimator}; we choose $N = 10{,}000$ and $M = 100$ based on these results. \paragraph{Results} For our first experiment, we select $K = 9$ and fix $\sigma_0$ throughout training to either $0.1, 1,$ or $10$. We train both NSF baselines and CIF-NSF extensions with three different random seeds for each setting of $\sigma_0$. We show a kernel density estimate of the approximate posterior of the average-case model for each configuration in \autoref{fig:mog-9-components} and report the average of the marginal ELBO estimates across all three runs in the titles of the plots. We can clearly see from both the ELBO values and the plots themselves that the CIF extensions more consistently produce higher-quality variational approximations across the range of $\sigma_0$, as the form of \eqref{eq:cif_G} allows the model to directly control the noise of the outputted samples. The NSF baselines only produce reliable models for $\sigma_0=1$. In this example it is quite clear how the parametrization of the CIF model ``cleans up'' a major deficiency of the baseline method by rescaling the initial noise. However, we might also allow $\sigma_0$ to be learned as part of the overall variational inference procedure to further probe the effectiveness of CIFs, and we experiment with this on a more challenging problem ($K = 16$).
We find that the trained CIF models again outperform the baseline NSFs (estimated marginal ELBO over $3$ runs of $\bf{-0.116 \pm 0.021}$ for CIFs vs.\ $-0.562 \pm 0.008$ for baseline NSFs), thus demonstrating the increased expressiveness of CIFs beyond just rescaling. \subsection{Generative Modelling of Images} \label{sec:images} \begin{table*}[!ht] \caption{Test-set average marginal log-likelihood (plus/minus one standard error) over three runs. Runs that are within one standard error of the best-performing model are shown in bold.} \label{tab:images} \begin{center} \begin{tabular}{l|l|l|l|l} \toprule \multirow{2}{*}{\textbf{Model}} & \multicolumn{2}{c|}{\textbf{Small Target}} & \multicolumn{2}{c}{\textbf{Large Target}} \\ & MNIST & Fashion-MNIST & MNIST & Fashion-MNIST \\ \midrule VAE & $-94.83 \pm 0.05$ & $-238.54 \pm 0.11$ & $-86.27 \pm 0.04$ & $-229.72 \pm 0.03$ \\ IWAE ($K=5$) & $-93.14 \pm 0.10$ & $-237.03 \pm 0.05$ & $-84.23 \pm 0.09$ & $-227.80 \pm 0.02$ \\ \midrule Small MAF & $-91.98 \pm 0.19$ & $-237.09 \pm 0.15$ & $-83.41 \pm 0.09$ & $-228.74 \pm 0.24$ \\ Large MAF & $-92.68 \pm 0.26$ & $-237.57 \pm 0.03$ & $-83.38 \pm 0.12$ & $-228.72 \pm 0.27$ \\ CIF-MAF & $-90.87 \pm 0.05$ & ${\bf -236.31 \pm 0.14}$ & ${\bf -82.70 \pm 0.12}$ & ${\bf -227.64 \pm 0.05}$ \\ \midrule Small NSF & $-91.12 \pm 0.15$ & $-236.65 \pm 0.17$ & $-83.06 \pm 0.05$ & $-228.58 \pm 0.18$ \\ Large NSF & ${\bf -90.79 \pm 0.02}$ & ${\bf -236.48 \pm 0.13}$ & $-83.12 \pm 0.10$ & $-228.46 \pm 0.07$ \\ CIF-NSF & ${\bf -90.82 \pm 0.09}$ & ${\bf -236.48 \pm 0.20}$ & $-83.31 \pm 0.17$ & $-228.54 \pm 0.12$ \\ \bottomrule \end{tabular} \end{center} \end{table*} For our second example, we use amortized variational inference to facilitate the training of a generative model of image data in the style of the variational auto-encoder (VAE) method \citep{DBLP:journals/corr/KingmaW13}.
We attempt to build models of the MNIST \citep{lecun1998gradient} and Fashion-MNIST \citep{xiao2017fashion} datasets, which both contain $8$-bit greyscale images of size $28 \times 28$. We employ dynamic binarization of these greyscale images at each training step. The likelihood function to describe an image relies on a neural network ``decoder'' $\pi : \mathcal Z \rightarrow [0,1]^d$, such that \[ Z \sim \mathcal N(0, \mathbf I), \quad X \sim \bigotimes_{j=1}^d \text{Ber}(\cdot \mid \pi_j(Z)) \] is the generative process for an image $X$. In our experiments, we consider two different types of decoders: a small convolutional network with only one hidden layer, and a larger convolutional network with several residual blocks as in e.g.\ \citet{durkan2019neural}. For the experiments with the smaller decoder, we use a $20$-dimensional latent space $\mathcal Z$, and for the larger decoder, we increase to $32$ dimensions. \paragraph{Inference Methods} We consider several models of inference to aid in surrogate maximum likelihood estimation of the parameters of $\pi$. First we consider a VAE inference model, where \[ q_{Z|X}(\cdot \mid x) \coloneqq \mathcal N\left(\mu_Z(x), \text{diag } \sigma^2_Z(x)\right) \] with an ``encoder'' neural network taking in image data $x$ and outputting both $\mu_Z$ and $\log \sigma_Z$. The encoder that we use in all experiments is a single-hidden-layer convolutional network which ``matches'' the structure of the small decoder; we keep the encoder small since the VAE here is just a base upon which we build more complicated inference models. We also consider an importance-weighted version of this VAE model (IWAE) with $K=5$ importance samples \citep{DBLP:journals/corr/BurdaGS15}, which we find roughly matches the computation time per epoch of the flow-based inference methods below.
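The generative process above can be sketched directly in a few lines; the decoder below is a hypothetical random, untrained MLP standing in for the convolutional decoders actually used, so only the sampling structure (not the network architecture) mirrors the text.

```python
import numpy as np

rng = np.random.default_rng(0)
D_LATENT, D_PIXELS, D_HIDDEN = 20, 28 * 28, 128   # small-decoder latent size

# Hypothetical decoder parameters (random, untrained stand-ins).
W1 = rng.normal(0, 0.1, (D_HIDDEN, D_LATENT))
W2 = rng.normal(0, 0.1, (D_PIXELS, D_HIDDEN))

def decoder_pi(z):
    """pi : Z -> [0,1]^d, the vector of Bernoulli means for each pixel."""
    h = np.tanh(W1 @ z)
    return 1.0 / (1.0 + np.exp(-(W2 @ h)))        # sigmoid keeps outputs in (0,1)

# Generative process: Z ~ N(0, I), then X_j ~ Ber(pi_j(Z)) independently.
z = rng.standard_normal(D_LATENT)
probs = decoder_pi(z)
x = (rng.random(D_PIXELS) < probs).astype(np.int64)   # a binarized image
```

The product-of-Bernoullis likelihood is what makes dynamic binarization of the greyscale pixels necessary at training time.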
The first flow-based model that we consider is a $5$-layer masked autoregressive flow (MAF) \citep{papamakarios2017masked}, which is equivalent to an inverse autoregressive flow (IAF) \citep{kingma2016improved} when removing the hypernetworks producing the flow parameters. We also run experiments with a $10$-layer neural spline flow (NSF) \citep{durkan2019neural}, for which we clip the norm of the gradients to a maximum of $5$ -- as suggested for tabular density estimation -- for increased stability of training. Additional hyperparameter settings for each flow are available in \autoref{sec:experiment-details}. As alluded to previously, for each of the flow-based methods we will use the small VAE encoder as a base distribution $q_{W_0 | X}$ to project the image data into the dimension of the latent space; we do this rather than using a large VAE encoder as the base distribution in the large target experiments (as is typically done) to force the flow models to handle more of the inference. We also consider two baseline variants for each model, a larger and smaller version, which we control by changing the number of hidden channels in the autoregressive maps. Finally, we consider amortized CIF-based extensions of the \emph{smaller} variants of the flow models mentioned above, so that in the end our CIF models have approximately the same total number of parameters as the larger baseline flows. We use a $2$-dimensional $u$ at each flow step. We include parametrized conditional Gaussian distributions for $q_{U_\ell | W_{\ell-1}}$ and $r_{U_\ell | W_\ell, X}$ at each layer $\ell \in \{1, \ldots, L\}$, with additional care taken in the structure of the $r$ network to combine vector inputs $W_\ell$ with image inputs $X$ -- details are provided in Appendix~\ref{sec:cif-settings}. We use a single neural network at each layer to parametrize $s_\ell$ and $t_\ell$ appearing in $G_\ell$. \paragraph{Results} The results of the experiment are available in \autoref{tab:images}. 
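A single CIF inference step can be sketched as follows; we assume for illustration an affine indexed bijection $z = e^{-s(u)} \odot (f(w) - t(u))$ with random linear stand-ins for the small $s$, $t$ and conditional-distribution networks. This is a sketch of the layer structure, not the exact architecture used in the experiments.

```python
import numpy as np

rng = np.random.default_rng(1)
D, D_U = 2, 2                        # latent and auxiliary dimensions

# Hypothetical stand-ins for the small networks in one CIF layer.
A_s = rng.normal(0, 0.1, (D, D_U))   # s(u) = A_s @ u
A_t = rng.normal(0, 0.1, (D, D_U))   # t(u) = A_t @ u

def f(w):
    """Placeholder for the underlying flow bijection (monotone per coordinate)."""
    return np.tanh(w) + w

def cif_step(w, rng):
    """Sample u ~ q_{U|W}(. | w), then z = exp(-s(u)) * (f(w) - t(u))."""
    u = rng.normal(0.1 * w[:D_U], 1.0)   # hypothetical conditional Gaussian
    s, t = A_s @ u, A_t @ u
    return np.exp(-s) * (f(w) - t), u

def invert_given_u(z, u):
    """For fixed u the indexed map is affine in f(w), so f(w) is recoverable;
    composing with the inverse of f would invert the whole step."""
    s, t = A_s @ u, A_t @ u
    return np.exp(s) * z + t

w = rng.standard_normal(D)
z, u = cif_step(w, rng)
assert np.allclose(invert_given_u(z, u), f(w))
```

Stacking $L$ such steps, each with its own $q_{U_\ell|W_{\ell-1}}$ and $r_{U_\ell|W_\ell, X}$, gives the layered inference model described in the text.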
We use the standard importance-sampling based estimator of the marginal likelihood from \citet[Appendix~E]{rezende2014stochastic} with $1{,}000$ samples, which we find empirically produces low-variance estimates for the small target model\footnote{We expect the same low-variance behaviour to translate to the larger target model, but did not run this for computational reasons.} as noted in Appendix \ref{sec:marg_ll_est}. We see that, in each experiment, CIF models are either producing the best average performance as measured by test-set estimated average marginal likelihood, or are within error bars of the best. Importantly, we note that CIFs are outperforming the baseline models which they are built directly on top of across the board: CIF-MAF and CIF-NSF significantly improve upon Small MAF and Small NSF, respectively. This justifies the claims of \autoref{prop:cif_vs_baseline}, demonstrating that we are not penalized for using the auxiliary objective instead of the standard ELBO. We can also see that the CIF models produce better results than the IWAE models, which can themselves be seen as a method for auxiliary VI as previously mentioned. Despite IWAE methods being more parameter-efficient, we found that increasing $K$ for IWAE significantly increased training time per epoch over the CIF models. \section{Conclusion and Discussion} In this work, we have presented continuously-indexed flows (CIFs) as a novel parametrization of an approximate posterior for use within variational inference (VI). We did this by naturally incorporating the CIF model into the framework of AVI. We have shown that the theoretical and empirical benefits of CIFs over baseline flow models extend to the VI setting, as CIFs outperform baseline flows in both sampling from complicated target distributions and facilitating maximum likelihood estimation of parametrized latent-variable models. We now add a brief further discussion on CIFs in VI and consider some directions for future work.
\paragraph{Modelling Discrete Distributions} One issue with CIFs for VI (indeed, CIFs more generally) is that they are currently only designed to model continuous distributions, unlike e.g.\ HVMs. It may, however, be possible to alleviate this constraint by using discrete flows \citep{NEURIPS2019_9e9a30b7} as a component of the overall CIF model, although it remains to be seen if the theoretical and empirical benefits of CIFs over baseline flows would extend to this case. \paragraph{CIFs in Other Applications} This work can serve as a template for applying CIFs more generally in applications where NFs have proven effective, such as compression \citep{ho2019compression} and approximate Bayesian computation \citep{papamakarios2019sequential}. These approaches may require the formulation of appropriate, application-specific surrogate objectives, but the expressiveness gains could overcome the additional costs (as in VI and density estimation) and could therefore be investigated. \begin{acknowledgements} Anthony Caterini is a Commonwealth Scholar supported by the U.K.\ Government. Rob Cornish is supported by the Engineering and Physical Sciences Research Council (EPSRC) through the Bayes4Health programme Grant EP/R018561/1. Arnaud Doucet is supported by the EPSRC CoSInES (COmputational Statistical INference for Engineering and Security) grant EP/R034710/1. \end{acknowledgements}
\section{Introduction} Semimetals that straddle insulators and metals are fascinating systems. The elemental semimetals graphene, $\alpha$-Sn, As, Sb and Bi are special, partly because of their structural simplicity. Two dimensional graphene, a relatively recent entry, has created great excitement in the scientific community. Bi, on the other hand, has been investigated \cite{BiRussianReview} for more than a century from a basic physics point of view. Phenomena such as diamagnetism, the Nernst effect and the de Haas-van Alphen quantum oscillation were discovered \cite{Fukuyama} for the first time in Bi, in spite of its ultra low carrier density $\sim$ 3 x 10$^{17}$ /cm$^3$. In recent times, anomalies in quantum oscillations, quantum Hall phenomena \cite{BehniaKopelevich,CavaOng,QHallSurface}, the Nernst effect \cite{NernstEffectBehnia}, optics \cite{vanDerMarel}, NMR \cite{CavaNMR}, pressure effects \cite{BiPressure}, laser induced structural changes \cite{BiStructureLaser} and topological electronic phases \cite{MurakamiKaneCavaTopology,Fukuyama} on surfaces of Bi have been catching the attention of the physics community. Solid Bi does not exhibit superconductivity. This is not a surprise, as Bi has only about one free carrier per 10,000 Bi atoms. However, Bi behaves like a fermi liquid with small fermi pockets, leading to an expectation that it might superconduct at sufficiently low temperatures. On the other hand, the small fermi energy and non-adiabatic effects make phonon mediated superconductivity doubtful \cite{BiTIFR}. It is against this background that the discovery of type I superconductivity in an ultra pure Bi crystal by the TIFR group \cite{BiTIFR}, with an ultra low Tc $\sim$ 0.5 mK, has come as a surprise. An element of surprise (sic). Among low carrier density superconductors, the one closest to Bi is lightly doped SrTiO$_3$. In recently studied doped SrTiO$_3$ \cite{STO,STOTheory} the carrier density is $\sim$ 50 times larger and Tc is on the scale of 100 mK.
Further, Tc in Bi is close to the record low (finite) Tc $\sim$ 0.3 mK seen in rhodium \cite{Rhodium}, a metal having a high carrier density $\sim 10^{22}/cm^3$. Interestingly, pure Bi seems to be at the \textit{verge of superconductivity, even on the Kelvin scale}. It readily superconducts when perturbed \cite{PerturbedBiSupCond}. Pressurized crystalline Bi, disordered thin films, amorphous Bi, interfaces, nanowires etc., exhibit superconductivity with Tc's in the range of 6 - 8 K. Even though this Tc is small, it is a four orders of magnitude jump from 0.5 mK! There have also been reports \cite{BiSupCond36K} of interfacial superconductivity in bicrystals of Bi and Bi$_{1-x}$Sb$_x$ alloys, with a Tc onset as high as 36 K. Bi has been theoretically investigated extensively as a band semimetal \cite{MHCohenWolf}, and a satisfactory and quantitative understanding of its low energy electrical and magnetic properties is believed to exist. In these theoretical attempts a large spin orbit coupling in Bi plays an important role \cite{BiBandStructure}. Coulomb interactions in Bi have been studied in the past while discussing the possibility of exciton condensation and Wigner crystallization \cite{BiRussianReview,BiExcitonCond}. A very recent work by Koley, Laad and Taraphder \cite{Koley}, which followed on the heels of the experimental discovery, offers an exciton fluctuation mediated mechanism of superconductivity in Bi. In an attempt to understand normal state properties of Bi quantitatively, Craco and Leoni \cite{BiCracoLeoni} have emphasized the importance of using a moderate Hubbard U in the 6p orbitals of Bi. Briefly, in our theory a potential high Tc superconductivity in crystalline Bi has been pushed down to an ultra low, milli Kelvin temperature scale. Superconducting fluctuations, however, survive even at room temperature and make the normal state anomalous.
Our theory suggests in a natural fashion ways to resurrect high Tc superconductivity in Bi and its neighbours Sb and As, in the same column of the periodic table. To motivate our model and scenario, we begin with the known fact \cite{PeierlsBook,RoldHoffmann} that weakly coupled quasi one dimensional tight binding bands make up the three dimensional bands in the A7 structure of Bi, Sb and As. Three sets of mutually perpendicular tight binding chains at half filling emerge from a \textit{cubic lattice based orbital organization of three valence electrons in the 6p$_x$, 6p$_{y}$ and 6p$_z$ orbitals} of Bi. The quasi one dimensionality, together with an available moderate Hubbard U in the 6p orbitals of Bi, leads to remarkable possibilities and an interesting interplay. This is because even a small Hubbard U is relevant in a one dimensional Hubbard chain; for example, at half filling any finite repulsive U leads to Mott localization and opens a charge gap, while maintaining a zero spin gap \cite{LiebWu}. In our theory, there is an interesting interplay of three competing phenomena and tendencies: i) Mott localization in the chains, ii) metallization by interchain hopping and iii) valence bond trapping and chain dimerization by electron-lattice coupling. It leads to a rich scenario and possibilities that are hidden in semimetallic solid Bi. Our paper is organized as follows. We present a model that manifestly brings out a hidden Mott physics in the form of weakly coupled Mott Hubbard chains at half filling. Then we discuss, using known results, how small interchain electron hopping (small relative to intrachain hopping) causes a Mott insulator to metal transition, but keeps the metal close to the Mott insulator boundary (figure 2). This is followed by a discussion of potential RVB physics and high Tc superconductivity in this correlated metallic state.
The next section discusses an inevitable electron lattice coupling that traps valence bonds by chain dimerization, leading to major consequences: i) disappearance of a potential high Tc superconductivity, ii) surviving superconducting fluctuations in an anomalous normal state and iii) remnant \textit{evanescent Bogoliubov quasi particles} in the form of small electron and hole fermi pockets in the BZ. Ways by which we could control lattice dimerization and resurrect high Tc superconductivity in Bi, as well as in isostructural and isoelectronic Sb and As, are discussed next. At the end we make some comments about normal state anomalies in Bi, in the light of our proposal. \begin{figure} \includegraphics[width=0.3\textwidth]{Figure1.pdf} \caption{\textbf{Orbital organization} of 6p orbitals into mutually perpendicular (half filled band) chains in a reference cubic lattice for Bismuth. From known band structure results \cite{BiBandStructure}, the intrachain hopping is t $\approx$ 1.85 eV, and the interchain hoppings are t$_{||} \approx$ 0.4 eV and t$_{\perp} \approx$ 0.2 eV.} \label{Figure 1} \end{figure} \section{Model and a Reference State} Bi has a distorted cubic A7 structure \cite{BiRussianReview}. The electronic configuration of the Bi atom is [Xe] 4f$^{14}$ 5d$^{10}$ 6s$^2$ 6p$^3$. Three valence electrons in the half filled 6p$^3$ shell essentially determine the low energy physics of solid Bi. The filled 6s band lies nearly 10 eV below the fermi level \cite{BiBandStructure}. It has been well recognized that the 6p$_x$, 6p$_y$ and 6p$_z$ orbitals strongly hybridize along the respective x, y and z directions and form weakly coupled one dimensional bands that are half filled. In the nearly cubic structure, interchain hopping is relatively weak. To build on the known non-interacting tight binding model, we start with \textit{an undistorted cubic lattice of Bi atoms} (Figure 1) \textit{as a reference solid} \cite{RoldHoffmann}.
Most importantly, we recognize that the Coulomb interaction in the quasi one dimensional tight binding model can't be ignored. Our model Hamiltonian has four parts: \begin{eqnarray} H &=& H_{\rm c} + H_{\rm ic} + H_{\rm lrc} + H_{ep} \end{eqnarray} The first term, H$_{\rm c}$, is the Hubbard Hamiltonian of the chains. The second term, H$_{\rm ic}$, contains interchain hopping. The third term, H$_{\rm lrc}$, is the long range Coulomb interaction term, beyond the on site Hubbard U. The last term, H$_{ep}$, is the sum of the phonon and electron-phonon coupling Hamiltonians. Coupled chain Hamiltonians containing p$_x$, p$_y$ and p$_z$ orbitals (generalized three orbital Hubbard models) in square \cite{KivelsonTelluride} and cubic lattices have been investigated, including exact results for itinerant ferromagnetism \cite{LiLiebWu} for some choices of parameters. In our problem we are in a different region of parameter space and at half filling, where singlets dominate. \textbf{Decoupled One Dimensional Chains as a Reference System:} The chain Hamiltonian, which provides us a convenient reference state, is: \begin{eqnarray} H_{\rm c} &=& -t \sum_{\textbf{i}\alpha \sigma} (c^\dagger_{\textbf{i}\alpha \sigma}c^{}_{\textbf{i} +\textbf{a}_{\alpha}\alpha\sigma} + h.c.) + U \sum_{\textbf{i},\alpha} n_{\textbf{i}\alpha \uparrow} n_{\textbf{i}\alpha \downarrow}~~~~ \end{eqnarray} Cubic lattice sites are denoted by \textbf{i}; the $\textbf{a}_{\alpha}$'s are nearest neighbour lattice vectors along the $\alpha$ = x, y and z directions. The c operators are electron operators; $\sigma$ is the spin index and $\alpha$ = x, y and z are orbital indices. From band theory \cite{BiBandStructure} results we find that the intrachain nearest neighbour hopping is t $\sim$ 1.85 eV. The Hubbard U, estimated in reference \cite{BiCracoLeoni} as required for a quantitative understanding of the normal state and high energy properties of Bi, is U $\sim$ 5 to 8 eV.
We first focus on the half filled band Hubbard chain Hamiltonian H$_{\rm c}$, which provides us a convenient reference phase. We use the Lieb-Wu exact solution \cite{LiebWu} to understand the above decoupled 3d network of one dimensional nearest neighbour repulsive Hubbard chains. From the Lieb-Wu solution we find that the Mott-Hubbard gap for a chain is $\sim$ 1 to 1.5 eV, when t $\approx$ 1.85 eV and U $\approx$ 5 to 8 eV. \begin{figure*} \includegraphics[width=0.8\textwidth]{Figure2.pdf} \caption{ \textbf{Schematic Phase Diagram} for the model Hamiltonian (equation 1), in the plane of temperature and interchain hopping $\frac{t_\perp}{t}$; here t is the intrachain hopping. a) Reference cubic phase in the absence of electron-lattice coupling: a first order phase transition line, ending at a critical point, separates the Mott insulator and the metal. The location of solid Bi on the t$_{\perp}$-axis is marked. b) In the presence of electron-lattice coupling: the spin-Peierls insulator, the semimetallic phase in the A7 dimerized cubic structure, the high temperature cubic metallic phase and ultra low Tc superconductivity are shown. } \label{Figure 2} \end{figure*} This is a key result: \textit{a robust Mott Hubbard gap in the range of 1 to 1.5 eV exists in the decoupled chains of the reference state} we start with. This Mott insulating Hubbard chain has a spin liquid ground state: it contains enhanced nearest neighbour spin singlet pairing correlations and supports gapless spinon excitations and gapful charge excitations. When we consider the three dimensional network of chains, spinon excitations form three sets of pseudo fermi surfaces (sheets) in the three dimensional BZ, lying parallel to the xy, yz and zx planes.
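The chain charge gap can be evaluated numerically from the Lieb-Wu solution. The sketch below assumes the standard Bethe-ansatz expression for the half-filled charge gap, $\Delta = U - 4t + 8t \int_0^\infty J_1(\omega)\, d\omega / [\omega (1 + e^{\omega U/2t})]$, with $J_1$ the Bessel function computed from its integral representation; the grids and cutoffs are illustrative choices.

```python
import numpy as np

_THETA = np.linspace(0.0, np.pi, 1001)

def _trap(y, dx):
    """Composite trapezoidal rule along the last axis."""
    return (y.sum(axis=-1) - 0.5 * (y[..., 0] + y[..., -1])) * dx

def bessel_j1(x):
    """J_1(x) = (1/pi) int_0^pi cos(theta - x sin(theta)) dtheta."""
    integrand = np.cos(_THETA[None, :] - np.outer(x, np.sin(_THETA)))
    return _trap(integrand, _THETA[1] - _THETA[0]) / np.pi

def mott_gap(U, t=1.0, w_max=40.0, n=4001):
    """Charge gap of the half-filled Hubbard chain (Bethe ansatz), units of t."""
    w = np.linspace(1e-6, w_max, n)
    integrand = bessel_j1(w) / (w * (1.0 + np.exp(w * U / (2.0 * t))))
    return U - 4.0 * t + 8.0 * t * _trap(integrand, w[1] - w[0])

# With the band-theory t = 1.85 eV and U = 5 to 8 eV (U/t ~ 2.7 to 4.3),
# the gap grows rapidly with U across this range.
t_bi = 1.85
gaps = [t_bi * mott_gap(U / t_bi) for U in (5.0, 8.0)]
```

The formula has the right limits: the gap vanishes as $U \to 0$ and approaches $U - 4t$ as $U \to \infty$, and the strong sensitivity to $U/t$ in the intermediate regime is consistent with quoting a range rather than a single number.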
Since we are dealing with a Mott insulating chain with a reasonable charge gap, the low energy spin dynamics is well approximated by three sets ($\alpha$ = x, y and z) of Heisenberg chain Hamiltonians: \begin{equation} H_{\rm c} \approx \frac{J}{2}\sum_{\textbf{i}\alpha} (\textbf{S}_{\textbf{i}\alpha} \cdot \textbf{S}_{\textbf{i} +\textbf{a}_{\alpha}\alpha} - \frac{1}{4}) \end{equation} The effective \textit{superexchange} (kinetic exchange) is traditionally approximated by J = $\frac{U}{2} [1 + \frac{16t^2}{U^2}]^{\frac{1}{2}} - \frac{U}{2}$, the energy difference between the excited triplet and the singlet ground state of two electrons on neighbouring p-orbitals in a given chain. It is a measure of spin singlet correlations in the background of fluctuating charges. In the large U limit we get the standard superexchange term, J $\approx \frac{4t^2}{U}$. We find that the effective exchange parameter for Bi is J $\approx$ 1 eV; it is large and comparable to the Mott Hubbard charge gap within a chain. \textbf{Interchain Hopping and Transition from Mott Insulator to Correlated Metallic State:} Repeated electron-electron scattering in a one dimensional tight binding chain builds a Mott gap at half filling for an arbitrarily small value of U. However, interchain hopping and the residual long range interaction could close the Mott Hubbard gap and cause metallization. The transition takes place when the \textit{energy gain from three dimensional delocalization by interchain hopping becomes comparable to the Mott Hubbard gap}. In real systems the Mott transition is a first order transition (rather than second order, as given by the Hubbard model), in view of the long range part of the Coulomb interaction. From a theory point of view, before the first order transition we have a weakly coupled three dimensional network of spin-half Heisenberg chains. This magnetic state could support antiferromagnetic order and three dimensional spin liquid ground states, depending on the parameters (figure 2a).
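The superexchange expression above is the singlet-triplet splitting of the two-site Hubbard model, $J = \frac{1}{2}(\sqrt{U^2 + 16t^2} - U)$, which can be verified by exact diagonalization in the $S_z = 0$ sector; a minimal sketch with the band-theory value t = 1.85 eV:

```python
import numpy as np

def superexchange_J(t, U):
    """Two-site Hubbard singlet-triplet splitting, J = (sqrt(U^2+16t^2)-U)/2."""
    # S_z = 0 basis: |up,dn>, |dn,up>, |updn,0>, |0,updn>.  The symmetric
    # spin combination decouples (triplet, E = 0); the singlet mixes with
    # the doublon states through hopping.
    H = np.array([[0.0, 0.0,  -t,  -t],
                  [0.0, 0.0,   t,   t],
                  [ -t,   t,   U, 0.0],
                  [ -t,   t, 0.0,   U]])
    E_singlet = np.linalg.eigvalsh(H)[0]    # ground state, (U - sqrt(U^2+16t^2))/2
    E_triplet = 0.0
    return E_triplet - E_singlet

for U in (5.0, 8.0):
    J = superexchange_J(1.85, U)
    closed_form = (np.sqrt(U**2 + 16 * 1.85**2) - U) / 2
    assert np.isclose(J, closed_form)
```

The large-U expansion of the closed form reproduces the standard $J \approx 4t^2/U$, and for the quoted t and U range the splitting comes out in the eV scale, consistent with the statement that J is comparable to the chain charge gap.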
The evolution of the fermi surface on the metallic side of the Mott transition, as a function of interchain hopping \cite{InterchainCouplingMetallization} or pressure, is interesting. The correlated metallic state close to the Mott transition boundary might support a low energy, gapped spin-1 bound state branch in some parts of the BZ. That is, fluctuating Mott localization in the metallic chains is likely to support gapful spin-1 excitations in regions where the spinon fermi surface existed prior to metallization. Now we consider the interchain hopping terms (figure 1) present in the Hamiltonian H$_{\rm ic}$ for Bi. The nearest neighbour interchain hopping between chains of the same type, for example x chains, is t$_\parallel$. Coupled x chains alone form an anisotropic three dimensional tight binding lattice. Similarly, y and z chains form their own anisotropic three dimensional lattices. These three systems are coupled by a smaller interchain hopping t$_{\perp}$. From existing band theory \cite{BiBandStructure} we find that the largest interchain hopping is t$_{\parallel} \sim$ 0.4 to 0.6 eV. This is less than one third of the nearest neighbour intrachain hopping t $\approx$ 1.8 to 2 eV. The next leading interchain hopping is t$_{\perp} \approx$ 0.2 eV. The major perturbation arising from interchain hopping can gain a delocalization energy $\sim z t_{\parallel} \approx$ 1.5 eV, where z = 4 is the number of nearest parallel chains to a given chain. Since this energy is close to our estimated charge gap $\sim$ 1 to 1.5 eV in the Mott chain, it strongly suggests that the cubic reference system for Bi is in a metallic state close to the Mott transition boundary (Figure 2). \textbf{Potential High Tc RVB Superconductivity:} In the resonating valence bond (RVB) theory \cite{RVBTheory}, singlet pair correlations arising from superexchange in a doped Mott insulator are a key requirement for high Tc superconductivity. Where is the superexchange in the correlated half filled band metallic state?
The conducting state stabilised by small interchain hopping and the Coulomb interaction gains delocalization energy and Madelung energy. This correlated state has been viewed by the present author \cite{GBOrganics} as a \textit{self doped Mott insulator} in the following sense. Even while maintaining superexchange and Mottness locally, the system spontaneously creates and sustains a small and equal density of holons (an empty p orbital) and doublons (a doubly occupied p orbital). The density of self doping is determined self consistently by the Coulomb and band parameters. The survival of superexchange and the presence of self doping are seen in the optical conductivity of organic conductors close to the Mott transition point; the former as a Mott Hubbard gap and the latter as a small Drude peak \cite{GBOrganics,OrganicsOptics}. Important scales in this correlated metallic state are t, the hopping matrix element, J, the surviving superexchange, and n, the density of self doping. This is contained in an effective model called the two species tJ model \cite{GBOrganics}: \begin{eqnarray} H_{tJ} = &-& \sum_{\textbf{i} \textbf{j}\alpha \beta \sigma}t^{\alpha\beta}_{\textbf{i j}}~~{\hat P}_{\rm hd}~(c^\dagger_{\textbf{i}\alpha\sigma}c^{}_{\textbf{j}\beta\sigma} + H.c. ){\hat P}_{\rm hd} \nonumber \\ &+& \frac{J}{2} \sum_{\textbf{i}\alpha} ( {\bf S}_{\textbf{i}\alpha} \cdot {\bf S}_{\textbf{i} + \textbf{a}_{\alpha} \alpha} - \frac{1}{4} n^{}_{\textbf{i}\alpha} n^{}_{\textbf{i} + \textbf{a}_{\alpha} \alpha} ) \end{eqnarray} In the above Hamiltonian, the projection operators ${\hat P}_{\rm hd}$ ensure that holons and doublons do not annihilate each other as they hop. This ensures that the number of doublons and the number of holons, which are equal, are individually conserved. In the background of this conserved number of dynamic doublons and holons we have singly occupied sites containing spins.
We have already discussed the interchain hopping matrix elements $t^{\alpha\beta}_{\textbf{i j}}$: they are t$_{\parallel} \sim$ 0.4 to 0.6 eV and t$_{\perp} \sim 0.2$ eV. In the above Hamiltonian we have retained the largest J, between nearest neighbour sites within a given chain. It is straightforward to perform RVB mean field theory for the above Hamiltonian, directly or using a slave-particle formalism. We will not go into details but point out that we get a very high Tc superconductivity within mean field theory. The primary reason is the large t and J that we have identified. However, strong one dimensional fluctuations, very small self doping etc., are likely to reduce the superconducting Tc further. There is a heuristic way to estimate the superconducting Tc within the RVB mechanism. As the temperature is lowered, spinons get paired and charged valence bonds dominate. At a particular temperature the charged valence bonds (whose density is the same as the holon and doublon density) undergo Bose Einstein condensation \cite{KivelsonShortRangeRVB}, resulting in superconductivity. We can use the Bose Einstein condensation formula and estimate the superconducting Tc: \begin{equation} T_c \approx 3.3125 ~ \frac{\hbar^2 n^{\frac{2}{3}}}{m^* k_{\rm B}} \end{equation} In this simple expression for Tc there are two unknown parameters, a mean effective mass m$^*$ and the density n of self doped carriers. Unfortunately these parameters are known experimentally only in the dimerized, distorted cubic, A7 structure. If we make a reasonable guess of the two parameters in the reference undistorted cubic phase, we get a superconducting Tc on the scale of 100 K. Now we will discuss a lurking danger for high Tc superconductivity from a competing valence bond order. \textbf{Valence Bond Order and Collapse of Superconducting Tc:} We begin by discussing the effect of electron-phonon interaction on the reference Mott insulating chains.
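The estimate can be reproduced numerically; the effective mass and carrier density used below ($m^* = m_e$, $n = 10^{19}$ cm$^{-3}$) are hypothetical placeholder values for the undistorted phase, chosen only to illustrate that the ideal-gas Bose Einstein condensation formula then gives a Tc on the 100 K scale.

```python
import numpy as np

HBAR = 1.0545718e-34   # J s
K_B = 1.380649e-23     # J / K
M_E = 9.1093837e-31    # kg

def bec_tc(n_per_m3, m_star):
    """Ideal-gas BEC formula: T_c = 3.3125 * hbar^2 n^{2/3} / (m* k_B),
    where 3.3125 = 2*pi / zeta(3/2)^{2/3}."""
    return 3.3125 * HBAR**2 * n_per_m3 ** (2.0 / 3.0) / (m_star * K_B)

# Hypothetical parameters: n = 1e19 cm^-3 self-doped pairs, m* = m_e.
tc = bec_tc(1e19 * 1e6, M_E)   # order of 100 K
```

Since Tc scales as $n^{2/3}/m^*$, a heavier effective mass or a smaller self-doping density would pull the estimate down proportionally, which is why the guess is only an order-of-magnitude statement.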
Then we discuss its drastic effect on the quasi three dimensional correlated metallic state, which supports strong pairing correlations and high Tc superconductivity. In a single band free fermion chain on a deformable lattice a 2k$_F$ instability opens a gap at the Fermi level. This is the well known Peierls instability \cite{PeierlsBook}. It arises from a singular response of the one dimensional Fermi gas to perturbations at wave vector 2k$_F$. After the Peierls distortion we have a band insulator; a finite gap exists for any type of electronic excitation. The same dimerization phenomenon in a Mott-Hubbard chain at half filling has interesting extra physics because of the presence of spin-charge decoupling. In the undistorted Mott-Hubbard chain we have two types of low energy excitations: a gapless spinon excitation branch and a gapful charge excitation branch. Dimerization opens up a gap in the spin spectrum and makes an extra contribution to the already existing large charge gap. Even after dimerization spin-charge decoupling continues in the following fashion: a gapful low energy spin-1 branch (bound state of two spinons) lies below the charge gap. Similar physics exists in polyacetylene, a one dimensional tight binding system. Here the important role of electron correlation effects \cite{RamaseshaSoos}, on top of electron-phonon coupling \cite{SuSchriefferHeeger}, has been well recognized. Dimerization arising from electron-phonon interaction in a Mott insulating chain is called the spin-Peierls instability, because the soft spin degrees of freedom are the ones mostly affected by dimerization (Figure 2b). The situation in the quasi one dimensional correlated metallic state is somewhat complex. The major complication is the presence of a long range superconducting order and the strong superconducting pairing correlation and valence bond resonance that supports it. It is clear that valence bond localization and lattice dimerization continue in the quasi one dimensional case as well.
This is manifest when we look at the A7 structure of Bi. Valence bond order takes place in a cooperative fashion in all x, y and z chains, leading to the observed distortion of the cubic structure to the A7 structure. Bond charge repulsion arising from the three double bonds emanating from every Bi atom leads to the small ($\sim 3^\circ$) rhombohedral angular distortion. From the experimental point of view the A7 structure does not change to a cubic structure up to the melting point. That is, the free energy gain from valence bond order is high. Valence bond order is a strong competitor to superconductivity; no wonder that superconductivity is suppressed (Figure 2b) to the milli Kelvin scale in Bi. A closer look reveals interesting hidden physics in the form of superconducting fluctuations up to room temperature scales in crystalline Bi. \textbf{Survival of Superconducting Fluctuations and Remnant Evanescent Bogoliubov Quasi Particles in the Normal State:} Before electron-lattice coupling intervenes, the metallic state has strong local pairing and a finite zero momentum Cooper pair condensate fraction, a high Tc superconducting state. In the superconducting state the low energy excitations are Bogoliubov quasi particles; in our mean field theory they have a finite (superconducting) gap. We imagine turning on the electron-phonon interaction adiabatically in this superconducting state. The strength of the zero momentum condensate will continuously decrease, as the competing valence bond order and chain dimerization grow. We interpret the development of valence bond order as the build-up of a commensurate finite momentum pair condensate at the reciprocal lattice vector of the A7 structure, at the expense of the zero momentum condensate. At the same time the superconducting gap decreases and gives way to valence bond order. During this evolution, the superconducting gap is likely to close in some regions of k-space.
We interpret what is seen in real Bi as a remnant of superconducting fluctuations after the superconducting Tc has crashed to 0.5 mK. In this case it is natural to view the existing electron and hole pockets at the L and T points in the BZ as remnant \textit{evanescent Bogoliubov quasi particles} in the normal state, in the background of superconducting fluctuations. By an evanescent Bogoliubov quasi particle we mean the following. A real Bogoliubov quasi particle is a coherent superposition of an electron and a hole state. It reflects the presence of a finite zero momentum condensate of Cooper pairs in the vacuum. We envisage that in the presence of a fluctuating condensate the electron and hole quasi particles in Bi will have a transient Bogoliubov quasi particle character, through Andreev reflection by a strongly fluctuating local phase order. An electron or hole quasi particle in a standard band insulator does not have superconducting fluctuations in the vacuum. In this sense the vacuum that supports the electron and hole quasi particles and the superconducting pairing fluctuations in Bi is different from a standard band insulator vacuum. The strong pairing correlation in Bi is a reservoir of resonating valence bonds. Since the effective superexchange J is high, the valence bonds are highly quantum. \textit{Consequently the valence bond solid is a quantum solid, a crystal of paired electrons with strong pair fluctuations}. The normal state in Bi has some similarity to the pseudogap phase of cuprates, where valence bonds get trapped by lattice distortions and form ordered stripes (valence bond order), at the expense of superconductivity. In cuprates we have external doping; in bismuth we have self doping. In the 1/8 commensurately doped La$_{2-x}$Ba$_x$CuO$_4$ cuprate we have a commensurate valence bond order and a nearly vanishing superconducting Tc.
\textbf{Recovery of Superconductivity by Quantum Melting of the Valence Bond Solid:} Is there a way of quantum melting the competing valence bond order and resurrecting high Tc superconductivity? At least two ways seem possible. One is external doping and destabilization of valence bond order. Alloys such as Bi$_{1-x}$A$_x$ (A = Pb, Sn, Sb, As, P, Tl, Se, Te, ..) formed from neighbouring elements in the periodic table hold some promise. If alloying converts the A7 structure to a simple cubic structure, there exist prospects for superconductivity, with a Tc much higher than 0.5 mK! Atomic radius and suitable quantum chemistry issues need to be taken care of in this approach. In one of the experiments Bi$_{1-x}$Sb$_x$ shows signals of superconductivity with a Tc onset as large as 36 K \cite{BiSupCond36K}. It is likely that in the bicrystal interface, because of lattice mismatch and strain, the interface region acquires a local cubic character. This results in quantum melting of valence bond order and recovery of the RVB state in the interface region, which could enhance the superconducting Tc. Use of hydrostatic or uniaxial pressure is another route. Early works and a very recent work \cite{BiPressure} point out that in single crystal Bi pressure brings in a series of structural changes and also makes the superconducting Tc as high as 6 to 8 K. On closer inspection we find that the pressure induced new structures show decreasing valence bond order in different fashions. Experimentally, however, a simple cubic phase does not get stabilized at finite pressure in Bi. The closest is a cubic phase with a monoclinic distortion, which is superconducting with a Tc of about 6 K. An interesting structure called the \textit{host-guest phase}, occurring at higher pressure, also has finite Tc superconductivity. In this structure superconductivity is likely to arise from weakly coupled guest chains of Bi embedded in the covalent insulating network of the host three dimensional Bi lattice.
We hope to discuss these structures in a future publication. As we mentioned earlier, As and Sb have the same A7 structure as Bi and are isoelectronic. The physics we have discussed is relevant there as well. It is interesting that As has a cubic structure in a range of moderate pressures, in addition to other structures. The nature of the low temperature phase in the cubic structure is worth investigating further. Our proposal predicts recovery of valence bond resonance and resurrection of high Tc superconductivity in the undistorted cubic structure of As. \textbf{Possibility of PT Violating Chiral Superconductivity:} In the theory we have presented so far, each one of the x, y and z chain systems becomes an anisotropic three dimensional system. Every one of them is a three dimensional self doped Mott insulator. They could individually support high Tc superconductivity based on the RVB mechanism. Residual interaction among these self doped Mott insulating systems could bring in new possibilities for the symmetry of the superconducting order parameter. It is known that repulsive Coulomb interaction can cause zero momentum electron pair tunnelling (scattering) between the three sets of bands. Repulsive pair scattering will frustrate the relative phases of the s-wave order parameters in the three systems. In the context of multiband superconductivity this problem has been studied in the past. In the case of two bands with repulsive pair scattering, Kondo \cite{Kondo} finds that the phases of the order parameter change sign between the two bands. If we have three bands related by symmetry, repulsive pair scattering could lead to \cite{Tesanovic} two degenerate \textit{PT violating chiral superconducting states} having an s-wave singlet order parameter and a relative phase difference between the three bands: (0, $\frac{2\pi}{3}$, $\frac{4\pi}{3}$) or (0, - $\frac{2\pi}{3}$, - $\frac{4\pi}{3}$). What happens after the high Tc superconductivity crashes, once valence bond order and lattice dimerization set in?
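The frustration argument above can be checked with a small numerical minimization. The sketch below is an illustration, with the overall pair-scattering scale set to unity: it minimizes the inter-band Josephson energy $E = \sum_{i<j}\cos(\phi_i - \phi_j)$ for repulsive coupling, with $\phi_1$ gauge-fixed to zero, and finds the two degenerate chiral minima at relative phases $\pm 2\pi/3$.

```python
import numpy as np

# Repulsive pair scattering between three symmetry-related s-wave condensates:
# E(phi2, phi3) = cos(phi2) + cos(phi3) + cos(phi2 - phi3), with phi1 = 0.
phis = np.linspace(0.0, 2.0*np.pi, 720, endpoint=False)
p2, p3 = np.meshgrid(phis, phis, indexing="ij")
E = np.cos(p2) + np.cos(p3) + np.cos(p2 - p3)

# Locate the global minimum on the phase grid
i, j = np.unravel_index(np.argmin(E), E.shape)
print(f"E_min = {E[i, j]:.3f} at phases ({p2[i, j]:.3f}, {p3[i, j]:.3f})")
# Two degenerate minima, (2*pi/3, 4*pi/3) and (4*pi/3, 2*pi/3): the chiral pair.
```

Both minima have $|\phi_2 - \phi_3| = 2\pi/3$ and energy $-3/2$ per unit coupling, realizing the two PT conjugate states quoted in the text.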
We find that the situation is somewhat similar for pairing in the electron and hole pockets. That is, repulsive interband pair scattering among the pockets at the L and T points favours a PT violating ultra low Tc chiral superconducting state. \textbf{Normal State Anomalies:} In our theory Bi is a kind of supersolid of Cooper pairs. A quantum crystal of paired electrons coexists with phase fluctuating zero momentum Cooper pairs. It is likely that the phase fluctuations are organized into vortices and fluctuating circulating diamagnetic currents, reminiscent of Anderson-Ong's theory \cite{AndersonOng} of the vortex liquid for the pseudogap phase of cuprates. In this sense the normal state of Bi in our theory, following Anderson's analogy \cite{PWASupSolidCuprate} for the pseudogap phase of cuprates, is similar to the supersolid phase in $^4$He. A new aspect here is the small density of electron and hole carriers present in this rich background, behaving as remnant evanescent Bogoliubov quasi particles. Some of the striking anomalies in the normal state of Bi are long mean free paths, large Nernst and diamagnetism signals, rich magnetic field induced quantum oscillation phenomena, etc. There has been serious study in the past to explain these anomalies using one electron physics, based on the peculiar band structure of Bi and the strong spin-orbit coupling. Our thesis is that pairing correlation is very basic and unavoidable in solid Bi. Such hidden electron pairing correlations, not contained in a standard band insulator description, could make a substantial contribution to the above anomalies. One of the less known and little understood anomalies in Bi is the large relaxation rate $\frac{1}{T_1}$ seen in Li NMR \cite{CavaNMR} in the normal state. The relaxation rates are comparable to those in metallic Au, which has a 4 to 5 orders of magnitude higher conduction electron density. This anomaly might have an origin in the residual superconducting fluctuations that we have proposed.
\section{Discussion} Bismuth is one of the well studied elemental solids in condensed matter physics. From the experimental point of view it continues to surprise us. In the present paper we have studied one such surprise, namely ultra low Tc superconductivity. Focussing on the well known quasi one dimensionality, arising from the 6p valence orbital organization in Bi, we have discussed an interplay of Mott localization, metallization and chain dimerization. We suggest interesting possibilities arising from even the moderate electron-electron repulsion present in the 6p orbitals of Bi. We have suggested a direction to think about, based on phenomenology and microscopic considerations. A lot remains to be investigated on the heuristics and the model we have presented. For example, making use of the hidden one dimensional character one could go beyond RVB mean field theory and develop a coupled chain bosonization and renormalization theory, including Umklapp terms. This will help one obtain a phase diagram and see how U and interchain hopping compete under renormalization. One of the consequences of our proposal is the presence of low energy spin-1 collective modes supported by fluctuating Mott localization in the chains. It will be interesting to study this issue in detail theoretically and also look for signals in neutron scattering and Raman scattering. Any experiment which tries to bring out the hidden Mottness and superconducting fluctuations will be welcome. We have not explicitly taken into account the strong spin-orbit coupling present in Bi in our theory. Since Kramers theorem replaces the real spin by a Kramers pair, most of our qualitative conclusions for the bulk are valid. Further, the strong covalency (directional bonding in the A7 structure) present in Bi seems to quench the effect of spin-orbit coupling. The situation we encounter is somewhat similar to Hg, Tl and Pb; they are neighbours of Bi in the same row of the periodic table, having comparably strong spin-orbit coupling.
Bulk electronic properties in Hg, Tl and Pb, including superconductivity, can be discussed by replacing spin by a Kramers pair, without invoking spin-orbit coupling explicitly. The tiny hole and electron Fermi pockets in the BZ are viewed as remnant evanescent Bogoliubov quasi particle excitations in a vacuum containing superconducting fluctuations, after superconductivity has collapsed. Do they leave any direct experimental signatures? One of our predictions is the possibility of a PT violating order parameter in the recently observed ultra low Tc superconductivity in Bi. It will be interesting to look for this in experiments. Surface physics in Bi is an active field now: strong spin-orbit coupling, at the level of band theory, leads to topological phases and phenomena. Surface physics in Bi is likely to become richer from the added dimension of strong correlation effects we have suggested in the present article. As we mentioned earlier, our theory is applicable to Sb and As, neighbours of Bi in the same column of the periodic table. Revival of high Tc superconductivity in Bi, As and Sb, through quantum melting of the valence bond crystal, seems plausible. Further theoretical and experimental studies for a better understanding of these systems, with the hope of finding high Tc superconductivity and other exotic phases, are needed. \textbf{Acknowledgement:} I thank S Ramakrishnan (TIFR) for discussion of his results. I thank P.W. Anderson, R.N. Bhatt, N.P. Ong, S. Sondhi and Z. Soos at Princeton for critical remarks; E.H. Lieb for bringing to my attention reference \cite{LiLiebWu}. I am grateful to the Science and Engineering Research Board (SERB, India) for a National Fellowship. This work, partly performed at the Perimeter Institute for Theoretical Physics, Waterloo, Canada, is supported by the Government of Canada through Industry Canada and by the Province of Ontario through the Ministry of Research and Innovation.
\section{Introduction} It is a remarkable fact that an asymptotically flat four dimensional Kerr black hole could be described holographically by a two-dimensional conformal field theory (CFT). The Kerr/CFT correspondence was first proposed for the extremal Kerr black hole by studying the asymptotic symmetry group of its near horizon geometry~\cite{AndyWei, matsuo, Castro:2009jf}. This correspondence was first supported by the match of the microscopic entropy counting with the macroscopic Bekenstein-Hawking entropy, and further supported by the agreement of the superradiant scattering amplitudes with the dual CFT prediction~\cite{Bredberg:2009pv, Hartman:2009nz, Cvetic:2009jn, ChenChu, Becker:2010jj}. Furthermore, the Kerr/CFT correspondence was generalized to a generic non-extremal Kerr black hole by the study of the hidden conformal symmetry acting on the solution space~\cite{Castro:2010fd}. It was found that in the low-frequency limit the radial equation of the scalar scattering off a Kerr black hole could be written in terms of the $SL(2, R)$ quadratic Casimir. Though the hidden conformal symmetry cannot generate new solutions, it does determine the scattering amplitudes. Both the entropy counting and the low frequency amplitudes again support the holographic picture. Inspired by the Kerr/CFT correspondence, the holographic descriptions of other kinds of black holes have been investigated~\cite{KerrCFT, HiddenSymmetry}. Among them, the holographic description of the Reissner-Nordstr\"om~(RN) charged black hole is of particular interest. For the extremal RN black hole, the charge gives the radius of the AdS space appearing in the near horizon geometry, and is therefore expected to determine the central charge of the dual CFT~\cite{Hartman:2008pb, Garousi:2009zx, Chen:2009ht, Chen:2010bsa, Chen:2010yu, Chen:2010as}.
However, unlike the Kerr case, the AdS$_3$ information of the RN black hole is not completely encoded in the geometry, but also in the gauge potential; it is therefore subtle to compute the central charge of the dual CFT~\cite{Chen:2010bsa}. The Kerr/CFT and RN/CFT correspondences naturally suggest that for the four-dimensional Kerr-Newman black hole there should exist two distinct hidden conformal symmetries associated with twofold holographic descriptions, called the J-picture and the Q-picture respectively~\cite{Chen:2010yw}. This is reminiscent of the multi-fold descriptions of higher-dimensional extremal Kerr black holes, which have multiple independent angular momenta~\cite{Lu:2008jk, Chen:2009xja, Krishnan:2010pv}. On the other hand, the investigation of holographic pictures of Kerr black holes in AdS or dS spacetime is also remarkable. In this case, the function determining the horizon is quartic rather than quadratic, as for the asymptotically flat black hole, and this fact makes the search for the hidden conformal symmetry more tricky. Different from the Kerr case, in which one can move away from the horizon and work with the ``near'' region, one has to focus strictly on the near horizon region to find the hidden conformal symmetry. This is in accordance with the universal behavior of the black hole. In~\cite{Chen:2010bh}, a holographic picture of the Kerr(-Newman)-AdS-dS black hole, from the viewpoint of angular momentum, has been proposed. Both the Bekenstein-Hawking entropy and the superradiant scattering amplitudes are in good agreement with the CFT prediction. It is a natural expectation that there should also be a Q-picture description dual to a Kerr-Newman-AdS-dS black hole. In this article, we will explore this picture in more detail. In the next section, we discuss the hidden conformal symmetry in the holographic Q-picture. Similar to the treatment in~\cite{Chen:2010bh}, we need to focus on the near-horizon region.
After turning off the angular mode of the probe scalar field, we find that the radial equation could be rewritten in terms of the $SL(2, R)$ quadratic Casimir, indicating the existence of a hidden conformal symmetry. As a result, we read out the temperatures of the dual CFT. In section 3, we present the microscopic description in the Q-picture. The key step is to find the central charges of the dual CFT. As in the other cases, we first consider the central charges of the near-extremal black holes and expect that the same expression holds even for generic black holes. In the Q-picture, another subtle point is that we have to uplift the 4D black hole to 5D in order to read off the central charges. It turns out that the central charges and temperatures include a free parameter. Furthermore we discuss the thermodynamics of the black hole and calculate the superradiant scattering amplitude, which is in perfect agreement with the CFT prediction. We end with some discussions in section 4. \section{Hidden conformal symmetry in Q-picture} For a four-dimensional Kerr-Newman-AdS-dS black hole, the metric takes the following form in Boyer-Lindquist-type coordinates~\cite{Caldarelli:1999xj} \begin{equation} \label{KerrNewman} ds^2 = - \frac{\Delta_r}{\rho^2} \left( d t - \frac{a \sin^2\theta}{\Xi} d\phi \right)^2 + \frac{\rho^2}{\Delta_r} dr^2 + \frac{\rho^2}{\Delta_\theta} d\theta^2 + \frac{\Delta_\theta}{\rho^2} \sin^2\theta \left( a dt - \frac{r^2 + a^2}{\Xi} d\phi \right)^2, \end{equation} where \begin{eqnarray} \Delta_r &=& (r^2 + a^2) \left( 1 + \frac{r^2}{l^2} \right) - 2 M r + q^2, \qquad q^2 = q^2_e + q^2_m, \nonumber\\ \Delta_\theta &=& 1 - \frac{a^2}{l^2} \cos^2\theta, \nonumber\\ \rho^2 &=& r^2 + a^2 \cos^2\theta, \nonumber\\ \Xi &=& 1 - \frac{a^2}{l^2}. \end{eqnarray} Here $l^{-2}$ is the normalized cosmological constant, which is positive for AdS and negative for dS.
The above metric reduces to that of a Kerr-Newman black hole when $l^{-2} = 0$, and it describes a RN-AdS-dS black hole when $a = 0$. The physical mass, angular momentum and charges of the black hole are related to the parameters $M, a, q_{e, m}$ by \begin{equation} M_{ADM} = \frac{M}{\Xi^2}, \qquad J = \frac{a M}{\Xi^2}, \qquad Q_{e, m} = \frac{q_{e, m}}{\Xi}. \end{equation} The gauge potential and its field strength are respectively \begin{eqnarray} A_{[1]} &=& - \frac{q_e r}{\rho^2} \left( d t - \frac{a \sin^2\theta}{\Xi} d\phi \right) - \frac{q_m \cos\theta}{\rho^2} \left( a dt - \frac{r^2 + a^2}{\Xi} d\phi \right), \\ F_{[2]} &=& - \frac{q_e (r^2 - a^2 \cos^2\theta) + 2 q_m r a \cos\theta}{\rho^4} \left( d t - \frac{a \sin^2\theta}{\Xi} d\phi \right) \wedge dr \nonumber\\ & & + \frac{q_m (r^2 - a^2 \cos^2\theta) - 2 q_e r a \cos\theta}{\rho^4} \sin\theta d\theta \wedge \left( a dt - \frac{r^2 + a^2}{\Xi} d\phi \right). \end{eqnarray} In the following, for simplicity, we focus on the electrically charged black hole, i.e. $q_m = 0, \; q = q_e$. The Hawking temperature, entropy and angular velocity of the horizon are respectively \begin{eqnarray} T_H &=& \frac{r_+}{4 \pi (r_+^2 + a^2)} \left( 1 - \frac{a^2 + q^2}{r^2_+} + \frac{a^2}{l^2} + \frac{3 r^2_+}{l^2} \right), \nonumber\\ S_{BH} &=& \frac{\pi (r_+^2 + a^2)}{\Xi}, \nonumber\\ \Omega_H &=& \frac{a \Xi}{r_+^2 + a^2}. \label{OH} \end{eqnarray} The electric potential $\Phi_e$, measured at infinity with respect to the horizon, is \begin{equation} \Phi_e = A_\mu \xi^\mu \biggr|_{r \to \infty} - A_\mu \xi^\mu \biggr|_{r=r_+} = \frac{q_e r_+}{r^2_+ + a^2}, \label{Epotential} \end{equation} where $\xi = \partial_t + \Omega_H \partial_\phi$ is the null generator of the horizon. As discussed in~\cite{Chen:2010bh}, the scattering issue in the Kerr-AdS-dS and Kerr-Newman-AdS-dS black holes needs a careful treatment, because the function $\Delta_r$ is quartic.
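The $\theta$-independence of the horizon potential in~(\ref{Epotential}) can be verified symbolically. The following sketch (symbol names are ours) contracts the electric part of the gauge potential with the horizon generator and confirms that the result, evaluated on the horizon, is constant over the horizon.

```python
import sympy as sp

r, rp, a, th, qe, Xi = sp.symbols("r r_plus a theta q_e Xi", positive=True)
rho2 = r**2 + a**2*sp.cos(th)**2

# Electric part of the gauge potential (q_m = 0) and horizon angular velocity
A_t   = -qe*r/rho2
A_phi =  qe*r*a*sp.sin(th)**2/(rho2*Xi)
Omega = a*Xi/(rp**2 + a**2)

# Contract with the horizon generator xi = d_t + Omega_H d_phi
Axi = A_t + Omega*A_phi
Axi_inf = sp.limit(Axi, r, sp.oo)            # vanishes at infinity
Phi_e = sp.simplify(Axi_inf - Axi.subs(r, rp))
print(Phi_e)  # q_e r_+ / (r_+^2 + a^2), with all theta dependence cancelled
```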
In order to find the hidden conformal symmetry, one has to focus on the near-horizon region. In the near-horizon region, the function $\Delta_r$ can be expanded to quadratic order in $r - r_+$, \begin{eqnarray} \Delta_r &=& (r^2 + a^2) \left( 1 + \frac{r^2}{l^2} \right) - 2 M r + q^2 \nonumber\\ &\simeq & k (r - r_+) (r - r_\ast), \end{eqnarray} where $r_+$ is the outer horizon, and \begin{eqnarray} k &=& 1 + \frac{a^2}{l^2} + \frac{6 r^2_+}{l^2}, \label{k} \\ r_\ast &=& r_+ - \frac{r_+}{k} \left( 1 - \frac{a^2 + q^2}{r^2_+} + \frac{a^2}{l^2} + \frac{3 r^2_+}{l^2} \right). \label{rstar} \end{eqnarray} For a complex massless scalar field with charge $e$, the dynamics is given by the Klein-Gordon (KG) equation \begin{equation} (\nabla_\mu + i e A_\mu) (\nabla^\mu + i e A^\mu) \Phi = 0. \end{equation} By imposing the following ansatz \begin{equation} \label{ansatz} \Phi = e^{-i \omega t + i m \phi} {\cal S}(\theta) {\cal R}(r), \end{equation} the KG equation is decoupled into an angular equation \begin{equation} \frac{1}{\sin\theta} \partial_\theta \left( \sin\theta \Delta_\theta \partial_\theta {\cal S} \right) - \frac{(m \Xi)^2}{\Delta_\theta \sin^2\theta} {\cal S} + \frac{2 m a \Xi \omega - a^2 \omega^2 \sin^2\theta}{\Delta_\theta} {\cal S} + K {\cal S} = 0, \end{equation} and a radial equation \begin{equation} \partial_r \left( \Delta_r \partial_r {\cal R} \right) + \frac{[ \omega (r^2 + a^2) - m a \Xi - e q_e r ]^2}{\Delta_r} {\cal R} - K {\cal R} = 0, \end{equation} where $K$ is the separation constant.
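The quadratic truncation can be checked symbolically: with $M$ eliminated through the horizon condition $\Delta_r(r_+) = 0$, the coefficients $k$ and $r_\ast$ of~(\ref{k}) and~(\ref{rstar}) reproduce the first and second derivatives of $\Delta_r$ at $r_+$. A sketch in our notation:

```python
import sympy as sp

r, rp, a, l, q, M = sp.symbols("r r_plus a l q M", positive=True)

Delta = (r**2 + a**2)*(1 + r**2/l**2) - 2*M*r + q**2
M_hor = sp.solve(Delta.subs(r, rp), M)[0]   # horizon condition Delta_r(r_+) = 0

k     = 1 + a**2/l**2 + 6*rp**2/l**2
r_ast = rp - (rp/k)*(1 - (a**2 + q**2)/rp**2 + a**2/l**2 + 3*rp**2/l**2)

D1 = sp.diff(Delta, r).subs({r: rp, M: M_hor})
D2 = sp.diff(Delta, r, 2).subs(r, rp)

print(sp.simplify(D2/2 - k))             # 0: quadratic coefficient equals k
print(sp.simplify(D1 - k*(rp - r_ast)))  # 0: linear coefficient matches
```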
In the near-horizon region and in the low frequency limit $r_+ \omega \ll 1$, the radial equation can be simplified to \begin{equation} \label{scalarKNAdS} \partial_r [ (r - r_+) (r - r_\ast) \partial_r {\cal R} ] + \frac{r_+ - r_\ast}{r - r_+} A \, {\cal R} + \frac{r_+ - r_\ast}{r - r_\ast} B \, {\cal R} + C \, {\cal R} = 0, \end{equation} with \begin{eqnarray} A &=& \frac{\left[ \omega (r^2_+ + a^2) - m a \Xi - e q_e r_+ \right]^2}{k^2 (r_+ - r_\ast)^2}, \nonumber\\ B &=& - \frac{\left[ \omega (r^2_\ast + a^2) - m a \Xi - e q_e r_\ast \right]^2}{k^2 (r_+ - r_\ast)^2}, \nonumber\\ C &=& \frac{e^2 q_e^2}{k^2} - \frac{K}{k}. \end{eqnarray} In order to study the hidden conformal symmetry, we need to introduce the following conformal coordinates for non-extremal black holes: \begin{eqnarray} \omega^+ &=& \sqrt{\frac{r - r_+}{r - r_\ast}} \; e^{2 \pi T_R \phi + 2 n_R t}, \nonumber\\ \omega^- &=& \sqrt{\frac{r - r_+}{r - r_\ast}} \; e^{2 \pi T_L \phi + 2 n_L t}, \nonumber\\ y &=& \sqrt{\frac{r_+ - r_\ast}{r - r_\ast}} \; e^{\pi (T_L + T_R) \phi + (n_L + n_R) t}, \nonumber \end{eqnarray} from which we may define locally the vector fields \begin{eqnarray} H_1 &=& i \partial_+, \nonumber\\ H_0 &=& i \left( \omega^+ \partial_+ + \frac{1}{2} y \partial_y \right), \nonumber\\ H_{-1} &=& i \left( \omega^{+2} \partial_+ + \omega^+ y \partial_y - y^2 \partial_- \right), \end{eqnarray} and \begin{eqnarray} \tilde H_1 &=& i \partial_-, \nonumber\\ \tilde H_0 &=& i \left( \omega^- \partial_- + \frac{1}{2} y\partial_y \right), \nonumber\\ \tilde H_{-1} &=& i \left( \omega^{-2} \partial_- + \omega^- y \partial_y - y^2 \partial_+ \right). \end{eqnarray} These vector fields obey the $SL(2,R)$ Lie algebra \begin{equation} [ H_0, H_{\pm 1} ] = \mp i H_{\pm 1}, \qquad [H_{-1}, H_1] = -2 i H_0, \end{equation} and similarly for $(\tilde H_0, \tilde H_{\pm 1})$.
The quadratic Casimir is \begin{eqnarray} {\cal H}^2 = \tilde{\cal H}^2 &=& - H_0^2 + \frac{1}{2} (H_1 H_{-1} + H_{-1} H_1) \nonumber\\ &=& \frac{1}{4} (y^2 \partial^2_y - y \partial_y) + y^2 \partial_+ \partial_-. \end{eqnarray} In terms of the $(t, r, \phi)$ coordinates, the Casimir becomes \begin{eqnarray} \label{H2} {\cal H}^2 &=& (r - r_+) (r - r_\ast) \partial_r^2 + (2 r- r_+ - r_\ast) \partial_r \nonumber\\ && + \frac{r_+ - r_\ast}{r - r_\ast} \left( \frac{n_L - n_R}{4 \pi G} \partial_\phi - \frac{T_L - T_R}{4 G} \partial_t \right)^2 \nonumber\\ && - \frac{r_+ - r_\ast}{r - r_+} \left( \frac{n_L + n_R}{4 \pi G} \partial_\phi - \frac{T_L + T_R}{4 G} \partial_t \right)^2, \end{eqnarray} where $G = n_L T_R - n_R T_L$. By assuming $q_e = 0$ in~(\ref{scalarKNAdS}), one can find a hidden conformal symmetry from the radial equation. This leads to a holographic J-picture of a Kerr-Newman-AdS-dS black hole, as shown in~\cite{Chen:2010bh}. In this case, the radial equation can be written in terms of the $SL(2,R)$ quadratic Casimir as \begin{equation} \tilde{\cal H}^2 {\cal R}(r) = {\cal H}^2 {\cal R}(r) = -C {\cal R}(r), \end{equation} with the identification \begin{eqnarray} n^J_R = 0, & & n^J_L = - \frac{k}{2 (r_+ + r_\ast)}, \nonumber\\ T^J_R = \frac{k (r_+ - r_\ast)}{4 \pi a \Xi}, & & T^J_L = \frac{k (r^2_+ + r^2_\ast + 2 a^2)}{4 \pi a \Xi(r_+ + r_\ast)}. \label{identificationJ} \end{eqnarray} This helps us fix the temperatures of the dual CFT. On the other hand, in the radial equation~(\ref{scalarKNAdS}), one may instead set $m = 0$. This leads to another holographic description of the Kerr-Newman-AdS-dS black hole, which will be called the Q-picture. Define an operator $\partial_\chi$ that acts on the ``internal space'' of the $U(1)$ symmetry of the complex scalar field, whose eigenvalue is the charge of the scalar field, $\partial_\chi \Phi = i \eta e \Phi$.
Then the radial equation can be rewritten as \begin{equation} {\cal H}^2 {\cal R}(r) = - C {\cal R}(r), \end{equation} with the $SL(2,R)$ Casimir operator~(\ref{H2}) in which the derivative $\partial_\phi$ is replaced by $\partial_\chi$. The temperatures $T_{L, R}$ and the parameters $n_{L, R}$ of the dual 2D CFT can be identified as \begin{eqnarray} n^Q_L = - \frac{k (r_+ + r_\ast)}{4 (r_+ r_\ast - a^2)}, & & n^Q_R = - \frac{k (r_+ - r_\ast)}{4 (r_+ r_\ast - a^2)}, \nonumber\\ T^Q_L = \frac{k \eta(r^2_+ + r^2_\ast + 2 a^2)}{4 \pi q_e (r_+ r_\ast - a^2)}, & & T^Q_R = \frac{k \eta (r^2_+ - r^2_\ast)}{4 \pi q_e (r_+ r_\ast - a^2)}. \label{identificationQ} \end{eqnarray} It would be interesting to study the hidden conformal symmetry in the extremal limit as well. In this case, the radial equation~(\ref{scalarKNAdS}) reduces to \begin{eqnarray} \label{scalarexKNAdS} \partial_r (r - r_+)^2 \partial_r {\cal R}(r) + \frac{1}{k^2} \frac{2 [ \omega (r^2_+ + a^2) - m a \Xi - e q_e r_+ ] (2 \omega r_+ - e q_e)}{r - r_+} {\cal R}(r) \nonumber\\ + \frac{1}{k^2} \frac{[ \omega (r^2_+ + a^2) - m a \Xi - e q_e r_+ ]^2}{(r - r_+)^2} {\cal R}(r) + C {\cal R}(r) = 0. \end{eqnarray} For the extremal black hole, we should use the following conformal coordinates~\cite{Chen:2010fr} \begin{eqnarray} \omega^+ &=& \frac{1}{2} \left( \alpha_1 t + \beta_1 \phi - \frac{\gamma_1}{r - r_+} \right), \nonumber\\ \omega^- &=& \frac{1}{2} \left( e^{2 \pi T_L \phi + 2 n_L t} - \frac{2}{\gamma_1} \right), \label{confcor}\\ y &=& \sqrt{\frac{\gamma_1}{2 (r - r_+)}} e^{\pi T_L \phi + n_L t}.
\nonumber \end{eqnarray} Then the corresponding $SL(2,R)$ quadratic Casimir is \begin{equation} {\cal H}^2 = \partial_r (\Delta \partial_r) - \left( \frac{\gamma_1 (2 \pi T_L \partial_t - 2 n_L \partial_\phi)}{\bar A (r - r_+)} \right)^2 - \frac{2 \gamma_1 (2 \pi T_L \partial_t - 2 n_L \partial_\phi)}{\bar A^2 (r - r_+)}(\beta_1 \partial_t - \alpha_1 \partial_\phi), \end{equation} where $\bar A = 2 \pi T_L \alpha_1 - 2 n_L \beta_1$ and $\Delta = (r - r_+)^2$. In the J-picture, we set $q_e = 0$ and rewrite the radial equation as \begin{equation} \tilde{\cal H}^2 {\cal R}(r) = {\cal H}^2 {\cal R}(r) = -C {\cal R}(r), \end{equation} with the identification \begin{equation} \alpha^J_1 = 0, \qquad \beta^J_1 = \frac{\gamma_1 k}{a}, \qquad T^J_L = \frac{k}{\Xi} \frac{r_+^2 + a^2}{4 \pi a r_+}, \qquad n^J_L = - \frac{k}{4 r_+}. \end{equation} The left temperature $T^J_L$ and $n^J_L$ are consistent with the identification~(\ref{identificationJ}) in the extremal limit. In the Q-picture, we set $m = 0$ and rewrite the radial equation as \begin{equation} {\cal H}^2 {\cal R}(r) = - C {\cal R}(r), \end{equation} with the identification \begin{eqnarray} & & \alpha^Q_1 = - \frac{k}{r_+^2 - a^2} \gamma_1, \qquad \beta^Q_1 = \frac{2 r_+ \eta k}{q_e (r^2_+ - a^2)} \gamma_1, \nonumber\\ & & n^Q_L = - \frac{k r_+}{2 (r^2_+ - a^2)}, \qquad T^Q_L = \frac{\eta k (r^2_+ + a^2)}{2 \pi q_e (r^2_+ - a^2)}. \end{eqnarray} Once again, the left temperature $T^Q_L$ and $n^Q_L$ are consistent with the identification~(\ref{identificationQ}) in the extremal limit. For an RN-AdS-dS black hole, corresponding to the limit $a = 0$, there is only one holographic description; the J-picture in this limit is actually singular. In the limit of vanishing cosmological constant, we recover both the J-picture and the Q-picture of the asymptotically flat case, studied in~\cite{Chen:2010yw}.
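As a consistency check on the non-extremal Q-picture identification~(\ref{identificationQ}), one can substitute the eigenvalues $\partial_t \to -i\omega$ and $\partial_\chi \to i\eta e$ into the Casimir~(\ref{H2}) (with $\partial_\phi$ replaced by $\partial_\chi$) and verify that the coefficient of the $(r_+ - r_\ast)/(r - r_+)$ pole reproduces the coefficient $A$ of the radial equation at $m = 0$. A sketch in our notation:

```python
import sympy as sp

rp, rs, a, qe, eta, k, e, w = sp.symbols(
    "r_plus r_star a q_e eta k e omega", positive=True)
D = rp*rs - a**2

# Q-picture temperatures and n_{L,R}
nL = -k*(rp + rs)/(4*D)
nR = -k*(rp - rs)/(4*D)
TL = k*eta*(rp**2 + rs**2 + 2*a**2)/(4*sp.pi*qe*D)
TR = k*eta*(rp**2 - rs**2)/(4*sp.pi*qe*D)
G  = nL*TR - nR*TL

# Pole coefficient at r = r_+ from the Casimir, after d_t -> -i*w, d_chi -> i*eta*e
bracket = (nL + nR)/(4*sp.pi*G)*(eta*e) + (TL + TR)/(4*G)*w
A_casimir = bracket**2

# Coefficient A of the radial equation with m = 0
A_radial = (w*(rp**2 + a**2) - e*qe*rp)**2/(k**2*(rp - rs)**2)

print(sp.simplify(A_casimir - A_radial))  # 0
```

Note that the free parameter $\eta$ drops out of the pole coefficient, as it must.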
\section{Microscopic description in Q-picture} In order to have a complete Q-picture description, we need to determine the central charges of the dual CFT. Let us consider the near horizon geometry of the extreme Kerr-Newman-AdS-dS black hole with degenerate horizon at $r_+ = r_* = r_0$. In terms of the following new coordinates \begin{equation} \label{coordinate} r = r_0 + \epsilon \hat r, \qquad t = \frac{r_0^2 + a^2}{k} \frac{\hat t}{\epsilon}, \qquad \phi = \hat \phi + \Omega_H t, \end{equation} the metric becomes~\cite{Hartman:2008pb, Rasmussen:2010xd} \begin{equation} \label{NHmetric} ds^2 = \Gamma(\theta) \left[ - \hat r^2 d\hat t^2 + \frac{d\hat r^2}{\hat r^2} + \alpha(\theta) d\theta^2 \right] + \gamma(\theta) (d\hat \phi + p^J \, \hat r d\hat t)^2, \end{equation} where \begin{equation} \Gamma(\theta) = \frac{\rho_0^2}{k}, \qquad \alpha(\theta) = \frac{k}{\Delta_\theta}, \qquad \gamma(\theta) = \frac{(r_0^2 + a^2)^2 \Delta_\theta \sin^2\theta}{\Xi^2 \, \rho_0^2}, \end{equation} and \begin{equation} \rho_0^2 = r_0^2 + a^2 \cos^2\theta, \qquad p^J = \frac{2 a r_0 \Xi}{k (r_0^2 + a^2)}. \end{equation} The near horizon gauge field is \begin{equation} A_{[1]} = \frac{q_e}{\rho_0^2} \left( \frac{r_0 a \sin^2\theta}{\Xi} d\hat\phi + \frac{r_0^2 - a^2 \cos^2\theta}{k} \hat r d\hat t \right) + f(\theta) (d\hat \phi + p^J \, \hat r d\hat t), \end{equation} with \begin{equation} f(\theta) = \frac{q_m (r_0^2 + a^2)}{\Xi \rho_0^2} \cos\theta. \end{equation} The central charges of the dual CFT can be read from the near-horizon geometry of the extremal black holes. It turns out that in the J-picture ($q_e = q_m = 0$, i.e. $q = 0$), the dual 2D CFT has the central charge \begin{equation} c = 3 p^J \int_0^{\pi} d\theta \sqrt{\Gamma(\theta) \alpha(\theta) \gamma(\theta)} = \frac{12 a r_0}{k}. \end{equation} For a general expression of the central charge, one should rewrite $r_0$ in terms of the mass, i.e. $(r_+ + r_\ast)/2$, as was done for the Kerr black holes.
Consequently, the J-picture left- and right-handed central charges are \begin{equation} c^J_L = c^J_R = \frac{6 a (r_+ + r_\ast)}{k}. \end{equation} It is easy to see from the Cardy formula \begin{equation} S^J_{CFT} = \frac{\pi^2}{3} (c^J_L T^J_L + c^J_R T^J_R), \end{equation} that the CFT entropy recovers exactly the macroscopic Bekenstein-Hawking entropy. This provides primary evidence for our holographic picture. In order to get the central charges in the Q-picture ($a = 0$), we need to uplift the geometry to 5D. We combine the $U(1)$ gauge bundle \begin{equation} A_{[1]} = p^Q \hat r d\hat t + \frac{q_m}{\Xi} \cos\theta d\hat \phi, \qquad p^Q = \frac{q_e}{k}, \end{equation} with the geometry and write the 5D space as \begin{equation} ds^2 = ds^2_{BH} + (d y + A_{[1]})^2, \end{equation} where $y$ is the fiber coordinate with period $2 \pi \eta$ and $ds^2_{BH}$ is the 4D near horizon metric~(\ref{NHmetric}). We can choose similar boundary conditions as in~\cite{Hartman:2008pb} \begin{equation} h_{\mu\nu} \sim \left(\begin{array}{ccccc} \hat r^2 & \hat r & 1/\hat r & 1/\hat r^2 & 1 \\ & 1/\hat r & 1 & 1/\hat r & 1 \\ & & 1/\hat r & 1/\hat r^2 & 1/\hat r \\ & & & 1/\hat r^3 & 1/\hat r \\ & & & & 1 \end{array} \right), \end{equation} in the basis of $(\hat t, \hat \phi, \theta, \hat r, y)$. The most general diffeomorphisms preserving those boundary conditions are \begin{equation} \zeta^{(y)} = \epsilon(y) \partial_{y} - \hat r \epsilon^\prime(y) \partial_{\hat r}, \end{equation} where $\epsilon(y) = e^{-i n y}$. The central charge can be computed from the 5D generalization of the treatment in~\cite{Hartman:2008pb}. It turns out that the central charge associated with $\zeta^{(y)}$ is \begin{equation}\label{centralQ} c = \frac{3 p^Q}{\eta} \int_0^{\pi} d\theta \sqrt{\Gamma(\theta) \alpha(\theta) \gamma(\theta)} = \frac{6 q_e r_0^2}{\eta \Xi k}.
\end{equation} Similarly, one should rewrite $r_0$ in terms of the general variables, $r_+, r_\ast$ and $a$, to obtain the general central charges. For the Q-picture, inspired by the results of the RN black hole, one should rewrite $r_0^2$ in terms of the charge squared, i.e. $r_+ r_\ast - a^2$, which leads to the general expression of the Q-picture left- and right-handed central charges \begin{equation} c^Q_L = c^Q_R = \frac{6 q_e}{\eta \Xi} \frac{r_+ r_\ast - a^2}{k}. \end{equation} In the extreme limit $r_+ = r_\ast$, we have \begin{equation} a^2 = \frac{r_+^2 (1 + 3 r_+^2/l^2) - q^2}{1 - r_+^2/l^2}\,. \end{equation} Taking $a \to 0$, we see that the central charge~(\ref{centralQ}) recovers precisely that of the extreme Reissner-Nordstr\"om-AdS-dS black hole~\cite{Hartman:2008pb}, \begin{equation} c \to \frac{6 q_e \tilde r_0^2}{\eta}\,, \end{equation} where \begin{equation} \label{ceRNAdS} \tilde r_0^2 \to \frac{r_+^2 (1 - r_+^2/l^2)}{1 + 6 r_+^2/l^2 - 3 r_+^4/l^4 - q^2/l^2}\,. \end{equation} We conclude that in the Q-picture, the Kerr-Newman-AdS-dS black hole is described by a dual 2D CFT with the central charges~(\ref{centralQ}) and temperatures~(\ref{identificationQ}). \subsection{Thermodynamics} One can check that in the Q-picture the Bekenstein-Hawking entropy of a Kerr-Newman-AdS-dS black hole can be reproduced through the Cardy formula using~(\ref{identificationQ}) and~(\ref{centralQ}) \begin{equation} S_{BH} = \frac{\pi (r_+^2 + a^2)}{\Xi} \equiv \frac{\pi^2}{3} (c_L T_L + c_R T_R) = S_{CFT}. \end{equation} This provides a nontrivial check of our suggestion. From the first law of thermodynamics \begin{equation} \delta S_{BH} = \frac{\delta M - \Omega_H \delta J - \Phi_e \delta q}{T_H} = \frac{\delta E_L}{T_L} + \frac{\delta E_R}{T_R}, \end{equation} we always have \begin{equation} \delta E_L = \omega_L - q_L \mu_L, \qquad \delta E_R = \omega_R - q_R \mu_R, \end{equation} in both pictures, but the identifications are quite different.
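As a quick consistency check, one can verify symbolically that the Cardy formula with these central charges and the CFT temperatures reproduces $S_{BH} = \pi(r_+^2 + a^2)/\Xi$ in the extremal limit $r_+ = r_\ast = r_0$, where $T_R = 0$ and only the left-moving sector contributes. A minimal sketch with \texttt{sympy} (our own symbol names):

```python
import sympy as sp

# our own symbol names; all parameters taken positive
a, r0, k, Xi, qe, eta = sp.symbols('a r_0 k Xi q_e eta', positive=True)

S_BH = sp.pi*(r0**2 + a**2)/Xi   # Bekenstein-Hawking entropy at r_+ = r_* = r0

# J-picture: c_L = 6 a (r_+ + r_*)/k and T_L at extremality (T_R = 0)
cJ = 12*a*r0/k
TJ = (k/Xi)*(r0**2 + a**2)/(4*sp.pi*a*r0)
assert sp.simplify(sp.pi**2/3*cJ*TJ - S_BH) == 0

# Q-picture: c_L = 6 q_e (r_+ r_* - a^2)/(eta Xi k) and the extremal T_L
cQ = 6*qe*(r0**2 - a**2)/(eta*Xi*k)
TQ = eta*k*(r0**2 + a**2)/(2*sp.pi*qe*(r0**2 - a**2))
assert sp.simplify(sp.pi**2/3*cQ*TQ - S_BH) == 0
```

Note that the free parameter $\eta$ drops out of the product $c^Q_L T^Q_L$, as it must for the entropy match to be unambiguous.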
The variations are related to the parameters of the probe scalar field by $\delta M = \omega, \; \delta J = m, \; \delta Q = e$. In the J-picture, we have \begin{eqnarray} \label{identificationJ1} && \omega^J_L = \frac{r^2_+ + r_\ast^2 + 2 a^2}{2 a \Xi} \omega, \qquad \omega^J_R = \frac{r^2_+ + r_\ast^2 + 2 a^2}{2 a \Xi} \omega - m, \nonumber\\ && q^J_L = q^J_R = \delta Q = e, \nonumber\\ && \mu^J_L = \frac{q (r^2_+ + r_\ast^2 + 2 a^2)}{2 a \Xi (r_+ + r_\ast)}, \qquad \mu^J_R = \frac{q (r_+ + r_\ast)}{2 a \Xi}. \end{eqnarray} While in the Q-picture we have \begin{eqnarray} \label{identificationQ1} && \omega^Q_L = \frac{\eta (r_+ + r_\ast) (r_+^2 + r_\ast^2 + 2a^2)}{2 q_e (r_+ r_\ast - a^2)} \omega, \nonumber\\ && \omega^Q_R = \frac{\eta (r_+ + r_\ast) (r_+^2 + r_\ast^2 + 2a^2)}{2 q_e (r_+ r_\ast - a^2)} \omega - \frac{\eta a \Xi (r_+ + r_\ast)}{q_e (r_+ r_\ast - a^2)} m, \nonumber\\ && q^Q_L = q^Q_R = \delta Q = e, \nonumber\\ && \mu^Q_L = \frac{\eta (r_+^2 + r_\ast^2 + 2 a^2)}{2 (r_+ r_\ast - a^2)}, \qquad \mu^Q_R = \frac{\eta (r_+ + r_\ast)^2}{2 (r_+ r_\ast - a^2)}. \end{eqnarray} In the limit of vanishing cosmological constant, these quantities reduce to the ones found in~\cite{Chen:2010yw}. \subsection{Superradiant scattering} In a 2D conformal field theory, one can define the two-point function \begin{equation} G(t^+, t^-) = \langle {\cal O}^\dagger_\phi(t^+, t^-) {\cal O}_\phi(0) \rangle, \end{equation} where $t^+, t^-$ are the left and right moving coordinates of the 2D worldsheet and ${\cal O}_\phi$ is the operator corresponding to the field perturbing the black hole.
For an operator of dimensions $(h_L, h_R)$, charges $(q_L, q_R)$ at temperatures $(T_L, T_R)$ and chemical potentials $(\mu_L, \mu_R)$, the two-point function is dictated by conformal invariance and takes the form~\cite{Cardy:1984bb}: \begin{equation} \label{G-Mink} G(t^+, t^-) \sim (-1)^{h_L + h_R} \left( \frac{\pi T_L}{\sinh(\pi T_L t^+)} \right)^{2h_L} \left( \frac{\pi T_R}{\sinh(\pi T_R t^-)} \right)^{2h_R} e^{i q_L \mu_L t^+ + i q_R \mu_R t^-}. \end{equation} The retarded correlator $G_R (\omega_L, \omega_R)$ is analytic on the upper half complex $\omega_{L,R}$-plane and its value along the positive imaginary $\omega_{L,R}$-axis gives the Euclidean correlator: \begin{equation} \label{GER} G_E(\omega_{L, E}, \omega_{R, E}) = G_R(i \omega_{L,E}, i \omega_{R,E}), \qquad \omega_{L, E}, \; \omega_{R, E} > 0. \end{equation} At finite temperature, $\omega_{L,E}$ and $\omega_{R,E}$ take discrete values of the Matsubara frequencies \begin{equation} \omega_{L, E} = 2 \pi m_L T_L, \qquad \omega_{R,E} = 2 \pi m_R T_R, \end{equation} where $m_L, m_R$ are integers for bosonic modes and are half integers for fermionic modes. In a 2D CFT, the Euclidean correlator $G_E$ is obtained by a Wick rotation $t^+ \to i \tau_L$, $t^- \to i \tau_R$, and is determined by the conformal symmetry. 
At finite temperature the Euclidean time is taken to have period $2 \pi/T_L, 2 \pi/T_R$ and via analytic continuation the momentum space Euclidean correlator is given by~\cite{Maldacena:1997ih} \begin{eqnarray} \label{GE} G_E(\omega_{L, E}, \omega_{R, E}) &\sim& T_L^{2 h_L - 1} T_R^{2 h_R - 1} \, e^{i \frac{\bar\omega_{L, E}}{2 T_L}} \, e^{i \frac{\bar\omega_{R, E}}{2 T_R}} \; \Gamma\left( h_L + \frac{\bar\omega_{L, E}}{2 \pi T_L} \right) \Gamma\left( h_L - \frac{\bar\omega_{L, E}}{2 \pi T_L} \right) \nonumber\\ && \times \Gamma\left( h_R + \frac{\bar\omega_{R, E}}{2 \pi T_R} \right) \Gamma\left( h_R - \frac{\bar\omega_{R, E}}{2 \pi T_R} \right), \end{eqnarray} where \begin{equation} \bar\omega_{L, E} = \omega_{L,E} - i q_L \mu_L, \qquad \bar\omega_{R, E} = \omega_{R, E} - i q_R \mu_R. \end{equation} Since the function $\Delta_r$ is quartic, the radial equation for a generic black hole is intractable. In the study of the hidden conformal symmetry, we focus on the near horizon region. However, when we try to discuss the scattering issue and move away from the horizon, the expansion of $\Delta_r$ to the second order of $(r - r_+)$ breaks down. Therefore in general, we cannot use the radial equation~(\ref{scalarKNAdS}) to discuss the scattering process, even though it provides useful information on the dual CFTs. Nevertheless, for a near-extremal Kerr-Newman-AdS-dS black hole, we may pose a well-defined scattering problem. To this end, we need to zoom into the near-horizon region and introduce the coordinates~(\ref{coordinate}) to describe the geometry. In this case, we have to focus on the frequencies near the superradiant bound $\omega_s$, \begin{equation} \omega = \omega_s + \hat \omega \frac{\epsilon}{r_0}\,, \qquad \omega_s = m \Omega_H + e \Phi_e, \end{equation} where $\Omega_H$ and $\Phi_e$ are the horizon angular velocity and the electric potential given in~(\ref{OH}) and~(\ref{Epotential}) respectively.
The wave function of the radial equation is then \begin{equation} {\cal R}(z) = z^{\alpha} (1 - z)^{\beta} F(a, b, c\,; z), \end{equation} with \, $z = \frac{\hat r - \lambda}{\hat r + \lambda}$ \, and \begin{eqnarray} && \alpha = -i \hat A, \qquad \beta = \frac{1}{2} \left( 1 - \sqrt{1 - 4 \hat C} \right), \\ && c = 1 + 2 \alpha, \qquad a = \alpha + \beta + i \hat B, \qquad b = \alpha + \beta - i \hat B, \\ && \hat A = \frac{\hat \omega}{2 \lambda}, \quad \hat B = \frac{\hat \omega}{2 \lambda} - \frac{2 m \Omega_H r_+}{k} - \frac{e q}{k} \frac{r^2_+ - a^2}{r^2_+ + a^2}, \quad \hat C = \hat C(\omega_s). \end{eqnarray} The solution behaves asymptotically as \begin{equation} {\cal R}(r) \simeq A_1 r^{h-1} + A_2 r^{-h}, \end{equation} where $h$ is the conformal weight of the scalar field \begin{equation} h = 1 - \beta = \frac{1}{2} \left( 1 + \sqrt{1 - 4 \hat C} \right). \end{equation} Taking $A_1$ as the source and $A_2$ as the response, the retarded Green's function is just~\cite{Chen:2010xu} \begin{eqnarray} G_R &\sim& \frac{A_2}{A_1} \nonumber\\ &=& \frac{\Gamma(1 - 2h)}{\Gamma(2h - 1)} \frac{\Gamma\left( h - i (\hat A - \hat B) \right) \Gamma\left( h - i (\hat A + \hat B) \right)}{\Gamma\left( 1 - h - i (\hat A - \hat B) \right) \Gamma\left( 1 - h - i (\hat A + \hat B) \right)} \nonumber\\ &=& \frac{\Gamma(1 - 2h)}{\Gamma(2h - 1)} \frac{\Gamma\left( h - i \frac{\omega_L - q_L \mu_L}{2 \pi T_L} \right) \Gamma\left( h - i \frac{\omega_R - q_R \mu_R}{2 \pi T_R} \right)}{\Gamma\left( 1 - h - i \frac{\omega_L - q_L \mu_L}{2 \pi T_L} \right) \Gamma\left( 1 - h - i \frac{\omega_R - q_R \mu_R}{2 \pi T_R} \right)}. \label{realtime} \end{eqnarray} In the last line, we have applied the identifications~(\ref{identificationJ}) and~(\ref{identificationJ1}) in the J-picture and the identifications~(\ref{identificationQ}) and~(\ref{identificationQ1}) in the Q-picture. We see that in both pictures, the real-time correlator~(\ref{realtime}) is in perfect agreement with the CFT prediction.
So is the absorption cross section. Note that in the two different pictures, though the retarded Green's functions share the same expression~(\ref{realtime}), the temperatures, chemical potentials and frequencies are different. However the conformal weights are the same, reflecting the fact that the conformal weight is closely related to the asymptotic behavior. \section{Discussions} In this paper, we showed that there are two different holographic descriptions of a generic non-extremal Kerr-Newman-AdS-dS black hole. One is called the J-picture, whose construction is based on the black hole angular momentum. The other one is called the Q-picture, whose construction originates from the electric charge of the black hole. In these two different pictures, neither the central charges nor the temperatures of the dual CFTs are the same. In particular, in the Q-picture, the central charges and the temperatures are parameterized by a free constant $\eta$, sharing the same feature as the holographic descriptions of the RN and Kerr-RN black holes. As a byproduct of our analysis, we showed that there exists only one holographic description for a generic non-extremal RN-AdS-dS black hole. The discussion in this paper could be generalized easily in several directions. It is straightforward to consider a dyonic Kerr-Newman-AdS-dS black hole. Also the analysis could be applied to the higher dimensional Kerr-AdS-dS black holes and multi-charged black holes, both of which are expected to have multiple holographic descriptions. Moreover, the existence of multiple dual pictures leads to a natural belief that there should exist a certain duality among those different pictures. It is desirable to clarify this important issue. Very recently it was pointed out in \cite{Guica:2010ej} that the existence of different holographic dual descriptions of the 5D Kerr-Newman black hole is related to the long and short string pictures. It would be interesting to see if there exist similar pictures in our case.
\section*{Acknowledgements} CMC is grateful to Institute of Theoretical Physics and Morningside Center of Mathematics, Chinese Academy of Sciences for the hospitality when this paper was initiated. The work of BC was partially supported by NSFC Grant No.10775002, 10975005. This work by CMC was supported by the National Science Council of the R.O.C. under the grant NSC 99-2112-M-008-005-MY3 and in part by the National Center of Theoretical Sciences (NCTS).
\section{Introduction} \label{sec:intro} In wireless communications, relay stations have been used to relay radio signals between radio stations that cannot directly communicate with each other due to signal attenuation. More recently, relay stations have been used to achieve a certain spatial diversity called cooperative diversity to cope with fading channels \cite{Laneman}. On the other hand, it is important to efficiently utilize the scarce bandwidth due to the limitation of frequency resources \cite{Cov}, while conventional relay stations commonly use different wireless resources, such as frequency, time and code, for their reception and transmission of the signals. For this reason, a single-frequency full-duplex relay station, in which signals with the same carrier frequency are received and transmitted simultaneously, is considered as one of the key technologies in the fifth generation (5G) mobile communications systems \cite{2020beyond}. In order to realize such full-duplex relay stations, {\em self-interference} caused by coupling waves is the key issue \cite{Jain}. Fig.~\ref{coupling} illustrates self-interference by coupling waves. In this figure, radio signals with carrier frequency $f$ are transmitted from the base station (denoted by BS). One terminal (denoted by T1) directly receives the signal from the base station, but the other terminal (denoted by T2) is so far from the base station that they cannot communicate directly. Therefore, a relay station (denoted by RS) is placed between them to relay radio signals. Then, radio signals with the same carrier frequency $f$ are transmitted from RS to T2, but they are also fed back to the receiving antenna directly or through reflection objects. As a result, self-interference is caused in the relay station, which may deteriorate the quality of communication and, even worse, may destabilize the closed-loop system.
To tackle the issue of self-interference, many cancelation methods have been proposed for single-frequency full-duplex systems. Analog cancelation has been proposed in \cite{Rad,Knox}, in which analog devices are used for canceling coupling waves. Since coupling wave paths are physically analog systems and there is no quantization problem, this design is theoretically the most ideal except for implementation issues. On the other hand, \textit{digital cancelation} has attracted increasing attention, in which the interference is subtracted in the digital domain by using digital signal processing techniques \cite{sakai2006simple,Dua,Gol,Chun,Snow,Haya13,Sen}. Digital cancelers benefit from easy implementation on digital devices, at the expense of neglecting the response between sampling instants. In addition, spatial domain techniques, called antenna cancelation, have also been proposed in \cite{Jain,Kho}, in which the interference is reduced by arranging the antenna placement. See \cite{Jain,Dua} for details. \begin{figure}[t] \centering \scalebox{0.55}{\includegraphics{fig/coupling.eps}} \caption{Self-interference} \label{coupling} \end{figure} For the problem of self-interference, a pre-nulling method \cite{Chun} and adaptive methods \cite{sakai2006simple,Haya13} have been proposed to cancel the effect of coupling waves. In these studies, a relay station is modeled by a discrete-time system, and the performance is optimized in the discrete-time domain. However, radio waves are in nature continuous-time signals and hence the performance should be discussed in the continuous-time domain. In other words, one should take account of {\em intersample behavior} for coupling wave cancelation. In theory, if the signals are completely band-limited below the Nyquist frequency, then the intersample behavior can be restored from the sampled data in principle \cite{Shannon}, and the discrete-time domain approaches might work well.
However, the assumption of perfect band limitedness is hardly satisfied in real signals since \begin{enumerate} \item real baseband signals are not fully band-limited; \item pulse-shaping filters, such as raised-cosine filters, do not act perfectly; \item the nonlinearity in electric circuits adds frequency components beyond the Nyquist frequency. \end{enumerate} One might think that if the sampling frequency is fast enough, then the assumption is almost satisfied and there is no problem. But this is not true; firstly, the sampling frequency cannot be arbitrarily increased in real systems, and secondly, even though the sampling is quite fast, intersample oscillations may happen in feedback systems \cite[Sect.~7]{Yamamoto99-1}. To solve the problem mentioned above, we propose a new design method for coupling wave cancelation based on the {\em sampled-data control theory} \cite{Chen95,Yamamoto99-1}. We model the transmitted radio signals and coupling waves as continuous-time signals, and optimize the worst-case continuous-time error due to coupling waves by a {\em digital} canceler. This is formulated as a sampled-data $H^\infty$ optimal control problem, which can be solved via the fast-sampling fast-hold (FSFH) method \cite{Kel,Yamamoto1999729}. We also propose robust feedback cancelers that can take account of uncertainties in the coupling wave path characteristics, such as unknown multipath interference due to, for example, large structures that reflect radio waves, or the change of weather conditions \cite{Tak}. Design examples are shown to illustrate the proposed methods. The present manuscript expands on our recent conference contributions \cite{SSHRsci,sasaharaSICE14} by incorporating robust feedback control into the formulation. The remainder of this article is organized as follows. In Section \ref{sec:rs}, we derive a mathematical model of the relay station considered in this study.
In Section \ref{sec:fb}, we propose sampled-data $H^\infty$ control for cancelation of self-interference. Here we also discuss robust control against uncertainty in the delay time. In Section \ref{sec:sim}, simulation results are shown to illustrate the effectiveness of the proposed method. In Section \ref{sec:conc}, we offer concluding remarks. \subsection*{Notation} Throughout this article, we use the following notation. We denote by $L^2$ the Lebesgue space consisting of all square integrable real functions on $[0, \infty)$ endowed with $L^2$ norm $\|\cdot\|_2$. The symbol $t$ denotes the argument of time, $s$ the argument of Laplace transform and $z$ the argument of $Z$ transform. These symbols are used to indicate whether a signal or a system is of continuous-time or discrete-time. The operator $e^{-Ls}$ with nonnegative real number $L$ denotes the continuous-time delay operator with delay time $L$. For a matrix $A$, $\overline{\sigma}(A)$ denotes the maximum singular value of $A$. \section{Relay Station Model} \label{sec:rs} In this section, we provide a mathematical model of a relay station with self-interference phenomenon. \begin{figure}[t] \includegraphics[width = 85mm]{fig/relay.eps} \caption{Relay Station} \label{Relay Station} \end{figure} Fig.~\ref{Relay Station} depicts a single-frequency full-duplex relay station implemented with a digital canceler \cite{Knox}. A radio wave with carrier frequency $f$ from a base station is accepted at the receiving antenna and amplified by the low noise amplifier (LNA). Then, the received signal is demodulated to a baseband signal by the demodulator, and converted to a digital signal by the analog-to-digital converter (ADC). The obtained digital signal is then processed by the digital signal processor (DSP) into another digital signal, which is converted to an analog signal by the digital-to-analog converter (DAC). 
Finally, the analog signal is modulated to a radio wave with carrier frequency $f$, amplified by the power amplifier (PA) and transmitted by the transmission antenna. A problem here is that the transmitted signal will again reach the receiving antenna. This is called a coupling wave and causes self-interference, which deteriorates the communication quality. \begin{figure}[t] \includegraphics[width = 85mm]{fig/relay2.eps} \caption{Simple Block Diagram of Relay Station} \label{Relay Station2} \end{figure} Fig.~\ref{Relay Station2} shows a simplified block diagram of the relay station. We model the LNA and the PA in Fig.~\ref{Relay Station} as static gains, $a_1$ and $a_2$, respectively. The modulator is denoted by ${\mathcal M}$ and the demodulator by ${\mathcal D}$. We assume that the coupling wave channel is a flat fading channel, that is, all frequency components of a signal through this channel experience the same magnitude fading. Then the channel can be treated as an all-pass system. In this study, we adopt a delay system, $re^{-Ls}$, as the channel model, where $r>0$ is the attenuation rate and $L>0$ is the delay time. The block named ``Digital System'' includes the ADC, DSP and DAC in Fig.~\ref{Relay Station}. In this article, we consider the quadrature amplitude modulation (QAM), which is used widely in digital communication systems, as the modulation method. QAM transforms a transmission signal into two orthogonal carrier waves, that is, a sine wave and a cosine wave. We assume the transmission signal $u(t)$ is given by \begin{Meqnarray} u(t) := \sum_{k} g(t-kh) \left[ \begin{array}{c} u_k^I \\ u_k^Q \\ \end{array} \right]. \end{Meqnarray} In this expression, $g(t)$ is a general pulse-shaping function, $h$ is the sampling period, and $u_k^I,u_k^Q$ denote respectively the in-phase and the quadrature components of a transmission symbol.
We assume that the support of the Fourier transform $G(j\omega)$ of $g(t)$ is finite and the bandwidth is much less than $4 \pi f$. In other words, there exists a frequency $f_g$ ($0<f_g\ll f$) such that $|G(j\omega)|=0$ for any $\omega \notin (-2\pi f_g,2\pi f_g)$. Then the modulated signal $\tilde{u}(t)$ can be written as \cite[Chap.~2]{HayComm} \begin{Meqnarray} \tilde{u}(t) &=& {\mathcal M} u(t)\nonumber\\ &=& \sum_{k} g(t-kh) ( u_k^I \cos 2\pi ft- u_k^Q \sin 2\pi ft). \end{Meqnarray} On the other hand, the demodulation operator ${\mathcal D}$ is a linear operator satisfying ${\mathcal D}{\mathcal M}=1$ \cite{HayComm}. Fig.~\ref{Demodulator} shows the block diagram of ${\mathcal D}$. In this block diagram, $H_{\rm id}(j\omega)$ is the ideal low-pass filter with cut-off frequency $f_c$ satisfying \begin{Meqnarray} H_{\rm id}(j\omega) = \begin{cases} 1, & \text{~if~} |\omega|<2\pi f_c,\\ 0, & \text{~otherwise}. \end{cases} \end{Meqnarray} The cut-off frequency is chosen to satisfy $f_g \ll f_c \ll f$. By the linearity of ${\mathcal D}$, we obtain the block diagram shown in Fig.~\ref{Relay Station3}, which is equivalent to Fig.~\ref{Relay Station2}. Here \begin{Meqnarray} \tilde{u}(t-L) &=& \sum_k g(t-L-kh) \biggl\{ \bigl(u_k^I \cos (2\pi fL) \nonumber\\ &+& u_k^Q \sin (2\pi fL)\bigr) \cos (2\pi ft) \nonumber\\ &+& \bigl(u_k^I \sin (2\pi fL) - u_k^Q \cos (2\pi fL)\bigr) \sin (2\pi ft)\biggr\}. \end{Meqnarray} Thus, we have \begin{Meqnarray} & &\cos (2\pi ft) \cdot \tilde{u}(t-L)\nonumber\\ &~& =\frac{1}{2} \sum_k g(t-L-kh) \biggl\{ u_k^I \cos (2\pi fL) + u_k^Q \sin (2 \pi fL) \nonumber\\ &~& +(u_k^I \cos (2\pi fL) + u_k^Q \sin (2\pi fL)) \cos (4 \pi ft) \nonumber\\ &~& +(u_k^I \sin (2\pi fL) - u_k^Q \cos (2\pi fL)) \sin (4 \pi ft) \biggr\}. \end{Meqnarray} From this, we have \begin{Meqnarray} & & 2H_{\rm id}[\cos(2\pi ft) \cdot \tilde{u}(t-L)]\nonumber\\ &~& = \sum_k g(t-L-kh) \bigl\{u_k^I \cos(2\pi f L) + u^Q_k \sin(2\pi f L)\bigr\}.
\end{Meqnarray} In the same way, we have \begin{Meqnarray} & & 2H_{\rm id}[-\sin(2\pi ft) \cdot \tilde{u}(t-L)]\nonumber\\ &~& = \sum_k g(t-L-kh) \bigl\{-u_k^I \sin(2\pi f L) + u^Q_k \cos(2\pi f L)\bigr\}. \end{Meqnarray} Finally, we have the following relation: \begin{Meqnarray} u_L(t) &=& {\mathcal D}\left( a_1ra_2 \tilde{u}(t-L) \right)\nonumber \\ &=& \alpha A_L u(t-L), \end{Meqnarray} where $\alpha := a_1a_2r$ and \begin{Meqnarray} A_L := \left[ \begin{array}{cc} \cos (2\pi fL) & \sin (2\pi fL) \\ -\sin (2\pi fL) & \cos (2\pi fL) \\ \end{array} \right]. \end{Meqnarray} \begin{figure}[t] \centering \includegraphics[width = 70mm]{fig/demodulator.eps} \caption{Structure of a demodulator} \label{Demodulator} \end{figure} \begin{figure}[t] \includegraphics[width = 85mm]{fig/relay3.eps} \caption{Equivalent Block Diagram of Relay Station} \label{Relay Station3} \end{figure} Putting these together, we obtain the relay station model depicted in Fig.~\ref{Relay Station Model}. From this figure, we can see that the relay station with self-interference is a \emph{feedback} system. In practice, the gain of PA in Fig.~\ref{Relay Station} is very high (e.g., $a_2=1000$) and the loop gain becomes much larger than $1$, and hence we should discuss the \emph{stability} as well as self-interference cancelation. To achieve these requirements, we design the digital controller in the digital system, which is shown in detail in Fig.~\ref{Digital Factors}.
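The demodulation computation above amounts to two trigonometric identities: mixing the delayed passband signal with $2\cos(2\pi ft)$ and $-2\sin(2\pi ft)$ splits it into a baseband part, which survives the ideal low-pass filter, and a double-frequency part, which does not; the surviving pair is exactly $A_L$ acting on $(u_k^I, u_k^Q)$. A minimal symbolic check with \texttt{sympy} (a single symbol, with the pulse shape $g$ factored out; symbol names are ours):

```python
import sympy as sp

t, f, L, uI, uQ = sp.symbols('t f L u^I u^Q', real=True)
w = 2*sp.pi*f

# one symbol of the delayed passband signal (pulse shape g factored out)
ut = uI*sp.cos(w*(t - L)) - uQ*sp.sin(w*(t - L))

# mixing with 2cos / -2sin gives a baseband part (kept by the ideal LPF)
# plus a double-frequency part (removed by it)
I_bb = uI*sp.cos(w*L) + uQ*sp.sin(w*L)
I_hf = uI*sp.cos(2*w*t - w*L) - uQ*sp.sin(2*w*t - w*L)
Q_bb = -uI*sp.sin(w*L) + uQ*sp.cos(w*L)
Q_hf = -uI*sp.sin(2*w*t - w*L) - uQ*sp.cos(2*w*t - w*L)
assert sp.simplify(2*sp.cos(w*t)*ut - I_bb - I_hf) == 0
assert sp.simplify(-2*sp.sin(w*t)*ut - Q_bb - Q_hf) == 0

# the surviving baseband pair is exactly A_L acting on (u^I, u^Q)
A_L = sp.Matrix([[sp.cos(w*L), sp.sin(w*L)],
                 [-sp.sin(w*L), sp.cos(w*L)]])
diff = sp.Matrix([I_bb, Q_bb]) - A_L*sp.Matrix([uI, uQ])
assert diff.applyfunc(sp.simplify) == sp.zeros(2, 1)
```

The check confirms that a pure delay in the coupling path appears in the baseband as the rotation $A_L$, which is what makes the feedback model below tractable.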
\begin{figure}[t] \includegraphics[width = 85mm]{fig/relay_model.eps} \caption{Relay Station Model} \label{Relay Station Model} \end{figure} \begin{figure}[t] \centering \includegraphics[width = 80mm]{fig/digital_system.eps} \caption{Digital System} \label{Digital Factors} \end{figure} In Fig.~\ref{Digital Factors}, the ADC in Fig.~\ref{Relay Station} is modeled by an anti-aliasing analog filter $F(s)$ with an ideal sampler ${\mathcal S}_h$ with sampling period $h > 0$, defined by \begin{Meqnarray} \begin{array}{c} {\mathcal S}_h : \{ y_0(t) \} \mapsto \{ y_d[n] \} : y_d[n] = y_0(nh), \\ n = 0,1,2,\ldots. \\ \end{array} \end{Meqnarray} For the DSP block in Fig.~\ref{Relay Station}, we assume a digital filter denoted by $K(z)$, which we design for self-interference cancelation. The DAC in Fig.~\ref{Relay Station} is modeled by a zero-order hold, ${\mathcal H}_h$, defined by \begin{Meqnarray} \begin{array}{c} {\mathcal H}_h : \{u_d[n]\} \mapsto \{u_0(t)\}:u_0(t)=u_d[n], \\ t \in [nh, (n+1)h), n=0,1,2,\ldots,\\ \end{array} \end{Meqnarray} and a post analog low-pass filter denoted by $P(s)$. We assume that $F(s)$ and $P(s)$ are proper, stable and real-rational transfer function matrices. Note that $F(s)$ is normally chosen to be strictly proper, which is included in this assumption. \section{Feedback Control} \label{sec:fb} Fig.~\ref{Feedback Canceler} shows the block diagram of the feedback control system of the relay station. \begin{figure}[t] \centering \includegraphics[width = 80mm]{fig/feedback_design.eps} \caption{Feedback Canceler} \label{Feedback Canceler} \end{figure} For this system, we find the digital controller, $K(z)$, that stabilizes the feedback system and also minimizes the effect of self-interference, $z:=v-u$, for any $v$.
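The ideal sampler ${\mathcal S}_h$ and zero-order hold ${\mathcal H}_h$ defined above can be sketched in a few lines; this is a toy illustration (the value of $h$ and the test signal are ours, not from the design example):

```python
import numpy as np

h = 0.5  # sampling period (illustrative value, not from the paper)

def sampler(y, n):
    """Ideal sampler S_h: y_d[n] = y0(n h)."""
    return y(n*h)

def hold(ud, t):
    """Zero-order hold H_h: u0(t) = u_d[n] for t in [n h, (n+1) h)."""
    return ud[int(np.floor(t/h))]

yd = [sampler(np.sin, n) for n in range(4)]   # y_d[n] = sin(n h)
assert hold(yd, 0.49) == yd[0]                # t still in [0, h)
assert hold(yd, 0.50) == yd[1]                # the hold jumps at t = h
assert hold(yd, 1.20) == yd[2]                # t in [2h, 3h)
```

The piecewise-constant output of the hold is exactly the intersample behavior that a purely discrete-time design never sees, which motivates the sampled-data formulation below.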
To obtain a reasonable solution, we restrict the input continuous-time signal $v$ to the following set \begin{Meqnarray} WL^2 := \{v=Ww:w \in L^2, \|w \|_{2} = 1\}, \end{Meqnarray} where $W$ is a continuous-time LTI system with real-rational, stable, and strictly proper transfer function $W(s)$. Under this assumption, we first solve a \emph{nominal} control problem where all system parameters are previously known. Then we propose a \emph{robust} controller design against uncertainty in the coupling wave paths. \subsection{Nominal Controller Design} Here we consider the nominal controller design problem formulated as follows: \begin{problem} Find the digital controller (canceler) $K(z)$ that stabilizes the feedback system in Fig.~\ref{Feedback Canceler} and uniformly minimizes the $L^2$ norm of the error $z = v-u$ for any $v \in WL^2$. \end{problem} This problem is reducible to a standard sampled-data $H^{\infty}$ control problem \cite{Chen95,Yamamoto99-1}. To see this, let us consider the block diagram shown in Fig.~\ref{Feedback Canceler2}. Let $T_{zw}$ be the system from $w$ to $z$. Then we have \begin{Meqnarray} z = v-u = T_{zw}w \end{Meqnarray} and hence uniformly minimizing $\| z\|_{2}$ for any $v \in WL^2$ is equivalent to minimizing the $H^{\infty}$ norm of $T_{zw}$, \begin{Meqnarray} \| T_{zw}\|_{\infty} = \sup_{w \in L^2,~\|w \|_{2} = 1} \|T_{zw} w \|_{2}. \end{Meqnarray} Let $\Sigma (s)$ be a generalized plant given by \begin{Meqnarray} \Sigma (s) = \left[ \begin{array}{cc} W(s) & -P(s) \\ F(s)W(s) & \alpha e^{-Ls}A_LF(s)P(s) \\ \end{array} \right]. \end{Meqnarray} By using this, we have \begin{Meqnarray} T_{zw}(s) = {\mathcal F}(\Sigma (s), {\mathcal H}_h K(z) {\mathcal S}_h), \end{Meqnarray} where ${\mathcal F}$ denotes the linear-fractional transformation (LFT) \cite{Chen95}. Fig.~\ref{Ge_Feedback} shows the block diagram of this LFT. Then our problem is to find a digital controller $K(z)$ that minimizes $\|T_{zw}\|_{\infty}$. 
This is a standard sampled-data $H^{\infty}$ control problem, and can be efficiently solved via FSFH approximation \cite{Kel,Nagahara13-2,Yamamoto1999729}. \begin{figure}[t] \includegraphics[width = 85mm]{fig/feedback_design2.eps} \caption{Block Diagram for Feedback Canceler Design} \label{Feedback Canceler2} \end{figure} \begin{figure}[t] \centering \includegraphics[width = 70mm]{fig/feedback_generalized_plant.eps} \caption{LFT $T_{zw}={\mathcal F}(\Sigma, {\mathcal H}_hK{\mathcal S}_h)$} \label{Ge_Feedback} \end{figure} Note that if there exists a controller $K(z)$ that minimizes $\|T_{zw}\|_{\infty}$, then the feedback system is stable and the effect of self-interference $z=v-u$ is bounded by the $H^{\infty}$ norm. We summarize this as a proposition. \begin{prop} Assume $\|T_{zw}\|_{\infty} \leq \gamma$ with $\gamma >0$. Then the feedback system shown in Fig.~\ref{Feedback Canceler} is stable, and for any $v \in WL^2$ we have $\|v-u\|_{2} \leq \gamma$. \end{prop} \begin{proof} First, if the feedback system is unstable, then the $H^{\infty}$ norm becomes unbounded. Next, for $v \in WL^2$ there exists $w \in L^2$ such that $v = Ww$ and $\| w \|_{2} = 1$. Then, inequality $\| T_{zw} \|_{\infty} \leq \gamma $ gives \begin{Meqnarray} \|v-u\|_{2} = \|T_{zw}w\|_{2} \leq \|T_{zw}\|_{\infty} \|w\|_{2} \leq \gamma. \end{Meqnarray} \hfill $\Box$ \end{proof} \subsection{Robust Controller Design against Multipath Interference} \label{sec:rob} In practice, the characteristic of the coupling wave channel changes due to, for example, large structures that reflect radio waves. In this situation, it is difficult to predict the coupling wave paths beforehand, and hence there must be uncertainties in the paths. Under this uncertainty, the nominal controller may lead to deterioration of cancelation performance, and even worse, it may make the feedback system unstable. To solve this problem, we propose \emph{robust} controller design against the uncertainty. 
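The norm bound in the proposition can be illustrated numerically on a toy discrete-time system (a hypothetical FIR response of our own choosing, not the relay model): the $H^\infty$ norm estimated on a frequency grid bounds the $\ell^2$ gain observed for any finite-energy input.

```python
import numpy as np

rng = np.random.default_rng(0)
g = np.array([0.5, -0.3, 0.2])   # hypothetical stable FIR error system

# estimate the H-infinity norm as the peak of |G(e^{jw})| on a fine grid
wgrid = np.linspace(0, np.pi, 4096)
G = np.exp(-1j*np.outer(wgrid, np.arange(len(g)))) @ g
gam = np.abs(G).max()

# for a finite-energy input x, the output z = g * x satisfies
# ||z||_2 <= gamma ||x||_2  (Parseval plus the peak-gain bound)
x = rng.standard_normal(256)
z = np.convolve(g, x)
assert np.linalg.norm(z) <= gam*np.linalg.norm(x) + 1e-9
```

In the sampled-data setting the same inequality holds with continuous-time $L^2$ norms, which is exactly the content of the proposition.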
Let us assume the characteristic of the coupling wave paths in Fig.~\ref{Relay Station Model} is perturbed as \begin{Meqnarray} r e^{-Ls} \mapsto r e^{-Ls} + \displaystyle{ \sum_{i=1}^{M} r_{i} e^{-L_is}}, \end{Meqnarray} where $r_i$ and $L_i$ are the attenuation ratio and the delay time of the $i$-th path, respectively. Note that $M$ represents the number of additional paths. Since the additional paths are detour paths, we assume \begin{Meqnarray} L_i > L, \quad i=1,2,\ldots,M. \end{Meqnarray} Then the characteristic of the feedback path in Fig.~\ref{Feedback Canceler} is perturbed as \begin{Meqnarray} \alpha e^{-Ls}A_L \mapsto \alpha e^{-Ls}A_L + \sum_{i=1}^{M}\alpha_i e^{-L_is}A_{L_i} \end{Meqnarray} where $\alpha_i:=a_1a_2r_i$. Define the error transfer function matrix $E(s)$ as \begin{Meqnarray} E(s) := \sum_{i=1}^M \frac{\alpha_i}{\alpha}e^{-(L_i-L)s}A_{L_i-L}. \label{eq:Es} \end{Meqnarray} Since $A_L$ is a rotation matrix on $\mathbf{R}^2$ and the angle is $2\pi fL$ clockwise, we have \begin{Meqnarray} & & \alpha e^{-Ls}A_L + \sum_{i=1}^{M} \alpha_i e^{-L_is}A_{L_i} \nonumber\\ & & \quad = \alpha e^{-Ls}A_L\biggl(I + \sum_{i=1}^{M}\dfrac{\alpha_i}{\alpha} e^{-(L_i-L)s} A_L^{-1} A_{L_i}\biggr)\nonumber \\ & & \quad = \alpha e^{-Ls}A_L\left(I + E(s)\right). \end{Meqnarray} Take a frequency weighting function matrix $W_2(s)$ that is real rational and satisfies \begin{Meqnarray} \overline{\sigma}(E(j\omega)) < \overline{\sigma}(W_2(j\omega)), \end{Meqnarray} for all $\omega \in {\mathbf R}$. 
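Two facts used in this construction can be checked directly: the composition rule $A_L^{-1} A_{L_i} = A_{L_i - L}$, which factors out $\alpha e^{-Ls} A_L$, and the fact that $\overline{\sigma}(E(j\omega))$ never exceeds $\sum_i r_i/r$, which justifies a constant choice of $W_2$. A sketch mixing \texttt{sympy} and \texttt{numpy}; the numeric path parameters $(r_i, L_i)$ are hypothetical:

```python
import sympy as sp
import numpy as np

# symbolic check of the composition rule A_L^{-1} A_{L_i} = A_{L_i - L}
f_s, L_s, Li_s = sp.symbols('f L L_i', real=True)

def A(delay):
    th = 2*sp.pi*f_s*delay
    return sp.Matrix([[sp.cos(th), sp.sin(th)],
                      [-sp.sin(th), sp.cos(th)]])

diff = A(L_s).inv()*A(Li_s) - A(Li_s - L_s)
assert diff.applyfunc(sp.simplify) == sp.zeros(2, 2)

# numeric spot check of sigma_max(E(jw)) <= sum_i r_i / r
r, L, f = 0.2, 1.0, 10000.0              # illustrative nominal values
paths = [(0.05, 1.3), (0.02, 2.1)]       # hypothetical (r_i, L_i), L_i > L

def E(w):
    M = np.zeros((2, 2), dtype=complex)
    for ri, Li in paths:
        th = 2*np.pi*f*(Li - L)
        Ai = np.array([[np.cos(th), np.sin(th)],
                       [-np.sin(th), np.cos(th)]])
        M += (ri/r)*np.exp(-1j*(Li - L)*w)*Ai
    return M

bound = sum(ri/r for ri, _ in paths)
sig = [np.linalg.svd(E(w), compute_uv=False)[0] for w in np.linspace(0, 20, 400)]
assert max(sig) <= bound + 1e-9
```

Each term of $E(j\omega)$ is $r_i/r$ times a unitary matrix, so the bound follows from the triangle inequality for the maximum singular value.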
Since $A_{L_i-L}$ is an orthogonal matrix, equation (\ref{eq:Es}) gives \begin{Meqnarray} \label{singular value} & &\overline{\sigma}(E(j\omega)) \leq \sum_{i=1}^{M} \frac{\alpha_{i}}{\alpha} \overline{\sigma}\left( e^{-j(L_i-L)\omega}A_{L_i-L}\right) \leq \sum_{i=1}^M\frac{r_i}{r}.\nonumber\\ \end{Meqnarray} Then the uncertainty in the coupling wave paths can be modeled as a multiplicative perturbation, that is, for any $M>0$, $r_i \geq 0$, $L_i > L$ ($i=1,\ldots,M$), we have \begin{Meqnarray} & &\alpha e^{-Ls}A_L + \displaystyle{ \sum_{i=1}^{M} \alpha_i e^{-L_is}A_{L_i}}\nonumber \\ & & \quad \in \{\alpha e^{-Ls}A_L\left(I+\Delta(s)W_2(s)\right): \|\Delta\|_\infty < 1\}. \end{Meqnarray} From the inequality (\ref{singular value}), we can take \begin{Meqnarray} W_2(s) = \left(\sum_{i=1}^M \frac{r_i}{r} + \varepsilon \right)I \label{eq:W2s} \end{Meqnarray} where $\varepsilon$ is an appropriately small positive number. Based on this formulation of the uncertainty, we consider the block diagram shown in Fig.~\ref{Robust Model}, where $W_1(s)$ plays the same role as $W(s)$ in the nominal controller design. \begin{figure}[t] \includegraphics[width = 85mm]{fig/robust_model.eps} \caption{Relay Station Model with Perturbation} \label{Robust Model} \end{figure} Let $T_{z_1w_1}$ be the system from $w_1$ to $z_1$ and $T_{z_2w_2}$ be the system from $w_2$ to $z_2$. If $\|T_{z_1w_1}\|_{\infty}$ is finite and $\|T_{z_2w_2}\|_{\infty} \leq 1$, then the feedback system is robustly stable, that is, the feedback system is internally stable for all $\Delta$ satisfying $\|\Delta\|_\infty < 1$, by the small gain theorem for sampled-data control systems \cite{Siv}. Now we formulate the robust controller design problem as follows: \begin{problem} Find the digital controller (canceler) $K(z)$ that minimizes $\|T_{z_1w_1}\|_\infty$ subject to $\|T_{z_2w_2}\|_\infty \leq 1$.
\end{problem} To solve this problem, we adopt the finite-dimensional $Q$-parametrization, which restricts the set of feasible controllers \cite{Hindi}. Then the constraints are represented by linear matrix inequalities (LMIs), and the problem can be efficiently solved via numerical optimization software such as \texttt{SDPT3} or \texttt{SeDuMi} on \texttt{MATLAB} \cite{Toh,Str}. For more details, see \cite{Hindi}. \section{Design Examples} \label{sec:sim} In this section, we show simulation results to illustrate the effectiveness of the proposed methods. We assume that the sampling period $h$ is normalized to $1$, the carrier frequency $f$ is $10000$~Hz, the attenuation rate of the coupling wave channel is $r = 0.2$, and the time delay is $L=1$. Note that the sampling frequency is $1$~Hz, which is much smaller than the carrier frequency. Note also that the time delay is equal to the sampling period $h$. We assume the low-noise amplifier gain $a_1 = 1$. An anti-alias analog filter is not employed in these examples; namely, we assume $F(s) = I$. The post filter $P(s)$ is modeled by \begin{Meqnarray} P(s) = \dfrac{1}{0.001s+1}I. \end{Meqnarray} We also assume the transmission gain to be \begin{Meqnarray} a_2 = 1000, \end{Meqnarray} that is, 60~dB. The frequency characteristic $W(s)$ is modeled by \begin{Meqnarray} W(s) = \dfrac{1}{2s+1}I. \label{eq:Ws} \end{Meqnarray} With these parameters, we compute the $H^\infty$-optimal nominal controller $K(z)$ by FSFH with discretization number $N=16$. With this controller, we simulate coupling wave cancelation with a random rectangular wave input with period $4$~s filtered by the low-pass filter $P(s)$. Note that this signal contains frequency components beyond the Nyquist frequency, $\pi/h = \pi$ [rad/sec], although the frequency of the wave, $\pi/8h = \pi/8$ [rad/sec], is much lower than $\pi$. Fig.~\ref{fb} shows the reconstructed signal $u$ in the feedback system (see Fig.~\ref{Feedback Canceler}).
The feedback system is guaranteed to be stable and the canceler achieves small reconstruction errors as shown in Fig.~\ref{fb_error}. \begin{figure}[t] \includegraphics[width = 85mm]{graph/feedback.eps} \caption{Feedback Cancelation: input signal (dash-dot line), reconstructed signal $u$ by feedback canceler (solid line)} \label{fb} \end{figure} \begin{figure}[t] \includegraphics[width = 85mm]{graph/fb_error.eps} \caption{Coupling wave effect $|v(t) - u(t)|$ by feedback canceler shown in Fig.~\ref{Feedback Canceler}} \label{fb_error} \end{figure} Next, let us consider uncertainty in the coupling wave paths. If the characteristic of the coupling wave paths changes, then the nominal controller may not work well. To see this, we design the nominal controller with \begin{Meqnarray} a_2 = 100, \end{Meqnarray} that is, 40~dB, and the other parameters are the same as above. Then we perturb the nominal coupling wave path to \begin{Meqnarray} r e^{-Ls} + r_1 e^{-L_1s} \end{Meqnarray} where $r_1 = 0.07r$ and $L_1 = 1.1L$. This results in instability, as shown in Fig.~\ref{fb_500}. \begin{figure}[t] \includegraphics[width = 85mm]{graph/nom1117.eps} \caption{Feedback Cancelation: input signal (dash-dot line) and reconstructed signal (solid line) with the perturbation} \label{fb_500} \end{figure} To overcome this, we use the robust controller proposed in subsection \ref{sec:rob} with $M=1$, $r_1=0.1r$, $W_1(s)=W(s)$ given in (\ref{eq:Ws}), and $W_2(s)$ as in (\ref{eq:W2s}). The dimension of the $Q$-parametrization is $8$ with the FSFH number $N=4$. Fig.~\ref{rob_500} shows the reconstructed signal, from which we can observe that the robust controller works well.
\begin{figure}[t] \includegraphics[width = 85mm]{graph/rob1117.eps} \caption{Robust Cancelation: input signal (dash-dot line) and reconstructed signal (solid line) with the perturbation.} \label{rob_500} \end{figure} \section{Conclusions} \label{sec:conc} In this paper, we have proposed a feedback controller design for self-interference cancelation in single-frequency full-duplex relay stations based on sampled-data $H^\infty$ control theory. In particular, we have proposed a robust controller design against unknown additive multipath interference. Simulation results have been shown to illustrate the effectiveness of the proposed cancelers in view of stability and robust stability. Future work may include FIR (Finite Impulse Response) filter design and adaptive FIR filtering as discussed in \cite{Nagahara13-1,Nag3}.
\section{Introduction} Currently, high-intensity laser systems \cite{1,2,3,4,5,6,7,8}, as well as sources of high-energy particles \cite{9,10,11,12}, are being intensively developed. This contributes to the intensive development of quantum electrodynamics (QED) in strong electromagnetic fields \cite{13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55}. An important place among such processes is occupied by resonant effects (Oleinik resonances \cite{24,15}) associated with an intermediate particle going onto the mass shell in an external electromagnetic field (see, for example, \cite{26,27,28,29,30,31,32,33,34,35,36,37}). It is important to emphasize that the resonant differential cross-sections can significantly exceed the corresponding non-resonant differential cross-sections \cite{29,30,31,32,33,34,35,36,37,38}. In a recent paper \cite{36}, the resonant photogeneration of electron-positron pairs on nuclei in strong light fields was studied. At the same time, however, the possibility of generating positrons (electrons) with energies close to the energies of the initial gamma quanta has not been studied. We will study the resonant process of photogeneration of pairs (PGP) for high-energy initial gamma quanta, as well as produced electrons and positrons, when the basic classical parameter \begin{equation} \label{eq1} \eta =\frac{eF\lambdabar}{mc^2} \end{equation} satisfies the relation \begin{equation} \label{eq2} \eta \ll \frac{\hbar \omega_i}{mc^2}\gg 1, \qquad \eta \ll \frac{E_{\pm}}{mc^2}\gg 1. \end{equation} Here $e$ and $m$ are the charge and mass of the electron, $F$ and $\lambdabar=c/\omega$ are the electric field strength and the reduced wavelength of the wave, $\omega$ is the frequency of the wave, $\omega_i$ is the frequency of the high-energy initial gamma quantum, and $E_{\pm}$ is the ultrarelativistic energy of the positron (electron).
The article \cite{36} shows that the resonant energy of the produced positron (for channel A) and electron (for channel B) is determined by their outgoing angles (see Eqs. (\ref{eq10}) and (\ref{eq12})), as well as by the quantum parameter \begin{equation} \label{eq3} \varepsilon_{\eta\left(r\right)} =r \varepsilon_\eta \ge 1, \quad \varepsilon_\eta=\frac{\omega_i}{\omega_\eta}. \end{equation} Here the parameter $\varepsilon_{\eta\left(r\right)}$ is numerically equal to the product of the number of photons absorbed in the external-field-stimulated Breit-Wheeler process $\left(r=1,2,3, \dots\right)$ and the quantum parameter $\varepsilon_\eta$. This parameter is equal to the ratio of the energy of the high-energy initial gamma quantum to the characteristic energy of the process $\hbar \omega_\eta$, which is determined by the experimental conditions and the laser setup: \begin{equation} \label{eq4} \hbar \omega_\eta =\frac{\left(mc^2\right)^2\left(1+\eta^2\right)}{\left(\hbar \omega\right)\sin^2\left(\theta_i/2\right)}. \end{equation} Here $\theta_i$ is the angle between the momentum of the initial gamma quantum and the direction of wave propagation. It can be seen from expression (\ref{eq4}) that the characteristic energy $\hbar \omega_\eta$ is inversely proportional to the photon energy of the wave $\left(\hbar \omega\right)$ and directly proportional to the intensity of the wave $\left(I\sim \eta^2\ \left(\mbox{Wcm}^{-2}\right)\right)$. It is important to note that the number of absorbed photons under resonance conditions significantly depends on the range of values of the quantum parameter $\varepsilon_\eta$ (\ref{eq3}), that is, on the relationship between the characteristic energy of the process and the initial energy of the gamma quantum.
If $\varepsilon_\eta < 1 \left(\omega_i < \omega_\eta\right)$, then the number of wave photons absorbed in the external-field-stimulated Breit-Wheeler process must be greater than or equal to a certain minimum number $r_{\min}$, which is determined by the parameter $\varepsilon_\eta$ (see equation (\ref{eq13})). At the same time, the number of absorbed photons of the wave can start with quite large numbers when $r_{\min} \gg 1 \left(\omega_i \ll \omega_\eta\right)$ (this is usually true for very strong electromagnetic fields). If the quantum parameter $\varepsilon_\eta \ge 1 \left(\omega_i \ge \omega_\eta\right)$, then the number of absorbed photons of the wave always begins with one photon (see equation (\ref{eq14})). It is important to emphasize that the number of absorbed photons of the wave significantly affects the magnitude of the resonant differential cross section. For a small number of absorbed photons of the wave $\left(r \sim 1\right)$, the resonant cross section will be significantly larger than for a large number of absorbed photons $\left(r \gg 1\right)$. Because of this, the case when the energy of the initial gamma quanta exceeds the characteristic energy of the process is of undoubted interest. Note that case (\ref{eq13}) was studied in detail in \cite{36}. At the same time, case (\ref{eq14}) was not considered. In this paper, within the framework of relation (\ref{eq14}), we will mainly consider the case \begin{equation} \label{eq5} \omega_i \gg \omega_\eta \quad \left(\varepsilon_{\eta\left(r\right)}=r\varepsilon_\eta \gg 1, \ r=1,2,3 \dots \right), \end{equation} when the resonant generation of ultrarelativistic positrons (electrons) takes place with maximum probability and with energies close to the energies of the initial gamma quanta. We will use the relativistic system of units: $\hbar=c=1$.
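The photon-number conditions above can be condensed into a one-line numerical check. The helper below is a sketch (the function name is ours); the sample values are the ones quoted later in the text for $\omega_i = 60$~GeV:

```python
import math

def r_min(eps_eta):
    """Minimum number of absorbed wave photons for resonance:
    ceil(1/eps_eta) if eps_eta < 1, otherwise 1 (eqs. (13)-(14))."""
    return max(1, math.ceil(1.0 / eps_eta))

# Optical case from the text: omega_eta = 523.9 GeV -> eps_eta ~ 0.1145
assert r_min(60.0 / 523.9) == 9     # matches r_min = 9 quoted in the paper
# X-ray case: omega_eta = 52.39 GeV -> eps_eta ~ 1.145, one photon suffices
assert r_min(60.0 / 52.39) == 1
```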
\section{Resonant energies of positrons (electrons) in strong fields} Oleinik resonances occur when an intermediate electron (positron) in the electromagnetic wave field enters the mass shell \cite{24,25,36}. Because of this, for channels A and B, we get (see Figure~\ref{figure1}): \begin{figure}[H] \centering \includegraphics[width=7cm]{Figure1a} \qquad \includegraphics[width=7cm]{Figure1b} \caption{Resonant photogeneration of electron-positron pairs on nuclei in the field of a plane electromagnetic wave.} \label{figure1} \end{figure} \unskip \begin{equation}\label{eq6} \tilde q^2_{\mp}=m^2_*,\ \tilde q_{\mp}=k_i-\tilde p_{\pm}+rk. \end{equation} Here $\tilde q_-$ and $\tilde q_+$ are the 4-quasimomenta of the intermediate electron (for channel A) and the intermediate positron (for channel B), and $m_*$ is the effective mass of the electron (positron) in the field of a circularly polarized wave \cite{23,36}: \begin{equation}\label{eq7} \tilde p_{\pm}=p_{\pm}+\eta^2 \frac{m^2}{2\left( kp_{\pm} \right)}k,\quad \tilde q_{\mp}=q_{\mp}+\eta^2 \frac {m^2}{2\left(kq_{\mp} \right)}k. \end{equation} \begin{equation}\label{eq8} \tilde p^2_{\pm}=m^2_*,\quad m_*=m\sqrt{1+\eta^2}. \end{equation} In expressions (\ref{eq6})-(\ref{eq7}), $k=\left(\omega, \mathbf{k} \right)$ is the 4-momentum of the external-field photon and $p_{\pm}=\left(E_{\pm},\mathbf{p}_{\pm}\right)$ is the 4-momentum of the positron (electron). Such behavior is caused by the quasi-discrete energy spectrum of a fermion propagating in the plane electromagnetic wave. Due to this fact, the resonant second-order process may be interpreted as reducing to two successive first-order processes in the fine-structure constant (see Figure~\ref{figure1}). In this paper, we examine the case of high energies of the initial gamma quantum (\ref{eq2}).
Moreover, we confine ourselves to the configuration in which all produced ultrarelativistic particles propagate within a narrow cone along the direction of the initial gamma quantum. Additionally, we require that the directions of propagation of the initial gamma quantum and the external wave do not coincide; otherwise, resonances are impossible \cite{29,30,36}: \begin{eqnarray}\label{eq9} \theta_{i\pm} =\measuredangle \left( \mathbf{k_i},\mathbf p_{\pm} \right)\ll 1,\quad & \overline{\theta}_{\pm} =\measuredangle \left( \mathbf p_-,\mathbf p_+ \right) \ll 1, \nonumber \\ \theta_i =\measuredangle \left( \mathbf{k_i},\mathbf k \right)\sim 1,\quad & \theta_{\pm}=\measuredangle \left( \mathbf k,\mathbf p_{\pm} \right)\sim 1. \end{eqnarray} In this paper, we will consider initial gamma-quantum energies $\omega_i \lesssim 10^3\ \mbox{GeV}$, as well as a wide range of wave photon energies $\left(1\ \mbox{eV} \lesssim \omega \lesssim 10^4\ \mbox{eV}\right)$. At the same time, we will consider electromagnetic wave intensities significantly less than the critical Schwinger intensity $\left(I\ll I_* \sim 10^{29}\ \mbox{Wcm}^{-2}\right)$. We determine the resonant energy of the positron $\left(E_{\eta + \left(r\right)}\right)$ (for channel A, see Figure~\ref{figure1}~A) and of the electron $\left(E_{\eta - \left(r\right)}\right)$ (for channel B, see Figure~\ref{figure1}~B). We take into account the relations (\ref{eq9}) in the resonance condition (\ref{eq6}). After simple calculations, we get \cite{36} \begin{equation}\label{eq10} x_{\eta j\left( r \right)}=\frac{\varepsilon_{\eta\left(r\right)}\pm \sqrt{\varepsilon_{\eta\left(r\right)}\left(\varepsilon_{\eta\left(r\right)}-1\right)-\delta^2_{\eta j}}}{2\left(\varepsilon_{\eta\left(r\right)}+\delta^2_{\eta j}\right)}, \quad j=\pm.
\end{equation} Here we denote \begin{equation}\label{eq11} x_{\eta \pm \left( r \right)}=\frac{E_{\eta \pm \left( r \right)}}{\omega_i}, \quad \delta_{\eta \pm} = \frac{\omega_i \theta_{i\pm}}{2m_*}. \end{equation} It can be seen from expression (\ref{eq10}) that there are restrictions on the values of the quantum parameter $\varepsilon_{\eta\left(r\right)}$ and the outgoing angles of the positron (electron): \begin{equation}\label{eq12} \varepsilon_{\eta\left(r\right)}=r\varepsilon_\eta \ge 1, \quad \delta^2_{\eta \pm} \le \delta^2_{\eta \max \left(r\right)}=\varepsilon_{\eta\left(r\right)}\left(\varepsilon_{\eta\left(r\right)}-1\right). \end{equation} It is important to note that the resonant energies of the positron and electron are determined by the corresponding outgoing angle (the ultrarelativistic parameter $\delta^2_{\eta \pm}$ (\ref{eq11})), as well as by the quantum parameter $\varepsilon_{\eta\left(r\right)}$ (\ref{eq3}). Note that the resonant energy spectrum (\ref{eq10}) is essentially discrete, since each number of absorbed laser photons corresponds to its own resonant energy: $r \to E_{\eta \pm \left( r \right)}$ (\ref{eq10}). Note that the first relation in expression (\ref{eq12}), depending on the value of the quantum parameter $\varepsilon_\eta$, can be represented as a condition on the number of wave photons that must be absorbed for the resonant process: \begin{equation}\label{eq13} r \ge r_{\min}=\lceil \varepsilon^{-1}_{\eta} \rceil \quad \text{if} \quad \varepsilon_{\eta} < 1 \ \left(\omega_i<\omega_\eta\right), \end{equation} \begin{equation}\label{eq14} r \ge 1 \quad \text{if} \quad \varepsilon_{\eta} \ge 1 \ \left(\omega_i \ge \omega_\eta\right). \end{equation} Figure~\ref{figure2} shows the resonant energy of a positron (for channel A) or an electron (for channel B) as a function of the square of its outgoing angle for a fixed number of absorbed photons of the wave (\ref{eq10}).
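The structure of the spectrum (\ref{eq10}) with the restriction (\ref{eq12}) can be checked numerically. The sketch below uses function names of our own and takes $\varepsilon_\eta = 1.145$ from the X-ray example discussed later in the text:

```python
import math

def x_res(eps_r, delta2, sign=+1):
    """Normalized resonant energy E/omega_i, eq. (10); sign = +1 / -1
    selects the high- / low-energy branch."""
    disc = eps_r * (eps_r - 1.0) - delta2
    if disc < 0.0:
        raise ValueError("outgoing angle exceeds delta2_max of eq. (12)")
    return (eps_r + sign * math.sqrt(disc)) / (2.0 * (eps_r + delta2))

eps_eta = 1.145                      # omega_i slightly above omega_eta
for r in (2, 3):
    eps_r = r * eps_eta
    d2max = eps_r * (eps_r - 1.0)    # maximum outgoing angle, eq. (12)
    # the two branches merge at the maximum outgoing angle
    assert abs(x_res(eps_r, d2max, +1) - x_res(eps_r, d2max, -1)) < 1e-12
    assert 0.0 < x_res(eps_r, 0.1, -1) < x_res(eps_r, 0.1, +1) < 1.0
# at a fixed angle, the maximum resonant energy grows with r (cf. Figure 2)
assert x_res(3 * eps_eta, 0.1, +1) > x_res(2 * eps_eta, 0.1, +1)
```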
\begin{figure}[H] \centering \includegraphics[width=10cm]{Figure2.eps} \caption{Resonant positron (for channel A) and electron (for channel B) energies as functions of the corresponding outgoing angles (at fixed $\omega_i = 60 \mbox{GeV}, \ \omega = 10 \mbox{eV}, \ \theta_i = \pi$). The solid lines represent high-energy solutions, the dashed lines stand for low-energy solutions. Curves 1 and 2 are plotted for $r=2, \ 3$, respectively.} \label{figure2} \end{figure} Solid lines correspond to the maximum resonant energy (the "+" sign before the square root in relation (\ref{eq10})). Dashed lines correspond to the minimum resonant energy (the "-" sign before the square root in relation (\ref{eq10})). From this figure it can be seen that, with an increase in the number of absorbed photons of the wave, the maximum resonant energy (at a fixed outgoing angle) increases, as does the maximum outgoing angle. We present the values of the characteristic energy of the process $\omega_\eta$ (\ref{eq4}) for various frequencies and intensities of the external electromagnetic wave. Thus, for the case of a flux of gamma quanta moving towards the direction of propagation of the electromagnetic wave $\left(\theta_i=\pi\right)$, we obtain: \begin{equation}\label{eq15} \omega_\eta \approx \begin{cases} 523.9\mbox{GeV}, \mbox{if} \ \omega=1 \mbox{eV}, \ \ \ \ \ I=1.861\cdot 10^{18}\mbox{Wcm}^{-2}; \\ 52.39\mbox{GeV}, \mbox{if} \ \omega=10 \mbox{eV}, \ \ \ I=1.861\cdot 10^{20}\mbox{Wcm}^{-2}; \\ 5.239\mbox{GeV}, \mbox{if} \ \omega=0.1\mbox{keV}, I=1.861\cdot 10^{22}\mbox{Wcm}^{-2}; \\ 1.31\mbox{GeV}, \ \ \mbox{if} \ \omega=1 \mbox{keV}, \ \ \ I=7.452\cdot 10^{24}\mbox{Wcm}^{-2}; \\ 0.26\mbox{GeV}, \ \ \mbox{if} \ \omega=10 \mbox{keV}, \ I=1.675\cdot 10^{27}\mbox{Wcm}^{-2}. \\ \end{cases}.
\end{equation} From this it can be seen that in the optical frequency domain $\left(\omega \sim 1 \mbox{eV}\right)$, the minimum characteristic energy of the process is of the order of $\sim 500 \mbox{GeV}$ $\left(\eta \lesssim 1\right)$. With an increase in the frequency of the external electromagnetic wave, the characteristic energy of the process decreases. The resonant process of photogeneration of pairs for optical laser frequencies in the region (\ref{eq13}) was studied in detail in the article \cite{36}. Here, within the framework of relation (\ref{eq14}), we will study the case of very high energies of the initial gamma quanta (\ref{eq5}). Because of this, for not very large outgoing angles, the resonant energies of the positron and electron (\ref{eq10}) take the form: \begin{equation}\label{eq16} x_{\eta \pm \left( r \right)} \approx 1-\frac{\left(1+4\delta^2_{\eta \pm}\right)}{4\varepsilon_{\eta \left(r\right)}} \approx 1, \quad \left(\delta^2_{\eta \pm}\ll \varepsilon_{\eta \left(r\right)}, \ \varepsilon_{\eta \left(r\right)}\gg 1 \right). \end{equation} Here we have taken the maximum energy of the positron (electron). From this it can be seen that one of the possible values of the positron (electron) energy is close to the energy of the initial gamma quantum. \section{Maximum resonant cross section of the PGP in a strong field} It is important to note that the resonant energy of the electron-positron pair is determined for channel A by the positron outgoing angle, and for channel B by the electron outgoing angle. In addition, channels A and B of the resonant PGP process do not interfere with each other. Because of this, the resonant differential cross section for channel A can be integrated over the electron outgoing angles, and for channel B over the positron outgoing angles.
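The asymptotic form (\ref{eq16}) can be checked against the exact high-energy branch of (\ref{eq10}); the sketch below (function names are ours) confirms agreement to $O(1/\varepsilon^2_{\eta(r)})$ for large $\varepsilon_{\eta(r)}$:

```python
import math

def x_exact(eps_r, delta2):
    # high-energy branch of eq. (10)
    return (eps_r + math.sqrt(eps_r * (eps_r - 1.0) - delta2)) / (2.0 * (eps_r + delta2))

def x_asym(eps_r, delta2):
    # asymptotic form, eq. (16)
    return 1.0 - (1.0 + 4.0 * delta2) / (4.0 * eps_r)

# for eps_r >> 1 and delta2 << eps_r the two agree to O(1/eps_r^2),
# and the resonant energy is close to omega_i
for eps_r in (50.0, 200.0):
    for d2 in (0.0, 0.5, 1.0):
        assert abs(x_exact(eps_r, d2) - x_asym(eps_r, d2)) < 2.0 / eps_r**2
        assert x_exact(eps_r, d2) > 0.9
```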
In the article \cite{36}, a general relativistic expression was obtained for the resonant differential cross section of the PGP process in the field of a strong electromagnetic wave with intensities up to $10^{27} \mbox{Wcm}^{-2}$. Integrations were carried out over the energies of the electron (positron), as well as over the outgoing angles of the electron (for channel A) or positron (for channel B), on which the resonant energy of the positron (channel A) or electron (channel B) does not depend. Moreover, the integration over the outgoing angles was carried out in a special kinematic region, in which small relativistic corrections of the order $\left(m_*/\omega_i\right)^2 \ll 1$ in the momentum transferred to the nucleus are taken into account \cite{32,33,34,35,36,37,38,55}. It is this integration that leads to the appearance of a large parameter of the order $\left(\omega_i/m_*\right)^2 \gg 1$ in the resonant differential cross section \cite{32,33,34,35,36,37,38}. We also assume that the flux of initial gamma quanta is directed opposite to the direction of propagation of the electromagnetic wave. Taking this into account, the resonant differential cross section of the PGP with simultaneous registration of the energy and outgoing angle of the positron (sign "+" in equation (\ref{eq17}), channel A) or electron (sign "-" in equation (\ref{eq17}), channel B) can be represented as follows: \begin{equation}\label{eq17} R^{\max}_{\eta\pm\left(r\right)}=\frac{d\sigma^{\max}_{\eta\pm\left(r \right)}}{dx_{\eta\pm\left(r\right)}d\delta^2_{\eta \pm}}=\left(Z^2\alpha r^2_e\right)c_{\eta}\mathrm{H}_{\eta\pm\left(r \right)}.
\end{equation} Here $\alpha $ is the fine structure constant, $Z$ is the charge of the nucleus, $r_e$ is the classical radius of the electron, $\mathrm{H}_{\eta \pm \left( r \right)}$ are functions that determine the energy spectrum and angular distribution of a positron or electron: \begin{equation}\label{eq18} \mathrm{H}_{\eta \pm \left( r \right)}=\frac{x^3_{\eta\pm\left(r\right)}\left(1-x_{\eta\pm\left(r\right)}\right)^3}{\rho^2_{\eta\pm\left(r\right)}}P\left(u_{\eta\pm\left(r\right)},\varepsilon_{\eta \left(r\right)}\right), \end{equation} \begin{equation}\label{eq19} \rho_{\eta\pm\left(r\right)}=x^2_{\eta\pm\left(r\right)}\delta^2_{i \pm}+\frac{1}{4\left(1+\eta^2\right)}. \end{equation} and the magnitude of the $c_\eta$ coefficient is determined by the small transmitted momenta of the order $\left(m_*/\omega_i\right)^2 \ll1$, as well as the resonance width \begin{equation}\label{eq20} c_\eta=2\left[\frac{2\pi\omega_i}{\alpha m_* \mathrm{K}\left(\varepsilon_{\eta}\right)}\right]^2\gg1. \end{equation} Here the $\mathrm{K}\left(\varepsilon_{\eta}\right)$ function is determined by the resonance width (the full probability of the external field-stimulated Compton effect) and has the form \cite{23}: \begin{eqnarray}\label{eq21} \mathrm{K} \left(\varepsilon_\eta \right)=\sum_{r=1}^\infty \mathrm K_r \left(\varepsilon_\eta \right),\ \mathrm K_r \left(\varepsilon_\eta \right)=\int\limits_0^{\varepsilon_{\eta \left( r \right)}} \frac{du}{\left(1+u\right)^2} K \left(u, \varepsilon_{\eta \left( r \right)} \right). \end{eqnarray} \begin{eqnarray}\label{eq22} K\left(u, \varepsilon_{\eta \left( r \right)} \right)=-4J^2_r\left( \gamma_{\eta\left( r \right)} \right)+\eta^2 \left(2+\frac{u^2}{1+u} \right) \left( J^2_{r+1}+J^2_{r-1}-2J^2_r \right), \end{eqnarray} \begin{equation}\label{eq23} \gamma_{\eta \left( r \right)}=2r\frac{\eta }{\sqrt{1+\eta ^2}}\sqrt{\frac{u}{\varepsilon_{\eta \left( r \right)}}\left( 1-\frac{u}{\varepsilon_{\eta \left( r \right)}} \right)}. 
\end{equation} In expression (\ref{eq18}) the $P\left(u_{\eta\pm\left(r\right)},\varepsilon_{\eta \left( r \right)}\right)$ functions are determined by the probability (per unit of time) of the external field-stimulated Breit-Wheeler process \cite{23}: \begin{eqnarray}\label{eq24} P\left(u_{\eta\pm\left(r\right)},\varepsilon_{\eta \left( r \right)}\right)=J^2_r\left( \gamma_{\eta \pm \left(r\right)} \right)+\eta^2 \left(2u_{\eta\pm\left(r\right)}-1\right)\left[\left(\frac{r^2}{\gamma^2_{\eta \pm \left(r\right)}}-1\right)J^2_r+\frac{1}{4}\left(J_{r-1}-J_{r+1}\right)^2\right]. \end{eqnarray} Here, the relativistically invariant parameter $u_{\eta\pm\left(r\right)}$ and the arguments of the Bessel functions $\gamma_{\eta \pm\left(r\right)}$ have the form: \begin{equation}\label{eq25} u_{\eta \pm \left( r \right)}\approx \frac{1}{4x_{\eta \pm \left( r \right)}\left(1-x_{\eta \pm \left( r \right)}\right)}. \end{equation} \begin{eqnarray}\label{eq26} \gamma_{\eta \pm \left( r \right)}=2r\frac{\eta }{\sqrt{1+\eta ^2}}\sqrt{\frac{u_{\eta \pm \left( r \right)}}{\varepsilon_{\eta \left( r \right)}}\left( 1-\frac{u_{\eta \pm \left( r \right)}}{\varepsilon_{\eta \left( r \right)}} \right)}=4r\frac{\eta }{\sqrt{1+\eta ^2}}\frac{x_{\eta \pm \left( r \right)}\delta_{i \pm}}{\left(1+4x^2_{\eta \pm \left( r \right)}\delta^2_{i \pm}\right)}. \end{eqnarray} Note that the right part of the expression (\ref{eq26}) for the argument of the Bessel functions is obtained by taking into account the relations (\ref{eq10}) and (\ref{eq25}). These resonant differential cross sections (\ref{eq17}) were studied in detail for the case when the energy of the initial gamma quanta did not exceed the characteristic energy of the process (\ref{eq13}). However, the most interesting case when the energy of the initial gamma quanta exceeds the characteristic energy of the process (\ref{eq14}) has not been studied. 
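The Breit-Wheeler factor (\ref{eq24}) with (\ref{eq25})-(\ref{eq26}) can be evaluated with a stdlib-only quadrature for the Bessel functions. The following is an illustrative sketch (function names and sample arguments are ours, and it is not the code used for the figures):

```python
import math

def bessel_j(n, x, steps=2000):
    """J_n(x) via the integral representation
    (1/pi) * int_0^pi cos(n t - x sin t) dt (trapezoidal rule)."""
    h = math.pi / steps
    f = lambda t: math.cos(n * t - x * math.sin(t))
    s = 0.5 * (f(0.0) + f(math.pi))
    for k in range(1, steps):
        s += f(k * h)
    return s * h / math.pi

def P_bw(r, eta, x, delta2):
    """Sketch of the Breit-Wheeler factor, eqs. (24)-(26); requires delta2 > 0.
    Here x = E/omega_i is the normalized energy, delta2 the squared angle parameter."""
    u = 1.0 / (4.0 * x * (1.0 - x))                               # eq. (25)
    g = (4.0 * r * (eta / math.sqrt(1.0 + eta**2))
         * x * math.sqrt(delta2) / (1.0 + 4.0 * x**2 * delta2))   # eq. (26)
    jr, jm, jp = bessel_j(r, g), bessel_j(r - 1, g), bessel_j(r + 1, g)
    return (jr**2 + eta**2 * (2.0 * u - 1.0)
            * (((r / g)**2 - 1.0) * jr**2 + 0.25 * (jm - jp)**2))  # eq. (24)

# quadrature sanity checks against known Bessel values
assert abs(bessel_j(0, 0.0) - 1.0) < 1e-9
assert abs(bessel_j(1, 1.0) - 0.4400505857) < 1e-6
# sample evaluation at r = 1, eta = 1, x = 1/2
assert P_bw(1, 1.0, 0.5, 0.5) > 0.0
```

For small arguments $J_1(\gamma) \approx \gamma/2$, so the $r=1$ term survives even at small outgoing angles, consistent with the one-photon channel dominating in the regime (\ref{eq14}).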
Here we consider the maximum resonant differential cross section (\ref{eq17}) for the case when the energy of the initial gamma quanta exceeds the characteristic energy of the process (\ref{eq14}). It is of particular interest to consider the case (\ref{eq5}) when the energy of the initial gamma quanta significantly exceeds the characteristic energy of the process $\left(\omega_i \gg \omega_\eta \right)$. In this case, the resonant energy of the positron or electron is close to the energy of the initial gamma quanta (see relation (\ref{eq16})). Given this, after simple transformations, the maximum resonant differential cross sections of the PGP (\ref{eq17})-(\ref{eq25}) are significantly simplified and take the form \begin{equation}\label{eq27} R^{\max}_{\eta\pm\left(r\right)}=\frac{d\sigma^{\max}_{\eta\pm\left(r \right)}}{dx_{\eta\pm\left(r\right)}d\delta^2_{\eta \pm}}=\left(Z^2\alpha r^2_e\right)b_{\eta}\mathrm{\Phi}_{\eta\pm\left(r \right)}. \end{equation} Here the $\mathrm{\Phi}_{\eta\pm\left(r \right)}$ functions determine the spectral-angular distribution of the resonant PGP cross section for channels A and B: \begin{eqnarray} \label{eq28} \mathrm{\Phi}_{\eta\pm\left(r \right)}=\frac{\left(1+4\delta^2_{\eta \pm}\right)^3}{r^3}\left[\delta^2_{i \pm}+\frac{1}{4\left(1+\eta^2\right)}\right]^{-2}P\left(\delta^2_{\eta \pm},\varepsilon_{\eta \left( r \right)}\right) \end{eqnarray} and $b_\eta$ is a coefficient determined by the parameters of the laser setup: \begin{equation}\label{eq29} b_\eta=\frac{1}{2\varepsilon_{\eta}}\left(\frac{\pi\omega_\eta}{2 \alpha m_* \mathrm{K}\left(\varepsilon_{\eta}\right)}\right)^2.
\end{equation} In expression (\ref{eq28}), the function (\ref{eq24}) takes the form: \begin{eqnarray} \label{eq30} P\left(\delta^2_{\eta \pm},\varepsilon_{\eta \left( r \right)}\right)=J^2_r\left( \gamma_{\eta \pm \left(r\right)} \right)+\eta^2 \left[\frac{2\varepsilon_{\eta \left( r \right)}}{\left(1+4\delta^2_{\eta \pm}\right)}-1\right]\left[\left(\frac{r^2}{\gamma^2_{\eta \pm \left(r\right)}}-1\right)J^2_r+\frac{1}{4}\left(J_{r-1}-J_{r+1}\right)^2\right], \end{eqnarray} \begin{eqnarray}\label{eq31} \gamma_{\eta \pm \left( r \right)}=4r\frac{\eta }{\sqrt{1+\eta ^2}}\frac{\delta_{\eta \pm}}{\left(1+4\delta^2_{\eta \pm}\right)}. \end{eqnarray} It is worth noting that the obtained expressions (\ref{eq17}) and (\ref{eq27}) hold for the case of a single initial gamma quantum. To obtain the corresponding relations for a flux of gamma quanta, one has to multiply these equations by the concentration $n_\gamma$. \section{Main results} Let the flux of initial gamma quanta propagate towards the external electromagnetic wave $\left(\theta_i=\pi\right)$. Let us choose the energy of the initial gamma quanta $\omega_i=60\mbox{GeV}$. Then, for the characteristic energies $\omega_\eta$ (\ref{eq15}), the quantum parameter $\varepsilon_{\eta}$ (\ref{eq3}) takes the corresponding values: $\varepsilon_{\eta}=0.115$; $1.145$; $11.453$; $45.812$; $229.057$. Here, the first case corresponds to the optical frequencies of the laser $\left(\omega=1\mbox{eV}, \ I=1.863\cdot 10^{18} \mbox{Wcm}^{-2}\right)$ and meets condition (\ref{eq13}) with $r\ge r_{\min}=9$. The remaining cases, for the X-ray frequencies of the external wave, meet condition (\ref{eq14}) with $r\ge 1 \left(\omega_i\ge \omega_\eta\right)$. Moreover, the last three cases meet the condition $\varepsilon_{\eta}\gg 1$ $\left(\omega_i\gg \omega_\eta\right)$ (see Eq. (\ref{eq16})).
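The quoted values of $\varepsilon_\eta$ can be cross-checked against eq. (\ref{eq15}) and the definition (\ref{eq4}). The sketch below (with an assumed $\eta \approx 1$, which is consistent with the first rows of the table) verifies both to about the 1\% level:

```python
import math

# Consistency check of eq. (15) and the eps_eta values quoted in the text
# (omega_i = 60 GeV); all energies in GeV.
omega_i = 60.0
omega_eta_vals = [523.9, 52.39, 5.239, 1.31, 0.26]
eps_quoted = [0.115, 1.145, 11.453, 45.812, 229.057]
for we, q in zip(omega_eta_vals, eps_quoted):
    assert abs(omega_i / we - q) / q < 0.01   # agreement to ~1%

m_e = 0.511e-3    # electron rest energy [GeV]

def omega_eta_formula(omega_eV, eta, theta_i=math.pi):
    """Characteristic energy, eq. (4), with hbar = c = 1; the value of eta
    is an assumption on our part."""
    omega = omega_eV * 1.0e-9     # photon energy in GeV
    return m_e**2 * (1.0 + eta**2) / (omega * math.sin(theta_i / 2.0)**2)

# first row of eq. (15): omega = 1 eV gives omega_eta ~ 524 GeV for eta ~ 1
assert abs(omega_eta_formula(1.0, 1.0) - 523.9) / 523.9 < 0.01
```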
Note that when plotting the resonant differential cross sections (\ref{eq17}), (\ref{eq27}), the maximum energy of the positron (electron) was selected (the "+" sign before the square root in expression (\ref{eq10})). It is these positron (electron) energies that make the main contribution to the resonant differential cross section. Figure~\ref{figure3} shows the dependence of the maximum resonant differential cross section (\ref{eq17}) on the square of the positron (electron) outgoing angle for a fixed number of absorbed photons of the wave in the optical frequency range $\left(\omega=1\mbox{eV}, \ I=1.863\cdot 10^{18} \mbox{Wcm}^{-2}\right)$ under conditions (\ref{eq13}) with $r_{\min}=9$. It is important to note that in this case the maximum value of the resonant differential cross section takes place at the number of absorbed photons $r=13$ and equals $R^{\max}_{\eta \pm \left(13 \right)} \approx 10^{13} \left( Z^2\alpha r^2_e \right)$. At the same time, the resonant energies of the positron and electron are approximately equal to half the energy of the initial gamma quanta (see Table~\ref{tab1}). \begin{figure}[H] \centering \includegraphics[width=10cm]{Figure3.eps} \caption{The dependence of the maximum resonant differential cross section (in $Z^2\alpha r^2_e$ units) (\ref{eq17}) on the square of the outgoing angle of the positron (for channel A) or electron (for channel B) for a fixed number of absorbed photons of the wave when $r_{\min}=9$ (\ref{eq13}). The curves are constructed for the maximum energy of the positron (electron) (in relation (\ref{eq10}), the sign "+" is chosen before the square root). The energy of the initial gamma quanta is equal to $\omega_i=60\ \text{GeV}$. The direction of propagation, frequency and intensity of the wave are equal to $\theta_i=\pi$, $\omega=1\ \text{eV}$, $I=1.863\cdot 10^{18} \mbox{Wcm}^{-2}$.} \label{figure3} \end{figure} \begin{table}[h!]
\caption{The values of the resonant energies of the electron-positron pair and the corresponding square outgoing angle of the positron (electron) for the maximum value of the maximum resonant cross section (see Figure~\ref{figure3}). The frequency and intensity of the laser wave are $\omega=1\ \text{eV}$ and $I=1.863\cdot 10^{18} \mbox{Wcm}^{-2}$. The energy of the initial gamma quanta is $\omega_i=60\ \text{GeV}$.\label{tab1}} \centering \setlength{\extrarowheight}{2mm} \begin{tabular}{|c|c|c|c|c|} \hline $r$ & $\delta^2_{\eta \pm}$ & $R^{\max}_{\eta \pm \left( r \right)}$ & $E_{\pm \left( r \right)}$ & $E_{\mp \left( r \right)}$ \\\hline & & $\left( Z^2\alpha r^2_e \right)$ & $\left(\mbox{GeV}\right)$ & $\left(\mbox{GeV}\right)$ \\ \hline 12 & 0.374 & $9.623\times 10^{12}$ & 30.00000 & 30.00000\\\hline 13 & 0.488 & $1.080\times 10^{13}$ & 30.01535 & 29.98465 \\\hline 14 & 0.603 & $9.356\times 10^{12}$ & 30.00000 & 30.00000 \\\hline \end{tabular} \setlength{\extrarowheight}{0mm} \end{table} Figure~\ref{figure4} shows the dependences of the maximum resonant differential cross section (\ref{eq17}) on the square of the positron (electron) outgoing angle for a fixed number of absorbed photons of the wave $r=1,2,3$ in the X-ray frequency range $\left(\omega=10\mbox{eV}, \ I=1.863\cdot 10^{20} \mbox{Wcm}^{-2}\right)$ under conditions (\ref{eq14}). At the same time, the energy of the initial gamma quanta slightly exceeds the characteristic energy of the process $\left(\varepsilon_{\eta}=1.145\right)$. It is important to note that in this case, the maximum value of the resonant differential cross section occurs when one photon of the wave is absorbed at $\delta^2_{\eta \pm}=0$ and is of the order of magnitude $R^{\max}_{\eta \pm \left(1 \right)} \sim 10^{15} \left( Z^2\alpha r^2_e \right)$. 
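Because the net energy absorbed from the wave ($r\omega$, at most tens of keV here) and the nuclear recoil energy are negligible compared with $\omega_i$, the pair energies in the tables must sum to the initial gamma-quantum energy; e.g., for the $r=1$ row of Table~\ref{tab2}:

```latex
% Energy balance check for r = 1 (Table 2), neglecting the absorbed
% wave energy r*omega ~ 10 eV and the nuclear recoil:
\begin{equation*}
  E_{+\left(1\right)} + E_{-\left(1\right)}
  = 40.685\ \mbox{GeV} + 19.315\ \mbox{GeV}
  = 60.000\ \mbox{GeV} = \omega_i .
\end{equation*}
```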
With an increase in the number of absorbed photons of the wave, the peak of the maximum value of the resonant differential cross section shifts towards large outgoing angles of the positron (electron). At the same time, the resonant differential cross-section decreases quite quickly. Thus, the ratio of the resonant cross sections for three and one absorbed photons of the wave has the order of magnitude: $R^{\max}_{\eta \pm \left(3 \right)}/R^{\max}_{\eta \pm \left(1 \right)}\sim 10^{-2}$. Note also that the resonant energy of a positron (for channel A) or an electron (for channel B) increases from $40.685\ \mbox{GeV}$ for $r=1$ to $52.727\ \mbox{GeV}$ for $r=3$ with an increase in the number of absorbed photons of the wave (see Table~\ref{tab2}). \begin{figure}[H] \centering \includegraphics[width=10cm]{Figure4.eps} \caption{The dependence of the maximum resonant differential cross-section (in $Z^2\alpha r^2_e$ units) (\ref{eq17}) on the square of the outgoing angle of the positron (for channel A) or electron (for channel B) for a fixed number of absorbed photons of the wave in the conditions (\ref{eq14}). The curves are constructed for the maximum energy of the positron (electron) (in the ratio (\ref{eq10}), the sign "+" is chosen before the square root). The energy of the initial gamma quanta is equal to $\omega_i=60\ \text{GeV}$. The direction of propagation, frequency and intensity of the wave are equal to $\theta_i=\pi$, $\omega=10\ \text{eV}$, $I=1.863\cdot 10^{20} \mbox{Wcm}^{-2}$.} \label{figure4} \end{figure} \begin{table}[h!] \caption{The values of the resonant energies of the electron-positron pair and the corresponding square outgoing angle of the positron (electron) for the maximum value of the maximum resonant cross section (see Figure~\ref{figure4}). The frequency and intensity of the laser wave are $\omega=10\ \text{eV}$ and $I=1.863\cdot 10^{20} \mbox{Wcm}^{-2}$.
The energy of the initial gamma quanta is $\omega_i=60\ \text{GeV}$.\label{tab2}} \centering \setlength{\extrarowheight}{2mm} \begin{tabular}{|c|c|c|c|c|} \hline $r$ & $\delta^2_{\eta \pm}$ & $R^{\max}_{\eta \pm \left( r \right)}$ & $E_{\pm \left( r \right)}$ & $E_{\mp \left( r \right)}$ \\\hline & & $\left( Z^2\alpha r^2_e \right)$ & $\left(\mbox{GeV}\right)$ & $\left(\mbox{GeV}\right)$ \\ \hline 1 & 0 & $3.937\times 10^{15}$ & 40.685 & 19.315\\\hline 2 & 0.088 & $1.811\times 10^{14}$ & 50.238 & 9.762 \\\hline 3 & 0.150 & $3.478\times 10^{13}$ & 52.727 & 7.273 \\\hline \end{tabular} \setlength{\extrarowheight}{0mm} \end{table} Figures~\ref{figure5}, \ref{figure6}, \ref{figure7} show the dependences of the maximum resonant differential cross-section (\ref{eq17}), (\ref{eq27}) on the square of the positron (electron) outgoing angle for a fixed number of absorbed photons of the wave $\left(r=1,2,3\right)$ for X-ray frequencies $\omega=0.1\ \text{keV}$, $1\ \text{keV}$, $10\ \text{keV}$ and corresponding wave intensities (see expression (\ref{eq15})). These graphs are constructed under conditions when the energy of the initial gamma quanta significantly exceeds the characteristic energy of the process: $\varepsilon_{\eta}\approx11.45$, $45.81$, $229.06$ (see the ratios (\ref{eq14}), (\ref{eq15}), (\ref{eq16})). Tables~\ref{tab3}, \ref{tab4}, \ref{tab5} show the outgoing angles, the resonant energies of the positron (electron), as well as the values of the resonant differential cross sections corresponding to the maxima of the distributions in Figures~\ref{figure5}, \ref{figure6}, \ref{figure7}. From these figures and tables it can be seen that the maximum value of the resonant differential cross section is realized with one absorbed photon at $\delta^2_{\eta \pm}=0$. With an increase in the number of absorbed photons, as well as the intensity of the wave, the value of the maximum resonant differential cross section decreases.
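The order-of-magnitude suppression between three and one absorbed photons quoted above can be checked directly from the entries of Table~\ref{tab2}:

```latex
% Ratio of resonant cross sections for r = 3 and r = 1 (Table 2):
\begin{equation*}
  \frac{R^{\max}_{\eta \pm \left(3 \right)}}{R^{\max}_{\eta \pm \left(1 \right)}}
  = \frac{3.478\times 10^{13}}{3.937\times 10^{15}}
  \approx 8.8\times 10^{-3} \sim 10^{-2} .
\end{equation*}
```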
\begin{figure}[H] \centering \includegraphics[width=10cm]{Figure5.eps} \caption{The dependence of the maximum resonant differential cross-section (in $Z^2\alpha r^2_e$ units) (\ref{eq17}), (\ref{eq27}) on the square of the outgoing angle of the positron (for channel A) or electron (for channel B) for a fixed number of absorbed photons of the wave in the conditions (\ref{eq14}), (\ref{eq16}). The curves are constructed for the maximum energy of the positron (electron) (in the ratio (\ref{eq10}), the sign "+" is chosen before the square root). The energy of the initial gamma quanta is equal to $\omega_i=60\ \text{GeV}$. The direction of propagation, frequency and intensity of the wave are equal to $\theta_i=\pi$, $\omega=0.1\ \text{keV}$, $I=1.863\cdot 10^{22} \mbox{Wcm}^{-2}$.} \label{figure5} \end{figure} \begin{table}[h!] \caption{The values of the resonant energies of the electron-positron pair and the corresponding square outgoing angle of the positron (electron) for the maximum value of the maximum resonant cross section (see Figure~\ref{figure5}). The frequency and intensity of the laser wave are $\omega=0.1\ \text{keV}$ and $I=1.863\cdot 10^{22} \mbox{Wcm}^{-2}$.
The energy of the initial gamma quanta is $\omega_i=60\ \text{GeV}$.\label{tab3}} \centering \setlength{\extrarowheight}{2mm} \begin{tabular}{|c|c|c|c|c|} \hline $r$ & $\delta^2_{\eta \pm}$ & $R^{\max}_{\eta \pm \left( r \right)}$ & $E_{\pm \left( r \right)}$ & $E_{\mp \left( r \right)}$ \\\hline & & $\left( Z^2\alpha r^2_e \right)$ & $\left(\mbox{GeV}\right)$ & $\left(\mbox{GeV}\right)$ \\ \hline 1 & 0 & $8.754\times 10^{12}$ & 58.660 & 1.340\\\hline 2 & 0.063 & $2.952\times 10^{11}$ & 59.173 & 0.827 \\\hline 3 & 0.118 & $5.221\times 10^{10}$ & 59.353 & 0.647 \\\hline \end{tabular} \setlength{\extrarowheight}{0mm} \end{table} So, for the wave intensities $I=1.863\cdot 10^{22}$, $7.452\cdot 10^{24}$, $1.675\cdot 10^{27} \ \mbox{Wcm}^{-2}$, as well as for the number of absorbed photons $r=1$ and $r=2$, the value of the maximum resonant differential cross section, respectively, is equal to $R^{\max}_{\eta \pm \left( 1 \right)}\approx 8.75\cdot 10^{12}$, $1.56\cdot 10^{11}$, $3.03\cdot 10^9 \left( Z^2\alpha r^2_e \right)$ and $R^{\max}_{\eta \pm \left( 2 \right)}\approx 2.95\cdot 10^{11}$, $3.66\cdot 10^9$, $4.73\cdot 10^7 \left( Z^2\alpha r^2_e \right)$ (see Tables~\ref{tab3}, \ref{tab4}, \ref{tab5}). It is very important to note that with an increase in the parameter $\varepsilon_{\eta}$ (\ref{eq3}), the resonant energy of the positron (for channel A) or electron (for channel B) tends ever closer to the energy of the initial gamma quanta $\omega_i=60\ \mbox{GeV}$ (\ref{eq16}).
So, for the wave intensities $I=1.863\cdot 10^{22}$, $7.452\cdot 10^{24}$, $1.675\cdot 10^{27} \ \mbox{Wcm}^{-2}$, as well as for the number of absorbed photons $r=1$ and $r=2$, the magnitude of the resonant energy of the positron (electron) at the maximum of the distribution of the resonant differential cross section (see Figures~\ref{figure5}, \ref{figure6}, \ref{figure7}), respectively, is equal to $E_{\eta \pm \left( 1 \right)}\approx 58.660$, $59.671$, $59.934\ \mbox{GeV}$ and $E_{\eta \pm \left( 2 \right)}\approx 59.173$, $59.820$, $59.965\ \mbox{GeV}$ (see Tables~\ref{tab3}, \ref{tab4}, \ref{tab5}). Thus, under conditions (\ref{eq16}), when the energy of gamma quanta significantly exceeds the characteristic energy of the process, it is highly probable to obtain narrowly directed streams of positrons (electrons) with energies close to the energy of the initial gamma quanta. \begin{figure}[H] \centering \includegraphics[width=10cm]{Figure6.eps} \caption{The dependence of the maximum resonant differential cross-section (in $Z^2\alpha r^2_e$ units) (\ref{eq17}), (\ref{eq27}) on the square of the outgoing angle of the positron (for channel A) or electron (for channel B) for a fixed number of absorbed photons of the wave in the conditions (\ref{eq14}), (\ref{eq16}). The curves are constructed for the maximum energy of the positron (electron) (in the ratio (\ref{eq10}), the sign "+" is chosen before the square root). The energy of the initial gamma quanta is equal to $\omega_i=60\ \text{GeV}$. The direction of propagation, frequency and intensity of the wave are equal to $\theta_i=\pi$, $\omega=1\ \text{keV}$, $I=7.452\cdot 10^{24} \mbox{Wcm}^{-2}$.} \label{figure6} \end{figure} \begin{table}[h!] \caption{The values of the resonant energies of the electron-positron pair and the corresponding square outgoing angle of the positron (electron) for the maximum value of the maximum resonant cross section (see Figure~\ref{figure6}).
The frequency and intensity of the laser wave are $\omega=1\ \text{keV}$ and $I=7.452\cdot 10^{24} \mbox{Wcm}^{-2}$. The energy of the initial gamma quanta is $\omega_i=60\ \text{GeV}$.\label{tab4}} \centering \setlength{\extrarowheight}{2mm} \begin{tabular}{|c|c|c|c|c|} \hline $r$ & $\delta^2_{\eta \pm}$ & $R^{\max}_{\eta \pm \left( r \right)}$ & $E_{\pm \left( r \right)}$ & $E_{\mp \left( r \right)}$ \\\hline & & $\left( Z^2\alpha r^2_e \right)$ & $\left(\mbox{GeV}\right)$ & $\left(\mbox{GeV}\right)$ \\ \hline 1 & 0 & $1.564\times 10^{11}$ & 59.671 & 0.329\\\hline 2 & 0.023 & $3.661\times 10^9$ & 59.820 & 0.180 \\\hline 3 & 0.046 & $5.844\times 10^8$ & 59.870 & 0.130 \\\hline \end{tabular} \setlength{\extrarowheight}{0mm} \end{table} \begin{figure}[H] \centering \includegraphics[width=10cm]{Figure7.eps} \caption{The dependence of the maximum resonant differential cross-section (in $Z^2\alpha r^2_e$ units) (\ref{eq17}), (\ref{eq27}) on the square of the outgoing angle of the positron (for channel A) or electron (for channel B) for a fixed number of absorbed photons of the wave in the conditions (\ref{eq14}), (\ref{eq16}). The curves are constructed for the maximum energy of the positron (electron) (in the ratio (\ref{eq10}), the sign "+" is chosen before the square root). The energy of the initial gamma quanta is equal to $\omega_i=60\ \text{GeV}$. The direction of propagation, frequency and intensity of the wave are equal to $\theta_i=\pi$, $\omega=10\ \text{keV}$, $I=1.676\cdot 10^{27} \mbox{Wcm}^{-2}$.} \label{figure7} \end{figure} \begin{table}[h!] \caption{The values of the resonant energies of the electron-positron pair and the corresponding square outgoing angle of the positron (electron) for the maximum value of the maximum resonant cross section (see Figure~\ref{figure7}). The frequency and intensity of the laser wave are $\omega=10\ \text{keV}$ and $I=1.676\cdot 10^{27} \mbox{Wcm}^{-2}$.
The energy of the initial gamma quanta is $\omega_i=60\ \text{GeV}$.\label{tab5}} \centering \setlength{\extrarowheight}{2mm} \begin{tabular}{|c|c|c|c|c|} \hline $r$ & $\delta^2_{\eta \pm}$ & $R^{\max}_{\eta \pm \left( r \right)}$ & $E_{\pm \left( r \right)}$ & $E_{\mp \left( r \right)}$ \\\hline & & $\left( Z^2\alpha r^2_e \right)$ & $\left(\mbox{GeV}\right)$ & $\left(\mbox{GeV}\right)$ \\ \hline 1 & 0 & $3.029\times 10^9$ & 59.934 & 0.066\\\hline 2 & 0.014 & $4.730\times 10^7$ & 59.965 & 0.035 \\\hline 3 & 0.031 & $6.209\times 10^6$ & 59.975 & 0.025 \\\hline \end{tabular} \setlength{\extrarowheight}{0mm} \end{table} \section{Conclusions} The study of the resonant process of generation of ultrarelativistic electron-positron pairs by high-energy gamma quanta in the field of the nucleus and a strong electromagnetic wave showed: \begin{itemize} \item The resonant energy of an electron-positron pair is determined by two parameters: the outgoing angle of a positron (for channel A) or an electron (for channel B), as well as a quantum parameter $\varepsilon_{\eta \left( r \right)}=r\varepsilon_{\eta} \ge 1$ equal to the number of absorbed photons of the wave multiplied by the quantum parameter $\varepsilon_{\eta}$. This parameter is equal to the ratio of the energy of the initial gamma quanta $\left(\omega_i\right)$ to the characteristic energy of the process $\left(\omega_\eta\right)$. The characteristic energy is determined by the parameters of the laser installation: frequency, intensity, as well as the direction of propagation of the electromagnetic wave relative to the momentum of the initial gamma quanta (\ref{eq3}), (\ref{eq4}). \item The magnitude of the quantum parameter significantly affects the probability and energy of the electron-positron pair. So, if the energy of the initial gamma quanta is less than the characteristic energy $\left(\omega_i < \omega_\eta\right)$, then $\varepsilon_{\eta}<1$. 
In this case, there is a minimum number of photons $\left(r \ge r_{\min}=\lceil \varepsilon^{-1}_{\eta} \rceil\right)$, starting from which photons of the wave can be absorbed in the external field-stimulated Breit-Wheeler process. In strong fields, where $r_{\min}\gg1$, the resonant process proceeds with the absorption of a very large number of photons of the wave. \item If the energy of the initial gamma quanta is equal to or exceeds the characteristic energy of the process $\left(\omega_i \ge \omega_\eta\right)$, then the quantum parameter $\varepsilon_{\eta}\ge1$. In this case, the resonant process takes place for the number of absorbed photons of the wave $r\ge1$. Note that the probability of processes with the absorption of a small number of photons of the wave $\left(r\sim1\right)$ significantly exceeds the corresponding probability with the absorption of a large number of photons of the wave $\left(r\gg1\right)$. \item If the energy of the initial gamma quanta significantly exceeds the characteristic energy of the process $\left(\varepsilon_{\eta}\gg1\right)$, then the resonant energy of the positron (for channel A) or electron (for channel B) will be close to the energy of the gamma quanta (see relation (\ref{eq16})). Under these conditions, narrow streams of high-energy positrons (electrons) are generated with a very high probability. So, for the number of absorbed photons $r=1$ and $r=2$, the value of the maximum resonant differential cross section, respectively, is equal to $R^{\max}_{\eta \pm \left( 1 \right)}\approx 8.75\cdot 10^{12}$, $1.56\cdot 10^{11}$, $3.03\cdot 10^9 \left( Z^2\alpha r^2_e \right)$ and $R^{\max}_{\eta \pm \left( 2 \right)}\approx 2.95\cdot 10^{11}$, $3.66\cdot 10^9$, $4.73\cdot 10^7 \left( Z^2\alpha r^2_e \right)$ (see Tables~\ref{tab3}, \ref{tab4}, \ref{tab5}). \end{itemize}
\section{Introduction} In many bio-surveillance and healthcare applications, data sources are measured from many spatial locations repeatedly over time, say, daily, weekly, or monthly. In these applications, we are typically interested in detecting {\it hot-spots}, which are defined as structured outliers that are sparse over the spatial domain but persistent over time. A concrete real-world motivating application is the weekly number of gonorrhea cases from $2006$ to $2018$ for $50$ states in the United States; see also the detailed data description in the next section. From the monitoring viewpoint, there are two kinds of changes: one is the global-level trend, and the other is the local-level outliers. Here we are more interested in detecting the so-called hot-spots, which are local-level outliers with the following two properties: (1) spatial sparsity, i.e., the local changes are sparse over the spatial domain; and (2) temporal persistence, i.e., the local changes last for a reasonably long time period unless one takes some actions. Generally speaking, hot-spot detection can be thought of as detecting sparse anomalies in spatio-temporal data, and there are three different categories of methodologies and approaches in the literature. The first one is the LASSO-based control chart that integrates LASSO estimators for change point detection and declares non-zero components of the LASSO estimators as hot-spots; see \cite{LASSO}, \cite{LassoBased1}, \cite{vsaltyte2011spatial}. Unfortunately, the LASSO-based control chart lacks the ability to separate the local hot-spots from the global trend of the spatio-temporal data. The second category of methods is the dimension-reduction-based control chart, where one monitors the features from PCA or other dimension reduction methods; see \cite{PCA}, \cite{tensorPCA1}, \cite{tensorPCA2}.
The drawback of PCA and other dimension reduction methods is that they fail to detect sparse anomalies and cannot take full advantage of the spatial locations of hot-spots. The third category of anomaly detection methods is the decomposition-based method that uses regularized regression methods to separate the hot-spots from the background event; see \cite{AnomalyInVideo}, \cite{AnomalyInImage}, \cite{SSD}. However, these existing approaches investigate structured image or curve data under the assumption that the hot-spots are independent over the time domain. In this paper, we propose a decomposition-based anomaly detection method for spatio-temporal data when the hot-spots are autoregressive, which is typical for time series data. Our main idea is to represent the raw data as a $3$-dimensional tensor: states, weeks, years. To be more specific, in each year, we observe a $50 \times 52$ data matrix that corresponds to $50$ states and $52$ weeks (we ignore the leap years). Next, we propose to decompose the $3$-dimensional tensor into three components: Smooth global trend, Sparse local hot-spots, and Residuals, and term our proposed decomposition model SSR-Tensor. When fitting the observed raw data to our proposed SSR-Tensor model, we develop a penalized likelihood approach by adding two penalty functions: one is a LASSO-type penalty to guarantee the sparsity of hot-spots, and the other is a fused-LASSO-type penalty for the autoregressive properties of hot-spots or time-series data. By doing so, we are able to (1) detect when the hot-spots occur (i.e., the change point detection problem); and (2) localize where and which type of hot-spots occur (i.e., the spatial localization problem). We would like to acknowledge that much research has been done on modeling and prediction of spatio-temporal data.
Some popular time series models are the AR, MA, and ARMA models, and their parameters can be estimated by the Yule-Walker method \citep{hannan1979determination}, maximum likelihood estimation, or least squares methods \citep{hamilton1994time}. In addition, spatial statistics has also been extensively investigated in its own right; see \citep{early_defination_neighbor,ecology,lan2004landslide,elhorst2014spatial,lots-of-ST-regression} for examples. When one combines time series with spatial statistics, the corresponding spatio-temporal models generally become more complicated; see \citep{ZhuJun,lai2015asymptotically,ST-model-book} for more discussions. \yujie{ In principle, it is possible to represent the spatio-temporal process as a sequence of random vectors ${\bf Y}_{t}$ with weekly observation $t$, where ${\bf Y}$ is a $p$-dimensional vector that characterizes the spatial domain (i.e., spatial dimension $p=50$ in our case study). However, such an approach might not be computationally feasible in the context of hot-spot detection, in which one needs to specify the covariance structure of ${\bf Y}_{t}$, not only over the spatial domain, but also over the time domain. If we wrote all the data into a vector, then the dimension of such a vector would be $50 \times 52 \times 13= 33,800$, and thus the covariance matrix would be of dimension $33,800 \times 33,800,$ which is not computationally feasible. Meanwhile, under our proposed SSR-Tensor model, we essentially conduct a dimension reduction by assuming that such a covariance matrix has a nice sparsity structure, as we reduce the dimensions $50, 52$ and $13$ to much smaller numbers, e.g., an AR(1) model over the week or year dimension, and local correlation over the spatial domain.
} It is useful to point out that while our paper focuses only on $3$-dimensional tensors due to our motivating application in gonorrhea, our proposed SSR-Tensor model can easily be extended to any $d$-dimensional tensor or data with $d \geq 3$, e.g., when we have further information, such as the unemployment rate, economic performance, and so on. As the dimension $d$ increases, we can simply add more corresponding bases, as our proposed model uses \textit{bases} to describe the correlation within each dimension, and utilizes the \textit{tensor product} for the interaction between different dimensions. The capability of extending to high-dimensional data is one of the main advantages of our proposed SSR-Tensor model. Furthermore, our proposed SSR-Tensor model essentially involves a block-wise diagonal covariance matrix, which allows us to develop computationally efficient methodologies by using tensor decomposition algebra; see Section \ref{sec:computational_complexity} for more technical details. The remainder of this paper is as follows. Section \ref{sec:data} discusses and visualizes the gonorrhea dataset, which is used as our motivating example and in our case study. Section \ref{sec:model_whole} presents our proposed SSR-Tensor model, and discusses how to estimate model parameters from observed data. Section \ref{sec:hot-spot_detection} describes how to use our proposed SSR-Tensor model to find hot-spots, both for temporal detection and for spatial localization. \yujie{Efficient numerical optimization algorithms are discussed in Section \ref{sec:estimation}. } Our proposed methods are then validated through extensive simulations in Section \ref{sec:simulation} and a case study on the gonorrhea dataset in Section \ref{sec:case_study}.
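The dimension-counting argument above ($50 \times 52 \times 13 = 33,800$) can be made concrete with a back-of-the-envelope calculation; the byte counts below are our own illustration (assuming 8-byte floats), not figures from the paper:

```python
# Vectorizing the full spatio-temporal panel: 50 states x 52 weeks x 13 years.
n_states, n_weeks, n_years = 50, 52, 13
dim = n_states * n_weeks * n_years
print(dim)  # 33800

# An unstructured covariance matrix over this vector needs dim^2 entries;
# at 8 bytes per float64 that is roughly 9.1 GB, before any estimation cost.
cov_entries = dim ** 2
cov_gb = cov_entries * 8 / 1e9
print(cov_entries, round(cov_gb, 1))  # 1142440000 9.1
```

This is why the SSR-Tensor model instead imposes a sparsity/Kronecker structure rather than estimating the full covariance.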
\section{Data Description} \label{sec:data} \begin{figure}[t] \centering \begin{tabular}{ccc} \includegraphics[width = 0.3\textwidth]{Spatial_Pattern_AllYear_W1} & \includegraphics[width = 0.3\textwidth]{Spatial_Pattern_AllYear_W11} & \includegraphics[width = 0.3\textwidth]{Spatial_Pattern_AllYear_W21} \\ week 1 & week 11 & week 21 \\ \includegraphics[width = 0.3\textwidth]{Spatial_Pattern_AllYear_W31} & \includegraphics[width = 0.3\textwidth]{Spatial_Pattern_AllYear_W41} & \includegraphics[width = 0.3\textwidth]{Spatial_Pattern_AllYear_W51} \\ week 31 & week 41 & week 51 \\ \end{tabular} \caption{The cumulative number of gonorrhea cases at some selected weeks during years 2006-2018. \yujie{The deeper the color, the higher the number of gonorrhea cases.} \label{fig:spatial_pattern_3}} \end{figure} To protect Americans from serious disease, the National Notifiable Diseases Surveillance System (NNDSS) at the Centers for Disease Control and Prevention (CDC) helps public health agencies monitor, control, and prevent about $120$ diseases; see its website \url{https://wwwn.cdc.gov/nndss/infectious-tables.html}. One disease that has received intensive attention in recent years is gonorrhea, due to the possibility of multi-drug resistance. Historically, instances of antibiotic resistance (in gonorrhea) have first appeared in the west and then moved across the country. \yujie{Since 1965, the CDC has collected the cumulative number of newly infected patients every week in a calendar year. There have been several changes in reporting policies or guidelines, the latest of which was in 2006. As a result, we focus on the weekly numbers of new gonorrhea patients between January 1, 2006 and December 31, 2018. The new weekly gonorrhea cases are computed as the difference of the cumulative cases in two consecutive weeks. The last week is dropped during this calculation.} Let us first discuss the spatial patterns of the gonorrhea data among the 50 states.
For this purpose, we consider the cumulative number of gonorrhea cases from week 1 to week 52 by summing up all the data during years 2006-2018. Figure \ref{fig:spatial_pattern_3} plots some selected weeks (\#1, \#11, \#21, \#31, \#41, \#51). \yujie { In Figure \ref{fig:spatial_pattern_3}, if a state has a deeper and bluer color, then it experiences a higher number of gonorrhea cases. } One obvious pattern is that California and Texas generally have higher numbers of gonorrhea cases as compared to other states. In addition, the number of gonorrhea cases in the northern US is smaller than that in the southern US. Next, we consider the temporal pattern of the gonorrhea data set. Figure \ref{fig:TimeSeriesAnnualUS} plots the annual number of gonorrhea cases over the years 2006-2018 in the US. It is evident that there is a global-level decreasing trend during 2010-2013. One possible explanation is Obamacare, which seems to have reduced the risk of infectious diseases. As mentioned before, we are not interested in detecting this type of global change, and we focus on the detection of changes in the local patterns, which are referred to as hot-spots in our paper. \begin{figure}[t] \centering \includegraphics[width=0.5\textwidth]{TimeSeriesAnnualUS} \caption{Annual number of gonorrhea cases (in thousands) over the years 2006-2018 in the US} \label{fig:TimeSeriesAnnualUS} \end{figure} Moreover, the gonorrhea data consist of weekly data, and thus it is necessary to address the circular patterns over the direction of ``week''. Figure \ref{fig:circular_pattern} shows the country-scaled weekly gonorrhea cases in the form of a ``rose'' diagram for some selected years. In this figure, each direction represents a given week, and the length represents the number of gonorrhea cases for that week. It reveals differences in the number of gonorrhea cases across different weeks of the year.
For instance, in July and August (in the direction of 8 o'clock on the circle), the number of gonorrhea cases tends to be larger than in other weeks. \begin{figure}[htbp] \centering \begin{tabular}{cc} \includegraphics[width = 0.5\textwidth]{Rose_2006} & \includegraphics[width = 0.5\textwidth]{Rose_2010} \\ 2006 & 2010 \\ \includegraphics[width = 0.5\textwidth]{Rose_2014} & \includegraphics[width = 0.5\textwidth]{Rose_2018} \\ 2014 & 2018 \end{tabular} \caption{\yujie{Histograms of the number of gonorrhea cases of Years 2006, 2010, 2014, 2018. Each direction represents a given week, and the length represents the number of gonorrhea cases for a given week.} \label{fig:circular_pattern}} \end{figure} \section{Proposed Model} \label{sec:model_whole} In this section, we present our proposed SSR-Tensor model, and postpone the discussion of the hot-spot detection methodology to the next section. Owing to the fact that the gonorrhea data are of three dimensions, namely, \{state, week, year\}, they will likely have complex ``within-dimension'' and ``between-dimension'' interaction/correlation relationships. The within-dimension relationships include within-state correlation, within-week correlation, and within-year correlation. The between-dimension relationships include the between-state-and-week interaction, the between-state-and-year interaction, as well as the between-week-and-year interaction. In order to handle these complex ``within'' and ``between'' interaction structures, we propose to use the tensor decomposition method, where bases are used to address ``within-dimension'' correlation, and the tensor product is used for ``between-dimension'' interaction. Here, the basis is a very important concept, as different bases can be chosen for different dimensions. \textcolor[rgb]{0.00,0.07,1.00} { Detailed discussions of the choice of bases are presented in Section \ref{sec:hot-spot_detection_performance}.
} For the convenience of notation and easy understanding, we first introduce some basic tensor algebra and notation in Section \ref{sec:Tensor_Algebra_and_Notation}. Then Section \ref{sec:model} presents our proposed model that is able to characterize the complex correlation structures. \subsection{Tensor Algebra and Notation } \label{sec:Tensor_Algebra_and_Notation} In this section, we introduce basic notations, definitions, and operators in tensor (multi-linear) algebra that are useful in this paper. Throughout the paper, scalars are denoted by lowercase letters (e.g., $\theta$), vectors are denoted by lowercase boldface letters ($\boldsymbol{\theta}$), matrices are denoted by uppercase boldface letters ($\boldsymbol{\Theta}$), and tensors by curlicue letters ($\vartheta$). For example, an order-$N$ tensor is represented by $ \vartheta \in \mathbb{R}^{I_{1} \times \cdots \times I_{N}} $, where $I_{k}$ represents the mode-$k$ dimension of $\vartheta$ for $k = 1, \ldots, N$. \yujie{ The mode-$n$ product of a tensor $ \vartheta \in \mathbb{R}^{I_{1} \times \ldots \times I_{N}} $ by a matrix $\mathbf{B} \in \mathbb{R}^{J_{n}\times I_{n}}$ is a tensor $ \mathcal{A} \in \mathbb{R}^{ I_{1} \times \ldots I_{n-1} \times J_n \times I_{n+1} \times \ldots I_{N} } $, denoted as $ \mathcal{A} = \vartheta \times_n \mathbf{B}, $ where each entry of $\mathcal{A}$ is defined as the sum of products of corresponding entries in $\vartheta$ and $\mathbf{B}$: $ \mathcal{A}_{i_1,\ldots, i_{n-1},j_{n},i_{n+1}, \ldots, i_N} = \sum_{i_{n}} \vartheta_{i_1, \ldots, i_{N}} \mathbf{B}_{j_n,i_n} $. Here we use the notation $\mathbf{B}_{j_n,i_n}$ to refer to the $(j_n, i_n)$-th entry in the matrix $\mathbf{B}$. The notation $\vartheta_{i_1, \ldots, i_{N}}$ is used to refer to the entry in the tensor $\vartheta$ with index $(i_1, \ldots, i_{N})$.
The notation $ \mathcal{A}_{i_1,\ldots, i_{n-1},j_{n},i_{n+1}, \ldots, i_N} $ is used to refer to the entry in the tensor $\mathcal{A}$ with index $(i_1,\ldots, i_{n-1},j_{n},i_{n+1}, \ldots, i_N)$. The mode-$n$ unfolding of the tensor $\vartheta \in \mathbb{R}^{I_{1} \times \ldots \times I_{N}}$ is denoted by $ \vartheta_{(n)} \in \mathbb{R}^{I_n \times (I_1 \cdots I_{n-1} I_{n+1} \cdots I_N)}, $ where the column vectors of $\vartheta_{(n)}$ are the mode-$n$ vectors of $\vartheta$. The mode-$n$ vectors of $\vartheta$ are defined as the $I_n$-dimensional vectors obtained from $\vartheta$ by varying the index $i_n$ while keeping all the other indices fixed. For example, $\vartheta_{:,2,3}$ is a mode-1 vector. A very useful technique in tensor algebra is the Tucker decomposition, which decomposes a tensor into a core tensor multiplied by a matrix along each mode: $ \mathcal{Y} = \vartheta \times_{1} \mathbf{B}^{(1)}\times_{2}\mathbf{B}^{(2)}\cdots\times_{N}\mathbf{B}^{(N)} $, where $\mathbf{B}^{(n)}$ is an orthogonal $I_{n}\times I_{n}$ matrix containing the mode-$n$ principal components for $n =1, \ldots, N$. The tensor product can be represented equivalently by a Kronecker product, i.e., $ \mathrm{vec}(\mathcal{Y}) = (\mathbf{B}^{(N)} \otimes \cdots \otimes \mathbf{B}^{(1)}) \mathrm{vec} (\vartheta) $, where $\mathrm{vec}(\cdot)$ is the vectorization operator. Finally, the definition of the Kronecker product is as follows: suppose $\mathbf{B}_{1}\in\mathbb{R}^{m \times n}$ and $\mathbf{B}_{2}\in\mathbb{R}^{p\times q}$ are matrices; the Kronecker product of these matrices, denoted by $\mathbf{B}_{1}\otimes\mathbf{B}_{2}$, is an $mp\times nq$ block matrix defined by $$ \mathbf{B}_{1}\otimes\mathbf{B}_{2} = \left[\begin{array}{ccc} b_{11}\mathbf{B}_2 & \cdots & b_{1n}\mathbf{B}_2 \\ \vdots & \ddots & \vdots \\ b_{m1}\mathbf{B}_2 & \cdots & b_{mn}\mathbf{B}_2 \end{array}\right].
$$ } \subsection{Our Proposed SSR-Tensor Model} \label{sec:model} Our proposed SSR-Tensor model is built on tensors of order three, as it is inspired by the gonorrhea data, which can be represented as a three-dimensional tensor $\mathcal{Y}_{n_{1}\times n_{2}\times T}$ with $n_1=50$ states, $n_2=51$ weeks, and $T=13$ years. Note that the $i$-th, $j$-th, and $k$-th slices of the 3-D tensor along the dimensions of state, week, and year can be obtained as $\mathcal{Y}_{i::},\mathcal{Y}_{:j:},\mathcal{Y}_{::k}$, respectively, where $i=1,\ldots,n_{1}$, $j=1,\ldots,n_{2}$, and $k=1,\ldots,T$. For simplicity, we denote $\mathbf{Y}_{k}=\mathcal{Y}_{::k}$. We further denote by $\mathbf{y}_{k}$ the vectorized form of $\mathbf{Y}_{k}$, and by $\mathbf{y}$ the vectorized form of $\mathcal{Y}$. The key idea of our proposed model is to separate the global trend from the local pattern by decomposing the tensor $\mathbf{y}$ into three parts, namely the smooth global trend $\boldsymbol{\mu}$, the local hot-spots $\mathbf{h}$, and the residual $\mathbf{e}$, i.e., $\mathbf{y}=\boldsymbol{\mu}+\mathbf{h}+\mathbf{e}$. For the first two components (i.e., the global trend mean and the local hot-spots), we introduce a basis decomposition framework to represent the structure of the ``within'' correlation in the global background and the local hot-spots; see also \citet{SSD}. To be more concrete, we assume that the global trend mean and the local hot-spots can be represented as $\boldsymbol{\mu}=\mathbf{B}_{m}\boldsymbol{\theta}_{m}$ and $\mathbf{h}=\mathbf{B}_{h}\boldsymbol{\theta}_{h}$, where $\mathbf{B}_{m}$ and $\mathbf{B}_{h}$ are two bases that will be discussed below, and $\boldsymbol{\theta}_{m}$ and $\boldsymbol{\theta}_{h}$ are model coefficient vectors of length $n_{1}n_{2}T$ that need to be estimated (see Section \ref{sec:estimation}). Here the subscripts \textit{m} and \textit{h} are abbreviations for mean and hot-spot.
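The mode-$n$ product and the Kronecker vectorization identity from Section \ref{sec:Tensor_Algebra_and_Notation}, which underpin the basis decomposition above, can be verified numerically. The following Python/numpy sketch is ours, not part of the paper; the helper `mode_n_product` and the toy dimensions are illustrative. Note that numpy's row-major `ravel` pairs with the Kronecker factors in the order $\mathbf{B}^{(1)}\otimes\cdots\otimes\mathbf{B}^{(N)}$, i.e., the reverse of the column-major convention written above:

```python
import numpy as np

def mode_n_product(tensor, matrix, n):
    """Mode-n product: contract the columns of `matrix` with the n-th
    mode of `tensor`, then move the new axis back to position n."""
    return np.moveaxis(np.tensordot(matrix, tensor, axes=(1, n)), 0, n)

rng = np.random.default_rng(0)
I1, I2, I3 = 3, 4, 5                       # illustrative dimensions
theta = rng.standard_normal((I1, I2, I3))  # core tensor
B1 = rng.standard_normal((I1, I1))
B2 = rng.standard_normal((I2, I2))
B3 = rng.standard_normal((I3, I3))

# Y = theta x_1 B1 x_2 B2 x_3 B3 (Tucker-style multilinear product)
Y = mode_n_product(mode_n_product(mode_n_product(theta, B1, 0), B2, 1), B3, 2)

# Equivalent Kronecker form for row-major (C-order) flattening:
# ravel(Y) = (B1 kron B2 kron B3) @ ravel(theta)
vec_Y = np.kron(np.kron(B1, B2), B3) @ theta.ravel()
```

The final two lines reproduce, up to the vectorization convention, the identity $\mathrm{vec}(\mathcal{Y}) = (\mathbf{B}^{(N)} \otimes \cdots \otimes \mathbf{B}^{(1)}) \mathrm{vec}(\vartheta)$ stated above.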
Next, it is useful to discuss how to choose the bases $\mathbf{B}_{m}$ and $\mathbf{B}_{h}$ so as to characterize the complex ``within'' and ``between'' correlation or interaction structures. For the ``within'' correlation structures, we propose to use pre-specified bases, $\mathbf{B}_{m,s}$ and $\mathbf{B}_{h,s}$, for the within-state correlation in the global trend and the hot-spots, where the subscript \textit{s} is an abbreviation for state. Similarly, $\mathbf{B}_{m,w}$ and $\mathbf{B}_{h,w}$ are the pre-specified bases for the within-correlation of the same week, whereas $\mathbf{B}_{m,y}$ and $\mathbf{B}_{h,y}$ are the bases for the within-correlation over time. As for the ``between'' interaction, we use the tensor product to describe it, i.e., $\mathbf{B}_{m}=\mathbf{B}_{m,s}\otimes\mathbf{B}_{m,w}\otimes\mathbf{B}_{m,y}$ and $\mathbf{B}_{h}=\mathbf{B}_{h,s}\otimes\mathbf{B}_{h,w}\otimes\mathbf{B}_{h,y}$. This Kronecker product structure has been shown to offer better computational efficiency for tensor response data \citep{TensorAlgebra}. \textcolor[rgb]{0.00,0.07,1.00} { Mathematically speaking, all these bases are matrices, which are pre-specified in our paper. The choice of bases is shown in Section \ref{sec:hot-spot_detection_performance}. } With the well-structured ``within'' and ``between'' interactions, our proposed model can be written as: \begin{equation} \mathbf{y}=(\mathbf{B}_{m,s}\otimes\mathbf{B}_{m,w}\otimes\mathbf{B}_{m,y})\boldsymbol{\theta}_{m}+(\mathbf{B}_{h,s}\otimes\mathbf{B}_{h,w}\otimes\mathbf{B}_{h,y})\boldsymbol{\theta}_{h}+\mathbf{e}, \label{equ:model} \end{equation} where $\mathbf{e}{\sim}N(0,\sigma^{2}\mathbf{I})$ is the random noise. Mathematically speaking, $\mathbf{B}_{m,s}$ and $\mathbf{B}_{h,s}$ are $n_{1}\times n_{1}$ matrices, $\mathbf{B}_{m,w}$ and $\mathbf{B}_{h,w}$ are $n_{2}\times n_{2}$ matrices, and $\mathbf{B}_{m,y}$ and $\mathbf{B}_{h,y}$ are $T \times T$ matrices, respectively.
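As a concrete, toy-scale illustration of the structure in \eqref{equ:model}, the following Python/numpy sketch assembles $\mathbf{y}$ from Kronecker-structured bases. The dimensions, the random global-trend bases, and the single nonzero hot-spot entry are our own choices for illustration only; they are not the bases used in the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
n1, n2, T = 4, 3, 2                      # toy: states, weeks, years
N = n1 * n2 * T

# pre-specified (here: random) within-mode bases for the global trend
B_m_s = rng.standard_normal((n1, n1))
B_m_w = rng.standard_normal((n2, n2))
B_m_y = rng.standard_normal((T, T))
B_m = np.kron(np.kron(B_m_s, B_m_w), B_m_y)   # "between" interaction via Kronecker product

# identity bases for the hot-spot component: I_{n1} kron I_{n2} kron I_T = I_N
B_h = np.eye(N)

theta_m = rng.standard_normal(N)         # global-trend coefficients
theta_h = np.zeros(N)                    # sparse hot-spot coefficients
theta_h[5] = 2.0                         # one illustrative hot-spot entry
e = 0.1 * rng.standard_normal(N)         # noise, e ~ N(0, 0.1^2 I)

y = B_m @ theta_m + B_h @ theta_h + e    # the decomposition y = mu + h + e
```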
Mathematically, our proposed model in \eqref{equ:model} can be rewritten in a tensor format: \begin{equation} \mathcal{Y}= \vartheta_{m} \times_{3} \mathbf{B}_{m,y} \times_{2} \mathbf{B}_{m,w} \times_{1} \mathbf{B}_{m,s} + \vartheta_{h}\times_{3} \mathbf{B}_{h,y} \times_{2} \mathbf{B}_{h,w}\times_{1} \mathbf{B}_{h,s}+\mathbf{e}, \label{equ:meaning_of_theta} \end{equation} where $\vartheta_{m}$ and $\vartheta_{h}$ are the tensor formats of $\boldsymbol{\theta}_{m}$ and $\boldsymbol{\theta}_{h}$, both of dimension $n_{1}\times n_{2}\times T$. Accordingly, the $((k-1)n_{1}n_{2}+(i-1)n_{2}+j)$-th entries of $\boldsymbol{\theta}_{m}$ and $\boldsymbol{\theta}_{h}$ estimate the global mean and the hot-spot in the $i$-th state and $j$-th week of the $k$-th year, respectively. The tensor representation in equation \eqref{equ:meaning_of_theta} allows us to develop computationally efficient methods for estimation and prediction. \subsection{Estimation of Hot-spots} With the proposed SSR-Tensor model above, we can now discuss the estimation of the parameters $\boldsymbol{\theta}_m$ and $\boldsymbol{\theta}_h$ in model \eqref{equ:model} or \eqref{equ:meaning_of_theta} from the data via a penalized likelihood function. We propose to add two penalties in our estimation. First, because hot-spots rarely occur, we assume that $\boldsymbol{\theta}_{h}$ is sparse, i.e., the majority of the entries of the hot-spot coefficient $\boldsymbol{\theta}_{h}$ are zero. Thus we propose to add the penalty $R_{1}(\boldsymbol{\theta}_{h})=\lambda_{1}\Vert\boldsymbol{\theta}_{h}\Vert_{1}$ to encourage the sparsity of $\boldsymbol{\theta}_{h}$. Second, we assume temporal continuity of the hot-spots, since a hot-spot in the previous year is likely to affect the hot-spot behavior in the current year.
Thus, we add the second penalty $ R_{2}(\boldsymbol{\theta}_{h})= \lambda_{2}\Vert \mathbf{D} \boldsymbol{\theta}_{h} \Vert_1$ to ensure the yearly continuity of the hot-spots, where $ \mathbf{D} = \mathbf{D}_{s} \otimes \mathbf{D}_{w} \otimes \mathbf{D}_{y} $ with $ \mathbf{D}_{s}$ the identity matrix of dimension $n_1\times n_1$, the $T \times T$ matrix $ \mathbf{D}_{y} = \left[ \begin{array}{ccccc} 1 & -1\\ & & \ddots & \ddots\\ & & & 1 & -1\\ & & & & 1 \end{array} \right] $, and the $n_2 \times n_2$ matrix $ \mathbf{D}_{w} = \left[ \begin{array}{ccccc} 1 & -1\\ & & \ddots & \ddots\\ & & & 1 & -1\\ -1& & & & 1 \end{array} \right]. $ With the formula of $\mathbf{D}_y$, the hot-spot has the property of yearly continuity; with the formula of $\mathbf{D}_w$, the hot-spot has a weekly circular pattern. By combining both penalties, we propose to estimate the parameters via the following optimization problem: \begin{eqnarray} \label{equ:eatimation} && \arg\min_{\boldsymbol{\theta}_{m},\boldsymbol{\theta}_{h}} \Vert\mathbf{e}\Vert^{2} + \lambda_{1}\Vert\boldsymbol{\theta}_{h}\Vert_{1} + \lambda_{2}\Vert \mathbf{D}\boldsymbol{\theta}_{h}\Vert_{1}\\ && \mbox{subject to} \;\; \boldsymbol{y} = (\mathbf{B}_{m,s}\otimes\mathbf{B}_{m,w} \otimes \mathbf{B}_{m,y})\boldsymbol{\theta}_{m} + (\mathbf{B}_{h,s} \otimes \mathbf{B}_{h,w} \otimes \mathbf{B}_{h,y})\boldsymbol{\theta}_{h} +\mathbf{e}, \nonumber \end{eqnarray} where $ \boldsymbol{\theta}_{m} = \mathrm{vec}(\boldsymbol{\theta}_{m,1},\ldots, \boldsymbol{\theta}_{m,t},\ldots,\boldsymbol{\theta}_{m,T})$ and $\boldsymbol{\theta}_{h} = \mathrm{vec}(\boldsymbol{\theta}_{h,1},\ldots, \boldsymbol{\theta}_{h,t},\ldots, \boldsymbol{\theta}_{h,T})$. \textcolor[rgb]{0.00,0.07,1.00} { The choice of the tuning parameters $\lambda_1, \lambda_2$ will be discussed in Section \ref{sec:hot-spot_detection}.
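The difference operators $\mathbf{D}_y$ and $\mathbf{D}_w$ defined above can be constructed directly from their displayed forms; the following Python/numpy sketch (toy dimensions, and `diff_matrix` is our own helper, not notation from the paper) also checks the circular property of $\mathbf{D}_w$:

```python
import numpy as np

def diff_matrix(n, circular=False):
    """First-order difference operator: 1 on the diagonal and -1 on the
    superdiagonal.  The circular version (D_w) also places -1 in the
    bottom-left corner; the plain version (D_y) leaves the last row as
    a single 1, matching the displayed matrices."""
    D = np.eye(n) - np.eye(n, k=1)
    if circular:
        D[-1, 0] = -1.0
    return D

n1, n2, T = 4, 3, 5                      # toy dimensions
D_s = np.eye(n1)                         # identity: no differencing across states
D_w = diff_matrix(n2, circular=True)     # weekly circular pattern
D_y = diff_matrix(T)                     # yearly continuity
D = np.kron(np.kron(D_s, D_w), D_y)      # D = D_s kron D_w kron D_y
```

With this $\mathbf{D}$, the penalty $\lambda_2\Vert\mathbf{D}\boldsymbol{\theta}_h\Vert_1$ charges year-to-year jumps while treating the weekly index circularly; in particular, a weekly-constant profile is annihilated by $\mathbf{D}_w$.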
} Note that there are two penalties in equation \eqref{equ:eatimation}: $\lambda_{1}\Vert\boldsymbol{\theta}_{h}\Vert_{1}$ is the LASSO penalty that controls the sparsity of the hot-spots, and $\lambda_{2}\Vert \mathbf{D}\boldsymbol{\theta}_{h}\Vert_{1}$ is the fused LASSO penalty (Tibshirani et al., 2005) that controls the temporal consistency of the hot-spots. Traditional algorithms often involve the storage and computation of the matrices $\mathbf{B}_m$ and $\mathbf{B}_h$, which are of dimension $n_1n_2T \times n_1n_2T.$ Thus they may work for the optimization problem in equation \eqref{equ:eatimation} when the dimensions are small, but they become computationally infeasible as the dimensions grow. To address this computational challenge, we propose to reduce the computational complexity by replacing the matrix algebra in traditional algorithms with tensor algebra, and we discuss how to solve the optimization problem in equation \eqref{equ:eatimation} in a computationally efficient manner in Section \ref{sec:estimation}. \section{Hot-spot Detection} \label{sec:hot-spot_detection} This section focuses on the detection of hot-spots, which includes the detection and identification of the year (when), the state (where), and the week (which) of the hot-spots. In our case study, we focus on upward shifts of the number of gonorrhea cases, since increasing gonorrhea incidence is generally more harmful to societies and communities. Of course, one can also detect downward shifts with a slight modification of our proposed algorithms by multiplying the raw data by $-1$. For ease of presentation, we first discuss the detection of hot-spots, i.e., detecting when a hot-spot occurs, in Subsection \ref{sec:Temporal_Detection}. Then, in Subsection \ref{sec:spatial_location}, we consider the localization of hot-spots, i.e., determining which states and which weeks are involved in the detected hot-spots.
\subsection{Detect When the Hot-Spot Occurs} \label{sec:Temporal_Detection} To determine when a hot-spot occurs, we consider the hypothesis test in \eqref{eq:chaneg_hypothesis_testing} and set up a control chart for hot-spot detection. \begin{equation} H_{0}: \widetilde{\mathbf{r}}_{t} = 0 \;\;\; v.s. \;\;\; H_{1}: \widetilde{\mathbf{r}}_{t} = \delta \widehat{\mathbf{h}}_{t} \;\;\; (\delta>0), \label{eq:chaneg_hypothesis_testing} \end{equation} where $\widetilde{\mathbf{r}}_{t}$ is the expected residual after removing the mean. The essence of this test is that we want to detect whether $\widetilde{\mathbf{r}}_{t}$ has a mean shift in the direction of $\widehat{\mathbf{h}}_{t}$, which is estimated in Section \ref{sec:estimation}. To test this hypothesis, the likelihood ratio test is applied to the residual $\mathbf{r}_{t}$ at each time $t$, i.e., $\mathbf{r}_{t}=\mathbf{y}_{t}-\boldsymbol{\mu}_{t}$, where we assume that the residuals $\mathbf{r}_{t}$ are independent after removing the mean and that their distribution before and after the hot-spot remains the same. Accordingly, the test statistic for monitoring upward shifts is designed as $ P_{t}^{+} = (\widehat{\mathbf{h}}_{t}^{+})'\mathbf{r}_{t}/\sqrt{(\widehat{\mathbf{h}}_{t}^{+})'\widehat{\mathbf{h}}_{t}^{+}} $ \citep{hawkins1993regression}, where $\widehat{\mathbf{h}}_{t}^{+}$ takes only the positive part of $\widehat{\mathbf{h}}_{t}$ and sets the other entries to zero. Here we put a superscript ``+'' to emphasize that the statistic targets upward shifts. \yujie{ The choices of the penalty parameters $\lambda_{1},\lambda_{2}$ are described as follows. } In order to select the combination with the most detection power, we propose to calculate a series of $P_{t}^{+}$ under different combinations of $(\lambda_{1},\lambda_{2})$ from the set $ \Gamma = \{(\lambda_{1}^{(1)},\lambda_{2}^{(1)}),\ldots,(\lambda_{1}^{(n_{\lambda})},\lambda_{2}^{(n_{\lambda})})\} $.
For better illustration, we denote the test statistic under penalty parameters $(\lambda_{1},\lambda_{2})$ as $P_{t}^{+}(\lambda_{1},\lambda_{2})$. The test statistic \citep{LASSO} with the most power to detect the change, denoted $\widetilde{P}_{t}^{+}$, can be computed by \begin{equation} \widetilde{P}_{t}^{+}=\max_{(\lambda_{1},\lambda_{2})\in\Gamma}\frac{P_{t}^{+}(\lambda_{1},\lambda_{2})-E(P_{t}^{+}(\lambda_{1},\lambda_{2}))}{\sqrt{Var(P_{t}^{+}(\lambda_{1},\lambda_{2}))}},\label{equ:most_power} \end{equation} where $E(P_{t}^{+}(\lambda_{1},\lambda_{2}))$ and $Var(P_{t}^{+}(\lambda_{1},\lambda_{2}))$ are, respectively, the mean and variance of $P_{t}^{+}(\lambda_{1},\lambda_{2})$ under $H_{0}$ (e.g., over phase-I in-control samples). Note that the penalty parameters $(\lambda_{1},\lambda_{2})$ that attain the maximum in equation \eqref{equ:most_power} are generally different at different times $t$. To emphasize this dependence on time $t$, denote by $(\lambda_{1,t}^{*},\lambda_{2,t}^{*})$ the parameter pair that attains the maximum in equation \eqref{equ:most_power} at time $t$, i.e., \begin{equation} (\lambda_{1,t}^{*},\lambda_{2,t}^{*})=\arg\max_{(\lambda_{1},\lambda_{2})\in\Gamma}\frac{P_{t}^{+}(\lambda_{1},\lambda_{2})-E(P_{t}^{+}(\lambda_{1},\lambda_{2}))}{\sqrt{Var(P_{t}^{+}(\lambda_{1},\lambda_{2}))}}. \label{eq:lambda12} \end{equation} Thus, the series of test statistics for the hot-spot at time $t$ is $\widetilde{P}_{t}^{+}(\lambda_{1,t}^{*},\lambda_{2,t}^{*})$ for $t=1,\ldots,T$. With the test statistic available, we design a control chart based on the CUSUM procedure for the following reasons: (1) we are interested in detecting changes with temporal continuity, which aligns with the objective of CUSUM; (2) in the view of social stability, we want to keep gonorrhea incidence at a target value without sudden changes, which makes the CUSUM chart a natural fit.
To be more specific, in the CUSUM procedure, we compute the CUSUM statistics recursively by $$W_{t}^{+}=\max\{0,W_{t-1}^{+}+\widetilde{P}_{t}^{+}(\lambda_{1,t}^{*},\lambda_{2,t}^{*})-d\}, $$ with $W_{0}^{+}=0,$ where $d$ is a constant that can be chosen according to the degree of the shift that we want to detect. Next, we set the control limit $L$ to achieve a desirable in-control average run length (ARL). Finally, whenever $W_{t}^{+} > L$ at some time $t=t^{*},$ we declare that a hot-spot occurs at time $t^{*}$. \subsection{Localize Where and in Which Weeks the Hot-Spots Occur} \label{sec:spatial_location} After a hot-spot at time $t^*$ has been detected by the CUSUM control chart in the previous section, the next step is to localize which states and which weeks may account for this hot-spot. To do so, we propose to utilize the vector $$ \widehat{\mathbf{h}}_{\lambda_{1, t^{*}}^{*}, \lambda_{2,t^{*}}^{*}} = \mathbf{B}_{h}\widehat{\boldsymbol{\theta}}_{h, \lambda_{1,t^{*}}^{*}, \lambda_{2,t^{*}}^{*}} $$ at the declared hot-spot time $t^{*}$ with the corresponding parameters $\lambda_{1,t^{*}}^{*},\lambda_{2,t^{*}}^{*}$ in equation \eqref{eq:lambda12}. For numerical computation purposes, it is often easier to work directly with the tensor format of the hot-spot $ \widehat{\mathbf{h}}_{ \lambda_{1,t^{*}}^{*},\lambda_{2,t^{*}}^{*}} $, denoted as $ \widehat{\mathcal{H}}_{\lambda_{1,t^{*}}^{*}, \lambda_{2,t^{*}}^{*}} $, which is a tensor of dimension $ n_{1} \times n_{2} \times T $. If the $(i,j, t^*)$-th entry of $\widehat{\mathcal{H}}_{\lambda_{1,t^{*}}^{*}, \lambda_{2,t^{*}}^{*}}$ is non-zero, then we declare that there is a hot-spot in the $j$-th week in the $i$-th state in the $t^*$-th year. \section{Optimization Algorithm} \label{sec:estimation} In this section, we develop an efficient optimization algorithm for solving the optimization problem in equation \eqref{equ:eatimation}. For notational convenience, we slightly adjust the notation above.
Because $\boldsymbol{\theta}_{m},\boldsymbol{\theta}_{h}$ in equation \eqref{equ:eatimation} are solved under the penalty $\lambda_{1}R_{1}(\boldsymbol{\theta}_{h})+\lambda_{2}R_{2}(\boldsymbol{\theta}_{h})$, we write $\boldsymbol{\theta}_{m}$, $\boldsymbol{\theta}_{h}$ as $\boldsymbol{\theta}_{m,\lambda_{1},\lambda_{2}},\boldsymbol{\theta}_{h,\lambda_{1},\lambda_{2}}$ to emphasize the penalty parameters $\lambda_{1}$ and $\lambda_{2}$. Accordingly, $\boldsymbol{\theta}_{h,0,\lambda_{2}}$ refers to the estimator under only the second penalty $\lambda_{2}R_{2}(\boldsymbol{\theta}_{h})$, i.e., \begin{equation} \boldsymbol{\theta}_{h,0,\lambda_{2}}=\arg\min_{\boldsymbol{\theta}_{m},\boldsymbol{\theta}_{h}}\{\Vert\mathbf{e}\Vert_{2}^{2}+\lambda_{2} R_{2}(\boldsymbol{\theta}_{h})\}.\label{equ:one_penalty} \end{equation} The structure of this section is as follows: we first develop the procedure of our proposed method in Subsection \ref{sec:algorithm_procedure} and then give its computational complexity in Subsection \ref{sec:computational_complexity}. \subsection{Procedure of Our Algorithm} \label{sec:algorithm_procedure} In the optimization problem shown in equation \eqref{equ:eatimation}, there are two unknown vectors, namely $\boldsymbol{\theta}_{m,\lambda_{1},\lambda_{2}}$ and $\boldsymbol{\theta}_{h,\lambda_{1},\lambda_{2}}$. To simplify the optimization above, we first derive the closed-form relationship between $\boldsymbol{\theta}_{m,\lambda_{1},\lambda_{2}}$ and $\boldsymbol{\theta}_{h,\lambda_{1},\lambda_{2}}$. Then, we solve the optimization by replacing the matrix algebra in FISTA \citep{FISTA} with tensor algebra. The key step is the proximal mapping of $\lambda_{1}R_{1}(\boldsymbol{\theta}_{h,\lambda_1,\lambda_2})+\lambda_{2}R_{2}(\boldsymbol{\theta}_{h,\lambda_1,\lambda_2})$. To address it, we first target the proximal mapping of $\lambda_{2}R_{2}(\boldsymbol{\theta}_{h,0,\lambda_2})$, which is solved by SFA via gradient descent \citep{liu2010efficient}.
Then the proximal mapping of $\lambda_{1}R_{1}(\boldsymbol{\theta}_{h,\lambda_1,\lambda_2})+\lambda_{2}R_{2}(\boldsymbol{\theta}_{h,\lambda_1,\lambda_2})$ can be obtained through a closed-form relationship with the proximal mapping of $\lambda_{2}R_{2}(\boldsymbol{\theta}_{h,0,\lambda_2})$. There are three subsections in this section, each of which corresponds to one step of our proposed algorithm. \subsubsection{Estimate the mean parameter} To begin with, we first simplify the optimization problem in equation \eqref{equ:eatimation}, i.e., we derive the closed-form relationship between $\boldsymbol{\theta}_{m,\lambda_{1},\lambda_{2}}$ and $\boldsymbol{\theta}_{h,\lambda_{1},\lambda_{2}}$. Although there are two sets of parameters $\boldsymbol{\theta}_{m,\lambda_{1},\lambda_{2}}$ and $\boldsymbol{\theta}_{h,\lambda_{1},\lambda_{2}}$ in the model, we note that, given $\boldsymbol{\theta}_{h,\lambda_{1},\lambda_{2}}$, the parameter $\boldsymbol{\theta}_{m,\lambda_{1},\lambda_{2}}$ appears in a standard least squares problem and thus admits the closed-form solution in equation \eqref{equ:theta_and_theta_a} in the proposition below. \begin{prop} Given $\boldsymbol{\theta}_{h,\lambda_{1},\lambda_{2}}$, the closed-form solution of $\boldsymbol{\theta}_{m,\lambda_{1},\lambda_{2}}$ is given by: \begin{equation} \boldsymbol{\theta}_{m,\lambda_{1},\lambda_{2}}= (\mathbf{B}_{m}'\mathbf{B}_{m})^{-1}(\mathbf{B}_{m}'\mathbf{y}-\mathbf{B}_{m}'\mathbf{B}_{h}\boldsymbol{\theta}_{h,\lambda_{1},\lambda_{2}}).
\label{equ:theta_and_theta_a} \end{equation} \end{prop} It remains to investigate how to estimate the parameter $\boldsymbol{\theta}_{h,\lambda_{1},\lambda_{2}}.$ After plugging \eqref{equ:theta_and_theta_a} into \eqref{equ:eatimation}, the optimization problem for estimating $\boldsymbol{\theta}_{h,\lambda_{1},\lambda_{2}}$ becomes \begin{equation} \arg\min_{\boldsymbol{\theta}_{h,\lambda_{1},\lambda_{2}}} \Vert\mathbf{y}^{*}-\mathbf{X}\boldsymbol{\theta}_{h,\lambda_{1},\lambda_{2}}\Vert_{2}^{2} + \lambda_{1}\Vert\boldsymbol{\theta}_{h,\lambda_{1},\lambda_{2}}\Vert_{1} + \lambda_{2}\Vert\mathbf{D}\boldsymbol{\theta}_{h,\lambda_1,\lambda_2}\Vert_{1} ,\label{equ:opt_with_2_penalty} \end{equation} where $\mathbf{y}^{*}=\left[\mathbf{I}-\mathbf{H}_{m}\right]\mathbf{y}$, $\mathbf X=\left[\mathbf{I}-\mathbf{H}_{m}\right]\mathbf{B}_{h}$, and $\mathbf{H}_{m}=\mathbf{B}_{m}(\mathbf{B}_{m}'\mathbf{B}_{m})^{-1}\mathbf{B}_{m}'$ is the projection matrix. Due to the high dimensionality, we need to develop an efficient and precise optimization algorithm to optimize \eqref{equ:eatimation}. Obviously, \eqref{equ:opt_with_2_penalty} is a typical sparse optimization problem. However, most sparse optimization frameworks focus on optimizing problems of the form \eqref{equ:estimate_theta0lambda2_without_D}: \begin{equation} \label{equ:estimate_theta0lambda2_without_D} \arg\min_{\boldsymbol{\theta}_{h,\lambda_{1},0}} \Vert \mathbf{y}^{*}-\mathbf{X}\boldsymbol{\theta}_{h,\lambda_1,0}\Vert_{2}^{2} + \lambda_{1}\Vert\boldsymbol{\theta}_{h,\lambda_1,0} \Vert_{1}, \end{equation} such as \cite{ISTA}, \cite{FISTA}, \cite{glmnet}, and so on, where iterative updating rules are used based either on gradient information or on the proximal mapping. In most cases, the algorithms above work; however, two challenges arise in our setting: \begin{enumerate} \item When the dimension of $\mathbf{X}$ (of size $n_1n_2T \times n_1n_2T$) becomes increasingly large, it is difficult to store the matrix in memory.
\item When the penalty term is $ \lambda_1 \Vert \boldsymbol{\theta}_{h,\lambda_1,\lambda_2}\Vert_1 + \lambda_{2}\Vert\mathbf{D}\boldsymbol{\theta}_{h,\lambda_1,\lambda_{2}}\Vert_{1} $, instead of only $ \lambda_{1}\Vert \boldsymbol{\theta}_{h,\lambda_1,\lambda_{2}}\Vert_{1} $, a direct application of the proximal mapping of $\lambda_{1}\Vert \boldsymbol{\theta}_{h,\lambda_1,\lambda_{2}}\Vert_{1}$ is not workable. \end{enumerate} Therefore, directly applying the above algorithms \citep{FISTA, ISTA, glmnet} to our case is not feasible. To extend the existing research, we propose an iterative algorithm in Algorithm \ref{alg:Tensor_and_FISTA}, and we explain the approach to solving the proximal mapping of $ \lambda_1 \Vert \boldsymbol{\theta}_{h,\lambda_1,\lambda_2}\Vert_1 + \lambda_{2}\Vert\mathbf{D}\boldsymbol{\theta}_{h,\lambda_1,\lambda_{2}}\Vert_{1} $ in Section \ref{sec:proximal_map}. \subsubsection{Proximal Mapping} \label{sec:proximal_map} The main tool we use to solve the optimization problem in equation \eqref{equ:opt_with_2_penalty} is a variant of the proximal mapping. Denote $ F(\boldsymbol{\theta}_{h,\lambda_1,\lambda_2}) = \frac{1}{2} \| \mathbf{y}^*-\mathbf{X} \boldsymbol{\theta}_{h,\lambda_1,\lambda_2} \|_2^2 . $ In the $i$-th iteration, the corresponding recursive estimator of $\boldsymbol{\theta}_{h,\lambda_1,\lambda_2}$ is denoted as $\boldsymbol{\theta}_{h,\lambda_1,\lambda_2}^{(i)}$.
Besides, an auxiliary variable $\boldsymbol{\eta}^{(i)}$ is introduced to update from $\boldsymbol{\theta}_{h,\lambda_1,\lambda_2}^{(i)}$ to $\boldsymbol{\theta}_{h,\lambda_1,\lambda_2}^{(i+1)}$ through \begin{eqnarray*} \boldsymbol{\theta}_{h,\lambda_1,\lambda_2}^{(i+1)} & = & \arg\min_{\boldsymbol{\theta} } F( \boldsymbol{\eta}^{(i)} ) + \frac{\partial}{\partial \boldsymbol{\theta}_{h,\lambda_1,\lambda_2}} F( \boldsymbol{\eta}^{(i)} ) \left( \boldsymbol{\theta} - \boldsymbol{\eta}^{(i)} \right) + \\ & & \lambda_1 \| \boldsymbol{\theta} \|_1 + \lambda_2 \| \mathbf{D} \boldsymbol{\theta} \|_1 + \frac{L}{2} \| \boldsymbol{\theta} - \boldsymbol{\eta}^{(i)} \|_2^2 \\ & = & \arg\min_{\boldsymbol{\theta} } \left[ \frac{1}{2} \left\| \boldsymbol{\theta} -\left( \boldsymbol{\eta}^{(i)} -\frac{1}{L}\frac{\partial}{\partial \boldsymbol{\theta}} F( \boldsymbol{\eta}^{(i)} ) \right) \right\|_2^2 + \lambda_1 \| \boldsymbol{\theta} \|_1+ \lambda_2 \| \mathbf{D} \boldsymbol{\theta} \|_1\right]\\ & \triangleq & \pi_{\lambda_2}^{ \lambda_1 }(\mathbf{v}), \end{eqnarray*} where $ \mathbf{v}=\boldsymbol{\eta}^{(i)} -\frac{1}{L}\frac{\partial}{\partial \boldsymbol{\theta}} F( \boldsymbol{\eta}^{(i)} ) $, $ \boldsymbol{\eta}^{(i)} = \boldsymbol{\theta}_{h,\lambda_1,\lambda_2}^{(i)} + \frac{ t_{i-2} -1 }{t_{i-1}} (\boldsymbol{\theta}_{h,\lambda_1,\lambda_2}^{(i)} - \boldsymbol{\theta}_{h,\lambda_1,\lambda_2}^{(i-1)}) $, and $t_{-1}=t_0=1$, $t_{i+1} = \frac{1+\sqrt{1+4t_i^2}}{2}$. Because it is difficult to solve $\pi_{\lambda_2}^{ \lambda_1 }(\mathbf{v})$ directly, we aim to solve $\pi_{\lambda_2}^{ 0 }(\mathbf{v})$ first. As proved by \citet{liu2010efficient}, there is a closed-form relationship between $\pi_{\lambda_2}^{ \lambda_1 }(\mathbf{v})$ and $\pi_{\lambda_2}^{ 0 }(\mathbf{v})$, which is shown in Proposition \ref{prop:theta_0_lambda2_and_theta_lambda1_lambda2}.
\begin{prop} \label{prop:theta_0_lambda2_and_theta_lambda1_lambda2} The closed-form relationship between $\pi_{\lambda_2}^{\lambda_1}(\mathbf{v}) $ and $\pi_{\lambda_2}^{0}(\mathbf{v})$ is \begin{equation} \pi_{\lambda_2}^{\lambda_1}(\mathbf{v}) = \mbox{sign}(\pi_{\lambda_2}^{0}(\mathbf{v})) \odot \max\{|\pi_{\lambda_2}^{0}(\mathbf{v})|-\lambda_{1},0\}, \label{equ:corre_between_lambda1_and_lambda2} \end{equation} where $\odot$ is the element-wise product operator. \end{prop} With the proximal mapping function in Proposition \ref{prop:theta_0_lambda2_and_theta_lambda1_lambda2}, we can now develop the algorithm shown in Algorithm \ref{alg:Tensor_and_FISTA}. \begin{algorithm}[H] \caption{Iterative updating based on tensor decomposition } \label{alg:Tensor_and_FISTA} \LinesNumbered \KwIn{ $\mathbf{y}^*, \mathbf{B}_s, \mathbf{B}_w, \mathbf{B}_y, \mathbf{D}_s, \mathbf{D}_w, \mathbf{D}_y, K, L, \lambda_1, \lambda_2, L_0, M_1, M_2$ } \KwOut{$ \boldsymbol{\theta}_{h,\lambda_1,\lambda_2}$} \bfseries{initialization}\; $\boldsymbol{\Theta}^{(1)} = \boldsymbol{\Theta}^{(0)}, t_{-1}=1, t_0=1, L=L_0 $\\ \For{$i =1 \cdots M_1$}{ $ \mathcal{N}^{(i)} = \boldsymbol{\Theta}^{(i)} + \frac{ t_{i-2} -1 }{t_{i-1}} (\boldsymbol{\Theta}^{(i)} - \boldsymbol{\Theta}^{(i-1)}) $ \begin{eqnarray*} \mathcal{V} & = &\mathcal{N}^{(i)} - \frac{1}{L} \mathcal{N}^{(i)} \times_1 (\mathbf P'_s \mathbf P_s) \times_2 (\mathbf P'_w \mathbf P_w) \times_3 (\mathbf P'_y \mathbf P_y) + \\ & &\frac{1}{L} \mathcal{Y}^* \times_1 \mathbf P'_s \times_2 \mathbf P'_w \times_3 \mathbf P'_y \end{eqnarray*} \For{ $j=0 \cdots M_2$ }{ \begin{eqnarray*} \mathcal{G} ^{(j)} & =& \left( \mathcal{Z}^{(j)} \times_1 (\mathbf D'_s \mathbf D_s) \times_2 (\mathbf D'_w \mathbf D_w) \times_3 (\mathbf D'_y \mathbf D_y) \right) -\\ & & \left( \mathcal{V} \times_1 \mathbf D_s \times_2 \mathbf D_w \times_3 \mathbf D_y \right) \end{eqnarray*} $ \mathcal{Z}^{(j+1)} = P\left( \mathcal{Z}^{(j)} - \mathcal{G}^{(j)}/L \right) $ } $
\pi^0_{\lambda_2}(\mathcal {V}) = \mathcal{V} - (\mathcal{Z}^{(M_2)}) \times_1 \mathbf D_s \times_2 \mathbf D_w \times_3 \mathbf D_y $\\ $ \pi_{\lambda_2}^{\lambda_1} (\mathcal V) = \mbox{sign}( \pi^0_{\lambda_2}(\mathcal V) ) \odot \mbox{max} \{ \left| \pi^0_{\lambda_2}(\mathcal V) \right| -\lambda_1, 0 \} $ \\ $ t_{i+1} = \frac{1+\sqrt{1+4t_i^2}}{2} $ } $ \widehat{\boldsymbol{\Theta}}_{h,\lambda_1,\lambda_{2}} = \pi_{\lambda_2}^{\lambda_1} (\mathcal V) $\\ $\widehat{\boldsymbol{\theta}}_{h,\lambda_1,\lambda_{2}}=\mbox{vector}(\widehat{\boldsymbol{\Theta}}_{h,\lambda_1,\lambda_{2}})$ ${\boldsymbol{v}}=\mbox{vector}({\mathcal{V}})$ \end{algorithm} Here $\mbox{vector}(\cdot)$ is a function that unfolds an order-3 tensor of dimension $n_1\times n_2 \times n_3$ into a vector of length $n_1n_2n_3$. \subsection{Computational Complexity} \label{sec:computational_complexity} This section discusses the computational complexity of our proposed algorithm. Suppose the raw data are structured into an order-3 tensor of dimension $n_1 \times n_2 \times n_3$; then the computational complexity of our proposed method is of order $O\left( n_1n_2n_3\max\{n_1,n_2,n_3\} \right)$ (see Proposition \ref{prop:computational_complexity}). \begin{prop} \label{prop:computational_complexity} The computational complexity of Algorithm \ref{alg:Tensor_and_FISTA} is of order $ O \left( n_1n_2n_3\max\{n_1,n_2,n_3\} \right) $. \end{prop} \begin{proof} The main computational load in Algorithm \ref{alg:Tensor_and_FISTA} is the calculation of $\mathcal{V}$ (line 4), $\mathcal{G}^{(j)}$ (line 5), and $\pi_{\lambda_2}^{0} (\mathcal V)$ (line 7). We take the calculation of $\mathcal{V}$ in line 4 of the algorithm as an example. To begin with, we focus on the computational complexity of \begin{equation} \label{equ:computional_complexity_proof_part1} \mathcal{N}^{(i)} \times_1 (\mathbf P'_s \mathbf P_s) \times_2 (\mathbf P'_w \mathbf P_w) \times_3 (\mathbf P'_y \mathbf P_y).
\end{equation} For better illustration, we denote $\mbox{tensor}(\boldsymbol{\eta}^{(i)})$ by $\mathcal{N}^{(i)}$ and $\mathcal{N}^{(i)} \times_1 (\mathbf P'_s \mathbf P_s)$ by the tensor $\mathcal L_1$. According to tensor algebra \citep[Section 2.5]{TensorAlgebra}, $$ \mathcal L_1 = \mathcal{N}^{(i)} \times_1 (\mathbf P'_s \mathbf P_s) \Longleftrightarrow \mathcal L_{1(1)} = \mathbf P'_s \mathbf P_s \mathcal N^{(i)}_{(1)}. $$ Therefore, the computational complexity of the mode-1 product in equation \eqref{equ:computional_complexity_proof_part1} is the same as that of multiplying two matrices of sizes $n_1 \times n_1$ and $n_1 \times n_2n_3$, which is of order $O\left( n_1n_2n_3(2n_1-1) \right)$. After the calculation of $\mathcal L_1$, equation \eqref{equ:computional_complexity_proof_part1} reduces to \begin{equation} \label{equ:computional_complexity_proof_part2} \mathcal L_1 \times_2 (\mathbf P'_w \mathbf P_w) \times_3 (\mathbf P'_y \mathbf P_y). \end{equation} Similarly, denote $\mathcal L_2 = \mathcal L_1 \times_2 (\mathbf P'_w \mathbf P_w)$; then $$ \mathcal L_2 = \mathcal L_1 \times_2 (\mathbf P'_w \mathbf P_w) \Longleftrightarrow \mathcal L_{2(2)} = \mathbf P'_w \mathbf P_w \mathcal L_{1(2)}. $$ Therefore, the computational complexity of the mode-2 product in equation \eqref{equ:computional_complexity_proof_part2} is the same as that of multiplying two matrices of sizes $n_2 \times n_2$ and $n_2 \times n_1n_3$, which is of order $O\left( n_1n_2n_3(2n_2-1) \right)$. After the calculation of $\mathcal L_2$, equation \eqref{equ:computional_complexity_proof_part2} reduces to \begin{equation} \label{equ:computional_complexity_proof_part3} \mathcal L_2 \times_3 (\mathbf P'_y \mathbf P_y). \end{equation} Similarly, denote $\mathcal L_3 = \mathcal L_2 \times_3 (\mathbf P'_y \mathbf P_y)$; then $$ \mathcal L_3 = \mathcal L_2 \times_3 (\mathbf P'_y\mathbf P_y) \Longleftrightarrow \mathcal L_{3(3)} = \mathbf P'_y \mathbf P_y \mathcal L_{2(3)}.
$$ Therefore, the computational complexity of equation \eqref{equ:computional_complexity_proof_part3} is the same as that of multiplying two matrices of sizes $n_3 \times n_3$ and $n_3 \times n_1n_2$, which is of order $O\left( n_1n_2n_3(2n_3-1) \right)$. By combining all the building blocks above, we conclude that the computational complexity of equation \eqref{equ:computional_complexity_proof_part1} is of order $O(n_1n_2n_3\left(\max \{n_1,n_2,n_3\} \right))$. In the same way, the computational complexity of lines 5 and 7 of Algorithm \ref{alg:Tensor_and_FISTA} is also of order $O(n_1n_2n_3\left(\max \{n_1,n_2,n_3\} \right))$. Thus, the computational complexity of Algorithm \ref{alg:Tensor_and_FISTA} is of order $O(n_1n_2n_3\left(\max \{n_1,n_2,n_3\} \right))$. \end{proof} \section{Simulation} \label{sec:simulation} In this section, we conduct simulation studies to evaluate our proposed methodology by comparing it with several benchmark methods in the literature. The structure of this section is as follows. We first present the data generation mechanism for our simulations in Subsection \ref{sec:sim_data_generation}, and then discuss the performance of hot-spot detection and localization in Subsection \ref{sec:hot-spot_detection_performance}. \subsection{Generative Model in Simulation} \label{sec:sim_data_generation} In our simulation, at each time index $t$ ($t=1,\ldots,T$), we generate a vector $\mathbf y_t$ of length $n_{1} n_{2} $ by \begin{equation} \label{equ:sim_data_generation} \mathbf y_{i,t}=(\mathbf B\boldsymbol\theta_t)_i+\delta\mathbbm 1\{t\geq \tau\} \mathbbm 1_i\{i \in S_h\}+\mathbf w_{i,t}, \end{equation} where $\mathbf y_{i,t}$ denotes the $i$-th entry of the vector $\mathbf y_t$, $(\mathbf{B}\boldsymbol{\theta}_t)_i$ denotes the $i$-th entry of the vector $\mathbf B\boldsymbol\theta_t$, and $\delta$ denotes the change magnitude.
Here $\mathbbm 1(A)$ is the indicator function, which takes the value 1 on the set $A$ and the value 0 outside $A$, and $\mathbf w_{i,t}$ is the $i$-th entry of the white noise vector, whose entries are independent and follow the $N(0,0.1^{2})$ distribution. For the anomaly setup, $\mathbbm 1\{t\geq \tau\}$ indicates that the spatial hot-spots only occur after the temporal hot-spot $\tau$; this ensures that the simulated hot-spot is temporally consistent. The second indicator function $\mathbbm 1_i\{i \in S_h\}$ shows that only those entries whose location indices belong to the set $S_h$ are assigned as local hot-spots; this ensures that the simulated hot-spot is sparse. Here we assume the change happens at $ \tau = 50$ out of $T=100$ years in total, and the spatial hot-spot index set is formed by the combination of the states Conn, Ohio, West Va, Tex, and Hawaii with weeks 1-10 and 41-51. To match the dimensions in the case study, we choose $n_{1}=50$ and $n_{2}=51$. As for the three terms on the right-hand side of equation \eqref{equ:sim_data_generation}, they represent the global trend mean, the local sparse anomaly, and the white noise, respectively. In our simulation, the matrix $\mathbf{B}$ is $\mathbf{B}_{m,s} \otimes \mathbf{B}_{m,w} \otimes \mathbf{B}_{m,y}$ with the same choice as that in Section \ref{sec:model}. Besides, in each of these two scenarios, we further consider two sub-cases, depending on the value of the change magnitude $\delta$ in equation \eqref{equ:sim_data_generation}: one with $\delta = 0.1$ (small shift) and the other with $\delta=0.5$ (large shift). Finally, note that after the temporal detection of hot-spots, we need to further localize them, i.e., to find out which states and which weeks lead to the occurrence of the temporal hot-spot. Because the baseline methods PCA and T2 can only detect temporal changes, we only show the localization of spatial hot-spots for SSR-Tensor, SSD \citep{SSD}, and ZQ-LASSO \citep{LASSO}.
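The generative model in equation \eqref{equ:sim_data_generation} can be sketched in a few lines of Python/numpy. Everything below is illustrative only: we use scaled-down dimensions, a random stand-in for $\mathbf{B}$, a fixed (time-constant) global-trend coefficient, and an arbitrary hot-spot support:

```python
import numpy as np

rng = np.random.default_rng(0)
n1, n2, T = 6, 5, 40             # scaled-down toy dimensions
N = n1 * n2
tau, delta = 20, 0.5             # change point and shift magnitude
S_h = [0, 1, 7, 8]               # toy hot-spot support (indices into y_t)

B = rng.standard_normal((N, N)) / np.sqrt(N)  # stand-in for B_{m,s} kron B_{m,w} kron B_{m,y}
theta = rng.standard_normal(N)                # static global-trend coefficient (simplification)

Y = np.empty((N, T))
for t in range(T):
    shift = np.zeros(N)
    if t >= tau:                 # the anomaly appears only after the change point...
        shift[S_h] = delta       # ...and only on the sparse support S_h
    w = 0.1 * rng.standard_normal(N)          # white noise, N(0, 0.1^2)
    Y[:, t] = B @ theta + shift + w           # equation (13)
```

The three summands in the loop body correspond one-to-one to the global trend mean, the local sparse anomaly, and the white noise described above.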
\subsection{Hot-spot Detection Performance} \label{sec:hot-spot_detection_performance} In this subsection, we compare the hot-spot detection performance of our proposed method (denoted as `SSR-Tensor') with several benchmark methods. Specifically, we compare our proposed method with the Hotelling $T^{2}$ control chart \citep{T2} (denoted as `T2'), the LASSO-based control chart proposed by \cite{LASSO} (denoted as `ZQ LASSO'), the PCA-based control chart proposed by \cite{PCA} (denoted as `PCA'), and SSD proposed by \citet{SSD} (denoted as `SSD'). Note that there are two main differences between our SSR-Tensor method and the SSD method in \citet{SSD}. First, SSR-Tensor has the autoregressive or fused LASSO penalty in equation \eqref{equ:eatimation}, which ensures the temporal continuity of the hot-spot. Second, SSD uses the Shewhart control chart to monitor temporal changes, while SSR-Tensor utilizes the CUSUM control chart instead, which is more sensitive to small shifts. For the basis choices of our proposed method, to model the spatial structure of the global trend, we choose $\mathbf{B}_{m,1}$ as the kernel matrix describing the smoothness of the background, whose $(i,j)$ entry is $\exp\{-d^2/(2c^2)\}$, where $d$ is the distance between the $i$-th state and the $j$-th state and $c$ is the bandwidth chosen by cross-validation. In addition, we choose identity matrices for the yearly and weekly bases since we do not have any prior information. Moreover, we use identity matrices for the spatial and temporal bases of the hot-spots. For SSD in \citet{SSD}, we use the same spatial and temporal bases in order to have a fair comparison.
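The Gaussian kernel basis described above can be sketched directly from its definition. The toy coordinates and bandwidth below are hypothetical; the paper uses inter-state distances and selects $c$ by cross-validation.

```python
import numpy as np

# Kernel basis matrix whose (i, j) entry is exp(-d^2 / (2 c^2)),
# with d the pairwise distance between locations i and j.
def kernel_basis(coords, c):
    coords = np.asarray(coords, dtype=float)
    # All pairwise Euclidean distances via broadcasting.
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    return np.exp(-d**2 / (2 * c**2))

coords = [(0, 0), (1, 0), (0, 2), (5, 5)]   # toy "state" locations
B_spatial = kernel_basis(coords, c=1.0)
```

By construction, the matrix is symmetric with unit diagonal, and entries decay smoothly with distance, which is what encodes the smoothness of the global background.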
For evaluation, we compute the following four criteria: (i) precision, defined as the proportion of detected anomalies that are true hot-spots; (ii) recall, defined as the proportion of true anomalies that are correctly identified; (iii) the F measure, a single criterion that combines precision and recall by taking their harmonic mean; and (iv) the corresponding average run length ($\mbox{ARL}_1$), a measure of the average detection delay in the special scenario where the change occurs at time $t=1$. All simulation results below are based on $1000$ Monte Carlo replications. Table \ref{table:simulation_hotspot_detection} shows that the merits of our methodology mainly lie in its higher precision and shorter $\mbox{ARL}_1$. For example, when the shift is very small, i.e., $\delta=0.1$, the $\mbox{ARL}_1$ of our SSR-Tensor method is only 1.6420, compared with 7.4970 for SSD and 9.5890 for ZQ-LASSO. The reason SSR-Tensor has a shorter $\mbox{ARL}_1$ than SSD is that SSD uses the Shewhart control chart to detect temporal changes, which makes it insensitive to small shifts, whereas SSR-Tensor applies the CUSUM control chart, which is capable of detecting small shifts. The reason both SSR-Tensor and SSD have shorter $\mbox{ARL}_1$ than ZQ-LASSO, PCA, and T2 is that ZQ-LASSO fails to capture the global trend mean: the data generated in our simulation have both a decreasing and a circular global trend, which are hard for ZQ-LASSO to model well.
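The first three localization criteria can be sketched from boolean masks of true and detected hot-spot entries; the toy masks below are illustrative only.

```python
# Precision: fraction of detected entries that are true hot-spots.
# Recall: fraction of true hot-spots that are detected.
# F measure: harmonic mean of precision and recall.
def precision_recall_f(true_hot, detected):
    tp = sum(t and d for t, d in zip(true_hot, detected))
    precision = tp / sum(detected) if any(detected) else 0.0
    recall = tp / sum(true_hot) if any(true_hot) else 0.0
    f = (2 * precision * recall / (precision + recall)
         if precision + recall > 0 else 0.0)
    return precision, recall, f

true_hot = [1, 1, 0, 0, 1, 0]   # toy ground-truth hot-spot mask
detected = [1, 0, 1, 0, 1, 0]   # toy detection mask
p, r, f = precision_recall_f(true_hot, detected)
```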
\begin{table}[t] \scriptsize \centering
\begin{tabular}{c|cccc|cccc}
\hline
\multicolumn{1}{c|}{\multirow{2}{*}{methods}} & \multicolumn{4}{c|}{small shift $\delta=0.1$} & \multicolumn{4}{c}{large shift $\delta=0.5$}\\
\cline{2-9}
\multicolumn{1}{c|}{} & precision & recall & F measure & ARL & precision & recall & F measure & ARL \\
\hline
SSR-Tensor & \bf{0.0824} & \bf{0.9609} & \bf{0.5217} & \bf{1.6420} & \bf{0.0822} & \bf{0.9633} & \bf{0.5228} & \bf{1.0002} \\
 & (0.0025) & (0.0536) & (0.0270) & (0.7214) & (0.0022) & (0.0549) & (0.0277) & (0.0144) \\
SSD & 0.0404 & 0.9820 & 0.5112 & 7.4970 & 0.0412 & 1.0000 & 0.5206 & 1.0000 \\
 & (0.0055) & (0.1330) & (0.0692) & (9.4839) & (0.0000) & (0.0000) & (0.0000) & (0.0000) \\
ZQ LASSO & 0.0412 & 1.0000 & 0.5206 & 9.5890 & 0.0412 & 1.0000 & 0.5206 & 8.8562 \\
 & (0.0000) & (0.0000) & (0.0000) & (7.5414) & (0.0000) & (0.0000) & (0.0000) & (7.1169) \\
PCA & - & - & - & 28.7060 & - & - & - & 32.0469 \\
 & - & - & - & (16.9222) & - & - & - & (17.4660) \\
T2 & - & - & - & 50.0000 & - & - & - & 50.0000 \\
 & - & - & - & (0.0000) & - & - & - & (0.0000) \\
\hline
\end{tabular}
\caption{Scenario 1 (decreasing global trend): comparison of hot-spot detection under small and large shifts.}
\label{table:simulation_hotspot_detection}
\end{table}
\section{Case Study} \label{sec:case_study} In this section, we apply our proposed SSR-Tensor model and hot-spot detection/localization method to the weekly gonorrhea dataset described in Section \ref{sec:data}. For the purpose of comparison, we also consider the benchmark methods mentioned in Section \ref{sec:simulation}, together with two performance criteria: one is the temporal detection of hot-spots (i.e., in which year they occur) and the other is the localization of hot-spots (i.e., which state and which week might be involved in the alarm). \subsection{When do the temporal changes happen?} Here we consider the performance of our proposed method and the benchmark methods on the temporal detection of hot-spots.
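As background for the temporal monitoring used in this section, the one-sided CUSUM recursion can be sketched on a univariate stream; this is a deliberate simplification of the paper's multivariate test statistic, with made-up reference value and control limit, and it also illustrates why a Shewhart-type rule misses small sustained shifts.

```python
# Deterministic toy stream: mean 0 before the change, mean 0.5 after.
x = [0.0] * 10 + [0.5] * 20

k, h = 0.25, 2.0          # reference value and control limit (arbitrary)
W, cusum_alarm = 0.0, None
for t, xt in enumerate(x):
    W = max(0.0, W + xt - k)          # one-sided CUSUM recursion
    if cusum_alarm is None and W > h:
        cusum_alarm = t               # CUSUM accumulates the small shift

# A Shewhart chart alarms only when a single observation exceeds its
# 3-sigma limit, which never happens for this small sustained shift.
shewhart_alarm = next((t for t, xt in enumerate(x) if xt > 3.0), None)
```

On this stream the CUSUM statistic crosses the limit a few steps after the change (at $t=18$), while the Shewhart rule never alarms.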
For our proposed SSR-Tensor method, we build a CUSUM control chart using the test statistic in Subsection \ref{sec:Temporal_Detection}, which is shown in Figure \ref{fig:control_chart_CUSUM}. From this plot, we can see that the hot-spots are detected in the $10$-th year, i.e., 2016. \begin{figure}[t] \centering \includegraphics[width=0.5\textwidth]{ControlChart} \caption{CUSUM control chart of the gonorrhea dataset during the years 2006--2018. \label{fig:control_chart_CUSUM}} \end{figure} For the purpose of comparison, we also apply the benchmark methods, SSD \citep{SSD}, ZQ LASSO \citep{LASSO}, PCA \citep{PCA}, and T2 \citep{T2}, to the gonorrhea dataset. Unfortunately, all benchmark methods are unable to raise any alarms, whereas our proposed SSR-Tensor method raises the first hot-spot alarm in the year 2016. \subsection{In which state and week do the spatial hot-spots occur?} Next, after the temporal detection of hot-spots, we further localize them, in the sense that we find out which state and which week may lead to the occurrence of the temporal hot-spot. Because the baseline methods, SSD, ZQ-LASSO, PCA, and T2, can only realize the detection of temporal changes, we only show the localization of spatial hot-spots by SSR-Tensor, which is visualized in Figure \ref{fig:hot-spot_map_representative}. \begin{figure}[t] \begin{tabular}{ccccc} \centering \includegraphics[width = 0.19\textwidth]{Y10W8} & \includegraphics[width = 0.19\textwidth]{Y10W19} & \includegraphics[width = 0.19\textwidth]{Y10W30} & \includegraphics[width = 0.19\textwidth]{Y10W42} & \includegraphics[width = 0.19\textwidth]{Y10W51} \\ week 8 & week 19 & week 30 & week 42 & week 51 \end{tabular} \caption{Hot-spot detection result of the circular pattern of W.S. CENTRAL (Arkansas, Louisiana, Oklahoma, Texas). \label{fig:hot-spot_map_representative} } \end{figure} There are some circular patterns in specific areas.
For example, W.S. CENTRAL (Ark., La., Okla., Tex.) tends to have a circular pattern every $11$ weeks, as shown in Figure \ref{fig:hot-spot_map_representative}. Besides, there are also circular patterns for certain states; for instance, Kansas has a bi-weekly pattern, as shown in Figure \ref{fig:acf_and_time_series_for_bi-weekly_pattern}. To validate the bi-weekly circular pattern of Kansas, we plot the time series of Kansas in 2016 as well as its auto-correlation function in Figure \ref{fig:acf_and_time_series_for_bi-weekly_pattern}. The auto-correlation function of the entire US in the left panel of Figure \ref{fig:acf_and_time_series_for_bi-weekly_pattern} serves as a baseline. It can be seen from the middle and right plots of Figure \ref{fig:acf_and_time_series_for_bi-weekly_pattern} that Kansas has some bi-weekly or tri-weekly circular patterns. \begin{figure} \centering \begin{tabular}{ccc} \includegraphics[width = 0.33 \textwidth]{acfCountry} & \includegraphics[width = 0.33 \textwidth]{acfKans} & \includegraphics[width = 0.33 \textwidth]{tsKans} \end{tabular} \caption{Auto-correlation of all US (left) \& Kansas (middle) in 2016, and time series plot of Kansas in 2016 (right). \label{fig:acf_and_time_series_for_bi-weekly_pattern} } \end{figure} \newpage \bibliographystyle{apalike}
\section{What is Effective Altruism?} The term \textit{effective altruism} (often abbreviated as \textit{EA}) was coined in 2011 at Oxford University, by a small group of academic philosophers and individuals involved in charity and philanthropy organizations \cite{macaskill2019definition}. In 2016, the head of this group, William MacAskill, worked with many leaders involved in the EA community to write a definition that has been widely endorsed by the community: \vspace{1mm} \begin{quote} \textit{Effective altruism is about using evidence and reason to figure out how to benefit others as much as possible, and taking action on that basis.} \cite{macaskill2019definition} \end{quote} \vspace{1mm} In 2018, again using input from many EA leaders, MacAskill proposed a more precise definition to be used in academic discussions: \vspace{1mm} \begin{quote} \textit{Effective altruism is: (i) the use of evidence and careful reasoning to work out how to maximize the good with a given unit of resources, tentatively understanding ‘the good’ in impartial welfarist terms, and (ii) the use of the findings from (i) to try to improve the world.} \cite{macaskill2019definition} \end{quote} \vspace{1mm} This definition highlights the double aspect of EA as \textit{(i)} an intellectual project (a research field) and \textit{(ii)} a practical project (a social movement). The definition is non-normative: it does not say how people should behave (e.g., that we should make personal sacrifices to help others). \textit{Welfarist} means that views that assign intrinsic value to other things than well-being (e.g., biodiversity, art, or knowledge) are excluded, while \textit{impartial} means that views that do not weigh people equally (e.g., prioritizing nationals over foreigners) are also excluded. \textit{Tentative} means that the impartial welfarist view is a working assumption that can be debated and refined within EA.
For example, while animal welfare is a central concern for many EA proponents, how much moral weight animals should be given compared to humans remains unclear. These broadly-accepted definitions are very helpful when discussing the merits and weaknesses of EA, because many criticisms of EA arise from people using their own interpretation of what it is. This leads to common misconceptions, such as: EA is just applied utilitarianism, it is only about fighting poverty, it is only about donations or earning to give, and it ignores systemic change (for discussions see \cite{macaskill2019definition}). \section{How can EA Inform Visualization} Because people have different moral intuitions, not all researchers working on -- or considering working on -- humanitarian visualization will find the EA philosophy compelling enough to embrace it. But for those who do, EA can provide a clear thinking framework in an area that has been lacking one. Indeed, many discussions so far have focused on how to design visualizations that elicit empathy, often ignoring that empathy does not necessarily promote helping behavior \cite{morais2022showing}. Even when a visualization does cause people to act, their actions can have a negligible, null, or possibly even negative impact on global human welfare. EA provides clear grounds to think about research goals and metrics of success. The EA lens can also help researchers think out of the box and broaden the scope of humanitarian visualization research by identifying new types of solutions and approaches. In particular, some visualizations may not promote prosocial feelings or behavior -- and thus might not be considered conventional humanitarian visualizations -- but may still promote welfare.
For example, a visualization that helps a charity director effectively allocate money across different health programs does not promote prosocial feelings or behavior (since all the money will be used to help people no matter what), but it can tremendously increase human welfare. \begin{figure*} \centerline{\includegraphics[width=0.9\textwidth]{givewell-table.png}} \caption{Table showing impact metrics for six charities identified as among the most effective by GiveWell in 2021. Source \url{https://www.givewell.org/cost-to-save-a-life}. } \label{fig:givewell-table} \end{figure*} EA is a thinking framework but it is also a community. This community is full of people who are deeply knowledgeable about humanitarian issues or have been extensively involved in humanitarian actions, and thus visualization researchers could learn a lot by connecting with them. In addition, the EA community has unique needs that visualization could help address. For example, several EA organizations do research on the effectiveness of different charities and charity programs, in order to guide potential donors. GiveWell is a well-known example: it maintains a list of top effective charities, primarily based on the cost of a life saved (see \autoref{fig:givewell-table}). GiveWell shares a range of spreadsheets with data and calculations to explain how it arrived at its estimates. All such initiatives generate lots of useful data, but the amount of information can rapidly become overwhelming for potential donors. And yet, data is currently largely communicated through numbers and text, and very rarely through visualizations. Perhaps visualization could also be used by EA communicators to better explain its general principles to naive audiences, and by EA researchers to help them analyze the effectiveness of different charity programs.
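The kind of comparison this effectiveness data supports can be illustrated with a tiny cost-per-outcome computation. All program names and figures below are made up for the sketch; they do not come from GiveWell or any real charity.

```python
# Hypothetical cost-effectiveness comparison across two charity programs.
programs = {
    "Program A": {"donations": 1_000_000, "lives_saved": 200},
    "Program B": {"donations": 1_000_000, "lives_saved": 125},
}

# Cost per life saved: lower is more effective per dollar.
cost_per_life = {name: p["donations"] / p["lives_saved"]
                 for name, p in programs.items()}
best = min(cost_per_life, key=cost_per_life.get)
```

Even this trivial ratio is the sort of quantity a visualization (e.g., a simple bar chart per program) could make immediately comparable for donors.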
\section{Using Visualization and Psychology to Support EA} People share many misconceptions and biases preventing them from helping effectively -- for example, geographical and cultural proximity often greatly affect how much people feel like helping \cite{caviola2021psychology}. Researchers studying humanitarian visualization can take inspiration from recent work on judgment and decision making with visualizations \cite{dimara2018task}, and apply findings and methods from psychology to study how visualizations interact with cognitive biases, and whether visualizations can help alleviate those biases. Unfortunately, much like visualization research, psychology research has mostly focused on how to make people donate more, rather than more effectively. However, Lucius Caviola and colleagues \cite{caviola2021psychology} have recently done a tremendous job at reframing past findings through an EA lens, leading them to identify major psychological obstacles to effectiveness, which fall in two categories: \textit{1. Motivational obstacles.} People think that whether and how to help is largely a matter of personal preference; they give based on how much they feel emotionally connected to the issue (e.g., they feel more strongly about diseases that are common in their country or have affected their loved ones); they dislike prioritizing some causes over others; they view people who try to donate rationally more negatively than those who donate based on empathy. \textit{2. Epistemic obstacles.} People think that charity overhead is wasteful, or find funding overhead unsatisfying; they think that effectiveness cannot be quantified; they do not think clearly about probabilities; they are not aware that charities differ greatly in their effectiveness; they don't know which charities are the most effective. 
Caviola and colleagues also identified four types of strategies to increase effective giving: information, choice architectures and incentives, philosophical reasoning, and norm changes. \textit{Information} addresses epistemic obstacles through education. As I mentioned before, one of the areas where visualization can help is by conveying rich quantitative facts about charity effectiveness in a way that is easy to process. Visualization could also be used to argue for lesser-known EA causes such as wild animal suffering and global catastrophic risks, by conveying data about how serious, neglected and tractable these causes are. Finally, visualization could also help dispel misconceptions, for example by showing data about how charity overhead is employed, together with simulations illustrating how cutting overhead would likely yield less positive outcomes. Information is the type of strategy where the possible benefits of visualizations are the most evident, and where a lot can be done in collaboration with the EA community. \textit{Choice architectures and incentives} address motivational obstacles by nudging (e.g., using effective charity programs as default options) and incentivization (e.g., using donation matching or tax deductions targeted to effective charity programs). Here, possible roles for visualization are less immediately evident, but this type of strategy can potentially lead to the most interesting innovations and contributions to knowledge. In particular, it could be interesting to study which nudging techniques can translate to visualizations. For example, a well-documented bias is the decrease of people's concern for individual victims as the number of victims increases: a tragedy that affects one million people typically does not generate 100 times more concern or donations than a tragedy that affects a thousand people \cite{caviola2021psychology}. 
However, this effect is less pronounced when donors evaluate all options at the same time -- which could mean seeing data about multiple tragedies visualized side-by-side -- than if they evaluate the options sequentially. The last two categories listed by Caviola and colleagues, \textit{philosophical reasoning} (exposing people to philosophical arguments) and \textit{norm changes} (pushing for a change of moral standards) are important but probably less directly relevant to visualization. \section{Conveying Personal Experiences with Quantitative Facts} Again, a major way in which visualization can support EA is by helping people compare charity programs. To take a trivial example, an EA website could include as an overview of its top programs a bar chart of the number of lives saved per unit of donation for each program. \vspace{3mm} \noindent\fbox{\parbox{6.8cm}{\small{ Sources for the figures mentioned in this section: \begin{itemize} \item Against Malaria Foundation (2022), Why nets? \url{https://www.againstmalaria.com/WhyNets.aspx} \item F. Ricci (2021) Social implications of malaria and their relationships with poverty. \textit{Med. j. of hematology and infectious diseases.} \item World Health Organization (2022) Vitamin A deficiency \url{https://www.who.int/data/nutrition/nlis/info/vitamin-a-deficiency} \item Stephen Clare (2020) Homelessness in the US and UK Executive Summary \url{https://www.founderspledge.com/stories/homelessness-in-the-us-and-uk-executive-summary} \item John Halstead (2019) Founders Pledge -- Mental Health Executive Summary \url{https://founderspledge.com/stories/mental-health-report-summary}\vspace{-2mm} \end{itemize} }}} \vspace{2mm} However, this is only a minimalist example, and important visualization design challenges arise when a variety of outcomes need to be visualized and compared. For example, about 600 mosquito nets prevent the death of a child, but they also prevent 500 to 1,000 cases of malaria. 
This is an enormous benefit in and of itself, as malaria is a crippling disease with flu-like symptoms that can periodically return, can be highly disruptive for the life of households, and can leave children disabled. Similarly, GiveWell lists a charity that saves lives by giving vitamin A supplements to children, but even when it is not fatal, vitamin A deficiency causes a range of terrible problems such as repetitive infections and blindness. GiveWell sometimes goes beyond lives saved and considers charities expected to impact the recipient's lifetime earnings (treatments for parasitic worm infections) or their overall quality of life (cash transfers for extreme poverty). Another effective altruism website lists a charity that can use about \$20,000 to prevent a year of homelessness in the US or UK, and another one that can use \$200--\$300 to prevent the equivalent of one year of severe major depressive disorder for a woman in Uganda. It is very hard to imagine how to visualize those widely different types of outcomes in a way that supports informed, effective-altruist decisions. Ideally, a major donation or funds allocation decision should be based both on quantitative facts (e.g., the number of people affected, the cost of interventions) and a deep understanding of people's subjective experiences with and without the interventions, especially concerning the degree of physical and psychological suffering involved. However, it is hard for a person who has never contracted malaria or never had a vitamin A deficiency to have a reliable intuition of what those experiences entail. This is where stories -- in the form of text, images, graphic novels, movies or video games -- could play an important role by helping people understand subjective experiences on a visceral level. I have previously emphasized the limits of storytelling for EA purposes, but certain ways of combining stories with data may be very effective at supporting EA.
One potentially effective strategy could be to \textit{(i)} use stories to give a qualitative understanding of the personal experiences involved in a human tragedy, and \textit{(ii)} use data to give a quantitative understanding of the extent of the tragedy. It seems important that both elements are provided in order to support EA decisions. In particular, stories of personal tragedies provide a proof of existence but can give a distorted vision of reality in the presence of selection bias: news media, for example, often select atypical stories based on their shock value. But if personal stories are complemented with clear data about how representative they are, viewers will get a more accurate appreciation of the extent of the problems and of the magnitude of the human suffering involved. \begin{figure} \centerline{ \includegraphics[height=6.6pc, trim={4mm 0 12mm 0}, clip]{rosling-1.jpg} \includegraphics[height=6.6pc]{rosling-2.png} } \caption{Excerpts from Hans Rosling's TED talk \textit{The Magic Washing Machine}, which combines a personal story (left) with data (right). Source \url{https://www.ted.com/}. } \label{fig:rosling} \end{figure} It will likely be a major research challenge to find out more concretely how to effectively combine stories with quantitative data. There are at least two possible approaches: in a \textit{data-then-story} approach, people would view statistical data about tragedies or social issues, and then zoom into individuals to see their personal stories, either real or hypothetical. The choice of individuals may be decided by the viewer following a detail-on-demand approach, or it may follow a random sampling scheme. 
Meanwhile, in a \textit{story-then-data} approach, people would first see one or several typical stories (for example, the daily life of someone with disease A or disease B), and would then be able to explore statistical data (for example, the prevalence of those two diseases, and how they could be reduced with different interventions). An example of story-then-data approach is Hans Rosling's talk \textit{The Magic Washing Machine} (\autoref{fig:rosling}): he first tells a story that gives a powerful account of how life-changing washing machines are, and then goes through data about how many people in the world have access to them, and how this is likely to change with economic growth. It is challenging to reconcile the world of numbers with the world of subjective experience, but not impossible -- for example, if an effective altruist judges that having disease A is twice as bad as having disease B, they could conclude that preventing 10,000 cases of disease A is equally desirable as preventing 20,000 cases of disease B. \section{Emerging Technologies} In visualization research, there has been a lot of interest in conveying visualizations through other media than computer screens, like physical objects \cite{dragicevic2020data} and mixed reality displays \cite{kraus2021value}. In a previous position paper \cite{dragicevic2022towards}, I discuss the interesting research opportunities offered by such media for the purpose of humanitarian visualization. I summarize them here. \begin{figure} \centerline{\includegraphics[width=18.5pc]{ivanov.png}} \caption{VR visualization of mass shooting data in the US. Source \cite{ivanov2019walk}. } \label{fig:ivanov} \end{figure} \textit{Virtual reality}. By providing a way for viewers (e.g., donors or charity managers) to immerse themselves more fully into personal stories, virtual reality (VR) may help enhance their visceral understanding of human issues and tragedies. 
VR documentaries already exist that cover topics such as war, migration, and diseases. Such immersive stories could be combined with immersive data visualizations for EA purposes. This idea has started to be explored by Ivanov and colleagues \cite{ivanov2019walk}, who designed a VR visualization of mass shooting casualties in the US (\autoref{fig:ivanov}). Each silhouette represents a person who died from a mass shooting in the US. Viewers can step back to get an overview of the dataset (A in the figure), or come closer to gather information about individual victims such as their age group or gender, which are encoded by the shape of the silhouette (B, C). The concept from Ivanov et al. is only a starting point, as one could imagine conveying richer qualitative information about each victim like their physical appearance (as some memorials do by showing photo portraits) or elements of their personal stories, which viewers could choose to relive from a first-person perspective. Unfolding or hypothetical humanitarian issues could be conveyed in a similar manner using a combination of data visualizations and immersive video footage (such as already used in VR documentaries) or simulated scenes. VR could also be used to convey the positive outcomes of donations. For example, one could imagine an immersive version of GDLive (\url{https://live.givedirectly.org/}), a website that posts information and updates about recipients of cash transfers. \textit{Augmented reality.} Augmented reality (AR) can create illusions of objects and people around us, including objects and people that exist remotely. This opens up unprecedented possibilities for bringing the lives of distant suffering people closer to our own, and making humanitarian issues more salient or more memorable.
For example, if a person walks in a refugee camp that has been temporarily relocated in their backyard, they may create a mental association and remember the refugees each time they see (or even think about) their backyard. Visualizing data about refugee camps in such a way could thus give a much more lasting impression. In contrast, VR can subjectively transport viewers in distant places, but once the viewers are back, the event is remote again. As with VR, AR could also be used to convey positive outcomes of charitable donations. In the context of a donor/recipient pairing program, future AR technology may even make it possible for a donor to meet a past recipient on the street and chat with them: a long-distance cash transfer may suddenly feel like helping out an acquaintance in a small village. Finally, in the future, effective altruists may be able to use wearable AR devices as commitment devices, e.g., to get regularly reminded of remote tragedies or ways they can redirect unnecessary personal expenses to humanitarian causes. \begin{figure} \centerline{\includegraphics[width=18.5pc]{plants.png}} \caption{Physical data visualizations of 28 cases of sexual harassment -- each plant conveys data reported by one person. Source \cite{morais2022exploring}. } \label{fig:plants} \end{figure} \textit{Data physicalization}. Data physicalizations are physical entities whose shape or geometry encodes data \cite{dragicevic2020data}. Public spaces already contain physical objects that convey past human tragedies, such as memorials, sculptures and cemeteries. However, few of them focus on current issues and few of them convey quantitative facts, both of which are important for EA purposes. Rare exceptions include data sculptures (artistic data physicalizations) with a focus on humanitarian data, and occasional explorations by visualization researchers like the \textit{Harassment Plants} (\autoref{fig:plants}). 
Like AR visualizations, physical visualizations can be embedded in our everyday environment. But unlike AR visualizations, they are always present, they can be touched, and do not need special equipment to be seen. On the other hand, AR content can be created and displayed at will. \textit{Ambient displays}. Ambient displays are displays that \shortquote{present information within a space through subtle changes in light, sound, and movement, which can be processed in the background of awareness} \cite{wisneski1998ambient}. In particular, research projects have explored how ambient displays can support remote intimacy -- for example, the color of a lamp may change according to the affective state of a remote intimate partner captured through a wearable biofeedback device. Similar devices could be used to convey quantitative information about the plight of large populations of distant and anonymous people, such as the number of hospitalizations during a pandemic or the number of war casualties. Such ambient displays could give a continuous impression of the severity of an ongoing humanitarian crisis without having to constantly poll news reports. \section{CONCLUSION} Effective altruism offers both a new thinking framework and new questions and problems for visualization research. Yet, it appears that there has been virtually no collaboration so far between EA actors and visualization researchers, perhaps largely due to a lack of mutual awareness between the two communities. But this is changing, as EA is becoming mainstream and highly influential \cite{matthews2022how}. There are many fascinating questions and problems at the intersection of the two areas and unique opportunities for collaboration, so it is time visualization researchers reach out to the EA community and vice versa. \nocite{*} \balance \bibliographystyle{IEEEtran}
\section*{Acknowledgments} We would like to thank Stephen Barr and Mark Wise for useful suggestions. The work of X.C.~is supported in part by the Science and Technology Facilities Council (grant numbers ST/T00102X/1, ST/T006048/1 and ST/S002227/1). The work of F.K.~is supported by a doctoral studentship of the Science and Technology Facilities Council.
\section{Introduction} \normalsize Time division duplexing (TDD) and frequency division duplexing (FDD) are the commonly used techniques to protect receivers from their overwhelming self-interference (SI). This implies that the resources (i.e., time or frequency) are divided between the forward and reverse links, which creates a performance trade-off between them. SI cancellation (SIC) eliminates this trade-off via in-band full-duplex (FD) communication, which gives the forward and reverse links the opportunity to simultaneously utilize the complete set of resources~\cite{Full2013Bharadia,Applications2014Hong,InBand2014Sabharwal,A2015Kim,On2014Alves}. FD transceivers are capable of sufficiently attenuating (up to -110 dB \cite{Full2015Goyal}) their own interference (i.e., SI) and simultaneously transmitting and receiving on the same channel, which offers higher bandwidth (BW) for FDD systems and longer transmission time for TDD systems. Consequently, FD communication improves the performance of both the forward and reverse links, where the improvement depends on the efficiency of SIC. When extending FD communication to large-scale networks, SI is not the only bottleneck, because the mutual interference increases when compared to the half-duplex (HD) case. This is because each FD link contains two active transmitters while each HD link contains one active transmitter and one passive receiver. Therefore, rigorous studies that capture the effect of network interference on FD communication are required to draw legitimate conclusions about its operation in a large-scale setup. In this context, stochastic geometry can be used to model FD operation in large-scale networks and understand its behavior \cite{Stochastic2013ElSawy}. Stochastic geometry has succeeded in providing a systematic mathematical framework for modeling both ad-hoc and cellular networks \cite{Stochastic2012Haenggi,A2011Andrews,Stochastic2013ElSawy,Stochastic2015Lu}.
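The dependence of the FD gain on SIC efficiency can be illustrated with a toy link-budget computation. All link values below (desired signal power, noise floor, transmit power) are hypothetical; only the -110 dB SIC figure echoes the number cited above.

```python
import math

def db_to_lin(db):
    # Convert a dB (or dBm, relative to 1 mW) value to linear scale.
    return 10 ** (db / 10)

signal_dbm, noise_dbm, tx_dbm = -60.0, -90.0, 20.0   # hypothetical link budget

def fd_sum_rate(sic_db):
    # Residual self-interference power after SIC, in mW.
    residual_si_mw = db_to_lin(tx_dbm - sic_db)
    sinr = db_to_lin(signal_dbm) / (db_to_lin(noise_dbm) + residual_si_mw)
    return 2 * math.log2(1 + sinr)   # both directions active simultaneously

# Half-duplex baseline: one direction at a time, no self-interference.
hd_rate = math.log2(1 + db_to_lin(signal_dbm - noise_dbm))  # bits/s/Hz
```

With strong SIC (e.g., 110 dB), the FD sum rate exceeds the HD rate; with weaker SIC (e.g., 80 dB), the residual SI swamps the desired signal and FD falls below HD, which is the trade-off the surveyed works study at network scale.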
Despite the higher interference injected into the network, recent studies have shown that FD communication outperforms HD communication in a large-scale setup if sufficient SIC is achieved. For instance, the asymptotic study in \cite{Does2015Xie} shows a maximum improvement of $80\%$ rate gain, which decreases monotonically with the link distance, for FD communication over the HD case. A more realistic ad-hoc network setup in \cite{Throughput2015Tong2} shows that FD offers an average of $33\%$ rate gain when compared to the HD operation. In the case of cellular networks, \cite{Hybrid2015Lee} shows around $30\%$ improvement in the total rate for FD when compared to the HD case. The authors in \cite{Throughput2016Goyal} show that the increased aggregate interference in FD networks creates a trade-off between the average spectral efficiency and the coverage probability. However, \cite{Full2015Goyal} reveals that the FD gains in cellular networks are mainly confined to the DL due to the high disparity between uplink (UL) and downlink (DL) transmission powers. Furthermore, the authors in \cite{Limits2015Tsiky, AlAmmouri2015Inband,Interference2016Randrianantenaina,Can2016ElSawy} show that when a constrained power control is employed in the UL, the FD communication gains in the DL may come at the expense of high degradation in the UL. The authors in \cite{Limits2015Tsiky} advise using FD communication in small-cell tiers, such that the users' equipment (UEs) and base stations (BSs) have comparable transmit powers. For FD operation in macro tiers with high disparity between UL and DL transmit powers, the authors in \cite{AlAmmouri2015Inband} advocate using pulse shaping along with partial overlap between UL and DL spectrum to neutralize DL-to-UL interference and avoid deteriorating the UL rate. With pulse shaping and partial UL/DL overlap, \cite{AlAmmouri2015Inband} shows a simultaneous improvement of $33 \%$ and $28\%$ in the UL and DL, respectively.
It ought to be mentioned that, in addition to the UL/DL transmit power disparity, the asymmetric UL/DL traffic that naturally exists in practical cellular networks imposes another challenge to the FD operation~\cite{Throughput2015Mahmood}. To harvest the aforementioned gains, FD transceivers are required on both sides of each link. However, cellular network operators can only upgrade their BSs and do not have direct access to upgrade UEs. Furthermore, the high cost of FD transceivers, in terms of complexity, power consumption, and price, may impede their penetration into the UEs' domain. Therefore, techniques to achieve FD gains in cellular networks with FD BSs and HD UEs are required. In this context, the 3-node topology (3NT) is proposed in \cite{Full2014Sundaresan, Hybrid2015Lima, Analyzing2013Goyal,Full2015Mohammadi,Outage2015Psomas,Throughput2016Goyal} to harvest FD gains by serving two HD UEs within each FD BS. In 3NT, the BSs have SIC capabilities and can simultaneously serve HD UL and HD DL users on the same channels. That is, each BS can merge each UL/DL channel pair into a larger channel and reuse that channel to serve a UL user and a DL user simultaneously. The studies in \cite{Hybrid2015Lima,Full2014Sundaresan,Analyzing2013Goyal,Full2015Mohammadi} show the potential of 3NT to harvest FD gains. However, the results in \cite{Full2014Sundaresan} are based on simulations, and the results in \cite{Analyzing2013Goyal,Full2015Mohammadi,Outage2015Psomas,Hybrid2015Lima} are based on simplistic system models. In this paper, we present a unified mathematical framework, based on stochastic geometry, to model 3NT (i.e., FD BSs and HD UEs) and the 2-node topology (2NT) (i.e., FD BSs and FD UEs) in multi-tier cellular networks. The proposed mathematical framework is then used to conduct a rigorous comparison between 3NT and 2NT.
Different from \cite{Analyzing2013Goyal,Full2015Mohammadi,Outage2015Psomas}, the presented system model accounts for the explicit performance of UL and DL for cell center users (CCUs) and cell edge users (CEUs) in a multi-tier cellular network. It also captures more realistic system parameters than \cite{Analyzing2013Goyal,Full2015Mohammadi,Outage2015Psomas} by accounting for pulse shaping, matched filtering, UL power control, the maximum power constraint for UEs, UE scheduling, and the different BSs' characteristics in each network tier. When compared to \cite{AlAmmouri2015Inband}, the proposed framework considers a multi-tier network with different FD topologies (i.e., 2NT and 3NT), flexible association, different path-loss exponents between different network elements, and incorporates uncertainties in the SIC. However, we exploit the fine-grained duplexing strategy proposed in \cite{AlAmmouri2015Inband} that allows partial overlap between the UL and DL channels, which is denoted as the $\alpha$-duplex ($\alpha$D) scheme. The parameter $\alpha \in [0,1]$ controls the amount of overlap between UL and DL channels and captures the HD (at $\alpha=0$) and FD (at $\alpha=1$) modes as special cases. Besides optimizing the spectrum allocation between the UL and DL, the parameter $\alpha$ reveals the gradual effect of the interference induced via FD communication on the system performance. The results show that 3NT can achieve performance within 5$\%$ of the 2NT with FD UEs that have efficient SIC, provided that multi-user diversity and UE scheduling are exploited. On the other hand, if the FD UEs in the 2NT have poor SIC, the 3NT achieves a better performance. In both cases, it is evident that network operators do not need to bear the burden of implementing SIC in the UEs to harvest FD gains.
The rest of the paper is organized as follows: in Section II, we present the system model and methodology of the analysis. In Section III, we analyze the performance of the $\alpha$-duplex system. Numerical and simulation results with discussion are presented in Section IV before presenting the conclusion in Section V. \textit{\textbf{Notations}}: $\mathbb{E} [.]$ denotes the expectation over all the random variables (RVs) inside $[.]$, $\mathbb{E}_{x} [.]$ denotes the expectation with respect to (w.r.t.) the RV $x$, $\mathbbm{1}_{\{.\}}$ denotes the indicator function which takes the value $1$ if the statement $\{.\}$ is true and $0$ otherwise, $.*$ denotes the convolution operator and $S^{*}$ denotes the complex conjugate of $S$, $\mathcal{L}_{x} (.)$ denotes the Laplace transform (LT) of the probability density function (PDF) of the RV $x$, and \textit{italic} letters are used to distinguish variables from constants. \section{System Model} \subsection{Network Model}\label{sec:NetModel} A $K$-tier cellular network is considered, in which the BSs\footnote{In this work we assume that both the BSs and the UEs are equipped with a single antenna. Combining FD with multiple-input and multiple-output (MIMO) transmitters is covered in \cite{Full2015Atzeni,Directional2016Psomas}. } in each tier are modeled via independent homogeneous 2-D Poisson point processes (PPPs) \cite{Stochastic2012Haenggi} $\Phi^{(k)}_{{\rm d}}$, where $k \in \{1,2,\ldots,K \}$, with intensity $\lambda_k$. The location of the $i^{th}$ BS in the $k^{th}$ tier is denoted by $x_{k,i} \in \mathbb{R}^2$. Besides simplifying the analysis, the PPP assumption for abstracting cellular BSs is verified by several experimental studies \cite{Stochastic2012Haenggi, A2011Andrews,Spatial2013Guo}. UEs are distributed according to a PPP $\Phi_{\rm u}$, which is independent of the BSs' locations, with intensity $\lambda_{\rm u}$, where $\lambda_{\rm u}\gg \sum\limits_{k=1}^{K}\lambda_k$.
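As a quick numerical sketch of this network model (with hypothetical intensities, not values from the paper), the $K$-tier BS layout and the much denser UE process can be drawn as independent homogeneous PPPs: a Poisson number of points, placed uniformly in the observation window.

```python
import numpy as np

def sample_ppp(intensity, side, rng):
    """Sample a homogeneous 2-D PPP on a [0, side] x [0, side] window:
    draw a Poisson number of points, then place them uniformly."""
    n = rng.poisson(intensity * side ** 2)
    return rng.uniform(0.0, side, size=(n, 2))

rng = np.random.default_rng(0)
side = 10.0
tier_intensities = [1.0, 3.0, 8.0]   # hypothetical macro/micro/pico BS densities
tiers = [sample_ppp(lam, side, rng) for lam in tier_intensities]
# UEs form an independent, much denser PPP (lambda_u >> sum of the lambda_k).
users = sample_ppp(100.0, side, rng)
for lam, pts in zip(tier_intensities, tiers):
    print(f"tier intensity {lam}: {len(pts)} BSs in the window")
```

Each tier is sampled independently, matching the independence assumption across $\Phi^{(k)}_{\rm d}$ and $\Phi_{\rm u}$.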
Within each tier, all BSs transmit with a constant power $P_{\rm d}^{(k)}$; however, the value of $P_{\rm d}^{(k)}$ varies across different tiers. In contrast, UEs employ a truncated channel inversion power control with a maximum transmit power constraint of $P_{\rm{u}}$ \cite{On2014ElSawy}. That is, each UE compensates for its path-loss to maintain a tier-specific target average power level of $\rho^{(k)}$ at the serving BS. UEs that cannot maintain the threshold $\rho^{(k)}$ transmit with their maximum power $P_{\rm{u}}$. UEs that can maintain the threshold $\rho^{(k)}$ are denoted as cell center users (CCUs), while UEs that transmit with their maximum power are denoted as cell edge users (CEUs)\cite{Load2014AlAmmouri}. The power of all transmitted signals experiences a power-law path-loss attenuation with exponent $\eta>2$. Due to the different relative antenna heights and propagation environments, we discriminate between the path-loss exponents for the paths between two BSs (DL to UL interference), two UEs (UL to DL interference), and a BS and a UE (UL to UL interference), which are respectively denoted by $\eta_{\rm du},\eta_{\rm ud}$, and $\eta_{\rm uu}$, as shown in Fig. \ref{fig:Network1}. Assuming channel reciprocity, the path-loss exponent between a BS and a UE (i.e., DL to DL interference), denoted by $\eta_{\rm dd}$, is equivalent to the one between a UE and a BS (i.e., UL to UL interference) $\eta_{\rm uu}$, and hence, both symbols are used interchangeably\footnote{We assume that the path-loss exponents in the different directions are different but equivalent in all tiers. Assuming equal path-loss exponents in all tiers is a common simplifying assumption in the literature \cite{On2014ElSawy,Modeling2012Dhillon,Uplink2009Chandrasekhar}.}.
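The truncated channel inversion rule above can be sketched as follows; the numbers for $\rho$, $P_{\rm u}$, and $\eta$ are illustrative placeholders, not values from the paper. A UE at distance $r$ transmits $\min(\rho r^{\eta}, P_{\rm u})$ and is a CCU exactly when $\rho r^{\eta} \le P_{\rm u}$, i.e., when $r \le (P_{\rm u}/\rho)^{1/\eta}$.

```python
import numpy as np

def ul_power(r, rho, eta, p_max):
    """Truncated channel-inversion power control: invert the path loss r**eta
    to hit the target received power rho, capped at p_max.
    Returns (transmit power, is_CCU)."""
    desired = rho * r ** eta
    return min(desired, p_max), desired <= p_max

# Illustrative numbers (assumed, not from the paper): rho = -70 dBm, eta = 4,
# and a 23 dBm maximum UE power.
rho = 10 ** (-70 / 10) * 1e-3      # watts
p_max = 10 ** (23 / 10) * 1e-3     # watts
eta = 4.0
for r in (50.0, 150.0, 400.0):     # metres
    p, ccu = ul_power(r, rho, eta, p_max)
    print(f"r={r:6.1f} m -> P_tx={10*np.log10(p/1e-3):6.1f} dBm, {'CCU' if ccu else 'CEU'}")
```

The cutoff radius $(P_{\rm u}/\rho)^{1/\eta}$ is exactly the CCU/CEU boundary used later in Lemma 1.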
Also, Rayleigh fading channels are assumed such that the channels' power gains are independent and identically distributed (i.i.d.) exponential RVs with unit means\footnote{Extending the results to capture other fading models can be done following \cite{Average2013Renzo,Modeling2016AlAmmouri}.}. \subsection{Operation Modes and Spectrum Allocation} \begin{figure}[t] \centerline{\includegraphics[width= 4.5in]{./Nm_1c.eps}} \caption{\, Channel allocation, interference types, and path-loss exponents for a) 2NT and b) 3NT.} \label{fig:Network1} \end{figure} We consider a fine-grained $\alpha$D scheme that allows partial overlap between UL and DL channels and captures the FD and HD modes as special cases. We denote the BWs used in the HD case in the UL and DL, respectively, as $B^{\rm HD}_{\rm{u}}$ and $B^{\rm HD}_{\rm{d}}$, in which $B^{\rm HD}_{\rm{u}}$ and $B^{\rm HD}_{\rm{d}}$ are not necessarily equal. To avoid adjacent channel interference, the BSs utilize a guard band of $\epsilon B$ between each UL-DL pair of bands, where $B= {{\rm min} (B^{\rm HD}_{\rm d},B^{\rm HD}_{\rm u})}$\footnote{ The scheme proposed in \cite{AlAmmouri2015Inband} is captured by setting $\epsilon$ to zero, since no guard bands are assumed there.}. As shown in Fig. \ref{fig:Network2}, the BW used in the $\alpha$D DL is $B_{\rm d}(\alpha) = B^{\rm HD}_{\rm{d}}+ \alpha (\epsilon +1) B$, and in the $\alpha$D UL is $B_{\rm u}(\alpha)=B^{\rm HD}_{\rm{u}}+ \alpha (\epsilon +1) B$. Note that the parameter $\alpha$ controls the partial overlap between the UL and DL frequency bands. Also, the HD and FD modes are captured as special cases by setting $\alpha$ to 0 and 1, respectively. It is assumed that each tier has its own duplexing parameter $\alpha_k$, which is used by all BSs within that tier. \normalsize \begin{figure*}[t!]
\centering \begin{subfigure}[t]{0.4\textwidth} \centerline{\includegraphics[width= 2.2 in]{./BANDS.eps}}\caption{\, Frequency bands allocation.} \label{fig:Network2} \end{subfigure}% ~ \begin{subfigure}[t]{0.4\textwidth} \centerline{\includegraphics[width= 1.6 in]{./P7.eps}}\caption{\, UEs' scheduling in 3NT.} \label{fig:Network4} \end{subfigure} \caption{Frequency bands allocation and UEs' scheduling.}\label{fig:spectrum} \end{figure*} Without loss of generality, we assume that each BS has only two pairs of UL-DL channels that are universally reused across the network. For simplicity, we assume that the two channel pairs are sufficiently separated in the frequency domain (i.e., $f^{\rm HD}_{\rm u1} < f^{\rm HD}_{\rm d1} \ll f^{\rm HD}_{\rm u2} <f^{\rm HD}_{\rm d2}$) to avoid adjacent channel interference between different UL-DL pairs. It is worth noting that the idealized rectangular frequency domain pulse shapes shown in Fig. \ref{fig:Network2} are used for illustration only. However, as discussed later, we use time-limited pulse shapes that impose adjacent channel interference due to the out-of-band ripples in the frequency domain. In the 2NT network, UEs have FD transceivers and can use the UL and DL belonging to the same UL-DL pair for their $\alpha$D operation. In contrast, 3NT UEs have HD transceivers and cannot transmit and receive on overlapping channels. Hence, each HD user is assigned its UL and DL channels from two different UL-DL pairs as shown in Fig. \ref{fig:Network1} and Fig. \ref{fig:Network2}. Consequently, 3NT UEs can benefit from the larger BW channels without SI. Note that the FD BSs in all cases, as well as the FD UEs in the 2NT, would experience SI as shown in Fig. \ref{fig:Network1}. In contrast, 3NT UEs experience intra-cell interference in the DL direction due to the partial overlap between the UL channel of one UE and the DL channel of the other UE.
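The $\alpha$D bandwidth allocation, $B_{\rm d}(\alpha) = B^{\rm HD}_{\rm d}+ \alpha (\epsilon +1) B$ and $B_{\rm u}(\alpha)=B^{\rm HD}_{\rm u}+ \alpha (\epsilon +1) B$ with $B= \min (B^{\rm HD}_{\rm d},B^{\rm HD}_{\rm u})$, can be sketched in a few lines; the 10 MHz bands and $5\%$ guard band below are illustrative, not values from the paper.

```python
def alpha_duplex_bw(b_d_hd, b_u_hd, alpha, eps):
    """UL/DL bandwidths under the alpha-duplex scheme: each link grows into
    the other's band (plus the guard band) in proportion to alpha."""
    assert 0.0 <= alpha <= 1.0
    b = min(b_d_hd, b_u_hd)
    grow = alpha * (eps + 1.0) * b
    return b_d_hd + grow, b_u_hd + grow

# Illustrative: 10 MHz HD bands in each direction, 5% guard band.
for alpha in (0.0, 0.5, 1.0):
    bd, bu = alpha_duplex_bw(10e6, 10e6, alpha, eps=0.05)
    print(f"alpha={alpha}: B_d={bd/1e6:.2f} MHz, B_u={bu/1e6:.2f} MHz")
```

Setting $\alpha=0$ recovers the HD bands, while $\alpha=1$ gives each link its own band plus the guard band plus the full band of the other link.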
To this end, we assume that the BSs exploit multi-user diversity to control intra-cell interference in 3NT by imposing a minimum separation angle constraint $\delta_o$ between users scheduled on the same channel as shown in Fig. \ref{fig:Network4}\footnote{More advanced and sophisticated scheduling and multi-user diversity techniques are postponed for future work.}. In sectored BSs, the value of $\delta_o$ can be estimated to a certain accuracy depending on the number of sectors. If the BSs cannot estimate the angles between users, then $\delta_o$ is set to zero and we refer to this case as random scheduling. For the FD BSs and 2NT UEs, we denote the SI attenuation power as $\beta_{\rm u} h_{\rm s}$ and $\beta_{\rm d} h_{\rm s}$, respectively, where $\beta_{\rm u}$, $\beta_{\rm d}$ are positive constants representing the mean SIC power values in the UL and DL, respectively, and $h_{\rm s}$ follows a general unit-mean distribution with PDF given by $f_{H_{\rm s}}(\cdot)$, which represents the uncertainty in SIC. Three special cases of interest for $f_{H_{\rm s}}(\cdot)$ are considered: constant attenuation, where $f_{H_{\rm s}}(\cdot)$ is a degenerate distribution as in \cite{AlAmmouri2015Inband,Intra2015Yun,Hybrid2015Lee}; random attenuation, where $f_{H_{\rm s}}(\cdot)$ is an exponential distribution as in \cite{Full2015Mohammadi}; and Rician fading as in \cite{Full2015Atzeni}, which captures the previous two cases as special cases. It is shown in \cite{Harvesting2016AlAmmouri} that all distributions lead to the same performance trends. \subsection{UEs to BSs Association}\label{sec:Association} \normalsize \begin{figure*}[t!]
\centering \begin{subfigure}[t]{0.3\textwidth} \centerline{\includegraphics[width= 1.8 in]{./Reg.eps}}\caption{\, $\tau_{j}=1$.} \label{fig:CBA} \end{subfigure}% ~ \begin{subfigure}[t]{0.3\textwidth} \centerline{\includegraphics[width= 1.8 in]{./Pow.eps}}\caption{\, $\tau_{j}=(P_{{\rm d}}^{(j)})^{\frac{-1}{\eta}}$.} \label{fig:SBA} \end{subfigure} \caption{A realization of the association areas assuming different association factors, where the green squares, diamonds, and circles represent macro, micro, and pico BSs, respectively.}\label{fig:association}\vspace{-0.8cm} \end{figure*} We consider a biased and coupled\footnote{Decoupled UL/DL association is analyzed using stochastic geometry in \cite{Joint2015Singh} for a traditional HD multi-tier network; extending this analysis to decoupled association is postponed to future work.} BS-UE association scheme. Biasing is used to enable flexible load balancing between tiers by encouraging UEs to connect to lower power BSs to balance the average load served by the tiers across the network~\cite{An2014Andrews,HetNets2013Elsawy}. We define a distance-dependent biasing factor $\tau$ and assume that all BSs within the same tier have the same biasing factor. Hence, a UE connects to the $k^{th}$ tier if $\{ \tau_{k} r_{k}<\tau_{i} r_{i} \ \forall \ i\in\{1,..,K\}, \ k\neq i \}$. The used association scheme captures different association strategies as special cases. For example, if $\tau$ is set to the same value for all tiers, then closest BS association is considered; if $\tau_k=(P^{(k)}_{{\rm d}})^{\frac{-1}{\eta_{\rm dd}}}$, then the UE connects to the BS providing the highest received signal strength (RSS). Note that different association schemes change the relative BSs' association areas across the tiers as shown in Fig. \ref{fig:association}, where a three-tier network is shown with $10$ W macro BSs, $5$ W micro BSs, and $1$ W pico BSs\footnote{The values of the transmit powers are based on \cite{Hierarchical2011Jain}.}.
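The biased association rule above reduces to a one-line criterion: connect to the tier minimizing $\tau_k r_k$, where $r_k$ is the distance to the nearest BS of tier $k$. A minimal sketch (BS intensities and powers are illustrative; the $10$ W / $1$ W pair mimics the macro/pico disparity of the figure):

```python
import numpy as np

def serving_tier(ue, tiers, taus):
    """Biased association: pick the tier minimizing tau_k * r_k, where r_k is
    the distance from the UE to the nearest BS of tier k."""
    metrics = [tau * np.min(np.linalg.norm(bs - ue, axis=1))
               for bs, tau in zip(tiers, taus)]
    return int(np.argmin(metrics))

rng = np.random.default_rng(2)
side = 10.0
tiers = [rng.uniform(0, side, size=(rng.poisson(lam * side ** 2), 2))
         for lam in (1.0, 4.0)]          # illustrative macro / pico intensities
powers = np.array([10.0, 1.0])           # 10 W macro, 1 W pico
eta = 4.0
ue = np.array([side / 2, side / 2])
# Equal taus -> nearest-BS association; tau_k = P_k**(-1/eta) -> max-RSS.
print("nearest-BS association ->", serving_tier(ue, tiers, taus=(1.0, 1.0)))
print("max-RSS association    ->", serving_tier(ue, tiers, taus=tuple(powers ** (-1 / eta))))
```

Under max-RSS biasing the high-power tier wins a larger association area, which is exactly the effect visible between Figs. 3(a) and 3(b).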
In Fig. \ref{fig:CBA}, nearest BS association is considered, and hence, the association areas are represented by a Voronoi tessellation \cite{Generalized1986Geometriae}. In Fig. \ref{fig:SBA}, the UE connects according to the RSS; in this case, the association areas form a multiplicatively weighted Voronoi tessellation (also denoted as a circular tessellation) \cite{Generalized1986Geometriae}. \subsection{Pulse Shaping}\label{sec:PulseShaping} We employ time-limited pulse shapes\footnote{In this work, we focus on time-limited pulse shapes to avoid inter-symbol interference (ISI), since including the effect of ISI would significantly complicate the analysis. However, frequency-limited Nyquist pulses (e.g., root raised cosine) can also be used, since they also protect the nodes from ISI. For more information on the effect of different pulse shapes on the $\alpha$-duplex scheme, refer to [17].}, denoted as $s(t,{\rm BW},b_v) \overset{\rm{FT}}{\longleftrightarrow} S(f,{\rm BW},b_v)$ with unit energy, where ${\rm BW}$ is the pulse null-to-null bandwidth, and $b^{(k)}_{\rm d}$ and $b^{(k)}_{\rm u}$ indicate the pulse types used by the $k^{th}$ tier in the DL and UL, respectively. We assume a flexible pulse shaping scheme, where each tier has its own pulse shapes in the DL and UL; however, all BSs within the same tier use the same pulse shapes. To have a unified effective BW for all values of $\alpha_i$ in the $\alpha$D mode, the null-to-null BW of the pulse shapes is kept equal to the channel BW. Hence, the pulse shapes are also functions of the parameter $\alpha_k$. \subsection{Base-band Signal Representation} For the sake of simple presentation, we use $\alpha_k$, $b^{(k)}_{\rm d}$ and $b^{(k)}_{\rm u}$ to denote the duplexing factor, the DL pulse shape, and the UL pulse shape, respectively, in the $k^{th}$ tier.
Also, we use $v,\bar{v},$ and $w$ to indicate the desired transmission, where $v,\bar{v},w\in \{{\rm d},{\rm u} \}, v\neq \bar{v}$, for DL and UL, respectively, and $i,k$ as BSs' tier index, where $i,k \in \{1,...,K\}$. Exploiting this notation, the received baseband signal at the input of the matched filter of a test transceiver in the $i^{th}$ tier (BS or UE) can be expressed~as \small \begin{align} \!\!\!\!\!\!\!\!\!\!\!\! y_{v}^{(i)}(t)= & \Gamma_o \sqrt{P^{(i)}_{v_o} r_o^{-\eta_{vv}} h_o} s(t,B_{v}(\alpha_i),b^{(i)}_{v}) +\sum\limits_{k=1}^{K} \sum_{j \in \tilde{\Psi}^{(k)}_{\rm d}} \mathfrak{I}^{{\rm d}^{(k)} \rightarrow v^{(i)}}_{j}(t) + \sum\limits_{k=1}^{K} \sum_{j \in \tilde{\Psi}^{(k)}_{\rm u}} \mathfrak{I}^{{\rm u}^{(k)} \rightarrow v^{(i)}}_{j}(t)+ \mathfrak{I}_{{\rm s}_{v}}^{(i)}(t) + n(t). \label{base_band_DL} \end{align}\normalsize \noindent{where} $\Gamma_o$, $P^{(i)}_{{ v}_o}$, $r_o$, and $h_o$ denote the intended symbol, transmit power, link distance, and channel power gains, respectively. $\tilde{\Psi}^{(k)}_{\rm d} \subseteq {\Psi}^{(k)}_{\rm d}$ is the set of interfering BSs in the $k^{th}$ tier, $ \mathfrak{I}^{{\rm d}^{(k)} \rightarrow v^{(i)}}_{j}(t)$ is the DL interference from the $j^{th}$ BS in $k^{th}$ tier, $\tilde{\Psi}^{(k)}_{\rm u} \subseteq {\Psi}^{(k)}_{\rm u}$ is the set of interfering UEs in the $k^{th}$ tier, $ \mathfrak{I}^{{\rm u}^{(k)} \rightarrow v^{(i)}}_{j}(t)$ is the UL interference from $j^{th}$ UE connected to the $k^{th}$ tier, $ \mathfrak{I}_{{\rm s}_{v}}^{(i)}(t)$ is the SI term affecting the $v$ direction, and $n(t)$ is a white complex Gaussian noise with zero mean and two-sided power spectral density $N_o/2$. 
The downlink and uplink interference are given by \small \begin{align} \label{inter1} \mathfrak{I}^{{\rm d}^{(k)} \rightarrow v^{(i)}}_{j}(t)&= \Gamma^{(k)}_{{\rm{d}}_{j}} s(t,B_{\rm d}(\alpha_i),b_{\rm d}^{(k)}) \sqrt{P^{(k)}_{\rm d} h^{(k)}_{{\rm d}_j} \left(r^{(k)}_{{\rm d}_j}\right)^{-\eta_{{\rm d}v}}} \exp \left( j 2 \pi \left(f^{(k)}_{\rm d}-f^{(i)}_{v}\right)t \right), \\ \mathfrak{I}^{{\rm u}^{(k)} \rightarrow v^{(i)}}_{j}(t)&= \Gamma^{(k)}_{{\rm{u}}_{j}} s(t,B_{\rm u}(\alpha_i),b_{\rm u}^{(k)}) \sqrt{P^{(k)}_{{\rm u}_j} h^{(k)}_{{\rm u}_j} \left(r^{(k)}_{{\rm u}_j}\right)^{-\eta_{{\rm u}v}}} \exp \left( j 2 \pi \left(f^{(k)}_{\rm u}-f^{(i)}_{v}\right)t \right). \label{inter2} \end{align}\normalsize \noindent{where} $\Gamma^{(k)}_{{\rm{d}}_{j}}$ and $\Gamma^{(k)}_{{\rm{u}}_{j}}$ denote the interfering symbol from the DL $j^{th}$ BS and interfering symbol from the UL $j^{th}$ UE in the $k^{th}$ tier. Following the same interpretation of the subscripts and superscripts defined for the interfering symbols, $h^{(k)}_{{\rm d}_j}$ and $h^{(k)}_{{\rm u}_j}$ denote the DL and UL interfering channel gains, $P^{(k)}_{{\rm d}}$, and $P^{(k)}_{{\rm u}_j}$ denote the DL and UL interfering transmit powers, $r^{(k)}_{{\rm d}_j}$, and $r^{(k)}_{{\rm u}_j}$ denote the DL and UL interfering link distances, and $f^{(k)}_{\rm d}$ and $f^{(k)}_{\rm u}$ denote the center frequencies of the DL and UL interfering frequency bands (see Fig. \ref{fig:Network1}). Note that the BS index is removed from the DL transmit power because all BSs in the same tier transmit with the same power. Similarly, the BS and UE indices are removed from the center frequencies $f^{(k)}_{\rm d}$ and $f^{(k)}_{\rm u}$ because all elements in the same tier employ the same overlap parameter $\alpha^{(k)}$. 
The SI term in \eqref{base_band_DL} is given by \small \begin{align}\label{eq:SIu1} & \mathfrak{I}^{(i)}_{{\rm s}_{\rm u}}(t)= \Gamma_{s} \sqrt{\beta^{(i)}_{\rm u} h_s P^{(i)}_{\rm d}}s(t,B_{\rm d}(\alpha_i),b^{(i)}_{\rm d}) \exp \left( j 2 \pi \Delta f^{(i)} t \right). \end{align} \begin{align}\label{eq:SId1} &\mathfrak{I}^{(i)}_{{\rm s}_{\rm d}}(t)= \left\{ \begin{array}{ll} \Gamma_{s} \sqrt{\beta_{\rm d} h_s P_{{\rm u}_{o}}}s(t, B_{\rm u}(\alpha_i),b^{(i)}_{\rm u}) \exp \left(- j 2 \pi \Delta f^{(i)} t \right). & {\rm 2NT} \\ 0. & {\rm 3NT} \end{array} \right. \end{align}\normalsize where $\beta_{\rm d}$ represents the average attenuation power of the SI in the DL, and $\beta^{(i)}_{\rm u}$ is the average attenuation power of the SI affecting a BS in the $i^{th}$ tier in the UL; hence, each tier can have a different SIC capability depending on the BSs' sizes and receiver complexity. $ P_{{\rm u}_{o}}$ is the transmit power of the tagged UE and \begin{align} \Delta f^{(i)}= f^{(i)}_{\rm u}-f^{(i)}_{\rm d}, \end{align} which represents the difference between the UL and DL center frequencies in the $i^{th}$ tier. Note that this difference also depends on the chosen tier, since each tier can have a different duplexing factor $\alpha_i$, which leads to different UL/DL BWs and center frequencies. \subsection{Methodology of Analysis} \label{method} The analysis is conducted on a test transceiver, which is a BS for the UL and a UE for the DL, located at the origin and operating on a test channel pair. According to Slivnyak's theorem \cite{Stochastic2012Haenggi}, there is no loss of generality in this assumption. Also, there is no loss of generality in focusing on a test channel pair, as the interferences on different bands are statistically equivalent. We assess the impact of FD communication via the outage probability and the transmission rate. The outage probability is defined as \begin{align} \mathcal{O}(\theta)=\mathbb{P}\left\{{\rm SINR} <\theta\right\}.
\label{eq:Outage} \end{align} For the transmission rate \cite{Throughput2014Li}, we assume that the nodes transmit with a fixed rate, ${{\rm BW}} \log_2 \left(1+\theta \right)$, regardless of the state of the channel; hence, the transmission rate is defined as \begin{align}\label{eq:Thr_Gen} \mathcal{R}={{\rm BW}} \log_2 \left(1+\theta \right) \mathbb{P}\left\{{\rm SINR} \geq \theta\right\}. \end{align} In \eqref{eq:Thr_Gen}, the degraded SINR is compensated by the increased linear BW term. Hence, \eqref{eq:Thr_Gen} can be used to fairly assess the performance of FD operation. As shown in \eqref{eq:Outage} and \eqref{eq:Thr_Gen}, both the outage probability and the transmission rate are independent of the symbol structure and only depend on the SINR. Consequently, all transmitted symbols $\Gamma_o$, $\Gamma^{(k)}_{{\rm{d}}_{j}}$ and $\Gamma^{(k)}_{{\rm{u}}_{j}}$ for all $\{k,j\}$ are abstracted to independent zero-mean unit-variance complex Gaussian random variables. Abstracting the symbols via Gaussian random variables has a negligible effect on the signal-to-interference-plus-noise-ratio (SINR) distribution as shown in \cite{The2015Afify, Influence2005Giorgetti}. In the analysis, we start by modeling the effect of the matched and low-pass filtering on the baseband signal. Then, based on the base-band signal format after filtering, the expressions for the SINR in different cases (i.e., CCU-UL, CCU-DL, CEU-UL, and CEU-DL for 3NT and 2NT) are obtained. The performance metrics in \eqref{eq:Outage} and \eqref{eq:Thr_Gen} are then expressed in terms of the LT of the PDF of the interference, which is obtained later to evaluate \eqref{eq:Outage} and \eqref{eq:Thr_Gen}\footnote{Expressions for the bit error probability can be derived by using the obtained SINR in the next section and following \cite{The2014Renzo,AlAmmouri2015Inband}.}.
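The fixed-rate metric $\mathcal{R}={\rm BW} \log_2(1+\theta)\,\mathbb{P}\{{\rm SINR}\geq\theta\}$ can be explored with a toy coverage model. The stand-in $\mathbb{P}\{{\rm SINR}\geq\theta\}=e^{-\theta/\bar{\gamma}}$ below holds only for Rayleigh fading over noise and is used purely to illustrate the threshold trade-off in the metric, not the network-level coverage derived in this paper.

```python
import numpy as np

def tx_rate(bw, theta, coverage):
    """Fixed-rate transmission metric: BW * log2(1+theta) * P{SINR >= theta}."""
    return bw * np.log2(1.0 + theta) * coverage(theta)

# Toy coverage stand-in (assumed): Rayleigh fading over noise with mean SNR 10,
# giving P{SINR >= theta} = exp(-theta / 10).
mean_snr = 10.0
cov = lambda theta: np.exp(-theta / mean_snr)

thetas = np.logspace(-1, 2, 400)
rates = tx_rate(1.0, thetas, cov)      # normalized BW = 1
best = thetas[np.argmax(rates)]
print(f"rate-maximizing threshold ~= {best:.2f} ({10*np.log10(best):.1f} dB)")
```

The metric vanishes for both very small and very large $\theta$, so an interior maximizer exists; this is the same mechanism by which the extra BW of $\alpha>0$ can compensate a degraded SINR.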
\section{Performance Analysis} The received signal is first convolved with the conjugated time-reversed pulse shape template, passed through a low-pass filter, and sampled at $t=t_o$. The baseband signal after filtering and sampling at the input of the decoder is given by: \small \begin{align} y^{(i)}_{v}(t_o)=&y^{(i)}_{v}(t).* h^{(i)}_{v}(t-t_o)|_{t=t_o} \notag \\ =&\Gamma_o \sqrt{P^{(i)}_{v_o} r_o^{-\eta_{vv}} h_o} \mathcal{I}_{v} (\alpha_i,\alpha_i)+ \sum\limits_{k=1}^{K} \sum_{j \in \tilde{\Psi}^{(k)}_v} \Gamma^{(k)}_{v_{j}} \sqrt{P^{(k)}_{v_j} h^{(k)}_{v_j} \left(r^{(k)}_{v_j}\right)^{-\eta_{vv}}} \mathcal{I}_{v}(\alpha_i,\alpha_k) +\notag\\ &\sum\limits_{k=1}^{K} \sum_{j \in \tilde{\Psi}^{(k)}_{\bar{v}}} \Gamma^{(k)}_{{\bar{v}}_{j}} \sqrt{P^{(k)}_{{\bar{v}}_j} h^{(k)}_{\bar{v}_j} \left(r^{(k)}_{\bar{v}_j}\right)^{-\eta_{\bar{v}v}}} \mathcal{C}_{v}(\alpha_i,\alpha_k) + \mathfrak{I}^{(i)}_{{\rm s}_{v}}(t) .* h^{(i)}_{v}(t-t_o)|_{t=t_o} +\sqrt{N_o |\mathcal{I}_{v} (\alpha_i,\alpha_i)|^2}. \label{eq:base_band_matched} \end{align} \normalsize where $h^{(i)}_{v}(t)$ is the combined matched and low-pass filter impulse response for a transceiver in the $i^{th}$ tier. The frequency domain representation of $h^{(i)}_{v}(t)$ is given by \small \begin{align} H^{(i)}_{v}(f)=\left\{ \begin{array}{ll} S^{*}(f,B_{v}(\alpha_i),b^{(i)}_{v}) \ \ \ \ \ \ \ \ -\frac{B_{v}(\alpha_i)}{2} &\leq f \leq \frac{B_{v}(\alpha_i)}{2}.\\ 0 & \! \!\! \!\! \! \rm{elsewhere}. \end{array} \right. \label{matched2} \end{align}\normalsize where $S(f,B_{v}(\alpha_i),b^{(i)}_{v})$ represents the used pulse shape as discussed in section \ref{sec:PulseShaping}. The factors $\mathcal{I}(\cdot,\cdot)$ and $\mathcal{C}(\cdot,\cdot)$ in \eqref{eq:base_band_matched} represent the intra-mode (i.e., UL to UL or DL to DL) and cross-mode (i.e., UL to DL or vice versa) effective received energy factors, respectively.
From \eqref{inter1}, \eqref{inter2}, \eqref{matched2}, and expressing the convolution operation in the frequency domain, the pulse shaping and filtering factors are obtained as \small \begin{equation} \label{fac1} \mathcal{I}_{v} (\alpha_i,\alpha_k)=\int\limits_{-B_{v}(\alpha_i)/2}^{B_{v}(\alpha_i)/2} S^{*}(f,B_{v}(\alpha_i),b^{(i)}_{v})S(f-f^{(k)}_{v}+f^{(i)}_{v},B_{v}(\alpha_k),b^{(k)}_{v})df , \end{equation} \begin{equation} \label{fac2} \mathcal{C}_{v} (\alpha_i,\alpha_k)=\int\limits_{-B_{v}(\alpha_i)/2}^{B_{v}(\alpha_i)/2} S^{*}(f,B_{v}(\alpha_i),b^{(i)}_{v})S(f-f^{(k)}_{\bar{v}}+f^{(i)}_{v},B_{\bar{v}}(\alpha_k),b^{(k)}_{\bar{v}})df . \end{equation} \normalsize It should be noted that although same-mode links in the same tier use the same pulse shapes, the effective energy received from intra-mode intra-tier transmitters is not unity as shown in \eqref{fac1}. This is because \eqref{matched2} includes the combined impulse response of the matched and low-pass filters, which extracts the desired frequency range from the received signal. Consequently, the energy outside the desired BW is discarded and the energy contained within the pulse shape is no longer unity. Also, the cross-mode interference factor in \eqref{fac2} is strictly less than unity due to low-pass filtering, the possibility of different pulse shapes, and the partial overlap between cross-mode channels.
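The overlap integrals \eqref{fac1} and \eqref{fac2} can be evaluated numerically. The sketch below assumes time-limited rectangular pulses on both links (one admissible choice of $b_v$, whose spectra are sinc functions) and shows two effects noted above: the co-centred intra-mode factor is below unity because low-pass filtering discards out-of-band energy, and the factor shrinks as the interferer's band center moves away from the receiver's.

```python
import numpy as np

def rect_pulse_spectrum(f, bw):
    """Spectrum of a unit-energy, time-limited rectangular pulse whose
    null-to-null bandwidth equals bw (pulse duration T = 2/bw)."""
    T = 2.0 / bw
    return np.sqrt(T) * np.sinc(f * T)

def energy_factor(bw_rx, bw_tx, df, n=40001):
    """Riemann-sum evaluation of |int_{-bw_rx/2}^{bw_rx/2} S*(f) S(f-df) df|^2,
    the effective received energy from a transmitter whose band center is
    offset by df from the receiver's."""
    f = np.linspace(-bw_rx / 2, bw_rx / 2, n)
    integrand = np.conj(rect_pulse_spectrum(f, bw_rx)) * rect_pulse_spectrum(f - df, bw_tx)
    return abs(np.sum(integrand) * (f[1] - f[0])) ** 2

B = 1.0
same = energy_factor(B, B, 0.0)       # intra-mode, co-centred bands
half = energy_factor(B, B, 0.5 * B)   # cross-mode, half-band overlap
print(f"co-centred factor: {same:.3f} (< 1: filtering discards out-of-band energy)")
print(f"half-band offset factor: {half:.3f}")
```

For the rectangular pulse, the co-centred factor is roughly the squared main-lobe energy fraction of the sinc spectrum, which is well below one.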
Let $ \Xi = \left\{r_o, r^{(i)}_{v_j}, h_o, h^{(i)}_{v_j}, P^{(i)}_{v_o}, P^{(i)}_{v_j}, h_s ; \forall i=\{1,...,K\}, v\in \{{\rm u,d} \}\right\}$, then conditioning on $\Xi$ the $\rm{SINR}$ is given by \small \begin{align}\label{eq:SINR} &\!\!\!\!\!\!\!\!\!\!\!\!{\rm SINR}^{(i)}_{v}\left( \Xi\right) =\notag \\ &\!\!\!\!\!\!\!\!\!\!\!\!\frac{P^{(i)}_{v_o} r_o^{-\eta_{vv}} h_o}{\sum\limits_{k=1}^{K} \sum\limits_{j \in \tilde{\Psi}^{(k)}_v} P^{(k)}_{v_j} h^{(k)}_{v_j} \left(r^{(k)}_{v_j}\right)^{-\eta_{vv}} |\tilde{\mathcal{I}}_{v}(\alpha_i,\alpha_k)|^2 +\sum\limits_{k=1}^{K} \sum\limits_{j \in \tilde{\Psi}^{(k)}_{\bar{v}}} P^{(k)}_{{\bar{v}}_j} h^{(k)}_{\bar{v}_j} \left(r^{(k)}_{\bar{v}_j}\right)^{-\eta_{\bar{v}v}} |\tilde{\mathcal{C}}_{v}(\alpha_i,\alpha_k)|^2 + \tilde{\sigma}_{{\rm s}_v}^2(\alpha_i) +N_o}, \end{align}\normalsize where,\small \begin{align} |\tilde{\mathcal{I}}_{ v}(\alpha_i,\alpha_k)|^2&=\frac{|\mathcal{I}_{ v}(\alpha_i,\alpha_k)|^2}{| \mathcal{I}_{v}(\alpha_i,\alpha_i)|^2}, \\ |\tilde{\mathcal{C}}_{ v}(\alpha_i,\alpha_k)|^2&=\frac{|\mathcal{C}_{ v}(\alpha_i,\alpha_k)|^2}{| \mathcal{I}_{v}(\alpha_i,\alpha_i)|^2}, \end{align}\normalsize and $ \tilde{\sigma}_{{\rm s}_v}^2(\cdot)$ is the residual SI power normalized by $| \mathcal{I}_{v}(\alpha_i,\alpha_i)|^2$. From \eqref{eq:SIu1} and \eqref{eq:SId1}, $ \tilde{\sigma}_{{\rm s}_v}^2$ can be expressed for the UL and DL as \small \begin{align}\label{eq:SIu2} & \tilde{\sigma}_{{\rm s_{\rm u}}}^2(\alpha_i)= \beta^{(i)}_{\rm u} h_s P^{(i)}_{\rm d} |\tilde{\mathcal{C}}_{\rm u}(\alpha_i,\alpha_i)|^2. \end{align} \begin{align}\label{eq:SId2} &\tilde{\sigma}_{{\rm s_{\rm d}}}^2(\alpha_i)= \left\{ \begin{array}{ll} \beta_{\rm d} h_s P_{{\rm u}_{o}} |\tilde{\mathcal{C}}_{\rm d}(\alpha_i,\alpha_i)|^2. \ \ \ & {\rm 2NT} \\ 0. & {\rm 3NT} \end{array} \right. 
\end{align}\normalsize The SINR in \eqref{eq:SINR} is used in the next section to evaluate the outage probability and rate as discussed in Section~\ref{method}. \subsection{Performance Metrics} \normalsize From \eqref{eq:Outage} and \eqref{eq:SINR}, the outage probability in the link $v \in \{{\rm u},{\rm d}\}$ in the $i^{th}$ tier can be written as \small \begin{align}\label{eq:outageGeneral2} \mathcal{O}_{v}^{(i)}(\theta)&=\mathbb{P} \left\{ \frac{P^{(i)}_{v_o} r_o^{-\eta_{vv}} h_o}{\sum\limits_{k=1}^{K} \mathfrak{I}_{v \rightarrow v}^{(k,i)} |\tilde{\mathcal{I}}_{v}(\alpha_i,\alpha_k)|^2 + \sum\limits_{k=1}^{K} \mathfrak{I}_{\bar{v}\rightarrow v}^{(k,i)} |\tilde{\mathcal{C}}_{v }(\alpha_i,\alpha_k)|^2 + \tilde{\sigma}_{{\rm s}_v}^2(\alpha_i) +N_o}<\theta\right\}, \end{align}\normalsize where, in general, $\mathfrak{I}_{v\rightarrow w}^{(k,i)}=\sum\limits_{j \in \tilde{\Psi}^{(k)}_v} P^{(k)}_{v_j} h^{(k)}_{v_j} \left(r^{(k)}_{v_j}\right)^{-\eta_{vw}}$. By exploiting the exponential distribution of $h_o$, the outage probability can be written as \small \begin{align}\label{eq:outageGeneral3} \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \mathcal{O}_{v}^{(i)}(\theta)&=1-\notag \\ &\mathbb{E} \left[ e^{\frac{-N_o r_o^{\eta_{vv}}\theta}{P^{(i)}_{v_o}}} e^{\frac{-\tilde{\sigma}_{{\rm s}_v}^2(\alpha_i) r_o^{\eta_{vv}}\theta}{P^{(i)}_{v_o}}} \prod\limits_{k=1}^{K} \mathcal{L}_{\mathfrak{I}_{v \rightarrow v}^{(k,i)}} \left( \frac{ r_o^{\eta_{vv}}\theta |\tilde{\mathcal{I}}_{v}(\alpha_i,\alpha_k)|^2 }{P^{(i)}_{v_o}} \right) \mathcal{L}_{\mathfrak{I}_{\bar{v}\rightarrow v}^{(k,i)}} \left( \frac{ r_o^{\eta_{vv}}\theta |\tilde{\mathcal{C}}_{v}(\alpha_i,\alpha_k)|^2 }{P^{(i)}_{v_o}} \right) \right], \end{align}\normalsize where the expectation is over $\{r_o,P^{(i)}_{v_o},\tilde{\sigma}_{{\rm s}_v}^2 \}$. Since $\{r_o,P^{(i)}_{v_o}\}$ depend on the UE type (CCU or CEU) as discussed in section \ref{sec:NetModel}, we present an explicit study for each type.
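The step from \eqref{eq:outageGeneral2} to \eqref{eq:outageGeneral3} rests on $h_o$ being exponential: $\mathbb{P}\{h_o \geq s(I+N_o)\} = e^{-sN_o}\,\mathcal{L}_{I}(s)$. The sketch below checks this identity by Monte Carlo against a stand-in Gamma-distributed interference, chosen only because its LT is available in closed form, not because the network interference is Gamma.

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in aggregate interference I ~ Gamma(k, scale), with closed-form
# Laplace transform L_I(s) = (1 + s*scale)**(-k). Parameters are illustrative.
k, scale, noise, s = 2.5, 0.4, 0.1, 1.3

h = rng.exponential(1.0, size=2_000_000)    # h_o ~ Exp(1), unit-mean Rayleigh power
I = rng.gamma(k, scale, size=2_000_000)
mc = np.mean(h >= s * (I + noise))          # Monte Carlo: P{h_o >= s*(I + N_o)}
lt = np.exp(-s * noise) * (1 + s * scale) ** (-k)   # exp(-s*N_o) * L_I(s)
print(f"Monte Carlo: {mc:.4f}, Laplace-transform expression: {lt:.4f}")
```

The same trick is what reduces the outage evaluation to computing the LTs of the per-tier aggregate interference terms.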
The serving distance $r_o$ for CCUs and CEUs is characterized via the following lemma. \begin{lemma} The serving distance distributions for a randomly selected CCU and CEU, given that it is connected to the $i^{th}$ tier, denoted by $f_{R^{(i)}_c}(.)$ and $f_{R^{(i)}_e}(.)$, respectively, are given by \small \begin{align}\label{eq:Dist1} f_{R^{(i)}_c}(r)&= \frac{2 \pi \bar{\lambda_i} r \exp \left(-\pi \bar{\lambda_i} r^2\right)}{1-\exp \left(-\pi \bar{\lambda_i} \left(\frac{P_{\rm u}}{\rho^{(i)}}\right)^{\frac{2}{\eta_{\rm dd}}}\right)} \mathbbm{1}_{\left\{0\leq r \leq \left(\frac{P_{\rm u}}{\rho^{(i)}}\right)^{\frac{1}{\eta_{\rm dd}}}\right\}}(r), \end{align} \begin{align}\label{eq:Dist2} f_{R^{(i)}_e}(r)&= 2 \pi \bar{\lambda_i} r \exp \left(-\pi \bar{\lambda_i} r^2+\pi \bar{\lambda_i} \left(\frac{P_{\rm u}}{\rho^{(i)}}\right)^{\frac{2}{\eta_{\rm dd}}}\right) \mathbbm{1}_{\left\{\left(\frac{P_{\rm u}}{\rho^{(i)}}\right)^{\frac{1}{\eta_{\rm dd}}} < r < \infty\right\}}(r), \end{align}\normalsize where $\bar{\lambda_i}= \sum\limits_{k=1}^{K} \frac{\tau_k^2}{\tau_i^2} \lambda_k$. \begin{proof} Refer to Appendix A. \end{proof} \end{lemma} From Lemma 1, it is straightforward to find the probabilities that a randomly selected UE from the $i^{th}$ tier is a CCU or a CEU, which are given by \small \begin{align} \mathbb{P} \left\{ {\rm CCU} \right\}&=1-\exp \left(-\pi \bar{\lambda_i} \left(\frac{P_{\rm u}}{\rho^{(i)}}\right)^{\frac{2}{\eta_{\rm dd}}}\right) , \\ \mathbb{P} \left\{ {\rm CEU} \right\}&=\exp \left(-\pi \bar{\lambda_i} \left(\frac{P_{\rm u}}{\rho^{(i)}}\right)^{\frac{2}{\eta_{\rm dd}}}\right).
\end{align}\normalsize By the law of total probability, the average outage probability can be expressed via the CCUs' outage probability and the CEUs' outage probability, denoted by $\mathcal{O}_{v^c}^{(i)}$ and $\mathcal{O}_{v^e}^{(i)}$, respectively, as \small \begin{align}\label{eq:TotalOutage} \bar{\mathcal{O}}_{v}^{(i)}(\theta)=\mathcal{O}_{v^c}^{(i)}(\theta)\mathbb{P} \left\{ {\rm CCU} \right\}+\mathcal{O}_{v^e}^{(i)}(\theta)\mathbb{P} \left\{ {\rm CEU} \right\}, \end{align}\normalsize where each of $\mathcal{O}_{v^c}^{(i)}(\theta)$ and $\mathcal{O}_{v^e}^{(i)}(\theta)$ is represented as in \eqref{eq:outageGeneral3}, but with the specific parameters related to the CCUs and the CEUs. From \eqref{eq:outageGeneral3}, it is clear that the LT of the aggregate interference from each tier $\mathfrak{I}_{w \rightarrow v}^{(k,i)}$ is required to evaluate $\mathcal{O}_{v^c}^{(i)}(\theta)$ and $\mathcal{O}_{v^e}^{(i)}(\theta)$. The aggregate interference, and hence its LT, depends on the spatial distributions of the sets of interfering BSs and UEs in each tier, $\tilde{\Psi}^{(k)}_{\rm d}$ and $\tilde{\Psi}^{(k)}_{\rm u}$, respectively. The set of interfering BSs $\tilde{\Psi}^{(k)}_{\rm d}$ in the $k^{th}$ tier is the same as the original set of BSs ${\Psi}^{(k)}_{\rm d}$, excluding the transmitting BS itself in the DL and the serving BS in the UL. Hence, $\tilde{\Psi}^{(k)}_{\rm d}$ is a PPP with intensity $\lambda_k$. From the UE association and $\lambda_{\rm u} \gg \sum_{k=1}^K \lambda_k$, the intensity of the interfering UEs $\tilde{\Psi}^{(k)}_{\rm u}$ on a certain channel in the $k^{th}$ tier is also $\lambda_k$. However, $\tilde{\Psi}^{(k)}_{\rm u}$ is not a PPP because only one UE can use a given channel in each Voronoi cell, which imposes correlations among the positions of the interfering UEs on each channel and violates the PPP assumption.
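As a quick numerical sanity check of Lemma 1 (with illustrative, hypothetical parameter values rather than those of the system model), each serving-distance PDF should integrate to one over its support, and the CCU/CEU probabilities should sum to one; a minimal sketch:

```python
import numpy as np
from scipy.integrate import quad

# Illustrative single-tier values (hypothetical, not the paper's Table 1):
lam_bar = 1e-6                     # equivalent intensity bar{lambda}_i [BS/m^2]
P_u, rho, eta_dd = 3.0, 1e-9, 4.0  # max UE power [W], PC threshold [W], exponent

d0 = (P_u / rho) ** (1 / eta_dd)   # boundary distance between CCUs and CEUs [m]
p_ccu = 1 - np.exp(-np.pi * lam_bar * d0**2)   # P{CCU}
p_ceu = np.exp(-np.pi * lam_bar * d0**2)       # P{CEU}

def f_Rc(r):  # eq. (eq:Dist1): truncated Rayleigh-type PDF on [0, d0]
    return 2 * np.pi * lam_bar * r * np.exp(-np.pi * lam_bar * r**2) / p_ccu

def f_Re(r):  # eq. (eq:Dist2): shifted tail PDF on (d0, infinity)
    return 2 * np.pi * lam_bar * r * np.exp(-np.pi * lam_bar * (r**2 - d0**2))

I_c = quad(f_Rc, 0, d0)[0]         # should be 1
I_e = quad(f_Re, d0, 20 * d0)[0]   # should be 1 (tail beyond 20*d0 is negligible)
```

Both integrals return one up to numerical precision, confirming that the truncation constants in Lemma 1 are consistent with the CCU/CEU split.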
Furthermore, the employed association makes the set of interfering UEs $\tilde{\Psi}^{(k)}_{\rm u}$ and the set of interfering BSs $\tilde{\Psi}^{(k)}_{\rm d}$ correlated. The inter-correlations between the interfering UEs and the cross-correlations between the UEs and BSs impede the model tractability. Hence, to maintain tractability, we ignore these correlations. The assumptions used to keep the model tractable are formally stated below. \begin{assumption} The set of interfering UEs $\tilde{\Psi}^{(k)}_{\rm u}$ in the $k^{th}$ tier is a PPP with intensity $\lambda_k$. \end{assumption} \begin{assumption} The point process $\tilde{\Psi}^{(k)}_{\rm d}$ for the interfering BSs and the point process $\tilde{\Psi}^{(k)}_{\rm u}$ for the interfering UEs, both in the $k^{th}$ tier, are independent. \end{assumption} \begin{assumption} The point processes $\tilde{\Psi}^{(k)}_{\rm u}$'s, which represent the interfering UEs connected to different tiers, are independent of each other. \end{assumption} \begin{remark} The previous assumptions are necessary to maintain the model tractability. Assumption 1 has been used and validated in \cite{On2014ElSawy, Hybrid2015Lee, Load2014AlAmmouri,AlAmmouri2015Inband}, Assumption 2 in \cite{AlAmmouri2015Inband,Throughput2016Goyal}, and Assumption 3 in \cite{On2014ElSawy}. It is important to mention that these assumptions ignore the mutual correlations between the interfering sources; however, the correlations between the interfering sources and the test receiver are captured through the proper calculation of the interference exclusion regions enforced by association and/or UL power control. The accuracy of the developed model with Assumptions 1-3 is validated via independent Monte Carlo simulations in Section IV.
\end{remark} Based on Assumptions 1-3, the aggregate interference is always generated from a PPP $\Phi$, albeit with different parameters such as the interference exclusion region, the interferers' intensity, and the transmit power distribution. For brevity, we present the following unified lemma for the LT of the aggregate interference generated from a homogeneous PPP with general parameters and use it to obtain all LTs in (19). \begin{lemma} Let $\mathcal{L}_{\mathfrak{I}}(s)$ be the LT of the aggregate interference $\mathfrak{I}$ generated from a PPP network with intensity $\lambda$, i.i.d. transmit powers $P_j$, i.i.d. unit-mean exponentially distributed channel power gains $h_j$, and a per-interferer protection region $\mathcal{B}(o,a_j)$, where $\mathcal{B}(o,a_j)$ is a ball centered at the origin $(o)$ with radius $a_j$. Then, $\mathcal{L}_{\mathfrak{I}}(s)$ is given by \small \begin{align}\label{eq:LTgeneral} \mathcal{L}_{\mathfrak{I}}(s)=\exp \left( \frac{-2 \pi \lambda}{\eta-2} \mathbb{E}_{P} \left[a^{2-\eta} s P \ {}_2 F_1 \left[1,1-\frac{2}{\eta};2-\frac{2}{\eta}; -a^{-\eta} P s \right] \right] \right), \end{align}\normalsize where $ {}_2 F_1(\cdot,\cdot;\cdot;\cdot)$ is the hyper-geometric function \cite{Handbook1964Abramowitz}, $\mathbb{E}_{P} [ \cdot ]$ is the expectation over the transmitted power of the sources, and $\eta>2$ is a general path loss exponent. For the special case of $a=0$, equation \eqref{eq:LTgeneral} reduces to \small \begin{align}\label{eq:LTa=0} \mathcal{L}_{\mathfrak{I}}(s)=\exp \left( -\frac{2 \pi^2 \lambda}{\eta} \mathbb{E}_{P} \left[ \left(s P \right)^{\frac{2}{\eta}} \right] \csc \left(\frac{2 \pi}{\eta} \right) \right). \end{align}\normalsize \begin{proof} Refer to Appendix B. \end{proof} \end{lemma} Due to the expectation over the power distribution, the LT expression in \eqref{eq:LTgeneral} involves an integral of a hypergeometric function.
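The closed forms in \eqref{eq:LTgeneral} and \eqref{eq:LTa=0} can be checked against direct numerical integration of the underlying PGFL exponent of a PPP with Rayleigh fading; a minimal sketch with illustrative (hypothetical) parameters and a deterministic interferer power:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import hyp2f1

# Illustrative (hypothetical) values: intensity, exponent, LT argument,
# deterministic interferer power, and exclusion radius.
lam, eta, s, P, a = 1e-4, 4.0, 2.0, 1.5, 10.0

# Exponent of eq. (eq:LTgeneral) for a fixed power P:
expo_cf = (2 * np.pi * lam / (eta - 2) * a**(2 - eta) * s * P
           * hyp2f1(1, 1 - 2 / eta, 2 - 2 / eta, -a**(-eta) * P * s))

# The same exponent from the defining integral over the PPP outside B(o, a):
f = lambda r: r / (1 + r**eta / (s * P))
expo_num = 2 * np.pi * lam * quad(f, a, np.inf)[0]

# Special case a = 0, eq. (eq:LTa=0):
expo0_cf = 2 * np.pi**2 * lam / eta * (s * P)**(2 / eta) / np.sin(2 * np.pi / eta)
expo0_num = 2 * np.pi * lam * quad(f, 0, np.inf)[0]
```

Both pairs of exponents agree to numerical precision, which also confirms the $\csc(2\pi/\eta)$ factor in the $a=0$ special case.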
If the interference exclusion distance $a$ around the test receiver is independent of the transmit powers, then the expression given by \eqref{eq:LTgeneral} can be lower-bounded by the simplified closed-form expression given in the following lemma. \begin{lemma} \label{lem:LTgeneralapp} Let $\mathcal{L}_{\mathfrak{I}}(s)$ be the LT of the aggregate interference generated from a PPP network with intensity $\lambda$, i.i.d. transmit powers $P_j$, i.i.d. unit-mean exponentially distributed channel power gains, and an interference protection region $\mathcal{B}(o,a)$, where $\mathcal{B}(o,a)$ is a ball centered at the origin $(o)$ with radius $a$. Assuming that $a$ is independent of $P_j$, $\forall j$, $\mathcal{L}_{\mathfrak{I}}(s)$ can be lower-bounded by \small \begin{align}\label{eq:LTgeneralapp} \mathcal{L}_{\mathfrak{I}}(s) \geq \exp \left( \frac{-2 \pi \lambda}{\eta-2} a^{2-\eta} s \mathbb{E} \left[P\right] \ {}_2 F_1 \left[1,1-\frac{2}{\eta};2-\frac{2}{\eta}; -a^{-\eta}\mathbb{E} \left[P\right]s \right] \right). \end{align}\normalsize \begin{proof} Refer to Appendix C. \end{proof} \end{lemma} Lemma \ref{lem:LTgeneralapp} obviates the need to integrate over the PDF of the interferers' transmit power and gives the LT in a closed form that involves only the first moment of the transmit power, which reduces the computational complexity of the LTs. For the sake of simple expressions, we always use the bound in \eqref{eq:LTgeneralapp} whenever applicable; its accuracy is verified in Section IV. Using Lemmas 2 and 3, the LTs of the aggregate interference $\mathfrak{I}_{v \rightarrow v}^{(k,i)}$ for the UL and DL for CCUs and CEUs are given by the following lemma.
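The direction of the bound in Lemma \ref{lem:LTgeneralapp} (a Jensen-type argument: the exponent of \eqref{eq:LTgeneral} is concave in the power) can be verified numerically, e.g., under an assumed two-point power distribution with illustrative parameters:

```python
import numpy as np
from scipy.special import hyp2f1

lam, eta, s, a = 1e-4, 4.0, 1.0, 1.0        # illustrative (hypothetical) values
powers = np.array([1.0, 9.0])               # assumed two-point power law
probs = np.array([0.5, 0.5])                # equiprobable, so E[P] = 5

def expo(p):
    # Exponent of eq. (eq:LTgeneral) evaluated at a fixed power p
    return (2 * np.pi * lam / (eta - 2) * a**(2 - eta) * s * p
            * hyp2f1(1, 1 - 2 / eta, 2 - 2 / eta, -a**(-eta) * p * s))

LT_exact = np.exp(-probs @ np.array([expo(p) for p in powers]))  # (eq:LTgeneral)
LT_bound = np.exp(-expo(probs @ powers))                         # (eq:LTgeneralapp)
```

As expected from concavity, `LT_exact` dominates `LT_bound`, i.e., \eqref{eq:LTgeneralapp} is indeed a lower bound.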
\begin{lemma} Let $\mathcal{L}^{(c)}_{I^{(k,i)}_{{\rm u}\rightarrow {\rm u}}}$ $\left(\mathcal{L}^{(e)}_{I^{(k,i)}_{{\rm u}\rightarrow {\rm u}}}\right)$, $\mathcal{L}^{(c)}_{I^{(k,i)}_{{\rm d}\rightarrow {\rm u}}}$ $\left(\mathcal{L}^{(e)}_{I^{(k,i)}_{{\rm d}\rightarrow {\rm u}}}\right)$, $\mathcal{L}^{(c)}_{I^{(k,i)}_{{\rm d}\rightarrow {\rm d}}}$ $\left(\mathcal{L}^{(e)}_{I^{(k,i)}_{{\rm d}\rightarrow {\rm d}}}\right)$ and $\mathcal{L}^{(c)}_{I^{(k,i)}_{{\rm u}\rightarrow {\rm d}}}$ $\left(\mathcal{L}^{(e)}_{I^{(k,i)}_{{\rm u}\rightarrow {\rm d}}}\right)$ represent the LTs of the UL to UL, DL to UL, DL to DL, and UL to DL aggregate interference generated from the $k^{th}$ tier affecting a CCU (CEU) and its serving BS given that both of them are in the $i^{th}$ tier, then these LTs are given by \small \begin{align} \!\!\!\!\!\!\!\!\!\!\!\!\!\! \mathcal{L}^{(c)}_{I^{(k,i)}_{{\rm u}\rightarrow {\rm u}}}(s) &= \exp \left( \frac{-2 \pi \lambda_k \left(\rho^{(k)}\right)^{1-\frac{2}{\eta_{\rm uu}}}}{\eta_{\rm uu}-2} \mathbb{E}\left[ \left( P^{(k)}_{\rm u} \right)^{\frac{2}{\eta_{\rm uu}}} \right] s \ {}_2 F_1 \left[1,1-\frac{2}{\eta_{\rm uu}},2-\frac{2}{\eta_{\rm uu}}, -\rho^{(k)} s \right]\right) , \end{align} \begin{align} \!\!\!\!\!\!\!\!\!\!\!\!\!\! \mathcal{L}^{(c)}_{I^{(k,i)}_{{\rm d} \rightarrow {\rm u}}}(s) = \mathcal{L}^{(e)}_{I^{(k,i)}_{{\rm d}\rightarrow {\rm u}}}(s) = \exp \left( -\frac{2 \pi^2 \lambda_k}{\eta_{\rm du}} \left(s P^{(k)}_{\rm d} \right)^{\frac{2}{\eta_{\rm du}}} \csc \left(\frac{2 \pi}{\eta_{\rm du}} \right)\right), \end{align} \begin{align} \!\!\!\!\!\!\!\!\!\!\!\!\!\! 
\mathcal{L}^{(e)}_{I^{(k,i)}_{{\rm u}\rightarrow {\rm u}}}(s|r_o) & \approx \exp \left( \frac{-2 \pi \lambda_k }{\eta_{\rm uu}-2} \mathbb{E}_{P^{(k)}_{\rm u}}\left[ P^{(k)}_{\rm u}s \ {}_2 F_1 \left[1,1-\frac{2}{\eta_{\rm uu}},2-\frac{2}{\eta_{\rm uu}}, - P^{(k)}_{\rm u} s r_o^{-\eta_{\rm uu}} \right]\right] \right),\\ & \gtrsim \exp \left( \frac{-2 \pi \lambda_k }{\eta_{\rm uu}-2} \mathbb{E}\left[ P^{(k)}_{\rm u}\right] s \ {}_2 F_1 \left[1,1-\frac{2}{\eta_{\rm uu}},2-\frac{2}{\eta_{\rm uu}}, - \mathbb{E}\left[ P^{(k)}_{\rm u}\right] s r_o^{-\eta_{\rm uu}} \right]\right), \end{align} \begin{align} \!\!\!\!\!\!\!\!\!\!\!\!\!\! \mathcal{L}^{(c)}_{I^{(k,i)}_{{\rm d}\rightarrow {\rm d}}}(s|r_o)&= \mathcal{L}^{(e)}_{I^{(k,i)}_{{\rm d}\rightarrow {\rm d}}}(s) = \exp \left( \frac{-2 \pi \lambda_k}{\eta_{\rm dd}-2} \left(\frac{r_o \tau_i}{\tau_j}\right) ^{2-\eta_{\rm dd}} s P^{(k)}_{{\rm d}} \ {}_2 F_1 \left[1,1-\frac{2}{\eta_{\rm dd}},2-\frac{2}{\eta_{\rm dd}}, -\left(\frac{r_o \tau_i}{\tau_j}\right)^{-\eta_{\rm dd}} P^{(k)}_{\rm d} s \right] \right), \end{align} \begin{align}\label{eq:IudCCU} \!\!\!\!\!\!\!\!\!\!\!\!\!\! \mathcal{L}^{(c)}_{I^{(k,i)}_{{\rm u}\rightarrow {\rm d}}}(s)&\approx \exp \left( \frac{-2 \pi \lambda_k \left(\rho^{(k)}\right)^{1-\frac{2}{\eta_{\rm ud}}}}{\eta_{\rm ud}-2} \mathbb{E}\left[ \left( P^{(k)}_{\rm u} \right)^{\frac{2}{\eta_{\rm ud}}} \right] s \ {}_2 F_1 \left[1,1-\frac{2}{\eta_{\rm ud}},2-\frac{2}{\eta_{\rm ud}}, -\rho^{(k)} s \right]\right)U^{(k,i)}_1(r_o,s), \end{align} \begin{align}\label{eq:IudCEU} \!\!\!\!\!\!\!\!\!\!\!\!\!\! 
\mathcal{L}^{(e)}_{I^{(k,i)}_{{\rm u}\rightarrow {\rm d}}}(s) \approx \exp \left( \frac{-2 \pi \lambda_k }{\eta_{\rm ud}-2} \mathbb{E}\left[ P^{(k)}_{\rm u}\right] s \ {}_2 F_1 \left[1,1-\frac{2}{\eta_{\rm ud}},2-\frac{2}{\eta_{\rm ud}}, - \mathbb{E}\left[ P^{(k)}_{\rm u}\right] s r_o^{-\eta_{\rm ud}} \right]\right)U^{(k,i)}_1(r_o,s), \end{align} \normalsize where, \small \begin{align} \!\!\!\!\!\!\!\!\!\!\!\!\!\! \mathbb{E}\left[ \left(P^{(k)}_{\rm u} \right)^{\zeta}\right]= \frac{\left(\rho^{(k)}\right)^{\zeta} \gamma \left(\frac{\zeta \eta_{\rm dd}}{2}+1, \pi \bar{\lambda}_k \left(\frac{P_{\rm u}}{\rho^{(k)}}\right)^{\frac{2}{\eta_{\rm dd}}}\right)}{\left( \pi \bar{\lambda}_k \right)^{\frac{\zeta \eta_{\rm dd}}{2}}}+\left(P_{\rm u}\right)^{\zeta} \exp \left(\pi \bar{\lambda}_k \left(\frac{P_{\rm u}}{\rho^{(k)}}\right)^{\frac{2}{\eta_{\rm dd}}} \right), \end{align} and $U^{(k,i)}_1(r_o,s)$ is the LT of intra-cell interference in the 3NT case, which is expressed as \begin{align}\label{eq:U1EX} &\!\!\!\!\!\!\!\!\!\!\!\!\!\! U^{(k,i)}_1(r_o,s)=\notag \\ &\!\!\!\!\!\!\!\!\!\!\!\!\!\! \left\{ \begin{array}{ll} \int\limits_{0}^{\left(\frac{P_{\rm u}}{\rho^{(i)}}\right)^{\frac{1}{\eta_{\rm ud}}}}\int\limits_{\delta_o}^{\pi} \frac{\mathbb{P} \left\{ {\rm CCU} \right\} f_{R^{(i)}_c}(r)}{1+ s \rho^{(i)} \left(1+(\frac{r_o}{r})^2-2\frac{r_o}{r} \cos (\delta) \right)^{\frac{-\eta_{\rm ud}}{2}}} \frac{d\delta dr}{\pi-\delta_o}+ \int\limits_{\left(\frac{P_{\rm u}}{\rho^{(i)}}\right)^{\frac{1}{\eta_{\rm ud}}}}^{\infty}\int\limits_{\delta_o}^{\pi} \frac{\mathbb{P} \left\{ {\rm CEU} \right\} f_{R^{(i)}_e}(r)}{1+ s P_{\rm u} \left(r^2+r_o^2-2 r_o r \cos (\delta) \right)^{\frac{-\eta_{\rm ud}}{2}}} \frac{d\delta dr}{\pi-\delta_o}. \ \ \ & \underset{{i=k}}{\rm 3NT} \\ 1. \ \ \ & {\rm O.W}\\ \end{array} \right. \end{align}\normalsize and $\gamma(\cdot,\cdot)$ is the lower incomplete gamma function \cite{Handbook1964Abramowitz}. \begin{proof} Refer to Appendix D. 
\end{proof} \end{lemma} Note that $U^{(k,i)}_1(\cdot,\cdot)$ in equations \eqref{eq:IudCCU} and \eqref{eq:IudCEU} represents the intra-cell interference, which takes effect only in the 3NT case and only from the tier that the tagged transceiver belongs to. For the sake of simple expressions, we also present a closed-form approximation for $U^{(k,i)}_1(\cdot,\cdot)$ in the following lemma. \begin{lemma} The LT of the intra-cell interference given in equation \eqref{eq:U1EX} can be approximated by \small \begin{align}\label{eq:U1App} &\!\!\!\!\!\!\!\!\!\!\!\!\!\! U^{(k,i)}_1(r_o,s)=\notag \\ &\!\!\!\!\!\!\!\!\!\!\!\!\!\! \left\{ \begin{array}{ll} \frac{\mathbb{P} \left\{ {\rm CCU} \right\} }{1+ s \rho^{(i)} \left(1+(\frac{r_o}{\bar{r}_c})^2+\frac{2 \sin (\delta_o)}{\pi-\delta_o} \frac{r_o}{\bar{r}_c} \right)^{\frac{-\eta_{\rm ud}}{2}}} + \frac{\mathbb{P} \left\{ {\rm CEU} \right\} }{1+ s P_{\rm u} \left(\bar{r}_e^2+r_o^2+\frac{2 \sin (\delta_o)}{\pi-\delta_o} r_o \bar{r}_e \right)^{\frac{-\eta_{\rm ud}}{2}}} . \ \ \ & \underset{{i=k}}{\rm 3NT} \\ 1. \ \ \ & {\rm O.W}\\ \end{array} \right. \end{align}\normalsize where \small \begin{align} \bar{r}_c=\frac{{\rm erf} \left( \sqrt{\pi \bar{\lambda_i}} \left(\frac{P_{\rm u}}{\rho^{(i)}}\right)^{\frac{1}{\eta_{\rm ud}}} \right)-2 \sqrt{\bar{\lambda_i}} \left(\frac{P_{\rm u}}{\rho^{(i)}}\right)^{\frac{1}{\eta_{\rm ud}}} \exp \left(- \pi \bar{\lambda_i} \left(\frac{P_{\rm u}}{\rho^{(i)}}\right)^{\frac{2}{\eta_{\rm ud}}} \right)}{2 \sqrt{\bar{\lambda_i}}\left(1- \exp \left(- \pi \bar{\lambda_i} \left(\frac{P_{\rm u}}{\rho^{(i)}}\right)^{\frac{2}{\eta_{\rm ud}}} \right)\right)}.
\end{align} \begin{align} \bar{r}_e=\frac{\exp \left( \pi \bar{\lambda_i} \left(\frac{P_{\rm u}}{\rho^{(i)}}\right)^{\frac{2}{\eta_{\rm ud}}} \right) {\rm erfc} \left( \sqrt{\pi \bar{\lambda_i}} \left(\frac{P_{\rm u}}{\rho^{(i)}}\right)^{\frac{1}{\eta_{\rm ud}}} \right)-2 \sqrt{\bar{\lambda_i}} \left(\frac{P_{\rm u}}{\rho^{(i)}}\right)^{\frac{1}{\eta_{\rm ud}}} }{2 \sqrt{\bar{\lambda_i}}}. \end{align}\normalsize where ${\rm erf} (\cdot)$ and ${\rm erfc} (\cdot)$ are the error function and the complementary error function, respectively \cite{Handbook1964Abramowitz}. \begin{proof} By substituting $r$ and $\cos (\delta)$ by their average values. \end{proof} \end{lemma} Using the results in Lemmas 1-5 along with (19), the outage probabilities for all types of connections in the depicted system model are characterized via the following theorem. \begin{theorem} The outage probabilities in the UL and the DL in the $i^{th}$ tier for CCUs and CEUs are given by, \small \begin{align}\label{eq:OutageUL1} \!\!\!\!\!\!\!\!\!\!\!\!\! \mathcal{O}_{{\rm u}^c}^{(i)}(\theta)&=1-e^{\frac{-N_o \theta}{\rho^{(i)}}} U^{(i)}_{\rm SI_ u}\left(\frac{\theta}{\rho^{(i)}}\right) \prod\limits_{k=1}^{K} \mathcal{L}^{(c)}_{\mathfrak{I}_{\rm u \rightarrow u}^{(k,i)}} \left( \frac{ \theta |\tilde{\mathcal{I}}_{{\rm u}}(\alpha_i,\alpha_k)|^2 }{\rho^{(i)}} \right) \mathcal{L}^{(c)}_{\mathfrak{I}_{{\rm d \rightarrow u}}^{(k,i)}} \left( \frac{ \theta |\tilde{\mathcal{C}}_{{\rm u}}(\alpha_i,\alpha_k)|^2 }{\rho^{(i)}} \right), \end{align} \begin{align}\label{eq:OutageUL2} \!\!\!\!\!\!\!\!\!\!\!\!\! 
\mathcal{O}_{{\rm u}^e}^{(i)}(\theta)&=1- \int\limits_{\left(\frac{P_{\rm u}}{\rho^{(i)}}\right)^{\frac{1}{\eta_{\rm uu}}}}^{\infty} e^{\frac{-N_o r_o^{\eta_{\rm uu}}\theta}{P_{\rm u}}}U^{(i)}_{\rm SI_ u}\left(\frac{\theta r_o^{\eta_{\rm uu}}}{P_{\rm u}}\right) f_{R^{(i)}_e}(r_o) \notag \\ & \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \times \prod\limits_{k=1}^{K} \mathcal{L}^{(e)}_{\mathfrak{I}_{\rm u \rightarrow u}^{(k,i)}} \left( \frac{ r_o^{\eta_{\rm uu}}\theta |\tilde{\mathcal{I}}_{{\rm u}}(\alpha_i,\alpha_k)|^2 }{P_{\rm u}} \right) \mathcal{L}^{(e)}_{\mathfrak{I}_{{\rm d \rightarrow u}}^{(k,i)}} \left( \frac{ r_o^{\eta_{\rm uu}}\theta |\tilde{\mathcal{C}}_{{\rm u}}(\alpha_i,\alpha_k)|^2 }{P_{\rm u}} \right) dr_o, \end{align} \begin{align} \!\!\!\!\!\!\!\!\!\!\!\!\! \mathcal{O}_{{\rm d}^{c}}^{(i)}(\theta)&=1- \int\limits_{0}^{\left(\frac{P_{\rm u}}{\rho^{(i)}}\right)^{\frac{1}{\eta_{\rm dd}}}} e^{\frac{-N_o r_o^{\eta_{\rm dd}}\theta}{P^{(i)}_{\rm d}}} U^{(i)}_{\rm SI_ d}\left(\frac{\theta \rho r_o^{2\eta_{\rm dd}}}{P_{\rm d}^{(i)}}\right) f_{R^{(i)}_c}(r_o) \notag \\ & \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \times \prod\limits_{k=1}^{K} \mathcal{L}^{(c)}_{\mathfrak{I}_{\rm d \rightarrow d}^{(k,i)}} \left( \frac{ r_o^{\eta_{\rm dd}}\theta |\tilde{\mathcal{I}}_{{\rm d}}(\alpha_i,\alpha_k)|^2 }{P^{(i)}_{{\rm d}}} \right) \mathcal{L}^{(c)}_{\mathfrak{I}_{u \rightarrow d}^{(k,i)}} \left( \frac{ r_o^{\eta_{\rm dd}}\theta |\tilde{\mathcal{C}}_{{\rm d}}(\alpha_i,\alpha_k)|^2 }{P^{(i)}_{{\rm d}}} \right) dr_o, \end{align} \begin{align} \!\!\!\!\!\!\!\!\!\!\!\!\! 
\mathcal{O}_{{\rm d}^{e}}^{(i)}(\theta)&=1-\int\limits_{\left(\frac{P_{\rm u}}{\rho^{(i)}}\right)^{\frac{1}{\eta_{\rm dd}}}}^{\infty} e^{\frac{-N_o r_o^{\eta_{\rm dd}}\theta}{P^{(i)}_{\rm d}}} U^{(i)}_{\rm SI_ d}\left(\frac{\theta P_{\rm u} r_o^{\eta_{\rm dd}}}{P_{\rm d}^{(i)}}\right) f_{R^{(i)}_e}(r_o) \notag \\ & \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \times \prod\limits_{k=1}^{K} \mathcal{L}^{(e)}_{\mathfrak{I}_{\rm d \rightarrow d}^{(k,i)}} \left( \frac{r_o^{\eta_{\rm dd}}\theta |\tilde{\mathcal{I}}_{{\rm d}}(\alpha_i,\alpha_k)|^2 }{P^{(i)}_{{\rm d}}} \right) \mathcal{L}^{(e)}_{\mathfrak{I}_{u\rightarrow d}^{(k,i)}} \left( \frac{ r_o^{\eta_{\rm dd}}\theta |\tilde{\mathcal{C}}_{{\rm d}}(\alpha_i,\alpha_k)|^2 }{P^{(i)}_{{\rm d}}} \right) dr_o, \end{align}\normalsize where,\small \begin{align}\label{eq:SIu} \!\!\!\!\!\!\!\!\!\!&U^{(i)}_{\rm SI_ u}(x)=\int\limits_{0}^{\infty} \exp \left(- x\beta^{(i)}_{\rm u} h P^{(i)}_{\rm d} |\tilde{\mathcal{C}}_{\rm u}(\alpha_i,\alpha_i)|^2 \right) f_{H_s}(h) dh, \end{align} \begin{align}\label{eq:SId} \!\!\!\!\!\!\!\!\!\!&U^{(i)}_{\rm SI_ d}(x)= \left\{ \begin{array}{ll} \int\limits_{0}^{\infty} \exp \left(- x \beta_{\rm d} h |\tilde{\mathcal{C}}_{\rm d}(\alpha_i,\alpha_i)|^2\right) f_{H_s}(h) dh. \ \ \ & {\rm 2NT} \\ 1. & {\rm 3NT} \end{array} \right. \end{align}\normalsize and $f_{H_s}(\cdot)$ is the distribution of the SIC power. $f_{R^{(i)}_c}(\cdot)$ and $f_{R^{(i)}_e}(\cdot)$ are given in equations \eqref{eq:Dist1} and \eqref{eq:Dist2}, respectively, and the LTs are given in Lemma 4. \begin{proof} Refer to Appendix E. \end{proof} \end{theorem} A special case of interest that leads to simple forms of the outage probability is presented in the following corollary. 
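For reference, when $h_s \sim \exp(1)$, the SI factors $U^{(i)}_{\rm SI_u}$ and $U^{(i)}_{\rm SI_d}$ in \eqref{eq:SIu} and \eqref{eq:SId} reduce to the rational form $1/(1+x\beta P |\tilde{\mathcal{C}}|^2)$, which is the form that appears in the special case below; a minimal numerical check with illustrative (non-physical) values chosen so that the product is $O(1)$:

```python
import numpy as np
from scipy.integrate import quad

# Illustrative (hypothetical) numbers; c plays the role of
# x * beta * P * |C|^2 inside the SI integrals of eqs. (eq:SIu)-(eq:SId).
beta, P_d, C2, x = 0.05, 5.0, 1.0, 2.0
c = x * beta * P_d * C2                  # = 0.5 here

# U_SI(x) = E[exp(-c h)] with h ~ exp(1), evaluated numerically ...
U_num = quad(lambda h: np.exp(-c * h) * np.exp(-h), 0, np.inf)[0]
# ... and via the closed-form Laplace transform of the exponential PDF:
U_cf = 1 / (1 + c)
```

The numerical integral matches the closed form, confirming the rational SI factors used in the simplified expressions.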
\begin{corollary} In an interference limited dense single tier cellular network with unbinding UL transmit power, the outage probability in the DL and UL, assuming $\eta_{\rm dd}=\eta_{\rm uu}=\eta_{\rm ud}=4$, $\eta_{\rm du}=3$, $\delta_o=90^o$, and $h_s \sim \exp(1)$, are given by \small \begin{align}\label{eq:OutageUL11} \!\!\!\!\!\!\!\!\!\!\!\!\! \mathcal{O}_{{\rm u}}(\theta)& \approx 1-\frac{\exp \left( - \sqrt{ \theta } \arctan \left( \sqrt{ \theta }\right) -\frac{4 \pi^2 \lambda}{3\sqrt{3}} \left(\frac{ \theta |\tilde{\mathcal{C}}_{{\rm u}}(\alpha,\alpha)|^2 P_{\rm d} }{\rho} \right)^{\frac{2}{3}} \right) }{1+\beta_{\rm u} P_{\rm d} |\tilde{\mathcal{C}}_{\rm u}(\alpha,\alpha)|^2 \frac{\rho}{\theta}} \end{align} \begin{align}\label{eq:OutageDL11} \!\!\!\!\!\!\!\!\!\!\!\!\! \mathcal{O}_{{\rm d}}(\theta)& \approx 1-\int\limits_{0}^{\infty} 2 \pi \lambda r_o U_{\rm NT}(\theta,r_o) \notag \\ &\!\!\!\!\!\!\!\!\!\!\!\!\! \times \exp \left(- \pi \lambda r_o^2 - \pi \lambda r_o^2 \sqrt{\theta} \arctan \left(\sqrt{\theta} \right)- \sqrt{\frac{\rho r_o^4 \theta|\tilde{\mathcal{C}}_{{\rm d}}(\alpha,\alpha)|^2 } {P_{\rm d}}} \arctan \left( \sqrt{\frac{\rho r_o^4 \theta |\tilde{\mathcal{C}}_{{\rm d}}(\alpha,\alpha)|^2 } {P_{\rm d}}} \right) \right) dr_o \end{align}\normalsize where,\small \begin{align}\label{eq:UNT} \!\!\!\!\!\!\!\!\!\!U_{\rm NT}(\theta,r_o)&= \left\{ \begin{array}{ll} \frac{P_{\rm d} }{P_{\rm d}+\beta_{\rm d} |\tilde{\mathcal{C}}_{{\rm d}}(\alpha,\alpha)|^2 \rho r_o^{8}\theta}. \ \ \ & {\rm 2NT} \\ \int\limits_{0}^{\infty}\int\limits_{\delta_o}^{\pi} \frac{ P_{\rm d} \lambda r \exp \left(- \pi \lambda r^2 \right)}{P_{\rm d}+ r_o^4 \theta |\tilde{\mathcal{C}}_{{\rm d}}(\alpha,\alpha)|^2 \rho \left(1+(\frac{r_o}{r})^2-2\frac{r_o}{r} \cos (\delta) \right)^{-2}} d\delta dr. & {\rm 3NT} \end{array} \right. \\ &\approx\left\{ \begin{array}{ll} \frac{P_{\rm d} }{P_{\rm d}+\beta_{\rm d} |\tilde{\mathcal{C}}_{{\rm d}}(\alpha,\alpha)|^2 \rho r_o^{8}\theta}. 
\ \ \ & {\rm 2NT} \\ \frac{ P_{\rm d}}{P_{\rm d}+ r_o^4 \theta |\tilde{\mathcal{C}}_{{\rm d}}(\alpha,\alpha)|^2 \rho \left(1+4 \lambda r_o^2+\frac{8}{\pi} \sqrt{\lambda}r_o \right)^{-2}}. & {\rm 3NT} \end{array} \right. \end{align}\normalsize \begin{proof} The expressions follow from Theorem 1 and Lemmas 4-5 by considering a single tier network and setting $P_{\rm u} \rightarrow \infty$. \end{proof} \end{corollary} Following \eqref{eq:Thr_Gen}, the rate can be expressed in terms of the outage probability as follows \small \begin{align}\label{eq:Thr_Out} \mathcal{R} (\theta)={{\rm BW}} \log_2 \left(1+\theta \right) \left(1-\mathcal{O}(\theta) \right). \end{align}\normalsize Hence, general expressions for the $\alpha$-duplex rate in a multi-tier network can be obtained by directly substituting the outage probability expressions from Theorem 1 in equation \eqref{eq:Thr_Gen}. For the sake of brevity, we only list the rate expressions for a special case of interest in the following corollary. \begin{corollary} In an interference limited dense single tier cellular network with unbinding UL transmit power, the average rates in the DL and UL, assuming $\eta_{dd}=\eta_{uu}=\eta_{ud}=4$, $\eta_{du}=3$, $\delta_o=90^o$, and $h_s \sim \exp(1)$, are given by \small \begin{align}\label{eq:RateUL11} \!\!\!\!\!\!\!\!\!\!\!\!\! \mathcal{R}_{{\rm u}}(\theta)& \approx \frac{(B_{\rm u}+(\epsilon+1) \alpha B) \log_2\left(1+\theta \right)}{1+\beta_{\rm u} P_{\rm d} |\tilde{\mathcal{C}}_{\rm u}(\alpha,\alpha)|^2 \frac{\rho}{\theta}}\exp \left( - \sqrt{ \theta } \arctan \left( \sqrt{ \theta }\right) -\frac{4 \pi^2 \lambda}{3\sqrt{3}} \left(\frac{ \theta |\tilde{\mathcal{C}}_{{\rm u}}(\alpha,\alpha)|^2 P_{\rm d} }{\rho} \right)^{\frac{2}{3}} \right) . \end{align} \begin{align}\label{eq:RateDL11} \!\!\!\!\!\!\!\!\!\!\!\!\! \mathcal{R}_{{\rm d}}(\theta)& \approx 2 \pi \lambda(B_{\rm d}+(\epsilon+1) \alpha B) \log_2\left(1+\theta \right) \notag \\ &\!\!\!\!\!\!\!\!\!\!\!\!\!
\times \int\limits_{0}^{\infty} r_o \exp \left(- \pi \lambda r_o^2 - \pi \lambda r_o^2 \sqrt{\theta} \arctan \left(\sqrt{\theta} \right)- \sqrt{\frac{\rho r_o^4 \theta|\tilde{\mathcal{C}}_{{\rm d}}(\alpha,\alpha)|^2 } {P_{\rm d}}} \arctan \left( \sqrt{\frac{\rho r_o^4 \theta |\tilde{\mathcal{C}}_{{\rm d}}(\alpha,\alpha)|^2 } {P_{\rm d}}} \right) \right) U_{\rm NT}(\theta,r_o) dr_o \end{align}\normalsize where,\small \begin{align}\label{eq:RUNT} \!\!\!\!\!\!\!\!\!\!U_{\rm NT}(\theta,r_o)&= \left\{ \begin{array}{ll} \frac{P_{\rm d} }{P_{\rm d}+\beta_{\rm d} |\tilde{\mathcal{C}}_{{\rm d}}(\alpha,\alpha)|^2 \rho r_o^{8}\theta}. \ \ \ & {\rm 2NT} \\ \int\limits_{0}^{\infty}\int\limits_{\delta_o}^{\pi} \frac{ P_{\rm d} \lambda r \exp \left(- \pi \lambda r^2 \right)}{P_{\rm d}+ r_o^4 \theta |\tilde{\mathcal{C}}_{{\rm d}}(\alpha,\alpha)|^2 \rho \left(1+(\frac{r_o}{r})^2-2\frac{r_o}{r} \cos (\delta) \right)^{-2}} d\delta dr. & {\rm 3NT} \end{array} \right. \\ &\approx\left\{ \begin{array}{ll} \frac{P_{\rm d} }{P_{\rm d}+\beta_{\rm d} |\tilde{\mathcal{C}}_{{\rm d}}(\alpha,\alpha)|^2 \rho r_o^{8}\theta}. \ \ \ & {\rm 2NT} \\ \frac{ P_{\rm d}}{P_{\rm d}+ r_o^4 \theta |\tilde{\mathcal{C}}_{{\rm d}}(\alpha,\alpha)|^2 \rho \left(1+4 \lambda r_o^2+\frac{8}{\pi} \sqrt{\lambda}r_o \right)^{-2}}. & {\rm 3NT} \end{array} \right. \end{align}\normalsize \begin{proof} Follows from Corollary 1 and equation \eqref{eq:Thr_Out}. \end{proof} \end{corollary} From the last corollary, we can find the critical SIC value $\beta_{\rm d}$ at which the 2NT outperforms the 3NT, as a function of the serving distance ($r_o$). This value is given by the following corollary.
\begin{corollary} In an interference-limited dense single-tier cellular network with unbinding UL transmit power, the approximate minimum value of $\beta_{\rm d}$ required for the 2NT to outperform the 3NT as a function of the serving distance ($r_o$), assuming $\eta_{ud}=4$, $\delta_o=90^o$, fully-overlapped channels ($\alpha=1$ and $|\tilde{\mathcal{C}}_{{\rm d}}(\alpha,\alpha)|^2=1$), and $h_s \sim \exp(1)$, is given by \small \begin{align}\label{eq:Cond1} \beta_{\rm d} \approx \left(4 \lambda r_o^4+\frac{8}{\pi} \sqrt{\lambda} r_o^3 +r_o^2\right)^{-2}. \end{align}\normalsize \begin{proof} Follows from equation (54). \end{proof} \end{corollary} Equation \eqref{eq:Cond1} expresses the critical value of $\beta_{\rm d}$ as a function of $r_o$ and $\lambda$. To get more insight into the critical value of $\beta_{\rm d}$ with respect to the BSs' intensity $\lambda$, we assume that the tagged UE is located at the average serving distance; this assumption reduces \eqref{eq:Cond1} to \small \begin{align}\label{eq:Cond2} \beta_{\rm d} \approx \frac{16 \lambda^2}{9}. \end{align}\normalsize In the next section, Theorem 1, Lemmas 1-5, and Corollaries 1-3 are used to analyze the performance of the cellular network under the 2NT and 3NT operations.
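To make \eqref{eq:Cond1} and its density-only reduction \eqref{eq:Cond2} concrete, the following sketch evaluates both at an illustrative density (distances in meters, intensity per m$^2$, with the unit-distance path-loss normalization implicit in the analysis):

```python
import numpy as np

def beta_d_crit(lam, r_o):
    """Critical SIC value of eq. (eq:Cond1); below it, 2NT outperforms 3NT."""
    return (4 * lam * r_o**4 + (8 / np.pi) * np.sqrt(lam) * r_o**3 + r_o**2) ** -2

lam = 20e-6                       # 20 BS/km^2 expressed in BS/m^2 (illustrative)
r_avg = 1 / (2 * np.sqrt(lam))    # mean PPP serving distance, ~111.8 m here
crit = beta_d_crit(lam, r_avg)    # eq. (eq:Cond1) at the mean serving distance
crit_dB = 10 * np.log10(crit)               # roughly -92 dB
crit2_dB = 10 * np.log10(16 * lam**2 / 9)   # eq. (eq:Cond2), roughly -91.5 dB
```

At the mean serving distance, the exact expression and the $16\lambda^2/9$ reduction agree to within about 1 dB, and both fall between the $\beta_{\rm u}$ and $\beta_{\rm d}$ values used in the numerical section.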
\section{Simulations and Numerical Results} \begin{table} [] \caption{\; Parameter Values.} \centering \begin{tabular}{|l|l|l|l|} \hline \rowcolor[HTML]{C0C0C0} \textbf{Parameter} & \textbf{Value} & \textbf{Parameter} & \textbf{Value} \\ \hline $P_{\rm{u}}$ & 3 W & $P_{\rm{d}}$ & 5 W \\ \hline $\lambda$ & 1 $\text{BSs/km}^2$ & $N_o$ & 0 \\ \hline $B^{\rm HD}_{\rm u}$, $B^{\rm HD}_{\rm d}$ & 1 MHz & $\theta$ & 1 \\ \hline $\beta_{\rm d}$ &$-75$ dB & $\beta_{\rm u}$ & $-110$ dB \\ \hline $\rho$ & -60 dBm & $\epsilon$ & 0.03134 \\ \hline $b_{\rm d}/b_{\rm u}$ & Sinc/Sinc$^2$ & $\delta_o$ & $90^o$ \\ \hline $\eta_{\rm uu}$, $\eta_{\rm dd}$, $\eta_{\rm ud}$ & 4 & $\eta_{\rm du}$ & 3 \\ \hline \end{tabular} \label{TB:parameters}\vspace{-0.5cm} \end{table} Throughout this section, we verify the developed mathematical paradigm via independent system-level simulations, where the BSs are realized via a PPP over an area of 600 ${\rm km}^2$. Then, the UEs are distributed uniformly over the area such that each BS has at least two UEs within its association area. Each BS randomly selects two UEs to serve such that the $\delta_o$ angular separation illustrated in Fig.~\ref{fig:Network4} is satisfied. The SINR is calculated by summing the interference powers from all the UEs and the BSs after multiplying by the effective interference factors. In the UL, the transmit powers of the UEs are set according to the power control discussed in Section II. The results are taken for the UE and the BS that are closest to the origin to avoid the edge effect. Unless otherwise stated, the parameter values in Table 1 are used. Note that for the average SIC power, the maximum reported value according to \cite{Full2015Goyal} is $-110$ dB, and hence, we set $\beta_{\rm u}$ to $-110$ dB and consider $\beta_{\rm d} \geq \beta_{\rm u}$ because the BSs are more likely to have more powerful SIC capabilities.
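The simulation procedure described above can be sketched compactly; the snippet below is a stripped-down illustration (DL only, HD operation, no SI or UL interference, hypothetical parameter values) of how the PPP realizations and the typical-UE SIR are generated, not the full 2NT/3NT simulator:

```python
import numpy as np

rng = np.random.default_rng(0)
lam, eta, theta = 1e-6, 4.0, 1.0   # BS intensity [1/m^2], exponent, threshold
R, runs = 10_000.0, 2000           # window radius [m], number of realizations

def dl_sir():
    """One realization of the DL SIR at a typical UE placed at the origin."""
    n = rng.poisson(lam * np.pi * R**2)       # number of BSs in B(o, R)
    d = np.sort(R * np.sqrt(rng.random(n)))   # BS distances (uniform in the disk)
    h = rng.exponential(size=n)               # Rayleigh-fading power gains
    # Nearest BS serves; all the remaining BSs interfere.
    return (h[0] * d[0]**-eta) / np.sum(h[1:] * d[1:]**-eta)

outage = np.mean([dl_sir() < theta for _ in range(runs)])
# For eta = 4 and theta = 1, the known interference-limited DL baseline is
# P{SIR < theta} = 1 - 1/(1 + sqrt(theta)*arctan(sqrt(theta))), roughly 0.44.
```

The estimated outage lands close to the closed-form baseline, which is the same $\sqrt{\theta}\arctan(\sqrt{\theta})$ factor that appears in Corollaries 1-2.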
For the pulse shaping, we consider two basic pulse shapes, namely, the Sinc$^2$ and Sinc pulse shapes\footnote{Employing and designing more sophisticated pulse shapes for specific purposes is left for future work.}, which have the FTs given in \eqref{eq:PulseShapes}, \small \begin{align}\label{eq:PulseShapes} S(f,\rm{BW},b)=\left\{ \begin{array}{ll} \frac{\rm{SINC(\frac{2 f}{\rm{BW}})}}{\sqrt{\int\limits_{- \infty}^{\infty} {\rm SINC}^2\left(\frac{2 f}{\rm{BW}}\right) df}} & \mbox{ } b = \rm{Sinc.} \\ \frac{\rm{SINC^2(\frac{2 f}{\rm{BW}})}}{\sqrt{\int\limits_{- \infty}^{\infty} {\rm SINC}^4\left(\frac{2 f}{\rm{BW}}\right) df}} & \mbox{ } b = \rm{Sinc^2 .} \end{array} \right. \end{align}\normalsize where $b$ indicates which pulse shape is employed. Unless otherwise stated, the SIC power distribution $f_{H_s}(\cdot)$ is assumed to be exponentially distributed with unit mean. \begin{figure*}[t!] \centering \begin{subfigure}[b]{0.3\textwidth} \centering \includegraphics[width=2.08in]{./2NT_3.eps} \caption{2NT DL throughput.}\label{fig:Ex1_1} \end{subfigure}% ~ \begin{subfigure}[b]{0.3\textwidth} \centering \includegraphics[width=2.1in]{./3NT_3.eps} \caption{3NT DL throughput.}\label{fig:Ex1_2} \end{subfigure} ~ \begin{subfigure}[b]{0.3\textwidth} \centering \includegraphics[width=2.1in]{./UP_3.eps} \caption{UL throughput.}\label{fig:Ex1_3} \end{subfigure} \caption{Rates vs $\alpha$ for the 3NT and 2NT.}\label{fig:Ex1}\vspace{-0.5cm} \end{figure*} Fig. \ref{fig:Ex1} shows the rate variation for the UL and DL versus $\alpha$ for the 2NT and 3NT, where $\alpha=0$ and $\alpha=1$ represent the HD and the FD cases, respectively. The solid lines represent the analytical results obtained from Theorem 1 with the exact LTs in Lemma 4, the diamonds represent the results obtained by simulations, the squares in Fig. \ref{fig:Ex1_2} are found by using the approximation for the intra-cell interference given in Lemma 5, and the squares in Fig.
\ref{fig:Ex1_3} by using the bounds for the LTs given in Lemma 4. The close match between the analysis, approximations, and simulation results validates the developed mathematical model and verifies the accuracy of the assumptions in Section III-A, as well as the bounds presented in Lemmas 4-5. Several insights can be obtained from Fig.~\ref{fig:Ex1}. For instance, the figure shows that the CCUs have better performance compared to the CEUs in all cases, which is intuitive due to the larger service distances that lead to higher path-loss attenuation for CEUs compared to the CCUs. Note that the CEUs do not have sufficient power to invert their path-loss in the UL direction, and hence, the received power at the serving BS is less than $\rho$, which leads to the deteriorated UL CEU performance when compared to the CCU case. The figure also shows that there exists an optimal value of partial overlap $0 <\alpha <1$ that maximizes the UL transmission rate\footnote{The UL performance is maximized at $\alpha=0.28859$ due to the orthogonality between the used pulse shapes at this particular value; for more details, refer to \cite{AlAmmouri2015Inband}.}. Hence, despite the efficient SIC (-110 dB), neither HD nor FD is optimal in the UL case due to the prominent DL interference. On the other hand, the DL performance is mainly affected by the SIC rather than the UL interference\footnote{In the case of perfect SIC or very low values of $\beta_{\rm d}$, e.g., $\beta_{\rm d}=-110$ dB, the degradation in the SINR is only due to the UL-to-DL interference, and since this is negligible compared to the DL-to-DL interference for a realistic set of network parameters, the (linear) increase in BW overcomes the decrease in the SINR, which results in an approximately linear curve.}. Particularly, for UEs with efficient SIC, the full overlap (i.e., FD) is the best strategy for the DL. On the other hand, partial overlap is better for UEs with inefficient SIC.
It is worth mentioning that the SI has a more prominent effect on the CEUs than on the CCUs, where the SI nearly nullifies the DL rate for high values of $\alpha$ and efficient SIC. This is because CEUs transmit with their maximum power, which makes the residual SI power more prominent compared to CCUs. Comparing Fig. \ref{fig:Ex1}.a and Fig. \ref{fig:Ex1}.b, we can see that the 3NT achieves performance close to the 2NT with sufficient SIC, and outperforms the 2NT with poor SIC. Note that 3NT UEs operate in HD mode and hence are not affected by the SIC, as shown in Fig. \ref{fig:Ex1_2}. \begin{figure*}[t!] \centering \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=2.2in]{./AVG_3.eps} \caption{Average DL rate.}\label{fig:Ex2_1} \end{subfigure}% ~ \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=2.2in]{./CCU_3.eps} \caption{CCU DL rate.}\label{fig:Ex2_2} \end{subfigure} ~ \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=2.2in]{./CEU_3.eps} \caption{CEU DL rate.}\label{fig:Ex2_3} \end{subfigure} \caption{DL Rates vs $\beta_{\rm d}$ under different network topologies, where FD denotes $\alpha=1$ and $\alpha$D denotes $\alpha \approx 0.2886$.}\label{fig:Ex2}\vspace{-0.8cm} \end{figure*} Fig.~\ref{fig:Ex2} plots the DL transmission rate vs the SI attenuation power for the 2NT and 3NT. The figure shows that the DL FD rate outperforms the DL rate of both the HD and the $\alpha \approx 0.2886$ cases in all scenarios. Fig.~\ref{fig:Ex2} also shows that there is a critical value of $\beta_{\rm d}$ at which the 3NT outperforms the 2NT. This critical value can be interpreted as the point at which the SI experienced by DL UEs in the 2NT becomes more significant than the intra-cell interference experienced by DL UEs in the 3NT; a closed-form approximation for this value is given in Corollary 3 and (56). Interestingly, the gain offered by the 2NT at low values of $\beta_{\rm d}$ is not significant when compared to the 3NT.
Hence, the intra-cell interference is not a limiting parameter for the 3NT. In other words, network operators can harvest FD gains via HD UEs, almost similar to the gains harvested by FD UEs with efficient SIC capabilities. The figure also shows that, in case of poor SIC at the UEs, the 3NT can offer significant gains, especially for CEUs, when compared to the 2NT case. To study the effect of the serving distance on the 2NT/3NT performance, we plot Fig. \ref{fig:Ex3_1} for $\lambda=20$ BSs/km$^2$. The figure plots the minimum $\beta_{\rm d}$ required in 2NT to outperform 3NT vs. the serving distance along with the pdf of the serving distance, where solid (dashed) lines are obtained from the exact (approximate) expression of the intra-cell interference given in equation \eqref{eq:RUNT}. The close match between the exact and the approximate results validates the approximation given in \eqref{eq:RUNT}. As the figure shows, the 3NT is more appealing for farther UEs because they have a tighter constraint on the SIC $\beta_{\rm d}$ required for the 2NT to outperform the 3NT, which may require more sophisticated and expensive FD transceivers. There are two reasons for this result: first, a large serving distance implies that the intra-cell interferer in 3NT is farther away on average due to the enforced scheduling technique, which reduces the negative effect of the intra-cell interference. Second, a longer service distance implies a larger transmit power due to the employed power control, and hence a more powerful SIC is required. A useful design insight from Fig.~\ref{fig:Ex3} is that the BSs should select the mode of operation (i.e., 3NT or 2NT) for the UEs based on their distances along with their SIC. To get more insights on the network operation for different intensities, we plot Fig. \ref{fig:Ex3_2} based on equation \eqref{eq:Cond2}. As expected, 2NT becomes more appealing in dense cellular networks because the intra-cell (self) interference is more (less) prominent in smaller cell areas\footnote{Fig.
6b focuses on comparing the 2NT/3NT with different intensities. The effect of intensity on the FD gain for 2NT in cellular networks is covered in \cite{AlAmmouri2015Inband}, and for ad-hoc networks with CSMA-based transmitters in \cite{Exploring2015Wang}.}. Finally, we study the rate gains of the 2NT/3NT versus $\delta_o$ in Fig. \ref{fig:Ex3_3}. As expected, by increasing $\delta_o$ the distance between the two scheduled UEs increases, and hence the intra-cell interference power in 3NT operation decreases. Moreover, the figure shows the necessity of UE scheduling and multi-user diversity in 3NT; otherwise, the rate loss in 3NT compared to 2NT can go up to 20$\%$ in the case of random scheduling ($\delta_o=0$). \begin{figure*}[t!] \centering \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=2.2in]{./Distance2.eps} \caption{Critical values of $\beta_{\rm d}$ vs. the serving distance.}\label{fig:Ex3_1} \end{subfigure}% ~ \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=2.2in]{./Intensity2.eps} \caption{Critical values of $\beta_{\rm d}$ vs. the BSs' intensity.}\label{fig:Ex3_2} \end{subfigure} ~ \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=2.2in]{./Delt2.eps} \caption{DL rate vs $\delta_o$ as illustrated in Fig. 2b.}\label{fig:Ex3_3} \end{subfigure} \caption{(a), (b) Critical values of $\beta_{\rm d}$ vs. the serving distance and BSs' intensity using equations (56) and (57), and (c) DL rate vs $\delta_o$ as illustrated in Fig. 2b.}\label{fig:Ex3}\vspace{-0.8cm} \end{figure*} \section{Conclusion} This paper presents a mathematical paradigm for multi-tier cellular networks with FD BSs and HD/FD UEs. The presented model captures detailed system parameters including pulse shaping, filtering, imperfect self-interference cancellation, partial uplink/downlink overlap, uplink power control, limited users' transmit powers, UE-BS association, and UEs' scheduling.
To this end, unified rate expressions for the 2-nodes topology (2NT) with FD users and the 3-nodes topology (3NT) with HD users are presented and used to compare their performance. The results show that there exists a critical value for the self-interference cancellation at which the 3NT outperforms the 2NT. Moreover, closed-form approximations for this critical value as a function of the serving distance and the BSs' intensity are obtained. The results also show that even when the SI is efficiently canceled, the 2NT does not offer significant gains when compared to the 3NT operation if multi-user diversity and user scheduling are exploited. This implies that network operators can harvest FD gains by implementing FD transceivers at their BSs regardless of the state of the users (i.e., FD or HD). \appendices \section{Proof of Lemma 1} Exploiting the independence between the network tiers and using the null probability of the PPP, the cumulative distribution function (CDF) of the $i^{th}$-tier service distance is given by, \small \begin{align}\label{eq:App1_0} &F_{R^{(i)}}(r)=\mathbb{P} \{R^{(i)} \leq r \}=\mathbb{P} \{R_i \leq r | R_i \leq \frac{\tau_j}{\tau_i}r_j \forall j \in \{1,...,M \},j \neq i\}=\frac{\mathbb{P} \{R_i \leq r \cap R_i \leq \frac{\tau_j}{\tau_i}r_j \forall j \neq i\}}{\mathbb{P} \{ R_i \leq \frac{\tau_j}{\tau_i}r_j \forall j \neq i\}} \end{align}\normalsize The denominator is given by, \small \begin{align}\label{eq:App1_1} &\mathbb{P} \{ R_i \leq \frac{\tau_j}{\tau_i}r_j \forall j \neq i\}=\int\limits_{0}^{\infty}f_{R_i} (r_i)\left[ \prod\limits_{\underset{j=1}{j \neq i}}^{M} \ \int\limits_{(r_i \frac{\tau_i}{\tau_j})}^{\infty}f_{R_j} (r_j) dr_j \ \right]dr_i=\frac{\lambda_i}{ \sum\limits_{j=1}^{M} \frac{\tau_i^2}{\tau_j^2} \lambda_j } \end{align}\normalsize and the numerator is given by, \small \begin{align}\label{eq:App1_2} &\mathbb{P} \{R_i \leq r \cap R_i \leq \frac{\tau_j}{\tau_i}r_j\forall j \neq i \}=\int\limits_{0}^{\infty}f_{R_i}
(r_i)\left[ \prod\limits_{\underset{j=1}{j \neq i}}^{M}\int\limits_{r_i \frac{\tau_i}{\tau_j}}^{\infty}f_{R_j} (r_j) dr_j \ \right]dr_i= \frac{\lambda_i \left(1- \exp \left(-\pi r^2 \sum\limits_{j=1}^{M} \frac{\tau_i^2}{\tau_j^2} \lambda_j \right) \right)}{ \sum\limits_{j=1}^{M} \frac{\tau_i^2}{\tau_j^2} \lambda_j} \end{align}\normalsize Substituting equations \eqref{eq:App1_1} and \eqref{eq:App1_2} in \eqref{eq:App1_0} yields, \small \begin{align} F_{R^{(i)}}(r)= 1- \exp \left(-\pi r^2 \sum\limits_{j=1}^{M} \frac{\tau_i^2}{\tau_j^2} \lambda_j\right)=1- \exp \left(-\pi \bar{\lambda_i} r^2\right) \ \ \ \ \ \ \ \ \ \ \ r \geq 0. \end{align} where $\bar{\lambda_i}= \sum\limits_{j=1}^{M} \frac{\tau_i^2}{\tau_j^2} \lambda_j $ and the PDF is given by, \begin{align}\label{eq:App1_3} &f_{R^{(i)}}(r)= 2 \pi \bar{\lambda_i} r \exp \left(-\pi \bar{\lambda_i} r^2\right) \ \ \ \ \ \ \ \ \ \ \ r \geq 0. \end{align}\normalsize Given that the UE is a CCU, the PDF in \eqref{eq:App1_3} should be truncated according to the channel inversion power control. Let $R^{(i)}_c$ denote the serving distance for a test CCU connected to the $i^{th}$ tier, then its PDF is given by, \small \begin{align}\label{eq:App1_4} \!\!\!\!\!\!\!\!\!\!\!\!\!\! f_{R^{(i)}_c}(r)&= \frac{2 \pi \bar{\lambda_i} r \exp \left(-\pi \bar{\lambda_i} r^2\right)}{\int\limits_{0}^{\left(\frac{P_{\rm u}}{\rho^{(i)}}\right)^{\frac{1}{\eta_{\rm dd}}}} 2 \pi \bar{\lambda_i} r \exp \left(-\pi \bar{\lambda_i} r^2\right)dr}=\frac{2 \pi \bar{\lambda_i} r \exp \left(-\pi \bar{\lambda_i} r^2\right)}{1-\exp \left(-\pi \bar{\lambda_i} \left(\frac{P_{\rm u}}{\rho^{(i)}}\right)^{\frac{2}{\eta_{\rm dd}}}\right)} \mathbbm{1}_{\left\{0\leq r \leq \left(\frac{P_{\rm u}}{\rho^{(i)}}\right)^{\frac{1}{\eta_{\rm dd}}}\right\}}(r). \end{align}\normalsize Similarly, the PDF of the service distance for a CEU is given by, \small \begin{align}\label{eq:App1_5} \!\!\!\!\!\!\!\!\!\!\!\!\!\!
f_{R^{(i)}_e}(r)&=\frac{2 \pi \bar{\lambda_i} r \exp \left(-\pi \bar{\lambda_i} r^2\right)}{\int\limits_{\left(\frac{P_{\rm u}}{\rho^{(i)}}\right)^{\frac{1}{\eta_{\rm dd}}}}^{\infty} 2 \pi \bar{\lambda_i} r \exp \left(-\pi \bar{\lambda_i} r^2\right)dr}= 2 \pi \bar{\lambda_i} r \exp \left(-\pi \bar{\lambda_i} r^2+\pi \bar{\lambda_i} \left(\frac{P_{\rm u}}{\rho^{(i)}}\right)^{\frac{2}{\eta_{\rm dd}}}\right) \mathbbm{1}_{\left\{\left(\frac{P_{\rm u}}{\rho^{(i)}}\right)^{\frac{1}{\eta_{\rm dd}}} < r < \infty\right\}}(r). \end{align}\normalsize \section{Proof of Lemma 2} The proof is as follows, \small \begin{align} \mathcal{L}_{I}(s) &= \mathbb{E}\left[\exp \left( \sum\limits_{j \in \Phi}-s P_{_j} h_j r_j^{-\eta} \mathbbm{1}\left( r_j>a_j\right) \right)\right],\notag \\ &\stackrel{(i)}{=} \mathbb{E}_{\Phi}\left[\underset{r_j \in \Phi}{\prod} \mathbb{E}_{h_j,P_{j}}\left[ \exp\left( -s P_{_j} h_j r_j^{-\eta} \mathbbm{1}\left( r_j> a_j\right) \right)\right]\right],\notag \\ &\stackrel{(ii)}{=} \exp\left( - 2 \pi \lambda\mathbb{E}_{P}\left[ \int_{a}^{\infty}\mathbb{E}_{h}\left[ \left(1- \exp\left( -s P h r^{-\eta} \right)\right)\right] rdr\right] \right),\notag \\ &\stackrel{(iii)}{=} \exp \left( \frac{-2 \pi \lambda}{\eta-2} \mathbb{E}_{P} \left[a^{2-\eta} s P \ {}_2 F_1 \left[1,1-\frac{2}{\eta},2-\frac{2}{\eta}, -a^{-\eta} P s \right] \right] \right),\label{eq:LUU} \end{align}\normalsize where $(i)$ follows from the independence between $\Phi$ and $h_j$, $(ii)$ by using the probability generating functional (PGFL) of the PPP, and $(iii)$ by using the LT of $h$ and evaluating the integral. \section{Proof of Lemma 3} The lemma is proved by showing that the second derivative of the function which appears inside the expectation of the exponent in \eqref{eq:LTgeneralapp} is positive w.r.t.\ $P$. Hence, the function of interest is convex in $P$, and the result in Lemma 3 follows from Jensen's inequality \cite[Section 3.1.8]{Convex2004Boyd}.
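As a numerical spot-check of this convexity argument, one can evaluate $G(y) = -y \, {}_2 F_1 \left[1,1-\frac{2}{\eta},2-\frac{2}{\eta}, -y \right]$ (the function of interest, with $y = a^{-\eta} P s$) on a grid and verify that its second differences are positive. A minimal sketch, assuming \texttt{scipy} is available:

```python
import numpy as np
from scipy.special import hyp2f1

# Numerical spot-check of the convexity claim in Lemma 3:
# G(y) = -y * 2F1(1, 1-2/eta; 2-2/eta; -y) should have G''(y) > 0 for eta > 2.
def G(y, eta):
    return -y * hyp2f1(1.0, 1.0 - 2.0 / eta, 2.0 - 2.0 / eta, -y)

convex = True
for eta in (2.5, 4.0, 6.0):
    y = np.linspace(0.05, 20.0, 2000)
    g = G(y, eta)
    # Central second differences, proportional to G''(y) * dy^2.
    second_diff = g[:-2] - 2.0 * g[1:-1] + g[2:]
    convex = convex and bool(np.all(second_diff > 0.0))
```

For $\eta=4$ the check can also be done in closed form, since ${}_2F_1(1,\tfrac{1}{2};\tfrac{3}{2};-y)=\arctan(\sqrt{y})/\sqrt{y}$.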
Let $y=a^{-\eta}P s$, the function of interest, denoted here as $G(y)$, can be expressed as \small \begin{align} \label{gyyg} G(y)=-y \ {}_2 F_1 \left[1,1-\frac{2}{\eta},2-\frac{2}{\eta}, -y \right] \end{align} \normalsize The second derivative of $G(y)$ is given by \footnotesize \begin{align}\label{2nd_der} \frac{d^2 G(y)}{dy^2} {=}\left(\frac{\left(\, _2F_1\left(1,1-\frac{2}{\eta };2-\frac{2}{\eta };-y\right)-\frac{1}{y+1}\right)}{y}\left(\frac{2}{\eta }\right)+\frac{1}{(y+1)^2} \right)\left(1-\frac{2}{\eta }\right). \end{align}\normalsize where \eqref{2nd_der} is found by using \cite[Eqs (15.2.2),(15.2.10),(15.2.27)]{Handbook1964Abramowitz} and some mathematical simplifications. Owing to the fact that $\frac{1}{(1+y)^2}$, $\left(1-\frac{2}{\eta }\right)$, $\frac{2}{\eta}$, and $y$ are positive for $\eta>2$, the proof is completed by showing that $G_2(y)= \left(\, _2F_1\left(1,1-\frac{2}{\eta };2-\frac{2}{\eta };-y\right)-\frac{1}{y+1}\right)$ is positive. Using the integral definition of the hypergeometric function \cite[Eq. (15.3.1)]{Handbook1964Abramowitz} and applying it to our case, we have \small \begin{align}\label{eq:App3_1} \!\!\!\!\!\!\!\!\!\!\!\!\!{}_2F_1\left(1,1-\frac{2}{\eta };2-\frac{2}{\eta };-y\right)&= \frac{\Gamma \left(2-\frac{2}{\eta} \right)}{\Gamma \left(1-\frac{2}{\eta} \right)} \int\limits_0^{1} \frac{t^{\frac{-2}{\eta}}}{1+t y}dt \stackrel{(i)}{=} \left(1-\frac{2}{\eta} \right) \int\limits_0^{1} \frac{t^{\frac{-2}{\eta}}}{1+t y}dt \stackrel{(ii)}{>} \frac{1-\frac{2}{\eta}}{1+y} \int\limits_0^{1} t^{\frac{-2}{\eta}} dt \stackrel{(iii)}{=} \frac{1}{1+y} > 0. \end{align} \normalsize where $(i)$ follows by \cite[Eq. (6.1.15)]{Handbook1964Abramowitz}, $(ii)$ follows by lower-bounding the decreasing function $\frac{1}{1+t y}$ by its minimum value $\frac{1}{1+y}$ over the integration region $t \in [0,1]$, and $(iii)$ follows since $\int_0^{1} t^{\frac{-2}{\eta}} dt = \left(1-\frac{2}{\eta}\right)^{-1}$ for $\eta>2$. Hence, $G_2(y)$ is positive, and the second derivative of $G(y)$ in \eqref{gyyg} is positive, which completes the proof. \section{Proof of Lemma 4} Based on Lemmas 2 and 3, we only need to determine the interference exclusion region (IER) for each tier ($a_j$) in each case. \begin{itemize} \item $\mathcal{L}_{I^{(k,i)}_{{\rm d}\rightarrow {\rm d}}}(s)$: Due to the association rule in Section~\ref{sec:Association}, $r_o \tau_i \leq r_j \tau_k$ is always satisfied. Hence, the IER is defined by $\mathcal{B}(o,r_o \frac{\tau_i}{\tau_k})$; by substituting $a$ in Lemma 2 by $r_o \frac{\tau_i}{\tau_k}$, the final expression is found. \item $\mathcal{L}^{(c)}_{I^{(k,i)}_{{\rm d}\rightarrow {\rm d}}}(s)$: Based on the power inversion for CCUs and following \cite{On2014ElSawy}, $a=(\frac{P_{\rm u}}{\rho^{(k)}})^{\frac{1}{\eta}}$. \item $\mathcal{L}^{(e)}_{I^{(k,i)}_{{\rm d}\rightarrow {\rm d}}}(s)$: Based on the power inversion for CEUs and following \cite{Load2014AlAmmouri}, $a=r_o$; then by using Lemma 3, the final expression is found. \item $\mathcal{L}_{I^{(k,i)}_{{\rm d}\rightarrow {\rm u}}}(s)$: The PPP assumption of the BSs location implies that there is no IER for both cases (CCUs and CEUs), and hence $a=0$. \item $\mathcal{L}_{I^{(k,i)}_{{\rm u}\rightarrow {\rm d}}}(s)$: We assume that the tagged BS is collocated with its associated UE, hence $\mathcal{L}_{I^{(k,i)}_{{\rm u}\rightarrow {\rm d}}}(s)=\mathcal{L}_{I^{(k,i)}_{{\rm u}\rightarrow {\rm u}}}(s)$ in 2NT.
In 3NT, the effect of the intra-cell interference must also be considered. Let $U^{(k,i)}_1(s)$ denote the LT of the intra-cell interference, and let $P_{{\rm u}_1}$, $h_{1-o}$, $r_{1-o}$, and $r_{1}$ denote the transmitted power of the interfering user, the channel gain between the two users, the distance between them, and the distance between the interfering UE and the serving BS, respectively. Then, the LT of the interfering power can be expressed as \small \begin{align}\label{eq:E} U^{(i,i)}_1(s)&=\mathbb{E}\left[ e^{-s P_{{\rm u}_1} h_{1-o} r_{1-o}^{-\eta} }\right], \notag \\ &\stackrel{(i)}{=}\mathbb{E}\left[ e^{-s P_{{\rm u}_1} h_{1-o} \left(r_o^2+r_1^2-2 r_o r_1 \cos (\delta)\right)^{-\eta/2} }\right], \end{align}\normalsize where $(i)$ follows by using the cosine rule (cf. Fig. 2b), and $\delta$ is uniformly distributed between $\delta_o$ and $\pi$. When the other UE is a CCU, which has a probability $\mathbb{P}\{\rm CCU \}$, then $P_{{\rm u}_1}=\rho^{(i)} r_1^{\eta}$, and when it is a CEU, which has a probability $\mathbb{P}\{\rm CEU \}$, then $P_{{\rm u}_1}=P_{\rm u}$. By substituting these values and averaging over $h_{1-o}$, the expression for $ U^{(k,i)}_1(s)$ is found. \end{itemize} \section{Proof of Theorem 1} Starting with the UL outage probability: for CCUs the transmitted power is equal to $\rho r_o^{\eta}$, and for the CEUs the transmitted power is set to the maximum $P_{\rm u}$. By substituting these values in \eqref{eq:outageGeneral3}, we get equations \eqref{eq:OutageUL1} and \eqref{eq:OutageUL2}, except for $ U^{(i)}_{\rm SI_ u}$, which is found by substituting $\tilde{\sigma}^2_{s}$ with its value given in \eqref{eq:SIu2} and then averaging over $h_s$ while conditioning on $r_o$. Similar steps are followed to find the outage in the DL direction. \bibliographystyle{IEEEtran}
\section{Introduction} The capability of aquatic animals to accurately perceive their environment plays a crucial role in their survival. Many fish species employ specialized organs to obtain visual, olfactory, and tactile cues from their environment \blue{ which} often complement each other. Predator-detection by fish using visual or olfactory cues~\citep{Hara1975,Ladich2003,Valentincic2004} is crucial for providing early-warning\blue{, since} mechanical disturbances may be imperceptible \blue{at large distances}. On the other hand, sensory organs specialized for detecting mechanical disturbances~\citep{Schwartz1974} take precedence when fish operate in deep or turbid waters, where visual and other sensory mechanisms may become ineffective. In these situations, the burden of collecting sensory information falls primarily on the `lateral line' organ in fish~\citep{Dijkgraaf1963,Kroese1992,Coombs1996,Coombs2005,Bleckmann2009}. These organs are comprised of hair-like mechanoreceptors called neuromasts (Figure~\ref{fig:fishNeuromasts}), which generate neuronal impulses when deflected by either the flow shear (superficial neuromasts - \cite{engelmann2000neurobiology}) or non-zero pressure gradients (sub-surface `canal' neuromasts - \cite{Bleckmann2009}). An array of such sensors allows fish to discern both \blue{the} direction and speed of disturbances generated in the\blue{ir} surrounding flow~\citep{Chambers2014,Asadnia2015}. 
\begin{figure} \centering \begin{subfigure}[b]{\textwidth} \centering \includegraphics[width=\textwidth]{lateral_line_cut.jpg} \subcaption{} \label{fig:lateralLineSystem} \end{subfigure} \\ \begin{subfigure}[b]{0.65\textwidth} \centering \includegraphics[width=\textwidth]{fish_sketch_annotated.pdf} \subcaption{} \label{fig:lateralLineSketch} \end{subfigure} \qquad \qquad \begin{subfigure}[b]{0.2\textwidth} \centering \includegraphics[width=\textwidth]{neuromastFig.png} \subcaption{} \label{fig:neuromastsSketch} \end{subfigure} \caption{(\subref{fig:lateralLineSystem}) The lateral line in juvenile zebrafish, with neuromasts visible as bright dots on the body surface (adapted with permission from \citet{Sapede2002}). We observe a high density of neuromasts in the head and the tail, with sparser distribution along the midsection. (\subref{fig:lateralLineSketch}) A schematic representation of the distribution of mechanoreceptors along the fish body. (\subref{fig:neuromastsSketch}) The neuromasts bend in response to flow, which generates \blue{a} neuronal response \blue{by} sensory cells located at the base (adapted with permission from \citet{Kottapalli2013}).} \label{fig:fishNeuromasts} \end{figure} \blue{These} flow \blue{sensors} are distributed in distinctive patterns on the body, with the \blue{canal neuromasts distributed evenly along the midline from head to tail~\citep{Ristroph2015},} and \blue{superficial} neuromasts found in dense clusters near the head \blue{and tail,} with a sparser distribution \blue{along the midsection (Figure~\ref{fig:fishNeuromasts})}. The fact that they are not distributed uniformly over the body, as well as differences \blue{in distribution} among species inhabiting different hydrodynamic environments~\citep{engelmann2000neurobiology, Bleckmann2009, atema1988sensory}, suggests that neuromast distribution may be optimized for characterizing hydrodynamic disturbances.
Experimental studies have demonstrated that a well-functioning lateral line is crucial for a range of routine behaviour, such as schooling~\citep{Pitcher1976,Partridge1980}, predator evasion~\citep{Blaxter1989}, prey detection/capture~\citep{Hoekstra1985}, reproduction~\citep{Satou1994}, rheotaxis~\citep{Dijkgraaf1963,Kanter2003}, obstacle avoidance~\citep{Hassan1989}, and station-keeping by countering the effects of unsteady gusts~\citep{Sutterlin1975}. Disrupting the normal functioning of the lateral line, either via chemical or mechanical means, hinders the fish's ability to perform these tasks effectively. \citet{Liao2006} demonstrated that disabling the lateral line system influences the fish's ability to harness energy from unsteady flows. The sensory system also plays a vital role in `hydrodynamic imaging', where fish devoid of visual cues swim past walls and unknown objects repeatedly to form a hydrodynamic `map' of their surroundings~\citep{Hassan1989,Coombs1999,Montgomery2001,Coombs2003}. Certain species such as the Blind Cave Fish, which have evolved degenerated sight, rely heavily on this technique for navigation, and for inferring the shape and size of unfamiliar objects~\citep{vonCampenhausen1981,Windsor2008,dePerera2004}. The lateral line system has inspired the design of artificial sensory arrays, given their potential to transform underwater navigation of robotic vehicles~\citep{Yang2006,Yang2010,Kottapalli2012,Jezov2012,Kruusmaa2014,Asadnia2015,Triantafyllou2016,Strokina2016,Kottapalli2018,Yen2018}. Such mechanoreceptors would be a vital addition to the already available suite of visual and acoustic sensors, with the added advantage of low energy consumption\blue{,} since they operate via passive mechanical deformation.
\blue{These} vibration-detecting sensors \blue{would} be crucial for navigation, detection, and tracking in low-light conditions, or in scenarios where the use of onboard lights or sonar is undesirable, either for maintaining stealth, or for minimally intrusive observation of animals. Current prototypes of such artificial sensors are based on arrays of pressure transducers~\citep{Fernandez2011,Venturelli2012,Xu2017}, and mechanically deforming hair-like structures~\citep{Yang2006,Tao2012,Abdulsadda2013,Dagamseh2013,deVries2015,Triantafyllou2016}. The importance of the lateral line as an essential sensory organ in fish, and its immense potential for driving the bio-inspired design of artificial sensors, has stimulated numerous experimental and model-based studies. The structure and function of these sensory arrays has been investigated via biological experiments, to characterize their response to pressure differences and object-induced vibrations in water~\citep{Gray1984,Denton1988,Kroese1992,Coombs1996,Blake2006}. Experiments using artificial fish models have tried to emulate these biological studies, using pressure-transducers and hair-like sensors to characterize the frequency and range of oscillating spheres \citep{Montgomery1998}, and Karman vortex streets \citep{Venturelli2012}. Moreover, there have been a number of mathematical model-based studies, that have combined potential-flow solutions with simplified representations of fish-swimming to study the functioning of the lateral line~\citep{Hassan1992,Franosch2009,Bouffanais2011,Ren2012,Colvert2016}. A few of these studies have attempted to infer the optimal arrangement of sensors on rigid objects exposed to various flow conditions. \citet{Colvert2016} determined the optimal placement of a single sensor-pair on an elliptical body, moving at different orientations in uniform flow. 
\citet{Ahrari2017} used simplified analytical representations to determine optimal sensor-arrangement and -orientation on a rigid hydrofoil, which could best characterize a dipole source with six degrees of freedom in three dimensions. While model-based studies provide important insight regarding sensing, they suffer from certain drawbacks owing to simplified hydrodynamics, and simplistic representations of fish-swimming (e.g., ellipses and rigid airfoils). Neglecting the effects of viscosity in potential-flow based studies is a notable disadvantage, especially when considering larvae swimming at relatively low Reynolds numbers \blue{(\textit{Re})}. Moreover, viscous effects play \blue{a substantial} role in the operation of the lateral line \citep{Triantafyllou2016}, given that superficial neuromasts are immersed in the fish's boundary layer, and canal neuromasts encounter low $Re$ flow inside constricted channels. The Reynolds number that animals operate at can also have a considerable impact on the functioning of the lateral line \citep{Webb2014}, which cannot be accounted for via inviscid assumptions. The importance of viscous effects has also been demonstrated by \citet{Rapo2009}, who studied the impact of an oscillating sphere on the boundary layer of a vibrating flat plate, albeit using analytical simplifications to circumvent the high computational cost of three-dimensional numerical simulations. \refOne{Recent studies using two-dimensional viscous computations have attempted to classify wake patterns behind an oscillating airfoil using Artificial Neural Networks~\citep{Colvert2018,Alsalman2018}.
Using flow sensors placed in the wake of the airfoil, they determine that both the spatial distribution of the sensors as well as the flow variable being measured influence the accuracy for predicting wake characteristics.} Here, we investigate the role of hydrodynamics in determining the sensor-distribution observed in fish, using two-dimensional Navier-Stokes simulations of self-propelled swimmers to overcome the limitations mentioned above. We determine the optimal spatial distribution of sensors via Bayesian optimal experimental design, and we find that the resulting patterns are closely related to sensory layouts found in natural swimmers. \section{Methods} \label{sec:methods} The present study relies on two-dimensional simulations of a self-propelled swimmer possessing shear stress and pressure gradient sensors on its surface. The swimmer is exposed to disturbances generated by cylinders located at various positions in the environment. The sensor locations are identified by formulating a Bayesian optimal experimental design with the goal of maximizing the information gain of the swimmer in its environment. \subsection{Numerical methods} \label{sec:numMeth} We conduct two-dimensional simulations of viscous flows past multiple bodies by discretizing the vorticity form of the incompressible Navier-Stokes equations, \begin{equation} \dfrac{\partial \omega}{\partial t} + (\bm{u}\cdot\nabla)\omega = \nu\nabla^2\omega + \lambda\nabla\times\left(\chi\left(\bm{u}_s-\bm{u}\right)\right) \, , \label{eq:penalNSvort} \end{equation} where $\bm{u}$ is the flow-velocity, and $\omega = \nabla \times {\bm{u}}$ is the vorticity. The penalty term, $\lambda\nabla\times\left(\chi\left(\bm{u}_s-\bm{u}\right)\right)$ models the interaction of objects with the surrounding fluid (\citet{coquerelle2008vortex}), where $0<\chi\le 1$ indicates the solid body. 
\blue{$\lambda$} is the penalization parameter, and $\bm{u}_s$ represents the combined translational, rotational, and deformational velocity of the solid object. The equations are discretized using remeshed vortex methods \citep{Koumoutsakos1995} and wavelet adapted grids \citep{Rossinelli2015}, and the penalty term is integrated via the fully implicit backward Euler method. Additional details for the computational methods may be found in \citet{Gazzola2011} and \citet{Rossinelli2015}. \blue{The simulation domain is a unit square, with an effective resolution of $4096^2$ grid points.} The fish length is $L= 0.2$ \blue{units}, with approximately 800 grid points along its mid-line. \subsection{Swimmer shape and kinematics} \label{sec:shapeAndKinematics} We consider two distinct scenarios for the swimmer behaviour to identify the optimal distribution of sensors: one where external disturbances are detected by a static fish-shaped body, and the other involving a self-propelled swimmer. Furthermore, we examine the influence of body geometry on optimal sensor distribution by considering two shapes for the swimmers modelled after zebrafish in their larval and adult stages. The larva shape, shown in Figure~\ref{fig:larvaShape}, is based on silhouettes extracted from experiments, whereas the adult fish is modelled using a geometric combination of circular arcs, lines, and parabolic sections (Figure~\ref{fig:adultShape}) \citep{Gazzola:2012}. Details regarding shape parametrization for both cases are provided in the Appendix (Eqs.~\ref{eq:adultShape} and~\ref{eq:larvaShape}). 
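The fully implicit treatment of the penalty term can be illustrated with a minimal velocity-space sketch. Note that the actual solver penalizes the vorticity equation and uses wavelet-adapted grids; the values below are purely illustrative. Solving $u^{n+1} = u^n + \lambda \Delta t \, \chi (u_s - u^{n+1})$ for $u^{n+1}$ gives a closed-form update:

```python
import numpy as np

# Minimal sketch of the implicit (backward-Euler) Brinkman penalization step:
#   u_new = u + lam*dt*chi*(u_s - u_new)  =>  u_new = (u + lam*dt*chi*u_s) / (1 + lam*dt*chi).
# All variable names and values are illustrative, not taken from the actual solver.
def penalize(u, u_s, chi, lam, dt):
    """Relax the fluid velocity toward the body velocity where chi = 1."""
    return (u + lam * dt * chi * u_s) / (1.0 + lam * dt * chi)

u = np.array([1.0, 1.0, 1.0])      # fluid velocity samples
u_s = np.array([0.0, 0.5, 0.0])    # solid-body velocity
chi = np.array([0.0, 1.0, 1.0])    # characteristic function of the body
u_new = penalize(u, u_s, chi, lam=1e6, dt=1e-3)
# Outside the body (chi = 0) the velocity is untouched; inside, it is driven
# toward u_s regardless of the time step.
```

The implicit form remains stable for arbitrarily large $\lambda$, which is why it is preferred over explicit integration of the stiff penalty term.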
\begin{figure} \centering \begin{subfigure}[b]{0.49\textwidth} \centering \includegraphics[width=\textwidth]{snapshots/larvaShot.pdf} \subcaption{} \label{fig:larvaShape} \end{subfigure} \begin{subfigure}[b]{0.49\textwidth} \centering \includegraphics[width=\textwidth]{snapshots/adultShot.pdf} \subcaption{} \label{fig:adultShape} \end{subfigure} \caption{(\subref{fig:larvaShape}) A larva-shaped swimmer detecting disturbances generated by a rotating cylinder (angular velocity = \blue{10 rotations/s}) . Regions \blue{with} positive vorticity are coloured in red, and those with negative vorticity are coloured in blue. (\subref{fig:adultShape}) An adult-shaped swimmer detecting an oscillating cylinder (\blue{amplitude=$0.075L$,} frequency = $10Hz$). Animations for these two cases are shown in \blue{supplementary} Movie 1{} and Movie 2{}.} \label{fig:showShape} \end{figure} The swimmers propel themselves by imposing a sinusoidal wave travelling along the body. Details of the swimming kinematics are also provided in Appendix~\ref{app:shapeKinematics}. \subsection{Disturbance-generation and detection} The sensory cues detected by \blue{the} rigid and swimming bodies \blue{described in section~\ref{sec:shapeAndKinematics}} are generated using oscillating and rotating cylinders of diameter $D=0.25L$ (Figure~\ref{fig:showShape}), and a D-shaped half cylinder of diameter $0.5L$. The amplitude and frequency of the horizontally oscillating cylinders are set to $A_{cyl} = 0.075L$ and $f_{cyl} = 10Hz$, whereas the angular velocity of the rotating cylinders is set to $20\pi \ rad/s$ (10 rotations/s). The cylinders are placed at various locations within a prescribed region in the computational domain (in the `prior-region'), as shown in Figure~\ref{fig:priorRegion}. 
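The sinusoidal travelling wave imposed along the body can be sketched as follows. The actual kinematics are given in Appendix~\ref{app:shapeKinematics}; the amplitude envelope $A(s)$ used here is hypothetical, chosen only to illustrate a wave whose amplitude grows toward the tail:

```python
import numpy as np

# Illustrative parametrization of a sinusoidal travelling wave along the
# swimmer's midline (the actual kinematics and envelope are in the Appendix).
L, T = 0.2, 1.0                      # body length and tail-beat period
def midline_deflection(s, t, A0=0.1, wavelength=1.0):
    A = A0 * L * (0.1 + 0.9 * s / L)           # amplitude grows toward the tail (assumed)
    return A * np.sin(2.0 * np.pi * (t / T - s / (wavelength * L)))

s = np.linspace(0.0, L, 100)
y0 = midline_deflection(s, 0.0)
y_period = midline_deflection(s, T)  # identical to y0: the gait is T-periodic
```

The lateral deflection repeats with period $T$, so measurements taken at a fixed phase of the gait (as done in the following section) are directly comparable across cases.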
\begin{figure} \centerline{\includegraphics[width=0.75\textwidth]{priorRegion.pdf}} \caption{Setup used for determining the optimal sensor distribution on a fish-like body swimming past a cylinder located within the rectangular area, which is referred to as the `prior-region'. The sensor-placement algorithm attempts to find the best arrangement of sensors that allows the swimmer to identify the correct cylinder position with minimal uncertainty.} \label{fig:priorRegion} \end{figure} We distinguish two types of sensors on the swimmer body. Shear stress sensors estimate the local shear stress by measuring the tangential flow velocity in the reference-frame of the swimmer at 2 grid cells away from the body, corresponding to a physical distance of $0.0024L$. These \blue{sensors} are analogous to superficial neuromasts in fish that protrude into the boundary layer and measure tangential velocity \citep{Kroese1992,Bleckmann2009,Asadnia2015}. In addition, we consider pressure gradient sensors that correspond to canal neuromasts observed in natural swimmers. We compute pressure gradient along the swimmers' surface by fitting a least-squares cubic spline to surface pressure, in order to minimize derivative noise. \refOne{We note that the sensor-measurements are scalar quantities, since both the shear stress and pressure gradient are projected along the surface tangent vector at each measurement location on the body. This is done so that the measured scalar quantities are a close representation of the flow-induced forces that deflect the hair-like sensory structures in real fish.} In the case of self-propelled swimmers, measurements are taken towards the end of a coasting phase to allow self-generated disturbances to subside sufficiently, and are averaged over a small time window from $15.750T$ to $15.875T$. For motionless larvae, the disturbance-sources start moving at $0$s, and time-averaging of the recorded data is done between $0.95$s and $1.0$s. 
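The least-squares cubic-spline fit used to suppress derivative noise in the pressure-gradient estimate can be sketched as follows, using a synthetic pressure signal. The signal, noise level, and smoothing factor are illustrative, not the values used in the simulations; \texttt{scipy} is assumed to be available:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Sketch of the pressure-gradient estimate: fit a least-squares (smoothing)
# cubic spline to noisy surface-pressure samples and differentiate the spline.
rng = np.random.default_rng(0)
arc = np.linspace(0.0, 1.0, 200)                 # arc-length coordinate along the body
p_true = np.sin(2.0 * np.pi * arc)               # synthetic "surface pressure"
p_noisy = p_true + 0.01 * rng.standard_normal(arc.size)

# s = n * sigma^2 is a standard choice of smoothing factor for noise level sigma.
spline = UnivariateSpline(arc, p_noisy, k=3, s=arc.size * 0.01 ** 2)
dp_ds = spline.derivative()(arc)                 # smoothed pressure gradient
dp_exact = 2.0 * np.pi * np.cos(2.0 * np.pi * arc)
```

Differentiating the smoothing spline avoids the large amplification of point-wise noise that direct finite differences of \texttt{p\_noisy} would produce.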
This allows transients from the initial cylinder start-up to dissipate sufficiently. Time averaging for the D-cylinder simulations is done from $18$s to $20$s, which allows adequate time for vortex shedding to exhibit a periodically repeating pattern. These measurements are then used to determine the optimal arrangement of sensors on the swimmer body, via the Bayesian optimal experimental design algorithm described in section~\ref{sec:optimalSensor}. \subsection{Bayesian optimal sensor placement} \label{sec:optimalSensor} \subsubsection{Bayesian estimation of disturbance location} We consider a disturbance-generating source (for example, an oscillating or rotating cylinder) located at coordinates $\cylinderPos=\left(x, y\right)$ in the region shown in Figure~\ref{fig:priorRegion}. The uncertainty in the coordinates of the cylinder is quantified by a probability distribution that is updated based on measurements collected on the surface of the swimmers. The cylinder location can be detected provided that disturbances induced by the cylinder in the surrounding fluid are detected by sensors located on the swimmer surface. The problem of optimal placement is to identify the configuration of sensors that provides the best estimate for the coordinates of the cylinder ($\cylinderPos$). We assume that the sensors are placed symmetrically on both sides of the two-dimensional swimmer, and are described by a vector $\bm{s}\in R^{n}$ that holds the mid-line coordinate of each sensor pair, with values in $[0,L]$ (see Figure 3). The shear stress or the pressure gradient is measured at the surface points corresponding to the positions $\sensors$, and the measurements are listed in a vector $\bm{y}\in R^{2n}$. We denote by $\Signal( \cylinderPos; \sensors)$ the predictions of shear stress/pressure gradient at sensor locations $\sensors$, obtained by solving the Navier-Stokes equations with a disturbance-generating source located at $\cylinderPos$.
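The least-squares spline procedure used above to obtain the pressure gradient can be illustrated with a short sketch. This is a minimal illustration rather than the solver's actual implementation: the function name, the smoothing parameter, and the synthetic pressure profile are assumptions made here for demonstration.

```python
import numpy as np
from scipy.interpolate import splev, splrep

def surface_pressure_gradient(arc_len, pressure, smooth):
    """Tangential pressure gradient along the body surface.

    Fits a smoothing (least-squares) cubic spline to the surface
    pressure and differentiates the spline analytically, which avoids
    the noise amplification of direct finite differences.
    """
    tck = splrep(arc_len, pressure, k=3, s=smooth)  # cubic smoothing spline
    return splev(arc_len, tck, der=1)               # dp/ds at each point

# Synthetic noisy pressure samples along a body of unit arc length
# (an illustrative stand-in for surface pressure from the flow solver).
s = np.linspace(0.0, 1.0, 200)
rng = np.random.default_rng(0)
p = np.sin(2 * np.pi * s) + 1e-3 * rng.normal(size=s.size)
dpds = surface_pressure_gradient(s, p, smooth=1e-3)
```

Differentiating the fitted spline closely recovers the underlying gradient $2\pi\cos(2\pi s)$ in the interior of the interval, whereas differentiating the noisy samples directly would be dominated by the noise.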
Moreover, we assume that we have prior knowledge about the parameter $\cylinderPos$, encoded in a \emph{prior} probability distribution $p(\cylinderPos)$. After observing the measurements $\Measured$ from sensors $\sensors$, we use Bayesian inference to update our prior belief for the plausible values of parameter $\cylinderPos$, by identifying the \emph{posterior} probability distribution $p(\cylinderPos | \Measured ,\sensors)$. Following Bayes' rule, the posterior distribution $p(\cylinderPos | \Measured ,\sensors)$ of the model parameters is proportional to the product of the prior distribution $p(\cylinderPos)$ and the \emph{likelihood} $p(\Measured | \cylinderPos,\sensors)$. The likelihood function represents the probability that a particular measurement $\Measured$ for a given sensor arrangement $\sensors$ originates from the disturbance-source located at $\cylinderPos$. We define the prediction error, $\bm{\varepsilon}(\sensors)$, as the difference between the measurements $\Measured$ and the predictions $\Signal( \cylinderPos; \sensors)$, such that: \begin{equation} \Measured = \Signal( \cylinderPos; \sensors) + \bm{\varepsilon}(\sensors) \, . \label{eq:measurement} \end{equation} The prediction error term ($\bm{\varepsilon}(\sensors)$) represents errors that can be attributed to measurement- and model-errors, as well as numerical errors due to the spatio-temporal discretization of the Navier-Stokes equations. Following the maximum entropy criterion, we assume that the prediction error $\bm{\varepsilon}(\sensors)$ follows a multivariate Gaussian distribution $\mathcal{N}(0, \Sigma(\sensors))$ with zero mean and covariance matrix $\Sigma(\sensors) \in R^{2n \times 2n}$.
The likelihood function $p(\Measured \vert \cylinderPos,\sensors)$ is then expressed as: \begin{align} p\left(\Measured \vert \cylinderPos, \sensors\right) &= \dfrac{1}{\sqrt{(2\pi)^{2n} \det(\bm{\Sigma}(\sensors))}} \exp \left(-\dfrac{1}{2} \left(\Measured - \Signal( \cylinderPos; \sensors) \right)^T \bm{\Sigma}^{-1}(\sensors) \left(\Measured - \Signal( \cylinderPos; \sensors) \right) \right) \, . \label{eq:likelihood} \end{align} \subsubsection{Optimal sensor placement based on information gain} The goal of the \emph{optimal sensor placement} problem is to find the locations $\sensors$ of the sensors such that the data measured in these locations are most informative for estimating the position $\cylinderPos$ of the disturbance. A measure of information gain is provided by the Kullback-Leibler (KL) divergence between the prior and the posterior distribution. We postulate that the optimal sensor configuration maximizes a utility function that represents the information gain, or equivalently, the Kullback-Leibler divergence defined as: \begin{equation} u(\sensors,\Measured) := \int_{\mathcal{R}} p( \cylinderPos | \Measured,\sensors) \; \ln \frac{p(\cylinderPos | \Measured,\sensors)}{p(\cylinderPos)} \dd{\cylinderPos} \, . \label{eq:utilityFunction} \end{equation} We note that in the experimental design phase, the measurements $\Measured$ are not available. Thus, the prediction error model (Eq.~\ref{eq:measurement}) is used to generate measurements for given model parameter values $\cylinderPos$ and sensor configuration $\sensors$. 
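For a given source location and sensor configuration, the Gaussian likelihood of Eq.~\ref{eq:likelihood} can be evaluated numerically as sketched below. Evaluating it in log-space via a Cholesky factorisation is a standard numerical choice made here for stability, not a detail prescribed by the text, and the function name is ours.

```python
import numpy as np

def log_likelihood(y_meas, y_pred, cov):
    """Logarithm of the Gaussian likelihood p(y | r, s).

    y_meas : measured signal at the 2n sensors
    y_pred : model prediction F(r; s) for a candidate source location r
    cov    : (2n x 2n) prediction-error covariance Sigma(s)
    """
    resid = y_meas - y_pred
    # A Cholesky factorisation yields both the log-determinant and the
    # quadratic form without explicitly inverting Sigma.
    chol = np.linalg.cholesky(cov)
    alpha = np.linalg.solve(chol, resid)
    log_det = 2.0 * np.sum(np.log(np.diag(chol)))
    k = resid.size
    return -0.5 * (k * np.log(2.0 * np.pi) + log_det + alpha @ alpha)
```

Working with the log-likelihood also avoids numerical underflow when the residuals are large, which occurs for candidate source locations far from the true one.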
We identify the best sensor arrangement by maximizing a utility function, defined as the expected value of the Kullback-Leibler divergence over all possible values of the measurements simulated by Eq.~\ref{eq:measurement}~\citep{Ryan2003}: \begin{equation} \begin{split} U(\DES) := \mathbb{E}_{\Measured | \sensors} \big [ u(\sensors,\Measured) \big ] &= \int_{\mathcal{Y}} u(\sensors,\Measured)\; p(\Measured | \sensors) \dd{\Measured} \\ &= \int_{\mathcal{Y}} \int_{\mathcal{R}} p( \cylinderPos | \Measured,\sensors) \; \ln \frac{p(\cylinderPos | \Measured,\sensors)}{p(\cylinderPos)} \; p(\Measured | \sensors) \dd{\cylinderPos} \dd{\Measured} \, . \label{eq:utility} \end{split} \end{equation} The expected utility function involves a double integral over the parameter space $\cylinderPos$ and over the measured data $\Measured$. An efficient estimator of this double integral using sampling techniques is provided by \citet{huan2013simulation}. A similar estimator is used in the present work, \begin{equation} \hat U(\sensors) = \frac{1}{N_{\DATA}} \sum_{j=1}^{N_{\DATA}} \sum_{i=1}^{N_{\cylinderPos}} w_i p(\cylinderPos^{(i)}) \left [ \ln p(\Measured^{(i,j)} | \cylinderPos^{(i)},\sensors) - \ln \left( \sum_{k=1}^{N_{\cylinderPos}} w_k p(\cylinderPos^{(k)}) p(\Measured^{(i,j)} | \cylinderPos^{(k)},\sensors) \right ) \right ] \, . \label{eq:estimator} \end{equation} A detailed derivation and discussion of the estimator are provided in Appendix~\ref{sec:utilityDerivation}. Our estimator employs a quadrature technique to evaluate the integral over the two-dimensional parameter space $\cylinderPos$. In Eq.~\ref{eq:estimator}, $\cylinderPos^{(i)}$ and $w_i$ denote $N_{\cylinderPos}$ quadrature points and corresponding weights related to the discretization of the two-dimensional prior-region \blue{(Figure~\ref{fig:priorRegion})}.
A total of $N_{\cylinderPos}$ distinct Navier-Stokes simulations are conducted, with a cylinder positioned at \blue{various} discrete points $\cylinderPos^{(i)}$, and the quadrature is evaluated using the trapezoidal rule. Based on the prediction error defined in Eq.~\ref{eq:measurement}, the measured data $\Measured^{(i,j)}$ in Eq.~\ref{eq:estimator} are given by, \begin{equation} \Measured^{(i,j)} = \Signal( \cylinderPos^{(i)}; \sensors) + \bm{\varepsilon}^{(j)} \, , \label{eq:measuredDiscrete} \end{equation} where $\bm{\varepsilon}^{(j)},\, \blue{\text{ with }} j=1,\ldots,N_{\Measured}$, are vectors sampled from the distribution $\mathcal{N}(0, \Sigma(\sensors))$. $N_{\Measured}$ is set to $100$ in the current work\blue{, which results in a smoother estimate of $\hat U(\sensors)$ in Eq.~\ref{eq:estimator}.} We note that the computational effort for evaluating \blue{$\hat U(\sensors)$ in Eq.~\ref{eq:estimator}} depends primarily on the number of Navier-Stokes simulations, $N_{\cylinderPos}$, which are required to evaluate $\Signal(\cylinderPos^{(i)}, \sensors)$ for different disturbance locations $\cylinderPos^{(i)}$, and \blue{subsequently to} determine $\Measured^{(i,j)}$ using Eq.~\ref{eq:measuredDiscrete}. The computational burden does not depend on the number of measured samples $N_{\Measured}$, since there are no additional time-consuming simulations involved in generating $\bm{\varepsilon}^{(j)}$. Thus, the computational effort scales linearly with the number $N_{\cylinderPos}$ of model parameter points $\cylinderPos^{(i)}$. We assume that the prior distribution $p(\cylinderPos)$ for the location of the disturbance source is uniform over the prior-region shown in Figure~\ref{fig:priorRegion}, i.e., the probability of finding the source is constant for all locations. Moreover, the only available information we have is a description of the prior-region where the disturbance may be found.
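A direct transcription of the estimator in Eq.~\ref{eq:estimator}, with synthetic data generated according to Eq.~\ref{eq:measuredDiscrete}, is sketched below. The function and variable names are ours; for brevity, the quadrature weights $w_i$ and prior values $p(\cylinderPos^{(i)})$ are merged into a single weight array, and the stored sensor predictions stand in for the precomputed Navier-Stokes results.

```python
import numpy as np

def expected_utility(signals, weights, cov, n_eps=100, rng=None):
    """Sampling/quadrature estimator of the expected utility.

    signals : (N_r, 2n) predictions F(r_i; s), one row per source location
    weights : (N_r,) quadrature weights w_i multiplied by the prior p(r_i)
    cov     : (2n, 2n) prediction-error covariance Sigma(s)
    n_eps   : number of sampled error vectors eps^(j)
    """
    rng = np.random.default_rng() if rng is None else rng
    n_r, dim = signals.shape
    cov_inv = np.linalg.inv(cov)
    log_norm = -0.5 * (dim * np.log(2.0 * np.pi) + np.linalg.slogdet(cov)[1])

    def log_lik(y, f):
        r = y - f
        return log_norm - 0.5 * (r @ cov_inv @ r)

    total = 0.0
    for _ in range(n_eps):
        eps = rng.multivariate_normal(np.zeros(dim), cov)  # one eps^(j)
        for i in range(n_r):
            y_ij = signals[i] + eps  # synthetic measurement for source r_i
            ll = np.array([log_lik(y_ij, signals[k]) for k in range(n_r)])
            evidence = np.sum(weights * np.exp(ll))  # inner sum over r^(k)
            total += weights[i] * (ll[i] - np.log(evidence))
    return total / n_eps
```

For well-separated signals and a tight error covariance, the estimator approaches the entropy of the discretized uniform prior, $\ln N_{\cylinderPos}$, which is the maximum achievable information gain.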
Using Bayes' theorem, and the fact that the prior distribution is uniform, we can assert that the posterior distribution of a disturbance location $\cylinderPos$, $p(\cylinderPos \vert \Measured,\sensors)$, is proportional to the likelihood function $p(\Measured \vert \cylinderPos, \sensors)$. The covariance matrix $\Sigma(\sensors)$ depends primarily on the sensor positions $\sensors$, and is diagonal if the errors at the given sensor positions are independent of each other. In the current work, the prediction errors are assumed to be correlated for measurements collected on the same side of the swimmer (i.e., left- or right-lateral surfaces), and decorrelated if the measurements originate from opposite sides. An exponentially decaying correlation is assumed for the covariance matrix, \begin{equation} \Sigma_{ij}(\sensors) = \begin{cases} \sigma^2 \exp \left( - \frac{ \| \bm{x}(s_i) - \bm{x}(s_j) \| }{\CorLen}\right), & \quad \textrm{ if } 1\leq i,j \leq n, \\ \Sigma_{i-n,j-n}(\sensors), & \quad \textrm{ if } n < i,j \leq 2n, \\ 0 & \quad \textrm{ otherwise,} \end{cases} \label{eq:covarianceMatrix} \end{equation} where $\bm{x}(s_i)$ corresponds to the coordinates of the $i$-th sensor on the right lateral surface of the swimmer, $\CorLen>0$ is the prescribed correlation length, and $\sigma$ is the correlation strength. For all the simulations described in this work, the correlation length is set to be $\CorLen=0.01L$. The correlation strength $\sigma$ is a fixed percentage ($30\%$) of the mean sensor-measurement, which is computed over all available instances of $\cylinderPos$ and at all points discretizing the swimmer skin. This form of the correlation error reduces the information-gain when sensors are placed too close together \citep{papadimitriou2012effect, simoen2013prediction}, and prevents excessive clustering of sensors within confined neighbourhoods. Finally, we provide an intuitive interpretation of how Eq.~\ref{eq:estimator} relates to information gain. 
Let us assume that a particular set of sensors is able to characterize the disturbance sources quite effectively. Moreover, we assume that the measurement $\Measured^{(i,j)}$ has been generated by a disturbance located at $\cylinderPos^{(i)}$. This implies that the posterior $p(\cylinderPos \vert \Measured^{(i,j)},\sensors)$, which indicates the probability that a particular disturbance source $\cylinderPos$ has generated the measurement $\Measured^{(i,j)}$, is peaked and centered around the true source location $\cylinderPos^{(i)}$. Since the prior distribution is uniform, the likelihood $p(\Measured^{(i,j)} \vert \cylinderPos,\sensors)$ is proportional to the posterior, and is also peaked and centered around $\cylinderPos^{(i)}$. Thus, the first term in Eq.~\ref{eq:estimator} is large, whereas most of the terms in the second sum are close to zero, since the probability of measurement $\Measured^{(i,j)}$ originating from source $\cylinderPos^{(k)}$ is small due to the peaked nature of the posterior (except for $k=i$). In this case, the expected utility value computed using Eq.~\ref{eq:estimator} is large. On the other hand, a poor sensor arrangement, which cannot characterize source positions well, yields flatter likelihood and posterior distributions due to high uncertainty. Thus, different source positions yield similar measurements at the selected sensors, which makes the second sum in Eq.~\ref{eq:estimator} larger (non-zero $p(\Measured^{(i,j)} \vert \cylinderPos^{(k)},\sensors)$ even for $k\ne i$), thereby reducing the utility value. \subsubsection{Optimization of the expected utility function} \label{sec:sequentialOpt} The optimal sensor arrangement is obtained by maximizing the expected utility estimator $\hat U(\sensors)$ described in Eq.~\ref{eq:estimator}. However, optimal sensor placement problems are characterized by a relatively large number of local optima.
Heuristic approaches, such as the sequential sensor placement algorithm described by \citet{Papadimitriou2004}, have been demonstrated to be effective alternatives. In this approach, the optimization is carried out iteratively, one sensor at a time. First, $\hat U(\sensors)$ is \blue{computed} for a single sensor\blue{-pair} $\sensors=s_1$, and the optimal solution $s_1^\star$ is obtained by identifying the maximum in $\hat U(\sensors)$. Then, $\hat U(\sensors)$ is recomputed with $\sensors=(s_1^\star,s_2)$, and it is optimized with respect to the second sensor\blue{-pair,} resulting in an optimal solution $s_2^\star$. We can generalize this procedure for all subsequent sensors, by defining $\hat{U}_i(s) = \hat{U} (s_1^\star,\ldots,s_{i-1}^\star,s)$. The optimal solution for the $i$-th sensor is given as, \begin{equation} s_i^\star = \argmax_{s} \; \hat{U}_i (s) \qquad \textrm{and} \qquad \hat{U}_i^\star = \max_{s} \; \hat{U}_i (s) \, . \label{eq:optimal:sensor} \end{equation} We note that the scalar variable $s$ denotes the position of a single sensor\blue{-pair}, whereas the vector $\sensors$ holds the position of all sensor\blue{-pairs} along the swimmer's midline. The sequential placement procedure is carried out for a number of sensors, $N_s$, and it terminates when the last sensor in the optimal configuration is identified $\sensors^\star=(s_1^\star,\ldots,s_{N_s}^\star)$. \citet{Papadimitriou2004} has demonstrated that the heuristic sequential sensor placement algorithm provides a sufficiently accurate approximation of the global optimum. Moreover, using the sequential optimization approach, $N_s$ one-dimensional problems have to be solved, instead of one $N_s$-dimensional problem. We solve each one-dimensional problem of identifying the maximum of $\hat{U}_i$ via a grid search, where the swimmer midline is discretized using the points $\{ \, k \Delta s, \; k=0,\ldots,N_g \, \} $, with $\Delta s=L/N_g$ and $N_g=1000$. 
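The sequential placement loop, together with the exponentially decaying covariance of Eq.~\ref{eq:covarianceMatrix}, can be sketched as follows. The helper names are ours; for brevity, the covariance block uses distances between midline coordinates for a single lateral side rather than the full surface coordinates, and the utility callable stands in for $\hat U$ evaluated on the stored flow data.

```python
import numpy as np

def correlated_cov(coords, sigma, corr_len=0.01):
    """Exponentially decaying covariance block for one lateral side.

    Simplification: uses midline-coordinate distances in place of the
    surface-coordinate distances used in the text.
    """
    dist = np.abs(coords[:, None] - coords[None, :])
    return sigma**2 * np.exp(-dist / corr_len)

def sequential_placement(n_sensors, grid, utility):
    """Greedy, one-sensor-at-a-time maximization of the expected utility.

    grid    : candidate sensor positions along the midline (N_g + 1 points)
    utility : callable returning the expected utility of a configuration,
              i.e. a stand-in for the estimator evaluated on stored data
    """
    placed = []
    for _ in range(n_sensors):
        # Grid search for the next sensor, keeping earlier sensors fixed.
        scores = [utility(tuple(placed) + (s,)) for s in grid]
        placed.append(float(grid[int(np.argmax(scores))]))
    return placed
```

Because the covariance penalizes nearby sensors through the correlation length, the greedy search naturally spreads sensors apart rather than clustering them at a single high-utility location.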
Thus, for each iteration of sequential optimization, the utility estimator in Eq.~\ref{eq:estimator} has to be evaluated $N_g+1$ times. We remark that the Bayesian optimal design procedure is computationally demanding, as it entails model simulations for several different sensor configurations $\sensors$. To minimize the relevant computational cost, we run $N_{\cylinderPos}$ distinct Navier-Stokes simulations for all disturbance locations $\cylinderPos^{(i)}$ ($i=1,\ldots,N_{\cylinderPos}$), and store the shear stress \blue{and pressure gradient} at all available discretization points along the swimmer skin offline. This allows us to reuse simulation data for a particular disturbance-source, without having to re-run Navier-Stokes simulations for different sensor configurations. We note that the skin discretization may not correspond to the $N_g$ points used for computing $\hat{U}_i^\star$. Thus, the output quantities of interest are averaged at appropriate locations along the swimmer surface, over a small neighbourhood of size $0.01L$. \section{Results} \label{sec:results} We first examine the optimal arrangement of \blue{shear stress and pressure gradient} sensors on a motionless larva in the presence of oscillating, rotating and D-shaped cylinders. We then consider self-propelled swimmers, which are exposed to cylinder-generated disturbances. \subsection{Stationary swimmer in the vicinity of oscillating/rotating cylinders} We first consider the setup of a stationary larva-shaped swimmer and a cylinder that either oscillates parallel to the `anteroposterior' axis of the body, or rotates with a constant angular velocity. The oscillating-cylinder setup is shown in Figure~\ref{fig:snapshotsOscillating}, and depicts the vorticity generated by the cylinder along the larva's body.
\begin{figure} \centering \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=\textwidth]{snapshots/horizStaticSnaps/horiz_snap_0090.pdf} \subcaption{} \label{fig:horizSnap1} \end{subfigure} \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=\textwidth]{snapshots/horizStaticSnaps/horiz_snap_0092.pdf} \subcaption{} \label{fig:horizSnap2} \end{subfigure} \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=\textwidth]{snapshots/horizStaticSnaps/horiz_snap_0094.pdf} \subcaption{} \label{fig:horizSnap3} \end{subfigure} \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=\textwidth]{snapshots/horizStaticSnaps/horiz_snap_0096.pdf} \subcaption{} \label{fig:horizSnap4} \end{subfigure} \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=\textwidth]{snapshots/horizStaticSnaps/horiz_snap_0098.pdf} \subcaption{} \label{fig:horizSnap5} \end{subfigure} \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=\textwidth]{snapshots/horizStaticSnaps/horiz_snap_0100.pdf} \subcaption{} \label{fig:horizSnap6} \end{subfigure} \caption{Snapshots of the vorticity field around a static larva profile in the presence of a horizontally oscillating cylinder. The snapshots are taken at regular intervals over a single oscillation period, with positive vorticity shown in red and negative vorticity shown in blue. A corresponding animation is shown in \blue{supplementary} Movie 3{}.} \label{fig:snapshotsOscillating} \end{figure} These two setups allow us to analyze mechanical cues (i.e., vibrations in the flow field) without interference from a self-generated boundary layer, which has a tendency to obscure external signals in the case of towed and self-propelled bodies. The \blue{simulation domain extends from $[0,1]$ in $x$ and $y$, and the} rectangular prior-region in both the setups corresponds to $\cylinderPos_{min}=(0.357, 0.375)$ and $\cylinderPos_{max}=(0.7, 0.47)$. 
A total of $11\times 37 = 407$ potential $\cylinderPos^{(i)}$ locations are distributed uniformly throughout the region, and the static object's center of mass \blue{is} located at $(0.5, 0.3)$. \blue{The kinematic viscosity is set to $\nu=1e\text{-}4$ in these simulations.} \subsubsection{The utility function, and sensor placement} The optimal distribution of sensors along the larva's body can be determined using the estimator $\hat U(\sensors)$ defined in Eq.~\ref{eq:estimator}. Higher utility values indicate that measurements taken at the corresponding locations are more informative. More specifically, the utility at location $s$ is high if a sensor placed there can more effectively differentiate between signals originating from distinct cylinder \blue{locations}. The utility curves computed from signals generated by the oscillating and rotating cylinders are shown in Figures~\ref{fig:utility1dStaticHoriz} and~\ref{fig:utility1dStaticRot}. \begin{figure} \centering \begin{subfigure}[b]{0.49\textwidth} \centering \includegraphics[width=\textwidth]{static_horiz_utility.pdf} \subcaption{} \label{fig:utility1dStaticHoriz} \end{subfigure} \begin{subfigure}[b]{0.49\textwidth} \centering \includegraphics[width=\textwidth]{static_rot_utility.pdf} \subcaption{} \label{fig:utility1dStaticRot} \end{subfigure} \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=\textwidth]{staticOscill_std_U.png} \subcaption{} \label{fig:stdU_staticHoriz} \end{subfigure} \quad \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=\textwidth]{staticRot_tighter_std_U.png} \subcaption{} \label{fig:stdU_staticRot} \end{subfigure} \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=\textwidth]{staticOscill_std_V.png} \subcaption{} \label{fig:stdV_staticHoriz} \end{subfigure} \quad \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=\textwidth]{staticRot_tighter_std_V.png} \subcaption{} \label{fig:stdV_staticRot} 
\end{subfigure} \begin{subfigure}[b]{\textwidth} \centering \includegraphics[]{static_sensors.pdf} \subcaption{} \label{fig:staticSensors} \end{subfigure} \\ \caption{Utility plots for a stationary, larva-shaped body with (\subref{fig:utility1dStaticHoriz}) oscillating and (\subref{fig:utility1dStaticRot}) rotating cylinders. The curves indicate the utility for placing the first \blue{shear stress} sensor at a given location $s$. The utility curves were not computed in the region $0.95<s/L\le1$, to avoid potential numerical issues resulting from sharp corners at the tail. (\subref{fig:stdU_staticHoriz},\subref{fig:stdV_staticHoriz}) Standard deviation of horizontal and vertical velocity caused by oscillating cylinders, with larger deviation shown in yellow and lower values shown in black. The standard deviation was computed across 9 distinct simulations (6 time-snapshots recorded in each simulation), with a single oscillating cylinder placed at 9 locations uniformly in the prior-region. (\subref{fig:stdU_staticRot}, \subref{fig:stdV_staticRot}) Standard deviation of velocity components for the rotating cylinders. (\subref{fig:staticSensors}) Optimal sensor distribution determined using sequential placement. Sensors for detecting oscillating cylinders are shown as black squares, whereas those for detecting rotating cylinders are shown as blue circles. The numbering indicates the \blue{sequence determined by the optimal placement algorithm.}} \label{fig:staticLarva} \end{figure} The maxima in these curves suggest that the best location for detecting both oscillating and rotating cylinders\blue{, using shear stress sensors,} is at $s/L=0.033$. We remark that this location involves a notable change in body-surface curvature, as can be discerned from the swimmer-silhouettes shown in the figures. 
We postulate that the best sensor positions are those that are exposed to large variations in the quantity of interest, namely the shear stress \blue{ or pressure gradient}, since this would allow the sensors to distinguish between different disturbance sources more readily. We confirm that this is indeed the case, by visualizing the standard deviation of velocity components in regions surrounding the larva, in Figures~\ref{fig:stdU_staticHoriz} to~\ref{fig:stdV_staticRot}. The standard deviation measures the variation among simulations when cylinders are placed at different positions in the prior-region shown in Figure~\ref{fig:priorRegion}. \refOne{The colour scales are identical for panels~\ref{fig:stdU_staticHoriz} and~\ref{fig:stdV_staticHoriz} (the oscillating cylinder scenario), but different from the colour scales in panels~\ref{fig:stdU_staticRot} and~\ref{fig:stdV_staticRot} (the rotating cylinder scenario). The colour scales in~\ref{fig:stdU_staticRot} and~\ref{fig:stdV_staticRot} have been saturated by approximately 30 times, so that weaker flow disturbances created by the rotating cylinders are adequately visible.} We observe from Figures~\ref{fig:stdU_staticHoriz} and~\ref{fig:stdV_staticHoriz} that changing the position of an oscillating cylinder gives rise to significant differences in the tangential velocity (shear stress) close to the head and the tail. This implies that signals measured by sensors in these regions differ markedly from one simulation to the other, which \blue{arguably would} make it easier to estimate the position of \blue{a particular} cylinder. A large variation in horizontal velocity $u$ occurs close to a change in body curvature at $s/L\approx0.033$, which also corresponds to the global maximum in $\hat{U}_1(s)$ (Figure~\ref{fig:utility1dStaticHoriz}). The utility curve exhibits consistently high values for $s/L\le0.15$, which results from large variations in $u$ and $v$ in regions surrounding the head.
We note that large variations in the lateral velocity $v$ occur primarily at the head- and tail-tip \blue{(Figure~\ref{fig:stdV_staticHoriz})}, with almost no variation along the midsection ($0.2<s/L\le1$). This can be attributed to $v$ being almost zero in these regions (across all simulations), owing to negligible recirculation along these relatively straight body sections. The large variation in $v$ at the head/tail tip may be explained by the flow turning at the corners, as is evident from the time-series snapshots shown in Figure~\ref{fig:snapshotsOscillating}. We note that while $u$ appears to exhibit large deviation around the midsection ($0.4\le s/L \le 0.6$ in Figure~\ref{fig:stdU_staticHoriz}), the utility curve in Figure~\ref{fig:utility1dStaticHoriz} does not show a corresponding spike. This may be related to the fact that the standard deviation plots were compiled using a small subset of 9 cylinder-locations out of the 407 used for the utility plot. Furthermore, a close inspection of Figure~\ref{fig:stdU_staticHoriz} indicates that these large deviations in $u$ near the midsection occur beyond the detection range of the sensors, i.e., too far away to be picked up by microscopic neuromasts that are $0.0024L$ in length. As in the case of oscillating cylinders, the standard deviation plots for rotating cylinders in Figures~\ref{fig:stdU_staticRot} and \ref{fig:stdV_staticRot} can be correlated to the utility curve in Figure~\ref{fig:utility1dStaticRot}; high utility values ($s/L \le 0.15$, Figure~\ref{fig:utility1dStaticRot}) correspond to large deviations in both $u$ and $v$ near the head (Figures~\ref{fig:stdU_staticRot} and~\ref{fig:stdV_staticRot}). Based on the utility curve, the highest sensitivity for measuring flow perturbations corresponds to the head and posterior sections of the body. 
This suggests that the head and tail are the most informative regions for \blue{detecting shear stress fluctuations for a static larva}, regardless of the type of disturbance being considered. This observation is consistent with the distribution of neuromasts shown in Figure~\ref{fig:fishNeuromasts}, where the \blue{surface neuromasts} are visible in high concentrations in the head and posterior regions of fish, but show sparse presence along the midsection. \subsubsection{Sequential sensor placement} In the previous section we discussed the case of a single sensor on the swimmer body. We now examine the optimal arrangement of multiple sensors, where the best location for the $n$-th sensor is determined provided that $n-1$ sensors have already been placed. Assume that the first sensor has been placed at $s_1^\star$ using the global maximum in utility curve $\hat{U}_1(s)$. The next best sensor-location is determined by recomputing the utility function $\hat{U}_2(s)$ as described in section~\ref{sec:sequentialOpt}. Following this procedure, the optimal location of all sensors is determined sequentially. Figure~\ref{fig:staticSensors} shows the optimal distribution of 20 sensors for the static larva determined in this manner. We first examine the optimal arrangement for detecting oscillating cylinders, with the corresponding sensors depicted as black squares. We observe that out of the first 10 sensors, numbers $\{1, 3, 5, 9\}$ are placed at the head, whereas numbers $\{2, 4, 6, 7, 8, 10\}$ are found towards the posterior. This suggests a large information-gain via sensors located in the head \blue{and the tail.} For detecting rotating cylinders (sensors shown as blue circles in Figure~\ref{fig:staticSensors}), sensors $\{1, 3, 4, 6, 8, 10\}$ are found in the head, and sensors $\{2, 5, 7, 9\}$ are placed in the posterior section. 
We also examine the utility curves for placing the first three \blue{oscillation-detecting shear stress} sensors in Figure~\ref{fig:utility_successive}. \begin{figure} \centering \begin{subfigure}[c]{0.48\textwidth} \centering \includegraphics[width=\textwidth]{utility_successive.pdf} \subcaption{} \label{fig:utility_successive} \end{subfigure} \begin{subfigure}[c]{0.48\textwidth} \centering \includegraphics[width=\textwidth]{utility_cumulative.pdf} \subcaption{} \label{fig:utility_cumulative} \end{subfigure} \caption{(\subref{fig:utility_successive}) Utility curves for placing the first three sensors on a static larva that detects oscillating cylinders (Figure~\ref{fig:snapshotsOscillating}). The solid green curve corresponds to $\hat{U}_1(s)$, the dashed purple curve to $\hat{U}_2(s)$, and the red dash-dot curve to $\hat{U}_3(s)$. (\subref{fig:utility_cumulative}) The optimal utility $\hat U_{n}^\star$ for the $n$-th sensor can be determined as $\max_{s} \hat{U}_n(s)$ using the curves shown in panel (\subref{fig:utility_successive}) (see also Eq.~\ref{eq:optimal:sensor}).} \label{fig:utility_sequential} \end{figure} We observe that $\hat{U}_2(s_1) \approx \hat{U}_1(s_1)$, which indicates that placing a second sensor at the same location as the first ($s_1/L=0.033$) would not lead to an appreciable increase in the utility value (i.e., no gain in useful information). The maximum in $\hat{U}_2(s)$ occurs at $s/L=0.95$, which yields the optimal location $s_2^\star$ for the second sensor. Another notable aspect of curve $\hat{U}_2(s)$ is a pronounced `v-shaped' depression in the vicinity of $s_1^\star$, which results from using a non-zero correlation length in Eq.~\ref{eq:covarianceMatrix}. The low utility values in this region impede the placement of sensors too close \blue{to each other}. 
Using a zero correlation length would have resulted in an abrupt drop in $\hat{U}_2(s)$ at $s_1^\star$ (instead of the smooth depression), and could lead to excessive clustering of sensors within confined neighbourhoods. Figure~\ref{fig:utility_cumulative} shows the cumulative utility value for an increasing number of sensors placed on the swimmer body. We observe that after a rapid initial rise for the first three to five sensors, the utility of placing subsequent sensors increases very slowly. This indicates that using a limited number of optimal \blue{sensor} locations should be sufficient to characterize disturbance sources with \blue{reasonably good} accuracy. \subsection{Motionless larva in the wake of a D-cylinder} We now consider simulations where a rigid larva-shaped profile is placed in the unsteady vortex-wake generated by a D-shaped half cylinder (Figure~\ref{fig:dCylSnapshots}). \begin{figure} \centering \begin{subfigure}[b]{0.48\textwidth} \centering \frame{\includegraphics[width=\textwidth]{snapshots/dCylSnaps/shotDcyl_0060.png}} \subcaption{} \label{fig:dCylSnap1} \end{subfigure} \begin{subfigure}[b]{0.48\textwidth} \centering \frame{\includegraphics[width=\textwidth]{snapshots/dCylSnaps/shotDcyl_0073.png}} \subcaption{} \label{fig:dCylSnap2} \end{subfigure} \begin{subfigure}[b]{0.48\textwidth} \centering \frame{\includegraphics[width=\textwidth]{snapshots/dCylSnaps/shotDcyl_0086.png}} \subcaption{} \label{fig:dCylSnap3} \end{subfigure} \begin{subfigure}[b]{0.48\textwidth} \centering \frame{\includegraphics[width=\textwidth]{snapshots/dCylSnaps/shotDcyl_0099.png}} \subcaption{} \label{fig:dCylSnap4} \end{subfigure} \caption{Snapshots of the vorticity field around a static larva in the wake of a D-shaped cylinder with diameter $0.5L$. The D-cylinder is oriented at a {10\degree} angle with respect to a uniform horizontal flow to promote vortex shedding. 
The snapshots are shown at regular time-intervals, with positive vorticity shown in red and negative vorticity shown in blue. A corresponding animation is shown in Movie 4.} \label{fig:dCylSnapshots} \end{figure} This configuration is inspired by the pioneering work of \citet{Liao2003Science}, who examined the fluid dynamics of trout placing themselves behind rocks. A uniform horizontal flow of $1L/s$ is imposed throughout the computational domain, and the rigid bodies are held stationary. The D-cylinder is located at $(0.2,0.5)$, and the rectangular prior-region for placing the larva extends from $\cylinderPos_{min}=(0.3, 0.43)$ to $\cylinderPos_{max}=(0.79, 0.57)$. A total of $11\times 36 = 396$ potential $\cylinderPos^{(i)}$ locations are distributed uniformly throughout the prior-region. The Reynolds number is $Re=200$ based on the cylinder diameter, and $Re=400$ based on the swimmer length. Figure~\ref{fig:dCylSensors} shows the utility curve for placing the first shear stress sensor on the \blue{static} larva, as well as the sensor distribution resulting from sequential placement. \begin{figure} \centering \begin{subfigure}[c]{\textwidth} \centering \includegraphics[width=0.8\textwidth]{static_dCyl_utility.pdf} \subcaption{} \label{fig:utilityDcyl} \end{subfigure} \\ \begin{subfigure}[c]{\textwidth} \centering \includegraphics[width=0.8\textwidth]{Dcyl_sensors.pdf} \subcaption{} \label{fig:sensorsDcyl} \end{subfigure} \caption{(\subref{fig:utilityDcyl}) Utility curve $\hat{U}_1(s)$ for larvae in a D-cylinder's wake. (\subref{fig:sensorsDcyl}) Sequential placement of 20 sensors, with the order of placement shown.} \label{fig:dCylSensors} \end{figure} The utility values for $0.2 \le s/L \le 0.6$ are close to zero, which implies that placing the first sensor along the midsection would provide minimal information gain.
Using the sequential-placement procedure described in section~\ref{sec:sequentialOpt}, we determine that \blue{all} of the first 10 sensors are placed at the head, \blue{with no sensors present in the tail.} Our results indicate that sensors at the head are far more significant than sensors in the mid- and posterior-sections of the body for detecting the unsteady wake behind a half-cylinder. \subsection{Self-propelled swimmers\blue{: shear stress sensors}} \label{sec:swimmingFish} Fish generate vorticity on their bodies by their undulatory motion. Their flow-sensing neuromasts are completely immersed in this self-generated flow field, which likely has a significant impact on their ability to detect external disturbances. To include the influence of these self-generated flows on optimal sensor-placement, we now consider simulations of self-propelled swimmers that are exposed to oscillating and rotating cylinders (Figure~\ref{fig:showShape}). These swimmers utilize an intermittent swimming gait referred to as `burst-and-coast' swimming, which allows for \blue{improved} sensory perception \citep{Kramer2001}, as self-generated disturbances subside during the coasting phase. The swimmers perform four full burst-coast swimming cycles starting from rest, before the cylinder starts oscillating or rotating, as depicted in Movie 1{} and Movie 2{}. In the initial transient phase, the swimmer gains a speed of approximately $\blue{0.7}L/s$, which corresponds to a Reynolds number of $\text{Re} =uL/\nu\approx \blue{280}$ (with $L=0.2$ and $\nu=10^{-4}$). At the start of the fifth coasting phase, the cylinder starts moving, which simulates the startle/attack response of a prey/predator present in the swimmer's vicinity. 
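The sequential-placement procedure referenced above is, in essence, a greedy loop: each new sensor is the candidate location that maximizes the utility of the already-selected set augmented by that candidate. A schematic sketch follows; the utility function is left abstract, and the toy example in the test (base scores minus an exponential proximity penalty with a correlation length) is purely illustrative of how a nonzero correlation length discourages clustering, not the paper's actual model:

```python
import numpy as np

def greedy_sensor_placement(candidates, utility_fn, n_sensors):
    """Greedy sequential placement: at each step, append the candidate that
    maximizes the utility of the selected set plus the new sensor.

    candidates : list of candidate sensor locations (e.g. arclength values s/L).
    utility_fn : callable mapping a tuple of selected locations to a scalar
                 utility (e.g. an expected-information-gain estimate).
    """
    selected = []
    remaining = list(range(len(candidates)))
    for _ in range(n_sensors):
        best_i, best_u = None, -np.inf
        for i in remaining:
            u = utility_fn(tuple(selected + [candidates[i]]))
            if u > best_u:
                best_i, best_u = i, u
        selected.append(candidates[best_i])
        remaining.remove(best_i)
    return selected
```

With a utility that rewards high-information locations but penalizes placing sensors within one correlation length of each other, the loop naturally spreads sensors out instead of stacking them at a single peak.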
The rectangular prior-region for initializing the cylinders extends from $\cylinderPos_{min}=(0.25, 0.375)$ to $\cylinderPos_{max}=(0.7, 0.5)$, with a total of $11\times 37 = 407$ potential $\cylinderPos^{(i)}$ locations distributed uniformly throughout the region. The swimmer's center of mass \blue{is} located at $(0.5, 0.3)$. To determine the extent to which body shape influences optimal placement, we perform simulations using a larva-shaped profile, and a simplified model of an adult. Figure~\ref{fig:utilitySwimmers} compares the utility curves and sensor distributions for these two distinct swimmers. \begin{figure} \centering \begin{subfigure}[c]{\textwidth} \centering \includegraphics[width=0.8\textwidth]{larva_utility.pdf} \subcaption{} \label{fig:utilityLarva} \end{subfigure} \\ \begin{subfigure}[c]{\textwidth} \centering \includegraphics[width=0.8\textwidth]{larva_sensors.pdf} \subcaption{} \label{fig:sensorsLarva} \end{subfigure} \\ \begin{subfigure}[c]{\textwidth} \centering \includegraphics[width=0.8\textwidth]{adult_utility.pdf} \subcaption{} \label{fig:utilityAdult} \end{subfigure} \\ \begin{subfigure}[c]{\textwidth} \centering \includegraphics[width=0.8\textwidth]{adult_sensors_horizNum.pdf} \includegraphics[width=0.8\textwidth]{adult_sensors_rotNum.pdf} \subcaption{} \label{fig:sensorsAdult} \end{subfigure} \caption{(\subref{fig:utilityLarva}) Utility curves for the first \blue{shear stress} sensor, $\hat{U}_1(s)$, on a larva-shaped swimmer (black squares - oscillating cylinders, blue circles - rotating cylinders). (\subref{fig:sensorsLarva}) Sequential placement of 20 sensors along the body, with the order of placement shown. (\subref{fig:utilityAdult}) Utility curves for an adult-shaped swimmer. 
(\subref{fig:sensorsAdult}) Sensor placement for the adult, with results from horizontal and rotating disturbances shown separately for clarity.} \label{fig:utilitySwimmers} \end{figure} Based on the utility curves in Figures~\ref{fig:utilityLarva} and~\ref{fig:utilityAdult}, we deduce that the head is the most suitable region for placing the first sensor, as was the case for the motionless profiles examined in the previous sections. \blue{We also observe that the utility curves are correlated to the surface curvature of their respective body profiles; in the case of the larva, there is marked variation in $\hat{U}_1(s)$ for $s/L\le0.2$, which corresponds to large curvature changes in the body surface. The utility curve also shows a gradual variation for $s/L\ge0.6$, which corresponds to a gentler change in curvature of the surface. Similarly, the utility curves and body curvature for the adult vary rapidly for $s/L\le0.05$ and more gradually for $s/L\ge0.6$.} Furthermore, we \blue{note that the blue utility curves in Figures~\ref{fig:utilityLarva} and~\ref{fig:utilityAdult} are close to $0$ for $s/L \ge 0.2$. This suggests that the head is the most useful region for placing rotation-detecting shear stress sensors, irrespective of differences in body shape.} The optimal sensor arrangements observed in Figures~\ref{fig:sensorsLarva} and~\ref{fig:sensorsAdult} are listed in Table~\ref{tab:swimmerSensors}. 
\begin{table} \begin{center} \def~{\hphantom{0}} \begin{tabular}{lllcl} & &Head &Midsection &Posterior \\ \hline \multirow{2}{*}[-1.5pt]{Larva} &Oscillating & $1, 2, 4, 5, 8, 9$ &\textemdash & $3, 6, 7, 10$ \\[5pt] \cline{2-5}\\[-5pt] &Rotating &$1, 2, \ldots, 10 $ &\textemdash &\textemdash\\ \hline \multirow{2}{*}{Adult} &Oscillating & $1, 2, 4, 6, 9, 10$ &\textemdash & $3, 5, 7, 8$ \\[5pt] \cline{2-5} \\[-5pt] &Rotating &$1, 2, \ldots, 10$ &\textemdash &\textemdash\\ \hline \end{tabular} \caption{Optimal distribution of the first 10 \blue{shear stress} sensors for the self-propelled swimmers. The body has been divided into 3 distinct segments: the head ($0\le s/L <0.2$); the midsection ($0.2\le s/L < 0.6$); and the posterior ($0.6\le s/L \le1$).} \label{tab:swimmerSensors} \end{center} \end{table} \blue{There is strong indication that the head is the most important region for detecting shear stress caused by external disturbances, followed by the posterior section; the midsection appears to be insensitive to shear-stress variations altogether, as evidenced by the lack of sensors in this region. Moreover, the posterior section appears to be insensitive to rotating disturbances regardless of the body shape.} \blue{These observations} agree well with \blue{surface} neuromast distributions observed in live fish (Figure~\ref{fig:lateralLineSystem}), where large numbers are found in the head \blue{and the tail}, with sparser clustering in the midsection. \subsection{Optimal sensor placement using combined datasets} \label{sec:combined_shear} Fish are subject to a multitude of external stimuli over the course of their lifetime. Hence, it is conceivable that neuromasts may be attuned to diverse sources of disturbance. 
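The head/midsection/posterior partition used in Table~\ref{tab:swimmerSensors} can be encoded directly. A minimal helper for tallying how many of the placed sensors fall in each body segment (the function name is illustrative):

```python
from collections import Counter

def body_segment(s_over_L):
    """Classify an arclength position s/L into the three segments used in
    the table: head [0, 0.2), midsection [0.2, 0.6), posterior [0.6, 1]."""
    if not 0.0 <= s_over_L <= 1.0:
        raise ValueError("s/L must lie in [0, 1]")
    if s_over_L < 0.2:
        return "head"
    if s_over_L < 0.6:
        return "midsection"
    return "posterior"

def segment_counts(sensor_positions):
    """Count sensors per body segment from a list of s/L values."""
    return Counter(body_segment(s) for s in sensor_positions)
```

Applied to the optimal arrangements above, such a tally reproduces the head-and-posterior clustering (and empty midsection) reported in the table.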
We emulate this situation for optimal sensor placement by considering data collected from the five different simulation setups \blue{simultaneously}, namely, motionless larvae with oscillating, rotating, and D-shaped cylinders, and self-propelled larvae with oscillating and rotating cylinders. The sequential-placement procedure described in the previous sections is followed, with a slight modification to the definition of the utility function. The combined utility for the five different configurations may be expressed as a sum of the individual utility functions\blue{,} due to the conditional independence of the measurements on the sensor locations (see Appendix~\ref{sec:app:derivation}). The resulting utility curve \blue{for the shear stress sensors} is shown in Figure~\ref{fig:combination}, along with the optimal sensor distribution. \begin{figure} \centering \begin{subfigure}[c]{\textwidth} \centering \includegraphics[width=0.8\textwidth]{all_5way_utility.pdf} \subcaption{} \label{fig:utilityAllSum} \end{subfigure} \\ \begin{subfigure}[c]{\textwidth} \centering \includegraphics[width=0.8\textwidth]{5waySum_sensors.pdf} \subcaption{} \label{fig:sensorsAllSum} \end{subfigure} \caption{(\subref{fig:utilityAllSum}) Utility curve for the first \blue{shear stress} sensor, $\hat{U}_1(s)$, on a larva-shaped swimmer, using a combination of all five flow configurations described in the paper. (\subref{fig:sensorsAllSum}) Sequential placement of 20 \blue{shear stress} sensors along the body.} \label{fig:combination} \end{figure} We observe predominant placement of sensors in the head \blue{and tail}, corresponding to large utility values \blue{in these regions. 
Moreover, we find that virtually no sensors are located in the midsection.} The dense clustering of sensors in the head \blue{and tail}, with sparse distribution in the midsection yet again resembles \blue{surface neuromast} patterns found in live fish \blue{(Figure~\ref{fig:fishNeuromasts}), and indicates that fish extremities may be ideal for detecting variations in shear stress.} \blue{ \subsection{Optimal pressure gradient sensors} \label{sec:swimmingFishPressGrad} We now consider the optimal placement of pressure gradient sensors on the larva's body. These sensors are analogous to canal neuromasts found in live fish, which display markedly similar distribution patterns across a variety of fish species~\citep{Ristroph2015}. The canal is usually present in a continuous line running from head to tail, and shows a high concentration of neuromasts in canal branches found in the head~\citep{Coombs1988,Ristroph2015}. We use a combination of the five distinct flow configurations described earlier, to determine the optimal arrangement of pressure gradient sensors by following the procedure described in section~\ref{sec:combined_shear}. The resulting utility curve and sensor distribution are shown in Figure~\ref{fig:combination_pressGrad}. \begin{figure} \centering \begin{subfigure}[c]{\textwidth} \centering \includegraphics[width=0.8\textwidth]{all_5way_utility_pressGrad.pdf} \subcaption{} \label{fig:utilityAllSum_pressGrad} \end{subfigure} \\ \begin{subfigure}[c]{\textwidth} \centering \includegraphics[width=0.8\textwidth]{5waySum_sensors_pressGrad.pdf} \subcaption{} \label{fig:sensorsAllSum_pressGrad} \end{subfigure} \caption{\blue{(\subref{fig:utilityAllSum_pressGrad}) Utility curve for the first \blue{pressure gradient} sensor, $\hat{U}_1(s)$, on a larva-shaped swimmer, using a combination of all five flow configurations described in the paper. 
(\subref{fig:sensorsAllSum_pressGrad}) Sequential placement of 20 \blue{pressure gradient} sensors along the body.}} \label{fig:combination_pressGrad} \end{figure} The most notable difference between the arrangement of pressure gradient sensors (Figure~\ref{fig:sensorsAllSum_pressGrad}), and that of shear stress sensors (Figure~\ref{fig:sensorsAllSum}), is observed in the midsection of the body. We find a consistent distribution of pressure gradient sensors in the midsection, which is not the case for shear stress sensors. Out of the 20 pressure gradient sensors placed, 10 are found clustered densely in the head ($s/L\le0.1$, which corresponds to high utility values in Figure~\ref{fig:utilityAllSum_pressGrad}), and the other 10 are spaced regularly throughout the body. This arrangement is similar to the neuromast distribution found in subsurface canals, which yet again suggests that this sensory structure may have evolved for detecting changes in pressure gradients with high accuracy. \refOne{In fact, the utility curve shown in Figure~\ref{fig:utilityAllSum_pressGrad} agrees qualitatively with the canal density reported by \citet{Ristroph2015}, especially for $s/L<0.2$. However, a direct comparison must be made with care, given that our simulations are two-dimensional, whereas the distributions reported by \citet{Ristroph2015} display significant three-dimensional branching in the head.} } \subsection{Inference of disturbance-generating source} \label{sec:inference} Having determined the optimal distribution of sensors on the swimmer body, we now assess how effectively these arrangements can characterize the disturbance sources. For a given set of sensors $\sensors$, this involves estimating the probability that a particular \blue{sensor} measurement \blue{may} originate from different cylinder positions within the prior-region. 
For this, we consider the measurements $\Measured^{(GT)}$ at the sensor locations, generated from a single cylinder \blue{located at $\cylinderPos^{(GT)}$} (the superscript $GT$ denotes `ground-truth'). For a given sensor configuration $\sensors$, the measurements $\Measured^{(GT)}$ are computed using the prediction error model $\Measured^{(GT)} = \Signal( \cylinderPos^{(GT)}; \sensors) + \bm{\varepsilon}(\sensors)$, where $\Signal( \cylinderPos^{(GT)}; \sensors)$ is obtained by simulating the Navier-Stokes equations with an oscillating cylinder located at $\cylinderPos^{(GT)}$, and $\bm{\varepsilon}(\sensors)$ is a vector sampled from the Gaussian distribution $\mathcal{N}(0, \Sigma(\sensors))$. Assuming that the disturbance position $\cylinderPos^{(GT)}$ is unknown, the swimmer attempts to identify it by assigning probability values to all possible cylinder locations $\cylinderPos$ within the prior-region (i.e., by determining the posterior distribution $p(\cylinderPos | \Measured^{(GT)} ,\sensors)$). The highest probability value yields the best estimate for the cylinder position. This process is analogous to a fish attempting to localize the position of a predator or prey. Using the fact that the prior distribution of the disturbance location is uniform, the required posterior probability distribution of the disturbance location is proportional to the likelihood $p(\Measured \vert \cylinderPos^{(GT)}, \sensors)$ defined in Eq.~\ref{eq:likelihood}, where $\Measured$ are the measurements recorded along the swimmer body. The resulting probability distribution for \blue{estimating the correct} $\cylinderPos^{(GT)}$ is depicted in Figure~\ref{fig:inference}, with the \refOne{rows} showing results for an increasing number of sensors. 
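The localization step described above amounts to evaluating the Gaussian likelihood of the measured signal for every candidate source position and normalizing; with a uniform prior, the posterior is the normalized likelihood and the best estimate is its argmax. A minimal sketch, assuming precomputed noise-free predictions for each candidate location (array names are illustrative):

```python
import numpy as np

def source_posterior(measured, signals, cov):
    """Posterior probability over candidate disturbance locations, given
    measurements at a fixed sensor set and a uniform prior over locations.

    measured : (n_sensors,) noisy measurements at the chosen sensors.
    signals  : (n_locations, n_sensors) noise-free predictions, one row per
               candidate cylinder position in the prior-region.
    cov      : (n_sensors, n_sensors) Gaussian error covariance.
    """
    r = measured[None, :] - signals              # residual for each candidate
    cinv = np.linalg.inv(cov)
    # Log-likelihood up to a location-independent additive constant
    log_lik = -0.5 * np.einsum('ij,jk,ik->i', r, cinv, r)
    log_lik -= log_lik.max()                     # guard against underflow
    p = np.exp(log_lik)
    return p / p.sum()                           # uniform prior: just normalize
```

The spread of the returned distribution mirrors the bright regions in the probability plots: more (or better-placed) sensors concentrate the posterior around the ground-truth cylinder position.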
\refOne{The left column shows probability-estimates from a self-propelled swimmer attempting to identify the position of an unknown disturbance-source, based on flow measurements from optimal sensor locations which are indicated on the body. The right column depicts estimates made by a swimmer using suboptimal sensor distributions. The probability distributions indicate that the swimmer on the left (using optimal sensor distributions) is able to provide a much more accurate estimate of the correct location of the disturbance-source.} \begin{figure} \centering \begin{subfigure}[c]{0.48\textwidth} \centering \includegraphics[width=\textwidth]{1sensor_optimal.png} \subcaption{} \label{fig:1sensor_optimal} \end{subfigure} \begin{subfigure}[c]{0.48\textwidth} \centering \includegraphics[width=\textwidth]{1sensor_uniform.png} \subcaption{} \label{fig:1sensor_uniform} \end{subfigure} \\ \begin{subfigure}[c]{0.48\textwidth} \centering \includegraphics[width=\textwidth]{3sensor_optimal.png} \subcaption{} \label{fig:3sensor_optimal} \end{subfigure} \begin{subfigure}[c]{0.48\textwidth} \centering \includegraphics[width=\textwidth]{3sensor_uniform.png} \subcaption{} \label{fig:3sensor_uniform} \end{subfigure} \\ \begin{subfigure}[c]{0.48\textwidth} \centering \includegraphics[width=\textwidth]{5sensor_optimal.png} \subcaption{} \label{fig:5sensor_optimal} \end{subfigure} \begin{subfigure}[c]{0.48\textwidth} \centering \includegraphics[width=\textwidth]{5sensor_uniform.png} \subcaption{} \label{fig:5sensor_uniform} \end{subfigure} \caption{Plots showing the probability that the \refOne{signal being measured by a self-propelled swimmer} originates from a particular location in the prior-region. \refOne{Left column - swimmer using flow measurements from optimal sensor locations indicated on the body, right column - estimates made by a swimmer using suboptimal sensor distributions}. Brighter areas indicate regions of higher probability. 
The relevant sensor arrangement is shown using red `$\times$' symbols on the swimmers' bodies. The actual position of the signal-generating cylinder is marked with red diamonds. (\subref{fig:1sensor_optimal}, \subref{fig:3sensor_optimal}, \subref{fig:5sensor_optimal}) Probability distributions computed using measurements from 1, 3, and 5 optimal sensors on both sides of the body. (\subref{fig:1sensor_uniform}, \subref{fig:3sensor_uniform}, \subref{fig:5sensor_uniform}) Probability distributions for 1, 3, and 5 uniformly-distributed sensors on both sides.} \label{fig:inference} \end{figure} We observe that the un-informed placement of a single sensor in Figure~\ref{fig:1sensor_uniform} leads to a large spread in the probability distribution, making it difficult to locate the disturbance source accurately. In comparison, the first optimal sensor in Figure~\ref{fig:1sensor_optimal} yields a noticeably narrower spread, centered close to the correct position of the signal-generating cylinder (i.e., the ground-truth). The probability distributions in both cases become narrower with increasing number of sensors, making it easier to locate the disturbance source. \blue{In all cases, the optimal arrangement of sensors performs noticeably better than the uniform distribution for identifying the correct cylinder location.} \section{Discussion} \label{sec:discussion} \refThree{While the present work attempts to represent realistic flow conditions, it is important to keep in mind its limitations and simplifying assumptions used in the simulations and data analyses. Most importantly, we note that flexural dynamics of the sensory structures are not accounted for in the current study. In actual fish, the extent to which the hair-like sensory structures are deflected by the flow can influence their sensing-effectiveness \citep{Hudspeth1977,Blake2006,VanTrump2008,Bleckmann2009}. 
For instance, beyond a certain locomotion speed, the superficial neuromasts may suffer from saturation effects during forward motion, when the hair cells are fully deflected and may no longer be able to detect external stimuli effectively. In addition to natural swimmers, such dynamics have been included in artificial lateral lines employed in robotic devices \citep{Fan2002}. Similarly, the canal neuromasts are located in recessed channels under the skin, which alter the flow experienced by the sensory structures considerably. This is compensated in real fish through the introduction of resonant hair cell structures \citep{Maoileidigh2012}, but is not accounted for in our simulations. All of these aspects may play an important role in determining the observed distribution of sensors on a fish's body, in addition to the fluid-flow induced by external stimuli. However, our two-dimensional simulations do not account for these factors, in an attempt to keep the complexity of the Navier-Stokes simulations to manageable levels. Furthermore, we note that detecting flows by a stationary swimmer in a quiescent fluid with external perturbations is not equivalent to flow measurements by moving swimmers; the latter suffer from separation effects of the boundary layer, which can influence the location of optimal sensors. Consequently, we must be careful when considering sensor-placement using the combined datasets, as done in Figures~\ref{fig:combination} and~\ref{fig:combination_pressGrad}. Inspecting the individual scenarios in Figures~\ref{fig:staticSensors}, \ref{fig:sensorsDcyl}, and \ref{fig:sensorsLarva}, we observe that the sensor distributions for the stationary and moving swimmers are not entirely dissimilar, which gives us confidence in using the combined dataset for sensor placement in Figures~\ref{fig:combination} and~\ref{fig:combination_pressGrad}.} \refOne{In our approach, a sensor-pair is placed on the body surface symmetrically around the fish centerline. 
We anticipate that re-evaluating the simulations with a left-right flip, i.e., changing the orientation of the fish with respect to its approach to the cylinder, may lead to some differences in the exact sensor locations. However, once the fish is experiencing the cylinder's wake on its pressure and shear sensors, we expect that the overall sensor arrangement will remain unchanged, i.e., we will still observe a dense distribution of sensors in the head and the tail. We have found that the dominant factor in determining sensor placement is not the approach to the flow but rather the body-geometry and motion. This is evident when we compare sensor distributions across very dissimilar scenarios, i.e., the three different flow-configurations involving stationary fish, self-propelled fish, and rigid fish placed in the D-cylinder's wake (Figures~\ref{fig:staticSensors}, \ref{fig:sensorsLarva}, and \ref{fig:sensorsDcyl}).} \refTwo{The present approach is computationally demanding as it requires conducting a large number of Direct Numerical Simulations (DNS) for the Navier-Stokes equations, each of which takes 4 to 7 hours to complete on a 12-core CPU node depending on the flow configuration. The computational cost is substantial, given that close to 400 distinct simulations have to be evaluated for each of the considered flow configurations, resulting in a total of approximately 3000 simulations. At the same time, we must emphasize that all DNS computations are performed once (off-line) and the sequential sensor-placement algorithm is rather inexpensive, with the computations taking on the order of 5 minutes using a single CPU core. Hence, it is imperative to store DNS data containing all possible sensor-measurements offline, and to process them for sequential sensor placement as required. 
We remark that if the same study were to be conducted in a three-dimensional setting using Direct Numerical Simulations, the computational cost of each simulation would increase by approximately three orders of magnitude, since the computational grid would increase in size from $4096 \times 4096$ cells for the present 2D cases, to approximately $4096 \times 4096 \times 1024$ cells for the 3D cases. This would represent a significant increase in computational cost, especially considering the need to run approximately 3000 DNSs to examine all of the cases discussed in the present work. We note, however, that if, for 3D flows, we instead use approaches such as Large Eddy Simulations (LES) or Unsteady Reynolds Averaged Navier-Stokes (URANS) calculations, we do not expect the computational costs to exceed those required herein for the 2D DNS. In closing, we argue that the availability of computational power and automation in the near future will make approaches such as the one presented herein amenable to further computational (and experimental) studies.} \section{Conclusion} \label{sec:conclusion} We have combined two-dimensional Navier-Stokes simulations with Bayesian optimal experimental design, to identify the best arrangement of sensory structures on self-propelled swimmers' bodies. The study is inspired by the particular distribution of flow-sensing mechanoreceptors found in many fish species, referred to as the lateral-line organ, where a large number of sensory structures are located in the head. We optimize sensor arrangements on two different swimmer shapes under the influence of various sources of disturbance. We find optimal arrangements that resemble those found in fish bodies, suggesting that such arrangements may allow them to gather information from their surroundings more effectively than other layouts. We demonstrate that the optimal configuration of these sensors depends on the body shape and the type of disturbance being perceived. 
This is explored using a variety of simulations involving both static and swimming configurations, using distinct body profiles resembling fish larvae and adults, and using disturbances generated by oscillating \blue{cylinders,} rotating cylinders\blue{,} and by D-shaped half cylinders. Despite certain differences that exist in sensor distributions among the various cases considered, there is a marked tendency for a large number of \blue{shear stress} sensors to be located in the head \blue{and the tail} of the swimmer, with \blue{virtually no sensors found} in the midsection. \blue{In the case of pressure gradient sensors, we observe a high density of sensors placed in the head, followed by a regularly spaced distribution along the entire body.} \blue{These observations} closely reflect the structure of the sensory organ in live fish. To assess the effectiveness of the sensor placement algorithm, we compare the performance of optimal arrangements to that of un-informed uniform sensor distributions. The results confirm that optimal distribution patterns lead to more accurate identification of external disturbances, which suggests that these distinctive distributions may allow fish to assimilate maximum information from their surroundings using the smallest number of neuromasts. We believe that the present work is a positive step towards understanding mechanosensing in fish, and we hope that the proposed methodology can assist in the development of optimal sensory-layouts for engineered swimmers. \section{Acknowledgments} \blue{This work was supported by European Research Council Advanced Investigator Award 341117, and utilized computational resources granted by the Swiss National Supercomputing Centre (CSCS) under project IDs `s658' and `ch7'. We thank Panagiotis E. Hadjidoukas for assistance with the TORC interface for running the flow simulations.}
\section{Introduction} \label{intro} Functional methods in modern statistical physics represent one of the most powerful tools for the study both of equilibrium and dynamical properties (see, e.g. \cite{Orland,Zinn}). A great amount of statistical field theories known in the literature are based of the Hubbard-Stratonovich transformation \cite{Hubbard1,Strato}, proposed in the 50ies. Nearly at the same time another method - so-called collective variables (CV) method - that allows in a explicit way to construct a functional representation for many-particle interacting systems was developed \cite{Zubar,Yuk1} and applied for the description of charged particle systems, in particular, to the calculation of the configurational integral of the Coulomb systems. The idea of this method was based on: (i) the concept of collective coordinates being appropriate for the physics of system considered (see, for instance, \cite{Bohm-Pines}), and (ii) the integral equality allowing to derive an exact functional representation for the configurational Boltzmann factor. Later the CV methods was successfully developed for the description of classical many-particle systems \cite{Yuk-Hol} and for the phase transition theory \cite{Yuk}. One more functional approach, the mesoscopic field theory, was recently developed for the study of phase transitions in ionic fluids \cite{ciach_stell}. One of the goals of this paper is to reconsider the CV method from the point of view of the statistical field theory and to compare the results obtained with those found recently by one of us by means of the KSSHE (Kac-Siegert-Stratonovich-Hubbard-Edwards) theory \cite{Cai-Mol}. We formulate the method of CV in real space and consider a one-component continuous model consisting of hard spheres interacting through additive pair potentials. 
The expression for the functional of the grand partition function is derived and the CV action that depends upon two scalar fields - field $\rho$ connected to the number density of particles and field $\omega$ conjugate to $\rho$ - is calculated. We study correlations between these fields as well as their relations to the density and energy correlations of the fluid. The grand partition function of the model is evaluated in a systematic way using a well-known method of statistical field theory, namely the so-called loop expansion. It consists in expanding functionally the action $\mathcal{H}$ around a saddle point, so that the lowest order (zero loop) approximation defines the mean-field (MF) level of the theory and the first order loop expressions correspond to the random phase approximation (RPA). Recently \cite{Cai-Mol} this technique was applied to the action obtained within the framework of the KSSHE theory. In this paper we perform a two-loop expansion of the pressure and the free energy of the homogeneous fluid which yields a new type of approximation which we plan to test in our future work. The paper is organized as follows. In Section~2, starting from the Hamiltonian, we introduce the two different functional representations of the grand partition function based on the KSSHE and CV methods. Here we also enter several types of statistical field averages that are important in the further part of the paper. In Section~3 we introduce the CV and KSSHE field correlation functions, establish links between them as well as their relation to the density correlation functions of the fluid. The MF level of the KSSHE and CV field theories is formulated in Section~4. Section~5 is devoted to the loop expansion of the grand potential. The pressure and the free energy of the homogeneous fluid are obtained in the two-loop approximation in Section~6. We conclude with some final remarks in Section~7. 
\section{Summary of previous works} \label{summary} \subsection{The model} We consider the case of a simple three dimensional fluid that consists of identical hard spheres of diameter $\sigma$ with additional isotropic pair interactions $v(r_{ij})$ ($r_{ij}=\vert \mathbf{x}_i -\mathbf{x}_j \vert$, $\mathbf{x}_i$ is the position of particle "$i$"). Since $v(r)$ is arbitrary in the core, i.e. for $r \leq \sigma$, we assume that $v(r)$ has been regularized in such a way that its Fourier transform $v_{q}$ is a well behaved function of $q$ and that $v(0)$ is a finite quantity. We denote by $\Omega$ the domain of volume $V$ occupied by the molecules of the fluid. The fluid is at equilibrium in the grand canonical (GC) ensemble and we denote by $\beta=1/kT$ the inverse temperature ($k$ is the Boltzmann constant) and $\mu$ is the chemical potential. In addition the particles are subject to an external potential $\psi(\mathbf{x})$ and we will denote by $\nu(\mathbf{x})=\beta (\mu-\psi(\mathbf{x}))$ the dimensionless local chemical potential. We will stick to notations usually adopted in standard textbooks on the theory of liquids (see e.g. \cite{Hansen}) and denote by $w(r)=-\beta v(r)$ the negative of the dimensionless pair interaction. Quite arbitrarily we will say that the interaction is attractive if the Fourier transform $\widetilde{w}(q)$ is positive for all $q$; in the converse case it will be said repulsive. 
In a given configuration $\mathcal{C}=(N;\mathbf{x}_1 \ldots \mathbf{x}_N)$ the microscopic density of particles reads \begin{equation} \widehat{\rho}(\mathbf{x}\vert \mathcal{C})= \sum_{i=1}^{N} \delta^{(3)}(\mathbf{x}-\mathbf{x}_i) \; , \end{equation} and the GC partition function $\Xi\left[ \nu \right] $ can thus be written as \begin{equation} \label{csi}\Xi\left[ \nu \right] = \sum_{N=0}^{\infty} \frac{1}{N!} \int_{\Omega}d1 \ldots dn \; \exp\left( -\beta V_{\text{HS}}(\mathcal{C}) +\frac{1}{2} \left\langle \widehat{\rho}\vert w \vert\widehat{\rho} \right\rangle + \left\langle \overline{\nu}\vert \widehat{\rho} \right\rangle \right) \; , \end{equation} where $i \equiv \mathbf{x}_i $ and $di\equiv d^{3}x_i$. For a given volume $V$, $\Xi\left[ \nu \right]$ is a function of $\beta$ and a convex functional of the local chemical potential $\nu(x)$ which we have strengthened by using a bracket. In eq.\ (\ref{csi}) $\exp(-\beta V_{\text{HS}}(\mathcal{C}))$ denotes the hard sphere contribution to the Boltzmann factor in a configuration $\mathcal{C}$ and $\overline{\nu}=\nu+\nu_S$ where $\nu_S= - w(0)/2$ is proportional to the self-energy of the particles. From our hypothesis on $w(r)$, $\nu_S$ is a finite quantity which depends however on the regularization of the potential in the core. In the r.h.s of eq.\ (\ref{csi}) we have also introduced Dirac's brac-kets notations \begin{subequations} \begin{eqnarray} \left\langle \overline{\nu}\vert \widehat{\rho} \right\rangle &\equiv& \int_{\Omega} d1 \; \overline{\nu}(1)\widehat{\rho}(1) \\ \left< \widehat{\rho} \vert w \vert\widehat{\rho} \right> & \equiv & \int_{\Omega} d1 d2\; \widehat{\rho}(1\vert \mathcal{C}) w(1,2) \widehat{\rho}(2\vert \mathcal{C}) \; . \end{eqnarray} \end{subequations} Previous work have shown that $\Xi\left[ \nu \right]$ can be rewritten as a functional integral. Two such equivalent representations are reviewed below. 
\subsection{The KSSHE representation} As is well known, the GC partition function $\Xi\left[ \nu \right]$ can be re-expressed as a functional integral by performing the KSSHE transformation \cite{Kac,Siegert,Strato,Hubbard1,Hubbard2,Edwards} of the Boltzmann factor. Under this transformation $\Xi\left[ \nu \right]$ can be rewritten as the GC partition function of a fluid of bare hard spheres in the presence of a random Gaussian field $ \varphi$ with a covariance given by the pair potential \cite{Wiegel,Wegner,Cai-Mol,Cai-JSP}. More precisely one has \begin{itemize} \item \textit{ i) in the attractive case ($\widetilde{w}(q)>0$)} \begin{eqnarray} \label{attractive} \Xi\left[ \nu \right] &=& \mathcal{N}_{w}^{-1} \int \mathcal{D} \varphi \; \exp \left( -\frac{1}{2} \left\langle \varphi \vert w^{-1} \vert \varphi \right\rangle \right) \Xi_{\text{HS}}\left[ \overline{\nu} + \varphi\right] \nonumber \\ &\equiv & \left\langle \Xi_{\text{HS}}\left[ \overline{\nu} + \varphi\right] \right\rangle_{w} \; , \end{eqnarray} \item \textit{ ii) in the repulsive case ($\widetilde{w}(q)<0$)} \begin{eqnarray} \label{repulsive} \Xi\left[ \nu \right] &=& \mathcal{N}_{(-w)}^{-1} \int \mathcal{D} \varphi \; \exp \left( \frac{1}{2} \left\langle \varphi \vert w^{-1} \vert \varphi \right\rangle \right) \Xi_{\text{HS}}\left[ \overline{\nu} + i \varphi\right] \nonumber \\ &\equiv & \left\langle \Xi_{\text{HS}}\left[ \overline{\nu} + i \varphi\right] \right\rangle_{(-w)} \; , \end{eqnarray} \end{itemize} where, in both cases, $\varphi$ is a real random field and $\Xi_{\text{HS}}$ denotes the GC partition function of bare hard spheres. $\Xi$ can thus be written as a Gaussian average $\left\langle \ldots \right\rangle_{w}$ of covariance $w$ and we have denoted by $\mathcal{N}_{w}$ the normalization constant \begin{equation} \label{normaw} \mathcal{N}_{w}= \int \mathcal{D} \varphi \; \exp \left( -\frac{1}{2} \left\langle \varphi \vert w^{-1} \vert \varphi \right\rangle \right) \; .
\end{equation} The functional integrals which enter eqs.~(\ref{attractive}),~(\ref{repulsive}) and ~(\ref{normaw}) can be given a precise meaning in the case where the domain $\Omega$ is a cube of side $L$ with periodic boundary conditions (PBC), which will be implicitly assumed henceforth. More details are given in Appendix A. In the repulsive case, the hard core part of the interaction is not compulsory for the existence of a thermodynamic limit \cite{Ruelle} and the reference system can be chosen as the ideal gas \cite{Efimov,Lambert}. Eqs~(\ref{attractive}) and ~(\ref{repulsive}) are easily generalized to the case of molecular fluids or mixtures, for instance a mixture of charged hard spheres (the primitive model) \cite{Cai-JSP,Cai-Mol1}. When the pair interaction $w$ is neither attractive nor repulsive it is necessary to introduce two real scalar fields if some rigor is aimed at \cite{Cai-Mol}. Alternatively, eq.~(\ref{attractive}) can be considered to hold in any case, bearing in mind that $\varphi$ can then be a complex scalar field. Therefore we shall write formally in all cases \begin{equation} \label{csiKSSHE} \Xi\left[ \nu \right]=\mathcal{N}_{w}^{-1} \int \mathcal{D} \varphi \; \exp \left( - \mathcal{H}_{\text{K}} \left[\nu, \varphi \right] \right) \; , \end{equation} where the action of the KSSHE field theory reads as \begin{equation} \label{action-K} \mathcal{H}_K \left[\nu, \varphi \right]= \frac{1}{2}\left\langle \varphi \vert w^{-1} \vert \varphi \right\rangle - \ln \Xi_{\text{HS}}\left[ \overline{\nu} + \varphi\right] \; . \end{equation} \subsection{The CV representation} We now introduce briefly the CV representation of $\Xi\left[ \nu \right]$ and refer the reader to the literature for a more detailed presentation \cite{Yuk,Yuk1,Yuk-Hol,Yuk2,Yuk3}.
The starting point is the identity \begin{eqnarray} \label{a} \exp \left( \frac{1}{2}\left\langle \widehat{\rho}\vert w \vert \widehat{\rho}\right\rangle \right)& =& \int \mathcal{D} \rho \; \delta_{\mathcal{F}}\left[ \rho -\widehat{\rho} \right] \exp \left( \frac{1}{2}\left\langle \rho \vert w \vert \rho \right\rangle \right) \; \nonumber \\ &=& \int \mathcal{D} \rho \mathcal{D} \omega \; \exp \left( \frac{1}{2}\left\langle \rho \vert w \vert \rho \right\rangle +i \left\langle \omega \vert \left\lbrace \rho - \widehat{\rho} \right\rbrace \right\rangle \right), \end{eqnarray} where we have made use of the functional "delta function" \cite{Orland} $\delta_{\mathcal{F}}\left[ \rho \right]$ defined in eq.~(\ref{deltaF}) in \mbox{appendix A}. Inserting eq.~(\ref{a}) in the expression~(\ref{csi}) of the GC partition function one finds \begin{equation} \label{csiCV} \Xi\left[ \nu \right]= \int \mathcal{D} \rho \mathcal{D} \omega \; \exp \left( - \mathcal{H}_{\text{CV}}\left[\nu, \rho, \omega \right] \right) \; , \end{equation} where the action of the CV field theory reads as \begin{equation} \label{actionCV} \mathcal{H}_{\text{CV}} \left[\nu, \rho, \omega \right]= -\frac{1}{2} \left\langle \rho \vert w \vert \rho \right\rangle - i \left\langle \omega \vert \rho\right\rangle - \ln \Xi_{\text{HS}}\left[ \overline{\nu} - i \omega \right] \; . \end{equation} We stress that $\omega$ and $\rho$ are two real scalar fields and that eqs.~(\ref{csiCV}) and~(\ref{actionCV}) are valid for repulsive, attractive as well as arbitrary pair interactions. Moreover, with the clever normalization of Wegner \cite{Wegner} for the functional measures there are no unspecified multiplicative constants involved in eq.~(\ref{csiCV}) (see Appendix A for more details). The CV transformation is clearly more general than the KSSHE transformation since it can be used for a pair interaction $w(1,2)$ which does not possess an inverse and is easily generalized to $n$-body interactions with $n>2$.
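In zero dimensions the KSSHE transformation underlying eq.~(\ref{attractive}) collapses onto the elementary Gaussian identity $\exp(w x^{2}/2)=\left\langle \exp(x\varphi)\right\rangle_{w}$, with $\varphi$ a centred Gaussian variable of variance $w>0$. A quick numerical check by quadrature (our own sketch; the values of $w$ and $x$ are arbitrary):

```python
import numpy as np

def trapz(y, x):
    # simple trapezoidal quadrature
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

w = 0.7     # positive "covariance": the attractive case
x = 1.3     # plays the role of the microscopic density

phi = np.linspace(-30.0, 30.0, 200001)
gauss = np.exp(-phi**2 / (2.0 * w)) / np.sqrt(2.0 * np.pi * w)  # normalized Gaussian weight

lhs = np.exp(0.5 * w * x**2)               # exp((1/2) <rho|w|rho>), scalar version
rhs = trapz(gauss * np.exp(x * phi), phi)  # <exp(<phi|rho>)>_w

print(lhs, rhs)   # the two numbers agree to quadrature accuracy
```

The functional identity is the same statement mode by mode once the fields are expanded on the Fourier basis of the periodic box.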
The equivalence of the CV and KSSHE representations~(\ref{csiKSSHE}) and~(\ref{csiCV}) of $\Xi\left[\nu \right]$ is readily established in the repulsive case ($w<0$) by making use of the properties of Gaussian integrals (cf. eq.~(\ref{Gauss}) of Appendix A). In the attractive or in the general case we did not find a convincing way to obtain one formula from the other. \subsection{Statistical average} In the sequel it will be important to distinguish carefully, besides the usual GC average $\left\langle \mathcal{A}(\mathcal{C}) \right\rangle_{\text{GC}}$ of a dynamical variable $\mathcal{A}(\mathcal{C})$, between several types of statistical field averages. Firstly, Gaussian averages of covariance $w$ \begin{equation} \label{moyw} \left\langle \mathcal{A}\left[\varphi \right] \right\rangle_{w}= \mathcal{N}_{w}^{-1} \int \mathcal{D} \varphi \; \exp \left(-\frac{1}{2} \left\langle \varphi \vert w^{-1} \vert \varphi \right\rangle \right) \mathcal{A}\left[\varphi \right], \end{equation} where $\mathcal{A}\left[ \varphi\right]$ is some functional of the KSSHE field $\varphi$ and $\mathcal{N}_{w}$ has been defined in eq.~(\ref{normaw}); secondly, the KSSHE averages defined as \begin{equation} \label{moyK} \left\langle \mathcal{A}\left[\varphi \right] \right\rangle_{\text{K}}= \Xi \left[\nu \right]^{-1} \; \int \mathcal{D} \varphi \; \mathcal{A}\left[\varphi \right] \exp \left( - \mathcal{H}_{\text{K}} \left[\nu, \varphi \right] \right), \end{equation} and thirdly the CV averages defined in a similar way as \begin{equation} \label{moyCV} \left\langle \mathcal{A}\left[\rho, \omega \right] \right\rangle_{\text{CV}}=\Xi\left[\nu \right]^{-1} \; \int \mathcal{D} \rho \mathcal{D} \omega \; \; \mathcal{A}\left[\rho, \omega \right] \exp \left( - \mathcal{H}_{\text{CV}} \left[\nu, \rho, \omega \right] \right), \end{equation} where $\mathcal{A}\left[ \rho, \omega \right]$ is a functional of the two CV fields $\rho$ and $\omega$.
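The formulas above can be exercised on a zero-dimensional caricature of the model (our own toy, not part of the original text): a single site holding at most one hard particle, so that the reference "hard sphere" partition function is $\Xi_{\text{HS}}(\xi)=1+e^{\xi}$. Eq.~(\ref{attractive}) then becomes an ordinary integral that can be compared with the direct two-term sum over $N$:

```python
import numpy as np

def trapz(y, x):
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

w, nu = 0.8, -0.2        # covariance w(0) > 0 and chemical potential (arbitrary values)
nu_bar = nu - 0.5 * w    # nu + nu_S with nu_S = -w(0)/2

phi = np.linspace(-30.0, 30.0, 200001)
gauss = np.exp(-phi**2 / (2.0 * w)) / np.sqrt(2.0 * np.pi * w)

# KSSHE side: <Xi_HS[nu_bar + phi]>_w with Xi_HS(xi) = 1 + exp(xi)
xi_kss = trapz(gauss * (1.0 + np.exp(nu_bar + phi)), phi)

# direct side: the N = 0 and N = 1 terms of the grand canonical sum;
# at N = 1 the pair term w/2 cancels the self-energy -w/2, leaving exp(nu)
xi_direct = 1.0 + np.exp(nu)

print(xi_kss, xi_direct)
```

The agreement is exact up to quadrature error, because for a single doubly-excluded site the Gaussian average can be done in closed form.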
\section{Correlation functions} \label{Corre} Since all thermodynamic quantities of the fluid can be expressed in terms of the GC density correlation functions $G^{(n)}$ \cite{Hansen} it is important to relate them to the field correlation functions in the CV and the KSSHE representations; this is the subject of the present section. \subsection{Density correlations} \label{CorreGC} The ordinary and truncated (or connected) density correlation functions of the fluid will be defined in this paper as \cite{Hansen,Stell1,Stell2} \begin{eqnarray} \label{defcorre} G^{(n)}[\nu](1, \ldots, n) &=&\left< \prod_{i=1}^{n} \widehat{\rho} (\mathbf{x}_{i} \vert \mathcal{C}) \right>_{GC} =\frac{1}{\Xi[\nu]}\frac{\delta^{n} \;\Xi[\nu]} {\delta \nu(1) \ldots \delta \nu(n)} \; ,\nonumber \\ G^{(n), T}[\nu](1, \ldots, n) &=& \frac{\delta^{n} \log \Xi[\nu]} {\delta \nu(1) \ldots \delta \nu(n)} \; . \end{eqnarray} Our notation emphasizes the fact that the $G^{(n)}$ (connected and not connected) are functionals of the local chemical potential $\nu(x)$ and functions of the coordinates $(1,\ldots, n) \equiv (\mathbf{x}_{1},\ldots, \mathbf{x}_{n})$. We know from the theory of liquids that \cite{Stell1,Stell2} \begin{equation} G^{(n), T}[\nu](1,\ldots,n)= G^{(n)}[\nu]( 1,\ldots,n) - \sum \prod_{m<n}G^{(m), T} [\nu](i_{1},\ldots,i_{m}) \; , \end{equation} where the sum of products is carried out over all possible partitions of the set $(1,\ldots,n)$ into subsets of cardinal $m<n$. Of course $\rho[\nu](x) \equiv G^{(n=1)}[\nu](x)=G^{(n=1), T}[\nu](x)$ is the local density of the fluid. It follows from the definition~(\ref{defcorre}) of the $G^{(n)}[\nu](1, \ldots, n)$ that they can be re-expressed as KSSHE or CV statistical averages, i.e.
\begin{subequations} \begin{eqnarray} G^{(n)}[\nu](1, \ldots, n)&=&\left\langle G^{(n)}_{\text{HS}}[\overline{\nu} + \varphi](1, \ldots, n)\right\rangle_{\text{K}} \; , \\ G^{(n)}[\nu](1, \ldots, n)&=&\left\langle G^{(n)}_{\text{HS}}[\overline{\nu} - i\omega](1, \ldots, n)\right\rangle_{\text{CV}} \; . \end{eqnarray} \end{subequations} Although enlightening, these relations are not very useful except for the special case $n=1$ which reads explicitly as \begin{subequations} \begin{eqnarray} \rho\left[\nu \right](\mathbf{x})&=& \left\langle \rho_{\text{HS}}[\overline{\nu} + \varphi](\mathbf{x})\right\rangle_{\text{K}} \; , \\ \rho\left[\nu \right](\mathbf{x})&=& \left\langle \rho_{\text{HS}}[\overline{\nu}- i\omega ](\mathbf{x})\right\rangle_{\text{CV}} \; , \end{eqnarray} \end{subequations} where $\rho_{\text{HS}}[\xi](\mathbf{x})$ is the local density of the hard sphere fluid with the local chemical potential $\xi(\mathbf{x})$. \subsection{KSSHE field correlations} \label{CorreK} Let us introduce the modified partition function \begin{equation} \Xi^{1}\left[ \nu, J \right]=\mathcal{N}_{w}^{-1} \int \mathcal{D} \varphi \; \exp \left( - \mathcal{H}_{\text{K}} \left[\nu, \varphi \right] + \left\langle J \vert \varphi\right\rangle \right) \; , \end{equation} where $J$ is a real scalar field. Clearly $\Xi^{1}\left[ \nu, J \right]$ is the generator of field correlation functions and a standard result of statistical field theory yields \cite{Zinn} \begin{eqnarray} \label{defcorreK} G^{(n)}_{\varphi}[\nu](1, \ldots, n) &=&\left< \prod_{i=1}^{n} \varphi \left(\mathbf{x}_{i}\right) \right>_{\text{K}} \; =\frac{1}{\Xi[\nu]} \left. \frac{\delta^{n} \;\Xi^1[\nu,J]} {\delta J(1) \ldots \delta J(n)} \right \vert_{J=0} \; ,\nonumber \\ G^{(n), T}_{\varphi} [\nu](1, \ldots, n) &=& \left. \frac{\delta^{n} \; \log \Xi^1[\nu,J]} {\delta J(1) \ldots \delta J(n)}\right \vert_{J=0} \; .
\end{eqnarray} Moreover one has \cite{Zinn} \begin{equation} G^{(n), T}_{\varphi} (1,\ldots,n)= G^{(n)}_{\varphi}( 1,\ldots,n) - \sum \prod_{m<n}G^{(m), T}_{\varphi}(i_{1},\ldots,i_{m}) \; . \end{equation} The relations between the $G^{(n), T}_{\varphi}$ and the truncated density correlation functions $G^{(n), T}$ have been established elsewhere by one of us \cite{Cai-Mol,Cai-JSP,Cai-Mol1}. We summarize them below for future reference. \begin{subequations} \label{dens-K} \begin{eqnarray} \label{dens-K1} \rho\left[ \nu \right](1)&=& w^{-1}(1,1^{'}) \left\langle \varphi (1^{'}) \right\rangle_{\text{K}} \; , \\ \label{dens-K2} G^{(2), T}\left[ \nu \right] (1,2) &=& -w^{-1}(1,2) +w^{-1}(1,1^{'})w^{-1}(2,2^{'}) \times \nonumber \\ &\times & G^{(2), T}_{\varphi}\left[ \nu \right] (1^{'},2^{'}) \; , \\ \label{dens-Kn} G^{(n), T}\left[ \nu \right] (1,\ldots,n)&=& w^{-1}(1,1^{'}) \ldots w^{-1}(n,n^{'}) \times \nonumber \\ &\times & G^{(n), T}_{\varphi}\left[ \nu \right] (1',\ldots,n') \qquad \text{ for } n\geq 3 \; , \end{eqnarray} \end{subequations} where we have adopted Einstein's convention, i.e. a space integration of position variables labelled by the same dummy indices over the domain $\Omega$ is meant. \subsection{\label{CorreCV}CV field correlations} In this section we study the correlation functions of the fields $\rho$ and $\omega$ in the CV representation.
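Relations~(\ref{dens-K1}) and~(\ref{dens-K2}) can be checked numerically on a zero-dimensional toy model (our own illustration, with arbitrary parameter values): a single site holding at most one particle, $\Xi_{\text{HS}}(\xi)=1+e^{\xi}$, for which $\rho=e^{\nu}/(1+e^{\nu})$ and $G^{(2),T}=\rho(1-\rho)$ exactly:

```python
import numpy as np

def trapz(y, x):
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

w, nu = 0.8, -0.2
nu_bar = nu - 0.5 * w    # includes the self-energy shift nu_S = -w/2

phi = np.linspace(-30.0, 30.0, 200001)
# exp(-H_K) up to the constant N_w: Gaussian weight times Xi_HS(nu_bar + phi)
weight = np.exp(-phi**2 / (2.0 * w)) * (1.0 + np.exp(nu_bar + phi))

xi   = trapz(weight, phi)
phi1 = trapz(weight * phi, phi) / xi       # <phi>_K
phi2 = trapz(weight * phi**2, phi) / xi    # <phi^2>_K

rho_exact = np.exp(nu) / (1.0 + np.exp(nu))   # occupation probability of the site
g2_exact  = rho_exact * (1.0 - rho_exact)     # <N^2> - <N>^2 for a 0/1 variable

print(phi1 / w, rho_exact)                           # scalar version of eq. (dens-K1)
print(-1.0 / w + (phi2 - phi1**2) / w**2, g2_exact)  # scalar version of eq. (dens-K2)
```

Note the role of the subtracted $-w^{-1}$ term in eq.~(\ref{dens-K2}): without it the field variance overcounts the free Gaussian fluctuations.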
We thus define \begin{eqnarray} \label{G-CV} G^{(n)}_{\rho}[\nu](1, \ldots, n) &=&\left< \prod_{i=1}^{n} \rho \left(\mathbf{x}_{i}\right) \right>_{\text{CV}} \; , \nonumber \\ G^{(n)}_{\omega}[\nu](1, \ldots, n) &=&\left< \prod_{i=1}^{n} \omega \left(\mathbf{x}_{i}\right) \right>_{\text{CV}} \; , \end{eqnarray} and their connected parts \begin{eqnarray} G^{(n), T}_{\rho} (1,\ldots,n)&= & G^{(n)}_{\rho}( 1,\ldots,n) - \sum \prod_{m<n}G^{(m), T}_{\rho}(i_{1},\ldots,i_{m}) \; , \nonumber \\ G^{(n), T}_{\omega} (1,\ldots,n)&= & G^{(n)}_{\omega}( 1,\ldots,n) - \sum \prod_{m<n}G^{(m), T}_{\omega}(i_{1},\ldots,i_{m}) \; . \; \end{eqnarray} \subsubsection{Correlation functions $G^{(n)}_{\rho}$} \label{gro} Let us define the modified GC partition function \begin{equation} \label{Xi2} \Xi^2\left[\nu,J \right]= \int \mathcal{D} \rho \mathcal{D} \omega \; \exp \left( - \mathcal{H}_{\text{CV}}\left[\nu, \rho, \omega \right]+\left\langle J \vert \rho\right\rangle \right) \; , \end{equation} where $J$ is a real scalar field. $\Xi^2\left[\nu,J \right]$ is clearly the generator of the $G^{(n)}_{\rho}$ and we thus have \begin{eqnarray} \label{defcorreCVrho} G^{(n)}_{\rho}[\nu](1, \ldots, n) &=& \frac{1}{\Xi[\nu]} \left. \frac{\delta^{n} \;\Xi^2[\nu,J]} {\delta J(1) \ldots \delta J(n)} \right \vert_{J=0} \; ,\nonumber \\ G^{(n), T}_{\rho} [\nu](1, \ldots, n) &=& \left. \frac{\delta^{n} \log \Xi^2[\nu,J]} {\delta J(1) \ldots \delta J(n)} \right \vert_{J=0}\; . \end{eqnarray} The simplest way to obtain the relations between the $G^{(n)}_{\rho}$ and the density correlation functions is to start from the definition~(\ref{defcorre}) of $G^{(n)}$.
One has \begin{eqnarray} G^{(n)}(1, \ldots, n)&=& \frac{1}{\Xi[\nu]}\frac{\delta^{n} \;\Xi[\nu]} {\delta \nu(1) \ldots \delta \nu(n)} \; \nonumber \\ &=&\frac{1}{\Xi[\nu]} \int \mathcal{D} \rho \mathcal{D} \omega \; \exp \left(\frac{1}{2} \left\langle\rho \vert w\vert \rho \right\rangle +i \left\langle \omega\vert \rho \right\rangle \right) \frac{\delta^{n} \; \Xi_{\text{HS}}[\overline{\nu} -i \omega]} {\delta \nu(1) \ldots \delta \nu(n)} \nonumber \\ &=&\frac{1}{\Xi[\nu]} \int \mathcal{D} \rho \mathcal{D} \omega \; \exp \left(\frac{1}{2} \left\langle\rho \vert w\vert \rho \right\rangle +i \left\langle \omega\vert \rho \right\rangle \right) \times \nonumber \\ &\times& (i)^n \frac{\delta^{n} \; \Xi_{\text{HS}}[\overline{\nu} -i \omega]} {\delta \omega(1) \ldots \delta \omega(n)} \nonumber \; . \end{eqnarray} Performing now $n$ integrations by parts yields \begin{eqnarray} G^{(n)}(1, \ldots, n)&=&\frac{(-i)^n}{\Xi[\nu]} \int \mathcal{D} \rho \mathcal{D} \omega \; \exp \left(\frac{1}{2} \left\langle\rho \vert w\vert \rho \right\rangle + \ln \Xi_{\text{HS}}[\overline{\nu} -i \omega] \right)\times \nonumber \\ &\times& \frac{\delta^{n} \; \exp\left(i\left\langle \omega \vert \rho \right\rangle \right) } {\delta \omega(1) \ldots \delta \omega(n)} =\left< \prod_{i=1}^{n} \rho \left(i\right) \right>_{\text{CV}} \; . \nonumber \end{eqnarray} We have thus proved the expected result \begin{equation} G^{(n)}\left[\nu \right] (1, \ldots, n)=G^{(n)}_{\rho}\left[\nu \right] (1, \ldots, n) \; , \end{equation} which implies in turn that \begin{equation} \label{dens-CV-rho}G^{(n), T}\left[\nu \right] (1, \ldots, n)=G^{(n),T}_{\rho}\left[\nu \right] (1, \ldots, n) \; .
\end{equation} \subsubsection{Correlation functions $G^{(n)}_{\omega}$} \label{Gnomega} Let us define the modified GC partition function \begin{equation} \label{Xi3} \Xi^3\left[\nu,J \right]= \int \mathcal{D} \rho \mathcal{D} \omega \; \exp \left( - \mathcal{H}_{\text{CV}} \left[\nu, \rho, \omega \right]+\left\langle J \vert \omega \right\rangle \right) \; , \end{equation} where $J$ is a real scalar field. $\Xi^3\left[\nu,J \right]$ is the generator of the $G^{(n)}_{\omega}$ and we thus have \begin{eqnarray} \label{defcorreCVomega} G^{(n)}_{\omega}[\nu](1, \ldots, n) &=& \frac{1}{\Xi[\nu]} \left. \frac{\delta^{n} \;\Xi^3[\nu,J]} {\delta J(1) \ldots \delta J(n)} \right \vert_{J=0} \; ,\nonumber \\ G^{(n), T}_{\omega} [\nu](1, \ldots, n) &=& \left. \frac{\delta^{n} \log \Xi^3[\nu,J]} {\delta J(1) \ldots \delta J(n)} \right \vert_{J=0}\; . \end{eqnarray} In order to relate $G^{(n)}_{\omega}$ to $G^{(n)}$ we perform the change of variable $\rho \rightarrow \rho + i J$ in eq.~(\ref{Xi3}). The functional Jacobian of the transformation is of course equal to unity and one obtains the relation \begin{equation} \label{jojo} \ln \Xi^3\left[\nu,J \right]= -\frac{1}{2} \left\langle J\vert w \vert J \right\rangle + \ln\Xi^2\left[\nu, i w \star J \right] \;, \end{equation} where the star $\star$ means a space convolution and $\Xi^2$ was defined at eq.~(\ref{Xi2}). The idea is to perform now $n$ successive functional derivatives of both sides of eq.~(\ref{jojo}) with respect to $J$. Since it follows from the expression~(\ref{defcorreCVrho}) of $G^{(n), T}_{\rho} $ that \begin{eqnarray} \left. 
\frac{\delta^{n} \log \Xi^2[\nu, i w \star J]} {\delta J(1) \ldots \delta J(n)} \right \vert_{J=0}&=& i^n w(1,1^{'}) \ldots w(n,n^{'})\times \nonumber \\ &\times& G^{(n), T}_{\rho} [\nu](1^{'}, \ldots, n^{'}) \; , \end{eqnarray} one readily obtains \begin{subequations} \label{dens-CV} \begin{eqnarray} \label{dens-CV1} \left\langle \omega (1) \right\rangle_{\text{CV}}&=& i \; w(1,1^{'}) \left\langle \rho (1^{'}) \right\rangle_{\text{CV}} \; , \\ \label{dens-CV2} G^{(2), T}_{\omega}\left[ \nu \right] (1,2) &=&- w(1,2) \nonumber \\ & - &w(1,1^{'})w(2,2^{'}) G^{(2), T}_{\rho}\left[ \nu \right] (1^{'},2^{'}) \; , \\ \label{dens-CVn} G^{(n), T}_{\omega}\left[ \nu \right] (1,\ldots,n)&=& i^n \; w(1,1^{'}) \ldots w(n,n^{'}) \times \nonumber \\ &\times & G^{(n), T}_{\rho}\left[ \nu \right] (1',\ldots,n') \qquad \text{ for } n\geq 3 \; . \end{eqnarray} \end{subequations} A comparison between eqs~(\ref{dens-K}), ~(\ref{dens-CV-rho}), and ~(\ref{dens-CV}) shows that the CV and KSSHE correlation functions are simply related since we have, as expected \begin{equation} G^{(n), T}_{\varphi}\left[ \nu \right] (1,\ldots,n)= (-i)^n \; G^{(n), T}_{\omega}\left[ \nu \right] (1,\ldots,n) \; \qquad \text{ for all } n \; . \end{equation} \section{Mean Field theory} \label{MF} \subsection{\label{MF-KSSHE}KSSHE representation} We summarize previous work on the mean-field (MF) or saddle point approximation of the KSSHE theory \cite{Cai-Mol,Cai-JSP}. At the MF level one has \cite{Zinn} \begin{equation} \Xi_{\text{MF}}\left[\nu \right]=\exp \left(- \mathcal{H}_{\text{K}}\left[ \nu, \varphi_{0} \right] \right) \; , \end{equation} where, for $\varphi=\varphi_{0}$, the action is stationary, i.e. \begin{equation} \label{statio-K}\left. \frac{\delta \; \mathcal{H}_{\text{K}}\left[ \nu, \varphi \right]}{\delta \varphi}\right \vert_{\varphi_{0}}=0 \; .
\end{equation} Replacing the KSSHE action by its expression~(\ref{action-K}) in eq.~(\ref{statio-K}) leads to an implicit equation for $\varphi_{0}$: \begin{equation} \label{statio2-K} \varphi_{0}(1)=w(1,1^{'}) \; \rho_{\text{HS}}\left[\overline{\nu} + \varphi_{0}\right]( 1^{'}) \; , \end{equation} which reduces to \begin{equation} \label{statio2-K-hom} \varphi_{0}=\widetilde{w}(0)\; \rho_{\text{HS}}\left[\overline{\nu} + \varphi_{0}\right] \; \end{equation} for a homogeneous system. It follows from the stationary condition~(\ref{statio-K}) that the MF density is given by \begin{equation} \label{ro-MF} \rho_{\text{MF}}\left[\nu \right] (1)= \frac{\delta \ln \Xi_{\text{MF}}\left[\nu \right]}{\delta \nu(1)}= \rho_{\text{HS}} \left[ \overline{\nu} + \varphi_{0} \right](1) \; , \end{equation} and that the MF grand potential reads \begin{equation} \label{MF-gpot} \ln \Xi_{\text{MF}}\left[\nu \right]= \ln \Xi_{\text{HS}}\left[\overline{\nu} + \varphi_{0}\right] - \frac{1}{2}\left\langle \rho_{\text{MF}}\vert w \vert\rho_{\text{MF}}\right\rangle \; . \end{equation} Moreover, the MF Kohn-Sham free energy defined as the Legendre transform \begin{equation} \beta \mathcal{A}_{\text{MF}}\left[\rho \right] =\sup_{\nu}\left\lbrace \left\langle \rho \vert \nu\right\rangle -\ln \Xi_{\text{MF}}\left[\nu \right] \right\rbrace \end{equation} is found to be \begin{equation} \label{MF-A} \beta \mathcal{A}_{\text{MF}}\left[\rho \right] = \beta \mathcal{A}_{\text{HS}}\left[\rho \right] -\frac{1}{2}\left\langle\rho \vert w \vert \rho \right\rangle +\frac{1}{2} \int_{\Omega} dx \; w(0) \rho(x) \; . \end{equation} It can be shown \cite{Cai-Mol} that $\mathcal{A}_{\text{MF}}\left[\rho \right]$ constitutes a rigorous upper bound for the exact free energy $\mathcal{A}\left[\rho \right]$ if the interaction is attractive ($\widetilde{w}(q)>0$) and a lower bound in the converse case ($\widetilde{w}(q)<0$).
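For a homogeneous system eq.~(\ref{statio2-K-hom}) is a one-dimensional fixed-point problem. A minimal numerical sketch (ours, not part of the original derivation), with an assumed toy reference density $\rho_{\text{HS}}(\xi)=e^{\xi}/(1+e^{\xi})$ — a single-occupancy caricature of the hard sphere fluid — and arbitrary values of $\widetilde{w}(0)$ and $\nu$:

```python
import numpy as np

w0, nu = 0.8, -0.2        # w~(0) and chemical potential (arbitrary values)
nu_bar = nu - 0.5 * w0    # nu + nu_S

def rho_hs(xi):
    # assumed toy reference density: a site holding at most one particle
    return 1.0 / (1.0 + np.exp(-xi))

# Picard iteration of phi_0 = w~(0) * rho_HS(nu_bar + phi_0); the map is a
# contraction here since w0 * max|rho_HS'| = w0/4 < 1
phi0 = 0.0
for _ in range(200):
    phi0 = w0 * rho_hs(nu_bar + phi0)

rho_mf = rho_hs(nu_bar + phi0)       # MF density, eq. (ro-MF)
residual = phi0 - w0 * rho_mf        # should vanish at the saddle point
print(phi0, rho_mf)
```

For stronger couplings the same map can develop several solutions, the usual mean-field signature of a liquid-gas-type instability; the sketch above stays in the contractive regime.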
Finally, the pair correlation and vertex functions at the zero-loop order which are defined respectively as \begin{subequations} \begin{eqnarray} G_{\text{MF}}^{(2), T}\left[\nu \right](1,2)&=& \frac{\delta^{2} \ln \Xi_{\text{MF}}\left[ \nu\right]} {\delta \nu(1) \; \delta \nu(2)} \; , \\ C_{\text{MF}}^{(2)}\left[\rho \right] (1,2)&=& -\frac{\delta^{2} \beta \mathcal{A}_{\text{MF}}\left[\rho\right]} {\delta \rho(1) \; \delta \rho(2)} \; , \end{eqnarray} \end{subequations} are given by \begin{subequations} \begin{eqnarray} G_{\text{MF}}^{(2), T}(1,2)&=& \left(1 -w \cdot G^{(2), T}_{\text{HS}}\left[\overline{\nu} + \varphi_{0} \right] \right)^{-1} \cdot G^{(2), T}_{\text{HS}}\left[\overline{\nu} + \varphi_{0} \right]\left(1,2 \right) \\ \label{zozo} C_{\text{MF}}^{(2)}(1,2)&=&-G_{\text{MF}}^{(2), T \, \, -1}(1,2) = C_{\text{HS}}(1,2) + w(1,2) \; . \end{eqnarray} \end{subequations} It follows then from eq.~(\ref{dens-K2}) that we have \begin{equation} \label{propa} G_{\varphi, \;\text{MF}}^{(2), T}(1,2)= \left(1 -w \cdot G^{(2), \; T}_{\text{HS}}\left[\overline{\nu} + \varphi_{0} \right]\right)^{-1} \cdot w(1,2) \; . \end{equation} \subsection{\label{MF-CV}CV representation} An analysis similar to that of Sec.~\ref{MF-KSSHE} can be made in the CV representation. The MF level of the CV field theory will be defined by \begin{equation} \Xi_{\text{MF}}\left[\nu \right]=\exp \left(- \mathcal{H}_{\text{CV}}\left[ \nu, \rho_{0}, \omega_{0} \right] \right) \; , \end{equation} where, for $\rho=\rho_{0}$ and $\omega=\omega_{0}$ the CV action is stationary, i.e. \begin{equation} \label{statio-CV}\left. \frac{\delta \; \mathcal{H}_{\text{CV}}\left[ \nu, \rho, \omega \right]}{\delta \rho}\right \vert_{(\rho_0,\omega_{0})} =\left. \frac{\delta \; \mathcal{H}_{\text{CV}}\left[ \nu, \rho, \omega \right]}{\delta \omega}\right \vert_{(\rho_0,\omega_{0})} =0 \; .
\end{equation} Replacing the CV action by its expression~(\ref{actionCV}) in eq.~(\ref{statio-CV}) yields a set of two coupled implicit equations for $\rho_{0}$ and $\omega_{0}$: \begin{eqnarray} 0&=& w(1,2)\rho_{0}(2) + i \omega_{0}(1) \nonumber \; , \\ 0&=& \rho_{0}(1) - \rho_{\text{HS}}\left[ \overline{\nu} -i\omega_{0} \right](1) \; . \end{eqnarray} If we define $\varphi_{0}= -i \omega_{0}$, then the two previous equations can be rewritten \begin{eqnarray} \varphi_{0}(1)&=& w(1,2) \rho_{0}(2) \nonumber \; , \\ \rho_{0}(1)&=& \rho_{\text{HS}}\left[ \overline{\nu} + \varphi_{0}\right](1) \; , \end{eqnarray} which shows that, as expected, $\varphi_{0}$ coincides with the saddle point of the KSSHE field theory (cf. Sec.~\ref{MF-KSSHE}). Moreover a direct calculation shows that \begin{equation} \ln \Xi_{\text{MF}}\left[\nu \right]= -\mathcal{H}_{\text{K}}\left[\nu, \varphi_{0} \right] = -\mathcal{H}_{\text{CV}}\left[\nu, \rho_{0},\omega_{0} \right] \; . \end{equation} Therefore the local density, the grand potential, the free energy, the correlation and vertex functions coincide at the MF level in the CV and KSSHE field theories. \section{Loop expansion of the grand potential} \label{loop} A one-loop expression for the grand potential $\ln \Xi\left[\nu \right] $ and the free energy $\beta \mathcal{A}\left[\rho \right] $ has been obtained in \cite{Cai-Mol} in the framework of the KSSHE field theory. Here we perform a two-loop expansion for $\ln \Xi$. The derivation will be made both in the KSSHE and CV representations. \subsection{\label{loop-KSSHE} KSSHE representation} The loop expansion of the logarithm of the partition function of a scalar field theory can be found in any standard textbook, see e.g. that of Zinn-Justin \cite{Zinn}. One proceeds as follows.
A small dimensionless parameter $\lambda$ is introduced and the loop expansion is obtained by the set of transformations \begin{eqnarray} \varphi &=&\varphi_{0} + \lambda^{1/2} \chi \; ,\nonumber \\ \ln \Xi\left[ \nu \right] &=& \lambda \ln \left\lbrace \mathcal{N}_{w}^{-1} \int \mathcal{D}\chi \; \exp \left( - \frac{\mathcal{H}_{\text{K}}\left[\nu,\varphi \right]}{\lambda} \right) \right\rbrace \; , \nonumber \\ &=&\ln \Xi^{(0)}\left[ \nu \right] + \lambda \ln \Xi^{(1)}\left[ \nu \right]+ \lambda^{2} \ln \Xi^{(2)}\left[ \nu \right] + \mathcal{O}(\lambda^{3}) \; , \end{eqnarray} where $\varphi_{0}$ is the saddle point of the KSSHE action. At the end of the calculation one sets $\lambda=1$. It follows from the stationary condition~(\ref{statio-K}) that a functional Taylor expansion of $\mathcal{H}_{\text{K}}\left[\nu,\varphi \right]$ about the saddle point has the form \begin{eqnarray} \label{loop-K} \frac{\mathcal{H}_{\text{K}}\left[\nu,\varphi \right]}{\lambda} &=& \frac{\mathcal{H}_{\text{K}}\left[\nu,\varphi_{0} \right]}{\lambda} + \frac{1}{2} \left\langle \chi \vert \Delta_{\varphi_{0}}^{-1}\vert\chi \right\rangle \nonumber \\ &+& \sum_{n=3}^{\infty} \frac{\lambda^{\frac{n}{2} -1}}{n !} \int_{\Omega} d1 \ldots dn \; \mathcal{H}^{(n)}_{\varphi_{0}}(1, \ldots,n) \chi(1) \ldots \chi(n), \end{eqnarray} where we have adopted Zinn-Justin's notations \cite{Zinn}. In eq.~(\ref{loop-K}) \begin{equation} \label{propa-2} \Delta_{\varphi_{0}}^{-1}(1,2)= w^{-1}(1,2) -G_{\text{HS}}^{(2), \; T}\left[ \overline{\nu} +\varphi_{0}\right] (1,2) \; . \end{equation} $\Delta_{\varphi_{0}}(1,2)$ is the free propagator of the theory; it coincides with $G_{\varphi, \;\text{MF}}^{(2), T}(1,2)$ as can be seen by comparing eqs.~(\ref{propa}) and~(\ref{propa-2}). If $\nu$ is uniform then the system is homogeneous and $\Delta_{\varphi_{0}}(1,2)$ takes on a simple form in Fourier space, i.e.
\begin{equation} \widetilde{\Delta}_{\varphi_{0}}(q)=\frac{\widetilde{w}(q)} {1- \widetilde{w}(q) \widetilde{G}_{\text{HS}}^{(2), \; T}\left[ \overline{\nu} +\varphi_{0}\right](q) } \; . \end{equation} Finally the kernels $\mathcal{H}^{(n)}_{\varphi_{0}}$ in the r.h.s. of eq.~(\ref{loop-K}) are given by \begin{equation} \label{kernel} \mathcal{H}^{(n)}_{\varphi_{0}}(1, \ldots,n) \equiv - G_{\text{HS}}^{(n), \; T}\left[ \overline{\nu} +\varphi_{0}\right] (1, \ldots,n) \; . \end{equation} The expression of $\ln \Xi^{(n)}\left[ \nu \right]$ in terms of the propagator $\Delta_{\varphi_{0}}(1,2)$ and the vertex interactions $\mathcal{H}^{(n)}_{\varphi_{0}}$ is obtained by means of a cumulant expansion of $\ln \Xi\left[ \nu \right]$ and by making a repeated use of Wick's theorem. Of course $ \Xi^{(0)}\left[ \nu \right]\equiv \Xi_{\text{MF}}\left[ \nu \right]$. At the one-loop order one finds \cite{Cai-Mol} \begin{equation} \label{Xi1-K} \Xi^{(1)}\left[ \nu \right]= \frac{\mathcal{N}_{\Delta_{\varphi_{0}}}}{\mathcal{N}_{w}}=\frac{\int \mathcal{D}\varphi \; \exp \left( -\frac{1}{2}\left \langle \varphi \vert \Delta_{\varphi_{0}}^{-1}\vert \varphi \right \rangle \right)}{\int \mathcal{D}\varphi \; \exp \left( -\frac{1}{2}\left \langle \varphi \vert w^{-1} \vert \varphi \right \rangle \right) } \; . \end{equation} For a homogeneous system the Gaussian integrals in the r.h.s. of eq.~(\ref{Xi1-K}) can be performed explicitly (cf. eqs.~(\ref{Gauss}) of Appendix A) and one has \begin{equation} \label{Xi1-K-bis} \ln \Xi^{(1)}\left[ \nu \right]= -\frac{V}{2} \int_{\mathbf{q}} \ln \left(1 - \widetilde{w}(q) \widetilde{G}_{\text{HS}}^{(2), \; T}\left[ \overline{\nu} +\varphi_{0} \right] (q) \right) \; . \end{equation} As is shown in detail in \cite{Cai-Mol} the one-loop approximation coincides with the well-known RPA approximation of the theory of liquids \cite{Hansen}.
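The one-loop integral~(\ref{Xi1-K-bis}) is straightforward to evaluate numerically once $\widetilde{w}(q)$ and $\widetilde{G}^{(2),T}_{\text{HS}}(q)$ are specified. The sketch below is ours and uses crude stand-ins — a Gaussian $\widetilde{w}(q)$ and the ideal-gas-like approximation $\widetilde{G}^{(2),T}_{\text{HS}}(q)\approx\rho$ — merely to exhibit the structure of the RPA correction:

```python
import numpy as np

def trapz(y, x):
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

w0, sigma = 0.8, 1.0                                  # amplitude and range (illustrative)
wt = lambda q: w0 * np.exp(-0.5 * (q * sigma)**2)     # assumed attractive transform, wt(q) > 0

def ln_xi1_per_V(rho):
    # one-loop (RPA) grand potential per unit volume, eq. (Xi1-K-bis), with the
    # crude approximation G~(2),T_HS(q) ~ rho of an ideal-gas-like reference
    q = np.linspace(1e-6, 20.0, 200001)
    integrand = q**2 * np.log(1.0 - rho * wt(q))      # rho*wt(q) < 1: log well defined
    # int_q = (1/(2 pi)^3) int 4 pi q^2 dq, times the overall prefactor -(1/2)
    return -trapz(integrand, q) / (4.0 * np.pi**2)

print(ln_xi1_per_V(0.25), ln_xi1_per_V(0.5))
```

For an attractive interaction each mode contributes $\ln(1-\rho\widetilde{w})<0$, so the one-loop correction to $\ln\Xi$ is positive and grows with density, as the quadrature confirms.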
The two-loop contribution $\ln \Xi^{(2)}$ to the grand potential has a complicated expression involving the sum of the three diagrams sketched in fig.~\ref{diag} \begin{equation} \label{Xi2-K} \ln \Xi^{(2)}\left[ \nu \right]= D_a +D_b +D_c \; . \end{equation} \begin{figure}[!ht] \begin{center} \epsfig{figure=fig_st2739.eps,width=8cm,clip=} \caption{\label{diag} Diagrams which contribute to $ \ln \Xi^{(2)}\left[ \nu \right]$. $D_a$ and $D_c$ are irreducible while $D_b$ is reducible.} \end{center} \end{figure} More explicitly one has \cite{Zinn} \begin{eqnarray} \label{D} D_a&=& \frac{1}{8} \int_{\Omega} d1 \ldots d4 \; \Delta_{\varphi_{0}}(1,2) \Delta_{\varphi_{0}}(3,4) G_{\text{HS}}^{(4), \; T}\left[ \overline{\nu} +\varphi_{0} \right] (1,2,3,4) \nonumber \\ D_b&=& \frac{1}{8} \int_{\Omega} d1 \ldots d3 \; d1^{'} \ldots d3^{'}\; \Delta_{\varphi_{0}}(1,2) \Delta_{\varphi_{0}}(1^{'},2^{'}) \Delta_{\varphi_{0}}(3,3^{'}) \nonumber \\ & \times & G_{\text{HS}}^{(3), \; T}\left[ \overline{\nu} +\varphi_{0} \right] (1,2,3) G_{\text{HS}}^{(3), \; T}\left[ \overline{\nu} +\varphi_{0} \right] (1^{'},2^{'},3^{'}) \nonumber \\ D_c&=& \frac{1}{12} \int_{\Omega} d1 \ldots d3 \; d1^{'} \ldots d3^{'}\; \Delta_{\varphi_{0}}(1,1^{'}) \Delta_{\varphi_{0}}(2,2^{'}) \Delta_{\varphi_{0}}(3,3^{'}) \nonumber \\ & \times & G_{\text{HS}}^{(3), \; T}\left[ \overline{\nu} +\varphi_{0} \right] (1,2,3) G_{\text{HS}}^{(3), \; T}\left[ \overline{\nu} +\varphi_{0} \right] (1^{'},2^{'},3^{'}) \; . \end{eqnarray} As they stand, the above relations are not particularly useful for practical applications (even in the homogeneous case) since they involve the three- and four-body correlation functions of the reference HS fluid. We will introduce some reasonable approximations in Sec.~\ref{free} to deal with this unwieldy expression.
Quite remarkably it has been shown recently that for a symmetric mixture of charged hard spheres $\ln \Xi^{(2)}$ has a much simpler expression which involves only the pair correlation functions of the HS fluid as a consequence of local charge neutrality \cite{Cai-JSP,Cai-Mol1}. \subsection{\label{sec-loop-CV} CV representation} In order to obtain a loop expansion of $\ln \Xi$ in the CV representation we consider the following set of transformations \begin{subequations} \begin{eqnarray} \label{aa}\rho &=&\rho_{0} + \lambda^{1/2} \delta \rho \; , \\ \label{bb}\omega &=&\omega_{0} + \lambda^{1/2} \delta \omega \; , \\ \label{cc}\ln \Xi\left[ \nu \right] &=& \lambda \ln \left\lbrace \int \mathcal{D}\delta\rho \; \mathcal{D}\delta\omega \; \exp \left( - \frac{\mathcal{H}_{\text{CV}}\left[\nu,\rho, \omega \right]}{\lambda} \right) \right\rbrace \; , \end{eqnarray} \end{subequations} where $(\rho_{0},\omega_{0})$ is the saddle point of the CV action. The form retained in eqs~(\ref{aa}) and~(\ref{bb}) is imposed by the exact relation $\left\langle \omega \right\rangle_{\text{CV}}= i w \star \left\langle \rho \right\rangle_{\text{CV}} $ derived in Sec.~\ref{Gnomega}.
It follows from the stationary condition~(\ref{statio-CV}) that the functional Taylor expansion of $\mathcal{H}_{\text{CV}}\left[\nu,\rho,\omega \right]$ about the saddle point reads as \begin{eqnarray} \label{loop-CV} \frac{\mathcal{H}_{\text{CV}}\left[\nu,\rho, \omega \right]}{\lambda}& =& \frac{\mathcal{H}_{\text{CV}}\left[\nu,\rho_{0} ,\omega_{0} \right]}{\lambda} - \frac{1}{2} \left\langle \delta \rho \vert w \vert \delta \rho \right\rangle -i \left\langle \delta \omega \vert \delta \rho\right\rangle \nonumber \\ & +& \frac{1}{2}\left\langle \delta \omega \left\vert G_{\text{HS}}^{(2), \;T}\left[\overline{\nu}-i \omega_{0} \right] \right \vert\delta \omega \right\rangle -\delta\mathcal{H}\left[\delta \omega \right] \; , \end{eqnarray} where \begin{equation} \delta \mathcal{H}\left[\delta \omega \right]= - \sum_{n=3}^{\infty} \frac{ (-i)^{n}}{n !} \lambda^{\frac{n}{2} -1} \int_{\Omega} d1 \ldots dn \; \mathcal{H}^{(n)}_{\varphi_{0}}(1, \ldots,n) \delta \omega (1) \ldots \delta \omega (n) \; , \end{equation} since, as $\varphi_{0}=-i \omega_{0}$, we have indeed $G_{\text{HS}}^{(n), \;T}\left[\overline{\nu}-i \omega_{0} \right]= - \mathcal{H}^{(n)}_{\varphi_{0}}$ where the kernel $\mathcal{H}^{(n)}_{\varphi_{0}}$ is precisely that defined in eq.~(\ref{kernel}).
We are thus led to define a two-field Gaussian Hamiltonian \begin{equation} \mathcal{H}_{\text{G}}\left[\rho,\omega \right] \equiv -\frac{1}{2}\left\langle \rho \vert w \vert \rho \right\rangle -i\left\langle\omega \vert \rho \right\rangle + \frac{1}{2}\left\langle \omega \vert G^{(2), \; T}_{\text{HS}}\left[ \overline{\nu} -i \omega_{0} \right] \vert \omega \right\rangle \; , \end{equation} and Gaussian averages \begin{equation} \label{Gaverage}\left\langle \ldots \right\rangle_{\text{G}} \equiv \mathcal{N}_{\text{G}}^{-1} \int \mathcal{D}\rho \mathcal{D}\omega \; \ldots \exp\left(- \mathcal{H}_{\text{G}}\left[\rho,\omega \right] \right) \; , \end{equation} where the normalization constant $\mathcal{N}_{\text{G}}$ is given by \begin{equation} \label{normaG}\mathcal{N}_{\text{G}}=\int \mathcal{D}\rho \mathcal{D}\omega \; \exp\left(- \mathcal{H}_{\text{G}}\left[\rho,\omega \right] \right) \: . \end{equation} Note that if $w>0$ (attractive case) eq.~(\ref{Gaverage}) makes sense only if the integration over the field variable $\omega$ is performed first. With these notations in hand we can rewrite eq.~(\ref{cc}) as \begin{equation} \ln \Xi \left[ \nu \right]= \ln \Xi_{\text{MF}} \left[ \nu \right] + \lambda \ln \mathcal{N}_{\text{G}} + \lambda \ln \left\langle e^{\delta \mathcal{H}\left[ \omega \right]}\right\rangle_{\text{G}} \; . \end{equation} A cumulant expansion of the last term on the r.h.s. yields the $\lambda$ expansion \begin{equation} \label{cumu}\ln \Xi = \ln \Xi_{\text{MF}} + \lambda \ln \overline{\Xi}^{(1)}+ \lambda \left\lbrace \left\langle \delta \mathcal{H} \right\rangle_{\text{G}} + \frac{1}{2}\left( \left\langle \delta \mathcal{H}^{2} \right\rangle_{\text{G}} -\left\langle \delta \mathcal{H} \right\rangle_{\text{G}}^{2}\right) \right\rbrace + \mathcal{O}(\lambda^{3}) \; , \end{equation} where \begin{equation} \overline{\Xi}^{(1)}\left[ \nu \right]=\mathcal{N}_{\text{G}} \; , \end{equation} and it will be shown that the third term on the r.h.s.
of eq.~(\ref{cumu}) is of order $\mathcal{O}(\lambda^{2})$. \subsubsection{One-loop approximation} We want to prove that the CV and KSSHE one-loop approximations for $ \Xi$, respectively $\overline{\Xi}^{(1)}$ and $\Xi^{(1)}$, coincide. Let us first rewrite the formula~(\ref{Xi1-K}) for $\Xi^{(1)}$ as the Gaussian average \begin{equation} \label{xx}\Xi^{(1)}\left[\nu \right]= \left\langle \exp \left( \frac{1}{2} \left\langle \varphi \vert G_{\text{HS}}^{(2),\;T} \vert \varphi \right\rangle \right) \right\rangle_{w} \; . \end{equation} Now we focus on $\overline{\Xi}^{(1)}\left[ \nu \right]=\mathcal{N}_{\text{G}}$. Performing the (ordinary Gaussian) integration over $\omega$ first in eq.~(\ref{normaG}) we find \begin{eqnarray} \overline{\Xi}^{(1)}\left[\nu \right]&=& \int \mathcal{D}\rho \; \exp\left( \frac{1}{2} \left\langle \rho \vert w \vert \rho \right\rangle \right) \int \mathcal{D}\omega\; \exp\left( i \left\langle \omega \vert \rho \right\rangle -\frac{1}{2} \left\langle \omega \vert G_{\text{HS}}^{(2),\;T} \vert \omega \right\rangle \right) \nonumber \\ &=& \mathcal{N}_{\left[ G_{\text{HS}}^{(2),\;T}\right]^{-1}} \; \int \mathcal{D}\rho \; \exp\left(- \frac{1}{2} \left\langle \rho \vert \left[ G_{\text{HS}}^{(2),\;T}\right]^{-1} - w \vert \rho \right\rangle \right) \; . 
\end{eqnarray} Now we make use of the two following properties of Gaussian integrals \begin{subequations} \begin{eqnarray} \label{propria} \mathcal{N}_{a}&=& 1/\mathcal{N}_{a^{-1}} \\ \label{proprib} \left\langle \exp \left( \frac{1}{2}\left\langle \varphi \vert b \vert \varphi \right\rangle \right) \right\rangle_{a} &=&\left\langle \exp \left( \frac{1}{2}\left\langle \varphi \vert a \vert \varphi \right\rangle \right) \right\rangle_{b} \; , \end{eqnarray} \end{subequations} to rewrite \begin{eqnarray} \overline{\Xi}^{(1)}\left[\nu \right]&=&\frac{ \int \mathcal{D}\rho \; \exp\left(- \frac{1}{2} \left\langle \rho \vert \left[ G_{\text{HS}}^{(2),\;T}\right]^{-1} - w \vert \rho \right\rangle \right) }{\int \mathcal{D}\rho \; \exp\left(- \frac{1}{2} \left\langle \rho \vert \left[ G_{\text{HS}}^{(2),\;T}\right]^{-1} \vert \rho \right\rangle \right)} \nonumber \\ &=& \left\langle \exp\left( \frac{1}{2} \left\langle \rho \vert w \vert \rho \right\rangle \right)\right\rangle _{G_{\text{HS}}^{(2),\;T}} \;. \end{eqnarray} It then follows readily from eqs.~(\ref{xx}) and~(\ref{proprib}) that $\overline{\Xi}^{(1)}= \Xi^{(1)}$. The two identities~(\ref{propria}) and~(\ref{proprib}) are easy to show in the homogeneous case, where they are an immediate consequence of the properties of the Gaussian integral~(\ref{Gauss}). In the general, non-homogeneous case, they are derived in Appendix~B. \subsubsection{Two-loop approximation} At the two-loop level we have to compute the averages $\left\langle \delta \mathcal{H}\left[\omega \right] \right\rangle_{\text{G}}$ and $\left\langle \delta \mathcal{H}\left[\omega \right]^{2} \right\rangle_{\text{G}}$. In order to do so we first have to generalize Wick's theorem to the special kind of Gaussian average defined in eq.~(\ref{Gaverage}). Let us first have a look at the correlations of the field $\rho$.
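As an aside, the two identities~(\ref{propria}) and~(\ref{proprib}) can be checked numerically in finite dimensions. The Python sketch below reads the subscripts as Gaussian covariance kernels (an assumption about the notation): identity~(\ref{proprib}) then takes the determinant form $\langle \exp(\frac{1}{2}\langle\varphi|b|\varphi\rangle)\rangle_{a}=\det(1-ab)^{-1/2}$, which is manifestly symmetric under $a\leftrightarrow b$; a one-dimensional quadrature confirms the determinant formula itself. All matrix sizes and scales are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Matrix form of identity (b): with covariance a (assumed meaning of the
# subscript), <exp(1/2 phi^T b phi)>_a = det(1 - a b)^(-1/2), symmetric in
# a and b because det(1 - ab) = det(1 - ba).
n = 4
m1 = rng.standard_normal((n, n)); a = 0.1 * (m1 @ m1.T) / n + 0.1 * np.eye(n)
m2 = rng.standard_normal((n, n)); b = 0.1 * (m2 @ m2.T) / n + 0.1 * np.eye(n)
lhs = np.linalg.det(np.eye(n) - a @ b) ** -0.5
rhs = np.linalg.det(np.eye(n) - b @ a) ** -0.5

# One-dimensional check by direct quadrature: weight exp(-phi^2/(2 a0)),
# average of exp(b0 phi^2 / 2); convergence requires a0 * b0 < 1.
a0, b0 = 0.7, 0.5
phi = np.linspace(-30.0, 30.0, 200001)
num = np.exp(-0.5 * (1.0 / a0 - b0) * phi**2).sum()
den = np.exp(-0.5 * phi**2 / a0).sum()
avg = num / den                       # the common grid spacing cancels
exact = (1.0 - a0 * b0) ** -0.5
```

The determinant identity $\det(1-ab)=\det(1-ba)$ is what makes the average symmetric in the two kernels, exactly as exploited in the text.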
One has from eq.~(\ref{Gaverage}) \begin{eqnarray} \left\langle \rho(1) \ldots \rho(n) \right\rangle_{\text{G}} &=& \frac{ \int \mathcal{D}\rho \mathcal{D}\omega \; \rho(1) \ldots \rho(n) \exp\left(- \mathcal{H}_{\text{G}}\left[\rho,\omega \right] \right)} {\int \mathcal{D}\rho \mathcal{D}\omega \; \exp\left(- \mathcal{H}_{\text{G}}\left[\rho,\omega \right] \right) } \nonumber \\ &=& \frac{ \int \mathcal{D}\rho \; \rho(1) \ldots \rho(n) \exp\left( -\frac{1}{2} \left\langle \rho \left \vert \left[ G_{\text{HS}}^{(2),\;T}\right]^{-1} -w \right \vert \rho \right\rangle \right)} {\int \mathcal{D}\rho \; \exp\left(- \frac{1}{2} \left\langle \rho \left \vert \left[ G_{\text{HS}}^{(2),\;T} \right]^{-1} -w \right \vert \rho \right\rangle \right) } \nonumber\; , \end{eqnarray} where we have performed the (ordinary) Gaussian integral on the field $\omega$ first. We are thus led to an ordinary Gaussian average (provided that $\left[ G_{\text{HS}}^{(2),\;T}\right]^{-1} -w$ is positive definite, which we assume), so the usual Wick theorem applies and yields \begin{eqnarray} \left\langle \rho(1) \right\rangle_{\text{G}}^{T} &=& 0 \nonumber \\ \left\langle \rho(1)\rho(2) \right\rangle_{\text{G}}^{T} &=& \left( \left[ G_{\text{HS}}^{(2),\;T} \right]^{-1} -w\right)^{-1}=G_{\text{MF}}^{(2), \; T}(1,2) \nonumber \\ \left\langle \rho(1) \ldots \rho(n) \right\rangle_{\text{G}}^{T}&=& 0 \; (\text{for } n\geq 3) \; .
\end{eqnarray} We now take advantage of the general relations~(\ref{dens-CV}) between the truncated correlations of $\rho$ and $\omega$ in the CV formalism to get \begin{eqnarray} \left\langle \omega(1) \right\rangle_{\text{G}}^{T} &=& 0 \nonumber \\ \left\langle \omega(1)\omega(2) \right\rangle_{\text{G}}^{T} &=& -w(1,1^{'}) w(2,2^{'}) G_{\text{MF}}^{(2), \; T}(1^{'},2^{'}) =-\Delta_{\varphi_{0}}(1,2) \nonumber \\ \left\langle \omega(1) \ldots \omega(n) \right\rangle_{\text{G}}^{T}&=& 0 \; (\text{for } n\geq 3) \; , \end{eqnarray} from which Wick's theorem follows \begin{eqnarray} \label{WickG} \left\langle \omega(1) \ldots \omega(n) \right\rangle_{\text{G}}&=& 0 \; (n \text{ odd }) \; , \nonumber \\ \left\langle \omega(1) \ldots \omega(n) \right\rangle_{\text{G}}&=& (-1)^{n}\sum_{\text{pairs}}\prod \Delta_{\varphi_{0}}(i_{1},i_{2})\; (n \text{ even }) \; , \end{eqnarray} an expected result in view of the correspondence $\varphi_{K} \leftrightarrow i \omega_{CV}$. We have now at our disposal all the tools to compute \begin{eqnarray} \label{morceau1}\left\langle \delta \mathcal{H}\left[\omega \right] \right\rangle_{\text{G}}&=& \frac{(-i)^{4}}{4!} \lambda \int d1 \ldots d4 \; G_{\text{HS}}^{(4)}\left[\overline{\nu}+\varphi_{0} \right] (1,\ldots 4) \nonumber \\ &\times& \left\langle \omega(1) \ldots \omega(4) \right\rangle_{\text{G}} + \mathcal{O}(\lambda^{2}) \nonumber \\ &=&\frac{\lambda}{8} \int d1 \ldots d4 \;G_{\text{HS}}^{(4)}\left[\overline{\nu}+\varphi_{0} \right] (1,\ldots 4) \nonumber \\ &\times & \Delta_{\varphi_{0}}(1,2)\Delta_{\varphi_{0}}(3,4) +\mathcal{O}(\lambda^{2}) \; , \end{eqnarray} where we have made use of Wick's theorem~(\ref{WickG}). We note that $\left\langle \delta \mathcal{H}\left[\omega \right] \right\rangle_{\text{G}}^{2}$ will not contribute to $\ln \Xi$ at the two-loop order and it remains to compute $\left\langle \delta \mathcal{H}^{2}\left[\omega \right] \right\rangle_{\text{G}}$.
One finds \begin{eqnarray} \label{morceau2} \left\langle \delta \mathcal{H}^{2}\left[\omega \right] \right\rangle_{\text{G}}&=& \frac{\lambda}{(3!)^{2}} \int d1 \ldots d3^{'} G_{\text{HS}}^{(3)}\left[\overline{\nu}+\varphi_{0} \right] (1,2,3) G_{\text{HS}}^{(3)}\left[\overline{\nu}+\varphi_{0} \right] (1^{'},2^{'},3^{'}) \nonumber \\ &\times&\left\langle \omega(1) \ldots \omega(3^{'}) \right\rangle_{\text{G}} + \mathcal{O}(\lambda^{2}) \nonumber \\ &=& \lambda \int d1 \ldots d3^{'} G_{\text{HS}}^{(3)}\left[\overline{\nu}+\varphi_{0} \right] (1,2,3) G_{\text{HS}}^{(3)}\left[\overline{\nu}+\varphi_{0} \right] (1^{'},2^{'},3^{'})\nonumber \\ &\times &\left\lbrace \frac{1}{4}\Delta_{\varphi_{0}}(1,2) \Delta_{\varphi_{0}}(1^{'},2^{'})\Delta_{\varphi_{0}}(3,3^{'})\right. \nonumber \\ &+& \left. \frac{1}{6}\Delta_{\varphi_{0}}(1,1^{'}) \Delta_{\varphi_{0}}(2,2^{'})\Delta_{\varphi_{0}}(3,3^{'})\right\rbrace + \mathcal{O}(\lambda^{2}) \; , \end{eqnarray} where once again Wick's theorem~(\ref{WickG}) was used. Gathering the intermediate results~(\ref{morceau1}) and~(\ref{morceau2}) one has finally, after inspection \begin{equation} \lambda \left\lbrace \left\langle \delta \mathcal{H} \right\rangle_{\text{G}} + \frac{1}{2}\left( \left\langle \delta \mathcal{H}^{2} \right\rangle_{\text{G}} -\left\langle \delta \mathcal{H} \right\rangle_{\text{G}}^{2}\right) \right\rbrace = \lambda^{2} \;\ln \overline{\Xi}^{(2)}\left[\nu \right] + \mathcal{O}(\lambda^{3}) \end{equation} with \begin{equation} \overline{\Xi}^{(2)}\left[\nu \right]=\Xi^{(2)}\left[\nu \right] \; . \end{equation} We have thus shown that the one and two-loop expressions for $\ln \Xi\left[\nu \right]$ coincide in the KSSHE and CV representations. This coincidence is likely to be true at all orders in the loop-expansion. 
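The symmetry factors $1/8$, $1/4$ and $1/6$ appearing in eqs.~(\ref{morceau1}) and~(\ref{morceau2}) follow from counting Wick pairings, the kernels being fully symmetric. A short enumeration (Python sketch) reproduces them:

```python
from fractions import Fraction

def matchings(elems):
    """Enumerate all perfect matchings (Wick pairings) of a list of labels."""
    if not elems:
        yield []
        return
    first, rest = elems[0], elems[1:]
    for i, partner in enumerate(rest):
        for m in matchings(rest[:i] + rest[i + 1:]):
            yield [(first, partner)] + m

# <omega(1)...omega(4)>: 3 pairings; with a fully symmetric G^(4) kernel each
# contributes equally, giving 3/4! = 1/8, the coefficient in eq. (morceau1).
four = list(matchings([1, 2, 3, 4]))
c_single = Fraction(len(four), 24)

# <delta H^2>: two 3-point vertices L and R; classify the 15 pairings of the
# six legs by the number of L-R propagators (1 -> "dumbbell", 3 -> "sunset").
legs = [('L', 1), ('L', 2), ('L', 3), ('R', 1), ('R', 2), ('R', 3)]
counts = {}
for m in matchings(legs):
    cross = sum(1 for (x, y) in m if x[0] != y[0])
    counts[cross] = counts.get(cross, 0) + 1
c_dumbbell = Fraction(counts[1], 36)   # 9/(3!)^2 = 1/4
c_sunset = Fraction(counts[3], 36)     # 6/(3!)^2 = 1/6
```

The two topologies with coefficients $1/4$ and $1/6$ are exactly the two terms in braces in eq.~(\ref{morceau2}).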
\section{The pressure and the free energy of the homogeneous fluid} \label{free} \subsection{The pressure} In this section we restrict ourselves to the homogeneous case, in which $\ln \Xi\left[\nu \right]=V \beta P\left(\nu \right)$, where $P$ denotes the pressure, and $\beta \mathcal{A}\left[\rho \right]= V \beta f(\rho)$, where $f$ is the Helmholtz free energy per unit volume. The two-loop expression that we derived for $P$ in Sec.~\ref{loop} is too complicated to be of any use in practical calculations since it involves the three- and four-body density correlation functions of the HS fluid, which are unknown, whereas $G_{\text{HS}}^{(2), \; T}$ is known approximately, for instance in the Percus-Yevick (PY) approximation \cite{Hansen}. Simple but coherent approximations for $G_{\text{HS}}^{(3), \; T}$ and $G_{\text{HS}}^{(4), \; T}$ will now be proposed. Recall first that it follows from their definitions \cite{Stell1,Stell2} (see e.g. eqs~(\ref{defcorre})) that the $G_{\text{HS}}^{(n), \; T}\left[\nu \right] $ satisfy the following relations \begin{equation} \label{h1}\frac{\delta }{\delta \nu(n+1)} G_{\text{HS}}^{(n), \; T}\left[\nu \right](1, \ldots,n)=G_{\text{HS}}^{(n+1), \; T}\left[\nu \right](1, \ldots,n,n+1) \; . \end{equation} For a homogeneous system (in which case $\nu$ is a constant) one infers from this equation that \begin{equation} \label{h2}\int_{\Omega}d1 \ldots dn \; G_{\text{HS}}^{(n+1), \; T}\left[\nu \right](1, \ldots,n,n+1)= \frac{\partial^{n}}{\partial \nu^{n}}\rho_{\text{HS}}\left(\nu \right)\equiv \rho_{\text{HS}}^{(n)}\left(\nu \right) \; , \end{equation} where $\rho_{\text{HS}}\left(\nu \right)$ is the number density of hard spheres at the chemical potential $\nu$. In the rest of the section we will adopt the following approximation \begin{equation} \label{hyp} G_{\text{HS}}^{(n), \; T}\left[\nu \right](1, \ldots,n)= \rho_{\text{HS}}^{(n+1)}\left(\nu \right)\delta(n,1) \ldots \delta(2,1) \; \text{for n} \geq 3 \; .
\end{equation} Note that this hypothesis is consistent with the exact relations~(\ref{h1}) and~(\ref{h2}). However, for $n=2$, we adopt for $G_{\text{HS}}^{(2), \; T}\left[\nu \right](1,2)$ some known approximation of the theory of liquids (the PY approximation for instance). The free propagator then has a non-trivial expression. In Fourier space it reads \begin{equation} \widetilde{\Delta}_{\varphi_{0}}(q)=\frac{\widetilde{w}(q)}{ 1-\widetilde{w}(q) \widetilde{G}_{\text{HS}}^{(2), \; T}\left[\overline{\nu}+\varphi_{0}\right](q) }, \end{equation} where \begin{equation} \widetilde{G}_{\text{HS}}^{(2), \; T}(q)= \int d^{3}x \; e^{-i q x} G_{\text{HS}}^{(2), \; T}(x) \end{equation} is the Fourier transform of $G_{\text{HS}}^{(2), \; T}\left[\nu \right](x)$. The set of approximations that we have just discussed is reasonable as long as the range of the KSSHE field correlation functions is (much) larger than the range of the HS density correlation functions. This is true if $w$ is a long-range pair interaction or, more generally, in the vicinity of the critical point. With the hypothesis~(\ref{hyp}) it is easy to obtain the two-loop order approximation for the pressure.
One finds \begin{eqnarray} \label{press}\beta P (\nu) &=&\beta P^{(0)}(\nu) + \lambda \beta P^{(1)}(\nu)+ \lambda^{2} \beta P^{(2)}(\nu)+\mathcal{O}(\lambda^{3}) \; ,\nonumber \\ \beta P^{(0)}(\nu)&=&\beta P_{\text{MF}}(\nu)=\beta P_{\text{HS}}(\overline{\nu}+\varphi_{0})-\frac{\varphi_{0}^{2}}{ 2 \widetilde{w}(0)} \; ,\nonumber \\ \beta P^{(1)}(\nu)&=&-\frac{1}{2} \int_{\mathbf{q}} \ln\left( 1- \widetilde{w}(q)\widetilde{G}_{\text{HS}}^{(2), \; T}\left[ \overline{\nu}+\varphi_{0} \right](q) \right) \; ,\nonumber \\ \beta P^{(2)}(\nu)&=&\frac{\rho^{(3)}_{0}}{8}\Delta_{\varphi_{0}}^{2}(0)+ \frac{\left[ \rho^{(2)}_{0}\right] ^{2}}{8} \widetilde{\Delta}_{\varphi_{0}}(0) \Delta_{\varphi_{0}}^{2}(0) \nonumber \\ &+& \frac{\left[ \rho^{(2)}_{0}\right] ^{2}}{12}\int d^{3}x \; \Delta_{\varphi_{0}}^{3}(x) \; , \end{eqnarray} where $\rho^{(n)}_{0} \equiv \rho_{\text{HS}}^{(n)}\left( \overline{\nu}+\varphi_{0}\right)$ and $\int_{\mathbf{q}}\equiv \int d^{3}q/(2 \pi)^{3}$. \subsection{The free energy} In order to compute the free energy $f(\rho)$ at the two-loop order we need the expression of the density only at the one-loop order (this is a well-known property of the Legendre transform \cite{Zinn} that will be checked explicitly further on).
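As an illustration, the one-loop pressure $\beta P^{(1)}$ of eq.~(\ref{press}) can be evaluated by elementary quadrature once $\widetilde{G}_{\text{HS}}^{(2),\;T}(q)$ is supplied. The Python sketch below uses Wertheim's analytic PY direct correlation function together with the compressibility-route identification $\widetilde{G}_{\text{HS}}^{(2),\;T}(q)=\rho\, S(q)$, $S(q)=[1-\rho\,\widetilde{c}(q)]^{-1}$, and an attractive Yukawa-like $\widetilde{w}(q)$; the packing fraction and interaction parameters are purely illustrative.

```python
import numpy as np

eta, sigma = 0.3, 1.0                       # illustrative HS packing fraction, diameter
rho = 6.0 * eta / (np.pi * sigma**3)

# Wertheim's analytic PY direct correlation function, c(r) = 0 for r > sigma
lam1 = (1 + 2 * eta) ** 2 / (1 - eta) ** 4
lam2 = -(1 + eta / 2) ** 2 / (1 - eta) ** 4
r = np.linspace(0.0, sigma, 4001)[1:]       # radial grid, r = 0 dropped
dr = r[1] - r[0]
c_r = -(lam1 + 6 * eta * lam2 * (r / sigma) + 0.5 * eta * lam1 * (r / sigma) ** 3)

def c_hat(q):
    # 3D transform: 4 pi int_0^sigma r^2 c(r) sin(qr)/(qr) dr  (rectangle rule)
    return 4 * np.pi * np.sum(r**2 * c_r * np.sinc(q * r / np.pi)) * dr

def G_hat(q):
    # truncated pair correlation G~(q) = rho S(q), S(q) = 1/(1 - rho c~(q))
    return rho / (1.0 - rho * c_hat(q))

def w_hat(q):
    # illustrative attractive interaction, w~(q) = w0 kappa^2/(q^2 + kappa^2)
    w0, kappa = 5.0, 1.8
    return w0 * kappa**2 / (q**2 + kappa**2)

# beta P^(1) = -1/2 int d^3q/(2 pi)^3 ln(1 - w~(q) G~(q))
q = np.linspace(1e-4, 40.0, 4000)
dq = q[1] - q[0]
G_vals = np.array([G_hat(x) for x in q])
integrand = q**2 * np.log(1.0 - w_hat(q) * G_vals)
betaP1 = -0.5 * np.sum(integrand) * dq / (2.0 * np.pi**2)
```

A built-in consistency check: the PY compressibility identity $1-\rho\,\widetilde{c}(0)=(1+2\eta)^2/(1-\eta)^4$ is recovered by the numerical transform.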
We thus define \begin{eqnarray} \label{end1}\rho(\nu)&=&\frac{\partial }{\partial \nu} \beta P=\rho_0 + \Delta \rho(\nu) \nonumber \\ \Delta \rho(\nu)&=&\lambda \rho^{(1)}\left(\nu \right) + \mathcal{O}(\lambda^{2})\;, \end{eqnarray} where $\rho_0\equiv \rho_{\text{HS}}\left( \overline{\nu}+\varphi_{0}\right)$ and \begin{eqnarray} \label{www}\rho^{(1)}\left(\nu \right)&=&\frac{\partial}{\partial \nu}\beta P^{(1)}(\nu) =\frac{1}{2}\int_{\mathbf{q}} \frac{\widetilde{w}(q)}{ 1-\widetilde{w}(q) \widetilde{G}_{\text{HS}}^{(2), \; T}(q)} \; \frac{\partial}{\partial \nu} \widetilde{G}_{\text{HS}}^{(2), \; T}(q) \nonumber \\ &=& \frac{\rho_{0}^{(2)}}{2} \Delta_{\varphi_{0}}(0)\left( 1+\frac{\partial \varphi_{0}}{\partial \nu} \right) \; , \end{eqnarray} where, in order to derive the last line, we made use of eqs.~(\ref{h1}) and~(\ref{hyp}). In order to obtain the expression of $\partial \varphi_{0}/\partial \nu$ one differentiates the homogeneous stationarity condition~(\ref{statio2-K-hom}) with respect to $\nu$, which gives \begin{equation} 1+\frac{\partial \varphi_{0}}{\partial \nu}=\frac{1}{1 - \rho_{0}^{(1)}\widetilde{w}(0)} \; . \end{equation} Introducing this result in the last line of eq.~(\ref{www}) we find finally that \begin{equation} \label{ro1} \rho^{(1)}\left(\nu \right)=\frac{\rho_{0}^{(2)}}{2} \frac{\Delta_{\varphi_{0}}(0)}{1 - \rho_{0}^{(1)}\widetilde{w}(0)} \; . \end{equation} We are now in a position to compute the free energy \begin{eqnarray} \beta f (\rho) &=& \rho \nu - \beta P(\nu) \nonumber \\ &=& \rho \nu -\beta P^{(0)}(\nu) - \lambda \beta P^{(1)}(\nu)- \lambda^{2} \beta P^{(2)}(\nu)+\mathcal{O}(\lambda^{3}) \; . \end{eqnarray} Our task is now to reexpress the r.h.s. as a function of $\rho=\rho_0 + \Delta \rho(\nu)$, which will be done along the same lines as those used in \cite{Cai-JSP}. We shall compute separately three contributions to $\beta f$.
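Before turning to these contributions, the expression for $1+\partial\varphi_{0}/\partial\nu$ used above can be checked symbolically. Assuming the homogeneous stationarity condition~(\ref{statio2-K-hom}) takes the form $\varphi_{0}=\widetilde{w}(0)\,\rho_{\text{HS}}(\overline{\nu}+\varphi_{0})$ (its explicit form is not displayed in this excerpt), implicit differentiation with respect to $\nu$ yields a linear equation whose solution reproduces the result:

```python
import sympy as sp

w0, rho1, dphi = sp.symbols('w0 rho1 dphi')   # w~(0), rho0^(1), d phi0 / d nu

# Differentiating phi0 = w~(0) rho_HS(nubar + phi0) with respect to nu
# (assumed form of the stationarity condition) gives the linear equation
#   dphi = w~(0) * rho0^(1) * (1 + dphi) .
sol = sp.solve(sp.Eq(dphi, w0 * rho1 * (1 + dphi)), dphi)[0]

# 1 + d phi0/d nu = 1 / (1 - rho0^(1) w~(0)), as used in eq. (ro1)
check = sp.simplify(1 + sol - 1 / (1 - rho1 * w0))
```

The check confirms that $1+\partial\varphi_{0}/\partial\nu=[1-\rho_{0}^{(1)}\widetilde{w}(0)]^{-1}$.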
\\\\ \noindent \textit{i) Computation of} $X\equiv\rho \nu -\beta P^{(0)}(\nu) $ \\\\ We first note that $X$ can be rewritten as \begin{equation} X=\rho_0 \nu -\beta P^{(0)}(\nu) + \nu \Delta \rho =\beta f^{(0)}(\rho_0)+ \nu \Delta \rho \; , \end{equation} where $f^{(0)}(\rho_0)\equiv f_{\text{MF}}(\rho_0) $ is the MF free energy of the homogeneous system at the density $\rho_0$, i.e. (cf. eq.~(\ref{MF-A})) \begin{equation} \label{12} \beta f^{(0)}(\rho_0)=\beta f_{\text{HS}}(\rho_0) -\frac{1}{2} \rho_0^{2} \widetilde{w}(0)+ \frac{1}{2} \rho_0 w(0) \; . \end{equation} In order to relate $f^{(0)}(\rho_0)$ to $f(\rho)$ we perform a second order Taylor expansion \begin{equation} \label{13}\beta f^{(0)}(\rho =\rho_0 + \Delta \rho )= \beta f^{(0)}(\rho_0) + \nu \Delta \rho -\frac{1}{2} \widetilde{C}_{MF}(0) \left[ \Delta \rho \right]^{2} + \mathcal{O}(\lambda^{3}) \; , \end{equation} where the Fourier transform at $q=0$ of the MF two-body vertex function is computed from eq.~(\ref{zozo}) with the result \begin{equation} \label{15} \widetilde{C}_{MF}(0)=-1/\widetilde{G}_{HS}(0) +\widetilde{w}(0)=-\frac{1-\rho_{0}^{(1)} \widetilde{w}(0)}{\rho_{0}^{(1)}} \; . \end{equation} Combining the intermediate results~(\ref{12}),~(\ref{13}), and~(\ref{15}) one has finally \begin{eqnarray} \label{X} X & = &\beta f^{(0)}(\rho) + \frac{1}{2}\widetilde{C}_{MF}(0) \left[ \Delta \rho \right]^{2} + \mathcal{O}(\lambda^{3}) \nonumber \\ &=&\beta f^{(0)}(\rho) -\frac{\lambda^{2}}{8} \Delta_{\varphi_{0}}^{2}(0) \frac{\left[ \rho_{0}^{(2)}\right] ^{2}}{\rho_{0}^{(1)}\left( 1 - \rho_{0}^{(1)} \widetilde{w}(0)\right) } + \mathcal{O}(\lambda^{3}) \; . \end{eqnarray} We note that, as claimed at the beginning of the paragraph, the knowledge of $\Delta \rho$ at the one-loop order is sufficient for our purpose.
\\\\ \noindent \textit{ii) Computation of} $\beta P^{(1)}(\nu) $ \\\\ As can be inferred from eq.~(\ref{press}), $\beta P^{(1)}(\nu)$ may be seen as a function of the sole variable $\rho_0\equiv \rho_{HS}(\overline{\nu}+\varphi_{0})$, since for the HS reference system there is a one-to-one correspondence between densities and chemical potentials (at least in the fluid phase, away from the liquid/solid transition). Therefore $\beta P^{(1)}(\nu) \equiv\beta P^{(1)}(\rho_0)$, with a slight abuse of notation. One thus has \begin{eqnarray} \beta P^{(1)}(\rho=\rho_0+ \Delta \rho)&=&\beta P^{(1)}(\rho_0) + \frac{\partial \nu}{\partial\rho_0 } \frac{\partial \beta P^{(1)}(\nu)}{\partial \nu} \Delta \rho + \mathcal{O}(\lambda) \; , \nonumber \\ &=&\beta P^{(1)}(\rho_0) + \frac{\partial \nu}{\partial\rho_0 } \frac{\left[\Delta \rho \right]^{2}}{\lambda} + \mathcal{O}(\lambda) \; , \end{eqnarray} where the last line follows from eqs.~(\ref{end1}) and~(\ref{www}). At this point we notice that \begin{equation} \frac{\partial \rho_0}{\partial \nu}= \widetilde{G}_{\text{MF}}^{T}(q=0)= \frac{\rho_0^{(1)}}{1 - \rho_0^{(1)} \widetilde{w}(0)}=1/\frac{\partial \nu}{\partial\rho_0 } \; , \end{equation} which allows us to write finally \begin{subequations} \begin{eqnarray} \beta P^{(1)}(\rho_0)&=& \beta P^{(1)}(\rho) -\frac{\lambda}{4} \Delta^{2}_{\varphi_{0}}(0) \frac{\left[\rho_0^{(2)}\right]^{2}}{\rho_0^{(1)} \left( 1- \rho_0^{(1)}\widetilde{w}(0) \right) } \\ \beta P^{(1)}(\rho)&=& - \frac{1}{2}\int_q \ln \left(1 - \widetilde{w}\left( q\right) \widetilde{G}_{\text{HS}, \; \rho}^{T}\left( q\right) \right) \; , \end{eqnarray} \end{subequations} where, in the second equality, the subscript emphasizes that the truncated HS pair correlation function has to be computed at the density $\rho$.
The computation of $f(\rho)$ is now completed and gathering the intermediate results one has finally, setting $\lambda=1$ \begin{eqnarray} \label{f2l} \beta f(\rho)&=& \beta f_{\text{HS}}(\rho) -\frac{\widetilde{w}\left( 0\right)}{2} \rho^{2} \nonumber \\ &+&\frac{1}{2} \int_{\mathbf{q}} \left\lbrace \ln \left(1 - \widetilde{w}\left( q\right) \widetilde{G}_{\text{HS}, \; \rho}^{T}\left( q\right) \right) + \rho \widetilde{w}\left( q\right)\right\rbrace \nonumber \\ &-&\frac{\rho^{(3)}_{0}}{8}\Delta_{\rho}^{2}(0)+ \frac{1}{8} \Delta_{\rho}^{2}(0)\frac{\left[ \rho^{(2)}_{0}\right]^{2} }{\rho^{(1)}_{0}} - \frac{\left[ \rho^{(2)}_{0}\right] ^{2}}{12}\int d^{3}x \; \Delta_{\rho}^{3}(x) \; . \end{eqnarray} Some comments on this result are in order. i) The last line in eq.~(\ref{f2l}) gathers the two-loop contributions of $\beta f(\rho)$. At this order of the loop expansion it is thus legitimate to replace $\Delta_{\varphi_0}$ by $\Delta_{\rho}$ where the propagator is evaluated at the density $\rho$ rather than $\rho_0$ since $\rho - \rho_0=\mathcal{O}(\lambda)$. More explicitly \begin{equation} \widetilde{\Delta}_{\rho}(q)=\frac{\widetilde{w}\left( q\right)}{1 -\widetilde{G}_{\text{HS}, \; \rho}^{T} \left( q\right)\widetilde{w}\left( q\right) } \; . \end{equation} ii) Similarly the derivatives $\rho^{(n)}_{0}$ which enter the last line of eq.~(\ref{f2l}) can be evaluated at the density $\rho$ rather than $\rho_0$. 
One can thus write, taking advantage of the one-to-one correspondence between the HS densities and chemical potentials \begin{eqnarray} \rho^{(1)}_{0}&=&\frac{1}{\nu_{\text{HS}}^{(1)}\left( \rho\right) } \nonumber \\ \rho^{(2)}_{0}&=&\frac{\partial\rho^{(1)}_{0} }{\partial \nu} =\frac{-\nu_{\text{HS}}^{(2)}(\rho)} {\left[\nu_{\text{HS}}^{(1)}(\rho)\right]^{3}} \nonumber \\ \rho^{(3)}_{0}&=&\frac{\partial\rho^{(2)}_{0} }{\partial \nu} = \frac{3 \left[\nu_{\text{HS}}^{(2)}(\rho)\right]^{2} - \nu_{\text{HS}}^{(3)}(\rho)\nu_{\text{HS}}^{(1)}(\rho) }{\left[\nu_{\text{HS}}^{(1)}(\rho)\right]^{5}} \; , \end{eqnarray} where $\nu_{\text{HS}}^{(n)}(\rho)$ denotes the $n$th derivative of the HS chemical potential with respect to the density (it can be computed in the framework of the PY or Carnahan-Starling approximations for instance \cite{Hansen}). iii) It must be pointed out that, quite unexpectedly, the reducible diagram $D_b$ has not been cancelled by the Legendre transform. Usually, in statistical field theory, such reducible contributions are cancelled (cf. \cite{Zinn}). The reason is that the chemical potential $\nu$ is not the field conjugate to the order parameter $m=\left\langle \varphi \right\rangle_{K}$ of the KSSHE field theory. However, one of us has shown elsewhere \cite{Cai-JSP,Cai-Mol1} that for the symmetric mixtures of charged hard spheres only irreducible diagrams contribute to $\beta f^{(2)}$. iv) All the quantities which enter eq.~(\ref{f2l}) can be computed numerically (for instance in the PY approximation). We plan to test the validity of our approximation in future work. \section{Conclusion} Using the CV method, we have reconsidered the basic relations of the statistical field theory of simple fluids that follow from this approach. In contrast to the KSSHE theory \cite{Cai-Mol}, the corresponding CV action depends on two scalar fields: the field $\rho$, connected to the number density of particles, and the field $\omega$ conjugate to $\rho$.
Explicit expressions relating the correlation functions defined in the different versions of the theory are derived. For a one-component continuum model of a fluid, consisting of hard spheres interacting through a short-range pair potential, we have calculated the grand partition function in a systematic way for both versions of statistical field theory using the loop expansion technique. As expected, both versions of the theory produce the same analytical results in all the orders of the loop expansion considered. The expressions for the pressure as well as for the free energy are derived in a two-loop approximation, one step beyond the results obtained recently by one of us \cite{Cai-Mol} within the KSSHE theory. In fact this is a new type of approximation, which we plan to test in future work. In contrast to the usual statistical field theory \cite{Zinn}, our results demonstrate that for simple fluids the reducible diagram is not cancelled by the Legendre transform. This is due to the fact that our field theory is non-standard, in the sense that the coupling between the internal field $\varphi$ and the external field $\nu$ is non-linear, yielding new expressions for $\beta f(\rho)$. From our analysis of the CV and KSSHE transformations we can also conclude that the former has some important advantages which could be very useful for more complicated models of fluids. In particular, it is valid for an arbitrary pair potential (including a pair interaction $w(1,2)$ which does not possess an inverse) and is easily generalized to the case of $n$-body interparticle interactions with $n>2$. \section*{Acknowledgments} This work was carried out in the framework of the cooperation project between the CNRS and the NASU (ref.~CNRS 17110).
\section{Twistor Yang-Mills} The twistor space of four-dimensional Euclidean space ${\mathbb{E}}$ may be viewed as the total space of right Weyl spinors over ${\mathbb{E}}$. A right Weyl spinor $\pi_{\dot\alpha}$ in four dimensions has two components, and we will be interested in the projectivized spin bundle where $\pi_{\dot\alpha}$ is defined only up to the scaling $\pi_{\dot\alpha}\sim \lambda\pi_{\dot\alpha}$ for any non-zero complex $\lambda$. Thus projective twistor space ${\mathbb{PT}}'$ is a ${\mathbb{C}}{\mathbb{P}}^1$ bundle over ${\mathbb{E}}$, and may be described by coordinates $(x^{\alpha,\dot\alpha},[\pi_{\dot\beta}])$ where $x\in{\mathbb{E}}$ and $[\pi_{\dot\beta}]$ is the point on the ${\mathbb{C}}{\mathbb{P}}^1$ with homogeneous coordinates $\pi_{\dot\beta}$. The indices $\alpha,\dot\alpha=0,1$ denote the fundamental representation of each of the two $SL(2,{\mathbb{C}})$s in the spin group $SL(2,{\mathbb{C}})\times SL(2,{\mathbb{C}})$ of the complexified Lorentz group, and are raised and lowered using the $SL(2,{\mathbb{C}})$-invariant tensors $\epsilon_{\alpha\beta}=-\epsilon_{\beta\alpha}$ {\it etc.} We will often employ the notation $\langle a\,b\rangle\equiv \epsilon^{\dot\beta\dot\alpha}a_{\dot\alpha} b_{\dot\beta}$, $[c\,d]\equiv\epsilon_{\beta\alpha}c^{\alpha}d^{\beta}$ and $\langle a| M|c]\equiv a_{\dot\alpha}M^{\alpha\dot\alpha}c_{\alpha}$ to denote these $SL(2,{\mathbb{C}})$-invariant inner products. To preserve real Euclidean space we restrict ourselves to the subgroup $SU(2)\times SU(2)\subset SL(2,{\mathbb{C}})\times SL(2,{\mathbb{C}})$ which also leaves invariant the inner products $\langle\pi\,\hat\pi\rangle$ and $[\omega\,\hat\omega]$, where $(\hat\pi_{\dot 0},\ \hat\pi_{\dot 1})\equiv(\overline{\pi_{\dot1}},\ -\overline{\pi_{\dot0}})$ and $(\hat\lambda^0,\ \hat\lambda^1)\equiv(\overline{\lambda^1},\ -\overline{\lambda^0})$.
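These conventions can be exercised numerically. The Python sketch below adopts the explicit sign choice $\langle a\,b\rangle = a_{\dot 1}b_{\dot 0}-a_{\dot 0}b_{\dot 1}$ (an assumption, since the text does not fix every sign) and checks that $\langle\pi\,\hat\pi\rangle=|\pi_{\dot 0}|^{2}+|\pi_{\dot 1}|^{2}$, that a random $SU(2)$ matrix preserves the bracket, and that it commutes with the hat operation:

```python
import numpy as np

rng = np.random.default_rng(1)

def hat(p):
    """Quaternionic conjugation (pi_0, pi_1) -> (conj(pi_1), -conj(pi_0))."""
    return np.array([np.conj(p[1]), -np.conj(p[0])])

def angle(a, b):
    """<a b> with the sign convention a_1 b_0 - a_0 b_1 (a choice)."""
    return a[1] * b[0] - a[0] * b[1]

pi = rng.standard_normal(2) + 1j * rng.standard_normal(2)
lam = rng.standard_normal(2) + 1j * rng.standard_normal(2)

# <pi pihat> is the positive-definite norm |pi_0|^2 + |pi_1|^2
norm = angle(pi, hat(pi))

# a random SU(2) matrix: unit determinant, so it preserves the bracket,
# and it commutes with the hat map (the quaternionic structure)
ab = rng.standard_normal(2) + 1j * rng.standard_normal(2)
a, b = ab / np.linalg.norm(ab)
U = np.array([[a, b], [-np.conj(b), np.conj(a)]])
```

Invariance of $\langle\cdot\,\cdot\rangle$ follows from $\det U=1$, while $\widehat{U\pi}=U\hat\pi$ is the statement that $SU(2)$ preserves the Euclidean real structure.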
Twistor space ${\mathbb{PT}}$ is the complex manifold ${\mathbb{P}}^3$ with holomorphic homogeneous coordinates $(\omega^A,\pi_{A'}) =(x^{AA'}\pi_{A'},\pi_{A'})$, $(\omega^A,\pi_{A'})\sim (\lambda\omega^A,\lambda\pi_{A'})$ and ${\mathbb{PT}}'$ is the subset ${\mathbb{P}}^3\backslash{\mathbb{P}}^1$ on which $\pi_{A'}\neq 0$. We will work with coordinates $(x,\pi_{A'})$ on ${\mathbb{PT}}'$ for the most part, and in these coordinates, the complex structure can be expressed in terms of the following basis of (0,1)-forms and dual basis of $(0,1)$-vectors adapted to the fibration over ${\mathbb{E}}$ \begin{equation} \bar e^0 = \frac{\hat\pi_{\dot\alpha}{\rm d}\hat\pi^{\dot\alpha}} {\langle\pi\,\hat\pi\rangle^2}\, \qquad \bar e^\alpha = \frac{\hat\pi_{\dot\alpha}{\rm d} x^{\alpha\dot\alpha}}{\langle\pi\,\hat\pi\rangle} \qquad {\overline{\del}}_0 = \langle\pi\,\hat\pi\rangle\pi_{\dot\alpha}\frac\partial{\partial\hat\pi_{\dot\alpha}} \qquad {\overline{\del}}_\alpha = \pi^{\dot\alpha}\frac\partial{\partial x^{\alpha\dot\alpha}} \label{basis} \end{equation} where the scale factors are chosen to ensure that the forms have only holomorphic weight ({\it i.e.} they are independent of rescalings of $\hat\pi$). The ${\overline{\del}} $ operator is defined on functions $f$ with holomorphic weight to be ${\overline{\del}} f= \bar e^0{\overline{\del}}_0f +\bar e^\alpha {\overline{\del}}_\alpha f$, so that ${\overline{\del}} f=0$ implies that $f$ is holomorphic in $(\omega^\alpha,\pi_{\dot\alpha})$. To discuss space-time Yang-Mills theory on twistor space, we must choose a vector bundle $E\to{\mathbb{PT}}'$ with vanishing first Chern class. Having $c_1(E)=0$ implies that $E$ is the pullback of some space-time bundle $\tilde E$ via the projection map, $E=\mu^*\tilde E$. 
Our action will be a functional of fields $A\in\Omega^{0,1}_{{\mathbb{PT}}'}({\rm End} E)$ and $B\in\Omega^{0,1}_{{\mathbb{PT}}'}({\mathcal{O}}(-4)\otimes{\rm End}E)$ where $A$ is thought of as providing a ${\overline{\del}}$-operator ${\overline{\del}}_{gA}={\overline{\del}}+gA$ on $E$ and is defined up to appropriate gauge transformations $\delta A={\overline{\del}}_{gA}\gamma$. Here, $g$ is the Yang-Mills coupling constant. We do not assume that ${\overline{\del}}_{gA}$ is integrable, so that ${\overline{\del}}_{gA}^2=gF$ need not vanish, where $F={\overline{\del}} A+g[A,A]$. We then consider the action $S_{BF} + S_{B^2}$ where \cite{Mason:2005zm,Boels:2006ir} \begin{equation} S_{BF} = \int_{{\mathbb{PT}}'}\!\!\!\Omega\wedge {\rm tr}(B\wedge F)\, \qquad S_{B^2}:=-\frac12\int_{{\mathbb{E}}\times {\mathbb{P}}^1\times{\mathbb{P}}^1}\hspace{-1cm} {\rm d}^4x \, {\rm D}\pi_1\, {\rm D}\pi_2\, \langle\pi_1\pi_2\rangle^4{\rm tr}\left(B_1 K_{12} B_2 K_{21}\right) \label{action} \end{equation} The first term in this action is holomorphic $BF$ theory, with $\Omega= \pi^{\dot\gamma}{\rm d}\pi_{\dot\gamma}\wedge \pi_{\dot\alpha}\pi_{\dot\beta}{\rm d} x^{\alpha\dot\alpha}\wedge {\rm d} x^{\beta\dot\beta}\varepsilon_{\alpha\beta}$, the canonical top holomorphic form of weight 4 on ${\mathbb{PT}}'$. This part of the action was introduced by Witten \cite{Witten:2003nn} as part of a supersymmetric Chern-Simons theory. The second term is local on ${\mathbb{E}}$ but is non-local on the ${\mathbb{P}}^1$ fibres. In it, $K_{ij}=K_{ij}(x,\pi_i,\pi_j)$, $i,j=1,2$ are Green's functions for the restriction $({\overline{\del}}_0+gA_0)$ of the ${\overline{\del}}_{gA}$ operator to these fibres, described more explicitly in the next section, while ${\rm D}\pi_i=\langle\pi_i\,{\rm d}\pi_i\rangle$ is the top holomorphic form of weight 2 on the $i^{\rm th}$ fibre and $B_i=B(x,\pi_i)$ denotes $B$ evaluated on the $i^{\rm th}$ factor.
This term is the lift to twistor space of the $B^2$ term in the (space-time) reformulation of Yang-Mills theory by Chalmers \& Siegel~\cite{Mason:2005zm,Chalmers:1996rq}. The action is invariant under gauge transformations \begin{equation} \delta A={\overline{\del}}{\gamma}+ g[A,\gamma] \qquad\delta B=g[B,\gamma]+{\overline{\del}}\beta+g[A,\beta]\ , \label{gauge} \end{equation} where $\gamma$ and ${\beta}$ are smooth ${\rm End}\,E$-valued sections of weight 0 and -4 respectively. In~\cite{Mason:2005zm,Boels:2006ir} we obtained the classical equivalence of the twistor action with that on space-time by (partially) fixing these gauge transformations by requiring $A_0=0$ and $B$ to be in a gauge in which it is harmonic over the ${\mathbb{P}}^1$ fibres of ${\mathbb{PT}}'\to{\mathbb{E}}$. We refer to this as {\em space-time} gauge. In this gauge $A$ can be expressed directly in terms of a space-time gauge field ${\mathcal{A}}_{\alpha\dot\alpha}$ and $B$ in terms of an auxiliary space-time field ${\mathcal{B}}_{\dot\alpha\dot\beta}$ and the twistor action reduces to the Chalmers and Siegel form of the usual Yang-Mills action on space-time \begin{equation} \int_{\mathbb{E}} \left({\mathcal{B}}_{\dot\alpha\dot\beta}{\mathcal{F}}^{\dot\alpha\dot\beta} - \frac{1}{2} {\mathcal{B}}_{\dot\alpha\dot\beta}{\mathcal{B}}^{\dot\alpha\dot\beta}\right){\rm d}^4 x \end{equation} with its usual space-time gauge symmetry. Here, ${\mathcal{F}}_{\dot\alpha\dot\beta}= \nabla_{(\dot\alpha}^\alpha{\mathcal{A}}_{\dot\beta)\alpha} + {\mathcal{A}}_{(\dot\alpha}^\alpha{\mathcal{A}}_{\dot\beta)\alpha} $ is the self-dual part of the field strength of ${\mathcal{A}}_{\alpha\dot\alpha}$ and ${\mathcal{B}}_{\dot\alpha\dot\beta}$ is an auxiliary field that equals ${\mathcal{F}}_{\dot\alpha\dot\beta}$ on shell. The twistor action has a natural extension to supersymmetric gauge theory up to and including $\mathcal{N}=4$ and has an analogue for conformal gravity~\cite{Mason:2005zm,Boels:2006ir}.
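The space-time mechanism behind the last statement is elementary: the Chalmers-Siegel Lagrangian is quadratic in $\mathcal{B}$, so eliminating $\mathcal{B}$ by its equation of motion sets $\mathcal{B}=\mathcal{F}$ and leaves $\frac12\mathcal{F}^{2}$. A one-line symbolic check (treating $\mathcal{B}$, $\mathcal{F}$ as commuting scalars, which suffices for the quadratic structure):

```python
import sympy as sp

B, F = sp.symbols('B F')

# Chalmers-Siegel Lagrangian density with B an auxiliary field
L = B * F - sp.Rational(1, 2) * B**2

Bstar = sp.solve(sp.diff(L, B), B)[0]     # equation of motion: B = F
L_onshell = sp.expand(L.subs(B, Bstar))   # = F**2 / 2
```

This is the scalar shadow of the statement that $\mathcal{B}_{\dot\alpha\dot\beta}$ equals $\mathcal{F}_{\dot\alpha\dot\beta}$ on shell.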
\section{Feynman Rules and MHV diagrams} In this section, we will impose an axial gauge, first introduced by Cachazo, Svr\v cek \& Witten~\cite{Cachazo:2004kj}, to obtain the Feynman rules from (\ref{action}), and we will see that these directly produce MHV diagrams. Note that the symmetry in~(\ref{gauge}), with $\gamma$ and $\beta$ each depending on six real variables (the real coordinates of twistor space), is larger than the gauge symmetry of the space-time Yang-Mills action; it is precisely by exploiting this larger symmetry that we are able to interpolate between the standard and MHV pictures of scattering theory. Decomposing our fields into the basis (\ref{basis}) as $A=A_0\bar e^0 + A_\alpha\bar e^\alpha$ {\it etc.}, we seek to impose the CSW gauge condition $\eta^\alpha A_\alpha=0=\eta^\alpha B_\alpha$ for $\eta$ some arbitrary constant spinor. This is an axial gauge condition, so the corresponding ghost terms will decouple. This gauge has the benefit that the $BAA$ vertex from the holomorphic $BF$ theory vanishes, being a product of three $(0,1)$-forms, each of which has non-zero components in only two of the three anti-holomorphic directions. Thus, the only remaining interactions come from the non-local $B^2$ term. This term depends on the field $A_0$ in a non-polynomial manner because of the presence of the Green's functions $K=({\overline{\del}}_0+gA_0)^{-1}$ on the ${\mathbb{P}}^1$s. To find the explicit form of the vertices, expand in powers of $A$ using the standard formul\ae \begin{equation} \frac{\delta K_{12}}{\delta A}=\int_{{\mathbb{P}}^1}\!{\rm D}\pi_3\, K_{13}gA(x,\pi_3)\!K_{32} \quad\quad\hbox{and}\quad\quad \left.K_{ij}\right|_{A=0} =\frac{{\mathbb{I}}}{2\pi{\rm i}}\frac{1}{\langle\pi_i\,\pi_j\rangle}\ \label{Kvariation} \end{equation} where ${\mathbb{I}}$ is the identity matrix in the adjoint representation of the gauge group.
Using this repeatedly gives the expansion \begin{equation} K_{1p}= \sum_{n=1}^\infty \int_{({\mathbb{P}}^1)^{n-1}}\frac1{2\pi {\rm i} \langle \pi_n\, \pi_p\rangle}\prod_{r=2}^n \frac{g}{2\pi {\rm i}} \frac{A_r \wedge{\rm D}\pi_r}{\langle \pi_{r-1}\, \pi_{r}\rangle} \label{Kexpansion} \end{equation} This can be substituted in to give the vertices \begin{equation} \sum_{n=2}^\infty\ \frac{g^{n-2}}{(2\pi{\rm i})^n}\int_{{\mathbb{E}}\times({\mathbb{P}}^1)^n}\hspace{-0.8cm}{\rm d}^4x\ \left(\prod_{i=1}^{n} \frac{{\rm D}\pi_i}{\langle\pi_i\,\pi_{i+1}\rangle}\right) \sum_{p=2}^n\langle\pi_{1}\,\pi_{p}\rangle^4 {\rm tr}\left( B_1 A_2 \cdots A_{p-1}B_p A_{p+1}\cdots A_n \right)\ . \label{vertices} \end{equation} This expression strongly resembles a sum of MHV amplitudes, except that here we are dealing with {\it vertices} rather than amplitudes and~(\ref{vertices}) is entirely off-shell. In order to express the linear fields and propagators, it is helpful to introduce certain $(0,1)$-form valued weighted $\delta$-functions of spinor products \begin{equation} \bar \delta_{n-1,-n-1}\langle\lambda\,\pi\rangle = \langle\hat\pi\,{\rm d}\hat\pi\rangle\, \frac{\langle\lambda\,\xi \rangle^n \langle\hat\lambda\,\xi \rangle} {\langle\pi\,\xi \rangle^n \langle\hat\pi\,\xi \rangle}\delta^2(\langle\lambda\,\pi\rangle) \end{equation} where for a complex variable $z=x+iy$, $\delta^2(z)=\delta(x)\delta(y)$ and the scale factors have been chosen so that $\bar\delta_{n-1,-n-1}\langle\lambda\pi\rangle$ has holomorphic weight only in $\lambda$ and $\pi$ with weights $n-1$, $-n-1$ respectively. It is independent of the constant spinor $\xi$ as $\lambda\propto\pi$ on the support of the delta function; see \cite{Witten:2004cp} for a full discussion. With these definitions, $K_{ij}$ can be defined by \begin{equation} ({\overline{\del}}_0+gA_0) K_{ij}= {\mathbb{I}}\bar\delta_{-1,-1}\langle \pi_i\, \pi_j\rangle \, . 
\label{dbarK} \end{equation} The first term in~(\ref{vertices}) is quadratic in $B$ and involves no $A$ fields. We are always free to treat such algebraic terms either as vertices or as part of the kinetic energy of $B$. When working in this axial gauge in twistor space, it turns out to be more convenient to do the former, whereupon the only kinetic terms come from the holomorphic $BF$ theory. In this axial gauge this is \begin{equation} \int_{{\mathbb{PT}}'}\!\Omega\wedge{\rm tr}(B\wedge F) =\int_{{\mathbb{PT}}'}\!\Omega\wedge{\rm tr}(B\wedge{\overline{\del}} A) \label{freekinetic} \end{equation} so the propagator is the inverse of the ${\overline{\del}}$ operator on ${\mathbb{PT}}'$. Using coordinates $(x,\pi)$ on ${\mathbb{PT}}'$, the propagators depend on two points, $(x_1,\pi_1)$, $(x_2,\pi_2)$, but, as usual, depend only on the space-time variables through $x_1-x_2$. We can Fourier transform the $x_1-x_2$ to obtain the momentum space axial gauge propagator \begin{equation} \langle A(p,\pi_1) B(p,\pi_2)\rangle =\frac{{\mathbb{I}}}{p^2} \bar \delta_{-2,0} [\eta|p|\pi_1\rangle \wedge \bar\delta_{2,-4} [\eta|p|\pi_2\rangle + \left( \frac {{\mathbb{I}}}{{\rm i}} \frac{\eta_\alpha\bar e^\alpha_1 \wedge \bar \delta_{2,-4} \langle\pi_1\pi_2\rangle }{[\eta|p|\pi_1\rangle} + 1\leftrightarrow 2 \right) \label{propagators} \end{equation} The linearized field equations obeyed by external fields are ${\overline{\del}} A=0$ and ${\overline{\del}} B=0$, while the linearized gauge transformations are $\delta A={\overline{\del}}\gamma$ and $\delta B={\overline{\del}}\beta$. Together these show that on-shell, free $A$ and $B$ fields are elements of the Dolbeault cohomology groups $H^1_{\overline{\del}}({\mathbb{PT}}',{\mathcal{O}})$ and $H^1_{\overline{\del}}({\mathbb{PT}}',{\mathcal{O}}(-4))$ representing massless particles of helicity $\mp1$ as is well-known from the Penrose transform. 
Momentum eigenstates obeying the axial gauge condition are \cite{Cachazo:2004kj} \begin{equation} A(x,\pi)=T{\rm e}^{{\rm i} \tilde q_{\alpha} x^{\alpha\dot\alpha}q_{\dot\alpha}} \bar\delta_{-2,0}\langle q\,\pi\rangle\qquad\qquad B(x,\pi)=T{\rm e}^{{\rm i}\tilde q_{\alpha} x^{\alpha\dot\alpha}q_{\dot\alpha}} \bar\delta_{2,-4}{\langle q\, \pi\rangle} \label{ffields} \end{equation} where $T$ is some arbitrary element of the Lie algebra of the gauge group and $\tilde q_\alpha q_{\dot\alpha}$ is the on-shell momentum. Only the components $A_0$ and $B_0$ are non-vanishing and are simple multiples of delta functions. Thus, inserting these external wavefunctions into the vertices in (\ref{vertices}), one trivially performs the integrals over each copy of ${\mathbb{P}}^1$ by replacing $\pi_i^{\dot\alpha}$ by $q_i^{\dot\alpha}$ for each external particle. The integral over ${\mathbb{E}}$ then yields an overall momentum delta-function and, after colour stripping the ${\rm tr}(T_1\ldots T_n)$ factors, one obtains the standard form for an MHV amplitude, arising here from the Feynman rules of the twistor action~(\ref{action}). More general Feynman diagrams arise from combining the vertices~(\ref{vertices}) with the propagators from~(\ref{propagators}) and evaluating external fields using~(\ref{ffields}). This reproduces the MHV formalism for Yang-Mills scattering amplitudes: the delta-functions in the propagator lead to the prescription of the insertion of $[\eta|p$ as the spinor corresponding to the off-shell momentum $p$. We note that the vertices only couple to the components $A_0$ and $B_0$ of the fields so that the second term in the propagator (\ref{propagators}) does not play a role except to allow one to interchange the role of $B_0$ and $A_\alpha$. We note that, since $\eta$ arises here as an ingredient in the gauge condition, BRST invariance implies that the overall amplitudes are independent of $\eta$. 
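Concretely, inserting $n$ wavefunctions of the form~(\ref{ffields}) into the $n$-point vertex of~(\ref{vertices}) and performing the ${\mathbb{P}}^1$ and ${\mathbb{E}}$ integrals as described yields, for a single colour ordering and up to overall normalization conventions, \begin{equation} A_n \;\propto\; g^{n-2}\,\delta^4\Big(\sum_{i=1}^n \tilde q_{i\,\alpha}\, q_{i\,\dot\alpha}\Big)\, \frac{\langle q_1\, q_p\rangle^4}{\prod_{i=1}^n \langle q_i\, q_{i+1}\rangle}\, , \end{equation} where $1$ and $p$ label the two external $B$ fields; this is the familiar Parke-Taylor form of the MHV amplitude.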
\section{Coupling to Matter} Matter fields of helicity $n/2$ correspond to $(0,1)$-forms $C_n$ of homogeneity $n-2$ with values in ${\mathrm{End}}\,E$ for adjoint matter, or $E$ or $E^*$ for `fundamental' matter. These are subject to a gauge freedom $C_n\rightarrow C_n+{\overline{\del}}_{gA}\kappa_n$ where $\kappa_n$ is an arbitrary smooth function of weight $n-2$ with values in $E$, $E^*$ or ${\mathrm{End}}\,E$ as appropriate. Thus an adjoint scalar field corresponds to a field $\phi$ on twistor space with values in $\Omega^{0,1}({\mathrm{End}}\,E\otimes{\mathcal{O}}(-2))$ and a fundamental massless fermion corresponds to fields $\lambda$ in $\Omega^{0,1}({\mathcal{O}}(-1)\otimes E)$ and $\tilde\lambda$ in $\Omega^{0,1}({\mathcal{O}}(-3)\otimes E^*)$ subject to the gauge freedom \begin{equation}\label{matter-gauge} (\phi, \lambda,\tilde\lambda)\rightarrow (\phi+{\overline{\del}}_{gA} \chi, \lambda+{\overline{\del}}_{gA}\xi,\tilde\lambda+ {\overline{\del}}_{gA}\tilde\xi) \end{equation} with $\chi, \xi,\tilde\xi$ of weights $-2,-1,-3$ respectively. The local part of the matter action is \begin{equation} S_{\phi,\lambda,\tilde\lambda, \mathrm{Loc}}= \int_{{\mathbb{PT}}'}\!\!\! \Omega\wedge \left(\frac{1}{2}{\rm tr}\,\phi\wedge({\overline{\del}}\phi+g[A,\phi]) + \tilde\lambda\wedge({\overline{\del}} +gA)\lambda\right) \label{matter-action} \end{equation} It is clear that~(\ref{matter-action}) is invariant under the gauge transformations~(\ref{gauge}). However, in order for the sum $S_{BF}+ S_{\phi,\lambda,\tilde\lambda, \mathrm{Loc}}$ ({\it i.e.} the complete local action) to be invariant under~(\ref{matter-gauge}), we must modify the transformation rule of $B$ to \begin{equation}\label{B-matter-gauge} B\rightarrow B+{\overline{\del}}\beta +g[A,\beta]+g[B,\gamma]-g[\chi,\phi]+ g\lambda \tilde\xi-g\xi\tilde\lambda.
\end{equation} This modified transformation law no longer leaves the $B^2$ term invariant, so we need to include new terms to compensate for this additional gauge freedom. These terms may be found by the Noether procedure. There is some freedom in the choice of these additional terms; we can fix this freedom by requiring that the action corresponds to matter fields which are minimally coupled on space-time. This requirement leads to the terms \begin{eqnarray} S_{B\, \phi^2 }&= & g\int_{{\mathbb{E}}}\hspace{-0.1cm} {\rm d}^4x \, \int_{({\mathbb{P}}^1)^3} \langle\pi_1\,\pi_2\rangle^2 \langle\pi_1\,\pi_3\rangle^2 {\rm tr} \left(B_1 K_{12}\phi_2K_{23}\phi_3 K_{31}\right) \prod_{i=1}^3{\rm D}\pi_i \\ S_{\phi^4 } &= & g^2\int_{{\mathbb{E}}}\hspace{-0.1cm} {\rm d}^4x \, \frac12\int_{({\mathbb{P}}^1)^4} \langle\pi_1\pi_2 \rangle^2\langle\pi_3\pi_4 \rangle^2\, {\rm tr} \left( \prod_{i=1}^4 \, \phi_iK_{i,i+1}{\rm D}\pi_i\right) \\ S_{B\, \lambda,\tilde\lambda}&= & g\int_{{\mathbb{E}}}\hspace{-0.1cm} {\rm d}^4x \, \int_{({\mathbb{P}}^1)^3} \frac{\langle\pi_1\pi_2 \rangle\langle\pi_2\pi_3 \rangle^3}{\langle\pi_1\pi_3\rangle}\, \left(\tilde\lambda_1K_{12}B_2 K_{23}\lambda_3 \right) \prod_{i=1}^3{\rm D}\pi_i \\ S_{\lambda^2,\tilde\lambda^2}&= & g^2\int_{{\mathbb{E}}}\hspace{-0.1cm} {\rm d}^4x \, \int_{({\mathbb{P}}^1)^4} \frac{\langle\pi_2\pi_4 \rangle\langle\pi_1\pi_3 \rangle^3 }{\langle\pi_4\pi_1 \rangle\langle\pi_2\pi_3 \rangle} \left(\tilde\lambda_1K_{12}\lambda_2 \right) \left(\tilde\lambda_3K_{34}\lambda_4\right) \prod_{i=1}^4 {\rm D}\pi_i \\ S_{\lambda,\tilde\lambda\, \phi^2}&= & g^2\int_{{\mathbb{E}}}\hspace{-0.1cm} {\rm d}^4x \, \int_{({\mathbb{P}}^1)^4} \left[ \frac{ \langle\pi_4\pi_3\rangle \langle\pi_1\pi_3 \rangle \langle\pi_1\pi_4 \rangle^2 }{\langle\pi_4\pi_1\rangle} \left(\tilde\lambda_1K_{12}\lambda_2 \right) {\rm tr} \left( \phi_3K_{34}\phi_4K_{43}\right) \right.\nonumber \\ &&\hspace{.7cm}\left.+ \frac{\langle\pi_1\pi_2\rangle^2 \langle\pi_1\pi_3\rangle
\langle\pi_3\pi_4\rangle + 2\leftrightarrow 3 }{2\langle\pi_4\pi_1\rangle} \left(\tilde\lambda_1K_{12}\phi_2K_{23}\phi_3K_{34}\lambda_4 \right) \right] \prod_{i=1}^4{\rm D}\pi_i \label{non-local-matter} \end{eqnarray} so that the total action is \begin{equation} S_{\rm Full}= S_{BF} + S_{\phi,\lambda,\tilde\lambda, \mathrm{Loc}} + S_{B^2} + S_{B\, \phi^2}+ S_{\phi^4}+ S_{B\, \lambda \, \tilde \lambda} +S_{\lambda^2\tilde\lambda^2 } +S_{\lambda,\tilde\lambda \phi^2}\ \ . \label{full-action} \end{equation} To see that these combinations are indeed invariant under~(\ref{matter-gauge}) and (\ref{B-matter-gauge}), note that $S_{B^2}$ will pick up a term which is an integral over $({\mathbb{P}}^1)^2$ of $g \langle\pi_1\,\pi_2\rangle^4 {\rm tr }(B_1 K_{12}[\chi_2,\phi_2]K_{21})$ from the $\chi$ variation of $B$. On the other hand, $S_{B\phi^2}$ will pick up a term from the variation of $\phi$ that is an integral over $({\mathbb{P}}^1)^3$ of $g\langle\pi_1\,\pi_2\rangle^2 \langle\pi_1\,\pi_3\rangle^2 {\rm tr} \left(B_1 K_{12}\left( ({\overline{\del}}_{gA_2}\chi_2)K_{23}\phi_3 + \phi_2K_{23}({\overline{\del}}_{gA_3}\chi_3) \right) K_{31}\right)$. We can integrate this last expression by parts, although care must be taken because of the singularities in the integrand, and use (\ref{dbarK}) to perform one of the $({\mathbb{P}}^1)^3$ integrals to obtain $g\langle\pi_1\,\pi_2\rangle^4 {\rm tr }(B_1 K_{12}[\chi_2,\phi_2]K_{21})$ cancelling the corresponding term in the variation of $S_{B^2}$. The $\chi$ part of the variation of $B$ in $S_{B\phi^2}$ however leads to new terms that are in turn cancelled by the corresponding variation of $S_{\phi^4}$ and so on for the $\xi$ and $\tilde\xi$ terms. All terms except $S_{B^2}$ vanish in space-time gauge, but are necessary for the full gauge invariance of the action. 
Thus the full action corresponds on space-time to Yang-Mills with a minimally coupled massless fermion $(\Lambda_\alpha,\tilde \Lambda_{\alpha'})$ in the (anti-)fundamental representation and a massless scalar field $\Phi$ in the adjoint representation. We note that we are free to include $\Phi^n$ interactions by including the additional gauge invariant terms \begin{equation} S_{\Phi^n}= c_n\int_{{\mathbb{E}}}\hspace{-0.1cm} {\rm d}^4x \, \int_{({\mathbb{P}}^1)^n} {\rm tr} \, \left( \prod_{i=1}^n \, \langle\pi_i\pi_{i+1} \rangle \phi_iK_{i,i+1}{\rm D}\pi_i\right) \label{phin} \end{equation} These terms are gauge invariant because the singularities in $K_{ij}$ in the integrand are cancelled by the factors of $\langle \pi_i\, \pi_j\rangle$. These correspond precisely to ${\rm tr }\Phi^n$ terms in the space-time Yang-Mills theory by use of the standard integral formula for scalar fields $\Phi(x)=\int_{{\mathbb{P}}^1_x} H\phi H^{-1}\,{\rm D}\pi$ where $H$ is the gauge transformation to space-time gauge. As before, we can impose the CSW gauge condition $\eta^\alpha A_\alpha=0=\eta^\alpha B_\alpha$, and similarly $\eta^\alpha\phi_\alpha = 0 =\eta^\alpha\lambda_\alpha= \eta^\alpha\tilde\lambda_\alpha$. In this gauge \begin{equation} \phi=T{\rm e}^{{\rm i}\tilde q_\alpha x^{\alpha\dot\alpha}q_{\dot\alpha}} \bar\delta_{0,-2}\langle q\, \pi\rangle\, \quad \tilde\lambda=f{\rm e}^{{\rm i}\tilde q_\alpha x^{\alpha\dot\alpha}q_{\dot\alpha}} \bar\delta_{-1,-1}\langle q\, \pi\rangle\, \quad \lambda=f^*{\rm e}^{{\rm i}\tilde q_\alpha x^{\alpha\dot\alpha}q_{\dot\alpha} } \bar\delta_{1,-3}\langle q\, \pi\rangle\, \end{equation} where $f$ is an element of the fundamental representation. The local parts of the action $S_{BF}+S_{\phi,\lambda,\tilde\lambda, \mathrm{Loc}}$ become quadratic and give rise to propagators that have the same form as~(\ref{propagators}), but with integers suitably altered to reflect the different homogeneities.
Just as for $S_{B^2}$, each of $S_{B\phi^2}, S_{\phi^4}, S_{B\lambda \tilde\lambda}, S_{\lambda^2\tilde\lambda^2}, S_{\lambda,\tilde\lambda\phi^2}, S_{\Phi^n}$ can be seen to be a generating function for MHV-type diagrams, now involving $\lambda, \tilde \lambda$ and $\phi$, by expanding out the Green's functions $K_{ij}$ in powers of $A$ according to equation (\ref{Kexpansion}) and substituting into the above formul\ae~as we did in (\ref{vertices}). For example, $S_{B\phi^2}$ gives rise to a sequence of MHV vertices, each with one positive helicity gluon corresponding to the $B$ field, two Higgs fields $\phi$ and an arbitrary number of negative helicity gluons. The other non-local terms in~(\ref{full-action}) behave similarly. The structure of these MHV-type vertices can be seen to be the same as was analysed in~\cite{Georgiou:2004by}. \section{Connection to space-time calculation} In space-time, Yang-Mills scattering amplitudes are calculated from correlation functions using the LSZ formalism. This instructs us to calculate $\langle {\mathcal{A}}_{\mu_1}(p_1) \ldots {\mathcal{A}}_{\mu_n}(p_n)\rangle$, isolate the residues of the single particle poles and contract with the polarization vectors. The transform of the twistor-space field $A$ to a space-time gauge field ${\mathcal{A}}$ is given by \begin{equation} {\mathcal{A}}_{\alpha \dot\alpha}(x) = \int k H \left( {\overline{\del}}_\alpha + A_{\alpha} \right)H^{-1} \frac{\hat{\pi}_{\dot\alpha} }{\langle\pi\,\hat{\pi}\rangle} \label{spacetimeA} \end{equation} where $H$ solves $({\overline{\del}}_0+A_0)H=0$ and is the gauge transformation to space-time gauge, and $k$ is a volume (K\"ahler) form on the ${\mathbb{P}}^1$. This follows from solving the constraint $F_{0\alpha}=0$ which arises by varying $B_\alpha$ in the action. Note that more than one field insertion will lead to multi-particle poles and so can be ignored, at least at tree level.
With this observation the linearization of (\ref{spacetimeA}) can be used so that, for the application of the LSZ formalism, operators \begin{equation} {\mathcal{A}}_{\alpha\dot\alpha} = \int k_1 \left(A_{\alpha}\frac{\hat{\pi_1}_{\dot\alpha} }{\langle\pi_1\,\hat{\pi}_1\rangle} +\frac{\hat\pi_{1\,\dot\alpha} {\overline{\del}}_{\alpha}}{\langle\pi_1\,\hat{\pi}_1\rangle} \int k_2 \frac{\langle\pi_1\,\xi\rangle}{\langle\pi_2\,\xi\rangle} \frac{A_0}{\langle\pi_1\, \pi_2\rangle} \right) \label{eq:bungalowbill} \end{equation} should be inserted into the path integral. Here the second term is the first term of the expansion of $H$ in $A_0$ and $\xi$ is an arbitrary spinor reflecting the residual gauge freedom in ${\mathcal{A}}_{\alpha\dot\alpha}$. In the axial CSW gauge, the vertices that contribute contain just fibre components of the $(0,1)$ forms. The only contractions to give $1/p^2$ poles are $\langle A_\alpha A_0 \rangle$ and $\langle A_0 B_0\rangle$, the first of these using one power of the $B^2$ vertex. The residues of these contributions follow from~(\ref{eq:bungalowbill}) and are given on-shell by \begin{equation} \int k_1 \left(A_{\alpha}\frac{\hat{\pi_1}_{\dot\alpha} }{\langle\pi_1\,\hat{\pi}_1\rangle} \right) \rightarrow\ \eta_\alpha [\eta\,\tilde q] q_{\dot\alpha}\qquad\qquad \int k_1 \left(\frac{\hat\pi_{1\,\dot\alpha} {\overline{\del}}_{\alpha}}{\langle\pi_1\,\hat{\pi}_1\rangle} \int k_2 \frac{\langle\pi_1\,\xi\rangle}{\langle\pi_2\,\xi\rangle} \frac{A_0}{\langle\pi_1\, \pi_2\rangle} \right)\ \rightarrow\ \frac{\xi_{\dot\alpha} \tilde q_\alpha}{[\xi\, q] [\eta\,\tilde q]^2}\ \ . 
\end{equation} These residues must be contracted with the polarization vectors which we recall are \begin{equation} \varepsilon^-_{\alpha\dot\alpha} = \frac{ \tilde q_{\alpha} \kappa_{\dot\alpha}}{\langle\kappa\,q\rangle} \qquad\hbox{and}\qquad \varepsilon^+_{\alpha\dot\alpha} = \frac{\tilde\kappa_{\alpha} q_{\dot\alpha}}{[\tilde\kappa\,\tilde q]} \end{equation} Hence it is clear that $A_\alpha$ and $A_0$ operator insertions correspond to insertions of different helicity states. In the above calculation $B_0$ could have been inserted instead of $A_\alpha$ since by the twistor transform on-shell $B_0$ is the field strength of the positive helicity gluon. Hence to insert the corresponding potential one can also use \begin{equation} \frac{p^{\alpha \dot\alpha} F_{\dot\alpha \dot\beta}}{p^2} = \frac{p^{\alpha \dot\alpha}}{p^2} \int k H B_0 H^{-1} \pi_{\dot\alpha} \pi_{\dot\beta} \end{equation} which can be verified to lead to the same prefactor as above. This is in effect an expression of the field equation which relates $B_0$ and $A_\alpha$. It is therefore seen that the usual calculation of Yang-Mills amplitudes is equivalent to the prescription given earlier. \section{Discussion} The most important drawback of our (and indeed most other) approaches to the MHV formalism is that, while there are now a number of positive results for loop amplitudes, it is still not clear that the extension to loop level can be made systematic. This problem appears particularly acute in the non-supersymmetric case where there are one-loop diagrams that cannot be constructed from MHV vertices and propagators alone, see~\cite{Brandhuber:2006bf} for a full discussion. Na\"ively it would seem that such problems should not arise in our approach as the action leads to a systematic perturbation theory including loops and the ghosts decouple both in the space-time gauge and in the CSW axial gauge. 
Nevertheless the missing one-loop diagrams provide us with a clear problem and it is still unclear how they might arise in our derivation of the MHV formalism. One possibility is that the missing diagrams arise from regularisation difficulties: it is well-known in space-time that axial gauges require extra care for certain poles (see {\it e.g.}~\cite{Leibbrandt:1987qv}), or the chiral nature of the action may obstruct the implementation of an efficient regularisation scheme. More likely however, in changing the path integral measure from space-time to twistor space fields one encounters a determinant in the spirit of~\cite{Feng:2006yy,Brandhuber:2006bf}. A potential source for this determinant is the complex nature of our gauge transformation to the CSW gauge from the space-time gauge; the path integral is only invariant under real gauge transformations and such complex ones may require an extra determinant in the path integral measure. Nonetheless, in our opinion this paper clearly shows that the existence of the MHV diagram formalism can be understood in terms of a linear and local gauge symmetry on twistor space. \bigskip \begin{acknowledgments} The authors would like to thank Freddy Cachazo and Wen Jiang for discussions. DS acknowledges the support of EPSRC (contract number GR/S07841/01) and a Mary Ewart Junior Research Fellowship. The work of LM and RB is supported by the European Community through the FP6 Marie Curie RTN {\it ENIGMA} (contract number MRTN-CT-2004-5652). \end{acknowledgments}
\section{Introduction} \label{introduction} Recommender Systems (RS) have become very common in recent years and are useful in various real-life applications. The most popular ones are probably suggestions for movies on Netflix and books on Amazon. However, they can also be used in less obvious areas such as drug discovery, where a key problem is the identification of candidate molecules that affect proteins associated with diseases. One of the approaches that has been widely used for the design of recommender systems is collaborative filtering (CF). This approach analyses a large amount of information on some users' preferences and tries to predict what other users may like. A key advantage of using collaborative filtering for recommendation systems is its capability of accurately recommending complex items (movies, books, music, etc.) without having to understand their meaning. For the rest of the paper, we refer to the items of a recommender system as movies and users, though they may represent different actors (compounds and protein targets for the ChEMBL benchmark, for example \cite{ChEMBL}). To deal with collaborative filtering challenges such as the size and the sparseness of the data to analyze, Matrix Factorization (MF) techniques have been used successfully. Indeed, they are usually more effective because they take into consideration the factors underlying the interactions between users and movies, called \emph{latent features}. As sketched in Figure~\ref{fig:mf}, the idea of these methods is to approximate the user-movie rating matrix $R$ as a product of two low-rank matrices $U$ and $V$ (for the rest of the paper $U$ refers to the user matrix and $V$ to the movie matrix) such that $R \approx U \times V$. In this way $U$ and $V$ are constructed from the known ratings in $R$, which is usually very sparsely filled. The recommendations can be made from the approximation $U \times V$, which is dense.
If $M$ $\times$ $N$ is the dimension of $R$, then $U$ and $V$ will have dimensions $M$ $\times$ $K$ and $N$ $\times$ $K$. $K$ represents the number of latent features characterizing the factors, $K \ll M$, $K \ll N$. Popular algorithms for low-rank matrix factorization are alternating least-squares (ALS) \cite{parallelALS}, stochastic gradient descent (SGD) \cite{parallelSGD} and Bayesian probabilistic matrix factorization (BPMF) \cite{BPMF}. Thanks to the Bayesian approach, BPMF has been shown to be more robust to data overfitting and free from cross-validation (otherwise needed for the tuning of regularization parameters). In addition, BPMF easily incorporates confidence intervals and side-information \cite{SIDEINFORMATION, simm:macau}. Yet BPMF is more computationally intensive and thus more challenging to implement for large datasets. Therefore, the contribution of this work is to propose a parallel implementation of BPMF that is suitable for large-scale distributed systems. An earlier version of this work has been published in \cite{bpmf_cluster16}. Compared to that earlier version, this work adds efficient asynchronous communication using GASPI~\cite{grunewald:gaspi}. The remainder of this paper is organized as follows. Section~\ref{sec:BPMF} describes the BPMF algorithm. In Section~\ref{sec:MBPMF}, the shared-memory version of the parallel BPMF is described. In Section~\ref{sec:DBPMF}, details about the distributed BPMF are given. The experimental validation and associated results are presented in Section~\ref{sec:experiments}. In Section~\ref{sec:related}, existing work dealing with parallel matrix factorization techniques and BPMF in particular is presented.
Some conclusions and perspectives of this work are drawn in Section~\ref{sec:conclusion}. \begin{figure} \begin{minipage}{.5\textwidth} \centering \includegraphics[width=\columnwidth]{matrix_factorization.pdf} \caption{Low-rank Matrix Factorization} \label{fig:mf} \end{minipage} \begin{minipage}{.5\textwidth} \centering \includegraphics[width=\columnwidth]{chembl_histogram} \caption{Histogram for the ChEMBL dataset of the number of ratings per user.} \label{fig:chembl_histogram} \end{minipage} \end{figure} \section{BPMF} \label{sec:BPMF} The BPMF algorithm \cite{BPMF} puts matrix factorization in a Bayesian framework by assuming a generative probabilistic model for ratings with prior distributions over parameters. It introduces common multivariate Gaussian priors for the users in $U$ and the movies in $V$. To infer these two priors from the data, BPMF places fixed uninformative Normal-Wishart hyperpriors on them. We use a Gibbs sampler to sample from the prior and hyperprior distributions. This sampling algorithm can be expressed as the pseudo code shown in Algorithm~\ref{algo:bpmf_pseudo}. Most time is spent in the loops updating $U$ and $V$, where each iteration consists of some relatively basic matrix and vector operations on $K \times K$ matrices, and one computationally more expensive $K \times K$ matrix inversion.
\begin{algorithm} \footnotesize \LinesNumbered \For{sampling iterations} { sample hyper-parameters movies based on V \For{all movies $m$ of $M$} { update movie model $m$ based on ratings ($R$) for this movie and model of users that rated this movie, plus randomly sampled noise } sample hyper-parameters users based on U \For{all users $u$ of $U$} { update user $u$ based on ratings ($R$) for this user and model of movies this user rated, plus randomly sampled noise } \For{all test points} { predict rating and compute RMSE } } \caption{BPMF Pseudo Code} \label{algo:bpmf_pseudo} \end{algorithm} These matrix and vector operations are very well supported in Eigen~\cite{Eigen}, a high-performance modern C++11 linear algebra library. Sampling from the basic distributions is available in the C++ standard template library (STL), or can be trivially implemented on top. As a result, the Eigen-based C++ version of Algorithm~\ref{algo:bpmf_pseudo} is a mere 35 lines of C++ code. \section{Multi-core BPMF} \label{sec:MBPMF} In this section we describe how to optimize this implementation to run efficiently on a shared-memory multi-core system. A version for distributed systems with multiple compute nodes is explained in a separate section. \subsection{Single Core Optimizations} Most of the time is spent updating users' and movies' models. This involves computing a $K \times K$ outer product for the covariance matrix and inverting this matrix to obtain the precision matrix. Since the precision matrix is used only once, in a matrix-vector product, we can avoid the full inverse and only compute the Cholesky decomposition. Furthermore, if the number of ratings for a user/movie is small, a rank-one update~\cite{stewart:matrix_algos} is more efficient. Updating a single user in $U$ depends on the movies in $V$ for which there are ratings in $R$. Hence, the access patterns to $U$ and $V$ are determined by the sparsity pattern in $R$.
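For illustration, the per-item update in the loops of Algorithm~\ref{algo:bpmf_pseudo} amounts to forming a conditional Gaussian: given the sampled prior mean $\mu$ and precision $\Lambda$ for the users, and the observation precision $\alpha$, the posterior precision of user $i$ is $\Lambda + \alpha\sum_j v_j v_j^T$ and the posterior mean is its inverse applied to $\Lambda\mu + \alpha\sum_j r_{ij} v_j$, where $j$ runs over the movies rated by user $i$. The following is a minimal, self-contained Python sketch of this deterministic part (the actual Gibbs step also adds Gaussian noise with the posterior covariance, and our implementation uses Eigen in C++ rather than Python):

```python
def update_user(user_ratings, V, mu, Lam, alpha):
    """Posterior mean for one user given the movies it rated.

    user_ratings : list of (movie_index, rating) pairs for this user
    V            : list of K-dimensional movie latent vectors
    mu, Lam      : prior mean (length K) and precision (K x K) from the hyper-sample
    alpha        : observation precision of the ratings
    """
    K = len(mu)
    # Posterior precision: Lam + alpha * sum_j v_j v_j^T
    P = [[Lam[a][b] for b in range(K)] for a in range(K)]
    # Right-hand side: Lam * mu + alpha * sum_j r_ij v_j
    rhs = [sum(Lam[a][b] * mu[b] for b in range(K)) for a in range(K)]
    for j, r in user_ratings:
        v = V[j]
        for a in range(K):
            rhs[a] += alpha * r * v[a]
            for b in range(K):
                P[a][b] += alpha * v[a] * v[b]
    return solve(P, rhs)          # posterior mean = P^{-1} rhs


def solve(A, b):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x
```

The \texttt{solve} call plays the role of the Cholesky solve discussed above; for items with few ratings the rank-one update variant is cheaper.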
By reordering the columns and rows of $R$, we can improve the data locality and thus the program's cache behavior. Since the access pattern in BPMF is similar to the access pattern of a sparse matrix-vector multiplication (SpMV), we reused the technique proposed in \cite{vastenhouw:mondriaan}. To sample the hyper-parameters, a global average and covariance across both $U$ and $V$ need to be computed. Standalone, the computation of these values is dominated by the long-latency memory accesses to $U$ and $V$. However, if we integrate the computation of these aggregates with the updates of $U$ and $V$, they become almost free. \subsection{Multi-core-based parallel BPMF} The main challenge for performing BPMF in parallel is how to distribute the data and the computations amongst parallel workers (threads and/or distributed nodes). For shared-memory architectures, our main concerns were using as many threads as possible, keeping all threads as busy as possible, and minimizing discontinuous memory accesses. Since the numbers of user entries (resp. movie entries) are very large and since they can all be computed in parallel, it makes sense to assign a set of items to each thread. Next, balanced work sharing is a major way of avoiding idle parallel threads. Indeed, if the amount of computation is not balanced, some threads are likely to finish their tasks and stay idle waiting for others to finish. As can be seen in Figure~\ref{fig:chembl_histogram}, there are items (users or movies) with a large number of ratings, for which the amount of computation is substantially larger than for items with fewer ratings. To ensure a good load balance, we use a cheaper but serial algorithm, based on the aforementioned rank-one update, for items with fewer than 1000 ratings. For items with more ratings, we use a parallel algorithm containing a full Cholesky decomposition.
This choice is motivated by Figure~\ref{fig:parallel_sample}, which shows the time to update one item versus the number of ratings for the three possible algorithms. By using the parallel algorithm for the more expensive users/movies, we effectively split them up into smaller tasks that can utilize multiple cores of the system. \begin{figure} \centering \includegraphics[width=0.6\textwidth]{parallel_sample} \caption{Compute time to update one item for the three methods: sequential rank-one update, sequential Cholesky decomposition, and parallel Cholesky decomposition as a function of the number of ratings.} \label{fig:parallel_sample} \end{figure} \section{Distributed parallel BPMF} \label{sec:DBPMF} The multi-core BPMF implementation presented above has been extended to distributed systems using three different distributed programming models: MPI~\cite{MPI}, GASPI~\cite{grunewald:gaspi} and ExaSHARK~\cite{SHARK}. In this section we first describe the three programming models, and then how the data is distributed across nodes, how the work per node is balanced, and how communication is handled for the three approaches. \subsection{Distributed Programming} \subsubsection{MPI-3.0} Message Passing Interface (MPI) is a standardized and portable message-passing system for distributed systems. The latest standard, MPI-3.0, includes features important for this BPMF implementation, for example support for asynchronous communication and support for hybrid applications combining message passing with shared-memory parallelism such as OpenMP~\cite{openmp} or TBB~\cite{tbb}. \subsubsection{GASPI} The Global Address Space Programming Interface (GASPI \cite{grunewald:gaspi}) is the specification of a PGAS-style programming model for C/C++ and Fortran. The API consists of a set of basic routines.
As an alternative to MPI, its main advantages are \emph{i)} its one-sided communication layer, which can take full advantage of the hardware's remote direct memory access (RDMA) capabilities so that no CPU cycles are spent on communication, \emph{ii)} the fact that the GASPI library has been optimized to work in a multi-threaded environment, and \emph{iii)} its seamless interoperability with MPI. \subsubsection{ExaSHARK} Compared to MPI and GASPI, ExaSHARK is a library at a much higher level of abstraction, designed to handle matrices that are physically distributed across multiple nodes. Access to the global array is performed through logical indexing. ExaSHARK is portable since it is built upon widely used technologies such as MPI and the C++ programming language. It provides a global-arrays-like interface with template-based functions (dot products, matrix multiplications, unary expressions) that offer transparent execution across the whole system. \subsection{Data Distribution} We distribute the matrices $U$ and $V$ across the system, where each node computes its part. When an item is computed, the rating matrix $R$ determines to which nodes this item needs to be sent. Our main concerns in distributing $U$ and $V$ are to make sure the computational load is distributed as equally as possible and that the amount of data communication is minimized. Similarly to the cache optimization mentioned above, we can reorder the rows and columns of $R$ to minimize the number of items that have to be exchanged, if we split and distribute $U$ and $V$ according to consecutive regions of $R$. Additionally, we take work balance into account when reordering $R$. For this we use a workload model derived from Figure~\ref{fig:parallel_sample}. The blue curve in the figure gives a reasonable idea of the amount of work for a user or movie in relation to its number of ratings.
As the figure shows, when the number of ratings is small, the work per rating is higher than for items with many ratings. Hence we approximate the workload per user/movie with a fixed cost plus a cost per rating. \subsection{Updates and data communication} \subsubsection{Communication using ExaSHARK} For the user updates, one-sided communication is used only when a user is outside a process's range, namely through the \texttt{\small GlobalArray::get()} routine. Indeed, thanks to the PGAS model, each process knows which other process owns a particular range of the global array. \subsubsection{Communication using pure MPI} To allow communication and computation to overlap, we send each updated user/movie as soon as it has been computed. For this we use the asynchronous MPI-3.0 routines \texttt{\small MPI\_Isend} and \texttt{\small MPI\_Irecv}. However, the overhead of calling these routines is too high to send each item individually to the nodes that need it. Additionally, too many messages would be in flight at the same time for the runtime to handle efficiently. Hence we store items that need to be sent in a temporary buffer and only send when the buffer is full. \subsubsection{Communication using GASPI} Because GASPI is more light-weight, we can afford to simply send (\texttt{\small gaspi\_write}) an item once it has been computed. \section{Validation} \label{sec:experiments} In this section, we present the experimental results and related discussion for the proposed parallel implementations of BPMF described above. \subsection{Hardware platform} We performed experiments on Lynx, a cluster with 20 nodes, each equipped with dual 6-core Intel(R) Westmere CPUs with 12 hardware threads each, a clock speed of 2.80GHz and 96 GB of RAM, and on Anselm, a cluster with 209 nodes, each equipped with two 8-core Intel(R) Sandy Bridge CPUs and at least 64GB of RAM per node.
\subsection{Benchmarks} \label{bench} Two public benchmarks have been used to evaluate the performance of the proposed approaches: the ChEMBL dataset \cite{ChEMBL} and the MovieLens~\cite{harper:movielens} database. The ChEMBL dataset comes from the drug discovery research field. It contains descriptions of biological activities involving over a million chemical entities, extracted primarily from the scientific literature. Several versions exist since the dataset is updated on a fairly frequent basis. In this work, we used a subset of version 20 of the database, which was released in February 2015. The subset is selected based on the half maximal inhibitory concentration (IC50), a measure of the effectiveness of a substance in inhibiting a specific biological or biochemical function. The dataset contains around 1,023,952 ratings from 483,500 compounds (acting as users) and 5,775 targets (acting as movies). The MovieLens dataset (ml-20m) describes 5-star rating and free-text tagging activity from MovieLens, a movie recommendation service. It contains 20M ratings across 27,278 movies. These data were created by 138,493 users between January 09, 1995 and March 31, 2015. In all experiments, all versions of the parallel BPMF reach the same level of prediction accuracy, evaluated using the root mean square error (RMSE), a widely used measure of the differences between the values predicted by a model or an estimator and the values actually observed \cite{Hyndman:RMSE}. \subsection{Results for Multi-core BPMF} In this section, we compare the performance of the proposed multi-core BPMF with the GraphLab library, a state-of-the-art library widely used in the machine learning community. We have chosen GraphLab because it is known to outperform other similar graph processing implementations~\cite{Guo:graphlab}.
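The RMSE accuracy metric used above admits a direct definition; a minimal sketch (names are illustrative):

```python
import math

def rmse(predicted, observed):
    """Root mean square error between predicted and observed ratings."""
    if len(predicted) != len(observed) or not predicted:
        raise ValueError("inputs must be non-empty and of equal length")
    squared_error = sum((p - o) ** 2 for p, o in zip(predicted, observed))
    return math.sqrt(squared_error / len(predicted))
```

Since RMSE depends only on the predicted and observed values, it can be computed identically regardless of which parallel variant produced the predictions.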
The results presented in Figure \ref{fig:multicore} report the performance, in number of updates to $U$ and $V$ per second, for the ChEMBL benchmark on a machine with 12 cores for four different versions: \textbf{TBB}, the C++ implementation using Intel's Threading Building Blocks (TBB) for shared memory parallelization; \textbf{OpenMP}, the C++ implementation using Intel's OpenMP for shared memory parallelization; \textbf{SHARK}, the ExaSHARK version; and \textbf{GraphLab}, the version using GraphLab. The number of latent features ($K$) is equal to 50. \begin{figure} \centering \includegraphics[width=0.6\columnwidth]{chembl_multicore.pdf} \caption{Performance of the multi-core BPMF on the ChEMBL dataset in number of updates to $U$ and $V$ versus the number of parallel threads.} \label{fig:multicore} \end{figure} The results show that all parallel implementations of BPMF scale with the increasing number of cores used. However, there is a clear correlation between the abstraction level used and the performance obtained. The TBB and OpenMP versions are the most low-level and obtain the highest performance; higher-level libraries like ExaSHARK and GraphLab focus less on performance, and this gap is clearly visible in the graph. GraphLab, for example, uses TCP sockets and Ethernet instead of MPI and InfiniBand. The TBB version performs better than the OpenMP version because of TBB's support for nested parallelism and because TBB uses a work-stealing scheduler that can better balance the work. \subsection{Distributed BPMF} \label{Distributed BPMF} In this section, the strong scaling of the different versions of the distributed BPMF is studied. We first present results for the ChEMBL dataset on a relatively small cluster with 12 nodes, comparing the MPI, GASPI and ExaSHARK versions and showing the benefit of asynchronous communication even at such small scales.
Then we show that there are large differences between the asynchronous versions on larger clusters, and we find the limits of scaling such a tightly integrated algorithm as BPMF. Figure~\ref{fig:multinode} (left) shows a clear advantage for the two asynchronous communication versions, namely the GASPI version and the MPI version using \texttt{\small MPI\_Isend} and \texttt{\small MPI\_Irecv}. For these versions communication happens in the background, in parallel with computation, while for the two other versions, the ExaSHARK version and the version using MPI broadcast (\texttt{\small MPI\_Bcast}), communication happens after the computation, and thus the performance gained by adding more nodes is lost again in the time spent communicating. \begin{figure*} \centering \includegraphics[width=0.49\textwidth]{chembl_12nodes.pdf} \includegraphics[width=0.49\textwidth]{movielens_128nodes.pdf} \caption{Performance of the distributed BPMF on the ChEMBL dataset (left) and MovieLens dataset (right) in number of updates to $U$ and $V$ per second versus the number of cores used.} \label{fig:multinode} \end{figure*} Scaling further to 128 nodes, the difference between the asynchronous versions becomes apparent. Figure~\ref{fig:multinode} (right) shows that the GASPI version scales better than the asynchronous MPI version, achieving more than 70\% parallel efficiency on 128 nodes compared to 10\% for the MPI version. This is due to two factors. Firstly, the GASPI communication library is much more light-weight than MPI, spending about 2.5x less time per message sent. Secondly, and because of this, the GASPI version hides 85\% of the communication time (on 128 nodes), while for the MPI version this is a mere 10\%. The overlap of communication and computation is displayed in Figure~\ref{fig:overlap}. In this figure, \emph{both} means that the network hardware is sending data (communicating) while the processor is busy doing computations.
A clear difference between MPI on the left and GASPI on the right is visible. As can already be seen from the GASPI results on 128 nodes, we expect the performance of the GASPI version to level off as well. This is due to the general decrease in the amount of work per node (fewer items) and the increase in the amount of communication (more nodes). Changes to the algorithm itself are needed to keep scaling. \begin{figure*} \centering \includegraphics[width=0.4\textwidth]{overlap_mpi.pdf} \includegraphics[width=0.4\textwidth]{overlap_gaspi.pdf} \caption{Time spent computing, communicating and doing both for the MPI implementation (left) and GASPI implementation (right).} \label{fig:overlap} \end{figure*} \section{Related Work} \label{sec:related} Apart from Bayesian Probabilistic Matrix Factorization (BPMF) \cite{BPMF}, the most popular algorithms for low-rank matrix factorization are probably alternating least squares (ALS) \cite{parallelALS} and stochastic gradient descent (SGD) \cite{parallelSGD}. SGD randomly loops through all observed user-movie interactions, computes the prediction error for each interaction, and modifies the model parameters in the direction opposite to the gradient. The ALS technique repeatedly keeps one of the matrices $U$ and $V$ fixed, so that the other one can be optimally re-computed. ALS then alternates between re-computing the rows of $U$ in one step and the columns of $V$ in the subsequent step. The advantage of BPMF is that the predictions are averaged over all the samples from the posterior distribution and all the model parameters are integrated out. While a growing number of works have studied parallel implementations of SGD \cite{parallelSGD,parallelSGD2} and ALS \cite{parallelALS}, less research has dealt with the parallelization of BPMF \cite{BPMFDesc,parallelMCMC}.
Indeed, computing the posterior inference, whose time complexity per iteration is cubic in the rank of the factor matrices ($\approx K^3$), may become prohibitively expensive when the number of users and movies runs into the millions. SGD, on the other hand, is computationally less expensive, even though it needs more iterations to reach a good enough prediction and its performance is sensitive to the choice of the learning rate. As for ALS, despite its time complexity per iteration, previous related work \cite{parallelALS} showed that it is well suited for parallelization. In \cite{parallelMCMC}, a distributed Bayesian matrix factorization algorithm using stochastic gradient Markov Chain Monte Carlo (MCMC) is proposed. That work is much closer to ours than the aforementioned ALS and SGD approaches. In the paper, the authors extended Distributed Stochastic Gradient Langevin Dynamics (DSGLD) for more efficient learning. To increase prediction accuracy, they use multiple parallel chains in order to collect samples at a much faster rate and to explore different modes of the parameter space. In our work, the Gibbs sampler is used because it is known for producing the best-quality samples, even though it is more difficult to parallelize. From a parallel programming perspective, a master-slave model is considered in \cite{parallelMCMC}. The initial matrix $R$ is partitioned into a grid of as many independent blocks as there are workers. At each iteration, the master picks a block using a block scheduler and sends the corresponding chunks of $U$ and $V$ to the block's worker. Upon reception, the worker updates these chunks by running DSGLD using its local block of ratings. Afterwards, the worker sends the chunks back to the master, which then updates its global copy of the matrices $U$ and $V$.
Two levels of parallelism are used by the authors as a way of compensating for the low mixing rate of SGLD: a parallel execution of the same sampling step (chain), and different samples computed in parallel. In our work, a PGAS approach is used where the computation is fully decentralized and where the matrices are defined as global arrays. In such a decentralized model, no global barrier is needed to update the matrices or to synchronize the block distribution scheduling as in \cite{parallelMCMC}. Nor is a bottleneck created when the updates of the matrices are exchanged. \section{Conclusion and Future Work} \label{sec:conclusion} This work proposed a high-performance distributed implementation of the Bayesian probabilistic matrix factorization algorithm. We have shown that load balancing and low-overhead asynchronous communication are essential to achieve good parallel efficiency, clearly outperforming more common synchronous approaches such as GraphLab. The achieved speed-up reduced the time for machine learning on an industrial drug discovery dataset from 15 days for the initial Julia-based version to 5 minutes using the distributed version with TBB and GASPI. Future work includes extending the framework to support more matrix factorization methods such as Group Factor Analysis \cite{GFA} or Macau \cite{simm:macau}, but also a look at more scalable MF \emph{algorithms}. \section*{Acknowledgments} This work is partly funded by the European project ExCAPE with reference 671555. We acknowledge IT4I for providing access to the Anselm and Salomon systems.
\subsubsection{Acknowledgements.} We are grateful to David Baelde for his help in phrasing the definition of the logical relation in Section~\ref{ssec:vericc}. The paper has benefited from many suggestions from its reviewers. This work has been supported by the National Science Foundation grant CCF-0917140 and by the University of Minnesota through a Doctoral Dissertation Fellowship and a Grant-in-Aid of Research. Opinions, findings and conclusions or recommendations that are manifest in this material are those of the participants and do not necessarily reflect the views of the NSF. \section{The Framework} \label{sec:framework} \vspace{-0.1cm} We describe, in turn, the specification logic and \LProlog, the reasoning logic, and the manner in which the Abella system embeds the specification logic. \vspace{-0.3cm} \subsection{The specification logic and \LProlog} The \HOHH logic is an intuitionistic and predicative fragment of Church's Simple Theory of Types~\cite{church40}. Its types are formed using the function type constructor \lsti|->| over user defined primitive types and the distinguished type \lsti|o| for formulas. Expressions are formed from a user-defined \emph{signature} of typed constants whose argument types do not contain \lsti|o| and the \emph{logical constants} $\simply$ and $\sconj$ of type \lsti|o -> o -> o| and $\Pi_\tau$ of type \lsti|($\tau$ -> o) -> o| for each type $\tau$ not containing \lsti|o|. We write $\simply$ and $\sconj$, which denote implication and conjunction respectively, in infix form. Further, we write $\Pi_\tau\app \lambda (x:\tau) M$, which represents the universal quantification of $x$ over $M$, as $\typedforall{\tau}{x}{M}$. 
The logic is oriented around two sets of formulas called \emph{goal formulas} and \emph{program clauses} that are given by the following syntax rules: \vspace{-0.15cm} \begin{tabbing} \qquad\=$G$\qquad\=::=\qquad\=\kill \>$G$\>::=\>$A\sep G \;\sconj\; G\sep D\simply G \sep \typedforall{\tau}{x}{G}$\\ \>$D$\>::=\>$A\sep G\simply A\sep \typedforall{\tau}{x}{D}$ \end{tabbing} \vspace{-0.15cm} Here, $A$ represents \emph{atomic formulas} that have the form $(p\app t_1\app \ldots\app t_n)$ where $p$ is a (user defined) \emph{predicate constant}, \ie a constant with target type \lsti|o|. Goal formulas of the last two kinds are referred to as hypothetical and universal goals. Using the notation $\Pi_{\bar{\tau}}\bar{x}$ to denote a sequence of quantifications, we see that a program clause has the form $\typedforall{\bar{\tau}}{\bar{x}}{A}$ or $\typedforall{\bar{\tau}}{\bar{x}}{G \simply A}$. We refer to $A$ as the head of such a clause and $G$ as the body; in the first case the body is empty. A collection of program clauses constitutes a \emph{program}. A program and a signature represent a specification of all the goal formulas that can be derived from them. The derivability of a goal formula $G$ is expressed formally by the judgment $\sequent{\Sigma}{\Theta}{\Gamma}{G}$ in which $\Sigma$ is a signature, $\Theta$ is a collection of program clauses defined by the user and $\Gamma$ is a collection of dynamically added program clauses. The validity of such a judgment---also called a sequent---is determined by provability in intuitionistic logic but can equivalently be characterized in a goal-directed fashion as follows. If $G$ is conjunctive, it yields sequents for ``solving'' each of its conjuncts in the obvious way. 
If it is a hypothetical or a universal goal, then one of the following rules is used: \vspace{-0.1cm} \begin{smallgather} \infer[\impR] {\sequent{\Sigma}{\Theta}{\Gamma}{D \simply G}} {\sequent{\Sigma}{\Theta}{\Gamma, D}{G}} \qquad \infer[\forallR] {\sequent{\Sigma}{\Theta}{\Gamma}{\typedforall{\tau}{x}{G}}} {(c \notin \Si) & \sequent{\Sigma,c:\tau}{\Theta}{\Gamma}{G[c/x]}} \end{smallgather} \vspace{-0.1cm} \noindent In the \forallR\ rule, $c$ must be a constant not already in $\Sigma$; thus, these rules respectively cause the program and the signature to grow while searching for a derivation. Once $G$ has been simplified to an atomic formula, the sequent is derived by generating an instance of a clause from $\Theta$ or $\Gamma$ whose head is identical to $G$ and by constructing a derivation of the corresponding body of the clause if it is non-empty. This operation is referred to as backchaining on a clause. In presenting \HOHH specifications in this paper we will show programs as a sequence of clauses each terminated by a period. We will leave the outermost universal quantification in these clauses implicit, indicating the variables they bind by using tokens that begin with uppercase letters. We will write program clauses of the form $G \simply A$ as $A$~\lsti+:-+$G$. We will show goals of the form $G_1 \land G_2$ and $\typedforall{\tau}{y}{G}$ as $G_1$\lsti+,+$G_2$ and \lsti+pi+~$y:\tau$\lsti+\+~$G$, respectively, dropping the type annotation in the latter if it can be filled in uniquely based on the context. Finally, we will write abstractions as $y$\lsti+\+$M$ instead of $\lambdax{y}{M}$. Program clauses provide a natural means for encoding rule based specifications. Each rule translates into a clause whose head corresponds to the conclusion and whose body represents the premises of the rule. These clauses embody additional mechanisms that simplify the treatment of binding structure in object languages. 
They provide $\lambda$-terms as a means for representing objects, thereby allowing binding to be reflected into an explicit meta-language abstraction. Moreover, recursion over such structure, that is typically treated via side conditions on rules expressing requirements such as freshness for variables, can be captured precisely through universal and hypothetical goals. This kind of encoding is concise and has logical properties that we can use in reasoning. We illustrate the above ideas by considering the specification of the typing relation for the simply typed $\lambda$-calculus (STLC). Let $N$ be the only atomic type. We use the \HOHH type \lsti+ty+ for representations of object language types that we build using the constants \lsti+n : ty+ and \lsti+arr : ty -> ty -> ty+. Similarly, we use the \HOHH type \lsti+tm+ for encodings of object language terms that we build using the constants \lsti+app : tm -> tm -> tm+ and \lsti+abs : ty -> (tm -> tm) -> tm+. The type of the latter constructor follows our chosen approach to encoding binding: for example, we represent the \STLC expression $(\typedlambda{y}{N \to N}{\typedlambda{x}{N}{(y \app x)}})$ by the \HOHH term \lsti+(abs (arr n n) (y\ (abs n (x\ (app y x)))))+. Typing for the STLC is a judgment written as $\Gamma \tseq T : \mbox{\it Ty}$ that expresses a relationship between a context $\Gamma$ that assigns types to variables, a term $T$ and a type $\mbox{\it Ty}$. Such judgments are derived using the following rules: \vspace{-0.1cm} \begin{smallgather} \infer[]{ \Gamma \tseq T_1 \app T_2 : \mbox{\it Ty}_2 }{ \Gamma \tseq T_1 : \mbox{\it Ty}_1 \to \mbox{\it Ty}_2 & \Gamma \tseq T_2 : \mbox{\it Ty}_1 } \quad \infer[]{ \Gamma \tseq \typedlambda{y}{\mbox{\it Ty}_1}T : (\mbox{\it Ty}_1 \to \mbox{\it Ty}_2) }{ \Gamma, y:\mbox{\it Ty}_1 \tseq T : \mbox{\it Ty}_2 } \end{smallgather} \vspace{-0.5cm} The second rule has a proviso: $y$ must be fresh to $\Gamma$. 
In the $\lambda$-tree syntax approach, we encode typing as a binary relation between a term and a type, treating the typing context implicitly via dynamically added clauses. Using the predicate \lsti|of| to represent this relation, we define it through the following clauses: \vspace{-0.10cm} \begin{lstlisting} of (app T1 T2) Ty2 :- of T1 (arr Ty1 Ty2), of T2 Ty1. of (abs Ty1 T) (arr Ty1 Ty2) :- pi y\ (of y Ty1 => of (T y) Ty2). \end{lstlisting} \vspace{-0.10cm} \noindent The second clause effectively says that \lsti|(abs Ty1 T)| has the type \lsti|(arr Ty1 Ty2)| if \lsti|(T y)| has type \lsti|Ty2| in an extended context that assigns \lsti|y| the type \lsti|Ty1|. Note that the universal goal ensures that \lsti|y| is new and, given our encoding of terms, \lsti|(T y)| represents the body of the object language abstraction in which the bound variable has been replaced by this new name. The rules for deriving goal formulas give \HOHH specifications a computational interpretation. We may also leave particular parts of a goal unspecified, representing them by ``meta-variables,'' with the intention that values be found for them that make the overall goal derivable. This idea underlies the language $\lambda$Prolog that is implemented, for example, in the Teyjus system~\cite{teyjus.website}. \vspace{-0.3cm} \subsection{The reasoning logic and Abella}\label{sec:reasoning} The inference rules that describe a relation are usually meant to be understood in an ``if and only if'' manner. Only the ``if'' interpretation is relevant to using rules to effect computations and their encoding in the \HOHH logic captures this part adequately. To reason about the \emph{properties} of the resulting computations, however, we must formalize the ``only if'' interpretation as well. This functionality is realized by the logic \Gee that is implemented in the Abella system. The logic \Gee is also based on an intuitionistic and predicative version of Church's Simple Theory of Types. 
Its types are like those in \HOHH except that the type \lsti|prop| replaces \lsti|o|. Terms are formed from user-defined constants whose argument types do not include \lsti|prop| and the following logical constants: \lsti|true| and \lsti|false| of type \lsti|prop|; $\conj$, ${\lor}$ and \lsti|->| of type \lsti|prop -> prop -> prop| for conjunction, disjunction and implication; and, for every type $\tau$ not containing \lsti|prop|, the quantifiers $\forall_\tau$ and $\exists_\tau$ of type \lsti|($\tau$ -> prop) -> prop| and the equality symbol \lsti|=$_\tau$| of type \lsti|$\tau$ -> $\tau$ -> prop|. The formula $B =_\tau B'$ holds if and only if $B$ and $B'$ are of type $\tau$ and equal under $\alpha\beta\eta$ conversion. We will omit the type $\tau$ in logical constants when its identity is clear from the context. A novelty of \Gee is that it is parameterized by \emph{fixed-point definitions}. Such definitions consist of a collection of \emph{definitional clauses} each of which has the form $\forall {\bar{x}}, A \defeq B$ where $A$ is an atomic formula all of whose free variables are bound by $\bar{x}$ and $B$ is a formula whose free variables must occur in $A$; $A$ is called the head of such a clause and $B$ is called its body.\footnote{To be acceptable, definitions must cumulatively satisfy certain stratification conditions~\cite{mcdowell00tcs} that we adhere to in the paper but do not explicitly discuss.} To illustrate definitions, let \lsti|olist| represent the type of lists of \HOHH formulas and let \lsti|nil| and \lsti|::|, written in infix form, be constants for building such lists. Then the append relation at the \lsti|olist| type is defined in \Gee by the following clauses: \vspace{-0.1cm} \begin{lstlisting} append nil L L; append (X :: L1) L2 (X :: L3) := append L1 L2 L3. 
\end{lstlisting} \vspace{-0.1cm} This presentation also illustrates several conventions used in writing definitions: clauses of the form $\forall {\bar{x}}, A \defeq\,$\lsti|true| are abbreviated to $\forall {\bar{x}}, A$, the outermost universal quantifiers in a clause are made implicit by representing the variables they bind by tokens that start with an uppercase letter, and a sequence of clauses is written using semicolon as a separator and period as a terminator. The proof system underlying \Gee interprets atomic formulas via the fixed-point definitions. Concretely, this means that definitional clauses can be used in two ways. First, they may be used in a backchaining mode to derive atomic formulas: the formula is matched with the head of a clause and the task is reduced to deriving the corresponding body. Second, they can also be used to do case analysis on an assumption. Here the reasoning structure is that if an atomic formula holds, then it must be because the body of one of the clauses defining it holds. It therefore suffices to show that the conclusion follows from each such possibility. The clauses defining a particular predicate can further be interpreted inductively or coinductively, leading to corresponding reasoning principles relative to that predicate. As an example of how this works, consider proving \vspace{-0.1cm} \begin{lstlisting} forall L1 L2 L3, append L1 L2 L3 -> append L1 L2 L3' -> L3 = L3' \end{lstlisting} \vspace{-0.1cm} assuming that we have designated \lsti|append| as an inductive predicate. An induction on the first occurrence of \lsti|append| then allows us to assume that the entire formula holds any time the leftmost atomic formula is replaced by a formula that is obtained by unfolding its definition and that has \lsti|append| as its predicate head. 
Many arguments concerning binding require the capability of reasoning over structures with free variables where each such variable is treated as being distinct and not further analyzable. To provide this capability, \Gee includes the special \emph{generic} quantifier $\nabla_\tau$, pronounced as ``nabla'', for each type $\tau$ not containing \lsti|prop|~\cite{miller05tocl}. In writing this quantifier, we, once again, elide the type $\tau$. The rules for treating $\nabla$ in an assumed formula and a formula in the conclusion are similar: a ``goal'' with \lsti|(|$\nabla$\lsti|x M)| in it reduces to one in which this formula has been replaced by \lsti|M[c/x]| where \lsti|c| is a fresh, unanalyzable constant called a \emph{nominal constant}. Note that \lsti|nabla| has a meaning that is different from that of \lsti|forall|: for example, \lsti|(nabla x y, x = y -> false)| is provable but \lsti|(forall x y, x = y -> false)| is not. \Gee allows the \lsti|nabla| quantifier to be used also in the heads of definitions. The full form for a definitional clause is in fact $\forall {\bar{x}} \nabla {\bar{z}}, A \defeq B$, where the \lsti|nabla| quantifiers scope only over $A$. In generating an instance of such a clause, the variables in $\bar{z}$ must be replaced with nominal constants. The quantification order then means that the instantiations of the variables in $\bar{x}$ cannot contain the constants used for $\bar{z}$. This extension makes it possible to encode structural properties of terms in definitions. For example, the clause \lsti|(nabla x, name x)| defines \lsti|name| to be a recognizer of nominal constants. Similarly, the clause \lsti|(nabla x, fresh x B)| defines \lsti|fresh| such that \lsti|(fresh X B)| holds just in the case that \lsti|X| is a nominal constant and \lsti|B| is a term that does not contain \lsti|X|. As a final example, consider the following clauses in which \lsti|of| is the typing predicate from the previous subsection. 
\vspace{-0.1cm} \begin{lstlisting} ctx nil; nabla x, ctx (of x T :: L) := ctx L. \end{lstlisting} \vspace{-0.1cm} These clauses define \lsti|ctx| such that \lsti|(ctx L)| holds exactly when \lsti|L| is a list of type assignments to distinct variables. \vspace{-0.3cm} \subsection{The two-level logic approach}\label{sec:twolevel} Our framework allows us to write specifications in \HOHH and reason about them using \Gee. Abella supports this \emph{two-level logic approach} by encoding \HOHH derivability in a definition and providing a convenient interface to it. The user program and signature for these derivations are obtained from a \LProlog program file. The state in a derivation is represented by a judgment of the form \lsti+{$\Gamma$ |- $G$}+ where $\Gamma$ is the list of dynamically added clauses; additions to the signature are treated implicitly via nominal constants. If $\Gamma$ is empty, the judgment is abbreviated to \lsti|{G}|. The theorems that are to be proved mix such judgments with other ones defined directly in Abella. For example, the uniqueness of typing for the STLC based on its encoding in \HOHH can be stated as follows: \vspace{-0.1cm} \begin{lstlisting} forall L M T T', ctx L -> {L |- of M T} -> {L |- of M T'} -> T = T'. \end{lstlisting} \vspace{-0.1cm} This formula talks about the typing of \emph{open} terms relative to a dynamic collection of clauses that assign unique types to (potentially) free variables. The ability to mix specifications in \HOHH and definitions in Abella provides considerable expressivity to the reasoning process. This expressivity is further enhanced by the fact that both \HOHH and \Gee support the $\lambda$-tree syntax approach. We illustrate these observations by considering the explicit treatment of substitutions. 
We use the type \lsti|map| and the constant \lsti|map: tm -> tm -> map| to represent mappings for individual variables (encoded as nominal constants) and a list of such mappings to represent a substitution; for simplicity, we overload the constructors \lsti|nil| and \lsti|::| at this type. Then the predicate \lsti|subst| such that \lsti|subst ML M M'| holds exactly when \lsti|M'| is the result of applying the substitution \lsti|ML| to \lsti|M| can be defined by the following clauses: \vspace{-0.1cm} \begin{lstlisting} subst nil M M; nabla x, subst ((map x V) :: ML) (R x) M := subst ML (R V) M. \end{lstlisting} \vspace{-0.1cm} Observe how quantifier ordering is used in this definition to create a ``hole'' where a free variable appears in a term and application is then used to plug the hole with the substitution. This definition makes it extremely easy to prove structural properties of substitutions. For example, the fact that substitution distributes over applications and abstractions can be stated as follows: \vspace{-0.1cm} \begin{lstlisting} forall ML M1 M2 M', subst ML (app M1 M2) M' -> exists M1' M2', M' = app M1' M2' /\ subst ML M1 M1' /\ subst ML M2 M2'. forall ML R T M', subst ML (abs T R) M' -> exists R', M' = abs T R' /\ nabla x, subst ML (R x) (R' x). \end{lstlisting} \vspace{-0.1cm} An easy induction over the definition of substitution proves these properties. As another example, we may want to characterize relationships between closed terms and substitutions. For this, we can first define well-formed terms through the following \HOHH clauses: \vspace{-0.1cm} \begin{lstlisting} tm (app M N) :- tm M, tm N. tm (abs T R) :- pi x\ tm x => tm (R x). \end{lstlisting} \vspace{-0.1cm} Then we characterize the context used in \lsti|tm| derivations in Abella as follows: \vspace{-0.1cm} \begin{lstlisting} tm_ctx nil; nabla x, tm_ctx (tm x :: L) := tm_ctx L. 
\end{lstlisting} \vspace{-0.1cm} Intuitively, if \lsti|tm_ctx L| and \lsti+{L |- tm M}+ hold, then \lsti|M| is a well-formed term whose free variables are given by \lsti|L|. Clearly, if \lsti+{tm M}+ holds, then \lsti|M| is closed. Now we can state the fact that a closed term is unaffected by a substitution: \vspace{-0.1cm} \begin{lstlisting} forall ML M M', {tm M} -> subst ML M M' -> M = M'. \end{lstlisting} \vspace{-0.1cm} Again, an easy induction on the definition of substitutions proves this property. \section{Implementing Transformations on Functional Programs} \label{sec:implfpt} We now turn to the main theme of the paper, that of showing the benefits of our framework in the verified implementation of compilation-oriented program transformations for functional languages. The case we make has the following broad structure. Program transformations are often conveniently described in a syntax-directed and rule-based fashion. Such descriptions can be encoded naturally using the program clauses of the \HOHH logic. In transforming functional programs, special attention must be paid to binding structure. The $\lambda$-tree syntax approach, which is supported by the \HOHH logic, provides a succinct and logically precise means for treating this aspect. The executability of \HOHH specifications renders them immediately into implementations. Moreover, the logical character of the specifications is useful in the process of reasoning about their correctness. This section is devoted to substantiating our claim concerning implementation. We do this by showing how to specify transformations that are used in the compilation of functional languages. An example we consider in detail is that of closure conversion. Our interest in this transformation is twofold. First, it is an important step in the compilation of functional programs: it is, in fact, an enabler for other transformations such as code hoisting. 
Second, it is a transformation that involves a complex manipulation of binding structure. Thus, the consideration of this transformation helps shine a light on the special features of our framework. The observations we make in the context of closure conversion are actually applicable quite generally to the compilation process. We close the section by highlighting this fact relative to other transformations that are of interest. \vspace{-0.3cm} \subsection{The closure conversion transformation} \label{ssec:cc} The closure conversion transformation is designed to replace (possibly nested) functions in a program by \emph{closures} that each consist of a function and an environment. The function part is obtained from the original function by replacing its free variables by projections from a new environment parameter. Complementing this, the environment component encodes the construction of a value for the new parameter in the enclosing context. For example, when this transformation is applied to the following pseudo OCaml code segment \vspace{-0.1cm} \begin{lstlisting}[language=Caml] let x = 2 in let y = 3 in (fun z. z + x + y) \end{lstlisting} \vspace{-0.1cm} it will yield \vspace{-0.1cm} \begin{lstlisting}[language=Caml] let x = 2 in let y = 3 in <fun z e. z$\;$+$\;$e.1$\;$+$\;$e.2, (x,y)> \end{lstlisting} \vspace{-0.1cm} We write \lsti|<F,E>| here to represent a closure whose function part is \lsti|F| and environment part is \lsti|E|, and \lsti|e.i| to represent the $i$-th projection applied to an ``environment parameter'' \lsti|e|. This transformation makes the function part independent of the context in which it appears, thereby allowing it to be extracted out to the top-level of the program. 
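The essence of the transformation can be mimicked directly in any language with tuples and first-class functions. The following Python sketch, which is purely illustrative and not part of the formal development, hand-executes the example above: the rewritten function takes an extra environment parameter, and the closure pairs that function with the captured values.

```python
# Illustrative sketch only: a closure as a (function, environment) pair.
# The function part no longer refers to x and y directly; it projects
# them out of its environment argument e, as in the transformed code.
x = 2
y = 3
clo = (lambda z, e: z + e[0] + e[1], (x, y))  # <fun z e. z + e.1 + e.2, (x,y)>

# Applying a closure: open it into its parts and pass the environment.
f, env = clo
result = f(1, env)
assert result == 6  # 1 + 2 + 3
```

Note that the function part `f` is now closed, so it could be defined at the top level of the program; this is exactly the property that code hoisting later exploits.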
\vspace{-0.3cm} \subsubsection{The source and target languages.} \label{sssec:cclangs} \begin{figure}[t] \centering \parbox{0.4\linewidth}{ \begin{smallalign} & T ::= \tnat \sep T_1 \to T_2 \sep \tunit \sep {T_1} \tprod {T_2}\\ & M ::= n \sep x \sep \pred M \sep M_1 + M_2\\ & \qquad \sep \ifz {M_1} {M_2} {M_3} \\ & \qquad \sep \unit \sep \pair {M_1} {M_2} \sep \fst M \sep \snd M\\ & \qquad \sep \letexp x {M_1} {M_2} \\ & \qquad \sep \fix f x M \sep (M_1 \app M_2)\\ & V ::= n \sep \fix f x M \sep () \sep \pair {V_1} {V_2} \end{smallalign} \vspace{-0.6cm} \caption{Source language syntax} \label{fig:sourcelang} } \qquad \parbox{0.4\linewidth}{ \begin{smallalign} & T ::= \tnat \sep {T_1} \to {T_2} \sep {T_1} \carr {T_2} \sep \tunit \sep {T_1} \tprod {T_2}\\ & M ::= n \sep x \sep \pred M \sep M_1 + M_2 \\ & \qquad \sep \ifz {M_1} {M_2} {M_3} \\ & \qquad \sep \unit \sep \pair {M_1} {M_2} \sep \fst M \sep \snd M\\ & \qquad \sep \letexp x {M_1} {M_2} \sep \ \abs x M \sep (M_1 \app M_2)\\ & \qquad \sep \clos{M_1}{M_2} \sep \open {x_f} {x_e} {M_1} {M_2}\\ & V ::= n \sep \abs x M \sep () \sep \pair {V_1} {V_2} \sep \clos {V_1} {V_2} \end{smallalign} \vspace{-0.6cm} \caption{Target language syntax} \label{fig:targlang} } \vspace{-0.6cm} \end{figure} Figures~\ref{fig:sourcelang} and \ref{fig:targlang} present the syntax of the source and target languages that we shall use in this illustration. In these figures, $T$, $M$ and $V$ stand respectively for the categories of types, terms and the terms recognized as values. $\tnat$ is the type for natural numbers and $n$ corresponds to constants of this type. Our languages include some arithmetic operators, the conditional and the tuple constructor and destructors; note that $\predsans$ represents the predecessor function on numbers, the behavior of the conditional is based on whether or not the ``condition'' is zero and $\fstsans$ and $\sndsans$ are the projection operators on pairs. 
The source language includes the recursion operator $\fixsans$ which abstracts simultaneously over the function and the parameter; the usual abstraction is a degenerate case in which the function parameter does not appear in the body. The target language includes the expressions $\clos {M_1} {M_2}$ and $(\open {x_f} {x_e} {M_1} {M_2})$ representing the formation and application of closures. The target language does not have an explicit fixed point constructor. Instead, recursion is realized by parameterizing the function part of a closure with a function component; this treatment should become clear from the rules for typing closures and for evaluating the application of closures that we present below. The usual forms of abstraction and application are included in the target language to simplify the presentation of the transformation. The usual function type is reserved for closures; abstractions are given the type ${T_1} \carr {T_2}$ in the target language. We abbreviate $\pair {M_1} {\ldots, \pair {M_n} \unit}$ by $(M_1,\ldots,M_n)$, and we write $\pi_i(M)$ for $\fst {(\snd {(\ldots(\snd M))})}$ in which $\mathbf{snd}$ is applied $i-1$ times, for $i \geq 1$. Typing judgments for both the source and target languages are written as $\Gamma \tseq M : T$, where $\Gamma$ is a list of type assignments for variables. The rules for deriving typing judgments are routine, with the exception of those for introducing and eliminating closures in the target language, which are shown below: \begin{smallgather} \infer[\cofclos]{ \Gamma \tseq {\clos {M_1} {M_2}} : {T_1 \to T_2} }{ \tseq {M_1} : {((T_1 \to T_2) \tprod T_1 \tprod T_e) \carr T_2} & \Gamma \tseq {M_2} : {T_e} } \\[5pt] \infer[\cofopen]{ \Gamma \tseq {\open {x_f} {x_e} {M_1} {M_2}} : T }{ \Gamma \tseq {M_1} : {T_1 \to T_2} & {\Gamma, x_f:((T_1 \to T_2) \tprod T_1 \tprod l) \carr T_2, x_e:l} \tseq {M_2} : T } \end{smallgather} In $\cofclos$, the function part of a closure must be typable in an empty context.
In $\cofopen$, $x_f$ and $x_e$ must be names that are new to $\Gamma$. This rule also uses a ``type'' $l$ whose meaning must be explained. This symbol represents a new type constant, different from $\tnat$, $\tunit$, and any other type constant used in the typing derivation. This constraint in effect captures the requirement that the environment of a closure should be opaque to its user. The operational semantics for both the source and the target language is based on a left-to-right, call-by-value evaluation strategy. We assume that this is given in small-step form and, overloading notation again, we write $M \step{1} M'$ to denote that $M$ evaluates to $M'$ in one step in whichever language is under consideration. The only evaluation rules that may be non-obvious are the ones for applications. For the source language, they are the following: \begin{smallgather} \infer[]{ M_1 \app M_2 \step{1} M_1' \app M_2 }{ M_1 \step{1} M_1' } \quad\ \infer[]{ V_1 \app M_2 \step{1} V_1 \app M_2' }{ M_2 \step{1} M_2' } \quad\ \infer[]{ (\fix f x M) \app V \step{1} M[\fix f x M/f, V/x] }{} \end{smallgather} For the target language, they are the following: \begin{smallgather} \infer[]{ {\open {x_f} {x_e} {M_1} {M_2}} \step{1} {\open {x_f} {x_e} {M_1'} {M_2}} }{ {M_1} \step{1} {M_1'} } \\[7pt] \infer[]{ {\open {x_f} {x_e} {\clos {V_f} {V_e}} {M_2}} \step{1} {M_2[V_f/x_f, V_e/x_e]} }{} \end{smallgather} One-step evaluation generalizes in the obvious way to $n$-step evaluation, which we denote by $M \step{n} M'$. Finally, we write $M \eval V$ to denote the evaluation of $M$ to the value $V$ through $0$ or more steps.
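To make the application rules concrete, the following Python sketch implements the source-language rules for application (and addition) over a first-order, name-based term representation. It is only an informal illustration: substitution is coded explicitly, and variable-capture issues, which the $\lambda$-tree encoding later handles for free, are ignored.

```python
# First-order sketch of the call-by-value source-language rules, using a
# tuple AST: ("nat", n), ("var", x), ("plus", a, b),
# ("fix", f, x, body), ("app", a, b).
def subst(t, env):
    """Naive simultaneous substitution of terms for variable names."""
    tag = t[0]
    if tag == "nat":
        return t
    if tag == "var":
        return env.get(t[1], t)
    if tag == "plus":
        return ("plus", subst(t[1], env), subst(t[2], env))
    if tag == "fix":
        _, f, x, body = t
        inner = {k: v for k, v in env.items() if k not in (f, x)}
        return ("fix", f, x, subst(body, inner))
    if tag == "app":
        return ("app", subst(t[1], env), subst(t[2], env))

def is_value(t):
    return t[0] in ("nat", "fix")

def step(t):
    """One small step, following the three application rules."""
    if t[0] == "app":
        m1, m2 = t[1], t[2]
        if not is_value(m1):                # M1 M2 -> M1' M2
            return ("app", step(m1), m2)
        if not is_value(m2):                # V1 M2 -> V1 M2'
            return ("app", m1, step(m2))
        _, f, x, body = m1                  # (fix f x M) V -> M[fix../f, V/x]
        return subst(body, {f: m1, x: m2})
    if t[0] == "plus":
        a, b = t[1], t[2]
        if is_value(a) and is_value(b):
            return ("nat", a[1] + b[1])
        if is_value(a):
            return ("plus", a, step(b))
        return ("plus", step(a), b)
    return t

def evaluate(t):
    """Iterate one-step evaluation to a value, i.e. the relation M => V."""
    while not is_value(t):
        t = step(t)
    return t

# (fix f x. x + 1) applied to 2 evaluates to 3.
succ = ("fix", "f", "x", ("plus", ("var", "x"), ("nat", 1)))
assert evaluate(("app", succ, ("nat", 2))) == ("nat", 3)
```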
\vspace{-0.4cm} \subsubsection{The transformation.} \label{sssec:cctrans} \begin{figure*}[!t] \begin{smallgather} \infer[\ccnat]{ \cc {\rho} n n }{} \qquad \infer[\ccvar]{ \cc {\rho} x M }{ (x \mapsto M) \in \rho } \qquad \infer[\ccfvs]{ \ccenv {\rho} {(x_1,\ldots,x_n)} {(M_1,\ldots,M_n)} }{ \cc {\rho} {x_1} {M_1} & \ldots & \cc {\rho} {x_n} {M_n} } \\ \infer[\ccpred]{ \cc {\rho} {\pred M} {\pred M'} }{ \cc{\rho} M {M'} } \qquad \infer[\ccplus]{ \cc {\rho} {M_1 + M_2} {M_1' + M_2'} }{ \cc \rho {M_1} {M_1'} & \cc \rho {M_2} {M_2'} } \\ \infer[\ccifz]{ \cc {\rho} {\ifz {M} {M_1} {M_2}} {\ifz {M'} {M_1'} {M_2'}} }{ \cc \rho {M} {M'} & \cc \rho {M_1} {M_1'} & \cc \rho {M_2} {M_2'} } \qquad \infer[\ccunit]{ \cc \rho \unit \unit }{} \\ \infer[\ccpair]{ \cc {\rho} {\pair {M_1} {M_2}} {\pair {M_1'} {M_2'}} }{ \cc \rho {M_1} {M_1'} & \cc \rho {M_2} {M_2'} } \quad \infer[\ccfst]{ \cc {\rho} {\fst M} {\fst M'} }{ \cc{\rho} M {M'} } \quad \infer[\ccsnd]{ \cc {\rho} {\snd M} {\snd M'} }{ \cc{\rho} M {M'} } \\ \infer[\cclet\quad y\ \mbox{\rm must be fresh} ]{ \cc \rho {\letexp x {M_1} {M_2}} {\letexp y {M_1'} {M_2'}} }{ \cc \rho {M_1} {M_1'} & \cc {\rho, x \mapsto y} {M_2} {M_2'} } \\ \infer[\ccapp\quad g\ \mbox{\rm must be fresh}]{ \cc {\rho} {M_1 \app M_2} {\letexp g {M_1'} {\open {x_f} {x_e} {g} {x_f \app (g,M_2',x_e)}}} }{ \cc {\rho} {M_1} {M_1'} & \cc {\rho} {M_2} {M_2'} } \\ \infer[\ccfix]{ \cc {\rho} {\fix f x M} {\clos {\abs p {\letexp g {\pi_1(p)} {\letexp y {\pi_2(p)} {\letexp {x_e} {\pi_3(p)} {M'}}}}} {M_e}} }{ (x_1,\ldots,x_n) \supseteq \fvars {\fix f x M} & \ccenv {\rho} {(x_1,\ldots,x_n)} {M_e} & \cc {\rho'} M {M'} } \\ \mbox{where $\rho' = (x \mapsto y, f \mapsto g, x_1 \mapsto \pi_1(x_e), \ldots, x_n \mapsto \pi_n(x_e))$ and $p, g, y,$ and $x_e$ are fresh variables} \end{smallgather} \vspace{-0.5cm} \caption{Closure Conversion Rules} \label{fig:cc} \vspace{-0.5cm} \end{figure*} In the general case, we must transform terms under mappings for their free variables: 
for a function term, this mapping represents the replacement of the free variables by projections from the environment variable for which a new abstraction will be introduced into the term. Accordingly, we specify the transformation as a 3-place relation written as $\cc \rho M {M'}$, where $M$ and $M'$ are source and target language terms and $\rho$ is a mapping from (distinct) source language variables to target language terms. We write $(\rho, x \mapsto M)$ to denote the extension of $\rho$ with a mapping for $x$ and $(x \mapsto M) \in \rho$ to mean that $\rho$ contains a mapping of $x$ to $M$. Figure~\ref{fig:cc} defines the $\cc \rho M {M'}$ relation in a rule-based fashion; these rules use the auxiliary relation $\ccenv \rho {(x_1,\ldots,x_n)} {M_e}$ that determines an environment corresponding to a tuple of variables. The $\cclet$ and $\ccfix$ rules have a proviso: the bound variables, $x$ and $f, x$ respectively, should have been renamed to avoid clashes with the domain of $\rho$. Most of the rules have an obvious structure. We comment only on the ones for transforming fixed point expressions and applications. The former translates into a closure. The function part of the closure is obtained by transforming the body of the abstraction, but under a new mapping for its free variables; the expression $(x_1,\ldots,x_n) \supseteq \fvars{\fix f x M}$ means that all the free variables of $(\fix f x M)$ appear in the tuple. The environment part of the closure correspondingly contains mappings for the variables in the tuple that are determined by the enclosing context. Note also that the parameter for the function part of the closure is expected to be a triple, the first item of which corresponds to the function being defined recursively in the source language expression. 
The transformation of a source language application makes clear how this structure is used to realize recursion: the constructed closure application has the effect of feeding the closure to its function part as the first component of its argument. \vspace{-0.3cm} \subsection{A \LProlog rendition of closure conversion} \label{ssec:implcc} Our presentation of the implementation of closure conversion has two parts: we first show how to encode the source and target languages, and then present a \LProlog specification of the transformation. In the first part, we also discuss the formalization of the evaluation and typing relations; these will be used in the correctness proofs that we develop later. \vspace{-0.4cm} \subsubsection{Encoding the languages.} \label{sssec:encode_cclang} We first consider the encoding of types. We will use \lsti|ty| as the \LProlog type for this encoding for both languages. The constructors \lsti|tnat|, \lsti|tunit| and \lsti|prod| will encode, respectively, the natural number, unit and pair types. There are two arrow types to be treated. We will represent $\to$ by \lsti|arr| and $\carr$ by \lsti|arr'|. The following signature summarizes these decisions. \vspace{-0.1cm} {\footnotesize \begin{tabbing} \quad\=\lsti|tnat,tunit|\ \=\ \,\=\lsti|ty|\qquad\qquad\qquad\=\lsti|arr,prod,arr'|\ \=\ \,\=\kill \>\lsti|tnat,tunit| \>:\> \lsti|ty| \>\lsti|arr,prod,arr'| \>:\> \lsti|ty -> ty -> ty| \end{tabbing} } \vspace{-0.1cm} We will use the \LProlog type \lsti|tm| for encodings of source language terms.
The particular constructors that we will use for representing the terms themselves are the following, assuming that \lsti|nat| is a type for representations of natural numbers: \vspace{-0.1cm} {\footnotesize \begin{tabbing} \quad\=\lsti|plus,pair,app|\ :\ \lsti|tm -> tm -> tm|\quad\=\lsti|fix|\ :\ \lsti|(tm -> tm -> tm) -> tm|\quad\=\kill \>\lsti|nat|\ :\ \lsti|nat -> tm|\> \lsti|pred,fst,snd|\ :\ \lsti|tm -> tm|\> \lsti|unit|\ :\ \lsti|tm|\\ \>\lsti|plus,pair,app|\ :\ \lsti|tm -> tm -> tm|\> \lsti|ifz|\ :\ \lsti|tm -> tm -> tm -> tm|\\ \>\lsti|let|\ :\ \lsti|tm -> (tm -> tm) -> tm|\> \lsti|fix|\ :\ \lsti|(tm -> tm -> tm) -> tm| \end{tabbing} } \vspace{-0.1cm} \noindent The only constructors that need further explanation here are \lsti|let| and \lsti|fix|. These encode binding constructs in the source language and, as expected, we use \LProlog abstraction to capture their binding structure. Thus, $\letexp x n x$ is encoded as \lsti|(let (nat n) (x\x))|. Similarly, the \LProlog term \lsti|(fix (f\x\ app f x))| represents the source language expression $(\fix f x {f \app x})$. We will use the \LProlog type \lsti|tm'| for encodings of target language terms. To represent the constructs the target language shares with the source language, we will use ``primed'' versions of the \LProlog constants seen earlier; \eg, \lsti|unit'| of type \lsti|tm'| will represent the null tuple. Of course, there will be no constructor corresponding to \lsti|fix|. We will also need the following additional constructors: \vspace{-0.1cm} {\footnotesize \begin{tabbing} \quad\=\kill \>\lsti|abs'|\ :\ \lsti|(tm' -> tm') -> tm'|\qquad \lsti|clos'|\ :\ \lsti|tm' -> tm' -> tm'|\\ \>\lsti|open'|\ :\ \lsti|tm' -> (tm' -> tm' -> tm') -> tm'| \end{tabbing} } \vspace{-0.1cm} \noindent Here, \lsti|abs'| encodes $\lambda$-abstraction and \lsti|clos'| and \lsti|open'| encode closures and their application. Note again the $\lambda$-tree syntax representation for binding constructs. 
Following Section~\ref{sec:framework}, we represent typing judgments as relations between terms and types, treating contexts implicitly via dynamically added clauses that assign types to free variables. We use the predicates \lsti|of| and \lsti|of'| to encode typing in the source and target language respectively. The clauses defining these predicates are routine and we show only a few pertaining to the binding constructs. The rule for typing fixed points in the source language translates into the following. \vspace{-0.1cm} \begin{lstlisting} of (fix R) (arr T1 T2) :- pi f\ pi x\ of f (arr T1 T2) => of x T1 => of (R f x) T2. \end{lstlisting} \vspace{-0.1cm} Note how the required freshness constraint is realized in this clause: the universal quantifiers over \lsti|f| and \lsti|x| introduce new names and the application \lsti|(R f x)| replaces the bound variables with these names to generate the new typing judgment that must be derived. For the target language, the main interesting rule is for typing the application of closures. The following clause encodes this rule. \vspace{-0.1cm} \begin{lstlisting} of' (open' M R) T :- of' M (arr T1 T2), pi$\;$f\$\;$pi$\;$e\$\;$pi l\ of' f (arr' (prod (arr T1 T2) (prod T1 l)) T2) => of' e l => of' (R f e) T. \end{lstlisting} \vspace{-0.1cm} Here again we use universal quantifiers in goals to encode the freshness constraint. Note also how the universal quantifier over the variable \lsti|l| captures the opaqueness quality of the type of the environment of the closure involved in the construct. We encode the one step evaluation rules for the source and target languages using the predicates \lsti|step| and \lsti|step'|. We again consider only a few interesting cases in their definition. Assuming that \lsti|val| and \lsti|val'| recognize values in the source and target languages, the clauses for evaluating the application of a fixed point and a closure are the following. 
\vspace{-0.1cm} {\footnotesize \begin{lstlisting} step (app (fix R) V) (R (fix R) V) :- val V. step' (open' (clos' F E) R) (R F E) :- val' (clos' F E). \end{lstlisting} } \vspace{-0.1cm} \noindent Note here how application in the meta-language realizes substitution. We use the predicates \lsti|nstep| (which relates a natural number and two terms) and \lsti|eval| to represent the $n$-step and full evaluation relations for the source language, respectively. These predicates have obvious definitions. The predicates \lsti|nstep'| and \lsti|eval'| play a similar role for the target language. \vspace{-0.4cm} \subsubsection{Specifying closure conversion.} \label{sssec:encode_ccrules} To define closure conversion in \LProlog, we need a representation of mappings for source language variables. We use the type \lsti|map| and the constant \lsti|map : tm -> tm' -> map| to represent the mapping for a single variable.\footnote{This mapping is different from the one considered in Section~\ref{sec:twolevel} in that it is from a \emph{source} language variable to a \emph{target} language term.} We use the type \lsti|map_list| for lists of such mappings, the constructors \lsti|nil| and \lsti|::| for constructing such lists and the predicate \lsti|member| for checking membership in them. We also need to represent lists of source and target language terms. We will use the types \lsti|tm_list| and \lsti|tm'_list| for these and for simplicity of discussion, we will overload the list constructors and predicates at these types. Polymorphic typing in \LProlog supports such overloading but this feature has not yet been implemented in Abella; we overcome this difficulty in the actual development by using different type and constant names for each case. The crux in formalizing the definition of closure conversion is capturing the content of the $\ccfix$ rule. A key part of this rule is identifying the free variables in a given source language term. 
We realize this requirement by defining a predicate \lsti|fvars| such that if \lsti|(fvars M L1 L2)| holds then \lsti|L1| is a list that includes all the free variables of \lsti|M| and \lsti|L2| is another list that contains only the free variables of \lsti|M|. We show a few critical clauses in the definition of this predicate, omitting ones whose structure is easy to predict. \vspace{-0.1cm} \begin{lstlisting} fvars X _ nil :- notfree X. fvars Y Vs (Y :: nil) :- member Y Vs. fvars (nat _) _ nil. fvars (plus M1 M2) Vs FVs :- fvars M1 Vs FVs1, fvars M2 Vs FVs2, combine FVs1 FVs2 FVs. ... fvars (let M R) Vs FVs :- fvars M Vs FVs1, (pi x\ notfree x => fvars (R x) Vs FVs2), combine FVs1 FVs2 FVs. fvars (fix R) Vs FVs :- pi f\ pi x\ notfree f => notfree x => fvars (R f x) Vs FVs. \end{lstlisting} \vspace{-0.1cm} The predicate \lsti|combine| used in these clauses is one that holds between three lists when the last is a combination of the elements of the first two. The essence of the definition of \lsti|fvars| is in the treatment of binding constructs. Viewed operationally, the body of such a construct is descended into after instantiating the binder with a new variable marked \lsti|notfree|. Thus, the variables that are marked in this way correspond to exactly those that are explicitly bound in the term, and only those that are not so marked are collected through the second clause. It is also important to note that the specification of \lsti|fvars| has a completely logical structure; this fact can be exploited during verification. The $\ccfix$ rule requires us to construct an environment representing the mappings for the variables found by \lsti|fvars|. The predicate \lsti|mapenv|, specified by the following clauses, provides this functionality. \vspace{-0.1cm} \begin{lstlisting} mapenv nil _ unit'. mapenv (X::L) Map (pair' M ML) :- member$\;$(map X M)$\;$Map, mapenv L Map ML.
\end{lstlisting} \vspace{-0.1cm} \noindent The $\ccfix$ rule also requires us to create a new mapping from the variable list to projections from an environment variable. Representing the list of projection mappings as a function from the environment variable, this relation is given by the predicate \lsti|mapvar| that is defined by the following clauses. \vspace{-0.1cm} \begin{lstlisting} mapvar nil (e\ nil). mapvar (X::L) (e\ (map X (fst' e))::(Map (snd' e))) :- mapvar L Map. \end{lstlisting} \vspace{-0.1cm} We can now specify the closure conversion transformation. We provide clauses below that define the predicate \lsti|cc| such that \lsti|(cc Map Vs M M')| holds if \lsti|M'| is a transformed version of \lsti|M| under the mapping \lsti|Map| for the variables in \lsti|Vs|; we assume that \lsti|Vs| contains all the free variables of \lsti|M|. \begin{lstlisting} cc _ _ (nat N) (nat' N). cc Map Vs X M :- member (map X M) Map. cc Map Vs (pred M) (pred' M') :- cc Map Vs M M'. cc Map Vs (plus M1 M2) (plus' M1' M2') :- cc Map Vs M1 M1', cc Map Vs M2 M2'. cc Map Vs (ifz M M1 M2) (ifz' M' M1' M2') :- cc Map Vs M M', cc Map Vs M1 M1', cc Map Vs M2 M2'. cc Map Vs unit unit'. cc Map Vs (pair M1 M2) (pair' M1' M2') :- cc Map Vs M1 M1', cc Map Vs M2 M2'. cc Map Vs (fst M) (fst' M') :- cc Map Vs M M'. cc Map Vs (snd M) (snd' M') :- cc Map Vs M M'. cc Map Vs (let M R) (let' M' R') :- cc Map Vs M M', pi x\ pi y\ cc ((map x y) :: Map) (x :: Vs) (R x) (R' y). cc Map Vs (fix R) (clos' (abs' (p\ let' (fst' p) (g\ let' (fst' (snd' p)) (y\ let' (snd' (snd' p)) (e\ R' g y e))))) E) :- fvars (fix R) Vs FVs, mapenv FVs Map E, mapvar FVs NMap, pi f\ pi x\ pi g\ pi y\ pi e\ cc ((map x y)::(map f g)::(NMap e)) (x::f::FVs) (R f x) (R' g y e). cc Map Vs (app M1 M2) (let' M1' (g\ open' g (f\e\ app' f (pair' g (pair' M2' e))))) :- cc Map Vs M1 M1', cc Map Vs M2 M2'. \end{lstlisting} These clauses correspond very closely to the rules in Figure~\ref{fig:cc}.
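The behavior of \lsti|mapvar| is perhaps the subtlest part of this specification. The following Python sketch, an informal functional rendering of the relational clauses above (not the actual encoding), may help in seeing the environment-parameterized mapping it builds.

```python
# Informal rendering of mapvar: each variable in the list is associated
# with the corresponding projection from an environment parameter e,
# mirroring  mapvar (X::L) (e\ (map X (fst' e))::(Map (snd' e))).
def mapvar(xs):
    if not xs:
        return lambda e: []
    rest = mapvar(xs[1:])
    return lambda e: [(xs[0], ("fst'", e))] + rest(("snd'", e))

# For (x1, x2), the mapping sends x1 to fst' e and x2 to fst' (snd' e).
m = mapvar(["x1", "x2"])("e")
assert m == [("x1", ("fst'", "e")), ("x2", ("fst'", ("snd'", "e")))]
```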
Note especially the clause for transforming an expression of the form \lsti|(fix R)|, which encodes the content of the $\ccfix$ rule. In the body of this clause, \lsti|fvars| is used to identify the free variables of the expression, and \lsti|mapenv| and \lsti|mapvar| are used to create the reified environment and the new mapping. In both this clause and the one for transforming a \lsti|let| expression, the $\lambda$-tree representation, universal goals and (meta-language) applications are used to encode freshness and renaming requirements related to bound variables in a concise and logically precise way. \vspace{-0.3cm} \subsection{Implementing other transformations} \label{ssec:implothers} We have used the ideas discussed in the preceding subsections in realizing other transformations such as code hoisting and conversion to continuation-passing style (CPS). These transformations are part of a toolkit used by compilers for functional languages to convert programs into a form from which compilation may proceed in a manner similar to that for conventional languages like C. Our implementation of the CPS transformation is based on the one-pass version described by Danvy and Filinski~\cite{danvy92mscs}, which identifies and eliminates the so-called administrative redexes on-the-fly. This transformation can be encoded concisely and elegantly in \LProlog by using meta-level redexes for administrative redexes. The implementation is straightforward, and similar ones that use the \HOAS approach have already been described in the literature;~\eg see~\cite{tian06cats}. Our implementation of code hoisting is more interesting: it once again benefits in an essential way from the ability to analyze binding structure. The code hoisting transformation lifts closed nested functions out into a flat space at the top level of the program.
This transformation can be realized as a recursive procedure: given a function $(\abs x M)$, the procedure is applied to the subterms of $M$ and the extracted functions are then moved out of $(\abs x M)$. Of course, for this movement to be possible, it must be the case that the variable $x$ does not appear in the functions that are candidates for extraction. This ``dependency checking'' is easy to encode in a logical way within our framework. To provide more insight into our implementation of code-hoisting, let us assume that it is applied after closure conversion and that its source and target languages are both the language shown in Figure~\ref{fig:targlang}. Applying code hoisting to any term will result in a term of the form \vspace{-0.1cm} \begin{smallgather} \letexp {f_1} {M_1} {\ldots \letexp {f_n} {M_n} M} \end{smallgather} \vspace{-0.1cm} \noindent where, for $1 \leq i \leq n$, $M_i$ corresponds to an extracted function. We will write this term below as $(\letfun {\vctr{f} = \vctr{M}} M)$ where $\vctr{f} = (f_1,\ldots,f_n)$ and, correspondingly, $\vctr{M} = (M_1,\ldots,M_n)$. We write the judgment of code hoisting as $(\ch \rho M {M'})$ where $\rho$ has the form $(x_1,\ldots,x_n)$. This judgment asserts that $M'$ is the result of extracting all functions in $M$ to the top level, assuming that $\rho$ contains all the bound variables in the context in which $M$ appears. The relation is defined by recursion on the structure of $M$. The main rule that deserves discussion is that for transforming functions. This rule is the following: \vspace{-0.1cm} \begin{smallgather} \infer{ \ch \rho {\abs x M} {\letfun {(\vctr{f},g) = (\vctr{F},\abs {f} {\abs x {\letfun {\vctr{f} = (\pi_1(f),\ldots,\pi_n(f))} {M'}}})} {g~\vctr{f}}} }{ \ch {\rho,x} M {\letfun {\vctr{f} = \vctr{F}} {M'}} } \end{smallgather} \vspace{-0.1cm} We assume here that $\vctr{f}=(f_1,\ldots,f_n)$ and, by an abuse of notation, we let $(g~\vctr{f})$ denote $(g~(f_1,\ldots,f_n))$. 
This rule has a side condition: $x$ must not occur in $\vctr{F}$. Intuitively, the term $(\abs x M)$ is transformed by extracting the functions from within $M$ and then moving them further out of the scope of $x$. Note that this transformation succeeds only if none of the extracted functions depend on $x$. The resulting function is then itself extracted. In order to do this, it must be made independent of the (previously) extracted functions, something that is achieved by a suitable abstraction; the expression itself becomes an application to a tuple of functions in an appropriate let environment. It is convenient to use a special representation for the result of code hoisting in specifying it in \LProlog. Towards this end, we introduce the following constants: \vspace{-0.1cm} {\footnotesize \begin{tabbing} \quad\=\kill \>\lsti+hbase : tm' -> tm'+\\ \>\lsti+habs : (tm' -> tm') -> tm'+\\ \>\lsti+htm : tm'_list -> tm' -> tm'+ \end{tabbing} } \vspace{-0.1cm} \noindent Using these constants, the term $(\letfun {(f_1,\ldots,f_n) = (M_1,\ldots,M_n)} M)$ that results from code hoisting will be represented by \vspace{-0.1cm} \begin{lstlisting} htm (M1 :: ... :: Mn :: nil) (habs (f1\ ... (habs (fn\ hbase M)))). \end{lstlisting} \vspace{-0.1cm} \noindent We use the predicate \lsti+ch : tm' -> tm' -> o+ to represent the code hoisting judgment. The context $\rho$ in the judgment will be encoded implicitly through dynamically added program clauses that specify the translation of each variable \lsti+x+ as \lsti+(htm nil (hbase x))+. In this context, the rule for transforming functions, the main rule of interest, is encoded in the following clause: \vspace{-0.1cm} \begin{lstlisting} ch (abs' M) M'' :- (pi x\ ch x (htm nil (hbase x)) => ch (M x) (htm FE (M' x))), extract FE M' M''. \end{lstlisting} \vspace{-0.1cm} \noindent As in previous specifications, a universal and a hypothetical goal are used in this clause to realize recursion over binding structure.
Note also the completely logical encoding of the requirement that the function argument must not occur in the nested functions extracted from its body: quantifier ordering ensures that \lsti+FE+ cannot be instantiated by a term that contains \lsti|x| free in it. We have used the predicate \lsti|extract| to build the final result of the transformation from the transformed form of the function body and the nested functions extracted from it; the definition of this predicate is easy to construct and is not provided here. \section{Introduction} \label{sec:intro} This paper concerns the verification of compilers for functional (programming) languages. The interest in this topic is easily explained. Functional languages support an abstract view of computation that makes it easier to construct programs and the resulting code also has a flexible structure. Moreover, these languages have a strong mathematical basis that simplifies the process of proving programs to be correct. However, there is a proviso to this observation: to derive the mentioned benefit, the reasoning must be done relative to the abstract model underlying the language, whereas programs are typically executed only in their compiled form. To close the gap, it is important also to ensure that the compiler that carries out the translation preserves the meanings of programs. The key role that compiler verification plays in overall program correctness has been long recognized; e.g. see \cite{mccarthy67ams,milner72mi} for early work on this topic. With the availability of sophisticated systems such as Coq~\cite{bertot04book}, Isabelle~\cite{nipkow02book} and HOL~\cite{gordon91tphol} for mechanizing reasoning, impressive strides have been taken in recent years towards actually verifying compilers for real languages, as seen, for instance, in the CompCert project~\cite{leroy06popl}. Much of this work has focused on compiling imperative languages like C. 
Features such as higher-order and nested functions that are present in functional languages bring an additional complexity to their implementation. A common approach to treating such features is to apply transformations to programs that render them into a form to which more traditional compilation methods can be applied. These transformations must manipulate binding structure in complex ways, an aspect that requires special consideration at both the implementation and the verification level~\cite{aydemir05tphols}. Applications such as those above have motivated research towards developing good methods for representing and manipulating binding structure. Two particular approaches that have emerged from this work are those that use the nameless representation of bound variables due to De Bruijn~\cite{debruijn72} and the nominal logic framework of Pitts~\cite{pitts03ic}. These approaches provide an elegant treatment of aspects such as $\alpha$-convertibility but do not directly support the analysis of binding structure or the realization of binding-sensitive operations such as substitution. A third approach, commonly known as the \emph{higher-order abstract syntax} or \HOAS approach, uses the abstraction operator in a typed $\lambda$-calculus to represent binding structure in object-language syntax. When such representations are embedded within a suitable logic, they lead to a succinct and flexible treatment of many binding related operations through $\beta$-conversion and unification. The main thesis of this paper, shared with other work such as \cite{hannan92lics} and \cite{belanger13cpp}, is that the \HOAS approach is in fact well-adapted to the task of implementing and verifying compiler transformations on functional languages. Our specific objective is to demonstrate the usefulness of a particular framework in this task. 
This framework comprises two parts: the \LProlog language~\cite{nadathur88iclp} that is implemented, for example, in the Teyjus system~\cite{teyjus.website}, and the \Abella proof assistant~\cite{baelde14jfr}. The \LProlog language is a realization of the hereditary Harrop formulas or \HOHH logic~\cite{miller12proghol}. We show that this logic, which uses the simply typed $\lambda$-calculus as a means for representing objects, is a suitable vehicle for specifying transformations on functional programs. Moreover, \HOHH specifications have a computational interpretation that makes them \emph{implementations} of compiler transformations. The \Abella system is also based on a logic that supports the \HOAS approach. This logic, which is called \Gee, incorporates a treatment of fixed-point definitions that can also be interpreted inductively or co-inductively. The \Abella system uses these definitions to embed \HOHH within \Gee and thereby to reason directly about the specifications written in \HOHH. As we show in this paper, this yields a convenient means for verifying implementations of compiler transformations. An important property of the framework that we consider, as also of systems like LF~\cite{harper93jacm} and Beluga~\cite{pientka10ijcar}, is that it uses a weak $\lambda$-calculus for representing objects. There have been attempts to derive similar benefits from using functional languages or the language underlying systems such as Coq. Some benefits, such as the correct implementation of substitution, can be obtained even in these contexts. However, the equality relation embodied in these systems is very strong and the analysis of $\lambda$-terms in them is therefore not limited to examining just their syntactic structure. This is a significant drawback, given that such examination plays a key role in the benefits we describe in this paper. 
In light of this distinction, we shall use the term \emph{$\lambda$-tree syntax} \cite{miller00cl} for the more restricted version of \HOAS whose use is the focus of our discussions. The rest of this paper is organized as follows. In Section~\ref{sec:framework} we introduce the reader to the framework mentioned above. We then show in succeeding sections how this framework can be used to implement and to verify transformations on functional programs. We conclude the paper by discussing the relationship of the ideas we describe here to other existing work.\footnote{The actual development of several of the proofs discussed in this paper can be found at the URL \url{http://www-users.cs.umn.edu/~gopalan/papers/compilation/}.} \section{Related Work and Conclusion} \label{sec:related} Compiler verification has been an active area for investigation. We focus here on the work in this area that has been devoted to compiling functional languages. There have been several projects with ambitious scope even in this setting. To take some examples, the CakeML project has implemented a compiler from a subset of ML to the X86 assembly language and verified it using HOL4~\cite{kumar14popl}; Dargaye has used Coq to verify a compiler from a subset of ML into the intermediate language used by CompCert~\cite{dargaye09phd}; Hur and Dreyer have used Coq to develop a verified single-pass compiler from a subset of ML to assembly code based on a logical relations style definition of program equivalence~\cite{hur11popl}; and Neis \etal\ have used Coq to develop a verified multi-pass compiler called Pilsner, basing their proof on a notion of semantics preservation called Parametric Inter-Languages Simulation (PILS)~\cite{neis15icfp}. All these projects have used essentially first-order treatments of binding, such as those based on a De Bruijn style representation. 
A direct comparison of our work with the projects mentioned above is neither feasible nor sensible because of differences in scope and focus. Some comparison is possible with a part of the Lambda Tamer project of Chlipala in which he describes the verified implementation in Coq of a compiler for the STLC using a logical relation based definition of program equivalence~\cite{chlipala07pldi}. This work uses a higher-order representation of syntax that does not derive all the benefits of $\lambda$-tree syntax. Chlipala's implementation of closure conversion comprises about 400 lines of Coq code, in contrast to about 70 lines of \LProlog code that are needed in our implementation. Chlipala's proof of correctness comprises about 270 lines but it benefits significantly from the automation framework that was the focus of the Lambda Tamer project; that framework is built on top of the already existing Coq libraries and consists of about 1900 lines of code. The \Abella proof script runs about 1600 lines. We note that \Abella has virtually no automation and the current absence of polymorphism leads to some redundancy in the proof. We also note that, in contrast to Chlipala's work, our development treats a version of the STLC that includes recursion. This necessitates the use of a step-indexed logical relation which makes the overall proof more complex. Other frameworks have been proposed in the literature that facilitate the use of \HOAS in implementing and verifying compiler transformations. Hickey and Nogin describe a framework for effecting compiler transformations via rewrite rules that operate on a higher-order representation of programs~\cite{hickey06hosc}. However, their framework is embedded within a functional language. As a result, they are not able to support an analysis of binding structure, an ability that brings considerable benefit as we have highlighted in this paper. Moreover, this framework offers no capabilities for verification. 
Hannan and Pfenning have discussed using a system called Twelf that is based on LF in specifying and verifying compilers; see, for example,~\cite{hannan92lics} and~\cite{murphy08modal} for some applications of this framework. The way in which logical properties can be expressed in Twelf is restricted; in particular, it is not easy to encode a logical relation-style definition within it. The Beluga system~\cite{pientka10ijcar}, which implements a functional programming language based on contextual modal type theory~\cite{nanevski08tocl}, overcomes some of the shortcomings of Twelf. Rich properties of programs can be embedded in types in Beluga, and Belanger \etal\ show how this feature can be exploited to ensure type preservation for closure conversion~\cite{belanger13cpp}. Properties based on logical relations can also be described in Beluga~\cite{cave15lfmtp}. It remains to be seen if semantics preservation proofs of the kind discussed in this paper can be carried out in the Beluga system. While the framework comprising \LProlog and \Abella has significant benefits in the verified implementation of compiler transformations for functional languages, its current realization has some practical limitations that lead to a larger proof development effort than seems necessary. One such limitation is the absence of polymorphism in the Abella implementation. A consequence of this is that the same proofs have sometimes to be repeated at different types. This situation appears to be one that can be alleviated by allowing the user to parameterize proofs by types and we are currently investigating this matter. A second limitation arises from the emphasis on explicit proofs in the theorem-proving setup. The effect of this requirement is especially felt with respect to lemmas about contexts that arise routinely in the $\lambda$-tree syntax approach: such lemmas have fairly obvious proofs but, currently, the user must provide them to complete the overall verification task. 
In the Twelf and Beluga systems, such lemmas are obviated by absorbing them into the meta-theoretic framework. There are reasons related to the validation of verification that lead us to prefer explicit proofs. However, as shown in \cite{belanger14lfmtp}, it is often possible to generate these proofs automatically, thereby allowing the user to focus on the less obvious aspects. In ongoing work, we are exploring the impact of using such ideas on reducing the overall proof effort. \section{Verifying Transformations on Functional Programs} \label{sec:verifpt} We now consider the verification of \LProlog implementations of transformations on functional programs. We exploit the two-level logic approach in this process, treating \LProlog programs as \HOHH specifications and reasoning about them using Abella. Our discussions below will show how we can use the $\lambda$-tree syntax approach and the logical nature of our specifications to benefit in the reasoning process. Another aspect that they will bring out is the virtues of the close correspondence between rule based presentations and \HOHH specifications: this correspondence allows the structure of informal proofs over inference rule style descriptions to be mimicked in a formalization within our framework. We use the closure conversion transformation as our main example in this exposition. The first two subsections below present, respectively, an informal proof of its correctness and its rendition in Abella. We then discuss the application of these ideas to other transformations. Our proofs are based on logical relation style definitions of program equivalence. Other forms of semantics preservation have also been considered in the literature. Our framework can be used to advantage in formalizing these approaches as well, an aspect we discuss in the last subsection. 
\vspace{-0.3cm} \subsection{Informal verification of closure conversion} \label{ssec:ccproof} To prove the correctness of closure conversion, we need a notion of equivalence between the source and target programs. Following~\cite{minamide95tr}, we use a logical relation style definition for this purpose. A complication is that our source language includes recursion. To overcome this problem, we use the idea of step indexing~\cite{ahmed06esop,appel01toplas}. Specifically, we define the following mutually recursive simulation relation $\sim$ between closed source and target terms and equivalence relation $\approx$ between closed source and target values, each indexed by a type and a step measure. \begin{smallalign} & \simulate T k M M' \iff \forall j \leq k. \forall V. M \step{j} V \imply \exists V'. {M'} \eval {V'} \conj \equal T {k-j} V {V'};\\ & \equal \tnat k n n; \qquad \equal \tunit k \unit \unit;\\ & \equal {(T_1 \tprod T_2)} k {\pair {V_1} {V_2}} {\pair {V_1'} {V_2'}} \iff \equal {T_1} k {V_1} {V_1'} \conj \equal {T_2} k {V_2} {V_2'};\\ & \equal {T_1 \to T_2} k {(\fix f x M)} {\clos {V'} {V_e}} \iff \forall j < k. \forall V_1, V_1', V_2, V_2'.\\ & \qquad \equal {T_1} j {V_1} {V_1'} \imply \equal {T_1 \to T_2} j {V_2} {V_2'} \imply \simulate {T_2} j {M[V_2/f, V_1/x]} {V' \app (V_2', V_1', V_e)}. \end{smallalign} Note that the definition of $\approx$ in the fixed point/closure case uses $\approx$ negatively at the same type. However, it is still a well-defined notion because the index decreases. The cumulative notion of equivalence, written $\simulatesans{T}{M}{M'}$, corresponds to two expressions being equivalent under \emph{any} index. Analyzing the simulation relation and using the evaluation rules, we can show the following ``compatibility'' lemma for various constructs in the source language. \begin{mylemma}\label{lem:sim_compose} \begin{enumerate} \item If $\simulate \tnat k {M} {M'}$ then $\simulate \tnat k {\pred M} {\pred M'}$. 
If also $\simulate \tnat k {N} {N'}$ then $\simulate \tnat k {M + N} {M' + N'}$. \item If $\simulate {T_1 \tprod T_2} k {M} {M'}$ then $\simulate {T_1} k {\fst{M}} {\fst{M'}}$ and $\simulate {T_2} k {\snd{M}} {\snd{M'}}$. \item If $\simulate {T_1} {k} {M} {M'}$ and $\simulate {T_2} {k} {N} {N'}$ then $\simulate {T_1 \tprod T_2} {k} {(M,N)} {(M',N')}.$ \item If $\simulate \tnat k M {M'}$, $\simulate T k {M_1} {M_1'}$ and $\simulate T k {M_2} {M_2'}$, then\\ $\simulate T k {\ifz M {M_1} {M_2}} {\ifz {M'} {M_1'} {M_2'}}$. \item If $\simulate{T_1 \to T_2}{k}{M_1}{M_1'}$ and $\simulate{T_1}{k}{M_2}{M_2'}$ then\\ $\simulate{T_2}{k} {M_1 \app M_2} {\letexp g {M_1'} {\open {x_f} {x_e} {g}{x_f \app (g,M_2',x_e)}}}.$ \end{enumerate} \end{mylemma} \noindent The proof of the last of these properties requires us to consider the evaluation of the application of a fixed point expression which involves ``feeding'' the expression to its own body. In working out the details, we use the easily observed property that the simulation and equivalence relations are closed under decreasing indices. Our notion of equivalence only relates closed terms. However, our transformation typically operates on open terms, albeit under mappings for the free variables. To handle this situation, we consider semantics preservation for possibly open terms under closed substitutions. We will take substitutions in both the source and target settings to be simultaneous mappings of closed values for a finite collection of variables, written as $(V_1/x_1,\ldots,V_n/x_n)$. In defining a correspondence between source and target language substitutions, we need to consider the possibility that a collection of free variables in the first may be reified into an environment variable in the second. This motivates the following definition in which $\gamma$ represents a source language substitution: \begin{smallalign} & \equal {x_m:T_m, \ldots, x_1:T_1} k {\gamma} {(V_1,\ldots,V_m)} \iff \forall 1 \leq i \leq m. 
\equal {T_i} k {\gamma(x_i)} {V_i}. \end{smallalign} Writing $\substconcat{\gamma_1}{\gamma_2}$ for the concatenation of two substitutions viewed as lists, equivalence between substitutions is then defined as follows: \begin{smallalign} & \equal {\Gamma, x_n:T_n, \ldots,x_1:T_1} k {\substconcat{(V_1/x_1,\ldots,V_n/x_n) }{\gamma}} {(V_1'/y_1, \ldots, V_n'/y_n, V_e/x_e)}\\ & \qquad \iff (\forall 1 \leq i \leq n. \equal {T_i} k {V_i} {V_i'}) \conj \equal \Gamma k \gamma {V_e}. \end{smallalign} Note that both relations are indexed by a source language typing context and a step measure. The second relation allows the substitutions to be for different variables in the source and target languages. A relevant mapping will determine a correspondence between these variables when we use the relation. We write the application of a substitution $\gamma$ to a term $M$ as $M[\gamma]$. The first part of the following lemma, proved by an easy use of the definitions of $\approx$ and evaluation, provides the basis for justifying the treatment of free variables via their transformation into projections over environment variables introduced at function boundaries in the closure conversion transformation. The second part of the lemma is a corollary of the first part that relates a source substitution and an environment computed during the closure conversion of fixed points. \begin{mylemma}\label{lem:var_sem_pres} Let $\delta = \substconcat{(V_1/x_1,\ldots,V_n/x_n)}{\gamma}$ and $\delta' = (V_1'/y_1,\ldots,$ $V_n'/y_n,V_e/x_e)$ be source and target language substitutions and let $\Gamma = (x_m':T_m',\ldots,x_1':T_1',x_n:T_n,\ldots,x_1:T_1)$ be a source language typing context such that $\equal \Gamma k \delta {\delta'}$. Further, let $\rho = (x_1 \mapsto y_1,\ldots,x_n \mapsto y_n, x_1' \mapsto \pi_1(x_e), \ldots, x_m' \mapsto \pi_m(x_e))$. 
\begin{enumerate} \item If $x : T \in \Gamma$ then there exists a value $V'$ such that $(\rho(x))[\delta'] \eval V'$ and $\equal T k {\delta(x)} {V'}$. \item If $\Gamma' = (z_1:T_{z_1},\ldots,z_j:T_{z_j})$ for $\Gamma' \subseteq \Gamma$ and $\ccenv \rho {(z_1,\ldots,z_j)} M$, then there exists $V_e'$ such that $M[\delta'] \eval V_e'$ and $\equal {\Gamma'} k \delta {V_e'}$. \end{enumerate} \end{mylemma} The proof of semantics preservation also requires a result about the preservation of typing. It takes a little effort to ensure that this property holds at the point in the transformation where we cross a function boundary. That effort is encapsulated in the following strengthening lemma in the present setting. \begin{mylemma}\label{lem:typ_str} If $\Gamma \tseq M:T$, $\{x_1,\ldots,x_n\} \supseteq \fvars M$ and $x_i:T_i \in \Gamma$ for $1 \leq i \leq n$, then $x_n:T_n ,\ldots,x_1:T_1 \tseq M :T$. \end{mylemma} The correctness theorem can now be stated as follows: \begin{mythm}\label{thm:cc_sem_pres} Let $\delta = \substconcat{(V_1/x_1,\ldots,V_n/x_n)}{\gamma}$ and $\delta' = (V_1'/y_1,\ldots,$ $V_n'/y_n,V_e/x_e)$ be source and target language substitutions and let $\Gamma = (x_m':T_m',\ldots,x_1':T_1',x_n:T_n,\ldots,x_1:T_1)$ be a source language typing context such that $\equal \Gamma k \delta {\delta'}$. Further, let $\rho = (x_1 \mapsto y_1,\ldots,x_n \mapsto y_n, x_1' \mapsto \pi_1(x_e), \ldots, x_m' \mapsto \pi_m(x_e))$. If $\Gamma \tseq M:T$ and $\cc \rho M M'$, then $\simulate T k {M[\delta]} {M'[\delta']}$. \end{mythm} \noindent We outline the main steps in the argument for this theorem: these will guide the development of a formal proof in Section~\ref{ssec:vericc}. We proceed by induction on the derivation of $\cc \rho M M'$, analyzing the last step in it. This obviously depends on the structure of $M$. The case for a number is obvious and for a variable we use Lemma~\ref{lem:var_sem_pres}.1. 
In the remaining cases, other than when $M$ is of the form $(\letexp x {M_1} {M_2})$ or $(\fix f x {M_1})$, the argument follows a set pattern: we observe that substitutions distribute to the sub-components of expressions, we invoke the induction hypothesis over the sub-components and then we use Lemma~\ref{lem:sim_compose} to conclude. If $M$ is of the form $(\letexp x {M_1} {M_2})$, then $M'$ must be of the form $(\letexp y {M_1'} {M_2'})$. Here again the substitutions distribute to $M_1$ and $M_2$ and to $M_1'$ and $M_2'$, respectively. We then apply the induction hypothesis first to $M_1$ and $M_1'$ and then to $M_2$ and $M_2'$; in the latter case, we need to consider extended substitutions but these obviously remain equivalent. Finally, if $M$ is of the form $(\fix f x {M_1})$, then $M'$ must have the form $\clos {M_1'} {M_2'}$. We can prove that the abstraction $M_1'$ is closed and therefore that $M'[\delta'] = \clos {M_1'} {M_2'[\delta']}$. We then apply the induction hypothesis. In order to do so, we generate the appropriate typing judgment using Lemma~\ref{lem:typ_str} and a new pair of equivalent substitutions (under a suitable step index) using Lemma~\ref{lem:var_sem_pres}.2. \vspace{-0.3cm} \subsection{Formal verification of the implementation of closure conversion} \label{ssec:vericc} In the subsections below, we present a sequence of preparatory steps, leading eventually to a formal version of the correctness theorem. \vspace{-0.3cm} \subsubsection{Auxiliary predicates used in the formalization.} We use the techniques of Section~\ref{sec:framework} to define some predicates related to the encodings of source and target language types and terms that are needed in the main development; unless explicitly mentioned, these definitions are in \Gee. First, we define the predicates \lsti|ctx| and \lsti|ctx'| to identify typing contexts for the source and target languages.
Next, we define in \HOHH the recognizers \lsti|tm| and \lsti|tm'| of well-formed source and target language terms. A source (target) term \lsti|M| is closed if \lsti|{tm M}| (\lsti|{tm' M}|) is derivable. The predicate \lsti|is_sty| recognizes source types. Finally, \lsti|vars_of_ctx| is a predicate such that \lsti|(vars_of_ctx L Vs)| holds if \lsti|L| is a source language typing context and \lsti|Vs| is the list of variables it pertains to. Step indexing uses ordering on natural numbers. We represent natural numbers using \lsti|z| for 0 and \lsti|s| for the successor constructor. The predicate \lsti|is_nat| recognizes natural numbers. The predicates \lsti|lt| and \lsti|le|, whose definitions are routine, represent the ``less than'' and the ``less than or equal to'' relations. \vspace{-0.3cm} \subsubsection{The simulation and equivalence relations.} The following clauses define the simulation and equivalence relations. \begin{lstlisting} sim T K M M' := forall J V, le J K -> {nstep J M V} -> {val V} -> exists V' N, {eval' M' V'} /\ {add J N K} /\ equiv T N V V'; equiv tnat K (nat N) (nat' N); equiv tunit K unit unit'; equiv (prod T1 T2) K (pair V1 V2) (pair' V1' V2') := equiv T1 K V1 V1' /\ equiv T2 K V2 V2' /\ {tm V1} /\ {tm V2} /\ {tm' V1'} /\ {tm' V2'}; equiv (arr T1 T2) z (fix R) (clos' (abs' R') VE) := {val' VE} /\ {tm (fix R)} /\ {tm' (clos' (abs' R') VE)}; equiv (arr T1 T2) (s K) (fix R) (clos' (abs' R') VE) := equiv (arr T1 T2) K (fix R) (clos' (abs' R') VE) /\ forall V1 V1' V2 V2', equiv T1 K V1 V1' -> equiv (arr T1 T2) K V2 V2' -> sim T2 K (R V2 V1) (R' (pair' V2' (pair' V1' VE))). \end{lstlisting} The formula \lsti|(sim T K M M')| is intended to mean that \lsti|M| simulates \lsti|M'| at type \lsti|T| in \lsti|K| steps; \lsti|(equiv T K V V')| has a similar interpretation. Note the exploitation of $\lambda$-tree syntax, specifically the use of application, to realize substitution in the definition of \lsti|equiv|. 
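The role that meta-level application plays here can be illustrated outside the logic as well. In the Python sketch below (a toy interpreter, with all names hypothetical), the body of a fixed point is a host-language function of the recursive function and its argument; ``substituting'' values for the two bound variables is literally a function call, just as \lsti|(R V2 V1)| and \lsti|(R' ...)| above are logic-level applications.

```python
from dataclasses import dataclass
from typing import Any, Callable

# Toy HOAS-style terms: the binder in Fix is a host-language function.
@dataclass
class Num:
    val: int

@dataclass
class Pred:
    arg: Any

@dataclass
class Ifz:
    cond: Any
    then: Any
    other: Any

@dataclass
class Fix:
    body: Callable[[Any, Any], Any]   # body(f, x): a two-variable binder

@dataclass
class App:
    fn: Any
    arg: Any

def evaluate(t):
    if isinstance(t, (Num, Fix)):
        return t                                   # values evaluate to themselves
    if isinstance(t, Pred):
        return Num(max(evaluate(t.arg).val - 1, 0))
    if isinstance(t, Ifz):
        branch = t.then if evaluate(t.cond).val == 0 else t.other
        return evaluate(branch)
    if isinstance(t, App):
        f, v = evaluate(t.fn), evaluate(t.arg)
        # the substitution [V2/f, V1/x] is realized by meta-level application
        return evaluate(f.body(f, v))
    raise TypeError(t)

# fix f x. ifz x then 0 else f (pred x)
to_zero = Fix(lambda f, x: Ifz(x, Num(0), App(f, Pred(x))))
```

Evaluating `App(to_zero, Num(3))` calls the body once per recursive step, each call performing exactly the substitution that the object-level evaluation rule describes.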
It is easily shown that \lsti|sim| holds only between closed source and target terms and similarly \lsti|equiv| holds only between closed source and target values.\footnote{The definition of \lsti|equiv| uses itself negatively in the last clause and thereby violates the original stratification condition of \Gee. However, Abella permits this definition under a weaker stratification condition that ensures consistency provided the definition is used in restricted ways~\cite{baelde12lics,tiu12ijcar}, a requirement that is adhered to in this paper.} Compatibility lemmas in the style of Lemma \ref{lem:sim_compose} are easily stated for \lsti|sim|. For example, the one for pairs is the following. \begin{lstlisting} forall T1 T2 K M1 M2 M1' M2', {is_nat K} -> {is_sty T1} -> {is_sty T2} -> sim T1 K M1 M1' -> sim T2 K M2 M2' -> sim (prod T1 T2) K (pair M1 M2) (pair' M1' M2'). \end{lstlisting} These lemmas have straightforward proofs. \vspace{-0.3cm} \subsubsection{Representing substitutions.}\label{sec:expl_subst} We treat substitutions as discussed in Section~\ref{sec:framework}. For example, source substitutions satisfy the following definition. \begin{lstlisting} subst nil; subst ((map X V)::ML) := subst ML /\ name X /\ {val V} /\ {tm V} /\ forall V', member (map X V') ML -> V' = V. \end{lstlisting} By definition, these substitutions map variables to closed values. To accord with the way closure conversion is formalized, we allow multiple mappings for a given variable, but we require all of them to be to the same value. The application of a source substitution is also defined as discussed in Section~\ref{sec:framework}. \begin{lstlisting} app_subst nil M M; nabla x,app_subst$\;$((map x V)::(ML x))$\;$(R x)$\;$M := nabla x,app_subst (ML x) (R V) M. 
\end{lstlisting} As before, we can easily prove properties about substitution application based on this definition, such as the fact that it distributes over term structure and that closed terms are unaffected by substitution. The predicates \lsti|subst'| and \lsti|app_subst'| encode target substitutions and their application. Their formalization is similar to that above. \vspace{-0.3cm} \subsubsection{The equivalence relation on substitutions.} We first define the relation \lsti|subst_env_equiv| between source substitutions and target environments: \begin{lstlisting} subst_env_equiv nil K ML unit'; subst_env_equiv ((of X T)::L) K ML (pair' V' VE) := exists V,subst_env_equiv$\;$L K ML VE /\ member$\;$(map X V)$\;$ML /\ equiv$\;$T K V V'. \end{lstlisting} Using \lsti|subst_env_equiv|, the needed relation between source and target substitutions is defined as follows. \begin{lstlisting} nabla e, subst_equiv L K ML ((map e VE)::nil) := subst_env_equiv L K ML VE; nabla x y, subst_equiv ((of x T)::L) K ((map x V)::ML) ((map y V')::ML') := equiv T K V V' /\ subst_equiv L K ML ML'. \end{lstlisting} \vspace{-0.3cm} \subsubsection{Lemmas about \lsti|fvars|, \lsti|mapvar| and \lsti|mapenv|.} Lemma~\ref{lem:typ_str} translates into a lemma about \lsti|fvars| in the implementation. To state it, we define a strengthening relation between source typing contexts: \begin{lstlisting} prune_ctx$\;$nil L nil; prune_ctx$\;$(X::Vs)$\;$L$\;$((of$\;$X$\;$T)::L') := member$\;$(of$\;$X$\;$T)$\;$L /\ prune_ctx$\;$Vs$\;$L$\;$L'. \end{lstlisting} \lsti|(prune_ctx Vs L L')| holds if \lsti|L'| is a typing context that ``strengthens'' \lsti|L| to contain type assignments only for the variables in \lsti|Vs|. The lemma about \lsti|fvars| is then the following. \begin{lstlisting} forall L Vs M T FVs, ctx L -> vars_of_ctx L Vs -> {L |- of M T} -> {fvars M Vs FVs} -> exists L', prune_ctx FVs L L' /\ {L' |- of M T}.
\end{lstlisting} To prove this theorem, we generalize it so that the \HOHH derivation of \lsti|(fvars M Vs FVs)| is relativized to a context that marks some variables as not free. The resulting generalization is proved by induction on the \lsti|fvars| derivation. A formalization of Lemma \ref{lem:var_sem_pres} is also needed for the main theorem. We start with a lemma about \lsti|mapvar|. \begin{lstlisting} forall L Vs Map ML K VE X T M' V, nabla e, {is_nat K} -> ctx L -> subst ML -> subst_env_equiv L K ML VE -> vars_of_ctx L Vs -> {mapvar Vs Map} -> member (of X T) L -> app_subst ML X V -> {member$\;$(map$\;$X$\;$(M'$\;$e))$\;$(Map$\;$e)} -> exists V', {eval' (M' VE) V'} /\ equiv T K V V'. \end{lstlisting} In words, this lemma states the following. If \lsti|L| is a source typing context for the variables $(x_1,\ldots,x_n)$, \lsti|ML| is a source substitution and \lsti|VE| is an environment equivalent to \lsti|ML| at \lsti|L|, then \lsti|mapvar| determines a mapping of $(x_1,\ldots,x_n)$ to projections over an environment with the following character: if the environment is taken to be \lsti|VE|, then, for $1 \leq i \leq n$, $x_i$ is mapped to a projection that must evaluate to a value equivalent to the substitution for $x_i$ in \lsti|ML|. The lemma is proved by induction on the derivation of \lsti|{mapvar Vs Map}|. Lemma \ref{lem:var_sem_pres} is now formalized as follows. \begin{lstlisting} forall L ML ML' K Vs Vs' Map, {is_nat K} -> ctx L -> subst ML -> subst' ML' -> subst_equiv L K ML ML' -> vars_of_ctx L Vs -> vars_of_subst' ML' Vs' -> to_mapping Vs Vs' Map -> (forall X T V M' M'', member (of X T) L -> {member (map X M') Map} -> app_subst ML X V -> app_subst' ML' M' M'' -> exists V', {eval' M'' V'} /\ equiv T K V V') /\ (forall L' NFVs E E', prune_ctx NFVs L L' -> {mapenv NFVs Map E} -> app_subst' ML' E E' -> exists VE', {eval' E' VE'} /\ subst_env_equiv L' K ML VE'). \end{lstlisting} Two new predicates are used here.
The judgment \lsti|(vars_of_subst' ML' Vs')| ``collects'' the variables in the target substitution \lsti|ML'| into \lsti|Vs'|. Given source variables \lsti|Vs = $(x_1,\ldots,x_n,x_1',\ldots,x_m')$| and target variables \lsti|Vs' = $(y_1,\ldots,y_n,x_e)$|, the predicate \lsti|to_mapping| creates in \lsti|Map| the mapping \vspace{-0.2cm} \begin{tabbing} \qquad\=\kill \>$(x_1 \mapsto y_1,\ldots,x_n \mapsto y_n, x_1' \mapsto \pi_1(x_e), \ldots, x_m' \mapsto \pi_m(x_e))$. \end{tabbing} \vspace{-0.2cm} \noindent The conclusion of the lemma is a conjunction representing the two parts of Lemma \ref{lem:var_sem_pres}. The first part is proved by induction on \lsti|{member (map X M') Map}|, using the lemma for \lsti|mapvar| when \lsti|X| is some $x_i' (1 \leq i \leq m)$. The second part is proved by induction on \lsti|{mapenv NFVs Map E}| using the first part. \vspace{-0.3cm} \subsubsection{The main theorem.} The semantics preservation theorem is stated as follows: \begin{lstlisting} forall L ML ML' K Vs Vs' Map T P P' M M', {is_nat K} -> ctx L -> subst ML -> subst' ML' -> subst_equiv L K ML ML' -> vars_of_ctx L Vs -> vars_of_subst' ML' Vs' -> to_mapping Vs Vs' Map -> {L |- of M T} -> {cc$\;$Map$\;$Vs$\;$M$\;$M'} -> app_subst$\;$ML$\;$M$\;$P -> app_subst'$\;$ML'$\;$M'$\;$P' -> sim$\;$T$\;$K$\;$P$\;$P'. \end{lstlisting} We use an induction on \lsti|{cc Map Vs M M'}|, the closure conversion derivation, to prove this theorem. As should be evident from the preceding development, the proof in fact closely follows the structure we outlined in Section~\ref{ssec:ccproof}. \vspace{-0.3cm} \subsection{Verifying the implementations of other transformations} \label{ssec:veriothers} We have used the ideas presented in this section to develop semantics preservation proofs for other transformations such as code hoisting and the CPS transformation. We discuss the case for code hoisting below. 
The first step is to define the step-indexed logical relations $\chsim$ and $\approx'$ that respectively represent the simulation and equivalence relation between the input and output terms and values for code hoisting: \begin{smallalign} & \chsimulate T k M M' \iff \forall j \leq k. \forall V. M \step{j} V \imply \exists V'. {M'} \eval {V'} \conj \chequal T {k-j} V {V'};\\[5pt] & \chequal \tnat k n n; \\ & \chequal \tunit k \unit \unit;\\ & \chequal {(T_1 \tprod T_2)} k {\pair {V_1} {V_2}} {\pair {V_1'} {V_2'}} \iff \chequal {T_1} k {V_1} {V_1'} \conj \chequal {T_2} k {V_2} {V_2'};\\ & \chequal {T_1 \carr T_2} k {(\abs x M)} {(\abs x M')} \iff \forall j < k. \forall V, V'. \chequal {T_1} j V {V'} \imply \chsimulate {T_2} j {M[V/x]} {M'[V'/x]};\\ & \chequal {T_1 \to T_2} k {\clos {\abs p M} {V_e}} {\clos {\abs p M'} {V_e'}} \iff \forall j < k. \forall V_1, V_1', V_2, V_2'. \\ & \qquad \chequal {T_1} j {V_1} {V_1'} \imply \chequal {T_1 \to T_2} j {V_2} {V_2'} \imply \chsimulate {T_2} j {M[(V_2,V_1,V_e)/p]} {M'[(V_2',V_1',V_e')/p]}. \end{smallalign} We can show that $\chsim$ satisfies a set of compatibility properties similar to Lemma~\ref{lem:sim_compose}. We next define a step-indexed relation of equivalence between two substitutions $\delta = {(V_1/x_1,\ldots,V_m/x_m)}$ and $\delta' = {(V_1'/x_1,\ldots,V_m'/x_m)}$ relative to a typing context $\Gamma = ({x_m:T_m,\ldots, x_1:T_1})$: \begin{smallalign} & \chequal \Gamma k \delta {\delta'} \iff \forall 1 \leq i \leq m. \chequal {T_i} k {V_i} {V_i'}. \end{smallalign} The semantics preservation theorem for code hoisting is stated as follows: \begin{mythm} Let $\delta = {(V_1/x_1,\ldots,V_m/x_m)}$ and $\delta' = {(V_1'/x_1,\ldots,V_m'/x_m)}$ be substitutions for the language described in Figure~\ref{fig:targlang}. Let $\Gamma = (x_m:T_m,\ldots,x_1:T_1)$ be a typing context such that $\chequal \Gamma k \delta {\delta'}$. Further, let $\rho = (x_1,\ldots,x_m)$. 
If $\Gamma \tseq M:T$ and $\ch \rho M {M'}$ hold, then $\chsimulate T k {M[\delta]} {M'[\delta']}$ holds. \end{mythm} The theorem is proved by induction on the derivation for $\ch \rho M {M'}$. The base cases follow easily, possibly using the fact that $\chequal \Gamma k \delta {\delta'}$. For the inductive cases, we observe that substitutions distribute over the sub-components of expressions; we then invoke the induction hypothesis on the sub-components and use the compatibility properties of $\chsim$. In the case of an abstraction, $\delta$ and $\delta'$ must be extended to include a substitution for the bound variable. For this case to work out, we must show that the additional substitution for the bound variable has no impact on the functions extracted by code hoisting. From the side condition for the rule deriving $\ch \rho M {M'}$ in this case, the extracted functions cannot depend on the bound variable and hence the desired observation follows. In the formalization of this proof, we use the predicate constants \lsti+sim'+ and \lsti+equiv'+ to respectively represent $\chsim$ and $\approx'$. The Abella definitions of these predicates have by now a familiar structure. We also define a constant \lsti+subst_equiv'+ to represent the equivalence of substitutions as follows: \begin{lstlisting} subst_equiv' nil K nil nil; nabla x, subst_equiv'$\;$((of' x T)::L) K ((map' x V)::ML)$\;$((map' x V')::ML') := equiv' T K V V' /\ subst_equiv' L K ML ML'. \end{lstlisting} The representation of contexts in the code hoisting judgment in the \HOHH specification is captured by the predicate \lsti+ch_ctx+ that is defined as follows: \begin{lstlisting} ch_ctx nil; nabla x, ch_ctx (ch x (htm nil (hbase x)) :: L) := ch_ctx L.
\end{lstlisting} The semantics preservation theorem is stated as follows, where \lsti|vars_of_ctx'| is a predicate for collecting variables in the typing contexts for the target language, \lsti|vars_of_ch_ctx| is a predicate such that \lsti|(vars_of_ch_ctx L Vs)| holds if \lsti|L| is a context for code hoisting and \lsti|Vs| is the list of variables it pertains to: \begin{lstlisting} forall L K CL ML ML' M M' T FE FE' P P' Vs, {is_nat K} -> ctx' L -> ch_ctx CL -> vars_of_ctx' L Vs -> vars_of_ch_ctx CL Vs -> subst' ML -> subst' ML' -> subst_equiv' L K ML ML' -> {L |- of' M T} -> {CL |- ch M (htm FE M')} -> app_subst' ML M P -> app_subst' ML' (htm FE M') (htm FE' P') -> sim' T K P (htm FE' P'). \end{lstlisting} The proof is by induction on \lsti+{CL |-$\;$ch M (htm$\;$FE$\;$M')}+ and its structure follows that of the informal one very closely. The fact that the extracted functions do not depend on the bound variable of an abstraction is actually explicit in the logical formulation and this leads to an exceedingly simple argument for this case. \vspace{-0.3cm} \subsection{Relevance to other styles of correctness proofs} \label{ssec:othercorrectness} Many compiler verification projects, such as CompCert~\cite{leroy06popl} and CakeML~\cite{kumar14popl}, have focused primarily on verifying whole programs that produce values of atomic types. In this setting, the main requirement is to show that the source and target programs evaluate to the same atomic values. Structuring a proof around program equivalence based on a logical relation is one way to do this. Another, sometimes simpler, approach is to show that the compiler transformations permute over evaluation; this method works because transformations typically preserve values at atomic types. Although we do not present this here, we have examined proofs of this kind and have observed many of the same kinds of benefits to the $\lambda$-tree syntax approach in their context as well.
Programs are often built by composing separately compiled modules of code. In this context it is desirable that the composition of correctly compiled modules preserve correctness; this property applied to compiler verification has been called modularity. Logical relations pay attention to equivalence at function types and hence proofs based on them possess the modularity property. Another property that is desirable for correctness proofs is transitivity: we should be able to infer the correctness of a multi-stage compiler from the correctness of each of its stages. This property holds when we use logical relations if we restrict attention to programs that produce atomic values but cannot be guaranteed if equivalence at function types is also important; it is not always possible to decompose the natural logical relation between a source and target language into ones between several intermediate languages. Recent work has attempted to generalize the logical relations based approach to obtain the benefits of both transitivity and modularity~\cite{neis15icfp}. Many of the same issues relating to the treatment of binding and substitution appear in this context as well and the work in this paper therefore seems to be relevant also to the formalization of proofs that use these ideas. Finally, we note that the above comments relate only to the formalization of proofs. The underlying transformations remain unchanged and so does the significance of our framework to their implementation.
\section{How many evolved stars can be observed with MATISSE?} First of all, we need to emphasize that MATISSE will mainly focus on dusty stars, as the mid-infrared is the perfect match to collect data on the dust quantity and composition around a given object. To illustrate that, we selected a few topics of interest that will be developed in this paper, and counted the number of stars that will be observable with VLTI/MATISSE. We included a number of non-dusty targets (regular WR stars and Be stars) in the sample to compare the performances of MATISSE with its prime targets. To do so, one needs to take into account the sensitivity of the instrument itself in the L- and N-bands\cite{Matter2016}, of course, but also the sensitivity of the VLTI infrastructure subsystems, which can sometimes be the limiting factor, especially for red targets. We therefore need to take into account the telescope guiding limit in the V-band (V=13.5 and V=17 for ATs STRAP tip-tilt and UTs MACAO AO, respectively), and the fringe tracker limit in the K-band for GRA4MAT and other devices like IRIS (K=7.5 for ATs and K=10 for UTs). We considered star lists of Wolf-Rayet stars (WR), Red supergiant stars (RSG), Asymptotic Giant Branch stars (AGB), Be stars and B[e] supergiant stars visible from Paranal; we extracted them from the following catalogs: \begin{itemize} \item Rosslowe \& Crowther (2014) for WR stars\cite{2015MNRAS.447.2322R}, \item Hoffleit (1991) for RSG stars\cite{1991bsc..book.....H}, \item we scanned through ADS publications on the topic for AGB stars, \item Frémat et al. (2005)\cite{2005A&A...440..305F} and Yudin (2001)\cite{2001A&A...368..912Y} for Be stars, \item for B[e] supergiant stars, we used our own catalog of stars. \end{itemize} When available, we retrieved the SED of each object (V, K, L and N magnitudes) and ISO/IRAS spectra (if they exist). Magnitudes of these targets were compared with the theoretical limits of MATISSE, considering spectrally-usable data, i.e.
data with a SNR$\geq3$ per spectral channel during the expected optimal DIT and exposure time combination, with and without fringe tracker\cite{Matter2016}. The result of this study is shown in Fig.~\ref{fig:targets}. One can see that, out of our selected catalogs, roughly one third of the considered AGB targets, two thirds of the supergiant B[e] stars, and half of the considered red supergiant stars can be observed with MATISSE in its low spectral resolution mode (R$\approx$35). We are likely limited by our sample size in these cases. On the other hand, very red targets like dusty Wolf-Rayet stars, or very blue targets like naked Wolf-Rayet stars or Be stars, will be more difficult to observe with MATISSE due to either the sensitivity limits of VLTI in the V band (very red targets), or to the MATISSE sensitivity limit (very blue targets). In both cases, the addition of a K-band fringe tracker to MATISSE, like the foreseen GRA4MAT project of ESO, will improve the situation by a large factor (especially when using the ATs). This effect is even more striking in high spectral resolution, where we will be able to resolve the kinematic motion of gas in the circumstellar envelopes\cite{}. While only a handful of targets will be observable with MATISSE alone, the addition of an external fringe tracker will allow the instrument to go at full throttle in the delivery of scientific results. \begin{figure}[htbp] \begin{center} \begin{tabular}{cc} \includegraphics[width=.47\textwidth]{Histo_LR.png}& \includegraphics[width=.47\textwidth]{Histo_HR.png} \end{tabular} \end{center} \caption[Number of targets observable] { \label{fig:targets} {\bf Left:} Number of targets observable with MATISSE in low spectral resolution (R$\approx$35) for different types of stars, both in L band (\texttt{L\_noft}, dark blue bars) and N band (\texttt{N\_noft}, light blue bars).
The transparent bars show the improvement of observability when using an external fringe tracker (\texttt{L\_ft} and \texttt{N\_ft}), like e.g. the foreseen project GRA4MAT. {\bf Right:} The same plot for the high resolution of MATISSE (R$\approx$4500 around the hydrogen Brackett $\alpha$ line).} \end{figure} \section{Massive stars} Massive stars are stars with a mass above 8 solar masses (M$_{\odot}$). They evolve fast and explode as supernovae at the end of their lives, after having gone through a fireworks display of different physical processes. Among them, one can count the main sequence O and B-type stars, red, yellow and blue supergiant stars, Wolf-Rayet stars, and other types of exotic stars. We invite the reader to consult the other papers of this conference for more details\cite{Hofmann2016,Soulain2016}. \subsection{Dusty blue supergiant stars} Several blue supergiant stars (Wolf-Rayet, Luminous Blue Variable, supergiant B[e] stars) produce large amounts of dust before exploding as supernovae. Though they are not the main dust-producers in our Galaxy, they do vastly influence their surroundings by injecting kinetic energy and dust into their vicinity. In some cases, they can even trigger star formation. The circumstellar environment (CE) around blue supergiant stars is often dense and highly anisotropic. This anisotropy could be driven by fast rotation and/or by the presence of a companion star. For example, fast-rotating B and O stars will present a denser and slower wind in the equatorial direction, and a lighter and faster wind at the poles, according to the usual sketch\cite{1991A&A...244L...5L}. At the equator, the density can be high enough to veil the UV radiation from the star, allowing dust to condense into a disk-like structure, visible in the infrared.
Such a break of symmetry of the CE has strong implications for the later phases of stellar evolution, as it can affect the stellar angular momentum and act as a brake for ejected material. On the other hand, a secondary star orbiting around an O or a B star can affect the wind of the main star through gravitational influence, and concentrate material in the orbital plane. The consequence is similar to that for rotating stars, i.e. an over-density in the equatorial plane that can trigger dust formation. However, the shape of such a disk-like structure would likely be different from the first case, and there are opportunities for detecting the companion star, depending on the flux ratio between the two stars\cite{Millour2009a, Kraus2012}. To prepare the MATISSE program on supergiant B[e] stars, we simulated the imaging performance of MATISSE for a disk-like structure and a possible influence of fast rotation of the hot star. To prepare and interpret the MATISSE observations we used an NLTE Monte Carlo radiation transfer code to compute a grid of supergiant B[e] models dedicated to mid- and near-IR interferometry \cite{Domiciano-de-Souza2012_v464p149}. The case of binarity is expected to be much easier, as it was already demonstrated that imaging with few baselines can be done in such a case\cite{Millour2009a, Kraus2012}. We simulated MATISSE with homemade scripts and produced OIFITS\cite{2005PASP..117.1255P} files. We input these files into the \texttt{MiRA} image reconstruction software developed by E. Thi\'ebaut\cite{Thiebaut2010a}. The results of our study are shown in Fig.~\ref{fig:B_crochet_e}. One can see that we are able to reconstruct most of the features seen in the model images: the central point source, the extent of the disk, and the inner hole seen in the 60-degree inclination images. This study confirms that MATISSE will be a perfect imaging machine for these types of stars.
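The forward step of such a simulation, sampling the Fourier transform of a model image at the spatial frequencies probed by the baselines, can be sketched as follows (a minimal illustration with a hypothetical uniform disk and baseline, not the actual scripts used above):

```python
import numpy as np

# An interferometer measures the complex visibility, i.e. the Fourier
# transform of the sky brightness, at spatial frequencies (u, v) = B/lambda.

def model_visibilities(image, pixel_mas, u, v):
    """Complex visibilities of `image` at spatial frequencies u, v (cycles/rad).

    image     -- 2D array of sky brightness
    pixel_mas -- pixel scale in milliarcseconds
    """
    mas = np.pi / 180.0 / 3600.0 / 1000.0            # 1 mas in radians
    ny, nx = image.shape
    x = (np.arange(nx) - nx // 2) * pixel_mas * mas  # sky coordinates (rad)
    y = (np.arange(ny) - ny // 2) * pixel_mas * mas
    X, Y = np.meshgrid(x, y)
    vis = np.array([np.sum(image * np.exp(-2j * np.pi * (uu * X + vv * Y)))
                    for uu, vv in zip(u, v)])
    return vis / image.sum()                         # normalised: V(0, 0) = 1

# A 50-mas-diameter uniform disk observed with a 100 m baseline at 10 microns:
img = np.zeros((128, 128))
yy, xx = np.indices(img.shape)
img[(xx - 64) ** 2 + (yy - 64) ** 2 <= 25 ** 2] = 1.0   # radius 25 px = 25 mas
V = model_visibilities(img, pixel_mas=1.0,
                       u=np.array([100.0 / 10e-6]), v=np.array([0.0]))
print(abs(V[0]))   # partially resolved disk: |V| lies between 0 and 1
```

An image reconstruction code such as \texttt{MiRA} or \texttt{IRBis} then solves the inverse problem: it searches for the image whose visibilities best match the sampled data, under regularization constraints.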
\begin{figure}[htbp] \begin{center} \begin{tabular}{cc} \includegraphics[width=1.\textwidth]{B_crochet_e.pdf} \end{tabular} \end{center} \caption[B{[}e{]} images] { \label{fig:B_crochet_e} {\bf Left:} modelled (left) vs. reconstructed (right) images of typical B[e] star disks at a 90-degree inclination (edge-on disk). {\bf Right:} the same for a 60-degree inclination disk. The image reconstruction software used here is MiRA.} \end{figure} \subsection{The case of $\eta$ Carinae} The Luminous Blue Variable (LBV) phase is a short-lived phase in the life of massive stars. The strong mass loss in this phase is not well understood. The LBV $\eta$ Car provides a unique laboratory for studying the evolution of massive stars and the massive stellar wind during the LBV phase. $\eta$~Car is also an eccentric binary with a period of 5.54 yr. X-ray studies demonstrate that there is a wind-wind collision zone, which depends on the orbital phase. Infrared interferometry with the VLTI allows us to study the stellar wind of the primary star and the wind-wind collision zone of the primary and secondary star with unprecedented angular resolution of about 5\,mas and simultaneously with high spectral resolution. Using the VLTI, the diameter of $\eta$~Car's wind region was measured to be about 4.2\,mas in the $K$-band continuum\cite{2003A&A...410L..37V,2007A&A...464.1045K,2007A&A...464...87W} (50\% encircled intensity diameter). In the He\,I\,2.059\,$\mu$m and the Br$\gamma$\,2.166\,$\mu$m emission lines, diameters of 6.5 and 9.6\,mas were measured, respectively\cite{2007A&A...464...87W}. For comparison, the radius of the primary star of $\eta$ Car is on the order of 100 solar radii $\sim$ 0.47\,au $\sim$ 0.20\,mas\cite{2001ApJ...553..837H}. This means that the measured 4.8\,mas Br$\gamma$ wind radius is about 25 times larger than the stellar radius.
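The quoted sizes follow from the small-angle relation $\theta['']=a[\mathrm{au}]/d[\mathrm{pc}]$; a quick sanity check, assuming a distance to $\eta$~Car of about 2.35 kpc (a commonly used value, not stated in the text above):

```python
# Small-angle check of the quoted sizes (the distance of 2.35 kpc is an
# assumption on our part): theta["] = a[au] / d[pc], so 1 au at 1 kpc
# subtends 1 mas.

d_pc = 2350.0                  # assumed distance to eta Car, in parsecs
r_star_au = 0.47               # primary radius, ~100 solar radii

theta_star_mas = r_star_au / d_pc * 1000.0
print(round(theta_star_mas, 2))               # 0.2 mas, matching the quote

r_wind_mas = 9.6 / 2.0         # Br-gamma diameter 9.6 mas -> radius 4.8 mas
print(round(r_wind_mas / theta_star_mas))     # 24, i.e. ~25 stellar radii
```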
The measured line visibilities agree with predictions of the radiative transfer model of Hillier et al.\cite{2001ApJ...553..837H}. VLTI-AMBER observations (performed at the beginning of 2014, about 5 to 7 months before the August 2014 periastron passage) allowed the reconstruction of velocity-resolved aperture-synthesis images in more than 100 different spectral channels distributed across the Br$\gamma~$2.166\,$\mu$m emission line\cite{Hofmann2016, Weigelt2016}. The intensity distribution of the obtained images strongly depends on wavelength. Interestingly, in the blue wing of the Br$\gamma~$2.166\,$\mu$m emission line, the wind region is much more extended than in the red wing. At radial velocities of approximately $-$140 to $-$376\,km/s measured relative to line center (i.e., in the blue line wing), the intensity distribution is fan-shaped with a position angle of the symmetry axis of $\sim$126 degrees. The fan-shaped structure extends approximately 8\,mas ($\sim$19\,au) to the south-east and 6\,mas ($\sim$14\,au) to the north-west (measured at the 16\% intensity contour). Three-dimensional smoothed particle hydrodynamic simulations of $\eta$~Car's colliding winds predict a wind density distribution that is much more extended than that of the undisturbed primary wind. At the time of our observations, the secondary star of $\eta$~Car was in front of the primary star, and therefore the extended wind collision cavity was opened up into our line of sight. The comparison of the model images with the shape and wavelength dependence of the intensity distributions of the reconstructed VLTI images suggests that the obtained VLTI images are the first direct images of the innermost wind-wind collision zone.
In the red line wing (positive radial velocities), the extension of the resolved wind structure is much smaller than in the blue wing, because the red-wing light is emitted from the wind region behind the primary star, where the primary wind is not disturbed by the wind collision (i.e., at positive radial velocities, we are looking through the optically thin wind collision zone in front of the primary). The $\eta$~Car channel maps provide velocity-dependent image structures that can be used to test three-dimensional hydrodynamical models of the massive interacting winds. \subsection{Dust plumes around Wolf-Rayet stars} We also investigated the potential of MATISSE to detect asymmetries in the dusty environment of massive blue stars, with an application to the detection of several spiral dust plumes around dusty Wolf-Rayet stars. These dust plumes likely trace the dust forming in the wind collision zone of a binary system composed of a WR and an O star. Many dust shells have been detected around Wolf-Rayet stars, but few of them were resolved into spiral plumes. MATISSE has the potential to image most of these dust shells with a resolution of a few milliarcseconds, enabling the detection of many more spiral plumes and potentially confirming that the WC8 and WC9 spectral types are linked to inner-system binarity. To illustrate the MATISSE potential, we simulated observations of a phenomenological ``pinwheel'' model with MATISSE using the readily available ASPRO software that has been updated to take into account the MATISSE-specific noise sources. The obtained OIFITS files were fed into the official MATISSE image reconstruction tool \texttt{IRBis}, which notably won this year's interferometry beauty contest\cite{Sanchez2016}. The result is shown in Fig.~\ref{fig:pinwheel}, evidencing that MATISSE is indeed able to resolve a ``pinwheel'' nebula in 3 nights of observation.
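A toy parametrisation of such a spiral plume, dust formed at the wind-collision point and advected radially at the wind speed while the binary rotates, tracing an Archimedean spiral, can be sketched as follows (our own illustration, not the phenomenological model actually used for the test above):

```python
import numpy as np

# Toy "pinwheel": dust condensed at the wind-collision point moves radially
# outward at constant speed while the binary rotates with period P, so the
# plume follows an Archimedean spiral r(phi) = r0 + k * phi.

def pinwheel_image(npix=256, r0=2.0, k=4.0, turns=3.0):
    """Toy intensity map of a one-armed dust spiral (pixel units)."""
    img = np.zeros((npix, npix))
    phi = np.linspace(0.0, 2.0 * np.pi * turns, 4000)
    r = r0 + k * phi                           # Archimedean spiral
    x = (npix // 2 + r * np.cos(phi)).astype(int)
    y = (npix // 2 + r * np.sin(phi)).astype(int)
    ok = (x >= 0) & (x < npix) & (y >= 0) & (y < npix)
    # dust dims as it expands: weight each point by 1/r
    np.add.at(img, (y[ok], x[ok]), 1.0 / r[ok])
    return img

img = pinwheel_image()
print(img.shape, img.max() > 0)
```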
\begin{figure}[htbp] \begin{center} \begin{tabular}{cc} \includegraphics[width=.6\textwidth]{Pinwheels.pdf} \end{tabular} \end{center} \caption[pinwheel image] { \label{fig:pinwheel} Modelled (Left) vs. reconstructed images (Right) of a typical ``pinwheel nebula'' around a dusty Wolf-Rayet star. We used \texttt{IRBis} for this test case.} \end{figure} \subsection{Young massive clusters} Young massive clusters are gravitationally bound groups of newly formed stars. These objects provide a rare opportunity to study the formation of massive stars and their evolution in the early stages. Among the well-known young clusters, R136 is located in the heart of 30 Doradus (in the LMC) and hosts the most massive stars known so far in the Local Universe. These stars are born in large HII regions, embedded in gas and dust. MATISSE can be used to resolve the crowded core of R136-like clusters with higher angular resolution than conventional telescopes equipped with adaptive optics (e.g. SPHERE), and at longer wavelengths (L- and N-bands), in order to overcome the problem of interstellar extinction. We simulated R136 using the publicly available N-body gravitational simulation \texttt{Nbody6} code\footnote{available at \url{http://www.ast.cam.ac.uk/~sverre/web/pages/nbody.htm}}. The left part of Figure \ref{fig:clusters} shows the inner $0.075 \times 0.075$\,pc of the simulated cluster, which corresponds to the FoV of MATISSE using the UTs ($0.3''$). This snapshot of the core from the numerical \texttt{Nbody6} simulation is taken at an age of 2 Myr in the N-band and at the distance of the LMC (50 kpc). We used the \texttt{ASPRO} tool from JMMC\footnote{available here: \url{http://www.jmmc.fr}} to simulate the MATISSE observables (visibilities, closure phases and differential phases), and the \texttt{IRBis} software to reconstruct the synthetic image. The result can be seen in the right panel of Figure \ref{fig:clusters}. The core massive stars have separations down to 24 mas.
However, we also placed a secondary companion in a binary system with a separation of 3.5\,mas (seen close to the lower-left star in the left part of Figure \ref{fig:clusters}), and this companion star cannot be resolved by MATISSE in the N band. Note that we did not perform image reconstruction tests in the L band, and it could be that this companion star is resolved at this wavelength, where MATISSE has a factor 3 higher angular resolution. Further tests need to be done to fully characterize the MATISSE performance in the context of star cluster observations. \begin{figure}[htbp] \begin{center} \begin{tabular}{cc} \includegraphics[width=.6\textwidth]{Clusters.pdf} \end{tabular} \end{center} \caption[Cluster image] { \label{fig:clusters} Simulated image of the core of a R136-like cluster at the age of 2 Myr in N-band ({\bf Left}) vs. reconstructed images ({\bf Right}). The left panel shows $0.3'' \times 0.3''$ (FoV of MATISSE using UTs) covering $0.075\times0.075$\,pc$^2$ of the core of the cluster at the distance of the LMC (50 kpc). The image reconstruction software used here is \texttt{IRBis}.} \end{figure} \subsection{Red supergiant stars} Red supergiants (RSGs) are strong contributors to the chemical and dust enrichment of the Galaxy through their mass loss ($2\times 10^{-7}$\,M$_{\odot}$/yr to $3\times 10^{-4}$\,M$_{\odot}$/yr, de Beck et al. 2010) and their final supernova explosion. The mass-loss process still remains poorly understood. RSGs do not experience regular flares or pulsations and the triggering of their mass loss remains a mystery. It may be linked to magnetic activity, stellar convection, radiation pressure on molecules and/or grains, or all of these together. The stellar surface and circumstellar dynamics and constitution of these stars have to be studied in detail to unveil the mass-loss mechanism. MATISSE will bring new insights at wavelengths still poorly explored for RSGs.
The observations will range from the inner stellar photosphere (L and M band probing molecular transitions and pseudo-continuum) to the outer envelope (N band). Monitoring the temporal variation is crucial to make a step forward in the understanding of the stellar dynamics and the link with the mass-loss mechanism. \section{Lower mass stars: AGB \& pulsating stars} \subsection{AGB stars} Most of the low and intermediate mass stars end their lives in the Asymptotic Giant Branch (AGB) phase. During this phase, pulsation and radiation pressure on dust lead to a phase of strong mass loss, during which gas and dust enriched by the products of the star's nucleosynthesis will be ejected. This mass loss is crucial for the chemical enrichment of the ISM and therefore for the chemical evolution of galaxies. A pivotal aspect of the mass-loss process is its geometry, i.e. the density distribution of the circumstellar envelope of the AGB stars at different scales and different evolutionary phases. To understand the mass-loss process, it is essential to study the mass loss from very deep inside the star up to the interface with the ISM. Due to its broad wavelength coverage MATISSE is uniquely suited to study the different dust and molecular species present in the atmospheres and envelopes of AGB stars. The aim is to achieve a complete view from the upper photosphere to the outer envelope layers and to complement Herschel and MIDI surveys (see Paladini et al., this conference). \subsection{Envelopes around Cepheid stars} A powerful way of constraining the period-luminosity (PL) relation is to use the Baade-Wesselink (BW) method of distance determination. The basic principle of the BW method is to compare the linear and angular size variation of a pulsating star in order to derive its distance through a simple division.
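This simple division can be sketched numerically as follows; the linear and angular radius variations used below are hypothetical, chosen only to illustrate the unit handling, not measurements of any real Cepheid:

```python
import math

# Baade-Wesselink principle: the linear radius variation (from integrating
# the pulsation velocity) divided by the angular radius variation (from
# interferometry) gives the distance. Illustrative numbers only.

AU_KM = 1.495978707e8              # 1 au in km
MAS_RAD = math.pi / 180.0 / 3600.0 / 1000.0   # 1 mas in radians

delta_R_km = 2.0e6                 # assumed linear radius variation (km)
delta_theta_mas = 0.05             # assumed angular radius variation (mas)

d_km = delta_R_km / (delta_theta_mas * MAS_RAD)
d_pc = d_km / (AU_KM * 206265.0)   # 1 pc = 206265 au
print(round(d_pc), "pc")           # a few hundred parsecs for these inputs
```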
The angular diameter is either derived by interferometry\cite{kervella04a} (hereafter IBW for Interferometric Baade-Wesselink method), or by using the InfraRed Surface Brightness (IRSB) relation\cite{storm11a,storm11b}. However, when determining the linear radius variation of the Cepheid by spectroscopy, one has to use a projection factor to convert the radial velocity into the pulsation velocity \cite{nardetto04,merand05}. In this method, angular and linear diameters have to correspond to the same physical layer in the star to provide a correct estimate of the distance\cite{nardetto07,nardetto09}. Consequently, the circumstellar environment (CSE) around Cepheids should be considered in both versions of the BW method, particularly when deriving the angular diameter curve. The impact of CSEs on the period-luminosity relation of Cepheids also needs to be established. Envelopes around Cepheids have been discovered by long-baseline interferometry in the K band with VLTI\cite{kervella06a} and CHARA\cite{merand06} (see Fig.~\ref{lcar}). Since then, four Cepheids have been observed in the N band with VISIR and MIDI\cite{kervella09,gallenne13b} and one with NACO\cite{gallenne11,gallenne12}. Some evidence has also been found using high-resolution spectroscopy\cite{nardetto08b}. The size of the envelope seems to be at least 3 stellar radii and the flux contribution in K band is from 2\% to 10\% of the continuum, for medium- and long-period Cepheids respectively, while it is around 10\% or more in the N band (also estimated from SEDs derived with VISIR). \begin{figure}[htbp] \begin{center} \begin{tabular}{cc} \includegraphics[width=.6\textwidth]{lcar.pdf} \end{tabular} \end{center} \caption[lcar] {\label{lcar}{\bf Left:} the geometry of the surrounding envelope of l Car (in K band) as constrained by VINCI observations (Kervella et al. 2006).
The envelope is assumed to be centro-symmetric and the Full Width at Half Maximum of the modelled Gaussian is around 3 stellar radii, with a flux contribution of 4\%. {\bf Right:} the geometry of the envelope expected in the N band is almost the same but with a larger flux contribution (figure taken from Antoine Merand's PhD thesis).} \end{figure} Recently, Nardetto et al. (2016, in press) found a resolved structure around the prototype classical Cepheid delta Cep in the visible spectral range using VEGA/CHARA interferometric observations. These data are indeed consistent in first approximation with a quasi-hydrostatic model of pulsation surrounded by a static CSE with a size of 8.9 $\pm$ 3.0 mas and a relative flux contribution of 0.07 $\pm$ 0.01. A model of a visible nebula (a background source filling the field of view of the interferometer) with the same relative flux contribution is also consistent with the data at small spatial frequencies. However, in both cases, they find discrepancies in the squared visibilities at high spatial frequencies (maximum 2$\sigma$) with two different regimes over the pulsation cycle of the star, $\phi=0.0-0.8$ and $\phi=0.8-1.0$. One possibility could be that the star is lighting up its environment differently at minimum ($\phi=0.0-0.8$) and maximum ($\phi=0.8-1.0$) radius. This reverberation effect would then be more important in the visible than in the infrared since the contribution in flux of the CSE (or the background) in the visible band is about 7\% compared to 1.5\% in the infrared (Merand et al. 2015). The processes at work in the infrared and in the visible regarding the CSE are different: we expect thermal emission in the infrared and scattering in the visible. MATISSE offers a unique opportunity to study the envelopes of Cepheids.
Determining the size of these envelopes (as a function of the spectral band) and their geometry (as a function of the pulsation phase) will bring insights into the links between pulsation, mass loss and envelopes. \acknowledgments This research has made use of the SIMBAD database, operated at CDS, Strasbourg, France. We used data from the Infrared Space Observatory (ISO) from ESA, and from the Infrared Astronomical Satellite (IRAS). We are grateful to ESO, CNRS/INSU, and the Max-Planck Society for continuous support in the MATISSE project. NN acknowledges the support of the French Agence Nationale de la Recherche (ANR), under grant ANR-15-CE31-0012-01 (project UnlockCepheids). NN acknowledges financial support from ``Programme National de Physique Stellaire'' (PNPS) of CNRS/INSU, France.
\section*{} \vspace{-1cm} \footnotetext{\textit{$^{a}$~Cavendish Laboratory, University of Cambridge, CB3 0HE Cambridge, U.K.; E-mail: am2212@cam.ac.uk}} \footnotetext{\textit{$^{b}$~Statistical Physics Group, Department of Chemical Engineering and Biotechnology, University of Cambridge, CB3 0AS Cambridge, U.K.; E-mail: az302@cam.ac.uk}} \footnotetext{\ddag~Present address: Institut Laue-Langevin, 71 Avenue des Martyrs, 38000 Grenoble, France } \section{Introduction} Every day we encounter materials that consist of colloidal particles adsorbed at fluid interfaces, like foams and emulsions, mostly exploited in the oil-recovery industry and in food and pharmaceutical formulations.~\cite{Binks2002, Hunter2008, Calderon2008, Dickinson2010} In recent years, increasing attention has been paid to fluid interfaces as templates for the direct assembly of inorganic nanoparticles. As a result, fluid interfaces are a versatile platform to create mechanically stable nanostructures~\cite{Russell2007, Marzan2010, KnowlesMezzenga_AM, Dai2015, Dai2017, DelGado2014} for cutting-edge applications including biosensing~\cite{Duyne2008} and catalysis~\cite{Stellacci_2012,Mezzenga_2015}, to mention a few examples. In addition, the use of ligand-nanoparticle complexes improves control not only of adsorption to fluid interfaces but also of the interfacial assembly and dynamics, allowing nanoparticle interfacial layers to be exploited more broadly in advanced materials applications.~\cite{Garbin_review} In these complexes, the nanoparticle core determines the photonic or electronic properties, whereas the (physically or chemically) surface-attached ligands define the particle's adsorption and interaction in the interfacial plane.
Colloidal particles have strong affinity for fluid interfaces because the adsorption energy largely exceeds the particle's thermal energy $k_{B}T$.~\cite{Binks2002} They are also small enough (from micro- to nanoscale) for gravity effects to be negligible; their equilibrium position with respect to an air/water interface is therefore determined by the balance of the three interfacial energies corresponding to the air/water, particle/water and particle/air interfaces. This is accounted for by an equilibrium contact angle described by Young's equation.~\cite{Maestro_ca_2015} At a fluid interface, the adsorbed particles are highly mobile and can reach an equilibrium assembly dictated by inter-particle interactions.~\cite{Pieranski_1980} In general, colloidal microparticles with dissociable charged groups on their surface repel each other due to double-layer repulsive forces that counteract the attractive van der Waals interaction.~\cite{Israelachvili, Oettel2008} Additionally, capillary forces may emerge, induced by local interfacial deformation between particles.~\cite{Kralchevsky2000, Oettel2005} In the case of nanoparticles, whose adsorption energies are only of the order of 10-100 times larger than the thermal energy, the interfacial assembly depends strongly on the competition between thermal fluctuations and interfacial forces.~\cite{Bresme2007} It is in this context that a precise determination of the nanoparticles' wettability is crucial to control their interfacial assembly.~\cite{Isa_CA} This picture must also address how the presence of bulk flow induces an interfacial shear deformation that modifies the particles' arrangement and, therefore, yields a viscoelastic response.
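The magnitude of the adsorption energy can be illustrated with the standard single-particle detachment-energy estimate, $E = \pi r^{2}\gamma\,(1-|\cos\theta|)^{2}$, compared with $k_{B}T$ (our illustration; the section above only quotes the qualitative result, and the particle radius and contact angle below are indicative):

```python
import math

# Standard estimate of the energy needed to detach a spherical particle of
# radius r and contact angle theta from an interface of tension gamma:
#   E = pi * r^2 * gamma * (1 - |cos(theta)|)^2

KB = 1.380649e-23                  # Boltzmann constant, J/K

def detachment_energy_kT(radius_m, gamma_N_m, theta_deg, T=298.0):
    """Detachment energy in units of k_B T."""
    e = (math.pi * radius_m ** 2 * gamma_N_m
         * (1.0 - abs(math.cos(math.radians(theta_deg)))) ** 2)
    return e / (KB * T)

# a ~1 nm particle at a clean air/water interface (gamma ~ 72 mN/m),
# contact angle 90 degrees (maximal attachment):
print(round(detachment_energy_kT(1e-9, 0.072, 90.0)))   # -> 55
```

The result, tens of $k_{B}T$ for a nanometric particle, falls in the 10-100 $k_{B}T$ regime quoted above, while micron-sized particles reach millions of $k_{B}T$ and are effectively irreversibly adsorbed.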
It has been demonstrated that particles assembled into polycrystalline~\cite{Retsch2009, Dai2012, Keim_2015, Isa_buttinoni_2017} and/or amorphous~\cite{Cicuta_superposition2003} colloidal monolayers show identical shear rheological features: under shear deformation, particle monolayers respond as linear elastic solids at small strains, whereas at large enough strains microstructural rearrangements become irreversible, leading to plastic flow.~\cite{Keim_2015} In this work, we focus our attention on nanoparticle monolayers that assemble into disordered solid structures reminiscent of colloidal glasses. Several rheological studies have been performed in recent years to establish the link between the deformation and flow of nanoparticle-laden interfaces and the interfacial microstructure and interparticle interactions.~\cite{Zang_2010, Liggieri_2011, Vermant_2013, Barman_2014, Maestro_Langmuir2015} In general, the presence of nanoparticles increases the rigidity of fluid interfaces and, therefore, their resistance against deformation.~\cite{Erni_review} This has been extensively exploited, mainly in the stabilization of foams~\cite{Gonzenbach2006, Orsi2012, Arriaga2014, Maestro_bubbles2014} and emulsions.~\cite{Whitby2005, Monteux2007, Vermant2017} In contrast, a fundamental explanation of the microscopic mechanism controlling the dynamics of the particles at fluid interfaces has remained more elusive.
A versatile model system for interfacial coating consists of hydrophilic colloidal silica nanoparticles in combination with oppositely charged surfactants.~\cite{Gonzenbach2006, RaveraCSA2006, MaestroWet2012} Here, the surfactant molecules control not only the adsorption and the wettability of the particles but also the interaction strength between them --and, therefore, the interfacial packing density $\phi$-- which can be tuned by modifying the surfactant concentration $C_{s}$: the amphiphilic molecules anchored at the particle surface give rise to an attractive force between the nanoparticles based on the hydrophobic interaction between their hydrocarbon tails. The shear-induced deformation of these surfactant-nanoparticle complexes, at high enough surfactant concentration, shows an overall solid-like response below a yield point, with the properties of a 2D glass.~\cite{Maestro_Langmuir2015} The yielding behavior of silica nanoparticles attached at air/water interfaces has also been studied very recently by large-amplitude oscillatory rheology.~\cite{Harbottle_yielding_2016} In that work, in which both the surfactant and particle concentrations were varied, the soft-glassy dynamics was also confirmed. In view of the above experimental evidence, here we propose a microscopic mechanism responsible for the deformation of inorganic nanoparticle interfacial layers under shear. The model is based on a description of the local connectivity and its temporal dynamics, and of the microstructural heterogeneity of the elastic response (giving rise to strongly nonaffine deformations) of particle interfacial assemblies. The mathematical model is in good agreement with the oscillatory shear measurements performed, providing a fundamental connection between the concept of nonaffine deformations, the dynamical rearrangements of the local cage and the onset of plastic flow - all of which can be tuned to a certain extent by means of the colloidal chemistry.
\begin{figure}[h] \centering \includegraphics [width=0.45\textwidth]{cartoon.pdf} \caption{(A) Cage-breaking model: in the absence of shear, the numbers of particles at the interface moving into and out of the cage are equal. In the presence of shear $\gamma$, the number of particles moving out of the cage in the sectors along the local extension axis is higher than in the sectors along the compression axis. (B) Top view of the distribution of nanoparticles at the interface in a situation of dense packing, illustrating the process of yielding and cage breaking.} \label{fig:scheme} \end{figure} \section{Nonaffine elastic deformation} As for 3D systems,~\cite{ZacconePRB2011,ZacconePRB2014} the starting point of the analysis is the free energy of deformation of disordered solids, which can be written as $F(\gamma)=F_{A}(\gamma) - F_{NA}(\gamma)$, with two distinct contributions arising in response to the macroscopic shear deformation $\gamma$. $F_{A}$ is the standard affine deformation energy. Affinity in this case means that every particle at the interface follows exactly the macroscopic shear deformation; the associated displacement of a tagged particle $i$ is given by $\textbf{r}_{i}^{A}=\gamma \textbf{R}_{i}$, where $\textbf{r}_{i}^{A}$ is the affine particle position, while $\textbf{R}_{i}$ is the particle position in the rest frame. The nonaffine contribution $-F_{NA}$ lowers the free energy of deformation because the particles at the interface are not local centers of lattice symmetry, and thus there is an imbalance of forces on every particle in the affine position when the deformation is applied. This is due to the fact that the particle's nearest neighbours also react to the imposed deformation and, in so doing, transmit forces to the tagged particle. Clearly, these forces acting in the affine position can mutually cancel out only if the tagged particle is a center of symmetry of the lattice.
In a disordered lattice, the particle is not a center of symmetry, and there is therefore a net force acting on it in the affine position. This additional net force acting on every particle in the network has to be relaxed through additional (\textit{nonaffine}) motions that happen on top of the affine displacement dictated by the macroscopic strain. The force acting on every particle times its nonaffine displacement then contributes a net work that the system has to do in order to maintain mechanical equilibrium. This work is internal work done by the system, and defines the nonaffine contribution $-F_{NA}(\gamma)$. In earlier work by Zaccone and Scossa-Romano~\cite{ZacconePRB2011} it was shown that, for a disordered assembly of spheres, the resulting shear modulus is given by \begin{eqnarray} G&=&G_{A}-G_{NA}=\frac{1}{30}\frac{N}{V}\kappa R_{0}(z-6)\\ G&=&G_{A}-G_{NA}=\frac{1}{18}\frac{N}{A}\kappa R_{0}(z-4) \end{eqnarray} for $d=3$ and $d=2$, respectively. In general, the scaling $G\sim (z-2d)$ applies for generic $d$-dimensional systems. In the above formulae, $N/V$ and $N/A$ represent the number of particles per unit volume and per unit surface, respectively. $\kappa$ is the spring constant for the nearest-neighbour interaction, defined as the second derivative of the interparticle potential evaluated at the bonding minimum. For hard-sphere systems, a spring constant can still be defined by considering the effective many-body potential from Boltzmann inversion of the radial distribution function (the effective minimum is due to entropic many-body effects). Finally, $R_{0}$ represents the equilibrium distance between nearest neighbours and $z$ denotes the average coordination number. \section{Cage-breaking model} We consider here an interface covered by nanoparticles.
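As a preliminary numerical sanity check of the 2D modulus expression above, note that the rigidity vanishes at the isostatic point $z=2d=4$. The sketch below uses assumed order-of-magnitude parameter values, not fitted quantities.

```python
# Sketch of Eq. (2): G = (1/18)(N/A) kappa R0 (z - 4) for a 2D assembly.
# Number density, spring constant and R0 are assumed, order-of-magnitude
# values for a dense nanoparticle monolayer; they are not the paper's fits.
def shear_modulus_2d(number_density, kappa, R0, z):
    """Interfacial shear modulus in N/m; vanishes at the isostatic point z = 4."""
    return (1.0 / 18.0) * number_density * kappa * R0 * (z - 4.0)

N_over_A = 1.0e15    # particles per m^2 (~30 nm spacing), assumed
kappa = 4.0e-8       # N/m, weak effective spring constant, assumed
R0 = 30e-9           # m, roughly the particle diameter

G_glass = shear_modulus_2d(N_over_A, kappa, R0, z=6.0)      # rigid glassy cage
G_isostatic = shear_modulus_2d(N_over_A, kappa, R0, z=4.0)  # marginal, G = 0
```

For these assumed values the glassy monolayer ($z=6$) has a modulus of order $0.1\,$N/m, while the marginal network ($z=4$) has exactly zero rigidity, as the scaling $G\sim(z-2d)$ requires.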
Let us focus our attention on a single particle in the network (see Fig.~\ref{fig:scheme}A). The presence of a local shear deformation, $\gamma > 0$, around that particle divides the interfacial space into two different kinds of sectors: extension and compression. In the extension sectors, the neighbouring particles are pulled away from the considered particle at the center of the cage. The neighbours cross the boundary marked by the interparticle distance $R_{0}$ in the outward direction and thus no longer contribute to $z$. In the compression sectors, particles are pushed inwards by the local deformation field. This effect, however, is opposed by the excluded-volume interactions between particles. As a result, the shear-induced depletion of mechanical bonds in the extension sectors cannot be compensated by the formation of new bonds in the compression sectors. A simple, general expression for the evolution of the coordination number $z$ due to the thermal motion $k_{B}T$ and the shear-induced distortion of the network, according to the mechanism proposed in Ref.~\cite{ZacconePRB2014}, is obtained as follows. The probability of finding a nearest-neighbour particle $j$ around a tagged particle $i$ at a radial distance $r$ and time $t$ is given by the van Hove space-time correlation function~\cite{Hansen}: \begin{equation} G(r,t)=\frac{1}{N}\langle \sum_{i=1}^{N}\sum_{j=1}^{N} \delta(\textbf{r}+\textbf{r}_{j}(0)-\textbf{r}_{i}(t))\rangle \end{equation} which gives the probability that two particles $i$ and $j$ are a distance $r$ apart at time $t$, under the constraint that one of them was at the origin at $t=0$. The van Hove correlation function can be split into two contributions, the self-part $G_{s}(r,t)$ and the distinct part $G_{d}(r,t)$. The self-part represents the motion of the particle which was initially at the origin, whereas the distinct part represents the motion of the second particle relative to the first.
The Fourier transform of the van Hove correlation function gives the intermediate scattering function, which is an experimentally accessible quantity (e.g. in light and neutron scattering experiments): \begin{equation} F(q,t)=\int d^{3} \textbf{r} ~G(r,t)\exp(-i \textbf{q} \cdot \textbf{r}) \end{equation} Clearly, the intermediate scattering function can also be split into a self and a distinct part, $F(q,t)=F_{s}(q,t)+F_{d}(q,t)$, which are the spatial Fourier transforms of $G_{s}(r,t)$ and of $G_{d}(r,t)$, respectively~\cite{Schmidt}. At $t=0$ the van Hove correlation function reduces to the static particle-particle autocorrelation function: \begin{equation} G(r,0)=\delta(\textbf{r})+\rho g(r) \end{equation} where the Dirac delta function comes from the self-part, while $g(r)$ is the standard radial distribution function coming from $G_{d}(r,t)$. Hence, $G_{d}(r,0)=\rho g(r)$. Here, $\rho=N/V$ in $d=3$ or $N/A$ in $d=2$. The static average number of nearest neighbours $Z_{0}\equiv z_{0}$ is defined in terms of the $g(r)$, as is well known, by the following relation \begin{equation} z_{0}=4\pi\rho\int_{0}^{R_{c}} g(r) r^{2} dr = 4\pi\int_{0}^{R_{c}} G_{d}(r,0) r^{2} dr \end{equation} where $R_{c}$ is a cut-off that is often set equal to the first minimum of the amorphous $g(r)$. With this choice, $z_{0}\simeq 12$ for liquids and glasses of spherical particles in $d=3$, and $z_{0}\simeq 6$ in $d=2$ (where the corresponding expression is $z_{0}=2\pi\rho\int_{0}^{R_{c}} g(r)\, r\, dr$). The definition of the average number of nearest neighbours can be extended to the dynamic case by replacing the static distribution function $\rho g(r)=G_{d}(r,0)$ with the time-dependent one, $G_{d}(r,t)$, \begin{equation} z(t)=4\pi\int_{0}^{R_{c}} G_{d}(r,t) r^{2} dr \end{equation} provided that a nearest-neighbour peak is identifiable also in the space-dependent part of $G_{d}(r,t)$. At this point, in order to determine the cage dynamics, it is necessary to resort to theories of many-particle dynamics in dense liquids and glasses.
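The static definition of $z_{0}$ above can be illustrated numerically with a toy 3D $g(r)$ consisting of a single Gaussian first-neighbour peak; the peak parameters below are arbitrary assumptions, tuned only so that the integral reproduces the quoted $z_{0}\simeq 12$.

```python
import math

# Toy numerical evaluation of z0 = 4*pi*rho * Int_0^Rc g(r) r^2 dr (3D case).
# g(r) is modelled as a single Gaussian first-neighbour peak at r = R0;
# all values are in reduced units and are illustrative assumptions.
def g_toy(r, R0=1.0, width=0.05, height=9.5):
    return height * math.exp(-((r - R0) ** 2) / (2.0 * width ** 2))

rho = 0.8        # reduced number density, assumed
Rc = 1.3         # cut-off at the first minimum of g(r)
n = 2000         # midpoint-rule integration steps
dr = Rc / n
z0 = 4.0 * math.pi * rho * sum(
    g_toy((i + 0.5) * dr) * ((i + 0.5) * dr) ** 2 * dr for i in range(n)
)
```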
Mode-coupling theory (MCT) provides such a theory for the intermediate scattering function $F(q,t)$. In practice, MCT derives an equation of motion for $F(q,t)$ which is formally analogous to a generalized Langevin equation, with a memory kernel that provides a feedback mechanism slowing down the correlated particle motion~\cite{Goetze}. The final result shows that the time decay of the intermediate scattering function at long times is dominated by the self-part and features a stretched-exponential decay~\cite{Goetze,Hansen}: \begin{equation} F(q,t)\sim\exp[-(t/\tau_{c})^{\beta}] \end{equation} where $\tau_{c}$ is the $\alpha$-relaxation time, associated with substantial restructuring of the glassy cage, and the stretching exponent $\beta$ is typically in the range $0.5-0.6$~\cite{Goetze,Hansen}. In the following, we find that an excellent fit of the experimental data is obtained with $\beta=0.55$, which falls well within the range reported in the literature for glassy systems. Using the Vineyard approximation~\cite{Vineyard}, \begin{equation} F(q,t)\simeq S(q)F_{s}(q,t) \end{equation} it follows that the distinct part has the same time dependence as the self-part and the total $F(q,t)$: \begin{equation} F_{d}(q,t)\simeq [S(q)-1] F_{s}(q,t)\simeq \frac{[S(q)-1]}{S(q)} F(q,t). \end{equation} Here $S(q)$ is the static structure factor, i.e. the spatial Fourier transform of $g(r)$. Hence, assuming that $F(q,t)\sim\exp[-(t/\tau_{c})^{\beta}]$, we then have \begin{equation} G_{d}(r,t)\simeq \int d^{3} \textbf{q} ~\frac{[S(q)-1]}{S(q)} F(q,t)\exp(+i \textbf{q} \cdot \textbf{r}).
\end{equation} Since the inverse Fourier transform over space leaves the time dependence unaltered, the time dependence of $G_{d}(r,t)$ follows as: \begin{equation} G_{d}(r,t)\sim \exp[-(t/\tau_{c})^{\beta}] \end{equation} and therefore the integration over the first peak of $G_{d}(r,t)$ also leaves the following dependence for the dynamic mean nearest-neighbour number: \begin{equation} z(t) \sim \exp[-(t/\tau_{c})^{\beta}]. \end{equation} In a liquid, the mean number of nearest neighbours is basically constant with time: if, within a time interval $\tau$, some neighbours leave the first coordination shell, as many other particles (originally not in the first coordination shell) replace them. In a glass the situation is similar, although the time scale on which neighbours leave the cage is much longer. If a glassy state is put under shear, the situation becomes quite different. The space around a tagged particle can be subdivided into four quadrants (see also Fig.~\ref{fig:scheme}). Two of them are extensional quadrants: here, particles that leave the cage are very unlikely to return, and other particles are very unlikely to take their places, because the field pushes them outward, away from the tagged particle at the center of the frame. The other two quadrants are instead compressional: in these quadrants, particles originally in the cage are very unlikely to leave it, and if they attempt to do so they are pushed back inward by the field, towards the particle at the center of the cage. Hence, in a glassy state under shear there is a net loss of nearest neighbours in the extensional quadrants of the shearing field, while the number of nearest neighbours in the compressional quadrants is expected to remain basically constant. For small particles, such that the Peclet number is very small, $Pe\ll 1$, one can assume that the escape from the cage is controlled by thermal motion rather than by shear convection.
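The stretched-exponential decorrelation invoked above can be made concrete with a short numerical sketch (the relaxation time below is an assumed value): with $\beta=0.55$ the cage retains far more memory at long times than a simple exponential with the same $\tau_{c}$ would.

```python
import math

# Stretched-exponential (KWW) decay, F(q,t) ~ exp[-(t/tau_c)^beta].
# tau_c below is an assumed illustrative value.
def kww(t, tau_c, beta):
    return math.exp(-((t / tau_c) ** beta))

tau_c = 100.0                                  # s, assumed cage relaxation time
F_stretched = kww(10.0 * tau_c, tau_c, 0.55)   # glassy case, beta = 0.55
F_simple = kww(10.0 * tau_c, tau_c, 1.0)       # plain exponential reference
```

At $t=10\,\tau_{c}$ the stretched exponential has decayed to a few percent, whereas the simple exponential is already smaller by orders of magnitude.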
For our experiments this is certainly the case; however, for larger colloidal particles the escape mechanism could be shear-driven and convective, which would lead to larger values of $\beta$, more typical of driven systems. For example, in Ref.~\cite{Laurati}, where larger colloids were used, an exponent $\beta=2$ was found, which is indicative of convective dynamics. These considerations can be combined with the above result for the time dependence of $z$ to build a model of cage deformation and breaking. We can assume that the mean number of nearest neighbours in the stable glassy assembly prior to shearing is $6$, i.e. the value in 2D that one would obtain upon integrating $g(r)$ up to its first minimum. Furthermore, based on Fig.~\ref{fig:scheme}, we can assume that particles leave the cage in the extensional direction, and that only in the compressional direction does the number of nearest neighbours remain approximately constant. Upon mapping time onto strain $\gamma$ for a linear increase of strain amplitude at constant rate $\dot{\gamma}$, i.e. $\gamma = \dot{\gamma} t$, these considerations imply the following limits: $z(\gamma=0)\equiv z_{0}=6$ and $z(\gamma\rightarrow \infty)=3$. In practice this means that in the limit of infinite strain (steady-state flow), only the particles in the two compressional quadrants are still next to the tagged particle at the center of the frame, as they are continuously pushed back by the flow, according to a mechanism already seen in simulations of the flow of hard-sphere colloids~\cite{Brady}. A function which has the time dependence given by Eq. (13) and complies with the limits imposed by the shearing geometry is the following \begin{equation} z(\gamma)= \frac{z_{0}}{2} [1 + e^{-(A \gamma)^{\beta}}], \label{Eq:nb} \end{equation} with $A = \Delta / k_{B}T + 1/ \dot{\gamma}\tau_{c}$, and we recall that $\gamma = \dot{\gamma} t$.
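Eq.~\ref{Eq:nb} can be transcribed directly and checked against the two limits just stated; the value of the prefactor $A$ below is an illustrative assumption.

```python
import math

# Direct transcription of Eq. (14): z(gamma) = (z0/2) [1 + exp(-(A*gamma)^beta)].
# A lumps Delta/(kB*T) and 1/(rate*tau_c); the value below is an assumption.
def z_of_gamma(gamma, z0=6.0, A=8.0, beta=0.55):
    return 0.5 * z0 * (1.0 + math.exp(-((A * gamma) ** beta)))

z_rest = z_of_gamma(0.0)    # intact 2D cage: z = 6
z_flow = z_of_gamma(1e6)    # steady flow: only compressional neighbours, z -> 3
```

The function decreases monotonically between the two limits, reproducing the net, shear-induced loss of mechanical bonds in the extensional quadrants.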
Here $\Delta$ represents an energy barrier for the shear-induced breaking of the cage, which in glassy systems may also be related to the glass transition temperature $T_{g}$; $\dot{\gamma}$ is the strain rate and $\tau_{c}$ is a cage relaxation time. \section{Nonlinear stress-strain relation} The nonaffine free energy of deformation that takes into account the loss of nearest neighbours due to the cage-breaking effect is written such that its second derivative gives the local shear modulus $G(\gamma)$; hence~\cite{ZacconePRB2014}: \begin{equation} F_{el} = \frac{1}{2}K[z(\gamma) -z_{c}] \gamma^{2} \end{equation} where $K \equiv(1/18)(N/A)\kappa R_{0}$ and $z_{c}=4$ for a 2D assembly. One could more formally write this nonlinear free energy of deformation using neo-Hookean models~\cite{Ogden}, but this would not change the final result or the equations that we derive in the following. Upon inserting Eq. (14) into Eq. (15) and taking the first derivative of the free energy of deformation with respect to strain, we obtain the nonlinear stress-strain relationship for the 2D particle assembly: \begin{equation} \begin{split} \sigma_{el} = \frac{\partial F_{el}}{\partial \gamma} = K \gamma \bigg\{2 \bigg[2 + \exp\{- [\gamma(\tilde{\Delta}+\frac{1}{\dot{\gamma}\tau_{c}})]^{\beta}\} \bigg] -z_{c} \bigg\} \\ -\bigg\{\frac{1}{2} K \bigg(\tilde{\Delta}+\frac{1}{\dot{\gamma}\tau_{c}} \bigg) \exp\{- [\gamma(\tilde{\Delta}+\frac{1}{\dot{\gamma}\tau_{c}})]^{\beta}\} \gamma^{2} \bigg\} \bigg\{\gamma [\tilde{\Delta}+\frac{1}{\dot{\gamma}\tau_{c}}]^{\beta} \bigg\}^{-1}, \end{split} \label{Eq:sigma_elas} \end{equation} where $\tilde{\Delta}=\Delta / k_{B}T$ is the dimensionless energy associated with cage restructuring (which has been related to the glass transition temperature in previous work, $\tilde{\Delta}=\Delta / k_{B}T = T_{g}/T$~\cite{ZacconePRB2014}).
The prefactor $K$ of the first term also contains the dependence of the stress-strain curve on the particle size, because $R_{0}$ is approximately equal to the particle diameter for a dense assembly. For a surface packing at fixed packing fraction $\phi=\pi(R_{0}/2)^{2} N/A$, the expression for $K$ becomes $K=(2/9\pi)\kappa \phi/R_{0}$. Hence, as expected for a 2D solid~\cite{ZacconePRB2011}, the initial slope of the linear elastic regime (hence the shear modulus) of the stress-strain relation decreases upon increasing the particle size as $K \sim R_{0}^{-1}$. In 3D we would have $K \sim R_{0}^{-2}$ and, in general, $K \sim R_{0}^{1-d}$ in a generic space dimension $d$. Furthermore, $K$ also appears in the first bracket of the second, negative term in Eq. (16), but it can be collected as a common factor in front of all brackets and hence does not affect the position of the yielding point (which is controlled by the expressions inside the brackets and by the competition between the positive and negative terms therein). Since the first bracket in Eq. (16) is the one which controls the linear elastic regime, the elastic rigidity is therefore inversely proportional to $R_{0}$. Besides the elastic contribution, we also need to consider the dissipative contribution, $\sigma_{v}$, to the total stress. It is known that for deformations that are not quasi-static, i.e. with $\dot{\gamma} > 0$, microscopic friction induces a resistance to the particle displacements. This friction is associated with a viscosity $\eta$ and a viscous relaxation time $\tau_{v}$ according to the Maxwellian viscoelastic model~\cite{Hansen}. The viscous stress is then defined as \begin{equation} \sigma_{v} = \dot{\gamma} \eta\bigg[1 - \exp\bigg(-\frac{\gamma}{\dot{\gamma}\tau_{v}}\bigg)\bigg].
\label{Eq:sigma_vis} \end{equation} Finally, the total stress is the sum of the elastic (Eq.~\ref{Eq:sigma_elas}) and viscous (Eq.~\ref{Eq:sigma_vis}) stress contributions~\cite{LandauLifshitz} \begin{equation} \sigma = \sigma_{el}(\gamma) + \sigma_{v}(\gamma), \label{Eq:sigma} \end{equation} where $\sigma_{el}(\gamma)$ is given by Eq. (16) while $\sigma_{v}(\gamma)$ is given by Eq. (17). This equation contains all the relevant particle-level physics: the interparticle potential (contained in $K$), the nonaffine displacements (associated with $z_{c}$ and contained in Eq. (16)), the shear-induced changes in the local particle network $z(\gamma)$, the thermally activated cage distortion, and the viscous dissipation due to microscopic friction. The equation recovers the elastic limit at small strain, where $\sigma \approx K (z_{0}-z_c) \gamma$, and the plastic flow $\sigma \rightarrow \eta \dot{\gamma}$ in the limit $\gamma \gg 1$. \section{Comparison} The theory has been tested on interfacial layers consisting of the model system introduced above: hydrophilic colloidal silica nanoparticles in combination with oppositely charged surfactants adsorbed at air/water interfaces. In earlier work, Maestro \textit{et al.} demonstrated that the strength of the interaction between neighbouring nanoparticles and, therefore, their interfacial network strongly depend on the concentration of surfactant.~\cite{Maestro_Langmuir2015} In principle, the interfacial assembly of silica nanoparticles ($15\,$nm radius) is dominated by attractive van der Waals forces and the electrostatic double-layer repulsive force between the particles (because silica nanoparticles have dissociable silanol groups on their surface), according to DLVO theory.~\cite{DLVO1, DLVO2} In addition, a short-range attraction between the nanoparticles, induced by the surfactant molecules anchored at the particle surface, is present.
This is known as a hydrophobic interaction between the hydrocarbon tails of the CTAB molecules, which likely dominates at short distances (of the order of the surfactant chain length)~\cite{Israelachvili2011}. As a result, the number density of nanoparticles at the interface increases with the amount of surfactant used. The oscillatory interfacial rheology measurements performed in~\cite{Maestro_Langmuir2015} showed the existence of solid-like behavior below a yield point for all samples with a surfactant concentration below the CMC. This network of interconnected CTAB-silica complexes can thus be rationalised as an attractive glass, with the yield stress scaling with the range of attraction (i.e., with the surfactant concentration). In particular, the shear stress $\sigma$ of the interfacial film at constant frequency $\omega$ (0.628$\,$rad/s) was measured in~\cite{Maestro_Langmuir2015} varying the strain amplitude $\gamma$. Fig.~\ref{fig:results} shows the strain-sweep experiments performed in~\cite{Maestro_Langmuir2015} for samples at increasing CTAB concentration $C_{s}$. In all the samples studied, the stress $\sigma$ grows linearly with $\gamma$ at low values of $\gamma$, below a certain threshold known as the limit of linearity $\gamma_{0}$. This marks the end of the linear regime, with $\sigma \propto \gamma$. Beyond this regime, there is a non-linear regime where $\sigma$ increases sub-linearly until the local stress reaches its maximum $\sigma_{y}$ at the yield point $\gamma_{y}$. Beyond this point, the observed behaviour depends strongly on $C_{s}$. At low $C_{s}$, the yield point is followed by a plastic-flow regime characterised by a practically constant plateau stress; i.e., the sample is shear-melted, with $\sigma \propto \dot{\gamma} \sim \omega \gamma$. Upon increasing $C_{s}$, beyond $\gamma_{y}$ the stress progressively falls with the strain.
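The sequence of regimes just described (linear growth, sub-linear softening, yield point, decay) is reproduced by a short numerical sketch of the model: rather than transcribing the lengthy Eq.~\ref{Eq:sigma_elas}, the elastic stress is obtained here by finite-differencing the free energy of Eq. (15) with $z(\gamma)$ from Eq.~\ref{Eq:nb}, and the viscous stress follows Eq.~\ref{Eq:sigma_vis}. All parameter values are illustrative assumptions, not the fitted values of Table~\ref{tab:results}.

```python
import math

# Sketch of the total stress of Eq. (18), sigma = sigma_el + sigma_v.
# sigma_el is computed by numerically differentiating the free energy
# F_el = (1/2) K [z(gamma) - z_c] gamma^2 (Eq. 15), with z(gamma) from Eq. (14);
# sigma_v follows Eq. (17). All parameter values are illustrative assumptions.
K, z0, z_c, beta = 0.05, 6.0, 4.0, 0.55   # N/m and dimensionless
A = 8.0                                   # lumps Delta/(kB*T) and 1/(rate*tau_c)
eta, rate, tau_v = 0.05, 0.0628, 0.01     # surface viscosity (N s/m), 1/s, s

def z_of_gamma(g):
    return 0.5 * z0 * (1.0 + math.exp(-((A * g) ** beta)))

def F_el(g):
    return 0.5 * K * (z_of_gamma(g) - z_c) * g * g

def sigma_total(g, dg=1e-6):
    sigma_el = (F_el(g + dg) - F_el(g - dg)) / (2.0 * dg)  # d F_el / d gamma
    sigma_v = rate * eta * (1.0 - math.exp(-g / (rate * tau_v)))
    return sigma_el + sigma_v

gammas = [0.001 * 1.06 ** i for i in range(150)]  # log-spaced strain amplitudes
stresses = [sigma_total(g) for g in gammas]
i_peak = max(range(len(stresses)), key=stresses.__getitem__)
# the maximum sits at an interior strain: a stress overshoot
```

For these assumed values the computed curve grows linearly at small $\gamma$, passes through an interior maximum and then decays, mirroring the qualitative shape of the measured strain sweeps.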
In general, the stress-strain relation shows an overshoot, with a maximum in the stress beyond which the system yields to viscous Newtonian flow in the large-strain limit, $\gamma \gg 1$. This behavior, illustrated in Fig.~\ref{fig:results}, can be qualitatively rationalised as a weakly attractive glass that exhibits one-step yielding at increasing oscillatory strain amplitude. The stress overshoot is a hallmark of the rheology of glassy materials, and is observed for example in colloidal glasses~\cite{Poon_yielding, Ballauff, Harbottle_yielding_2016, attractive_glass} as well as in metallic glasses.~\cite{Johnson,ZacconePRB2014} This behavior, which is well known in 3D glasses, was indeed found in other nanoparticle monolayers consisting of partially hydrophobic silica particles of larger size ($85\,$nm radius) in the absence of surfactant. In this case, the partial hydrophobicity of the particles is obtained by replacing the silanol with silane groups at the surface of the particles. These data, obtained from~\cite{Zang_2010}, have also been plotted as an inset in Fig.~\ref{fig:results} to be compared with the proposed theory. \begin{figure}[h] \centering \includegraphics [width=0.45\textwidth]{Fig2_theory_v3b.pdf} \caption{Comparison between the theoretical expression and the experimental results obtained by oscillatory interfacial shear rheometry from~\cite{Maestro_Langmuir2015}. The interfacial shear stress $\sigma$ is plotted versus the strain amplitude $\gamma$ at a constant frequency $\omega=0.628\,$rad/s for different concentrations of surfactant $C_{s}$ (expressed in mM). Eq.~\ref{Eq:sigma} is used with the viscosity $\eta$ and the strain rate $\dot{\gamma}$ fixed by the experiments. The values $\tilde{\Delta}=6$, $\beta=0.55$ and $z_{0}=6$ have been chosen to represent a disordered, amorphous solid network at the interface.
Inset: Comparison between the theory (Eq.~\ref{Eq:sigma}) and the experimental $\sigma - \gamma$ curve for partially hydrophobic silica nanoparticle monolayers ($85\,$nm radius) obtained from~\cite{Zang_2010}. In this case, $\eta$ and $\dot{\gamma}$ were also fixed by the experiments, and the values of $\beta$, $\tilde{\Delta}$ and $z_{0}$ are the same as in the main figure. } \label{fig:results} \end{figure} We compare here this nonlinear stress-strain behaviour found experimentally in 2D nanoparticle systems with a description based on the coupling between the many-body dynamics causing structural rearrangements of the glassy cage and the nonaffine response to deformation. The resulting fully analytical expression for the stress-strain curve (Eq.~\ref{Eq:sigma}, together with Eqs.~\ref{Eq:sigma_elas} and \ref{Eq:sigma_vis}) is in good agreement with the experiments, being able to reproduce the limit of linearity, the strain softening, the yielding point and the stress overshoot upon increasing $C_{s}$, as can be seen in Fig.~\ref{fig:results}. There is remarkable agreement between the theoretical expression (Eq.~\ref{Eq:sigma}) and the experimental data across a broad range of parameters. The values of the viscosity $\eta$ and the strain rate $\dot{\gamma}$ are fixed by the experimental values obtained from Ref.~\cite{Maestro_Langmuir2015} and shown in Table~\ref{tab:results}. We also fixed $\tilde{\Delta}=\frac{T_{g}}{T} = 6$, corresponding to about $1\,k_{B}T$ of attraction per nearest neighbour, as expected for weak attraction, and $\beta=0.55$ for the stretched exponential of the $\alpha$-relaxation~\cite{Goetze}, a value typical of glassy systems~\cite{Goetze,Hansen}.
The only parameters in Eq.~\ref{Eq:sigma} that have not been fixed in fitting the experimental data are the 'dressed' spring-constant parameter $K$ (related to the interparticle potential via the spring constant $\kappa$, as mentioned earlier) and the two relevant relaxation times, $\tau_{c}$ for the cage structural rearrangement and $\tau_{v}$ for the macroscopic viscous relaxation. Furthermore, to show the universality of the proposed theoretical model, we also compare it with the stress-strain behavior of the larger, partially hydrophobic nanoparticles adsorbed at the air/water interface in the inset of Fig.~\ref{fig:results}. In this case there is also a good description of the experimental $\sigma-\gamma$ data by Eq.~\ref{Eq:sigma}, using the values of $\beta$, $\tilde{\Delta}$, $z_0$ and $z_c$ described previously for glassy systems of spherical building blocks with central-force interactions. The values obtained from the fit, $K=0.06\,$N/m, $\tau_{c} = 152\,$s and $\tau_{v}=5.6\times10^{-3}\,$s, are in good agreement with those corresponding to the smaller, surfactant-decorated silica nanoparticles shown in Table 1. In detail, the spring constant $K$ is of the same order as that of the CTAB-decorated silica nanoparticles with $0.1-0.08\,$mM CTAB, as expected for a similar packing fraction of the particles at the interface. The value of $\tau_{c}$ is larger than those shown in Table 1, which is physically meaningful because the increase in particle size increases the Brownian relaxation time and hence $\tau_{c}$ as well.
\begin{figure}[h] \centering \includegraphics [width=0.6\textwidth]{Fig_k.pdf} \caption{Dependence of the fitting parameters on the concentration of surfactant used to increase the hydrophobicity of the nanoparticles and, therefore, the packing density at the interface.} \label{fig:fit} \end{figure} \section{Effect of surfactant on the stress overshoot} Looking at the experimental data in Fig.~\ref{fig:results}, one can see that the amplitude of the overshoot, represented by $\sigma_{y}$, and also the yield strain $\gamma_{y}$ depend on the surfactant concentration $C_{s}$. In general terms, the model explains the existence and the amplitude of the overshoot on the basis of the competition between the elastic instability driven by nonaffine cage breakup and the build-up of viscous stress. When the elastic instability sets in, it causes the stress to go through a maximum value $\sigma_{y}$ and to subsequently decrease with further increasing strain, whereas the viscous contribution $\sigma_{v}$ increases monotonically up to the final Newtonian-like viscous plateau where $\sigma \sim \dot{\gamma}\eta$. Fig.~\ref{fig:fit} (and Table~\ref{tab:results}) shows the dependence of the parameters $K$, $\tau_{c}$ and $\tau_{v}$ on the concentration of surfactant $C_{s}$ and, therefore, on the particle density at the interface $\phi=f(C_{s},\,C_{p})$ -- the concentration of particles $C_{p}$ being fixed in all cases to $1\,$wt.\%. Remarkably, the prefactor $K$ of the elastic free energy, defined as $K=(2/9\pi)\kappa\phi/R_{0}$ as discussed above, increases with $C_{s}$. This is a reasonable outcome because the increase of surfactant brings about an increase of the packing fraction $\phi$ of the particles at the interface~\cite{Maestro_Langmuir2015} and, at the same time, an increase of the spring constant $\kappa$. The latter is defined as the second derivative of the total interaction energy between two particles evaluated at the attractive minimum.
This quantity is expected to increase as a result of the increased attractive force due to hydrophobicity, and because the attractive minimum becomes narrower as the nearest-neighbour distance $R_{0}$ decreases when more surfactant is added to the system. Upon increasing $C_{s}$, the surfactant-particle complexes progressively create a denser and stronger particle cage, resulting in a strengthening of the glassy network in which more energy is needed for the particles to escape from the cage; this is primarily related to $\sigma_{y}$. $\tau_{c}$ represents the cage relaxation time, and it slightly increases with $C_{s}$. This means that the cage dynamics becomes slower upon strengthening of the particle network. Finally, we rationalise the increase of $\tau_{v}$ as due to the increase of the microscopic friction between the particles. This friction increases as the nearest-neighbour distance decreases upon increasing the surfactant concentration. In particular, if we visualize the nanoparticles as surrounded by a shell of vertically oriented surfactant molecules, it is clear that the friction must increase markedly when the particles approach the distance of close contact between the respective surfactant layers. This point and its consequences for the rheology are explored in the section below. \section{Nonmonotonic dependence of yield strain on surfactant concentration} As is clear from Fig.~\ref{fig:results}, there is a non-monotonic dependence of the yield-strain amplitude $\gamma_{y}$ (\textit{i.e.}, the strain evaluated at the maximum of the stress-strain overshoot) on the surfactant concentration $C_{s}$. In particular, $\gamma_{y}$ decreases with surfactant concentration upon going from $C_{s}=0.08$ to $C_{s}=0.1\,$mM, after which it monotonically increases upon further increasing $C_{s}$.
This non-monotonic behaviour would be impossible to explain without a model, but thanks to the theoretical fitting shown above, we can provide a possible physical explanation for this effect. As is shown in Fig.~\ref{fig:fit}, the time-scale associated with viscous friction, $\tau_{v}$, increases markedly upon going from $C_{s}=0.08$ to $C_{s}=0.1\,$mM, after which it is practically constant. As discussed above, the friction time scale $\tau_{v}$ is due to the local frictional interaction between layers of surfactant on nearest-neighbour particles. If the particles are sufficiently far apart, there is little interaction between the layers and the viscous time $\tau_{v}$ is correspondingly low. As soon as the particles come closer to each other and the layers start to interact, the friction time scale $\tau_{v}$ is expected to increase significantly. Beyond this point, the particles are unlikely to come any closer upon further increasing $C_{s}$, owing to strong steric repulsion. Hence, we can speculate that, upon going from $C_{s}=0.08$ to $C_{s}=0.1\,$mM, the nearest-neighbour particles become close enough that the respective surfactant layers start to interact, which generates viscous friction in the relative motion between the particles. This would explain the jump in $\tau_{v}$ in Fig.~\ref{fig:fit} upon going from $C_{s}=0.08$ to $C_{s}=0.1\,$mM. Upon increasing $C_s$ further above $C_{s}=0.1\,$mM, the distance between nearest neighbours cannot decrease further because of the strong steric repulsion between surfactant chains protruding from the layers. Hence, the frictional time $\tau_{v}$ increases much less in this regime and saturates to a plateau. With reference to the non-monotonic dependence of $\gamma_{y}$ on $C_{s}$, the sharp increase of $\tau_{v}$ implies that the build-up of dissipative stress $\sigma_{v}$, according to Eq.~(17), becomes much slower with increasing $\gamma$.
Hence the overshoot must happen at a lower strain, because the drop of $\sigma_{el}$ (a function featuring a maximum that shifts towards larger $\gamma$ upon increasing $K$) is not compensated by a sufficiently fast increase of $\sigma_{v}$, as one goes from $C_{s}=0.08$ to $C_{s}=0.1\,$mM. Therefore, this consideration suggests that the drop of $\gamma_{y}$ upon going from $C_{s}=0.08\,$mM to $C_{s}=0.1\,$mM can be due to the fact that, whilst the elastic stress increases (at a given $\gamma$) due to the increase of $K$, the stress overshoot happens ``earlier'' (at a lower $\gamma$) because the viscous stress does not catch up fast enough to compensate the drop of $\sigma_{el}$: the larger value of $\tau_v$ makes the increase of $\sigma_v$ with $\gamma$ much slower. Finally, at $C_s > 0.1\,$mM, $\tau_{v}$ is essentially constant, while $K$ keeps increasing. This means that the drop of elastic stress will occur at increasingly higher strain (because the maximum in $\sigma_{el}$ gets shifted to larger $\gamma$ upon increasing $K$), which is reflected in $\gamma_{y}$ increasing monotonically with $C_{s}$ in this regime. \begin{table}[h] \small \centering \caption{Relevant parameters in Eq.~\ref{Eq:sigma}. The effective viscosity of the layer $\eta$ and the strain rate $\dot{\gamma}$ are fixed by the experimental values obtained from Ref.~\cite{Maestro_Langmuir2015}. The values of the elastic constant $K$ (related to the interparticle potential) and the two relevant relaxation times, $\tau_{c}$ for the cage rearrangement and $\tau_{v}$ for the macroscopic viscous relaxation, have been obtained from the fitting of the experimental data by Eq.~\ref{Eq:sigma}.
\label{tab:results}} \begin{tabular}{l*{5}{c}} CTAB (mM) & K (N/m) & $\tau_{c}$ (s) & $\tau_{v}$ (s) & $\eta$ (Ns/m) & $\dot{\gamma}$ (s$^{-1}$) \\ \hline 0.5 & 1.17 & 61 & 1$\times10^{-2}$ & 4.9$\times10^{-3}$ & 0.82 \\ 0.2 & 0.32 & 58 & 5$\times10^{-3}$ & 5.8$\times10^{-3}$ & 0.28 \\ 0.1 & 0.083 & 51 & 5$\times10^{-3}$ & 3.0$\times10^{-3}$ & 0.15 \\ 0.08 & 0.015 & 52 & 2$\times10^{-3}$ & 3.2$\times10^{-3}$ & 0.12 \\ \end{tabular} \end{table} \section{Conclusions} We have proposed a microscopic mechanism that explains the deformation of surfactant-decorated silica nanoparticle interfacial layers under shear. Silica particles trapped at the air/water interface form a 2D amorphous solid with features of a colloidal glass. The proposed theoretical model describes how this solid-like system deforms under a shear strain ramp, up to and beyond a yielding point that leads to plastic flow. In detail, the model describes nanoparticle interfacial assemblies in terms of the local connectivity between particles and its temporal dynamics, and in terms of the microstructural heterogeneity of the elastic response, which gives rise to strongly nonaffine deformations. The model is able to reproduce experimental data from oscillatory shear measurements with only two non-trivial fitting parameters: the relaxation time of the cage and the viscous relaxation time. The interparticle spring constant contains information about the strength of interparticle bonding, which is tuned by the amount of surfactant that renders the particles hydrophobic and mutually attractive. This model, therefore, shows --for the first time in interfacial systems-- a fundamental connection between the concept of nonaffine deformations, the dynamical rearrangements of the local cage and the onset of plastic flow --all of which can be controlled to a certain extent by the surfactant and particle concentration--.
Finally, this framework opens up the possibility of quantitatively tuning and rationally designing the mechanical response of colloidal assemblies at the air-water interface, as demonstrated by the agreement of the theoretical model with the different nanoparticle systems compared in this study. We are now extending our theoretical description to nanoparticle systems at fluid interfaces, taking into account not only the number density but also particle size effects (from the nm to the $\mu$m range), particle shape (by studying different geometries such as cylindrical, ellipsoidal and 'Janus' particles) and surface roughness. \section*{Acknowledgments} We are grateful to D. Langevin, P. Cicuta and Thomas Voigtmann for very useful discussions. AM acknowledges funding from a Royal Society Newton International Fellowship. We are grateful to D. Langevin and D. Zang for sharing their interfacial rheological data on silica nanoparticles. \section*{Conflict of interest} There are no conflicts to declare. \balance
\section{Introduction}\label{intro} The study of global properties of cosmological spacetimes is a fundamental problem in mathematical relativity, as it provides a first step toward understanding central issues such as the structure of singularities and the cosmic censorship conjecture. Such a study can be reduced to investigating the global existence and asymptotic behavior of solutions to the Einstein equations, possibly coupled to the equations of motion for a specific matter model. In the present paper, we treat the class of perfect fluids whose pressure $p$ and mass-energy density $\mu \geq 0$ coincide. This is the limiting case ($\gamma=2$) of the class of pressure laws $p=(\gamma-1) \, \mu$, in which $\gamma \in [1,2]$ is referred to as the adiabatic exponent of the fluid. Our main result concerns the initial-value problem for the associated Einstein equations: we establish a global existence result and rigorously determine the late-time asymptotic behavior of solutions. This allows us to conclude that the spacetime is future geodesically complete and approaches the de~Sitter spacetime, whereas the matter asymptotically disperses. Observe first that singularities generically arise in initially smooth solutions to the fluid equations, that is, shock waves in the general case $\gamma\in (1,2]$ and shell-crossing singularities in the case $\gamma=1$. This is true even when gravitational effects are taken into account \cite{RS}. If the solution is to be continued beyond shock waves, it is necessary to lower the regularity of initial data and search for weak solutions, as investigated by LeFloch and co-authors (cf.~the review \cite{LeFloch} and the references therein). On the other hand, existence of {\sl smooth} solutions even in a long-time evolution can sometimes be established in physically interesting situations. This is especially true when a cosmological constant is included, as we do in the present paper.
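It is worth recalling why $\gamma=2$ is the stiff limit. For the pressure law $p=(\gamma-1)\,\mu$, the sound speed is
$$
c_{s}=\sqrt{dp/d\mu}=\sqrt{\gamma-1},
$$
in units in which the speed of light equals $1$. Hence $c_{s}$ grows from $0$ for dust ($\gamma=1$) up to the speed of light at $\gamma=2$, in which case the sound cones of the fluid coincide with the light cones of the spacetime.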
Global-in-time solutions and the existence of future geodesically complete spacetimes can be established under a smallness condition on the initial data, as recognized by Tchapnda \cite{tchapnda} for $\gamma=1$ under the assumption of plane symmetry and, later, without symmetry and for $\gamma\in(1,4/3)$, by Rodnianski and Speck \cite{RoS} and Speck \cite{speck,speck2}. As far as the limiting case $\gamma=2$ is concerned, plane symmetric spacetimes have been investigated by Tabensky and Taub \cite{TT} and LeFloch and Stewart \cite{LS}. In particular, the analysis in \cite{TT} relies on two different coordinate systems: a comoving coordinate system in which the fluid is at rest, and a characteristic coordinate system. On the other hand, the work \cite{LeFloch,LS} introduced the notion of weakly regular solutions to the Einstein equations. In the present paper, we rely on areal coordinates, a coordinate system in which the time is defined to be the area-radius function determined by the surfaces of symmetry. In these geometry-based coordinates, we prove a global-in-time existence theorem (in the future direction) for plane-symmetric solutions to the Einstein-stiff fluid equations with cosmological constant. Importantly, we also derive the leading asymptotic behavior of solutions and conclude with the future geodesic completeness of the constructed spacetime. Our analysis relies on a change of fluid variables that allows us to write the fluid equations in a way analogous to the case of a massless scalar field, and then to take advantage of techniques for semi-linear hyperbolic equations. (A similar structure was observed in \cite{TNR}.) A specific technical difficulty overcome in this work originates in the fact that solutions may naturally contain vacuum states as well as velocities approaching the speed of light, both possibilities leading to singular behavior in the evolution equations.
Note finally that our results extend to compressible fluids the conclusions obtained by Tchapnda and Rendall \cite{TR} for the Vlasov equation of (collision-less) kinetic dynamics. The outline of the paper is as follows. Section~2 is concerned with the derivation of the field equations for stiff fluids under plane symmetry. Next, in Section~3 we develop the local existence and uniqueness theory and then, in Section~4, determine the global geometry and asymptotic behavior of the spacetimes under consideration. \section{Einstein-stiff fluid equations} \label{system} \subsection*{Gravitational field equations} \label{einstein eq} We consider spacetimes $(M,g)$ such that the manifold has the topology $M=I\times \mathbb{T}^3$, where $I$ is a real interval and $\mathbb{T}^3=S^1 \times S^1 \times S^1$ is the three-torus. The metric $g$ and the matter fields are required to be invariant under the action of the Euclidean group $E_2$ on the universal cover. It is also required that the spacetime has an $E_2$-invariant Cauchy surface of constant areal time. Under these conditions, the metric can be expressed in the form \begin{equation} \label{1.1} ds^2 = -e^{2\eta(t,x)}dt^2 + e^{2\lambda(t,x)}dx^2 + t^2 (dy^2 + dz^{2}), \end{equation} where the time variable ranges over $t > 0$ and the spatial variable over the interval $x \in [0,1]$, while the variables $y$ and $z$ range in $[0,2\pi]$; the metric coefficients $\eta$ and $\lambda$ are periodic in $x$ with period $1$. The Einstein equations read \begin{align}\label{einstein} G^{\alpha\beta} + \Lambda g^{\alpha\beta} = 8 \pi T^{\alpha\beta}, \end{align} where $G^{\alpha\beta}$ is the Einstein tensor, $T^{\alpha\beta}$ the energy-momentum tensor, and $\Lambda$ the cosmological constant, which we assume to be positive. We also introduce the notation $$ \rho=e^{2\eta}T^{00}, \quad j=e^{\lambda+\eta}T^{01}, \quad S=e^{2\lambda}T^{11}, \quad p=t^2T^{22} $$ which defines the fluid variables of interest.
After a tedious computation in the above coordinates, (\ref{einstein}) take the form of the following evolution and constraint equations (where the subscripts $t, x$ denote partial differentiation): \begin{equation} \label{1.2} e^{-2\eta} (2t\lambda_t+1) - \Lambda t^{2} = 8 \pi t^{2}\rho, \end{equation} \begin{equation} \label{1.3} e^{-2\eta} (2t\eta_t-1)+ \Lambda t^{2} = 8 \pi t^{2}S, \end{equation} \begin{equation} \label{1.4} \eta_x = -4 \pi t e^{\lambda+\eta}j, \end{equation} \begin{equation} \label{1.5} e^{-2\lambda}\left(\eta_{xx} + \eta_x(\eta_x - \lambda_x)\right) - e^{-2\eta}\left(\lambda_{tt}+(\lambda_t- \eta_t)(\lambda_t+\frac{1}{t})\right) + \Lambda = 8 \pi p. \end{equation} \subsection*{Stiff fluid equations}\label{fluid eq} The so-called stiff fluid under consideration is an isentropic perfect fluid with energy density $\mu>0$ equal to its pressure, that is, $p=\mu$. The $4$-velocity vector $U^\alpha$ of the fluid is normalized to be of unit length: $U^\alpha U_\alpha=-1$. The plane symmetry allows us to set $U^\alpha:=\xi(e^{-\eta},e^{-\lambda}u,0,0)$, where $\xi=(1-u^2)^{-1/2}$ is the relativistic factor and $u$ is the scalar velocity satisfying $|u|<1$. The energy momentum tensor for the stiff fluid is $$ T^{\alpha\beta} = \mu \, (2U^\alpha U^\beta+g^{\alpha\beta}), $$ that is \begin{equation}\label{fluid comp} \aligned &T^{00}=e^{-2\eta}\frac{1+u^2}{1-u^2}\mu=:e^{-2\eta}\rho, \qquad\quad T^{01}=e^{-\lambda-\eta}\frac{2u\mu}{1-u^2}=:e^{-\lambda-\eta}j, \\ &T^{11}=e^{-2\lambda}\frac{1+u^2}{1-u^2}\mu=:e^{-2\lambda}S, \qquad\quad T^{22}=T^{33}=t^{-2}\mu, \endaligned \end{equation} while, due to the above assumptions, all the other components vanish identically. The stiff fluid equations read \begin{equation} \label{euler} \nabla_\alpha T^{\alpha\beta}=0. 
\end{equation} The components of (\ref{euler}) with $\beta=2$ and $\beta=3$ are trivially satisfied, while computing the remaining two components we arrive at the two evolution equations \begin{equation} \label{1.6} \aligned & \rho_t+e^{\eta-\lambda}j_x=-2\lambda_t\rho-2\eta_x e^{\eta-\lambda}j-\frac{2}{t}(\rho+\mu), \\ & j_t+e^{\eta-\lambda}\rho_x=-2\lambda_tj-2\eta_x e^{\eta-\lambda}\rho-\frac{2}{t}j. \endaligned \end{equation} The equations may be put into a simpler form, as follows. Observe that the first-order principal part of (\ref{1.6}) is a strictly hyperbolic system of two equations associated with the two distinct speeds $\pm e^{\eta-\lambda}$. Introducing the Riemann invariants \begin{equation}\label{riem} r:=\frac{1+u}{1-u}\mu=\rho+j, \qquad s:=\frac{1-u}{1+u}\mu=\rho-j, \end{equation} and the directional derivatives $$ D^+:=\del_t+e^{\eta-\lambda}\del_x, \qquad D^-:=\del_t-e^{\eta-\lambda}\del_x, $$ and then combining the equations in (\ref{1.6}), we obtain \begin{equation} \label{1.9} \aligned & D^+r=-2\left(\lambda_t+\eta_xe^{\eta-\lambda}+\frac{1}{t}\right)r-\frac{2}{t}\sqrt{rs}, \\ & D^-s=-2\left(\lambda_t-\eta_xe^{\eta-\lambda}+\frac{1}{t}\right)s-\frac{2}{t}\sqrt{rs}. \endaligned \end{equation} Finally, the expressions for $\lambda_t$ and $\eta_x$ taken from (\ref{1.2}) and (\ref{1.4}) can be plugged in (\ref{1.9}), and by setting $X=e^\eta \sqrt{r}$ and $Y=e^\eta \sqrt{s}$ we arrive at \begin{equation} \label{Euler} \aligned & D^+X = - \Lambda te^{2\eta}X-\frac{1}{t}Y, \\ & D^-Y = - \frac{1}{t}X-\Lambda te^{2\eta}Y, \endaligned \end{equation} which we will refer to as the {\sl stiff fluid equations} for the unknowns $r$ and $s$. \subsection*{Basic properties}\label{properties} It is easily checked that (\ref{1.5}) is a {\sl consequence} of the equations (\ref{1.2})--(\ref{1.4}), (\ref{1.6}).
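The algebra relating the primary fluid variables $(\mu,u)$ to $(\rho,j)$ and to the Riemann invariants $(r,s)$ is elementary but error-prone; the following sketch (plain NumPy, with arbitrary sample values) spot-checks the identities $r=\rho+j$, $s=\rho-j$, $\mu=\sqrt{rs}$ and $u=(\sqrt{r}-\sqrt{s})/(\sqrt{r}+\sqrt{s})$.

```python
import numpy as np

rng = np.random.default_rng(0)
mu = rng.uniform(0.1, 2.0, 100)        # energy density mu > 0
u = rng.uniform(-0.99, 0.99, 100)      # scalar velocity, |u| < 1

# fluid variables: rho = (1+u^2)/(1-u^2) mu,  j = 2 u mu / (1-u^2)
rho = (1 + u**2) / (1 - u**2) * mu
j = 2 * u * mu / (1 - u**2)

# Riemann invariants: r = (1+u)/(1-u) mu,  s = (1-u)/(1+u) mu
r = (1 + u) / (1 - u) * mu
s = (1 - u) / (1 + u) * mu

assert np.allclose(r, rho + j)                     # r = rho + j
assert np.allclose(s, rho - j)                     # s = rho - j
assert np.allclose(np.sqrt(r * s), mu)             # mu = sqrt(r s)
assert np.allclose((np.sqrt(r) - np.sqrt(s)) / (np.sqrt(r) + np.sqrt(s)), u)
print("Riemann-invariant identities verified")
```

In particular, $r,s\geq 0$ whenever $\mu\geq 0$ and $|u|<1$, consistent with the sign requirement on the Riemann invariants.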
One can also check that (\ref{1.4}) is a constraint equation, that is, it is automatically satisfied for all times once it is satisfied on an initial Cauchy hypersurface. Therefore, we will work with (\ref{1.2}), (\ref{1.3}) and (\ref{Euler}) for the unknowns $\eta$, $\lambda$, $r$ and $s$. Observe that by definition $r$ and $s$ must be non-negative. From the definition we see that $S=\rho=(r+s)/2$. We will solve the initial-value problem with data prescribed on the hypersurface $t=1$. Observe that once the fluid variables have been determined, the metric coefficient $\eta$ is obtained by integrating (\ref{1.3}) in the time direction, i.e. \begin{equation} \label{eta} e^{-2 \eta(t,x)} = {e^{-2 \etab(x)} \over t} + {1 \over t} \int_1^t \tau^2 \big(\Lambda - 4 \pi (r+s)(\tau,x)\big) \, d\tau \end{equation} with $\etab:=\eta(1,\cdot)$. Next, $\eta$ being known, the following equation (obtained from (\ref{1.2}) and (\ref{1.3})), \begin{equation} \label{lambda_t} \lambda_t(t,x) = \eta_t(t,x) + \Lambda te^{2\eta} -\frac{1}{t}, \end{equation} is integrated in time to yield the second metric coefficient \begin{equation} \label{lambda} \lambda(t,x) = \lambdab(x) + \int_1^t \lambda_t(\tau,x) \, d\tau \end{equation} with $\lambdab =\lambda(1,\cdot) $. Therefore, it will be enough to concentrate on the stiff fluid equations (\ref{Euler}) together with the metric equation (\ref{eta}), which determine an evolution system for the unknowns $\eta$, $r$, $s$. Observe that there exists some $T^*>1$ such that the right-hand side of (\ref{eta}) is positive on $[1,T^*)\times [0,1]$. Estimates for $r$ and $s$ can easily be derived as follows. The expressions for $\lambda_t$ and $\eta_x$ taken from (\ref{1.2}) and (\ref{1.4}) can be plugged in (\ref{1.9}) to yield \be \label{rs} \aligned & D^+r = - \big( 8\pi t e^{2\eta}s+\Lambda t e^{2\eta}+{1 \over t} \big) \, r-{2\over t}\sqrt{rs}, \\ & D^-s = -\big( 8\pi t e^{2\eta}r+\Lambda t e^{2\eta}+{1 \over t} \big) \, s-{2\over t}\sqrt{rs}.
\endaligned \ee Using the fact that $r$ and $s$ are positive, this implies $$ D^+r\leq-t^{-1}r, \qquad D^-s\leq-t^{-1}s, $$ and integrating this along the characteristic curves associated with the operators $D^\pm$ implies that \be \label{bound} r\leq r(1,\cdot)\,t^{-1},\ \ s\leq s(1,\cdot)\, t^{-1}. \ee As a consequence, if $\eta$ is bounded then so are $X$ and $Y$. A straightforward computation leads to the following result. \begin{lemma}\label{X1Y1} Set $$ \aligned & b_1=(\lambda-\eta)_xe^{\eta-\lambda}-\Lambda te^{2\eta}, \qquad b_2=-2\Lambda t\eta_xe^{2\eta}, \\ & b_3=(\eta-\lambda)_xe^{\eta-\lambda}-\Lambda te^{2\eta}, \qquad b=-\frac{1}{t}. \endaligned $$ If $X$ and $Y$ solve (\ref{Euler}) then $X_x$ and $Y_x$ satisfy \begin{equation} \label{Euler_x} \aligned & D^+X_x = b_1X_x+bY_x+b_2X, \\ & D^-Y_x = bX_x+b_3Y_x+b_2Y. \endaligned \end{equation} \end{lemma} The following result will be used to obtain bounds on derivatives of $X$ and $Y$. \begin{lemma}\label{A(t)} Set $$ \aligned & K(t)=\sup\{(X+Y)(t,x)\ | \ x\in[0,1]\}, \\ & A(t)=\sup\{(|X_x|+|Y_x|)(t,x)\ | \ x\in[0,1]\}, \\ & v(t)=\sup\{|(\lambda-\eta)_x|e^{\eta-\lambda}+\Lambda te^{2\eta}+\frac{1}{t}\ | \ x\in[0,1]\}, \\ & h(t)=2\Lambda t\sup\{|\eta_x|e^{2\eta}\ | \ x\in[0,1]\}. \endaligned $$ If $(X,Y)$ and $(X_x,Y_x)$ solve (\ref{Euler}) and (\ref{Euler_x}), respectively, with $X_x(1)=(e^{\etab}\sqrt{\rbar})_x$ and $Y_x(1)=(e^{\etab}\sqrt{\sbar})_x$, then \begin{equation} \label{A-bound} A(t)\leq A(1)+\int_1^t\big(v(\tau)A(\tau)+h(\tau)K(\tau)\big)\ d\tau. 
\end{equation} \end{lemma} \begin{proof} Equations (\ref{Euler_x}) can be written in the form $$ \aligned & \frac{d}{dt} X_x(t,\gamma_1(t)) = \big(b_1X_x+bY_x+b_2X\big)(t,\gamma_1(t)), \\ & \frac{d}{dt} Y_x(t,\gamma_2(t)) = \big(bX_x+b_3Y_x+b_2Y\big)(t,\gamma_2(t)), \endaligned $$ where $\gamma_1$ and $\gamma_2$ are the integral curves corresponding to $D^+$ and $D^-$ respectively.\\ Integrating this over $[1,t]$, taking the absolute value in each equation, adding the resulting inequalities and taking the supremum of each term yields (\ref{A-bound}). \end{proof} \section{Local existence theory} \subsection*{Main statement of this section} We are interested in regular solutions, defined as follows. \begin{definition} A {\rm regular solution} to the plane-symmetric Einstein-stiff fluid equations consists of two metric coefficients $\eta, \lambda$ and Riemann invariants $r,s$ given as continuously differentiable functions defined on $[1,T] \times [0,1]$ and periodic in space. \end{definition} We pose the initial-value problem by choosing some functions $\etab, \lambdab, \rbar, \sbar$ as periodic functions on $[0,1]$ satisfying the constraint \begin{equation} \label{constraint} \etab_x = - 2 \pi e^{\lambdab+\etab} \, (\rbar - \sbar), \end{equation} and, on the initial hypersurface $t=1$, we impose \begin{equation} \label{initial} (\eta, \lambda, r, s)(1,\cdot) = (\etab, \lambdab, \rbar, \sbar).
\end{equation} \begin{theorem}[Local existence and uniqueness theory in the Riemann variables] \label{theo1} Given periodic, continuously differentiable data $\etab, \lambdab, \rbar, \sbar$ prescribed on the initial hypersurface $t=1$ and satisfying the constraint (\ref{constraint}), there exists a future development which consists of continuously differentiable functions $\eta, \lambda, r, s$ defined on some time interval $[1,T)$ (with $T\in(1,\infty]$) that are periodic in space and satisfy the stiff fluid equations (\ref{Euler}), together with the evolution equations (\ref{1.2}) and (\ref{1.3}). \end{theorem} Once the Riemann invariants $r$ and $s$ are known, the primary fluid variables $\mu$ and $u$ can be determined from equations (\ref{fluid comp}) and (\ref{riem}): $$\mu=\sqrt{rs}, \quad \qquad u=\frac{\sqrt{r}-\sqrt{s}}{\sqrt{r}+\sqrt{s}}. $$ By construction, the Riemann invariants are {\sl bounded,} and this property is equivalent to the following restriction on the fluid variables: \be \label{fluid} {1 \pm u \over 1 \mp u} \, \mu \lesssim 1. \ee Observe that this condition allows the density to vanish, and the velocity component $u$ to approach $\pm 1$, which is the normalized light speed. The condition is equivalent to saying \be \label{fluid2} 0 \leq \mu \lesssim 1 - |u|^2.
\ee \begin{theorem}[Local existence and uniqueness theory in the fluid variables] \label{theo2} Under the assumptions of Theorem~\ref{theo1}, the problem with initial data satisfying the uniform bound \eqref{fluid2} admits a local-in-time solution which is unique in the following (generalized) sense: if $\mu_1, u_1$ and $\mu_2, u_2$ denote fluid solutions to the same initial value problem, then $$ \aligned & \text{either } \mu_1 = \mu_2 >0 \ \text{ and } \, u_1= u_2, \\ &\text{ or } \mu_1 = \mu_2 =0 \, \text{ and } \, u_1, u_2 \, \text{ are arbitrary.} \endaligned $$ \end{theorem} \subsection*{Proof of the local existence result} We rely on an iterative argument and define a sequence $(\eta_n, r_n, s_n)$ in the following way. \begin{enumerate} \item For $t\in[1,+\infty)$ and $x\in[0,1]$, we set $(\eta_0, r_0, s_0)(t,x) := (\etab, \rbar, \sbar)(x)$, $T_0=+\infty$. \item If $\eta_{n-1}$, $r_{n-1}$, $s_{n-1}$ are regular on $[1,T_{n-1})\times [0,1]$ with $T_{n-1}\leq\infty$, then we define $T_n$ to be the supremum of all $t'\in(1,T_{n-1})$ such that $$ {e^{-2 \etab(x)} \over t} + {1 \over t} \int_1^t \tau^2 \left(\Lambda - 4 \pi (r_{n-1}+s_{n-1})(\tau,x)\right) \, d\tau >0 $$ for all $x\in[0,1]$ and $t\in[1,t']$, and we then set \begin{equation} \label{eta_n} e^{-2 \eta_n(t,x)} = {e^{-2 \etab(x)} \over t} + {1 \over t} \int_1^t \tau^2 \big(\Lambda - 4 \pi (r_{n-1}+s_{n-1})(\tau,x)\big)\ d\tau. \end{equation} \item We define $r_n$ and $s_n$ such that $X_n=e^{\eta_n}\sqrt{r_n}$ and $Y_n=e^{\eta_n}\sqrt{s_n}$ are solutions of the system \begin{equation} \label{Euler_n} \aligned & D^+_{n-1}X_n = a_{n-1}X_{n-1}+bY_{n-1}, \\ & D^-_{n-1}Y_n = bX_{n-1}+a_{n-1}Y_{n-1}, \endaligned \end{equation} where $a_{n-1}=- \Lambda te^{2\eta_{n-1}}$, $b=-{1 \over t}$, and $D^{\pm}_{n}$ is the $D^{\pm}$-operator corresponding to the $n$-th iterate. We prescribe the same initial data (\ref{initial}) for all $n$.
\end{enumerate} Observe that $T_n\geq T^*$ for all $n$, so that all the iterates are well-defined and regular on the fixed time interval $[1,T^*)$. In order to prove that the sequence of iterates converges to a regular solution, we establish uniform bounds on the iterates as well as their time and space derivatives, and we prove their uniform convergence. This is done in a series of lemmas. In the sequel, we denote by $\| \cdot \|$ the sup-norm on the function space of interest, and by $C$ a constant that may change at each occurrence. \begin{lemma}\label{bounds} The sequences $\eta_n$, $X_n$, $Y_n$, $r_n$, $s_n$ and $(\eta_n)_t$ are uniformly bounded in $n$, in the sup-norm, by a continuous function of $t$ on a time interval $[1,T^{(1)}]$. \end{lemma} \begin{proof} Set $$ \aligned & P_n(t):=\sup\{e^{2\eta_n(t,x)} \ | \ x\in[0,1]\}, \\ & K_n(t):=\sup\{(X_n+Y_n)(t,x)\ | \ x\in[0,1]\}. \endaligned $$ Using equations (\ref{Euler_n}), we apply the same argument used in the proof of Lemma~\ref{A(t)} and obtain \begin{equation}\label{Kn-bound} K_n(t)\leq K_0+\int_1^t m_{n-1}(\tau)K_{n-1}(\tau)\ d\tau, \end{equation} with \begin{align*} m_n(t)&=\sup\{\Lambda t e^{2\eta_n}+\frac{1}{t} \ ; \ x\in[0,1]\}\\ &\leq t(1+\Lambda)(1+P_n(t)), \end{align*} so that \begin{equation}\label{Kn-bound1} K_n(t)\leq K_0+(1+\Lambda)\int_1^t \tau(1+ P_{n-1}(\tau))K_{n-1}(\tau)\ d\tau. \end{equation} On the other hand, equation (\ref{eta_n}) implies \begin{equation} \label{eta n_t} (\eta_n)_t=\frac{1}{2t}-\frac{\Lambda}{2} t e^{2\eta_n}+2\pi te^{2\eta_n-2\eta_{n-1}}(X_{n-1}^2+Y_{n-1}^2), \end{equation} and since $e^{-2\eta_{n-1}}\leq \frac{e^{-2\etab}+\Lambda t^3}{t}\leq C(1+\Lambda)t^2$, it follows that \begin{equation} \label{Pn-bound} P_n(t)\leq \|e^{2\etab}\|+C(1+\Lambda)\int_1^t \tau^3(1+ K_{n-1}(\tau))^2(1+P_{n}(\tau))^2\ d\tau.
\end{equation} Now defining $Q_n(t):=\sup\{K_m(t)+P_m(t) \ ; \ m\leq n\}$ and adding (\ref{Kn-bound}) and (\ref{Pn-bound}) we arrive at \begin{equation} \label{Qn-bound} Q_n(t)\leq K_0+ \|e^{2\etab}\|+C(1+\Lambda)\int_1^t \tau^3(1+Q_{n}(\tau))^4\ d\tau. \end{equation} Let $[1,T^{(1)})$ (with $T^{(1)}\in(1,T^*]$) be the maximal interval of existence for the solution $z_1$ of the integral equation $$ z_1(t)= K_0+ \|e^{2\etab}\|+C(1+\Lambda)\int_1^t \tau^3(1+z_1(\tau))^4\ d\tau, \ \ z_1(1)= K_0+ \|e^{2\etab}\|. $$ Then $Q_n(t)\leq z_1(t)$ for all $n\in \N$ and $t\in (1,T^{(1)})$, and the same is true for $K_n$ and $P_n$. It follows that $\eta_n$, $X_n$, $Y_n$, and then $r_n$ and $s_n$, are uniformly bounded; the bound on $(\eta_n)_t$ then follows from (\ref{eta n_t}). \end{proof} \begin{lemma}\label{der-bounds} The sequences $(\eta_n)_x$, $(X_n)_x$, $(Y_n)_x$, $(X_n)_t$, $(Y_n)_t$, $(r_n)_x$, $(s_n)_x$, $(r_n)_t$ and $(s_n)_t$ are uniformly bounded in $n$, in the sup-norm, by a continuous function of $t$ on a time interval $[1,T^{(2)}]$. \end{lemma} \begin{proof} Set $$ \aligned A_n(t) :=& \sup\{(|(X_n)_x|+|(Y_n)_x|)(t,x)\ | \ x\in[0,1]\}, \\ A_0:=& \sup\{(|\Xbar_x|+|\Ybar_x|)(x)\ | \ x\in[0,1]\}, \\ B_n(t):=&\sup\{|(e^{-2\eta_n(t,x)})_x| \ | \ x\in[0,1]\}. \endaligned $$ Then taking the spatial derivative in (\ref{Euler_n}) gives the following equations: \begin{align*} D_{n-1}^+(X_n)_x =& (\lambda_{n-1}-\eta_{n-1})_xe^{\eta_{n-1}-\lambda_{n-1}}(X_n)_x-2\Lambda t(\eta_{n-1})_xe^{2\eta_{n-1}}X_{n-1}\\&-\Lambda te^{2\eta_{n-1}}(X_{n-1})_x-\frac{1}{t}(Y_{n-1})_x , \end{align*} \begin{align*} D_{n-1}^-(Y_n)_x =& (\eta_{n-1}-\lambda_{n-1})_xe^{\eta_{n-1}-\lambda_{n-1}}(Y_n)_x-2\Lambda t(\eta_{n-1})_xe^{2\eta_{n-1}}Y_{n-1}\\&-\Lambda te^{2\eta_{n-1}}(Y_{n-1})_x-\frac{1}{t}(X_{n-1})_x.
\end{align*} But, using Lemma~\ref{bounds}, we have \begin{align*} |(\lambda_{n-1}-\eta_{n-1})_x(s)|&=|(\lambdab-\etab)_x+2\Lambda\int_1^s \tau(\eta_{n-1})_xe^{2\eta_{n-1}}\ d\tau|\\&\leq Cs^2(1+B_{n-1}(s)), \end{align*} so that applying the same argument as in Lemma~\ref{A(t)} and using Lemma~\ref{bounds} again, we obtain \begin{equation} \label{An-bound} A_n(t)\leq A_0+C \int_1^t \tau^2 (1+B_{n-1}(\tau))\big(1+A_{n-1}(\tau)+A_{n}(\tau)\big)\ d\tau. \end{equation} On the other hand we have \begin{equation} \label{eta_n x} (e^{-2 \eta_n(t,x)})_x = {-2\etab_x e^{-2 \etab} \over t} - {4 \pi \over t} \int_1^t \tau^2 (r_{n-1}+s_{n-1})_x(\tau,x)\ d\tau, \end{equation} which implies \begin{equation} \label{Bn-bound} B_n(t)\leq 2\|\etab_x e^{-2 \etab}\|+C \int_1^t \tau^2 (A_{n-1}+B_{n-1})(\tau)\ d\tau. \end{equation} We have used the fact that \begin{align*} |(r_n+s_n)_x|&= |(e^{-2 \eta_n})_x(X_n^2+Y_n^2)+2e^{-2\eta_{n}}\big(X_n(X_n)_x+Y_n(Y_n)_x\big)|\\ &\leq C(A_{n}+B_{n})(t). \end{align*} Now defining $E_n(t):=\sup\{A_m(t)+B_m(t) \ ; \ m\leq n\}$ and adding (\ref{An-bound}) and (\ref{Bn-bound}) we arrive at \begin{equation} \label{En-bound} E_n(t)\leq A_0+ 2\|\etab_x e^{-2 \etab}\|+C\int_1^t \tau^2(1+E_{n}(\tau))^2\ d\tau. \end{equation} Let $[1,T^{(2)})$ (with $T^{(2)}\leq T^{(1)}$) be the maximal interval of existence for the solution $z_2$ of the integral equation $$ \aligned z_2(t) & = A_0+ 2\|\etab_x e^{-2 \etab}\|+C\int_1^t \tau^2(1+z_{2}(\tau))^2\ d\tau, \\ z_2(1) & = A_0+ 2\|\etab_x e^{-2 \etab}\|. \endaligned $$ Then $E_n(t)\leq z_2(t)$, for all $n\in \N$ and $t\in (1,T^{(2)})$. The same is true for $A_n$ and $B_n$. It follows that $(\eta_n)_x$, $(X_n)_x$, $(Y_n)_x$, $(X_n)_t$, $(Y_n)_t$, $(r_n)_x$, $(s_n)_x$ and then $(r_n)_t$, $(s_n)_t$ and $(\eta_n)_{tx}$ are uniformly bounded. \end{proof} \begin{lemma}\label{conv} The sequences $(\eta_n)$, $(X_n)$, and $(Y_n)$ converge uniformly on $[1,T^{(3)}]$ for all $T^{(3)}$ less than $T^{(2)}$.
\end{lemma} \begin{proof} For $t\in[1,T^{(3)}]$, define $$ \aligned \theta_n(t):=& \sup\{|X_{n+1}-X_n|(t,x)+|Y_{n+1}-Y_n|(t,x)\ ; \ x\in[0,1]\}, \\ \alpha_n(t):=& \sup\{\|(\eta_{n+1}-\eta_n)(s)\|+\|(X_{n+1}-X_n)(s)\|+\|(Y_{n+1}-Y_n)(s)\|\ ; \ s\in[1,t]\}, \\ \Tilde{X}_n:=&X_{n+1}-X_n, \qquad \Tilde{Y}_n:=Y_{n+1}-Y_n. \endaligned $$ Combining equations (\ref{Euler_n}) written for $n+1$ and $n$ gives \begin{equation} \label{tilde X} \aligned & D^+_{n}\Tilde X_n = a_{n}\Tilde X_{n-1}+b\Tilde Y_{n-1}+F_n, \\ & D^-_{n}\Tilde Y_n = b\Tilde X_{n-1}+a_{n}\Tilde Y_{n-1}+G_n, \endaligned \end{equation} with $$ \aligned F_n &=-(e^{2\eta_n}-e^{2\eta_{n-1}})\Lambda tX_{n-1}-(e^{\eta_n-\lambda_n}-e^{\eta_{n-1}-\lambda_{n-1}})(X_n)_x, \\ G_n & =-(e^{2\eta_n}-e^{2\eta_{n-1}})\Lambda tY_{n-1}+(e^{\eta_n-\lambda_n}-e^{\eta_{n-1}-\lambda_{n-1}})(Y_n)_x. \endaligned $$ Reasoning as in the proof of Lemma~\ref{A(t)} we have \begin{equation} \label{theta-bound} \theta_n(t)\leq \int_1^t\big(m_n(\tau)\theta_{n-1}(\tau)+\sup\{|F_n(\tau,x)|+|G_n(\tau,x)| \ ; \ x\in[0,1]\}\big)\ d\tau, \end{equation} and this implies that \begin{equation} \label{tilde X n-bound} |\Tilde X_n|+|\Tilde Y_n|\leq C \int_1^t\alpha_{n-1}(\tau)\ d\tau, \end{equation} where we have used the mean value theorem to handle the terms $e^{2\eta_n}-e^{2\eta_{n-1}}$ and $e^{\eta_n-\lambda_n}-e^{\eta_{n-1}-\lambda_{n-1}}$, together with the previous lemmas. On the other hand, equation (\ref{eta n_t}) implies \begin{align*} (\eta_{n+1}-\eta_n)_t = & -\frac{\Lambda}{2} t (e^{2\eta_{n+1}}-e^{2\eta_n}) +2\pi te^{2\eta_{n+1}-2\eta_{n}}\big((X_{n+1}^2-X_{n}^2)+(Y_{n+1}^2-Y_{n}^2)\big)\notag\\ &+2\pi t(e^{2\eta_{n+1}-2\eta_{n}}-e^{2\eta_{n}-2\eta_{n-1}})(X_n^2+Y_n^2), \end{align*} and using Lemma \ref{bounds} and the mean value theorem it follows after integration in time that $$ |\eta_{n+1}-\eta_n|\leq C \int_1^t(|\eta_{n+1}-\eta_n|+|\eta_{n}-\eta_{n-1}|+|X_{n+1}-X_n|+|Y_{n+1}-Y_n|)(\tau)\ d\tau, $$ so that \begin{equation} \label{tilde eta n-bound}
|\eta_{n+1}-\eta_n|\leq C \int_1^t(\alpha_n+\alpha_{n-1})(\tau)\ d\tau. \end{equation} Combining (\ref{tilde X n-bound}) and (\ref{tilde eta n-bound}) leads to $$ \alpha_n(t)\leq C \int_1^t(\alpha_n+\alpha_{n-1})(\tau)\ d\tau, $$ which, by Gronwall's inequality, implies $$ \alpha_n(t)\leq C \int_1^t\alpha_{n-1}(\tau)\ d\tau, $$ and by induction $$ \alpha_n(t)\leq \frac{C^{n+1}}{n!}, $$ and so $\alpha_n\to 0$ as $n\to\infty$. This establishes the uniform convergence of $\eta_n$, $X_n$, and $Y_n$. \end{proof} It follows from (\ref{eta n_t}) that the sequence $(\eta_n)_t$ converges uniformly as well. In the following lemma, the uniform convergence of the other iterates' derivatives is proven. \begin{lemma}\label{der-conv} The sequences $(\eta_n)_x$, $(X_n)_x$, $(Y_n)_x$, $(X_n)_t$ and $(Y_n)_t$ converge uniformly on $[1,T^{(4)}]$, where $[1,T^{(4)}] \subset [1,T^{(3)}]$. \end{lemma} \begin{proof} We set $$ \beta_n(t):=\sup\Big\{\|(\eta_{n+1}-\eta_n)_x(s)\|+\|(X_{n+1}-X_n)_x(s)\|+\|(Y_{n+1}-Y_n)_x(s)\|\ ; \ s\in[1,t]\Big\}. $$ Taking the space derivative in equations (\ref{Euler_n}) gives \begin{equation}\label{C n} D_n^+(X_{n+1})_x=\Tilde C_n, \qquad D_n^-(Y_{n+1})_x=\Tilde D_n \end{equation} with \begin{align*} \Tilde C_n&=(\lambda_n-\eta_n)_x e^{\eta_n-\lambda_n}(X_{n+1})_x-2\Lambda t(\eta_n)_xe^{2\eta_n} X_n-\Lambda te^{2\eta_n}(X_n)_x-\frac{1}{t}(Y_n)_x \\ \Tilde D_n&=(\eta_n-\lambda_n)_x e^{\eta_n-\lambda_n}(Y_{n+1})_x-2\Lambda t(\eta_n)_xe^{2\eta_n} Y_n-\Lambda te^{2\eta_n}(Y_n)_x-\frac{1}{t}(X_n)_x. \end{align*} Let $\gamma_n^1$ and $\gamma_n^2$ be the integral curves corresponding to $D_n^+$ and $D_n^-$, respectively, that start from the point $(s,x)$; that is, for each $n$, \begin{equation}\label{gamma} (\gamma_n^1)_t=e^{\eta_n-\lambda_n}, \ (\gamma_n^2)_t=-e^{\eta_n-\lambda_n}, \ \gamma_n^1(s)=\gamma_n^2(s)=x.
\end{equation} Integrating the first equation in (\ref{C n}) along $\gamma_n^1$ and the second one along $\gamma_n^2$ yields, after subtraction, \begin{equation} \label{X n+1-X n} \aligned & (X_{n+1}-X_n)_x(s)=\int_1^s\big(\Tilde C_{n}(\tau,\gamma_n^1(\tau))-\Tilde C_{n-1}(\tau,\gamma_{n-1}^1(\tau))\big)\ d\tau, \\ & (Y_{n+1}-Y_n)_x(s)=\int_1^s\big(\Tilde D_{n}(\tau,\gamma_n^2(\tau))-\Tilde D_{n-1}(\tau,\gamma_{n-1}^2(\tau))\big)\ d\tau. \endaligned \end{equation} But we have \begin{equation}\label{tilde (Cn-Cn-1)} \aligned & |\Tilde C_{n}(\tau,\gamma_n^1(\tau))-\Tilde C_{n-1}(\tau,\gamma_{n-1}^1(\tau))| \\ & \leq |\Tilde C_{n}(\tau,\gamma_n^1(\tau))-\Tilde C_{n}(\tau,\gamma_{n-1}^1(\tau))|+|(\Tilde C_{n}-\Tilde C_{n-1})(\tau)|. \endaligned \end{equation} Given now any $\varepsilon>0$, we find, for any sufficiently large $n$, \begin{equation}\label{tilde Cn} |\Tilde C_{n}(\tau,\gamma_n^1(\tau))-\Tilde C_{n}(\tau,\gamma_{n-1}^1(\tau))|\leq C\varepsilon, \end{equation} where we have used the uniform convergence of $\eta_n$, the uniform continuity of $\Tilde C_n$ over the compact set $[1,T^{(4)}]\times\big(\gamma_n^1([1,T^{(4)}])\cup\gamma_{n-1}^1([1,T^{(4)}])\big)$, and the following inequality, which follows from (\ref{gamma}): \begin{equation}\label{gamma n} |\gamma_n^1-\gamma_{n-1}^1|(\tau) \leq C \, \sup \Big\{ \|(e^{2\eta_n}-e^{2\eta_{n-1}})(t)\| \ ; \ t\in[1,T^{(4)}] \Big\}.
\end{equation} For the second term of the right-hand side in (\ref{tilde (Cn-Cn-1)}) we have \begin{eqnarray*} \aligned &\Tilde C_{n}-\Tilde C_{n-1} \\ & =\big((\lambda_{n}-\eta_{n})_x-(\lambda_{n-1}-\eta_{n-1})_x\big)e^{\eta_n-\lambda_n}(X_{n+1})_x\\ & \quad+ (\lambda_{n-1}-\eta_{n-1})_x\Big(e^{\eta_n-\lambda_n}(X_{n+1}-X_n)_x+(e^{\eta_n-\lambda_n}-e^{\eta_{n-1}- \lambda_{n-1}})(X_n)_x\Big)\\ & \quad -2\Lambda t (\eta_{n}-\eta_{n-1})_xe^{2\eta_n}X_n-2\Lambda t(\eta_{n-1})_x \Big(e^{2\eta_n}(X_{n}-X_{n-1})+(e^{2\eta_n}-e^{2\eta_{n-1}})(X_{n-1})\Big)\\ & \quad -\Lambda t (e^{2\eta_{n}}-e^{2\eta_{n-1}})(X_n)_x-\Lambda te^{2\eta_{n-1}}\big((X_{n})_x-(X_{n-1})_x\big)-\frac{1}{t}\big((Y_{n})_x-(Y_{n-1})_x\big), \endaligned \end{eqnarray*} and $$ (\lambda_{n}-\eta_{n})_x=(\lambdab-\etab)_x+2\Lambda\int_1^t \tau\,(\eta_n)_xe^{2\eta_n}(\tau)\ d\tau, $$ so that \begin{align*} |(\lambda_{n}-\eta_{n})_x-(\lambda_{n-1}-\eta_{n-1})_x|\leq C\varepsilon +C\sup\{\|(\eta_n-\eta_{n-1})_x(t)\| \ ; t\in[1,T^{(4)}]\}. \end{align*} Thus, for $n$ sufficiently large, \begin{equation}\label{bound tilde Cn} \|(\Tilde C_{n}-\Tilde C_{n-1})(\tau)\|\leq C\varepsilon+C(\beta_n+\beta_{n-1})(\tau). \end{equation} It then follows from (\ref{X n+1-X n})-(\ref{tilde Cn}) and (\ref{bound tilde Cn}) that for $n$ sufficiently large, \begin{equation} \label{Y n+1-Y n} \aligned & |(X_{n+1}-X_n)_x|(s)\leq C\varepsilon+C\int_1^s(\beta_n+\beta_{n-1})(\tau)\ d\tau, \\ & |(Y_{n+1}-Y_n)_x|(s)\leq C\varepsilon+C\int_1^s(\beta_n+\beta_{n-1})(\tau)\ d\tau.
\endaligned \end{equation} On the other hand, taking the spatial derivative in (\ref{eta n_t}) and subtracting the resulting equations written for $n+1$ and $n$ gives \begin{eqnarray*} \aligned & \hskip-.3cm (\eta_{n+1}-\eta_{n})_{tx} \\ = & -\Lambda t (\eta_{n+1}-\eta_{n})_xe^{2\eta_{n+1}}-\Lambda t(\eta_{n})_x (e^{2\eta_{n+1}}-e^{2\eta_{n}})\\&+4\pi t(\eta_{n+1}-\eta_{n})_xe^{2(\eta_{n+1}-\eta_n)}(X_{n}^2+Y_{n}^2) -4\pi t(\eta_{n}-\eta_{n-1})_xe^{2(\eta_{n}-\eta_{n-1})}(X_{n-1}^2+Y_{n-1}^2)\\& +4\pi te^{2(\eta_{n+1}-\eta_n)}\big((X_{n}-X_{n-1})_xX_n+(Y_{n}-Y_{n-1})_xY_n\big)\\& +4\pi te^{2(\eta_{n+1}-\eta_n)}\big((X_{n-1})_xX_n+(Y_{n-1})_xY_n\big)\\& -4\pi te^{2(\eta_{n}-\eta_{n-1})}\big((X_{n-1})_xX_{n-1}+(Y_{n-1})_xY_{n-1}\big). \endaligned \end{eqnarray*} From this and the previous lemmas, it follows that, for $n$ sufficiently large, \begin{equation} \label{eta n+1-eta n} |(\eta_{n+1}-\eta_n)_x|(s)\leq C\varepsilon+C\int_1^s(\beta_n+\beta_{n-1})(\tau)\ d\tau. \end{equation} Combining (\ref{Y n+1-Y n}) and (\ref{eta n+1-eta n}), and taking the supremum over $s\in[1,t]$ yields, for $n$ sufficiently large, $$ \beta_n(t)\leq C\varepsilon+C\int_1^t(\beta_n+\beta_{n-1})(\tau)\ d\tau, $$ and by Gronwall's lemma it follows that, for $n$ sufficiently large and $t\in[1,T^{(4)}]$, $$ \delta_n(t)\leq C\varepsilon, $$ where $\delta_n(t):=\sup\{\beta_m(t) \ ; \ m\leq n\}$. The uniform convergence of $(\eta_n)_x$, $(X_n)_x$, $(Y_n)_x$, $(X_n)_t$ and $(Y_n)_t$ follows. \end{proof} Lemmas \ref{conv} and \ref{der-conv} allow us to pass to the limit in (\ref{eta_n}) and (\ref{Euler_n}) and obtain a regular solution $(\eta, X, Y)$ to our system on a time interval $[1,T)$. It is easily checked that this solution is unique. Namely, let $(\eta_i, X_i, Y_i)$, $i=1, 2$, be two regular solutions of the Cauchy problem for the same initial data $(\etab,\Xbar, \Ybar)$ at $t=1$.
Using the same argument as in the proof of the convergence of the iterates leads to $$ \alpha(t)\leq C\int_1^t\alpha(\tau)\ d\tau, $$ where $\alpha(t)=\sup\{\|(\eta_1-\eta_2)(s)\|+\|(X_1-X_2)(s)\|+\|(Y_1-Y_2)(s)\| \ ; s\in[1,t]\}$.\\ It follows that $\alpha(t)=0$ for $t\in[1,T)$, i.e., the solution is unique. We have thus established the existence of a unique, local-in-time regular solution $(\eta, \lambda, r, s)$ to the Cauchy problem for the plane symmetric Einstein-stiff fluid equations written in areal coordinates. \section{Global existence theory and asymptotics} \subsection*{Global existence} We are now in a position to establish the following main result, which takes advantage of our assumption $\Lambda>0$. \begin{theorem}[Global existence theory and asymptotics] Under the assumptions in Theorem~\ref{theo1}, the solution constructed therein is defined up to $T=+\infty$, the spacetime is future geodesically complete, and the following asymptotic properties hold at late times: \begin{equation} \aligned &\eta=-\ln t\,\big(1+O((\ln t)^{-1})\big),\ \ \lambda=\ln t\,\big(1+O((\ln t)^{-1})\big), \\ & r=O(t^{-1}),\qquad s=O(t^{-1}), \\ &\eta_t=-\frac{1}{t}(1+O(t^{-1})),\ \ \lambda_t=\frac{1}{t}(1+O(t^{-1})), \\ & \eta_x=O(1). \endaligned \end{equation} Consequently, the generalized Kasner exponents associated with this spacetime (cf.~\eqref{Kasner-expo}, below) tend to $1/3$: $$ \lim_{t\to\infty}\frac{\kappa^1_1(t,x)}{\kappa(t,x)}=\lim_{t\to\infty}\frac{\kappa^2_2(t,x)}{\kappa(t,x)}=\lim_{t\to\infty}\frac{\kappa^3_3(t,x)}{\kappa(t,x)}=\frac{1}{3}, $$ where $\kappa=\kappa^i_i$ denotes the trace of the second fundamental form $\kappa_i^j$. \end{theorem} In particular, this shows that the spacetime approaches the {\sl de Sitter spacetime} asymptotically. To establish this global result, we begin with a continuation criterion, based on the same notation as in the previous section.
\begin{lemma}\label{criterion} Let $[1,T)$ be the maximal interval of existence of solutions to the system under consideration. If $\sup\{|\eta(t,x)| \ ; \ x\in[0,1],\ t\in[1,T)\}<+\infty$ then $T=+\infty$. \end{lemma} \begin{proof} It suffices to prove that under the assumption that $\eta$ is bounded on $[1,T)$, the same is true for $\eta_x$, $\eta_t$, $X$, $Y$, $X_x$, $Y_x$, $X_t$, and $Y_t$. First of all, by definition we have $X=e^\eta\sqrt{r}$ and $Y=e^\eta\sqrt{s}$ and it follows from the decay inequalities (\ref{bound}) that $X$ and $Y$ are bounded. Next, recalling that $\eta_x=-2\pi te^{\eta+\lambda}(r-s)$ and $\eta_t=\frac{1}{2t}+2\pi t(X^2+Y^2)-\frac{\Lambda t}{2}e^{2\eta}$, we find that $\eta_x$ and $\eta_t$ are bounded as well. Here, we have used the fact that $$ \lambda (t,x)=(\lambdab-\etab)(x)+\eta(t,x)-\ln t+\Lambda\int_1^t\tau e^{2\eta}(\tau,x)\ d\tau. $$ Taking the spatial derivative in this equation implies $$ (\lambda -\eta)_x(t,x)=(\lambdab-\etab)_x(x)+2\Lambda\int_1^t\tau (\eta_x e^{2\eta})(\tau,x)\ d\tau, $$ so that $v(t)$, defined in Lemma~\ref{A(t)}, is bounded. Rewriting (\ref{A-bound}) as $$ A(t)\leq A(1)+\int_1^t\big(v(\tau)A(\tau)+h(\tau)K(\tau)\big)\ d\tau, $$ and using the fact that $h$ and $K$ are bounded, Gronwall's lemma allows us to conclude that $A$, and hence $X_x$ and $Y_x$, are bounded. Bounds on $X_t$ and $Y_t$ then follow from (\ref{Euler}). \end{proof} We now prove that $\eta$ is bounded in order to conclude that $T=+\infty$. \begin{lemma}\label{global} The function $\eta$ satisfies $$ \sup\big\{|\eta(t,x)| \ ; \ x\in[0,1],\ t\in[1,T) \big\}<+\infty. $$ \end{lemma} \begin{proof} We can deduce from (\ref{eta}) that $e^{-2\eta(t,x)}\leq \frac{e^{-2\etab(x)}+\frac{\Lambda}{3}t^3}{t}$, i.e. \begin{equation}\label{e 2eta} e^{2\eta(t,x)}\geq \frac{t}{C+\frac{\Lambda}{3}t^3}, \end{equation} which provides a (negative, say) lower bound on $\eta$.
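For the reader's convenience, here is a sketch of the short computation behind this lower bound; it relies on the identity $(te^{-2\eta})_t=\Lambda t^2-4\pi t^2(r+s)$, which follows from (\ref{eta}) and is recalled in the asymptotics subsection below, together with $r,s\geq0$:

```latex
% Sketch of the derivation of the lower bound (\ref{e 2eta}): since r,s >= 0,
(t e^{-2\eta})_t = \Lambda t^2 - 4\pi t^2 (r+s) \leq \Lambda t^2,
% and integrating over [1,t] gives
t\, e^{-2\eta}(t,x) \leq e^{-2\etab(x)} + \frac{\Lambda}{3}(t^3-1) \leq C + \frac{\Lambda}{3}\,t^3.
```

Dividing by $t$ and inverting yields the bound (\ref{e 2eta}) stated above.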
Now, let us prove that \begin{equation}\label{e eta+lambda} \int_0^1(e^{\eta+\lambda}\rho)(t,x)\ dx\leq Ct^{-4}, \ t\in[1,T), \ x\in[0,1], \end{equation} which will eventually lead us to an upper bound for $\eta$. Using the equations (\ref{1.2}), (\ref{1.3}), and (\ref{1.6}), after some computations we find \begin{align*} \frac{d}{dt}\Bigg(\int_0^1(e^{\eta+\lambda}\rho)(t,x)\ dx\Bigg) =&\int_0^1e^{\eta+\lambda}\rho\big(-\frac{1}{t}-\Lambda t e^{2\eta}\big)\ dx-\int_0^1e^{2\eta}\big(j_x+2\eta_x j\big)\ dx\\&-\int_0^1\frac{2}{t}e^{\eta+\lambda}\mu\ dx. \end{align*} Since $\mu \geq 0$ and $$ \int_0^1e^{2\eta}\big(j_x+2\eta_x j\big)\ dx=\int_0^1\big(e^{2\eta} j\big)_x\ dx=0, $$ it follows that \begin{equation}\label{rho} \frac{d}{dt}\Bigg(\int_0^1(e^{\eta+\lambda}\rho)(t,x)\ dx\Bigg) \leq \frac{1}{t}\int_0^1e^{\eta+\lambda}\rho\big(-1-\Lambda t^2 e^{2\eta}\big)\ dx. \end{equation} Thanks to (\ref{e 2eta}), we have $$-\Lambda t^2e^{2\eta}\leq\frac{-\Lambda t^3}{C+\frac{\Lambda}{3}t^3}\leq-3+\frac{9C}{\Lambda}t^{-3},$$ so that (\ref{rho}) implies \begin{align*} \frac{d}{dt} \Bigg( t^4\int_0^1(e^{\eta+\lambda}\rho)(t,x)\ dx\Bigg) & =4t^3\int_0^1(e^{\eta+\lambda}\rho)(t,x)\ dx+t^4\frac{d}{dt}\Bigg(\int_0^1(e^{\eta+\lambda}\rho)(t,x)\ dx\Bigg) \\ &\leq4t^3\int_0^1(e^{\eta+\lambda}\rho)(t,x)\ dx+t^3 \int_0^1e^{\eta+\lambda}\rho\big(-4+\frac{9C}{\Lambda t^3}\big)\ dx\\ &\leq\frac{9C}{\Lambda t^4}\, t^4\int_0^1(e^{\eta+\lambda}\rho)(t,x)\ dx, \end{align*} from which we deduce (\ref{e eta+lambda}) by integration. We are now in a position to make use of the integral estimate (\ref{e eta+lambda}). 
Recalling that $\eta_x=-4\pi te^{\eta+\lambda}j$ and $0\leq j\leq \rho$, we control the spatial oscillation of $\eta$ at each time, as follows: \begin{align*} \Bigg| \eta(t,x)-\int_0^1\eta(t,\tau)\ d\tau \Bigg| &=\Bigg| \int_0^1\int_\tau^x\eta_x(t,\sigma)\ d\sigma\ d\tau \Bigg| \leq \int_0^1\int_0^1|\eta_x(t,\sigma)|\ d\sigma\ d\tau\\& \leq 4\pi t\int_0^1(e^{\eta+\lambda}j)(t,\sigma)\ d\sigma \leq4\pi t\int_0^1(e^{\eta+\lambda}\rho)(t,\sigma)\ d\sigma, \end{align*} and thanks to (\ref{e eta+lambda}), this implies that \begin{equation}\label{eta-int} \Big| \eta(t,x)-\int_0^1\eta(t,\tau) \, d\tau \Big| \leq Ct^{-3}, \qquad t\in[1,T), \ x\in[0,1]. \end{equation} We will have the desired upper bound on $\eta$, provided we can control its integral. Recalling that $\eta_t-\lambda_t=\frac{1}{t}-\Lambda t e^{2\eta}$ and using (\ref{e 2eta}) gives $$ \frac{\partial}{\partial t}e^{\eta-\lambda}=(\eta_t-\lambda_t)e^{\eta-\lambda}\leq e^{\eta-\lambda}\Bigg(\frac{1}{t}-\frac{\Lambda t^2}{C+\frac{\Lambda}{3}t^3}\Bigg) $$ and, after integration, \begin{equation}\label{eta-lambda} e^{\eta-\lambda}\leq C\, \frac{t}{C+\frac{\Lambda}{3}t^3} \leq Ct^{-2}, \qquad t\in[1,T), \ x\in[0,1]. \end{equation} Next, using (\ref{1.3}), (\ref{e 2eta}), (\ref{e eta+lambda}), and (\ref{eta-lambda}), we have \begin{align*} \int_0^1\eta(t,x)\ dx &=\int_0^1\etab(x)\ dx+\int_1^t\int_0^1\eta_t(s,x)\ dxds \\ &\leq C+\int_1^t\frac{1}{2s}\int_0^1\big(1+e^{2\eta}(8\pi s^2\rho-\Lambda s^2)\big)\ dxds \\ & \leq C+\frac{1}{2}\ln t+4\pi\int_1^t\int_0^1 s e^{\eta-\lambda}e^{\eta+\lambda}\rho\ dxds-\int_1^t\int_0^1 \frac{\Lambda}{2}s e^{2\eta}\ dxds, \end{align*} thus \begin{align*} \int_0^1\eta(t,x)\ dx &\leq C+\frac{1}{2}\ln t+C\int_1^ts^{-5}\ ds-\frac{1}{2}\int_1^t\frac{\Lambda s^2}{C+\frac{\Lambda}{3}s^3}\ ds\\&\leq C+\frac{1}{2}\ln \Bigg(\frac{\Lambda t}{C+\frac{\Lambda}{3}t^3}\Bigg). 
\end{align*} It then follows from (\ref{eta-int}) that $$ \eta(t,x)\leq C(1+t^{-3})+\frac{1}{2}\ln \Bigg(\frac{\Lambda t}{C+\frac{\Lambda}{3}t^3}\Bigg), $$ which leads to an upper bound for $\eta$, i.e. $$ e^{2\eta(t,x)}\leq Ct^{-2}, \qquad t\in[1,T), \ x\in[0,1], $$ and the proof is complete. \end{proof} \subsection*{Late-time asymptotics} We now determine the explicit leading asymptotic behavior of $r$, $s$, $\eta$, $\lambda$, $\lambda_t$, $\eta_t$ and $\eta_x$, and then check that each of the generalized Kasner exponents tends to $1/3$. We have proven that (see equation (\ref{bound})) \begin{equation}\label{r s} r=O(t^{-1}), \qquad s=O(t^{-1}), \end{equation} and equation (\ref{eta}) implies \[ (t e^{-2\eta})_t=\Lambda t^2-4\pi t^2(r+s). \] Integrating over $[1,t]$ and using (\ref{r s}), we obtain \[ \Big| t e^{-2\eta}-\frac{\Lambda}{3}t^3 \Big| \leq Ct^2, \] that is, $e^{-2\eta}=(\Lambda/3)t^2(1+O(t^{-1}))$, so that $$ e^{\eta}=\sqrt{\frac{3}{\Lambda}}t^{-1}(1+O(t^{-1})). $$ In view of $\eta_t= (1/2t) - (\Lambda/2) t e^{2\eta}+2\pi t e^{2\eta}(r+s)$, one has \begin{equation}\label{eta t decay} \eta_t=-\frac{1}{t}(1+O(t^{-1})), \end{equation} and, after integration over $[1,t]$, $\eta=-\ln t\,\big(1+O((\ln t)^{-1})\big)$. Since $\lambda_t=\eta_t+\Lambda t e^{2\eta}- (1/t)$, one also has \begin{equation}\label{lambda t decay} \lambda_t=\frac{1}{t}(1+O(t^{-1})), \end{equation} and integrating over $[1,t]$ gives $\lambda=\ln t\,\big(1+O((\ln t)^{-1})\big)$. This implies $e^\lambda=O(t)$, and recalling that $\eta_x=-2\pi t e^{\lambda+\eta}(r-s)$ one deduces that \begin{equation}\label{eta x decay} \eta_x=O(1).
\end{equation} Consider the generalized Kasner exponents, which take the following form for the metric under consideration (see for instance \cite{rein}): \begin{equation} \label{Kasner-expo} \frac{\kappa^1_1(t,x)}{\kappa(t,x)}=\frac{t\lambda_t}{t\lambda_t+2}, \qquad \frac{\kappa^2_2(t,x)}{\kappa(t,x)}=\frac{\kappa^3_3(t,x)}{\kappa(t,x)}=\frac{1}{t\lambda_t+2}, \end{equation} where $\kappa(t,x)=\kappa^i_i(t,x)$ is the trace of the second fundamental form $\kappa_{ij}(t,x)$ of the metric. It follows from (\ref{lambda t decay}) that as $t$ tends to $\infty$, each of these quantities tends to $1/3$, uniformly in $x$. \subsection*{Future geodesic completeness} The late-time asymptotic expansion above allows us to establish that the spacetime is future geodesically complete, as follows. Let $\tau\mapsto \big(\gamma^\alpha\big)(\tau)$ (with $t=\gamma^0(\tau)$) be a future directed causal geodesic defined on an interval $[1,\tau_+)$ with $\tau_+$ maximal, and normalized so that $\gamma^0(\tau_0)=t(\tau_0)=1$ for some $\tau_0\in[1,\tau_+)$. We are going to prove that $\tau_+=+\infty$. Since $\gamma$ is causal and future directed, we have $$ g_{\alpha\beta}\gamma_\tau^\alpha\gamma_\tau^\beta=-m^2, \qquad \gamma_\tau^0>0, $$ where $m=0$ if $\gamma$ is null, and $m\neq0$ if $\gamma$ is timelike. Since $\frac{dt}{d\tau}=\gamma^0_\tau>0$, the geodesic can be parametrized by the coordinate time $t$. With respect to this coordinate time the geodesic exists on the whole interval $[1,+\infty)$, since on each bounded interval of $t$ the Christoffel symbols are bounded and the right-hand sides of the geodesic equation (written in coordinate time) are linearly bounded in $\gamma^1_\tau$, $\gamma^2_\tau$, $\gamma^3_\tau$. Along the geodesic we define $$ w:=e^\lambda \gamma^1_\tau, \qquad F:=t^4 \, \Big((\gamma^2_\tau)^2+(\gamma^3_\tau)^2\Big).
$$ Using the geodesic equation it is easily checked that $$ \frac{dw}{d\tau}=-\lambda_t\gamma^0_\tau w-e^{2\eta-\lambda}\eta_x(\gamma_\tau^0)^2, \qquad \frac{dF}{d\tau}=0. $$ The relation between coordinate time and proper time is then given by \begin{equation}\label{proper} \frac{d\tau}{dt}=(\gamma^0_\tau)^{-1}=\frac{e^{\eta}}{\sqrt{m^2+w^2+F/t^2}}. \end{equation} We will now bound $d\tau/dt$ from below by a function whose integral over $[1,+\infty)$ diverges; to this end, an estimate on $w$ as a function of the coordinate time is needed. Assume that $w(t)>0$ for some $t\geq 1$. Then, as long as $w(s)>0$, we have \begin{align}\label{d w} \frac{dw}{ds}&=-\lambda_t w-e^{\eta-\lambda}\eta_x\sqrt{m^2+w^2+F/s^2}\notag \\ &=4\pi se^{2\eta}(j\sqrt{m^2+w^2+F/s^2}-\rho w)+\frac{1}{2s}w-\frac{\Lambda}{2}se^{2\eta}w. \end{align} Using the elementary inequality $\sqrt{a+b}\leq\sqrt{a}+\sqrt{b}$ and equation (\ref{e 2eta}), we obtain \begin{align*} \frac{dw}{ds}\leq4\pi se^{2\eta}(|j|-\rho)w+\frac{1-\Lambda s^2 e^{2\eta}}{2s}w+4\pi se^{2\eta}|j|\sqrt{m^2+F/s^2}. \end{align*} We can drop the first two terms, which are negative since $|j|\leq\rho$ and $$1-\Lambda s^2 e^{2\eta}\leq\frac{C}{\Lambda}s^{-3}-2<0, \qquad s \text{ sufficiently large},$$ and we estimate the third term by $C s^{-2}$ (since $|j|\leq Cs^{-1}$ and $e^{2\eta}\leq Cs^{-2}$). It then follows that \begin{equation}\label{dw ds} \frac{dw}{ds}\leq Cs^{-2}. \end{equation} Let $t_0\in[1,t)$ be the smallest time such that $w(s)>0$ for all $s\in[t_0,t)$.
Then integrating (\ref{dw ds}) over $[t_0,t]$ gives $$w(t)\leq C.$$ For the case $w(t)<0$, it follows from (\ref{d w}) that, as long as $w(s)<0$, \begin{align*} \frac{dw}{ds}&\geq4\pi se^{2\eta}(-\rho\sqrt{m^2+w^2+F/s^2}-\rho w)+\frac{1-\Lambda s^2e^{2\eta}}{2s}w\\ &\geq-4\pi se^{2\eta}\rho\sqrt{m^2+F/s^2}+8\pi se^{2\eta}\rho w\\ &\geq Cs^{-2}(-1+w), \end{align*} where we have used the fact that $|j|\leq \rho$, that $\frac{1-\Lambda s^2e^{2\eta}}{2s}<0$ for large $s$, and the elementary inequality $\sqrt{a+b}\leq\sqrt{a}+\sqrt{b}$. Therefore we have \begin{equation}\label{dw1 ds} \frac{1}{1-w}\frac{d(1-w)}{ds}\leq C s^{-2}. \end{equation} Let $t_1\in[1,t)$ be the smallest time such that $w(s)<0$ for all $s\in[t_1,t)$. Then integrating (\ref{dw1 ds}) over $[t_1,t]$ implies $$-w(t)\leq C.$$ In either case, we arrive at $$ |w(t)|\leq C, \qquad t\geq 1. $$ On the other hand, equation (\ref{e 2eta}) implies that $$ e^\eta\geq C t^{-1}, \qquad t\geq 1, $$ so we then deduce from (\ref{proper}) that $$ \frac{d\tau}{dt}\geq\frac{Ct^{-1}}{\sqrt{m^2+C+F}}, $$ and since the integral of the right-hand side over $[1,+\infty)$ diverges, it follows that $\tau_+=+\infty$ and the proof of future geodesic completeness is complete. \section*{Acknowledgments} This work was completed when the first author (PLF) gave a short course at the thirteenth GIRAGA seminar held at the University of Yaound\'e in September 2010; he is particularly grateful to D. B\'ekoll\'e and the organizing committee for their invitation and warm welcome. PLF was supported by the Centre National de la Recherche Scientifique and the Agence Nationale de la Recherche (ANR) through Grant 06-2-134423: ``Mathematical Methods in General Relativity''.
\section{Introduction} We consider the critical Lane-Emden system \begin{equation}\label{eq-LEs} \begin{cases} -\Delta u = v^p &\text{in } \R^n,\\ -\Delta v = u^q &\text{in } \R^n,\\ u,v >0\ &\text{in } \R^n \end{cases} \end{equation} where $n \ge 3$ and $p,q>0$ and $(p,q)$ belongs to the {\em critical hyperbola} \begin{equation}\label{eq-hyp} \frac{1}{p+1} + \frac{1}{q+1} = \frac{n-2}{n}. \end{equation} For $s \ge 1$, let $\mcd^{2,s}_0(\R^n)$ be the completion of $C^{\infty}_c(\R^n)$ with respect to the norm $\|\Delta \cdot\|_{L^s(\R^n)}$. In \cite[Corollary I.2]{Li}, Lions found a positive ground state \[(U,V) \in \mcd^{2, {p+1 \over p}}_0(\R^n) \times \mcd^{2, {q+1 \over q}}_0(\R^n)\] of \eqref{eq-LEs}, by transforming it into an equivalent scalar equation \begin{equation}\label{eq-LEsc} (-\Delta) \(|\Delta u|^{{1 \over p}-1} (-\Delta u)\) = |u|^{q-1}u \quad \text{in } \R^n \end{equation} and employing a concentration-compactness argument to the associated minimization problem \begin{equation}\label{eq-K_pq} K_{p,q} = \inf \left\{\|\Delta u\|_{L^{p+1 \over p}(\R^n)}: \|u\|_{L^{q+1}(\R^n)} = 1 \right\} = \inf_{u \in \mcd^{2, {p+1 \over p}}_0(\R^n) \setminus \{0\}} \frac{\int_{\R^n} |\Delta u|^{p+1 \over p}}{(\int_{\R^n} |u|^{q+1})^{\frac{p+1}{p(q+1)}}}. \end{equation} As shown by Alvino et al. \cite{ALT}, it is always radially symmetric and decreasing in $r = |x|$, after a suitable translation. Moreover, Hulshof and Van der Vorst \cite{HV} proved that a positive radial solution of \eqref{eq-LEs} is unique up to scalings. \medskip The present paper deals with non-degeneracy of ground state solutions $(U,V)$ to \eqref{eq-K_pq} (for which we may assume that $U(0)=1$ without loss of generality). The invariance of the system under scaling and translations leads to natural solutions of the linearized system around the radial solution $(U,V)$. 
More precisely, the functions $$ (U_{\delta,\xi}(x),V_{\delta,\xi}(x)) := \(\delta^{2(p+1) \over pq-1} U(\delta(x-\xi)), \delta^{2(q+1) \over pq-1} V(\delta(x-\xi))\) \quad \hbox{for any}\ \delta>0,\ \xi\in\R^n$$ are solutions to system \eqref{eq-LEs}. If we differentiate the system $$\begin{cases} -\Delta U_{\delta,\xi} = V^p_{\delta,\xi} &\text{in } \R^n,\\ -\Delta V_{\delta,\xi} = U^q_{\delta,\xi} &\text{in } \R^n \end{cases} $$ with respect to the parameters at $\delta=1$ and $\xi=0,$ we immediately see that the $(n+1)$ linearly independent functions \[Z_0(x):= \(x\cdot\nabla U + \frac {2(p+1)}{pq-1} U, x\cdot\nabla V + \frac {2(q+1)}{pq-1} V\) \quad \hbox{and} \quad Z_i(x):= \(\frac{\pa U}{\pa x_i}, \frac{\pa V}{\pa x_i}\)\] for $i=1,\dots,n$ solve the linear system \begin{equation}\label{lin}\begin{cases} -\Delta \phi = pV^{p-1}\psi &\text{in } \R^n,\\ -\Delta \psi = qU^{q-1}\phi &\text{in } \R^n.\\ \end{cases} \end{equation} We say that $(U,V)$ is {\em non-degenerate} if all weak solutions to the linear system \eqref{lin} such that $\lim_{|x|\to\infty} (\phi(x),\psi(x)) = (0,0)$ are linear combinations of $Z_0,Z_1,\dots,Z_n.$ \medskip The non-degeneracy of the solutions of system \eqref{eq-LEs} is a key ingredient in understanding the blow-up phenomena of solutions to the Lane-Emden systems with critical growth. Therefore, it is quite natural to ask the following question: $$\hbox {\em (Q) \quad Are ground states $(U,V)$ non-degenerate?}$$ Here we address the above question and give a positive answer in a perturbative setting. It would be extremely interesting to prove or to disprove non-degeneracy of ground state solutions when $(p,q)$ ranges over the whole critical hyperbola \eqref{eq-hyp}.
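For completeness, here is a sketch of the elementary exponent bookkeeping behind this scaling invariance; the second equation of the system is checked in exactly the same way. Writing $a=\frac{2(p+1)}{pq-1}$ and $b=\frac{2(q+1)}{pq-1}$ for the two scaling exponents:

```latex
% Sketch: verification that (U_{\delta,\xi}, V_{\delta,\xi}) solves the first
% equation of the system, with a = 2(p+1)/(pq-1) and b = 2(q+1)/(pq-1).
-\Delta U_{\delta,\xi}(x)
  = \delta^{a+2}\,(-\Delta U)(\delta(x-\xi))
  = \delta^{a+2}\,V^p(\delta(x-\xi))
  = V^p_{\delta,\xi}(x),
% since a+2 = [2(p+1)+2(pq-1)]/(pq-1) = 2p(q+1)/(pq-1) = pb.
```

Differentiating $(U_{\delta,\xi},V_{\delta,\xi})$ in $\delta$ at $\delta=1$ (resp. in $\xi_i$ at $\xi=0$) then produces $Z_0$ (resp. $Z_i$).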
\medskip Our main result is: \begin{thm}\label{main} There is a small number $\ep > 0$ such that if either $|p-1| \le \ep$ (with $n\ge 5$) or $|p-{n+2\over n-2}| \le \ep$ (with $n\ge3$), then the unique positive solution $(U,V)$ of \eqref{eq-LEs} (with $U(0) = 1$) is non-degenerate. \end{thm} Theorem \ref{main} follows immediately from Corollaries \ref{cor1} and \ref{cor2}, proved in Sections \ref{sec_p1} and \ref{sec_pn}, respectively. The idea of the proof stems from the simple fact that if $p$ is close to $1$ or close to ${n+2\over n-2}$, system \eqref{eq-LEs} is formally close to a single equation whose solutions are non-degenerate. In particular, if $p$ is close to 1, then $q$ is close to $ {n+4\over n-4} $ and system \eqref{eq-LEs} can be regarded as a perturbation of the Paneitz-Branson equation \[(-\Delta)^2u=u^{n+4\over n-4} \quad \text{in } \R^n,\] while if $p$ is close to ${n+2 \over n-2}$, then $q$ is also close to ${n+2 \over n-2}$ and system \eqref{eq-LEs} becomes a perturbation of the Yamabe equation \[-\Delta u=u^{n+2\over n-2} \quad \text{in } \R^n.\] Therefore, to prove our result, we will follow a perturbation argument, which has been successfully applied to various problems such as the pseudo-relativistic Hartree equations \cite{Le}, the fractional Schr\"odinger equations \cite{FV} and the Choquard equations \cite{Xi}. The most challenging part of the proof is to show rigorously that the linearized system \eqref{lin} is close to the corresponding linearized (single) equation, because sophisticated uniform estimates in $p$ and $q$ of the decay of ground state solutions are required. \medskip \noindent \textbf{Notations.} \begin{itemize} \item[-] For $a \in \R$, let $a_+ = \max\{a,0\}$. \item[-] For any $x \in \R^n$ and $R > 0$, let $B_R(x) = \{y \in \R^n: |y-x| < R\}$. \item[-] For a set $D \subset \R^n$, let $\chi_D$ be the characteristic function of $D$.
\item[-] The letters $C$ and $c$ denote positive numbers independent of $p$ that may vary from line to line and inside the same line. \end{itemize} \section{Non-degeneracy of the Lane-Emden system near $p = 1$}\label{sec_p1} The main results of this section are Theorem \ref{thm-deg} and Corollary \ref{cor1}. To prove them, we will use the following well-known uniqueness and non-degeneracy results about the fourth-order critical equation \begin{equation}\label{eq-LEbi} \begin{cases} (-\Delta)^2 u = u^{n+4 \over n-4} &\text{in } \R^n,\\ u > 0 &\text{in } \R^n \end{cases} \end{equation} for $n \ge 5$. \begin{prop}\label{prop-lin-bi} \textnormal{(1) (uniqueness)} Any smooth solution of \eqref{eq-LEbi} is expressed as \begin{equation}\label{eq-w_lam} w_{\delta,\xi}(x) := c_n \(\frac{\delta}{\delta^2+|x-\xi|^2}\)^{n-4 \over 2} \end{equation} for some $\delta > 0$, $\xi \in \R^n$ and $c_n = [n(n-4)(n-2)(n+2)]^{-{n-4 \over 8}}$. \medskip \noindent \textnormal{(2) (non-degeneracy)} The solution space of the linear equation \begin{equation}\label{eq-lin-bi} (-\Delta)^2 \phi = \(\frac{n+4}{n-4}\) u^{8 \over n-4} \phi \quad \text{in } \R^n, \quad \phi \in \mcd^{2,2}_0(\R^n) \end{equation} is spanned by \[\frac{\pa u}{\pa x_1},\, \cdots,\, \frac{\pa u}{\pa x_n} \quad \text{and} \quad x \cdot \nabla u + \(\frac{n-4}{2}\)u.\] \end{prop} \begin{proof} Results (1) and (2) have been proved by Lin \cite{Lin} and Lu and Wei \cite{LW}, respectively. \end{proof} \subsection{A compactness result}\label{subsec-cpt} The following is our main result in this subsection. \begin{prop}\label{prop-cpt} Suppose that $n \ge 5$. Let $\{p_k\}_{k=1}^{\infty}$ be a sequence such that $p_k \in (\frac{2}{n-2}, \frac{n}{n-2})$ for all $k \in \N$ and $p_k \to 1$ as $k \to \infty$. Also, let $\{(U_{p_k}, V_{p_k})\}_{k=1}^{\infty}$ be a sequence of the unique positive ground states of \eqref{eq-LEs} with $p = p_k$ such that $U_{p_k}(0) = 1$.
Then we have that \[(U_{p_k}, V_{p_k}) \to (U_1, V_1) \quad \text{in } \mcd^{2,2}_0(\R^n) \times \mcd^{2,{2n \over n+4}}_0(\R^n) \quad \text{as } k \to \infty.\] Here $U_1$ is the unique positive solution of \eqref{eq-LEbi} with $U_1(0) = 1$ and $V_1 = -\Delta U_1$ in $\R^n$. In other words, $(U_1, V_1) = (w_{a_n,0}, -\Delta w_{a_n,0})$ in $\R^n$ where $a_n := c_n^{2 \over n-4}$. \end{prop} As we will see, the proofs of the above proposition and Theorem \ref{thm-deg} require a uniform upper bound on the $(U_{p_k}, V_{p_k})$'s. It is useful to recall the asymptotic profile of ground state solutions. In \cite{HV}, it has been shown that there exists a pair of positive constants $(\alpha_p, \beta_p)$ such that \begin{equation}\label{eq-dec} \lim_{r \to \infty} r^{n-2}\, v(r) = \beta_p\ \hbox{and}\ \begin{cases} \lim\limits_{r \to \infty} r^{n-2}\, u(r) = \alpha_p &\text{if } p \in (\frac{n}{n-2}, \frac{n+2}{n-2}],\\ \lim\limits_{r \to \infty} \dfrac{r^{n-2}}{\log r}\, u(r) = \alpha_p &\text{if } p = \frac{n}{n-2} ,\\ \lim\limits_{r \to \infty} r^{p(n-2)-2}\, u(r) = \alpha_p &\text{if } p \in (\frac{2}{n-2}, \frac{n}{n-2}). \end{cases} \end{equation} Even though \eqref{eq-dec} depicts the precise asymptotic behavior of $(U_{p_k}, V_{p_k})$ for each $k \in \N$, it does not readily imply the uniform bound, because the arguments in \cite{HV} do not describe how the sequence $\{(\alpha_{p_k}, \beta_{p_k})\}_{k=1}^{\infty}$ behaves. In the next two lemmas, we will obtain such a uniform bound by using potential theory. It is not sharp, but it is enough for our purpose. Without loss of generality, we can assume that $|p_k - 1| \le \ep_0$ for all $k \in \N$ and a small fixed number $\ep_0 > 0$. Let $q_k$ be the number $q$ determined by \eqref{eq-hyp} with $p = p_k$.
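For later reference, we record the elementary computation (a sketch, obtained by solving \eqref{eq-hyp} for $q$) that makes $q_k$ explicit:

```latex
% Solving the critical hyperbola (eq-hyp) for q:
\frac{1}{q+1} = \frac{n-2}{n} - \frac{1}{p+1} = \frac{(n-2)p-2}{n(p+1)},
\qquad\text{so}\qquad
q+1 = \frac{n(p+1)}{(n-2)p-2}.
% In particular: q > 0 exactly when p > 2/(n-2);
% q -> (n+4)/(n-4) as p -> 1; and q = p when p = (n+2)/(n-2).
```

In particular, the restriction $p_k \in (\frac{2}{n-2}, \frac{n}{n-2})$ in Proposition \ref{prop-cpt} guarantees $q_k > 0$, and $p_k \to 1$ forces $q_k \to \frac{n+4}{n-4}$.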
\begin{lemma}\label{lem-cpt-11} There exists a constant $C > 0$ depending only on $n$ and $\ep_0$ such that \begin{equation}\label{eq-cpt-11} U_{p_k}(x) \le 1 \quad \text{and} \quad V_{p_k}(x) \le C \quad \text{for all } x \in \R^n \text{ and } k \in \N \end{equation} provided $\ep_0 > 0$ is small enough. \end{lemma} \begin{proof} We present the proof by dividing it into two steps. \medskip \noindent \textsc{Step 1: Uniform boundedness of $K_{p_k, q_k}$.} Using $w_{a_n,0}$ as a test function in the minimization problem \eqref{eq-K_pq}, we obtain \begin{equation}\label{eq-K_pq-1} K_{p_k, q_k} \le \frac{\int_{\R^n} |\Delta w_{a_n,0}|^{p_k+1 \over p_k}}{(\int_{\R^n} w_{a_n,0}^{q_k+1})^{\frac{p_k+1}{p_k(q_k+1)}}}. \end{equation} Exploiting the explicit form of $w_{a_n,0}$ and applying the dominated convergence theorem on the right-hand side, we easily deduce that $K_{p_k, q_k}$ is uniformly bounded. In particular, \begin{equation}\label{eq-K_pq-2} K_{p_k, q_k} = \(\int_{\R^n} U_{p_k}^{q_k+1}\)^{1-\frac{p_k+1}{p_k(q_k+1)}} \le C \quad \text{or } \quad \int_{\R^n} U_{p_k}^{q_k+1} \le C. \end{equation} The second inequality holds since $q_k \to \frac{n+4}{n-4}$ and so $\frac{p_k+1}{p_k(q_k+1)} \to \frac{n-4}{n} < 1$ as $k \to \infty$. \medskip \noindent \textsc{Step 2: Uniform boundedness of $(U_{p_k}, V_{p_k})$.} Because $U_{p_k}(0) = 1$ and $U_{p_k}$ is decreasing in $r$, it holds that $\|U_{p_k}\|_{L^{\infty}(\R^n)} \le 1$ for all $k \in \N$. By \eqref{eq-LEs}, Green's representation formula, H\"older's inequality and Step 1, we have \begin{align*} V_{p_k}(0) &= \ga_n \int_{B_1(0)} \frac{1}{|y|^{n-2}} U_{p_k}^{q_k}(y) dy + \ga_n \int_{\R^n \setminus B_1(0)} \frac{1}{|y|^{n-2}} U_{p_k}^{q_k}(y) dy \\ &\le \ga_n \int_{B_1(0)} \frac{1}{|y|^{n-2}} dy + \ga_n \(\int_{\R^n \setminus B_1(0)} \frac{1}{|y|^{(n-2)(q_k+1)}} dy\)^{1 \over q_k+1} \(\int_{\R^n \setminus B_1(0)} U_{p_k}^{q_k+1}\)^{q_k \over q_k+1} \\ &\le C \end{align*} where $\ga_n := (n(n-2)|B_1(0)|)^{-1}$.
The last inequality holds because of \eqref{eq-K_pq-2} and the relation $(n-2) \frac{2n}{n-4} > n$. As before, since $V_{p_k}$ is decreasing in $r$, we see that $\|V_{p_k}\|_{L^{\infty}(\R^n)} \le C$ for all $k \in \N$. \end{proof} \begin{lemma}\label{lem-cpt-12} Suppose that an arbitrarily small number $\eta_0 > 0$ is given. Reducing the size of $\ep_0$ if necessary, one can find a constant $C > 0$ depending only on $n$, $\ep_0$ and $\eta_0$ such that \[U_{p_k}(x) \le \frac{C}{1+|x|^{n-4-\eta_0}} \quad \text{and} \quad V_{p_k}(x) \le \frac{C}{1+|x|^{n-2}}\] for all $x \in \R^n$ and $k \in \N$. \end{lemma} \begin{proof} The arguments used here are motivated by the ones in \cite[Section 4]{CK}. Our proof is relatively simple since we make use of good qualitative properties of the ground states $(U_{p_k}, V_{p_k})$. We present the proof by dividing it into two steps. \medskip \noindent \textsc{Step 1: Tail estimate for $U_{p_k}$.} For any fixed number $R > 0$, we define the functions \[U_{p_ki} = \chi_{B_R(0)} U_{p_k} \quad \text{and} \quad U_{p_ko} = \chi_{\R^n \setminus B_R(0)} U_{p_k} \quad \text{in } \R^n.\] We assert that for any given number $\zeta > 0$, there exists a number $R > 0$ depending only on $n$, $\ep_0$ and $\zeta$ such that \begin{equation}\label{eq-dec-3} \int_{\R^n \setminus B_R(0)} U_{p_k}^{q_k+1} = \int_{\R^n} U_{p_ko}^{q_k+1} \le \zeta \end{equation} for all $k \in \N$. By Lemma \ref{lem-cpt-11} and elliptic regularity, there exists a pair $(\wtu_1, \wtv_1) \in (C^2(\R^n))^2$ of nonnegative radial functions such that \begin{equation}\label{eq-conv} (U_{p_k}, V_{p_k}) \to (\wtu_1, \wtv_1) \quad \text{in } (C^2_{\text{loc}}(\R^n))^2 \quad \text{as } k \to \infty \end{equation} along a subsequence. In particular, $(\wtu_1, \wtv_1)$ is a classical solution of \eqref{eq-LEs} with $(p,q) = (1,\frac{n+4}{n-4})$. Also, since $\wtu_1$ is superharmonic and $\wtu_1(0) = 1$, the maximum principle implies that $\wtu_1 > 0$ in $\R^n$.
In view of Proposition \ref{prop-lin-bi} (1), it holds that $\wtu_1 = w_{a_n,0}$ in $\R^n$ and the convergence in \eqref{eq-conv} is valid for the entire sequence (not just for a subsequence). Summing up, \begin{equation}\label{eq-conv-2} (U_{p_k}, V_{p_k}) \to (U_1, V_1) \quad \text{in } (C^2_{\text{loc}}(\R^n))^2 \quad \text{as } k \to \infty \end{equation} where we write $(U_1, V_1) = (w_{a_n,0}, -\Delta w_{a_n,0})$ as in the statement of Proposition \ref{prop-cpt}. Taking the limit $k \to \infty$ on both sides of \eqref{eq-K_pq-1}, and employing Fatou's lemma, \eqref{eq-K_pq-2} and \eqref{eq-conv-2}, we obtain \begin{equation}\label{eq-conv-3} \begin{aligned} \int_{\R^n} w_{a_n,0}^{\frac{2n}{n-4}} &\le \liminf_{k \to \infty} \int_{\R^n} U_{p_k}^{q_k+1} \le \limsup_{k \to \infty} \int_{\R^n} U_{p_k}^{q_k+1} \\ &= \limsup_{k \to \infty} K_{p_k, q_k}^{\(1-\frac{p_k+1}{p_k(q_k+1)}\)^{-1}} \le \left[ \frac{\int_{\R^n} |\Delta w_{a_n,0}|^2}{(\int_{\R^n} w_{a_n,0}^{2n \over n-4})^{n-4 \over n}} \right]^{n \over 4} = \int_{\R^n} w_{a_n,0}^{\frac{2n}{n-4}}. \end{aligned} \end{equation} Therefore, all the inequalities must be equalities. Consequently, applying \eqref{eq-conv-2} and \eqref{eq-conv-3}, we can select $R > 0$ so large that \[\int_{\R^n} U_{p_ko}^{q_k+1} = \int_{\R^n} U_{p_k}^{q_k+1} - \int_{\R^n} U_{p_ki}^{q_k+1} \to \int_{\R^n} w_{a_n,0}^{\frac{2n}{n-4}} - \int_{B_R(0)} w_{a_n,0}^{\frac{2n}{n-4}} \le \frac{\zeta}{2} \quad \text{as } k \to \infty.\] This proves the assertion \eqref{eq-dec-3}.
\medskip \noindent \textsc{Step 2: Completion of the proof.} By Green's representation formula, it holds that \begin{equation}\label{eq-V_po} \begin{aligned} \|V_{p_k}\|_{L^{a_1}(\R^n)} &= \ga_n \left\| |\cdot|^{-(n-2)} \ast U_{p_k}^{q_k} \right\|_{L^{a_1}(\R^n)} \\ &\le \ga_n \(\left\| |\cdot|^{-(n-2)} \ast U_{p_ko}^{q_k} \right\|_{L^{a_1}(\R^n)} + \left\| |\cdot|^{-(n-2)} \ast U_{p_ki}^{q_k} \right\|_{L^{a_1}(\R^n)}\) \end{aligned} \end{equation} provided that the rightmost side is finite. Its finiteness is guaranteed for $a_1 > \frac{n}{n-2}$, since \eqref{eq-dec} and Lemma \ref{lem-cpt-11} imply \begin{equation}\label{eq-dec-2} (|\cdot|^{-(n-2)} \ast U_{p_ko}^{q_k})(x) \le \frac{C\alpha_{p_k}}{1+|x|^{n-2}} \quad \text{and} \quad (|\cdot|^{-(n-2)} \ast U_{p_ki}^{q_k})(x) \le \frac{C}{1+|x|^{n-2}} \end{equation} for all $x \in \R^n$ and some constant $C > 0$ depending only on $n$, $\ep_0$ and $R$.\footnote{The inequalities in \eqref{eq-dec-2} are well-known and can be proved as in the proof of \cite[Lemma B.2]{WY}. 
We note that the right-hand side of the first inequality in \eqref{eq-dec-2} depends on $k \in \N$, while that of the second one does not.} On the other hand, the Hardy-Littlewood-Sobolev inequality, H\"older's inequality and \eqref{eq-dec-3} yield \begin{equation}\label{eq-V_po-2} \begin{aligned} \left\| |\cdot|^{-(n-2)} \ast U_{p_ko}^{q_k} \right\|_{L^{a_1}(\R^n)} &\le \|U_{p_ko}^{q_k}\|_{L^{a_2}(\R^n)} \le \Big\|U_{p_ko}^{p_kq_k-1 \over p_k}\Big\|_{L^{p_k(q_k+1) \over p_kq_k-1}(\R^n)} \Big\|U_{p_ko}^{1 \over p_k}\Big\|_{L^{a_3}(\R^n)} \\ &\le \|U_{p_ko}\|^{p_kq_k-1 \over p_k}_{L^{q_k+1}(\R^n)} \|U_{p_k}\|^{1 \over p_k}_{L^{a_3 \over p_k}(\R^n)} \\ &\le C\zeta \||\cdot|^{-(n-2)} \ast V_{p_k}^{p_k}\|^{1 \over p_k}_{L^{a_3 \over p_k}(\R^n)} \\ &\le C\zeta \|V_{p_k}^{p_k}\|^{1 \over p_k}_{L^{a_4}(\R^n)} = C\zeta \|V_{p_k}\|_{L^{a_4p_k}(\R^n)} \end{aligned} \end{equation} where \[\frac{1}{a_2} = \frac{1}{a_1} + \frac{2}{n}, \quad \frac{1}{a_3} + \frac{p_kq_k-1}{p_k(q_k+1)} = \frac{1}{a_2}, \quad \frac{1}{a_4} = \frac{p_k}{a_3} + \frac{2}{n}\] and $\zeta > 0$ is an arbitrarily small number. If we take $\ep_0 > 0$ sufficiently small, then \[\min\left\{a_2,\, \frac{p_k(q_k+1)}{p_kq_k-1},\, a_3,\, \frac{a_3}{p_k},\, a_4\right\} > 1.\] Furthermore, we infer from \eqref{eq-hyp} and \eqref{eq-dec} that $a_1 = a_4p_k$ and all the quantities in \eqref{eq-V_po-2} are finite. Plugging \eqref{eq-V_po-2} into \eqref{eq-V_po} and choosing any $\eta_0' > 0$ small, we find a constant $C > 0$ depending only on $n$, $\ep_0$, $R$ and $\eta_0'$ such that \[\|V_{p_k}\|_{L^{{n \over n-2} + \eta_0'}(\R^n)} \le C.\] From the radial symmetry and the decay property of $V_{p_k}$, we deduce \[V_{p_k}^{{n \over n-2} + \eta_0'}(r)\, r^n \le C\int_{B_r(0)} V_{p_k}^{{n \over n-2} + \eta_0'} \le C\] where $r = |x| \ge 1$. By combining this with \eqref{eq-cpt-11}, we see that \[V_{p_k}(x) \le \frac{C}{1+|x|^{n-2-\eta_0'}}\] for all $x \in \R^n$ and $k \in \N$. 
As a consequence, we obtain \[U_{p_k}(x) = \ga_n (|\cdot|^{-(n-2)} \ast V_{p_k}^{p_k})(x) \le C \(|\cdot|^{-(n-2)} \ast \frac{1}{1+|\cdot|^{n-2-\eta_0}}\)(x) \le \frac{C}{1+|x|^{n-4-\eta_0}},\] and so \[V_{p_k}(x) = \ga_n (|\cdot|^{-(n-2)} \ast U_{p_k}^{q_k})(x) \le C \(|\cdot|^{-(n-2)} \ast \frac{1}{1+|\cdot|^{n+4-\eta_0''}}\)(x) \le \frac{C}{1+|x|^{n-2}}\] for all $x \in \R^n$ for small $\eta_0,\, \eta_0'' > 0$. This completes the proof. \end{proof} \begin{proof}[Completion of the proof of Proposition \ref{prop-cpt}] By Lemma \ref{lem-cpt-12}, there exists a constant $C > 0$ depending only on $n$ and $\ep_0$ such that \[\|U_{p_k}\|_{L^{\frac{2n}{n+4} \cdot q_k}(\R^n)} + \|V_{p_k}\|_{L^{2p_k}(\R^n)} \le C.\] This together with \eqref{eq-LEs} implies uniform boundedness of the sequence $\{(U_{p_k}, V_{p_k})\}_{k=1}^{\infty}$ in the space $\mcd^{2,2}_0(\R^n) \times \mcd^{2,\frac{2n}{n+4}}_0(\R^n)$. In addition, the uniform decay estimate of $\{(U_{p_k}, V_{p_k})\}_{k=1}^{\infty}$ presented in Lemma \ref{lem-cpt-12} leads to \[\|\Delta U_{p_k}\|_{L^2(\R^n)} = \|V_{p_k}\|_{L^{2p_k}(\R^n)}^{p_k} \to \|V_1\|_{L^2(\R^n)} = \|\Delta U_1\|_{L^2(\R^n)}\] and \[\|\Delta V_{p_k}\|_{L^{2n \over n+4}(\R^n)} = \|U_{p_k}\|_{L^{{2n \over n+4} \cdot q_k}(\R^n)}^{q_k} \to \|U_1\|_{L^{2n \over n-4}(\R^n)}^{n+4 \over n-4} = \|\Delta V_1\|_{L^{2n \over n+4}(\R^n)}\] as $k \to \infty$. As a result, we can invoke \eqref{eq-conv-2} to conclude that \[(U_{p_k}, V_{p_k}) \to (U_1, V_1) \quad \text{in } \mcd^{2,2}_0(\R^n) \times \mcd^{2,{2n \over n+4}}_0(\R^n) \quad \text{as } k \to \infty,\] finishing the proof. \end{proof} We end this subsection by providing two estimates which will be used later. \begin{lemma}\label{lem-cpt-13} Suppose that an arbitrarily small number $\eta_2 > 0$ is given. 
Reducing the size of $\ep_0$ if necessary, one can find a constant $C > 0$ depending only on $n$, $\ep_0$ and $\eta_2$ such that \[U_{p_k}(x) \ge \frac{C}{1+|x|^{n-4+\eta_2}} \quad \text{and} \quad V_{p_k}(x) \ge \frac{C}{1+|x|^{n-2}}\] for all $x \in \R^n$ and $k \in \N$. \end{lemma} \begin{proof} According to \eqref{eq-conv-2}, there exists $C > 0$ depending only on $n$ such that \[U_{p_k}^{q_k}(x) \ge C \quad \text{for all } x \in B_1(0).\] Applying the argument in the proof of \cite[Proposition 2]{Vi}, we obtain \[V_{p_k}(x) \ge \int_{B_1(0)} \frac{\ga_n}{|x-y|^{n-2}} U_{p_k}^{q_k}(y) dy \ge \frac{C}{1+|x|^{n-2}} \int_{B_1(0)} U_{p_k}^{q_k}(y) dy \ge \frac{C}{1+|x|^{n-2}}\] and \[U_{p_k}(x) \ge \int_{B_{\frac{|x|}{2}}(x)} \frac{\ga_n }{|x-y|^{n-2}} V_{p_k}^{p_k}(y) dy \ge \frac{C}{|x|^{n-2}} \int_{B_{\frac{|x|}{2}}(x)} \frac{1}{1+|y|^{n-2+\eta_2}} dy \ge \frac{C}{1+|x|^{n-4+\eta_2}}\] for $x \in \R^n \setminus B_1(0)$. \end{proof} \begin{cor}\label{cor-cpt-12} Suppose that an arbitrarily small number $\eta_1 > 0$ is given. Reducing the size of $\ep_0$ if necessary, one can find a constant $C > 0$ depending only on $n$, $\ep_0$ and $\eta_1$ such that \[|\nabla^l U_{p_k}(x)| \le \frac{C}{1+|x|^{n-4+l-\eta_1}} \quad \text{and} \quad |\nabla^l V_{p_k}(x)| \le \frac{C}{1+|x|^{n-2+l}}\] for all $x \in \R^n$, $k \in \N$ and $l = 1, 2, 3$. \end{cor} \begin{proof} It immediately follows from Lemmas \ref{lem-cpt-12} and \ref{lem-cpt-13}, and the standard rescaling argument based on elliptic regularity. \end{proof} \subsection{Non-degeneracy results near $p = 1$} Employing the compactness result and pointwise estimates of the sequence $\{(U_{p_k},V_{p_k})\}_{k=1}^{\infty}$ of the unique positive ground states of \eqref{eq-LEs} derived in the previous subsection, we first deduce a non-degeneracy result of equation \eqref{eq-LEsc} for $p$ near $1$. 
\begin{thm}\label{thm-deg} There exists a small number $\ep_1 \in (0,\ep_0]$ such that if $|p-1| \le \ep_1$ and $U_p$ is the unique positive ground state of \eqref{eq-LEsc} with $U_p(0) = 1$, then the solution space of the linear equation \begin{equation}\label{eq-lin} (-\Delta) \((-\Delta U_p)^{{1 \over p}-1} (-\Delta \phi) \) = pq U_p^{q-1} \phi \quad \text{in } \R^n, \quad \phi \in \mcd^{2,2}_0(\R^n) \end{equation} is spanned by \begin{equation}\label{eq-lin-sol} \frac{\pa U_p}{\pa x_1},\, \cdots,\, \frac{\pa U_p}{\pa x_n} \quad \text{and} \quad x \cdot \nabla U_p + \(n-2 - \frac{n}{p+1}\) U_p. \end{equation} \end{thm} \begin{proof} Thanks to Lemma \ref{lem-cpt-12}, Corollary \ref{cor-cpt-12} and elliptic regularity, the functions listed in \eqref{eq-lin-sol} belong to the space $\mcd^{2,2}_0(\R^n) \cap C^{\infty}(\R^n)$. For $j = 1, \cdots, n$, each $\frac{\pa U_p}{\pa x_j}$ clearly solves \eqref{eq-lin}. Also, if we set \[U_{p,\delta}(x) = \delta^{2(p+1) \over pq-1} U_p(\delta x) \quad \text{in } \R^n\] for each $\delta > 0$, every $U_{p,\delta}$ is a solution of \eqref{eq-LEsc}. Therefore \begin{equation}\label{eq-U_pmu-2} \left. \frac{\pa U_{p,\delta}}{\pa \delta} \right|_{\delta=1} = x \cdot \nabla U_p + \(n-2 - \frac{n}{p+1}\) U_p, \end{equation} whose equality holds due to \eqref{eq-hyp}, solves \eqref{eq-lin}. 
\medskip Suppose that there exist \begin{itemize} \item[-] a sequence $\{p_k\}_{k=1}^{\infty} \subset [1-\ep_0, 1+\ep_0]$ of numbers tending to $1$ as $k \to \infty$; \item[-] a sequence $\{U_{p_k}\}_{k=1}^{\infty}$ of the unique positive ground states of \eqref{eq-LEsc} with $p = p_k$ such that $U_{p_k}(0) = 1$; \item[-] a sequence $\{\phi_k\}_{k=1}^{\infty} \subset \mcd^{2,2}_0(\R^n)$ of solutions of \eqref{eq-lin} with $p = p_k$ and $u = U_{p_k}$ which cannot be written as a linear combination of the functions \[\qquad Z_{1p_k} = \frac{\pa U_{p_k}}{\pa x_1},\, \cdots,\, Z_{np_k} = \frac{\pa U_{p_k}}{\pa x_n} \quad \text{and} \quad Z_{0p_k} = x \cdot \nabla U_{p_k} + \(n-2 - \frac{n}{p_k+1}\) U_{p_k}.\] \end{itemize} We may assume further that $\|\Delta \phi_k\|_{L^2(\R^n)} = 1$ and \begin{equation}\label{eq-lin-0} \int_{\R^n} \Delta \phi_k\, \Delta Z_{0p_k} = \int_{\R^n} \Delta \phi_k\, \Delta Z_{1p_k} = \cdots = \int_{\R^n} \Delta \phi_k\, \Delta Z_{np_k} = 0. \end{equation} The rest of the proof is split into four steps. \medskip \noindent \textsc{Step 1: Uniform boundedness of $\phi_k$'s and $\Delta \phi_k$'s.} We claim that there exists a constant $C > 0$ depending only on $n$ and $\ep_0$ such that \begin{equation}\label{eq-phi} \|\phi_k\|_{L^{\infty}(\R^n)} + \|\Delta \phi_k\|_{L^{\infty}(\R^n)} \le C \end{equation} for all $k \in \N$. Define \begin{equation}\label{eq-psi_k} \psi_k = \frac{1}{p_k} (-\Delta U_{p_k})^{{1 \over p_k}-1} (-\Delta \phi_k) = - \frac{1}{p_k} V_{p_k}^{1-p_k} \Delta \phi_k \quad \text{in } \R^n \end{equation} for each $k \in \N$. Then \eqref{eq-lin} is rewritten as the linearized equation of system \eqref{eq-LEs} \begin{equation}\label{eq-lins} \begin{cases} -\Delta \phi_k = p_k V_{p_k}^{p_k-1} \psi_k &\text{in } \R^n,\\ -\Delta \psi_k = q_k U_{p_k}^{q_k-1} \phi_k &\text{in } \R^n. \end{cases} \end{equation} Fix any $x_0 \in \R^n$ such that $|x_0| \ge 2$ and set $R = |x_0|$. 
For any $0 < r' < r \le 1$ and $l \in \N \cup \{0\}$ such that $n > 4l$, it holds that \begin{equation}\label{eq-CZ-1} \|\psi_k\|_{W^{2,{2n \over n-4l}}(B_{r'}(x_0))} \le C\(\|\psi_k\|_{L^{2n \over n-4l}(B_r(x_0))} + \|U_{p_k}^{q_k-1} \phi_k\|_{L^{2n \over n-4l}(B_r(x_0))} \) \end{equation} and \begin{equation}\label{eq-CZ-2} \|\phi_k\|_{W^{2,{2n \over n-4l}}(B_{r'}(x_0))} \le C\(\|\phi_k\|_{L^{2n \over n-4l}(B_r(x_0))} + \|V_{p_k}^{p_k-1} \psi_k\|_{L^{2n \over n-4l}(B_r(x_0))}\) \end{equation} provided that the right-hand sides are finite. By \eqref{eq-psi_k}, Lemmas \ref{lem-cpt-12} and \ref{lem-cpt-13}, $\|\Delta \phi_k\|_{L^2(\R^n)} = 1$ and the Sobolev inequality, we find \begin{align*} \|\psi_k\|_{L^2(B_1(x_0))} &\le C \|V_{p_k}^{1-p_k} \Delta \phi_k\|_{L^2(B_1(x_0))} \\ &\le CR^{(n-2)(p_k-1)} \|\Delta \phi_k\|_{L^2(B_1(x_0))} \le CR^{(n-2)(p_k-1)} \end{align*} and \[\|U_{p_k}^{q_k-1} \phi_k\|_{L^2(B_1(x_0))} \le CR^{-(n-4-\eta_0)(q_k-1)} \|\phi_k\|_{L^{2n \over n-4}(B_1(x_0))} \le C R^{-7} \|\Delta \phi_k\|_{L^2(\R^n)} = CR^{-7}.\] Here $C > 0$ is a constant depending only on $n$, $\ep_0$ and $\eta_0$, and particularly, independent of $x_0$ and $R$. Hence \eqref{eq-CZ-1} with $l = 0$ shows that \begin{equation}\label{eq-CZ-3} \|\psi_k\|_{L^{2n \over n-4}(B_{1/2}(x_0))} \le C \|\psi_k\|_{W^{2,2}(B_{1/2}(x_0))} \le CR^{(n-2)(p_k-1)}. \end{equation} On the other hand, it follows from \eqref{eq-CZ-3} that \[\|V_{p_k}^{p_k-1} \psi_k\|_{L^{2n \over n-4}(B_{1/2}(x_0))} \le R^{(n-2)(1-p_k)} \|\psi_k\|_{L^{2n \over n-4}(B_{1/2}(x_0))} \le C.\] Thus \eqref{eq-CZ-2} with $l = 1$ gives \begin{equation}\label{eq-CZ-4} \|\phi_k\|_{W^{2,{2n \over n-4}}(B_{1/3}(x_0))} \le C\(\|\Delta \phi_k\|_{L^2(\R^n)} + \|V_{p_k}^{p_k-1} \psi_k\|_{L^{2n \over n-4}(B_{1/2}(x_0))} \) \le C. 
\end{equation} Putting \eqref{eq-CZ-3} and \eqref{eq-CZ-4} into \eqref{eq-CZ-1} with $l = 1$, we obtain \begin{equation}\label{eq-CZ-5} \|\psi_k\|_{W^{2,{2n \over n-4}}(B_{1/4}(x_0))} \le CR^{(n-2)(p_k-1)}. \end{equation} If $5 \le n \le 7$, we have that $W^{2,{2n \over n-4}}(B_r(x_0)) \hookrightarrow L^{\infty}(B_r(x_0))$ for $r > 0$. Therefore, by means of \eqref{eq-psi_k}, \eqref{eq-CZ-4} and \eqref{eq-CZ-5}, we deduce \begin{equation}\label{eq-CZ-6} \|\phi_k\|_{L^{\infty}(\{|x| \ge 2\})} + \|\Delta \phi_k\|_{L^{\infty}(\{|x| \ge 2\})} \le C. \end{equation} For the higher-dimensional case, we repeat the above process to improve integrability of $\psi_k$'s and $\phi_k$'s. After a finite number of iterations, we obtain \eqref{eq-CZ-6}. The uniform boundedness of $\phi_k$'s and their Laplacians on the set $\{|x| \le 2\}$ is easier to deduce. Our claim \eqref{eq-phi} is justified. \medskip \noindent \textsc{Step 2: Rough decay estimates of $\phi_k$'s and $\Delta \phi_k$'s.} We assert that there exists a constant $C > 0$ depending only on $n$ and $\ep_0$ such that \begin{equation}\label{eq-dec-phi-1} |\phi_k(x)| \le \frac{C}{1+|x|^{n-4 \over 2}} \quad \text{and} \quad |\Delta \phi_k(x)| \le \frac{C}{1+|x|^{n \over 2}} \end{equation} for all $x \in \R^n$ and $k \in \N$. The arguments in this and the next steps are inspired by the proof of \cite[Lemma 3.3]{DKP}. Fix any $x_0 \in \R^n$ such that $|x_0| \ge 2$ and set $R = |x_0|$. Define also \[\phi_{kR}(x) = R^{n-4 \over 2} \phi_k(Rx) \quad \text{and} \quad \psi_{kR}(x) = R^{n \over 2}\psi_k(Rx) \quad \text{in } \R^n\] for each $k \in \N$. They solve \[\begin{cases} -\Delta \phi_{kR} = p_k (V_{p_k}(Rx))^{p_k-1} \psi_{kR} &\text{in } \R^n,\\ -\Delta \psi_{kR} = q_k R^4 (U_{p_k}(Rx))^{q_k-1} \phi_{kR} &\text{in } \R^n. \end{cases}\] For each $t > 1$, set $A_t = \{x \in \R^n: 1/t < |x| < t\}$. 
For any $r > r' > 1$ and $l \in \N \cup \{0\}$ such that $n > 4l$, it holds that \[\|\psi_{kR}\|_{W^{2,{2n \over n-4l}}(A_{r'})} \le C\(\|\psi_{kR}\|_{L^{2n \over n-4l}(A_r)} + R^4 \|(U_{p_k}(R\cdot))^{q_k-1} \phi_{kR}\|_{L^{2n \over n-4l}(A_r)} \)\] and \[\|\phi_{kR}\|_{W^{2,{2n \over n-4l}}(A_{r'})} \le C\(\|\phi_{kR}\|_{L^{2n \over n-4l}(A_r)} + \|(V_{p_k}(R\cdot))^{p_k-1} \psi_{kR}\|_{L^{2n \over n-4l}(A_r)}\)\] provided that the right-hand sides are finite. Besides, we have that $\|\Delta \phi_{kR}\|_{L^2(\R^n)} = 1$. Hence, arguing as in Step 1, we obtain \[\|\phi_{kR}\|_{L^{\infty}(\{|x| \ge 2\})} + \|\Delta \phi_{kR}\|_{L^{\infty}(\{|x| \ge 2\})} \le C.\] Combining this with \eqref{eq-phi}, we conclude that \eqref{eq-dec-phi-1} is true. \medskip \noindent \textsc{Step 3: Almost sharp decay estimates of $\phi_k$'s and $\Delta \phi_k$'s.} Let $\eta_3 > 0$ be any small number. We will show that there exists a constant $C > 0$ depending only on $n$, $\ep_0$ and $\eta_3$ such that \begin{equation}\label{eq-dec-phi-2} |\phi_k(x)| \le \frac{C}{1+|x|^{n-4-\eta_3}} \quad \text{and} \quad |\Delta \phi_k(x)| \le \frac{C}{1+|x|^{n-2-\eta_3}} \end{equation} for all $x \in \R^n$ and $k \in \N$. Fix $k \in \N$, and let $\mu = \frac{n-4}{2}$ and $\nu = \frac{n}{2}$. For an arbitrary number $\eta > 0$, we define \[F_{k, \mu, \eta}(x) = \phi_k(x) - \frac{M_{\mu, \eta}}{|x|^{\mu+\eta}} \quad \text{and} \quad G_{k, \nu, \eta}(x) = \psi_k(x) - \frac{m_{\nu, \eta}}{|x|^{\nu+\eta}} \quad \text{in } \{|x| \ge 1\},\] where $M_{\mu, \eta}$ and $m_{\nu, \eta}$ are large positive numbers determined by their subscripts. 
If $R > 1$ is given, we get from \eqref{eq-dec-phi-1} that \[-\Delta G_{k, \nu, \eta}(x) = q_k U_{p_k}^{q_k-1} \phi_k - \frac{m_{\nu, \eta} (\nu+\eta)(n-2-(\nu+\eta))}{|x|^{\nu+\eta+2}} \le 0 \quad \text{in } \{1 < |x| < R\}\] and \[G_{k, \nu, \eta}(x) \le 0 \quad \text{on } \{|x| = 1\}\] provided that $\nu+\eta < \min\{\mu+5, n-2\}$ and $m_{\nu, \eta}$ is chosen to be large enough. Hence the maximum principle yields that \[G_{k, \nu, \eta}(x) \le \max_{\{|x| = R\}} (G_{k, \nu, \eta})_+ \quad \text{in } \{1 < |x| < R\}.\] Taking $R \to \infty$ and applying \eqref{eq-dec-phi-1} again, we deduce that $G_{k, \nu, \eta}(x) \le 0$, or equivalently, \[\psi_k(x) \le \frac{m_{\nu, \eta}}{|x|^{\nu+\eta}} \quad \text{in } \{|x| \ge 1\}.\] Similarly, one can show that $-\psi_k$ has the same upper bound in $\{|x| \ge 1\}$. Therefore, we improve the decay rate of $\psi_k$ as follows: \[|\psi_k(x)| \le \frac{m_{\nu, \eta}}{|x|^{\nu+\eta}} \quad \text{in } \{|x| \ge 1\}.\] Resetting $\nu$ as $\nu+\eta$, we repeat the above procedure with the function $F_{k, \mu, \eta}$ to improve the decay rate of $\phi_k$'s. This information can be used in further improvement of the decay rate of $\psi_k$'s. We iterate such a process until we reach \eqref{eq-dec-phi-2}. \medskip \noindent \textsc{Step 4: Completion of the proof.} Equation \eqref{eq-lin} can be rewritten as \begin{equation}\label{eq-lin-1} \int_{\R^n} (-\Delta U_{p_k})^{{1 \over p_k}-1} \Delta \phi_k\, \Delta \vph = p_k q_k \int_{\R^n} U_{p_k}^{q_k-1} \phi_k \vph \quad \text{for any } \vph \in C^{\infty}_c(\R^n). \end{equation} Also, there exists a function $\phi_{\infty} \in \mcd^{2,2}_0(\R^n)$ such that \[\phi_k \rightharpoonup \phi_{\infty} \quad \text{in }\mcd^{2,2}_0(\R^n) \quad \text{and} \quad \phi_k \to \phi_{\infty},\, \Delta \phi_k \to \Delta \phi_{\infty} \quad \text{a.e. in } \R^n\] as $k \to \infty$, passing to a subsequence. 
Invoking Lemmas \ref{lem-cpt-12} and \ref{lem-cpt-13}, Proposition \ref{prop-cpt} and the dominated convergence theorem, we infer \begin{equation}\label{eq-lin-2} \int_{\R^n} (-\Delta U_{p_k})^{{1 \over p_k}-1} \Delta \phi_k\, \Delta \vph = \int_{\R^n} V_{p_k}^{1-p_k} \Delta \phi_k\, \Delta \vph \to \int_{\R^n} \Delta \phi_{\infty}\, \Delta \vph \end{equation} and \begin{equation}\label{eq-lin-3} p_k q_k \int_{\R^n} U_{p_k}^{q_k-1} \phi_k \vph \to \(\frac{n+4}{n-4}\) \int_{\R^n} U_1^{8 \over n-4} \phi_{\infty} \vph \end{equation} as $k \to \infty$. Putting \eqref{eq-lin-1}, \eqref{eq-lin-2} and \eqref{eq-lin-3} together, we conclude that $\phi_{\infty}$ is a solution of \eqref{eq-lin-bi}. By Proposition \ref{prop-lin-bi} (2), it follows that \[\phi_{\infty} = \sum_{j=1}^n c_j \frac{\pa U_1}{\pa x_j} + c_0 \left[ x \cdot \nabla U_1 + \(\frac{n-4}{2}\)U_1 \right] \quad \text{in } \R^n.\] We now assert that $\phi_{\infty} \ne 0$. By \eqref{eq-dec-phi-2}, we can take $\vph = \phi_k$ in \eqref{eq-lin-1}. Applying the mean value theorem, we observe \[\int_{\R^n} (-\Delta U_{p_k})^{{1 \over p_k}-1} (\Delta \phi_k)^2 = \|\Delta \phi_k\|_{L^2(\R^n)}^2 + \int_{\R^n} \(V_{p_k}^{1-p_k}-1\) (\Delta \phi_k)^2 = 1 + o(1)\] where $o(1) \to 0$ as $k \to \infty$. Since \[p_k q_k \int_{\R^n} U_{p_k}^{q_k-1} \phi_k^2 \to \(\frac{n+4}{n-4}\) \int_{\R^n} U_1^{8 \over n-4} \phi_{\infty}^2 \quad \text{as } k \to \infty,\] it holds that \[\int_{\R^n} U_1^{8 \over n-4} \phi_{\infty}^2 = \frac{n-4}{n+4} \ne 0.\] Consequently, we have that $\phi_{\infty} \ne 0$ and so $\sum_{j=0}^n |c_j| \ne 0$. However, \eqref{eq-lin-0} and Corollary \ref{cor-cpt-12} imply \[\int_{\R^n} \Delta \phi_{\infty}\, \Delta \(\frac{\pa U_1}{\pa x_j}\) = \int_{\R^n} \Delta \phi_{\infty}\, \Delta \left[ x \cdot \nabla U_1 + \(\frac{n-4}{2}\)U_1 \right] = 0\] for $j = 1, \cdots, n$. Hence $c_0 = \cdots = c_n = 0$, a contradiction. This completes the proof of the theorem. 
\end{proof} Theorem \ref{thm-deg} and the equivalence between system \eqref{eq-LEs} and equation \eqref{eq-LEsc} yield the following non-degeneracy result for \eqref{eq-LEs} with $p$ near $1$. \begin{cor}\label{cor1} There exists a small number $\ep_1 \in (0,\ep_0]$ such that if $|p-1| \le \ep_1$ and $(U_p,V_p)$ is the unique positive ground state of \eqref{eq-LEs} with $U_p(0) = 1$, then the solution space of the linear equation \begin{equation}\label{eq-lins-2} \begin{cases} -\Delta \phi = p V_p^{p-1} \psi &\text{in } \R^n,\\ -\Delta \psi = q U_p^{q-1} \phi &\text{in } \R^n, \end{cases} \quad \lim_{|x|\to\infty} (\phi(x),\psi(x)) = (0,0) \end{equation} is spanned by \[\(\frac{\pa U_p}{\pa x_1}, \frac{\pa V_p}{\pa x_1}\),\, \cdots,\, \(\frac{\pa U_p}{\pa x_n}, \frac{\pa V_p}{\pa x_n}\)\] and \[\(x \cdot \nabla U_p + \(n-2 - \frac{n}{p+1}\) U_p, x \cdot \nabla V_p + \(n-2 - \frac{n}{q+1}\) V_p\).\] \end{cor} \begin{proof} The relationship between \eqref{eq-lin} and \eqref{eq-lins-2} was already explored in \eqref{eq-psi_k} and \eqref{eq-lins}. Moreover, with the help of the decay assumption of $(\phi,\psi)$ and elliptic regularity, one can argue as in Step 3 of the proof of Theorem \ref{thm-deg} to verify that $\phi \in \mcd^{2,2}_0(\R^n)$. Therefore, the necessary condition to apply Theorem \ref{thm-deg} is fulfilled, and so $\phi$ is a linear combination of the functions \[\frac{\pa U_p}{\pa x_1},\, \cdots,\, \frac{\pa U_p}{\pa x_n} \quad \text{and} \quad x \cdot \nabla U_p + \(n-2 - \frac{n}{p+1}\) U_p.\] By linearity, it suffices to consider each of these functions separately. Suppose that $\phi = \frac{\pa U_p}{\pa x_j}$ for some $j = 1, \cdots, n$. 
Differentiating the first equation in \eqref{eq-LEs} with respect to $x_j$ and using \eqref{eq-lins-2}, we find \[\psi = - \frac{1}{p} V_p^{1-p} \Delta \(\frac{\pa U_p}{\pa x_j}\) = \frac{1}{p} V_p^{1-p} \cdot p V_p^{p-1} \frac{\pa V_p}{\pa x_j} = \frac{\pa V_p}{\pa x_j}.\] Set \[V_{p,\delta}(x) = \delta^{2(q+1) \over pq-1} V_p(\delta x) \quad \text{in } \R^n.\] If $\phi = x \cdot \nabla U_p + (n-2 - \frac{n}{p+1}) U_p$, then \eqref{eq-U_pmu-2} shows \begin{align*} \psi &= - \frac{1}{p} V_p^{1-p} \left. \Delta \(\frac{\pa U_{p,\delta}}{\pa \delta}\) \right|_{\delta=1} = \frac{1}{p} V_p^{1-p} \cdot \left. p V_{p,\delta}^{p-1} \frac{\pa V_{p,\delta}}{\pa \delta} \right|_{\delta=1} \\ &= x \cdot \nabla V_p + \(n-2 - \frac{n}{q+1}\) V_p. \end{align*} The proof is done. \end{proof} \section{Non-degeneracy of the Lane-Emden system near $p = \frac{n+2}{n-2}$}\label{sec_pn} The main results of this section are Theorem \ref{thm-deg-so} and Corollary \ref{cor2}, whose proof depends on arguments similar to those used in the previous section. This time, we use the following well-known uniqueness and non-degeneracy results about the second-order critical equation \begin{equation}\label{eq-LE-so} \begin{cases} -\Delta u = u^{n+2 \over n-2} &\text{in } \R^n,\\ u > 0 &\text{in } \R^n \end{cases} \end{equation} for $n \ge 3$. \begin{prop} \textnormal{(1) (uniqueness)} Any smooth solution of \eqref{eq-LE-so} is expressed as \[w_{\delta,\xi}^*(x) := c_n^* \(\frac{\delta}{\delta^2+|x-\xi|^2}\)^{n-2 \over 2}\] for some $\delta > 0$, $\xi \in \R^n$ and $c_n^* = [n(n-2)]^{{n-2 \over 4}}$. 
\medskip \noindent \textnormal{(2) (non-degeneracy)} The solution space of the linear equation \begin{equation}\label{eq-lin-so} -\Delta \phi = \(\frac{n+2}{n-2}\) u^{4 \over n-2} \phi \quad \text{in } \R^n, \quad \phi \in \mcd^{1,2}_0(\R^n) \end{equation} is spanned by \[\frac{\pa u}{\pa x_1},\, \cdots,\, \frac{\pa u}{\pa x_n} \quad \text{and} \quad x \cdot \nabla u + \(\frac{n-2}{2}\)u.\] Here, $\mcd^{1,2}_0(\R^n)$ is the completion of $C^{\infty}_c(\R^n)$ with respect to the norm $\|\nabla \cdot\|_{L^2(\R^n)}$. \end{prop} \begin{proof} Result (1) has been proved by Aubin \cite{A}, Talenti \cite{T} and Caffarelli et al. \cite{CGS}. A proof of (2) can be found in Rey \cite{R}. \end{proof} We will assume that $p$ is {\em slightly smaller} than $\frac{n+2}{n-2}$. By interchanging the role of $U$ and $V$ and of $p$ and $q$, we can also cover the case that $p$ is {\em slightly bigger} than $\frac{n+2}{n-2}$. Adapting the arguments in Subsection \ref{subsec-cpt}, we obtain the next two results. \begin{prop}\label{prop-cpt-so} Suppose that $n \ge 3$ and $2^* = \frac{n+2}{n-2}$. Let $\{p_k\}_{k=1}^{\infty}$ be a sequence such that $p_k \in [\frac{n}{n-2}, 2^*]$ for all $k \in \N$ and $p_k \to 2^*$ as $k \to \infty$. Also, let $\{(U_{p_k}, V_{p_k})\}_{k=1}^{\infty}$ be the sequence of the unique positive ground states of \eqref{eq-LEs} with $p = p_k$ such that $U_{p_k}(0) = 1$. Then we have that \[(U_{p_k}, V_{p_k}) \to (U_{2^*}, V_{2^*}) \quad \text{in } \mcd^{1,2}_0(\R^n) \times \mcd^{1,2}_0(\R^n) \quad \text{as } k \to \infty.\] Here $U_{2^*}$ is the unique positive radial solution of \eqref{eq-LE-so} with $U_{2^*}(0) = 1$ and $V_{2^*} = U_{2^*}$ in $\R^n$. In other words, $(U_{2^*}, V_{2^*}) = (w_{b_n,0}^*, w_{b_n,0}^*)$ in $\R^n$ where $b_n := (c_n^*)^{2 \over n-2}$. \end{prop} \begin{proof} The fact that $V_{2^*} = U_{2^*}$ comes from \cite[Lemma 2.7 and Remark 2.1 (a)]{Sou}. 
\end{proof} \begin{lemma} Given $\ep_2 > 0$ small enough, we assume that $|p_k - \frac{n+2}{n-2}| \le \ep_2$ for all $k \in \N$. Then one can find a constant $C > 0$ depending only on $n$ and $\ep_2$ such that \[|\nabla^l U_{p_k}(x)| + |\nabla^l V_{p_k}(x)| \le \frac{C}{1+|x|^{n-2+l}}\] for all $x \in \R^n$, $k \in \N$ and $l = 1, 2, 3$. \end{lemma} By employing the above results and slightly modifying the proof of Theorem \ref{thm-deg}, one can deduce the following theorem. \begin{thm}\label{thm-deg-so} There exists a small number $\ep_3 \in (0,\ep_2]$ such that if $|p-\frac{n+2}{n-2}| \le \ep_3$ and $U_p$ is the unique positive ground state of \eqref{eq-LEsc} with $U_p(0) = 1$, then the solution space of the linear equation \[(-\Delta) \((-\Delta U_p)^{{1 \over p}-1} (-\Delta \phi) \) = pq U_p^{q-1} \phi \quad \text{in } \R^n, \quad \phi \in \mcd^{2,2}_0(\R^n)\] is spanned by \[\frac{\pa U_p}{\pa x_1},\, \cdots,\, \frac{\pa U_p}{\pa x_n} \quad \text{and} \quad x \cdot \nabla U_p + \(n-2 - \frac{n}{p+1}\) U_p.\] \end{thm} \begin{proof} The proof goes along the same lines as the proof of Theorem \ref{thm-deg}, except for Step 4. To carry out Step 4, we have to prove that $\phi_{\infty}$ is a solution of \eqref{eq-lin-so}. Taking $k \to \infty$ in \eqref{eq-lins} and using $U_{2^*} = V_{2^*} = w_{b_n,0}^*$ (which was confirmed in Proposition \ref{prop-cpt-so}), we obtain \[\int_{\R^n} \nabla \phi_{\infty} \cdot \nabla \vph = \(\frac{n+2}{n-2}\) \int_{\R^n} U_{2^*}^{4 \over n-2} \psi_{\infty} \vph\] and \[\int_{\R^n} \nabla \psi_{\infty} \cdot \nabla \vph = \(\frac{n+2}{n-2}\) \int_{\R^n} U_{2^*}^{4 \over n-2} \phi_{\infty} \vph.\] We subtract the second equation from the first equation, and then put $\vph = \phi_{\infty} - \psi_{\infty}$. 
Then we get \[0 \le \int_{\R^n} |\nabla (\phi_{\infty} - \psi_{\infty})|^2 = - \(\frac{n+2}{n-2}\) \int_{\R^n} U_{2^*}^{4 \over n-2} (\phi_{\infty} - \psi_{\infty})^2 \le 0.\] Therefore, $\phi_{\infty} = \psi_{\infty}$ solves \eqref{eq-lin-so}. The rest of the proof remains the same. \end{proof} Arguing as in the proof of Corollary \ref{cor1}, we derive the following result from the previous theorem. \begin{cor}\label{cor2} There exists a small number $\ep_3 \in (0,\ep_2]$ such that if $|p-{n+2\over n-2}| \le \ep_3$ and $(U_p,V_p)$ is the unique positive ground state of \eqref{eq-LEs} with $U_p(0) = 1$, then the solution space of the linear equation \[\begin{cases} -\Delta \phi = p V_p^{p-1} \psi &\text{in } \R^n,\\ -\Delta \psi = q U_p^{q-1} \phi &\text{in } \R^n, \end{cases} \quad \lim_{|x|\to\infty} (\phi(x),\psi(x)) = (0,0)\] is spanned by \[\(\frac{\pa U_p}{\pa x_1}, \frac{\pa V_p}{\pa x_1}\),\, \cdots,\, \(\frac{\pa U_p}{\pa x_n}, \frac{\pa V_p}{\pa x_n}\)\] and \[\(x \cdot \nabla U_p + \(n-2 - \frac{n}{p+1}\) U_p, x \cdot \nabla V_p + \(n-2 - \frac{n}{q+1}\) V_p\).\] \end{cor}
\section{Compiler Technology} Synthesis technology and compiler technology are intimately tied. In the end, compilers are an essential consumer of the output of synthesis tools. Moreover, synthesis tools often reuse programming-language processing and analysis infrastructure from compilers in order to process inputs and mine existing code bases for patterns and other background knowledge. Improvements in synthesis technology are expected to go hand in hand with improvements in compiler technology. \sidebar{Improvements in synthesis technology are expected to go hand in hand with improvements in compiler technology.} \subsection{What opportunities exist for improving compilers and program analysis tools to better support scientific applications and HPC?} Compilers can better leverage machine learning techniques to drive heuristics and other algorithmic tuning parameters. Examples include the following: \begin{itemize} \item Optimizing the order in which transformations are applied. \item Optimizing the thresholds and other parameters used by transformation and analysis routines, including choices between algorithms, to balance cost vs.\ benefit tradeoffs. \item Optimizing the sets of features used to drive the aforementioned tuning decisions. \item Building cheap surrogate models for performance modeling in autotuning, which can be used to prune the large search space and identify promising regions in a short time. \end{itemize} Compilers and other analysis tools can produce more information, both about the code being compiled and about the tool's modeling results and decisions. This information can be used by human engineers and by synthesis tools as part of an iterative development process. 
\iffalse Machine Learning/Programming to address architectural diversity \begin{itemize} \item Learn the optimization sequence to apply (in aggregate) \item Learn the features to be used for optimization decisions \item Learn when accelerators are likely to be profitable \item Learn the most appropriate algorithm to select \item Synthesize heuristics used in optimization \end{itemize} \fi As hardware architectures become more complex, with multiple levels in memory and storage hierarchies and different computational accelerators, compilers can offer additional support for mapping applications into the underlying hardware. This may require more information regarding data sizes, task dependencies, and so on than what is traditionally provided to a compiler for a programming language such as C++ or Fortran. Data sizes, for example, affect how code is optimized: the optimal code structure for data that fits into the L1 cache may be different from that for data that fits into the L2 cache. These code structure differences include loop structures, data structures and layouts, and accelerator targeting. \subsection{How might higher-level information be leveraged to capture those opportunities?} A significant challenge in implementing compilers for Fortran, C++, and other high-performance languages used in scientific computing is the lack of higher-level information in the source code. The compiler generally does not know the size of data arrays, the likely location of data within the memory subsystem, and the number of threads being used. This lack of information forces compiler developers to fall back on heuristics that are likely to work across a larger number of use cases instead of creating models that can optimize specific use cases. When DSLs are used, some of this information is contained in the input, and where it is not, the semantics of the DSL can constrain the number of specialized versions that the compiler might generate to a practical number. 
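The size-driven specialization described above can be sketched as a simple dispatch: if the compiler or runtime knew the data footprint, it could select among pre-generated code variants. The cache capacities and variant names below are purely illustrative assumptions, not measurements of any particular machine.

```python
# Hypothetical sketch: choosing a code variant from the data footprint.
# The cache capacities and variant names are illustrative assumptions.

L1_BYTES = 32 * 1024       # assumed L1 data cache capacity
L2_BYTES = 1024 * 1024     # assumed L2 cache capacity

def choose_variant(n_elements, elem_bytes=8):
    footprint = n_elements * elem_bytes
    if footprint <= L1_BYTES:
        return "unrolled"      # data resident in L1: aggressive unrolling
    if footprint <= L2_BYTES:
        return "vectorized"    # fits in L2: straight vectorized loop
    return "tiled"             # larger data: cache-blocked (tiled) loop
```

A DSL or a known library interface supplies exactly the size information such a dispatch needs, which is one reason those settings make this kind of specialization practical.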
Similarly, known library interfaces can effectively form a DSL with similar properties that an intelligent compiler can extract and leverage to drive the compilation process. Separate compilation, the general case where an application's source code is split across multiple separately compiled source files, is helpful in enabling parallelism in the build process and other separations of concerns but limits the higher-level information that can be usefully extracted. Tools that build application-scoped databases from source-analysis processes can potentially mitigate that loss of information. These tools, along with associated source annotations and known library interfaces, can enable high-level compiler optimizations while remaining transparent to the programmer. \iffalse \begin{itemize} \item How to express intent? \item Domain-specific systems, advantages and disadvantages \begin{itemize} \item Shorthand description of computation \item Well-known and limited set of optimizations, map to different architectures \item For certain domains, widely used (e.g., TensorFlow for Deep Learning) \item Adoption difficult, economic argument often missing in HPC \item Users prefer to use familiar interface \item C++ templates often used, portable but complex \end{itemize} \item Hourglass Model: \begin{itemize} \item Top level: Many supported interfaces, DSLs \item Middle level: A single optimization process (Halide, Spiral, etc.) \item Bottom level: A wide range of devices and architectures \item Impact: \begin{itemize} \item Hide optimization behind the scenes, transparent to programmer \item Intent only exposed to lower levels \end{itemize} \end{itemize} \item Human-in-the-loop iterative process (shorter term) \begin{itemize} \item Compiler identifies opportunities for programmer to improve code \item Programmer provides guidance for compiler, e.g., programmer expresses intent to compiler at low level, constrains search, etc. 
\end{itemize} \end{itemize} \fi \subsection{How might compiler technology be improved to better integrate with program synthesis systems?} Synthesis systems are generally iterative, combining techniques to search the space of potential solutions with techniques for evaluating particular candidate solutions. Compiler technology can be enhanced for better participation in both parts of this process. Information extracted from code compilation or attempted compilation of partial solutions can be used to constrain the search. In addition, information from compilation can be used to evaluate potential solutions in terms of both correctness and performance; moreover, performance models from the compiler can be used to inform a wider set of metrics. \iffalse Representing programs for synthesizing optimizations \begin{itemize} \item Derive features from code structure \item Combine to form complex feature \item Select features to use based on structure \end{itemize} \fi Recent advances in compiler frameworks, such as LLVM and its multilevel IR (MLIR), can be leveraged to extract suitable information from applications. Examples of features that can be easily extracted are properties of loop-based computations that exhibit regularity, such as loop depth, array access patterns, shape and size of multidimensional arrays, and dependence structures (e.g., dependence polyhedra in affine programs). As these features evolve and change within and among intermediate representations of a compiler, metadata must be collected describing the sequence of transformations applied. Similarly, for more irregular applications, such as those arising in sparse linear systems, the sparsity pattern and structure can be used as synthesis features and can be either collected during the system assembly process or provided by the end user via compiler directives.
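As a toy stand-in for this kind of feature extraction, the sketch below derives loop depth and array-access counts from a Python AST. A production system would operate on a compiler IR such as LLVM or MLIR rather than on source text; the feature names here are assumptions for the example.

```python
# Illustrative feature extraction for synthesis: walk an AST and record
# simple loop-nest features (maximum loop depth, number of subscripted
# array accesses). A real system would extract these from compiler IR.
import ast

def loop_features(src):
    tree = ast.parse(src)
    max_depth = 0
    subscripts = 0

    def walk(node, depth):
        nonlocal max_depth, subscripts
        if isinstance(node, (ast.For, ast.While)):
            depth += 1
            max_depth = max(max_depth, depth)
        if isinstance(node, ast.Subscript):
            subscripts += 1
        for child in ast.iter_child_nodes(node):
            walk(child, depth)

    walk(tree, 0)
    return {"loop_depth": max_depth, "array_accesses": subscripts}
```

Feature vectors of this form could then be stored alongside the transformation metadata described above and queried during synthesis.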
Integrating with an iterative synthesis process may also require new schemes to be implemented in compilers that allow for the caching of partial compilation and analysis results so that each variant evaluated during the synthesis process does not incur the full overhead of performing the necessary analysis and transformations each time. \sidebar{When both a synthesis tool and the underlying compiler are responsible for code optimization, it is an open question how this responsibility is best divided and how the cooperation will be arranged.} When both a synthesis tool and the underlying compiler are responsible for code optimization, it is an open question how this responsibility is best divided and how the cooperation will be arranged. Synthesis systems might be responsible for higher-level transformations that are difficult to prove correct in the absence of higher-level information, for example, or for the use of hand-tuned code fragments that a machine-programming system is unlikely to discover automatically. As work continues, open interfaces that are adopted by multiple compilers and synthesis tools will likely enable a diverse, vibrant ecosystem of programming-environment capabilities. \section{Current State of Scientific Application Development} Scientific application development is essential to scientific progress; and yet, while advances in both scientific techniques and programming tools continue to improve programmer productivity, creating state-of-the-art scientific programs remains challenging. Software complexity has increased over the past decades at an astounding rate, with many popular applications and libraries containing tens of millions of lines of code. As shown in Figure~\ref{fig:science-loc}, scientific development has not been immune from this increase in complexity. General software infrastructures, along with algorithmic and mathematical techniques, have become increasingly sophisticated. 
Hardware architectures and the techniques necessary to exploit them have also become increasingly sophisticated, as has the science itself. Managing the resulting complexity is difficult even for the most experienced scientific programmers. Moreover, a lot of scientific programming is not done by experienced programmers but by scientific-domain students and recent graduates with only a few years of experience~\cite{milewicz2019characterizing}. As a result, new programmers on a project have difficulty reaching high levels of productivity. \begin{figure*} \includegraphics[width=\linewidth]{figures/science-loc} \caption{Number of source lines of code in various packages: general packages on the left, scientific packages on the right. Nearly all data from openhub.net (Sept. 2020).}\label{fig:science-loc} \end{figure*} \subsection{The Largest Challenges} Code development itself is labor intensive; and in order to develop scientific applications, significant portions of the development require direct input from domain experts. These applications often contain critical components that are mathematically complex; and hence developers require advanced mathematical abilities, strong programming skills, and a good understanding of the science problem being solved. Separations of concerns are common, and not every scientific-software developer implements every mathematical technique from scratch. Nevertheless, developers need sufficient knowledge of the relevant mathematical techniques to select and use applicable libraries. \sidebar{Scientific software has become increasingly complex over the years. Tools and techniques to improve productivity in this space are desperately needed.} Moreover, often the complete design for the application cannot be specified up front. One may not know ahead of time what grid resolutions, discretization techniques, solvers, and so on will work best for the target problems. 
Instead, development is an iterative process that is part of the scientific investigatory process. As the science evolves, the target problem might change as well, necessitating significant changes to the application design. While application development for dynamic consumer markets also faces challenges with evolving requirements, scientific software must often meet tight performance and mathematical requirements, where large changes in design must be implemented quickly by small development teams, imparting unique needs for productivity-enhancing tools in this space. \textbf{Adapting to New Hardware:} Scientific computing applications tend to require high computational performance and, as a result, are structured and tuned to maximize achieved performance on platforms of interest. However, computational performance is maximized on cutting-edge hardware, and cutting-edge hardware has been evolving rapidly. As a result, applications have had to adapt to new CPU features such as SIMD vector registers, to GPU accelerators, and to distribution over tens of thousands of independent nodes. These architectures continue to evolve, with corresponding changes to their programming models, and applications must adapt in turn. If significant work is invested in tuning for a particular architecture, as is often the case~\cite{aleen:2016:cf}, repeating that level of work for many different kinds of systems is likely infeasible. To maintain developer productivity in the face of a variety of target architectures, the community has placed significant focus on programming environments that provide some level of \textit{performance portability}. The idea is that reasonable performance, relative to the underlying system capabilities for each system, can be obtained with no, or minimal, source code changes between systems.
What qualifies as reasonable performance and what qualifies as minimal changes to the source code are hotly debated and, in the end, depend on many factors specific to individual development teams. Nevertheless, overall goals are clear, and these motivate the creation of compiler extensions (e.g., OpenMP), C++ abstraction libraries (e.g., Kokkos~\cite{edwards2014kokkos}), and domain-specific languages (e.g., Halide~\cite{ragan2013halide}). Furthermore, the design of these portable abstractions is largely reactive; and while one can anticipate and incorporate some new hardware features before the hardware is widely available, often the best practices for using a particular hardware architecture are developed only after extensive experimentation on the hardware itself. This situation leads to a natural tension between the productivity gain from using portable programming models and the time-to-solution gain potentially available from application- and hardware-specific optimizations. The ability to perform autotuning on top of these technologies has been demonstrated to enhance performance significantly~\cite{Balaprakash:IEEE18}. Some systems (e.g., Halide) were specifically designed with this integration in mind. However, autotuning brings with it a separate set of challenges that make deployment difficult. If the autotuning process is part of the software build process, then the build process becomes slow and potentially nondeterministic. On the other hand, if the autotuning process produces artifacts that are separately stored in the source repository, then the artifacts need to be kept up to date with the primary source code, a requirement made more difficult by the fact that not all developers have access to all of the hardware on which the artifacts were generated. A potential solution to these challenges is to perform the autotuning while the application is running, but then the time spent autotuning must be traded against the potential performance benefits.
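The core of the autotuning loop discussed above can be sketched minimally: time each candidate variant on a sample input and keep the fastest. The variants, sample input, and timing policy below are illustrative assumptions; production autotuners search far larger parameter spaces with more careful measurement.

```python
# Minimal sketch of during-execution autotuning: empirically time each
# candidate variant and return the fastest one for subsequent calls.
import time

def autotune(variants, sample_input, repeats=3):
    best, best_t = None, float("inf")
    for variant in variants:
        t0 = time.perf_counter()
        for _ in range(repeats):
            variant(sample_input)
        elapsed = time.perf_counter() - t0
        if elapsed < best_t:
            best, best_t = variant, elapsed
    return best
```

The time spent in the search itself is exactly the overhead that must be traded against the expected benefit, as noted above.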
Nevertheless, the optimal tuning results sometimes depend on the state of the application's data structures (e.g., matrix sizes), and these can change during the course of a long-running process, providing an additional advantage to during-execution autotuning and autotuning procedures that can make use of detailed profiling data. High performance can often be obtained by using different implementations of the same algorithm, and the aforementioned frameworks naturally apply to this case. Sometimes, however, especially when accelerators are involved, different algorithms are needed in order to obtain acceptable performance on different kinds of hardware. In recognition of this reality, a number of capabilities have been explored for supporting algorithmic variants that can be substituted in a modular fashion as part of the porting process (e.g., OpenMP metadirectives, PetaBricks~\cite{ansel2009petabricks}). Having multiple available algorithms for tasks within an application, however, makes testing and verification of the application more difficult. Development and maintenance are also more expensive because each algorithmic variant must be updated as the baseline set of required features expands over time and as defects are fixed. \sidebar{The ability of scientific codes to quickly adapt to new hardware is increasingly challenging. While performance-portable programming models and autotuning help, more advanced end-to-end capabilities are needed to adapt algorithms and data structures to new environments.} \textbf{Data Movement Cost:} The performance of many scientific applications is dominated by data movement. Customizing temporary storage requirements to architectures and application needs has demonstrated promising performance improvements~\cite{olschanowsky2014study}. 
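The effect of layout on data movement can be illustrated with a toy calculation of how many cache lines an access pattern touches; the line and element sizes below are modeling assumptions, not measurements.

```python
# Toy model of why contiguous layouts reduce data movement: count the
# distinct cache lines touched by an access pattern. Line and element
# sizes are modeling assumptions.
LINE_BYTES = 64
ELEM_BYTES = 8

def lines_touched(indices):
    return len({(i * ELEM_BYTES) // LINE_BYTES for i in indices})

contiguous = lines_touched(range(64))         # 64 adjacent elements
strided = lines_touched(range(0, 64 * 8, 8))  # same count, stride 8
```

In this model the contiguous traversal touches 8 lines while the strided one touches 64, a simple analogue of why contiguous mini-subdomains reduce the number of pages and cache lines accessed.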
Previous work combined scheduling transformations within a sparse polyhedral framework and dataflow graphs to enable human-in-the-loop architecture-specific optimization~\cite{davis2018transforming}. Because of large variations in the design of memory subsystems, optimizing the actual layout of data in memory can reduce data movement as well as reduce its cost. As one example, bricks---mini-subdomains in contiguous memory organized with adjacency lists---are used to represent stencil grids~\cite{Zhao:SC19,Zhao:PPoPP21}. Bricks reduce the cost of data movement because they are contiguous and therefore decrease the number of pages and cache lines accessed during a stencil application. Through a layout of bricks optimized for communication, we also eliminate the data movement cost of packing/unpacking data to send/receive messages. \textbf{Scheduling:} A large fraction of scientific applications are still written in legacy languages such as C/C++ and require mapping to multithreaded code either manually or by autoparallelizing compilers. Performant execution of such legacy codes is dependent on good task scheduling. While most scheduling techniques operate statically, Aleen et al.~\cite{aleen:2010:ppopp} showed that orchestrating dynamically (by running a lightweight emulator on the input-characterization graph extracted from the program) can better balance workloads and provide further improvement over static scheduling. \textbf{Polyhedral Multiobjective Scheduling:} As a step toward achieving portable performance, Kong and Pouchet~\cite{kong.pldi.2019} recently proposed an extensible kernel set of integer linear program (ILP) objectives that can be combined and reordered to produce different performance properties. Each ILP objective aims to maximize or minimize some property of the generated code, for instance, minimizing the stride penalty of the innermost loop. However, this work did not address how to select the objectives to embed into the ILP.
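The idea of combining and reordering a kernel set of objectives can be illustrated with a toy lexicographic selection among candidate schedules; the candidates and their metrics below are invented for the example and stand in for what an ILP would compute.

```python
# Illustrative sketch: reordering a set of minimization objectives
# changes which candidate "schedule" wins. The candidates and their
# metric values are made up for the example.
candidates = [
    {"name": "tile+vec", "stride_penalty": 1, "sync_points": 4},
    {"name": "fuse",     "stride_penalty": 3, "sync_points": 1},
]

def pick(objectives):
    # Minimize the objectives lexicographically, in the given priority order.
    return min(candidates, key=lambda c: tuple(c[o] for o in objectives))

locality_first = pick(["stride_penalty", "sync_points"])
parallel_first = pick(["sync_points", "stride_penalty"])
```

Prioritizing locality selects the tiled/vectorized candidate, while prioritizing fewer synchronization points selects the fused one, which is why the choice and ordering of objectives matters.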
More recently, Chelini et al.~\cite{chelini2020automatic} proposed a systematic approach to traverse the space of ILP objectives previously defined by first creating an offline database of transformations. Such a database is constructed from a number of input cases exhibiting different dependence patterns, which are then transformed via all the possible 3-permutations of ILP objectives from the kernel set of transformations. The resulting transformed codes are then analyzed to extract specific code features that are stored together with the input dependencies and the transformations used (see Fig.~\ref{fig:chelini:pact2020:offline}). Later, during the compilation phase, the database is queried to adaptively select ILP objectives to embed. This process is illustrated in Fig.~\ref{fig:chelini:pact2020:online}. \begin{figure}[h] \includegraphics[width=\linewidth]{figures/OfflineFlow} \caption{\label{fig:chelini:pact2020:offline} Offline Database Construction} \end{figure} \begin{figure}[h] \includegraphics[width=\linewidth]{figures/OnlineFlow} \caption{\label{fig:chelini:pact2020:online} Adaptive Scheduling and Objective Selection (Database Querying).} \end{figure} \textbf{Testing and Verification:} The correctness of scientific-computing applications is critical because the scientific enterprise depends on predictive computational techniques. Especially in areas where the results produced by scientific applications are used to ensure safety or otherwise inform public policy, incorrect results can cause serious problems. In general, science and engineering are competitive fields, and erroneous results put those depending on them at a relative disadvantage. Across the board, productivity can be severely hampered by application crashes and the time spent diagnosing and fixing misbehaving code. Writing tests for scientific applications is often difficult and time consuming. 
As in any other software project, individual components should be tested with unit tests, and in addition, application-level tests are essential. If tests represent a fixed set of known input-output pairs for each component or for the application, covering all of the various combinations of allowed features and behaviors generally requires an exponential number of test cases. For tests that, individually, automatically explore more of the allowed state space, verifying the correct behavior is difficult. Programmers often fall back on verifying only invariants, not the complete output itself. This is especially common for physical-simulation results where exact answers are not known. Invariants (where they exist) such as conservation of energy and the application not crashing might be the only things that the tests actually end up checking. Tests providing higher-confidence verification, such as comparisons with competing applications and numerical-convergence analysis, are often not automated and hence are performed manually only on irregular occasions (e.g., just before a major release). Problems discovered during these manual tests, as with other problems discovered late in the development cycle, are often difficult to diagnose and correct. The following are some notable challenges. \begin{itemize} \item \textbf{Numerics:} Many physical systems exhibit nontrivial sensitivity to their initial conditions, and thus small differences from truncation error, round-off error, and other small perturbations legitimately cause large differences in the final results. Verification in these cases is subtle, sometimes relying on statistical properties to compare with known solutions and sometimes relying on physical invariants; but even so, picking good thresholds for numerical comparisons is often done by manual experimentation and rules of thumb.
\item \textbf{Asynchronous Execution: } Modern hardware demands the use of concurrent, and often parallel, execution in order to take advantage of the available computational capabilities. This makes testing and verification difficult because concurrent execution is nearly always nondeterministic. Ensuring that a particular algorithmic implementation is free from race conditions and other constraint violations under all possible execution scenarios is hard: only a finite number of scenarios, often a small fraction of the overall space of possibilities, are exhibited during testing. Pairing that testing with special programs designed to detect race conditions (e.g., Valgrind, ThreadSanitizer) can help, but these programs add overhead and thus additional trade-offs to the overall testing process. Asynchronous execution frameworks (e.g., OpenMP tasks) generally depend on programmer-declared dependencies to ensure valid task scheduling, and mistakes in these dependency declarations are sometimes difficult to detect. \item \textbf{Large Scale:} Scientific simulations often need to run at large scale in order to exhibit behaviors relevant for testing. Should something go wrong, debugging applications running in parallel on tens of thousands of nodes is difficult. Runs of an application, which are often dispatched from batch queues at unpredictable times, are expensive to recreate; and while interactive debugging sessions can be scheduled, turnaround times for large-scale reservations are not fast, and even state-of-the-art debugging tools for parallel applications have difficulty scaling to large systems. In some cases, state capture and mocking can be used to replicate and test relevant behaviors at smaller scales, but these are time consuming to implement. \item \textbf{System Defects:} At leadership scale, systems and their software push the state of the art and, as a result, may themselves have defects that lead to incorrect application behaviors.
For the largest machines, vendors are often unable to test their software (e.g., their MPI implementation) at the full scale of the machine until that machine is assembled in the customer's data center. Early users of these machines frequently spend a lot of time helping track down bugs in hardware, system software, and compilers. For these users, the benefit from being able to run unprecedented calculations provides the motivation to continue even in the face of these kinds of issues, but getting things working at large scale is often more difficult than one might imagine. \item \textbf{Performance:} It is often desirable for testing to cover not only correctness but also performance. Application performance in scientific computing is often a critical requirement because only a limited number of core-hours are generally available in order to carry out particular calculations of interest. Testing of performance is difficult, however, both because many testing systems run many processes in parallel, making small-scale performance measurements noisy, and because many performance properties can be observed only with large problem sizes at larger scales. Special tests that extrapolate small-scale performance tests can sometimes be constructed, but these are often done manually because the automation would involve even more work. \item \textbf{Resource Management:} As scientific software becomes more complex, problems often appear due to unfortunate interactions between different components. A common issue is resource exhaustion, where resources such as processor cores, memory, accelerators, file handles, and disk space, which are adequately managed by the different components in isolation, are not managed well by the composition of the components. For example, different components often allocate system memory with no regard for the needs of other components. 
Dedicated tests for resource usage can be written, but in practice this kind of testing is also performed manually. Many of these issues surface only when problems are being run at large scale. \end{itemize} \iffalse \begin{itemize} \item Testing and verification \begin{itemize} \item Often application is mathematically complex \item Need to devise test suite for verification \item Debugging is also important \item Iterative process, as machines and software are updated and can have different users of codes \item May not be logically complex but a lot of combinations going on. \item Designing and maintaining tests takes longer than writing the code. \end{itemize} \item Code development itself is also labor-expensive, need automated approach \begin{itemize} \item Design in head what is the goal of the code, need tools to express \item Are we assuming the abstraction represents the need accurately? \item What is the right granularity of abstraction? Might mismatch physics \item Hierarchy of abstractions at the right level, components, refine for reuse \end{itemize} \item Library: \begin{itemize} \item Covering the diverse need of application and system \item Significant testing, verification, performance tuning, debug \item Have tools available for some general bugs today \item Need time to put all tools together and still manually handle special bugs \item Iterative process as hardware and applications are continuously updated \end{itemize} \item Long-running codes have some aspects that are labor-intensive compared with short-running codes \item Understanding existing systems is not trivial and not easily automatable \begin{itemize} \item Training models are not as well developed \item Development is sometimes built around the culture of microservices, but it can be improved \item Microservices might not be easy to build because the market is smaller than commercial systems \end{itemize} \item Keeping up with hardware (porting to new architectures) is very labor 
intensive today \begin{itemize} \item This can be mitigated through higher-level abstractions \item Every 5-10 years, there is significant refactoring of code to keep up with this \end{itemize} \item Test creation/maintenance is another labor-intensive (and hard to automate) aspect \begin{itemize} \item Running tests itself is already mostly automated today, but generating new tests (or reproducers) is not \item Some efforts are looking at synthesizing large programs to tests, but it is not common \end{itemize} \end{itemize} \fi \iffalse \subsection{The Most-Technically-Difficult Aspects of Programming} \begin{itemize} \item Performance portability: \begin{itemize} \item Design the code to be easily adapted to new HW \item May need different algorithms to take advantage of different platforms \item Hard to verify \end{itemize} \item Asynchronous programming model: \begin{itemize} \item Asynchronous threads/communication are norm today, every execution is different (e.g., task mode) \item Hard to capture dependencies, and consequently hard to verify and debug \end{itemize} \item Reconciliation between good programming design and actual physics demands \begin{itemize} \item Library: cover different HW in middleware, lack specific info from different applications (applications know the specific demands, but libraries are more general) \end{itemize} \item Rapid hardware evolution is making programming hard \begin{itemize} \item Evolution is too rapid to even build the right abstractions in some cases \item We try to stay at the bleeding edge, but do not have enough financial control on hardware evolution \item Going from current heterogeneity to extreme heterogeneity will not make this problem simpler \end{itemize} \item Some “distrust” associated with high-level abstract frameworks \begin{itemize} \item Not easy to build this trust because they have not been around for that long \item General unease against losing some performance for higher productivity
\end{itemize} \end{itemize} Maintain long-lived HPC codes which persist for decades in a dynamic environment with frequent refactorings. \fi \subsection{Requirements for Automation} Increasing the amount of automation in the scientific application development process can increase programmer productivity, and program synthesis can play a key role. In order to be truly helpful, automated tools need to be integrated into the iterative and uncertain scientific discovery process. Experience has taught us that a number of important factors should be considered. \begin{itemize} \item \textbf{Separation of Concerns: } Tools must be constructed and their input formats designed recognizing that relevant expertise is spread across different users and different organizations. A domain scientist may understand very well the kinds of mathematical equations that need to be solved, an applied mathematician may understand very well what kinds of numerical algorithms best solve different kinds of equations, and a performance engineer may understand very well how to tune different kinds of algorithms for a particular kind of hardware; but these people might not work together directly. As a result, it must be possible to compartmentalize the relevant knowledge provided to the tool, to enable both reuse and independent development progress. To the extent that tools support providing mathematical proofs of correctness, the information needed for these proofs should be providable on component interfaces, so the modularity can improve the efficiency, understandability, and stability of the proof process. \item \textbf{Testing, Verification, Debugging: } Tools require specific features in order to assist with verification and debugging. Any human-generated input can, and likely will, contain mistakes. 
Mistakes can take the form of a mismatch between the programmer's intent~\cite{gottschlich:2018:mapl} and the provided input and can also stem from the programmer's overlooking some important aspect of the overall problem. Moreover, tools themselves can have defects. Thus, tools must be constructed to maximize the ability of programmers to find their own mistakes and isolate tool defects~\cite{LeeIPDPS14}. To this end, tools must generally produce information on what they did, and why, and provide options to embed runtime diagnostics helpful for tracking down problems during execution. FPDetect~\cite{das2020efficient} is an example of a tool providing sophisticated, low-overhead diagnostics that might be integrated with an automation workflow. Another example of synthesizable error detectors that can trap soft errors as well as bugs associated with incorrect indexing transformations is FailAmp~\cite{briggs2020failamp}, which, in effect, makes errors more manifest for easier detection. \item \textbf{Community and Support: } State-of-the-art science applications are complex pieces of software, often actively used for decades, and some are developed by large communities. Tools automating this development process need to integrate with existing code bases and support the complexity of production software. Actively used tools need to be supported by an active team. The productivity loss from issues with an undersupported tool, whether defects or missing features, can easily overwhelm the gains from the tool itself. Moreover, tools that generate hardware-specific code need continual development to support the latest hardware platforms. All tools need to evolve over time to incorporate updated scientific and mathematical techniques. Support and adoption are helped by using tools that integrate with popular languages and programming environments, such as C++ and C++ libraries (e.g., Kokkos). 
Tools making use of production-quality compiler components (e.g., Clang as a library) tend to be best positioned for success. Sometimes, however, adoption of such tools is hampered by existing code bases in older or less widely used languages (e.g., C, Fortran), and an opportunity exists for automation tools to assist with translating that code to newer languages. \end{itemize}

\section{Introduction}

Computational methods have become a cornerstone of scientific progress, with nearly every scientific discipline relying on computers for data collection, data analysis, and simulation. As a result, significant opportunities exist to accelerate scientific discovery by accelerating the development and execution of scientific software. The first priority research direction of the U.S. Department of Energy's (DOE's) report on extreme heterogeneity~\cite{vetter2018extreme} and the ninth section of the DOE community's \textit{AI for Science} report~\cite{stevens2020ai}, among other sources, highlight the need for research in AI-driven methods for creating scientific software. This workshop expands on these reports by exploring how \emph{program synthesis}, an approach to automatically generating software programs based on some user intent~\cite{gottschlich:2018:mapl, gulwani:2017:program}---along with other high-level, AI-integrated programming methods---can be applied to scientific applications in order to accelerate scientific discovery. We believe that the promise of program synthesis (also referred to as \emph{machine programming}~\cite{gottschlich:2018:mapl}) for the scientific programming domain is at least the following: \begin{itemize} \item Significantly reducing the temporal overhead associated with software development.
We anticipate increases in productivity of scientific programming by orders of magnitude by making all parts of the software life cycle more efficient, including reducing the time spent tracking down software quality defects, such as those concerned with correctness, performance, security, and portability~\cite{alam:2019:neurips, alam:2016:isca, hasabnis:2020:controlflag}. \item Enabling scientists, engineers, technicians, and students to produce and maintain high-quality software as part of their problem-solving process without requiring specialized software development skills~\cite{ragan-kelley:2013:pldi}. \end{itemize} Program synthesis is an active research field in academia, national labs, and industry. Yet, work directly applicable to scientific computing, while having some impressive successes, has been limited. This report reviews the relevant areas of program synthesis work, discusses successes to date, and outlines opportunities for future work. \subsection{Background on Program Synthesis} Program synthesis represents a wide array of machine programming~\cite{gottschlich:2018:mapl} techniques that can greatly enhance programmer productivity and software quality characteristics, such as program correctness, performance, and security. Specifically, program synthesis incorporates techniques whereby the following may occur: \begin{itemize} \item The desired program behavior is specified, but the (complete) implementation is not. The synthesis tool determines how to produce an executable implementation. \item The desired program behavior or any partial implementation is ambiguously specified. Iteration with the programmer, data-driven techniques, or both are used to construct a likely correct solution.
\end{itemize} \begin{figure} \includegraphics[width=\linewidth]{figures/three_pillars} \caption{The Three Pillars of Machine Programming (credit: Gottschlich et al.~\cite{gottschlich:2018:mapl}).}\label{fig:three_pillars} \end{figure} In \emph{``The Three Pillars of Machine Programming''} nomenclature, program synthesis is classified in the space of \emph{intention}~\cite{gottschlich:2018:mapl} (see Figure~\ref{fig:three_pillars}). The goal of intention is to provide programmers (and non-programmers) with ways to communicate their ideas to the machine. Once these goals have been expressed, the machine constructs (\emph{invents}) a higher-order representation of the user's intention. Next, the machine programming system is free to \emph{adapt} the higher-order invented software representation to a lower-order representation that is specific to the user's unique software and hardware environment. This process tends to be necessary to ensure robust software quality characteristics such as performance and security. When the implementation---the \textit{how} of the program---is unambiguously specified, the process of translating that specified implementation into an executable program is called compilation. Good compiler technology is essential to scientific programming; and as discussed later in this report, opportunities exist for enhancing compiler technology to better enable program synthesis technology. These two areas, program synthesis and compilers, inform each other; and as we look toward the future, state-of-the-art programming environments are likely to contain both synthesis and compilation capabilities. \sidebar{Program synthesis is expected to reduce software development overhead and increase confidence in the correctness of programs. Compiler technology will be essential to enable this technology.} The first program-synthesis systems focused on the automated translation of mathematically precise program specifications into executable code.
These systems performed what is sometimes called \textit{deductive synthesis} and often functioned by trying to match parts of the specification to a library of relevant implementation techniques. With the advent of strong satisfiability modulo theories (SMT) solvers, program synthesis had an important new technology on which to build. An SMT solver can naturally produce counterexamples to an inconsistent set of mathematical assertions or produce an assertion that no counterexamples exist, which is useful for several kinds of verification and synthesis tasks. Writing mathematical specification is often subtle, however, and considerable attention in the synthesis community has focused on \textit{inductive synthesis}: the synthesis of programs based on behavioral examples~\cite{flashfill}. Of course, systems can be both deductive and inductive, which is useful because sometimes a programmer knows some of the desired properties but wishes to fill in the remaining information necessary to construct the program using examples. For example, techniques for \emph{type-and-example-directed synthesis} deductively use types provided by the programmer to guide an inductive search for missing program fragments that satisfy the given examples \cite{DBLP:conf/pldi/OseraZ15,DBLP:journals/pacmpl/LubinCOC20}. Combining inductive synthesis with SMT solver iteration, using solver-generated counterexamples to guide the synthesis process, has also been a fruitful area: counterexample-guided inductive synthesis (CEGIS) has produced exciting results over the past decade. CEGIS and related techniques are good at refining potential solutions that are structurally close to a correct answer, but the techniques have more difficulty in searching the unbounded space of potential program structures. 
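To make the counterexample-guided loop concrete, the following Python toy synthesizes the missing constant in the sketch $f(x) = x + c$ against a behavioral specification. It is purely illustrative: exhaustive checking over a small, bounded integer domain stands in for the SMT-solver-based verifier, and all names are invented for this sketch.

```python
# Toy CEGIS: fill the hole `c` in the sketch f(x) = x + c so that f
# satisfies spec(x, y) for every x in a bounded domain.  A real system
# would use an SMT solver for both the check and the search; here the
# "verifier" simply scans a finite domain for a counterexample.

DOMAIN = range(-100, 101)          # bounded verification domain

def spec(x, y):
    # Desired behavior: f(x) must equal x + 7.
    return y == x + 7

def verify(c):
    """Return a counterexample input, or None if f(x) = x + c meets spec."""
    for x in DOMAIN:
        if not spec(x, x + c):
            return x
    return None

def synthesize(candidates=range(-100, 101)):
    examples = []                  # accumulated counterexamples
    for c in candidates:
        # Cheap screen against counterexamples gathered so far.
        if all(spec(x, x + c) for x in examples):
            cex = verify(c)        # expensive full verification
            if cex is None:
                return c           # no counterexample: synthesis done
            examples.append(cex)   # refine the search with the new example
    raise RuntimeError("no candidate satisfies the specification")

print(synthesize())                # prints 7
```

Even this toy shows the division of labor: a fast check against accumulated counterexamples screens candidates before the expensive full verification. Scaling the same loop from a single numeric hole to unbounded spaces of program structures is precisely where the difficulty noted above arises.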
Evolutionary algorithms have made important contributions to this problem, especially those that encourage the selection of specialists (potential solutions that work for some, but not necessarily all, solution objectives). The deep learning revolution has led to significant advancements in program synthesis as well. The space of potential program structures can be explored by using differentiable programming, reinforcement learning, and other machine learning techniques. Deep learning has also made practical the incorporation of natural language processing into the synthesis process and the generation of natural language comments as part of the synthesis output. Machine learning techniques now power advanced autocomplete features in various development environments, and looking toward the future, advanced tools can explore more than single-statement completions. As program-synthesis technology, driven by advanced deep learning, evolutionary, and verification techniques, moves toward tackling real-world programming problems, imparting the resulting productivity gains to scientific programming will require techniques and capabilities that might not be required for other domains. In this report, we explore the state of and challenges in scientific programming and how research in program synthesis technology and synergistic research in compiler technology might be directed to apply to scientific-programming tasks.

\footnotesize{ \section{Program Synthesis} \subsection{Existing Work in Scientific Programming} In scientific computing, domain-specific languages have a long history, and scientists and engineers have increased the productivity of programming by creating specialized translators from mathematical expressions to code in C, C++, or Fortran. While many of these tools have an implicit implementation specification, they are instructive in the synthesis context as demonstrations of the kinds of high-level descriptions of mathematical calculations that programmers find useful.
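At their core, such translators map a symbolic description of the mathematics onto target-language code. The following Python fragment is a deliberately minimal sketch of that idea; the tuple-based expression encoding is invented for illustration and corresponds to no particular tool.

```python
# Tiny expression-to-C translator sketch: scientific DSLs lower symbolic
# math to low-level code.  Here a nested-tuple expression tree such as
# ("mul", ("add", "x", "y"), 2.0) is emitted as a C statement.

def emit(expr):
    """Recursively emit a C expression for a tuple-encoded tree."""
    if isinstance(expr, str):           # variable reference
        return expr
    if isinstance(expr, (int, float)):  # numeric literal
        return repr(expr)
    op, lhs, rhs = expr                 # binary operator node
    sym = {"add": "+", "sub": "-", "mul": "*", "div": "/"}[op]
    return f"({emit(lhs)} {sym} {emit(rhs)})"

def emit_assignment(target, expr):
    return f"double {target} = {emit(expr)};"

print(emit_assignment("r", ("mul", ("add", "x", "y"), 2.0)))
```

Production translators add type analysis, common-subexpression elimination, and loop generation on top of this basic lowering step.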
Claw~\cite{clement2018claw}/GridTools~\cite{osuna2019report} (stencil generation for climate modeling), Kranc~\cite{husa2006kranc} (stencil generation for numerical relativity), FireDrake~\cite{rathgeber2016firedrake}/FEniCS~\cite{logg2012ffc} (for the generation of finite-element solvers for partial-differential equations), lbmpy~\cite{bauer2020lbmpy} (for the generation of lattice Boltzmann simulations), TCE~\cite{hirata2003tensor} (for the generation of tensor-contraction expressions), and many others have demonstrated the utility of code generation from physical equations. Likewise, for traditional mathematical kernels, high-level code generators have been produced (e.g., SPIRAL~\cite{franchetti2018spiral}, LGen~\cite{spampinato2014basic}, Linnea~\cite{barthels2020automatic}, TACO~\cite{kjolstad2017tensor}, Devito~\cite{lange2016devito}, and the codelet generator in FFTW~\cite{frigo1998fftw}). Some of these tools, such as SPIRAL, perform synthesis as well as code generation because they discover new algorithms from the exploration of relevant mathematical properties. Autotuning is a common technique used to produce high-performance code, and machine learning can be used to dynamically construct surrogate performance models to speed up the search process. Autotuning can be used simply to choose predetermined parameter values, to explore complex implementation spaces, or anything in between. Stencil code generators such as PATUS~\cite{christen11-patus} and LIFT~\cite{steuwer16-lift} use autotuning to find the best tile sizes. ATLAS~\cite{whaley2001automated} uses this technique to select implementation parameters for linear algebra kernels. Recent work, making use of the ytopt~\cite{ytopt} autotuning framework, has explored directly searching complex hierarchical spaces of loop nest transformations, as shown in Figure~\ref{fig:looptransform-tree}, using a tree search algorithm~\cite{llvmhpc20-mctree}.
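The essence of parameter autotuning can be sketched in a few lines: enumerate candidate tile sizes and keep whichever one a performance model (or an actual measurement) ranks best. The cost model below is a made-up surrogate for illustration only; real frameworks such as ytopt substitute empirical timings or learned models.

```python
# Minimal autotuning sketch: exhaustively search tile sizes for a blocked
# loop nest, ranking candidates by a toy surrogate cost model.  The model
# penalizes tiles whose working set overflows a (hypothetical) cache and
# small tiles that incur high loop overhead.

def surrogate_cost(tile, n=1024, cache_lines=512):
    working_set = tile * tile                  # elements touched per tile
    overflow = max(0, working_set - cache_lines * 8)
    overhead = (n // tile) ** 2                # number of tile visits
    return overflow * 10 + overhead

def autotune(candidates=(4, 8, 16, 32, 64, 128)):
    """Return the candidate with the lowest modeled cost."""
    return min(candidates, key=surrogate_cost)

print(autotune())                              # prints 64 under this model
```

Replacing `surrogate_cost` with a timing harness, and the flat candidate tuple with a hierarchical space of composed transformations, yields the kind of tree-structured search shown in Figure~\ref{fig:looptransform-tree}.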
Some domain-specific languages, such as Halide, have been designed with a separate scheduling language that can be used by this kind of autotuning search technique. \begin{figure*} \includegraphics[width=\linewidth]{figures/looptransform-tree} \caption{Autotuning search space for composed loop transformations}\label{fig:looptransform-tree} \end{figure*} To synthesize code that examines runtime data to perform optimizations, inspector/executor strategies~\cite{Mirchandaney88,Saltz91} employ inspector code that at runtime determines data reorderings, communication schedules for parallelism, and computation reorderings that are then used by the transformed executor code~\cite{Saltz97,Stichnoth97,DingKen99,MitchellCarFer99,Mellor-Crummey2001,HanT:TPDS06,Wu2013,Basumallik06}. Strout and others have developed inspector/executor strategies such as full sparse tiling that enable better scaling on the node because of reduced memory bandwidth demands~\cite{StroutLCPC2002,Strout14IPDPS,Demmel08,Ravishankar12SC}. Ravishankar et al.~\cite{Ravishankar2015} composed their distributed-memory parallelization inspector/executor transformation with affine transformations that enable vector optimization. Compiler support for such techniques consists of program analyses to determine where such parallelization is possible and the compile-time insertion of calls to runtime library routines for performing aspects of the inspector~\cite{rauchwerger95scalable}. Some early work applying general-purpose synthesis techniques to scientific computing problems has appeared. For example, AccSynt~\cite{collieprogram} applied enumerative synthesis to the problem of generating efficient code for GPUs. AutoPandas~\cite{bavishi2019autopandas} uses machine-learning-based search to automate the programming of data-table-manipulating programs. 
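A minimal inspector/executor sketch follows; it illustrates the strategy only and mirrors no particular library's API. The inspector examines an index array once at runtime and derives a locality-friendly iteration order, which the executor then reuses on every time step.

```python
# Inspector/executor sketch for an irregular gather y[i] = x[idx[i]].
# The inspector groups iterations by the "tile" of x they touch, so the
# executor traverses x with better locality while producing the same
# result as the original iteration order.

def inspector(idx, tile=4):
    """Build a schedule: iteration numbers grouped by the tile of x accessed."""
    groups = {}
    for i, j in enumerate(idx):
        groups.setdefault(j // tile, []).append(i)
    return [i for t in sorted(groups) for i in groups[t]]

def executor(x, idx, schedule):
    """Run the gather using the precomputed, reordered schedule."""
    y = [0.0] * len(idx)
    for i in schedule:             # locality-friendly traversal
        y[i] = x[idx[i]]
    return y

x = [10.0, 20.0, 30.0, 40.0, 50.0, 60.0, 70.0, 80.0]
idx = [7, 0, 5, 1, 6, 2]
sched = inspector(idx)             # pay inspection cost once ...
print(executor(x, idx, sched))     # ... reuse the schedule every step
```

The payoff in real codes comes from amortization: the inspection cost is paid once, while the reordered executor runs many times over the same sparsity pattern.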
Lifting, the process of extracting a high-level model from the source code of an existing implementation, is important for handling legacy code and extracting knowledge from existing code bases~\cite{DBLP:conf/pldi/KamilCIS16, DBLP:conf/pldi/CheungSM13, DBLP:conf/sigmod/AhmadC18}. Dexter~\cite{ahmad2019automatically}, for example, can translate C++ kernel implementations into Halide, and OpenARC~\cite{ACC2FPGA-ICS18} can automatically translate existing OpenACC/OpenMP programs to OpenCL specialized for FPGAs.

\subsection{Existing Work in Other Areas}

A lot of existing work on program synthesis has focused on general programming tasks, such as writing functions that operate on common data structures like strings, lists, and trees \cite{DBLP:conf/pldi/OseraZ15,DBLP:journals/pacmpl/LubinCOC20}. Recent work has tended to focus on tools that use natural language, behavioral examples, or both as input. The most widely deployed synthesis system is perhaps the Flash Fill feature in Microsoft Excel 2013+.
Flash Fill can synthesize a small program to generate some column of output data given some columns of input data and some filled-in examples. The combination of deep learning with the large amount of publicly available source code (e.g., on GitHub) has provided opportunities for machine learning methods to capture knowledge from existing code bases at a massive scale. The resulting models can then be used for relatively simple tasks, such as autocomplete capabilities in editors, but can also drive complex search techniques in sophisticated synthesis systems. Past academic work in restricted domains, such as SQLizer~\cite{yaghmazadeh2017sqlizer}, which translates natural language to SQL queries, has begun inspiring commercial implementations. One interesting aspect of mining data from version-control repositories is that not only is the code itself available, but its development history can also provide critical data. Systems that learn from past commits have been able to learn likely defects and can suggest fixes. Sketch~\cite{solar2007sketching}, Rosette~\cite{torlak2013growing}, Bayou~\cite{murali2017neural}, Trinity~\cite{martins_trinity_2019}, HPAT~\cite{totoni2017hpat}, and many others have explored different aspects of data-driven synthesis. The autocomplete features in modern development environments are starting to incorporate advanced machine learning technology. Microsoft, for example, has built complex programming-language models for use with its technologies, and both Codota and Tabnine provide intelligent completion for several programming languages. Commercial tools have started to offer more advanced programming assistance, such as suggesting code refactorings to increase the productivity of maintenance tasks. Expanding on these techniques, researchers have demonstrated unsupervised translation between different programming languages~\cite{lachaux2020unsupervised}.
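In the spirit of programming by example, the toy below enumerates compositions of a tiny string DSL until one is consistent with all given input/output pairs. The DSL is invented for illustration; Flash Fill's actual language and ranking-based search are far richer.

```python
# Programming-by-example toy: search compositions of a small string DSL
# for the shortest program consistent with every input/output example.
from itertools import product

OPS = {
    "lower":    str.lower,
    "upper":    str.upper,
    "strip":    str.strip,
    "initials": lambda s: "".join(w[0] for w in s.split()),
}

def synthesize(examples, max_len=3):
    """Return the shortest op sequence matching all examples, or None."""
    for n in range(1, max_len + 1):
        for prog in product(OPS, repeat=n):
            def run(s, prog=prog):
                for op in prog:
                    s = OPS[op](s)
                return s
            if all(run(inp) == out for inp, out in examples):
                return prog
    return None

examples = [("Ada Lovelace", "AL"), ("Grace Hopper", "GH")]
print(synthesize(examples))        # finds the single-step program ('initials',)
```

The enumerate-and-check structure is the same inductive-synthesis loop discussed earlier; deployed systems replace brute-force enumeration with version-space algebras or learned guidance to scale to realistic DSLs.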
In addition, considerable effort has been put into developing techniques for program repair. In the future, these kinds of technologies can be made available for scientific programmers as well, with models customized for relevant programming languages and libraries. \sidebar{Integrating sophisticated example- and type-driven program synthesis and automated program-repair techniques into modern development environments remains an ongoing challenge.} \emph{IDE Integration.} Integrating sophisticated example- and type-driven program synthesis and automated program-repair techniques into modern development environments remains an ongoing challenge. The Mnemosyne project\footnote{\url{https://grammatech.gitlab.io/Mnemosyne/docs/}} provides a framework for the integration of program synthesis techniques for code generation (e.g., Trinity~\cite{martins_trinity_2019}), as well as type inference~\cite{pradel2019typewriter,wei2020lambdanet} and test generation.\footnote{\url{https://hypothesis.readthedocs.io/en/latest/}} In this system, independent synthesis \emph{modules} communicate with each other and with a user's IDE using Microsoft's Language Server Protocol.\footnote{\url{https://microsoft.github.io/language-server-protocol/}} Programmatic communication between modules enables workflows in which multiple modules collaborate in multiphase synthesis processes. For example, the results of an automated test generation process may trigger and serve as input to subsequent automated code synthesis or program repair processes. Another significant challenge is that program synthesis is often requested when the program is \emph{incomplete}, that is, when there are missing pieces or errors that the programmer hopes the synthesizer can help with. In such situations, however, standard techniques for parsing, typechecking, and evaluation can fail. Some or all of these may be necessary for the synthesizer to proceed.
Recent work on formal reasoning about incomplete programs by using \emph{holes} has started to address this issue \cite{DBLP:conf/popl/OmarVHAH17,DBLP:journals/pacmpl/OmarVCH19}, and the Hazel programming environment is being designed specifically around this hole-driven development methodology \cite{DBLP:conf/snapl/OmarVHSGAH17}. Programs with holes are also known as program \emph{sketches} in the program synthesis community \cite{DBLP:conf/aplas/Solar-Lezama09}. Recent work on type-and-example-directed program sketching that takes advantage of these modern advances in reasoning about incomplete programs represents a promising future direction for human-in-the-loop program synthesis \cite{DBLP:journals/pacmpl/LubinCOC20}. \emph{Reversible Computation.} The reversible computation paradigm extends the traditional forward-only mode of computation with the ability to compute deterministically in both directions, forward and backward. It allows the program to reverse the effects of the forward execution and go backward to a previous execution state. \emph{Reverse execution} is based on the idea that for many programs there exists an inverse program that can uncompute all results of the (forward) computed program. The inverse program can be obtained automatically either by generating reverse code from a given forward code or by implementing the program in a reversible programming language, whose compiler offers the capability to automatically generate both the forward and the inverse program. Alternatively, an interpreter for a reversible language can execute a program in both directions, forward and backward. For example, the reverse C compiler presented in \cite{perumalla2013} generates reverse C code for a given C (forward) code.
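The core idea of reverse execution can be sketched as an interpreter that records the exact inverse of each forward operation, so the computation can later be unwound. This is a toy stand-in for generated reverse code, not a model of any particular compiler.

```python
# Reversible-execution sketch: every forward operation pushes its inverse
# onto a trace, so undo_all() can deterministically restore any earlier
# state by replaying the inverses in reverse order.

class ReversibleMachine:
    def __init__(self, state):
        self.state = dict(state)
        self.trace = []                  # stack of inverse operations

    def add(self, var, amount):          # x += a; inverse is x -= a
        self.state[var] += amount
        self.trace.append(("add", var, -amount))

    def swap(self, a, b):                # swap is its own inverse
        self.state[a], self.state[b] = self.state[b], self.state[a]
        self.trace.append(("swap", a, b))

    def undo_all(self):                  # reverse execution
        while self.trace:
            op, *args = self.trace.pop()
            if op == "add":
                var, amount = args
                self.state[var] += amount
            elif op == "swap":
                a, b = args
                self.state[a], self.state[b] = self.state[b], self.state[a]

m = ReversibleMachine({"x": 1, "y": 2})
m.add("x", 10)
m.swap("x", "y")
m.add("y", 5)
print(m.state)                           # forward result
m.undo_all()
print(m.state)                           # original state restored
```

Reverse code generation performs this inversion statically instead of tracing at runtime, and incremental state saving (as evaluated in the work cited below) trades trace storage against recomputation cost.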
The imperative reversible language Janus~\cite{YokoyamaGlueck:2007:Janus} allows both the interpretation of a Janus program, where every language construct has standard and inverse semantics, and the generation of forward and reverse C code for a given Janus program.\footnote{Online Janus interpreter at \url{https://topps.diku.dk/pirc/?id=janus}.} Over the years, a number of theoretical aspects of reversible computing have been studied, dealing with categorical foundations of reversibility, foundations of programming languages, and term rewriting, considering various models of sequential computations (automata, Turing machines) as well as concurrent computations (cellular automata, process calculi, Petri nets, and membrane computing). An overview of the state of the art and use cases can be found in \cite{Ulidowski2020ReversibleCE}, titled ``Reversible Computation -- Extending Horizons of Computing.'' Reversible computation has attracted interest for multiple applications, covering areas as different as low-power computing \cite{landauer61}, high-performance computing with optimistic parallel discrete event simulation \cite{schordan15,cingolani17,DBLP:journals/ngc/SchordanOJB18}, robotics \cite{laursen15}, and reversible debugging \cite{chen01}. In \cite{Schordan2020}, the generation of forward and reverse C++ code from Janus code, as well as automatically generated code based on incremental state saving, is systematically evaluated. The example discussed in detail is a reversible variant of matrix multiplication and its use in a benchmark for optimistic parallel discrete event simulation. \emph{Automated Machine Learning.} Typically, the ML pipeline comprises several components: preprocessing, data balancing, feature engineering, model development, hyperparameter search, and model selection. Each of these components can have multiple algorithmic choices, and each of these algorithms can have different hyperparameter configurations.
Configuring the whole pipeline is beyond human experts, who therefore often resort to trial-and-error methods, which are nonrobust and computationally expensive. Automated machine learning (AutoML) \cite{automl_book} is a technique for automating the design and development of an ML pipeline. Several approaches developed for AutoML can be leveraged for program synthesis and autotuning. A related field is programming by optimization \cite{hoos2012programming}, where the algorithm designer develops templates and an optimization algorithm is used to compose these templates to find the right algorithm to solve a given problem. These methods have been used to develop stochastic local search methods for solving difficult combinatorial optimization problems.

\subsection{Expanding on Existing Work}

Existing work on program synthesis, while continuing to evolve to address programming challenges in a variety of domains, can be expanded to specifically address the needs of the scientific programming community. \begin{itemize} \item \textbf{Semantic Code Equivalence/Similarity: } The domain of semantic code similarity~\cite{Perry.SemCluster.PLDI19} aims to identify whether two or more code snippets (e.g., functions, classes, whole programs) are attempting to achieve the same goal, even if the way they go about achieving that goal (i.e., the implementation) is largely divergent. We believe this is one of the most critical areas of advancement in the space of machine programming. The reason is that once semantic code similarity systems demonstrate a reliable level of efficacy, they will likely become the backbone of, and enable deeper exploration of, many auxiliary machine programming (MP) systems (e.g., automatic bug detection, automatic optimization, program repair)~\cite{ben-nun:2018:neurips, luan:2019:aroma, ye:2020:misim, Retreet,Dantoni.Qlose.CAV16,Perry.SemCluster.PLDI19}. \sidebar{Program synthesis might address challenges inherent in targeting heterogeneous hardware architectures and generating performance-portable code.} \item \textbf{Hardware Heterogeneity: } Program synthesis might address challenges inherent in targeting heterogeneous hardware architectures, in cases where specialized code is needed (perhaps ``superoptimizations''), where specialized algorithms are needed, and where specialized ``scheduling'' functions are needed to dynamically direct work and data to the most appropriate hardware. Different data structures, in addition to different loop structures, may be required to achieve acceptable performance on different kinds of hardware.
\item \textbf{Performance Portability: } Program synthesis might address the challenge of generating code that performs well across a wide variety of relevant hardware architectures~\cite{Sabne:MICRO15}. This synthesis procedure might account for the later ability to autotune the implementation for different target architectures. \item \textbf{Data Representation Synthesis: } Optimizing data movement for performance portability demands the ability to synthesize the representation of data to take advantage of hardware features and input data characteristics such as sparsity. Data representation synthesis includes data layout considerations and potentially should be coordinated with storage mappings that specify storage reuse. Complex interactions among algorithms, memory subsystems, available parallelism, and data motivate delaying data structure selection until adequate information is available to make beneficial decisions. Existing data structure synthesis research~\cite{Hawkins11,Loncaric2018,DBLP:journals/pvldb/YanC19} has focused on more general relational or map-based data structures and must be extended to scientific computing domains. \item \textbf{Numerical Properties: } Scientific programs are often characterized by numerical requirements for accuracy and sensitivity. Synthesis techniques might address the challenges in finding concrete implementations of mathematical algorithms meeting these requirements. This problem is made more difficult because these requirements are often not explicitly specified. Instead, they might be only indirectly specified by the need for some postprocessing step to meet its own requirements. In addition, the properties being extracted from the output of the algorithm might be fundamentally statistical in nature (e.g., a two-point correlation function), making verification of the program properties itself a fundamentally statistical process.
\item \textbf{Workflows: } Program synthesis might address the challenges in correctly and efficiently composing different analysis and simulation procedures into an overall scientific workflow. This work might include the generation of specialized interfaces to enable coupling otherwise-modular components with low overhead. It might also include helping programmers use existing APIs to combine existing components to accomplish new tasks. \item \textbf{Translation: } The existing code base of scientific software contains large amounts of code in C, C++, Fortran, Python, and other languages. Much of this code lacks high-level descriptions; moreover, combining codes written in different programming languages is often difficult. Worse, even within the same language, different parallel or concurrent programming models do not compose, for example, Kokkos vs.\ OpenMP vs.\ SYCL in C++, OpenMP vs.\ OpenACC in C/Fortran~\cite{SC20:CCAMP}, Dask vs.\ Ray vs.\ Parsl~\cite{babuji2019parsl} in Python. Program synthesis and semantic representations, such as Iyer et al.'s program-derived semantics graph~\cite{iyer:2020:PSG}, can help by performing lifting and translation of existing code between different languages and representations and by providing better documentation. \end{itemize} Program synthesis can act as an automated facility, but given the often ambiguous nature of expressed programming goals, synthesis tools are expected to interact with programmers in a more iterative manner. A program synthesis tool can act as an intelligent advisor, offering feedback in addition to some automated assistance. Synthesis tools can prompt programmers for additional information. However, how to best interact with scientists regarding different kinds of scientific programming tasks is an open question in need of further exploration.
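The input-output facet of the semantic code similarity idea described above can be grounded with a toy differential tester: two divergent implementations are judged behaviorally similar when they agree on randomized inputs. This is only a sketch of one facet of the problem; the function names and the randomized-testing approach are illustrative and are not how SemCluster or the other cited systems work.

```python
import random

def sum_of_squares_loop(xs):
    # Imperative implementation.
    total = 0
    for x in xs:
        total += x * x
    return total

def sum_of_squares_comprehension(xs):
    # Divergent implementation attempting the same goal.
    return sum(x * x for x in xs)

def behaviorally_similar(f, g, trials=200, seed=0):
    """Judge two snippets semantically similar if they agree on random inputs.

    This captures only input-output behavior; real similarity systems also
    compare structural and internal-state information.
    """
    rng = random.Random(seed)
    for _ in range(trials):
        xs = [rng.randint(-100, 100) for _ in range(rng.randint(0, 20))]
        if f(xs) != g(xs):
            return False
    return True
```

A randomized check like this can refute equivalence but can only suggest (not prove) it, which is why the prose above pairs similarity detection with verification concerns.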
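The Numerical Properties item above notes that accuracy requirements are often implicit. A minimal illustration of why the concrete implementation matters: naive accumulation silently drops small addends in double precision, while compensated (Kahan) summation preserves them. The input is deliberately contrived for illustration.

```python
def naive_sum(xs):
    total = 0.0
    for x in xs:
        total += x
    return total

def kahan_sum(xs):
    # Compensated summation: track the low-order bits lost at each step.
    total = 0.0
    c = 0.0  # running compensation for lost low-order bits
    for x in xs:
        y = x - c
        t = total + y
        c = (t - total) - y
        total = t
    return total

# Ill-conditioned input: one large value followed by many tiny ones.
# The exact result is 1e16 + 1000, but in double precision 1e16 + 1.0
# rounds back to 1e16, so naive summation loses every tiny addend.
data = [1e16] + [1.0] * 1000
```

Both functions satisfy the same functional specification "sum the list," which is exactly why a requirements language for scientific code would also need to express accuracy.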
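The Data Representation Synthesis item above argues for delaying data structure selection until input characteristics are known. A deliberately simple sketch of that decision: choose between a dense layout and a coordinate (COO) mapping based on measured sparsity. The threshold, format names, and helper functions here are illustrative assumptions, not a cited system.

```python
def to_coo(dense):
    """Coordinate (COO) representation: {(row, col): value} for nonzeros."""
    return {(i, j): v
            for i, row in enumerate(dense)
            for j, v in enumerate(row) if v != 0}

def choose_representation(dense, sparsity_threshold=0.1):
    """Pick a representation once input characteristics are known.

    Below the nonzero-density threshold, COO wins on memory; otherwise the
    dense layout is kept for contiguous access.
    """
    rows, cols = len(dense), len(dense[0])
    nnz = sum(1 for row in dense for v in row if v != 0)
    if nnz / (rows * cols) < sparsity_threshold:
        return "coo", to_coo(dense)
    return "dense", dense

# A 100x100 matrix with a single nonzero entry selects the COO form.
matrix = [[0] * 100 for _ in range(100)]
matrix[3][7] = 5.0
fmt, rep = choose_representation(matrix)
```

A real data representation synthesizer would of course weigh access patterns and hardware features as well, as the prose notes; sparsity alone is the simplest observable input characteristic.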
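The iterative, example-driven interaction described in the preceding paragraph can be made concrete with a deliberately tiny enumerative synthesizer: given a handful of input-output examples (a partial specification), it searches a fixed expression grammar breadth-first for the first consistent program. This is a textbook-style sketch, far from the scale of real synthesis tools.

```python
import itertools

def synthesize(examples, rounds=2):
    """Breadth-first enumerative search over a tiny expression grammar.

    Grammar: e ::= x | 1 | 2 | (e + e) | (e * e)
    Returns the first expression (as a string) matching all (x, y) examples,
    or None if nothing within the search depth fits.
    """
    def fits(expr):
        try:
            return all(eval(expr, {"x": xi}) == yi for xi, yi in examples)
        except Exception:
            return False

    seen = ["x", "1", "2"]
    for expr in seen:
        if fits(expr):
            return expr
    for _ in range(rounds):
        # Grow the candidate pool by combining everything seen so far.
        seen = seen + ["(%s %s %s)" % (a, op, b)
                       for a, b in itertools.product(seen, repeat=2)
                       for op in ("+", "*")]
        for expr in seen:
            if fits(expr):
                return expr
    return None
```

Because examples underconstrain intent, several expressions may fit; an interactive tool would show the candidate back to the scientist and solicit distinguishing examples, which is exactly the open interaction question raised above.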
\sidebar{An important relationship exists between program synthesis and explainable AI.} An important relationship exists between program synthesis and explainable AI. State-of-the-art program synthesis techniques depend on deep learning and other data-driven approaches. As a result, the extent to which the program synthesis process itself is explainable depends on explainable AI, including machine learning techniques. Program synthesis itself, on the other hand, can distill a data-driven learning and exploration process into a set of symbolic, understandable rules. These rules can form an explainable result from the machine learning process and thus serve as a technique for producing explainable AI processes. \iffalse How do we differentiate between compilation and synthesis? Synthesis has non-determinism vs compilation which is a straight-forward deterministic application of rules \begin{itemize} \item Interest in making JIT technology work better for HPC \begin{itemize} \item You don’t know until you get problem input what the bounds will be, but once you have them they are often fixed. \item This could simplify a lot of Template programming. \end{itemize} \item Sophisticated tools to allow introspection during runtime to adjust control flow \item Selecting the most appropriate backend for accelerators (device scheduling). \item Performance portability. \item Tools for extreme heterogeneity \item Tools for workflow synthesis, coupling between different types of codes, analysis \item Interface generation between different tools (output of one tool as input to other) \item Reduce number of performance knobs given to users. Automate performance tuning. 
\item Useful, actionable information from compiler when it finds an error or performance issue in program (e.g., as in old Cray compilers) \end{itemize} \begin{itemize} \item Code synthesis that spans the motifs of scientific computing \begin{itemize} \item Don’t change front end for each motif \item Hide synthesis behind existing libraries that users know \end{itemize} \item Domain-specific frontends (domain-specific specification languages) \begin{itemize} \item Halide for stencils \item SPIRAL \item Claw, GridTools \end{itemize} \item Library developers should provide in/out spec \begin{itemize} \item Is there a language for this? Interface generation \end{itemize} \item Use Python as scientific frontends (numpy, ...) \begin{itemize} \item Map python to heterogeneous architectures \end{itemize} \item Entire toolchain is more complex: build (cmake, autoconf, containers, batch system, etc) \end{itemize} \fi \iffalse \begin{itemize} \item Repair, superoptimization, generation of API snippets, data wrangling. \item Learning based on MPI corpora. Expand approaches to deal with Matlab, R, Fortran, etc. \item Synthesis can be applied to all levels of the software stack. Low level: superoptimization (generating highly optimized fortran code), high level: Matlab, R. Bridging the gap between R to MPI. \item Migrate code from Fortran to C++ or other high level. \item Unsupervised Translation of Programming Languages, Facebook AI https://arxiv.org/pdf/2006.03511.pdf \item Reasoning about floating point. Precision tuning. Constraint solvers need to be expanded to handle floating point. \item Synthesize data driven heuristics. e.g. Programming by optimization. https://dl.acm.org/doi/10.1145/2076450.2076469 But the system should be able to synthesize explainable rules. \end{itemize} Some research questions \begin{itemize} \item Given standardized APIs, how do you generate programs to use them? 
\item Neural architecture search, maybe synthesis researchers should collaborate with them. \item Performance portability - a critical issue, national labs hire people just to rewrite the code from one platform to another. Can program synthesis help in this rewriting process? E.g. how to automatically convert CPU code -> GPU code (KNL to GPU). \end{itemize} Needs \begin{itemize} \item We need algorithms that can generalize from small sets of examples instead of large corpora \begin{itemize} \item Synthesis for synthesis could be a useful approach \item Learning in the context of a language of optimizations could be useful \item Generating synthetic examples could be another solution \end{itemize} \item We need algorithms that can transfer well from one domain to another \begin{itemize} \item Leverage insights from non-scientific code \end{itemize} \end{itemize} Case study (from Alvin Cheung): \begin{itemize} \item Background: astrophysicists run some parallel computations in the field (the "edge"), close to their telescopes. Limited bandwidth to datacenter, where the rest of the computation happens: \item A partitioning problem: what computation to perform on the edge given the limited bandwidth? \begin{itemize} \item lots of prior work on using ILP for partitioning \end{itemize} \item Dynamic prioritization: the computation of interest may differ each night, depending on humans spotted in the sky \begin{itemize} \item unclear what the best specs are for communicating human priorities \end{itemize} \end{itemize} Many potential burning problems: performance, portability, heterogeneity, numerical accuracy (and other approximations), power consumption. 
Types: a programmer-friendly mechanism for expressing non-functional properties: asymptotic complexity, memory and cache usage, etc. Synthesis for explainability (data to summaries): ex: abstract execution traces during debugging Synthesize transformations, eg to ease creation of scheduling languages Synthesize abstractions for new computation domains \begin{itemize} \item related work: synthesis of abstractions for a domain (Kevin Ellis et al) \end{itemize} Using synthesis to move from functional synthesis to non-functional metrics (e.g. reducing power consumption). Allowing approximations is a promising direction for synthesis. Synthesis may be able to help with portability of code across hardware, languages, and architecture. Program repair is very hot in some communities, and is closely related to synthesis. There are many parallel efforts here; how much can transfer to HPC? Program repair can potentially help in debugging. Program repair is a restricted synthesis problem. Types can capture non-functional properties, and can be utilized in this setting. Exploring synthesis in the infrastructure surrounding HPC code, like configuration files. Opportunities around upgrading and adapting existing code \begin{itemize} \item Use (unit) tests to recover specifications and examples for programming-by-example systems \begin{itemize} \item Challenge: Formalizing the analog of distribution over input examples (analogous to training data for ML components) \end{itemize} \item Adapt algorithms to be efficient on many different data structures. \begin{itemize} \item For example, there are many different implementations of sparse matrices, but it is too much work to rewrite the same algorithms for all of them, so the algorithmic coverage is “sparse”.
\item Some discussion about whether the Spiral project @CMU applies \end{itemize} \item Synthesizing approximating solutions from existing programs that trade off fidelity for faster computation \end{itemize} Opportunity: How to scale up from trivial problems \begin{itemize} \item Composing smaller problems \item Synthesizing larger-scale programs feels like 5+ years, so focus on components \begin{itemize} \item Identifying repeatable patterns in scientific computing \item Complexity is often not in the control structures, but in mathematical expressions \end{itemize} \end{itemize} Additional opportunities \begin{itemize} \item Synthesis of workflows (Cf. Zhang, et al, Automated Workflow Synthesis) \item Learning from simulations: Build on success in predicting protein folding without physics-based modeling or understanding \end{itemize} Parallelizing and vectorizing of auto-differentiation code. Multi-modal input (NLP, I/O examples, etc) important. Need to improve human-in-the-loop interaction with program synthesis. How do we bridge that gap between areas of synthesis. Especially statistical models of code, which helps with common application development. Search methods look for the novel/corner cases, but are slower. there should be a lot of existing comments. Can we use those to synthesize annotations? work on testing and debugging big data applications (UCLA). http://people.cs.vt.edu/~gulzar/ \fi \sidebar{Tools useful for scientific programming may need to leverage transfer learning techniques in order to apply knowledge from larger programming domains to scientific programming.} \subsection{What are the challenges in applying program synthesis to scientific computing and HPC problems?} \begin{itemize} \item \textbf{Small Sample Sizes: } Scientific programming comprises a small part of the overall programming market. 
This situation implies that tools useful for scientific programming, regardless of how the tool development is funded, may need to leverage transfer learning techniques in order to apply knowledge from larger programming domains to scientific programming. The challenges associated with the small sample sizes of scientific programs are exacerbated by the fact that scientific programs are effectively optimizing a variety of different objectives (e.g., performance, portability, numerical accuracy), and how the relevant trade-offs were made by the application developers is often unknown. The challenge of small sample size applies not only to the code itself but also to the set of programmers; and since many synthesis tools are interactive, training data on how humans most effectively interact with the tools is required but challenging to obtain. \item \textbf{Lack of Relevant Requirements Languages: } There does not currently exist, either as an official or a de facto standard, an input language capable of expressing the requirements of a scientific application. While work has been done on expressing requirements for general-purpose software, that work has focused largely on correctness conditions. The performance requirements, numerical requirements, and other properties needed for scientific applications have not been captured in a requirements language. Work also has been done on extracting requirements from natural language, unit tests, and other informal artifacts associated with existing code, and these techniques should be applied to scientific applications as well; but in cases where a user might wish to directly supply requirements, no broadly applicable way to do so currently exists. Significant domain knowledge is involved in constructing scientific applications, but this knowledge, such as how boundary conditions should be handled and how accurate the results need to be, is often not explicitly stated.
This lack is also manifest in the difficulty of getting different synthesis tools to interoperate effectively. \item \textbf{Verification and Validation: } Verifying that a scientific program satisfies its requirements is a complicated process, and validating the application is likewise a complicated but essential part of the scientific method. The lack of a suitable requirements language, the ambiguities inherent in natural language specifications, and other implicit parts of the code requirements make even defining what ``correct'' means a challenge. For many configurations of interest, no analytic solution exists against which one can compare with certainty. Moreover, the use of randomized and nondeterministic algorithms, limited numerical stability, and the use of low-precision and approximate computing make even the process of comparing with a reference solution difficult. Scale also makes verification difficult: the computational resources required to test an application might not be readily or regularly available. Verification can take multiple forms, all of which are often important: { \begin{itemize} \item A mathematical verification (proof) of correctness \item The results of randomized testing showing no problems \item A human-understandable explanation of what the code does and why it is correct \end{itemize} } Verification often assumes that the base system functions correctly; but especially on leading-edge hardware, defects in the hardware and system software can be observed, and narrowing down problems actually caused by system defects is an important goal. How to measure comprehensibility, succinctness, and naturalness and otherwise ensure that code can be, and is, explained is an open question. For applications that simulate physical processes, the process of validating that the result matches reality sufficiently well can involve expensive and sometimes difficult-to-automate physical experiments.
\item \textbf{Legacy Code: } Existing scientific code bases have large amounts of code in Fortran and C, generally considered legacy programming languages. Because of a lack of robust, modular components for processing code in these legacy languages and because of the smaller number of samples for training in these legacy languages, it can be challenging to apply program synthesis tools to code that must interact with these code bases. Moreover, even if modern languages such as C++ or Python are used, the libraries used for scientific development are often distinct from the libraries used more generally across domains. Some of these libraries use legacy interface styles (e.g., BLAS) that present some of the same challenges as do legacy programming languages. \item \textbf{Integration and Maintenance: } The use of tools that generate source code has long been problematic from an integration perspective, especially if the code generation process is nondeterministic or depends on hardware-specific measurements or on human interaction. With all such tools, difficult questions must be addressed by the development team: { \begin{itemize} \item Should the tool be run as part of the build process? \item If run as part of the build process, builds become nondeterministic and perhaps slower, both because of the time consumed by the tool and because the requirements associated with making sufficiently reliable performance measurements may limit build parallelism. \item If not run as part of the build process, how is the output cached? Is it part of the source repository? How is the cache kept synchronized with the tool's input? Do all developers have access to all of the relevant hardware to update the cached output? Alternatively, can the tool generate performance-portable code? If the tool is interactive, how are invocation-to-invocation changes minimized? 
\end{itemize} } \sidebar{Synthesis tools face challenges, not only in interfacing with existing code, but also in interfacing with each other.} \item \textbf{Composability: } As with other software components and tools, synthesis tools face challenges, not only in interfacing with existing code, but also in interfacing with each other. The properties of code generated by one tool may need to be understood by another synthesis tool, and the tools may need to iterate together in order to find an overall solution to the programming challenge at hand~\cite{aleen:2009:asplos}. The Sparse Polyhedral Framework~\cite{Strout18} provides a possible foundation for composing data and schedule transformations with compile-time and runtime components in the context of program synthesis. \item \textbf{Stability and Support:} Like other parts of the development environment, program synthesis tools require a plan for stability and support in order to ensure that defects or missing features in the tools do not block future science work. In order to transition from research prototypes to tools useful to the larger scientific community, the tools must be built on robust, well-maintained infrastructures and must be robust and well maintained themselves. \item \textbf{Search Space Modeling for Program Synthesis:} The search space of program synthesis for scientific workloads is complex, rendering many search algorithms ineffective. For example, in autotuning, the search can be formulated as a noisy, mixed-integer, nonlinear mathematical optimization problem over a decision space comprising continuous, discrete, and categorical parameters. In LLVM loop optimization, we are faced with a dynamic search space that changes based on the decisions at the previous step. Modeling the search space of these problems will significantly reduce the complexity of the problem and will allow us to develop effective problem-specific search methods.
Application- and architecture-specific knowledge should be incorporated as models and constraints. Moreover, by considering metrics such as power and energy as objectives rather than constraints, we can obtain a hierarchy of solutions and quantify the sensitivities associated with changing constraint bounds. Such models and constraints can significantly simplify (or even trivialize) online optimization at runtime, when quantities such as resource availability and system state or health are known. \end{itemize} \iffalse \begin{itemize} \item Not large enough market in some cases \item Research prototype vs production tool \item Finding people with the right expertise (e.g., advanced compiler) \item Need a tool to develop the tool, or can the tool be generalizable to be used by others \item Alignment of interests of funding agencies \item Long term support and stability of tools \end{itemize} Biggest deterrent of adoption is not knowing whether a tool will be available in the future. \begin{itemize} \item Numerical stability, precision, and accuracy \item Legacy codes \begin{itemize} \item Extreme case in nuclear reaction simulation -- changed only very carefully \item Gradual replacement with newer code (easier if modular) \end{itemize} \item Parallelism - scale with our parallel programming models \item Diversification of different communities \item Debugging \begin{itemize} \item Program synthesis can help by \begin{itemize} \item Emitting diagnostics in synthesized code \item Generating test input \end{itemize} \end{itemize} \item Verification \begin{itemize} \item If you know semantics you can attempt to verify the behavior of the program. The more formal specifications exist for libraries, the better verification can be automated \item Cross-compare different code generator results \begin{itemize} \item e.g. CPU vs FPGA -- numerically different results \end{itemize} \end{itemize} \item Interactions between code generation and runtime systems (e.g.
scheduling, JIT) \end{itemize} Data(program)-scarcity problem in HPC \begin{itemize} \item Data-driven techniques can be challenging in HPC. We don’t have enough programs on Github. \item Multi-objective, are they trying to optimize the same code patterns? \item Science applications remain relatively stable. For example, a good corpus of matrix multiplication implementations and stencil computations may be good enough to generate the optimizations we care about. Also, there may not be a lot of different ways to program accelerators. \item Another idea is automatically generating code variants to help learning optimizations. \item How to learn from relatively small amounts of data? Can we do transfer learning in the context of programs, similar to images. \item HPC code is made out of smaller blocks and these blocks are available in the Github. Rather than trying to find entire programs, retrieve and edit for these smaller blocks. \item Synthesis for synthesis: replacing the black box learning algorithms with a synthesis algorithm. Learning a synthesizer from examples. \item Can we do synthesis for non-scientific codes and then apply for scientific codes? Generalizability issues may arise. \end{itemize} Generic challenges: \begin{itemize} \item how do you specify your intent \item how do you scale the process. \end{itemize} For HPC: \begin{itemize} \item Specify performance intent as opposed to just functional correctness. Performance/precision approximation. Experts have to weigh in on this trade-off. \item Numeric stability. Multiple implementations giving different results. \item Separating out intent vs implementation of numeric computation. Open problem. \item Reasoning about floating point. \item Working with different levels of abstraction: \item Working with legacy code. 
\end{itemize} Categories of synthesis techniques \begin{itemize} \item Local - code completion \item Domain specific - single line text editing, single line text manipulation (scales well), bit vector manipulation (SyGuS), string manipulations \item Lifting - targets legacy code, custom synthesizer \item It was noted that DSLs are hard to develop and take months to mature. Therefore, always relying on a DSL may not be feasible. \end{itemize} Economics: Some problems are hard (deep expertise; performance is a must) while the market (community) is small. \begin{itemize} \item target the (simpler) problems outside HPC core, eg configs, data preparation \item empower HPC experts to homebrew synthesizers, a la parser generation \item wait a decade until all performance programming is HPC programming :-) \end{itemize} Legacy code: in other domains, rapid adoption of DSLs enabled adoption of FM \begin{itemize} \item good news: the lifting problem needs to be solved only once; after that, lifted high-level specs can be retargeted to future platforms \end{itemize} Accessibility of formal methods \begin{itemize} \item dress specifications as unit tests \item develop methods for writing unit tests of memory transfer patterns, etc \end{itemize} Interactive programming tools \begin{itemize} \item lots of recent PS interest in user aspects -- but HPC programmers are often at the extremes of the spectrum of end users <-> ninja programmers \item studying what expert programmers want could strongly inform our directions \end{itemize} No matter how good your tools are, users prefer libraries with large sets of functionality. Similarly, can we leverage many libraries at the same time for synthesis?
Obstacles to adoption in the scientific computing community \begin{itemize} \item How to maintain output code of synthesis tool \item Resistance to DSLs \item Need robust, well-productized tools that integrate with design practices \end{itemize} If the tool shows the user weird things, they are not going to use it. How can domain expertise be used to let the user better understand the results? Correctness and reliability were a large concern \begin{itemize} \item How do you validate auto-generated programs when there are no analytical solutions? \item How do you know you can rely on this tool to produce correct results? \item Challenge and costly today in NLP and other areas \end{itemize} HPC is often seeking to address new problems, which makes it harder to synthesize code \begin{itemize} \item When new code is a combination of different elements this will be easier \item However, how to have a system recognize something it has never seen before will be hard \end{itemize} Implementing something like Euler’s is 100+ lines, beyond the scope of existing program synthesis. Requires domain-specific knowledge in order to know how to ask intelligent questions to bring the human into the loop. Difficult to even define correctness to verify against in some cases. Capturing Programmer Intent \begin{itemize} \item This is a hard problem, but very important: Synthesis systems need good specifications \item Natural Language Interaction \begin{itemize} \item Pros: this is how humans communicate, even in technical settings \item Cons: imprecise, can’t capture all of the details \begin{itemize} \item Algorithmic details, numerical details \item Details such as boundary conditions \end{itemize} \end{itemize} \item Mathematical Specifications \begin{itemize} \item We are familiar with succinct and elegant textbook equations of many scientific phenomena - But sometimes the details above are also elided! \item Can we program in LaTeX?
\item Functional languages are mathematically structured and simple, may be particularly well-suited to capturing programmer intent. Synthesis could be used to iteratively turn into performant imperative code. Lots of interesting recent work, e.g. DeepSpec, Project Everest, show potential. \end{itemize} \end{itemize} Search \begin{itemize} \item Humans could perhaps be prompted to help search (incl. in porting applications) \item Many different methods have been shown suitable in different scenarios \item When you formalize the mathematics, you can expose structure that can help make search more efficient. \item Have to be careful about optimizers that find degenerate solutions, e.g. using undefined behavior to trivially satisfy constraints. \end{itemize} Verification and Validation \begin{itemize} \item Explanation is important \begin{itemize} \item Comprehensibility (incl. succinctness) could be added as a constraint to be optimized \item How to measure comprehensibility / naturalness? \begin{itemize} \item cf. notions of beauty and elegance in physics and math \item Similar statistical properties to human-generated code \end{itemize} \end{itemize} \item Distinction between verifying correctness against a specification (FM) and validating results of a simulation against empirical data \item PL folks have been working on multi-objective verification (e.g. formalizing side channel attacks, not just simple security properties) \end{itemize} What is distinct about scientific computing? 
\begin{itemize} \item Floating point / numerical considerations \item PDEs \item Lots of data being used for validation, coming in continuously \item Scale -- simulations and data analyses can consume huge computational resources \item Wide variety of algorithms \item Significant legacy code in HPC space (not so much in exploratory scientific computing) \item Scientific computing is tied to being at the edge of our understanding of the universe / science -- codes embody our collective understanding \item Performance matters more than in many other domains \end{itemize} Most common languages in HPC are C, C++, Fortran, CUDA, Python. Checking for data races is a common problem. The numerical aspect plays a large part in getting the “high performance” of HPC. Floating point is one of the big issues - we don't understand how the compiler optimizes code in terms of numerical aspects. Can use categories of floats (i.e., refinement types) to guide development and bug checking of software. How can annotations work in scientific computing? Carry information on what the units are of the values. Temperature, metric, imperial, etc. Can use this to track expected values of types. Java had annotations through specially formatted comments. Some ML based tools use the file you are currently writing to help with auto-completion - not only of code, but also of annotations. Lots of information is already there in the form of natural language comments. We should not lock ourselves to legacy languages - what can we do to help encourage adoption of new languages? ML models can be challenging to understand \begin{itemize} \item Can we make them more understandable or debuggable? \item Can we combine ML model-based and code-based synthesis \item Can we make ML systems less brittle? \item What do IID (independent and identically distributed) datasets for programs look like? \end{itemize} How can program synthesis be most effective, not intrusive?
\begin{itemize} \item Hole-based, repair-based, incremental synthesis? Will this slowly build trust? \item Can we study "behaviors" which are implicitly examples (without them being seen as such)? \end{itemize} \fi \iffalse \subsection{What opportunities exist for applying existing successful approaches in scientific computing and HPC?} \begin{itemize} \item Be more purposeful in API design in HPC space \item Good JIT tools could leverage well-defined APIs \item Standards need more precise definition of semantics (should not need to ask experts for clarification) \item Semi-automatic performance tuning \item Performance portability \item Verification and validation \end{itemize} \begin{itemize} \item Gradual replacement of legacy components with new implementations (how modular is legacy code?) \item Abstract machine details and use program synthesis at lower level to enhance portability \item Optimization across functions/libraries (e.g. common data structure) \item As architectures grow more complex, it creates the need to optimize at a higher level in the applications (e.g., dense and sparse matrix representation) \begin{itemize} \item Separation of concerns by domain-specific language \item Contract/specification between program and library \begin{itemize} \item User should be allowed to make all the decisions in selection of data structures/algorithms but not need to do all the work of optimization - allow user-defined profitability analysis in compilers? 
\end{itemize} \item Give users a different tool if they want more control \end{itemize} \item Formalize knowledge from physics, math fields in a way that it can be systematically generated and verified \begin{itemize} \item Verification -- took 5 years to validate using Coq \end{itemize} \end{itemize} Shifting from "code" to "workflows" \begin{itemize} \item Develop techniques that abstract program behavior on actual systems, as a debugging and performance tuning tool \item Develop program-synthesis tools that support “Explainability”, in terms of program behavior, correctness, performance, etc. to reduce the gap between the tools and scientific programmers. \item Human in the loop to relax restrictions on the compiler related to correctness. But the communication of these decisions needs to be clear and understandable for users \end{itemize} Requirement specification \begin{itemize} \item Create easy ways to specify application requirements without involving low-level implementation details. \item However, we need to recognize that developing the logic for scientific code is a large part of the burden, which often requires fine-grained details, not just high-level requirements. \item High-level representations should create a clear boundary between software and hardware, for future portability. Though many scientific programmers do prefer to fine-tune applications to specific hardware. Finding a balance between abstraction for "typical-users" and control for "power-users" is important. \item Define "low-level" details as architecture-specific details, not as much low-level logic. It’s important not to obscure the program intention in low level code, to allow abstractions. \item The tradeoff point for performance and productivity is relative for different codes, projects, and programmers. \item One way to approach this balance is using compiler hints and other optional semantics. Though these optional semantics may not satisfy “power-users”. 
And it’s important that these optional semantics are clearly defined and consistent to apply. \item Longevity of implementations is important, as they’re only portable to new systems if they’re supported by new systems (Chapel, X10 example). This could be addressed by program synthesis to generate these abstractions, and coordinate orthogonal approaches. \end{itemize} Approximate transformations \begin{itemize} \item Can we learn from fairly inefficient code (learn data structure transformations) and also can we do transformations that are not semantic-preserving? \item Algorithm selection / data-structure selection \item Differential equations to algorithms \item The HPC community expects no wrong transformations (accuracy); compilers may be better at it compared to a synthesizer. \item The computation is already approximate; so how do we verify? Related to applied math questions (e.g., when are these approximations valid? In what regions?). Programming errors make it even more difficult. \item Machine learning algorithms are only approximately correct. Only probabilistic guarantees. Then how can these be used to synthesize programs for HPC? ML algorithms to synthesize programs can be problematic. \item Validation and verification is still manual in HPC (From specification to the actual program). You need a good oracle. Verification of HPC is very different from traditional verification. \item When an ML application is deployed in a HPC domain, loss function makes it hard to make incorrect solutions. However, ML algorithms only have PAC-guarantees. We need to model the problem carefully (inductive bias) or have better data to make the learning-based system more robust. 
\end{itemize} In scientific computing, a key challenge is running the code in a distributed environment \begin{itemize} \item Requires lots of effort in going from implementation on single node to a large cluster \item Communication between different machines is a bottleneck \end{itemize} Obscure bugs creep up when sys admins update to new environments \begin{itemize} \item Can automated tools help with debugging such errors? \item Many people must encounter same types of errors \item Could we develop models that predict the fix? \end{itemize} Scientific code running on the cloud and energy efficiency: Can program synthesis help decrease power usage? Lifting/porting of code: Common theme during this workshop Existing synthesis frameworks: Sketch, Rosette, Bayou, Trinity (UT Austin), HPAT: Intel Python to MPI. API synthesis, end-user synthesis. data science. Synthesize can be used to simplify concurrent programming and parallelization. Combining synthesis with the goals of explainable AI may help scientific programmers understand synthesis (or ML models) better. Types are actually good at expressing non-functional properties of software. This could be leveraged more in the domain of HPC synthesis. We should tell people to not be scared of types. Doing more user studies and understanding what users want. We can use synthesis for the infrastructure surrounding HPC (e.g. configuration files, cloud resource provisioning) to help with the code beyond the core computation of the application. FPGA/GPU programming is a promising domain for synthesis. Here we must partition not only computation across hardware (what computation is run where), but also manage the flow of data between hardware components. Can the synthesis for this process be interactive? There is a lot of HPC code out there, how can we transfer what we learn to different projects. Can we leverage legacy code (e.g. fortran) to achieve transfer learning to new systems. 
Perhaps, rather than looking at static code, then look at transformations. Unfortunately, this is not always how scientific programmers work. Creating transformations on the deductive side vs optimizing the sygus side. server vs edge computing. can synthesis guide the partition Memory safety of low-level code Optimal synchronization (e.g. placement of locks and fences) Verified controllers in safety-critical applications Porting code to new platforms with correctness guarantees Research opportunities /near-term challenges \begin{itemize} \item Composability: can we apply synthesis to a component, can we use different synthesis tools/techniques to different components? \item Emphasis on human in the loop and user studies (users ahead of algorithms) \item DSLs (DSLs are a good target for code generated by synthesis tool, and resistance to adapting DSLs can be addressed by demonstrable benefits, e.g. correctness proofs) \item Multi-modal specifications, e.g. input-output examples + semantic constraints \item Integration of machine learning and logical constraint \end{itemize} accuracy, resource utilization, computation time are all good metrics to measure quality of the synthesized program Think about all aspects of program development: Requirements, Debugging, Verification, Performance, Presentation Challenge: how to represent the same algorithm using different data structures. Example: how to represent sparse matrix. As representation changes, you have to rewrite the algorithm implementation though it's the same algorithm. Can synthesis help? For each representation, one needs high performance implementation that worries about fine-grained access patterns How to choose pre-conditioner for sparse matrix representation. How to exploit complex space-efficient representation of sparse matrices such as block-compressed rows. 
How to switch between different representations automatically Opportunities \begin{itemize} \item Up the level of human interaction with tools \item Modernizing legacy codes \item User guided optimization \item Deeply analyze program execution \item Bug finding by finding outliers \item Performance optimization \item Automated exploration \item Safety and Verification: What numerical approach to take \item Design variants \end{itemize} Challenge: Small market, big effort Debugging is a problem. Work like Prodometer (Ignacio Laguna PLDI’14) can compute progress dependence. Such efforts can be accelerated by building a debugging community. Methods for injecting application requirements \begin{itemize} \item Exhaustive requirements-gathering from domain experts (hard/time-consuming, but has been done for some limited domains) \item Feedback loop with domain experts \item Mine the high-level specification from the code (to port/maintain legacy code) \item Manual lifting of performance-critical code to a higher level abstraction (so that it never needs to be ported again) \item Separate math from memory usage \end{itemize} Methods for searching the space of potentially-suitable programs \begin{itemize} \item Combine constraint-based searches with learning (but for some problems we may not know how to define learning) \item Scale the search (multi-objective makes the search harder) \item Avoid linear scaling something that is non-linear \end{itemize} Methods for verifying correctness/increasing correctness confidence \begin{itemize} \item Correctness is hard, but has been done for some limited domains \item Correctness by construction (initially limited to FFTs and linear transforms) \item Formal methods \item Get more help from tools \item Users being able to confidently approve or disprove if the tool did well - funnel domain expertise into helping the user understand the result \item Whenever there is a new domain, you need to fit it into the existing theorem provers 
- find ways to reuse across domains \item Static tools are insufficient, run-time tools are needed \item Open question: Is something less then complete formal correctness acceptable? \begin{itemize} \item Understand what level of correctness is necessary \item Fuzzing and other techniques for partial correctness \end{itemize} \end{itemize} Methods reflecting the multi-objective nature of these problems \begin{itemize} \item Not a lot of focus on this in the synthesis literature \item Can be very hard, especially with disparate or competing objectives \item Pick the one or two most important objectives and solve for them first \item Evolutionary techniques can sometimes help with balancing multiple objectives \item Genetic algorithms are a good alternative for problems with two tiers of objectives \item Better ways of doing divide-and-conquer seems like a better approach. \item Take advantage of large-scale parallelism, like our deep learning colleagues \item Understand where constraints can be relaxed (i.e., to allow tradeoff of performance/mathematical accuracy) \end{itemize} What we want to see effort on going forward \begin{itemize} \item Don’t believe in large jumps. People were interested in various incremental approaches \begin{itemize} \item Building better auto-tuners for areas , such as automatic differentiation \end{itemize} \item Human in the loop was of interest even in the longer term. Synthesis optimizes people time approach \begin{itemize} \item People specify the framework and program synthesis fills in the rest? Or the other way around? \item The challenging part is the auto-scheduling. How do you know when you’re done? Knowing when you hit the peak of the distribution, some bounding condition. \item Ability to re-target a program for different architectures with less human effort. Having high-level code that can get re-targeted efficiently. \end{itemize} \end{itemize} Another challenge is knowing what questions to ask. 
Just saying, “I have a PDE, now generate code for it” is not sufficient. Maybe engaging in conversation between a user and agent can be useful. In HPC, many times, there’s a new problem to solve. So, just doing things that are already known is not sufficient. However, there is often a hierarchy of abstraction. We know how to do some pieces of it and we need to figure out how to combine them to do something new. Correctness remains a challenge. Any solution is an approximation. For analytic problems, it might be straightforward to verify, but this isn’t always the case. This goes back to wanting and needing human-in-the-loop. We could also possibly learn to generate tests from specifications. Synthesis of reference implementations from science specifications for use in verification Opportunities for new components/libraries/frameworks that can be identified, and (partially) synthesized, by mining scientific codes Online synthesis specialized to data structures, e.g., sparse matrices, stencils Synthesis of API sequences for scientific libraries, which can also help developers understand the libraries Program synthesis can (partially) automate generation of mini-apps Program synthesis can automate generation of reduced codes to expose functional/performance bugs (c.f. creduce, llvm-reduce, bugpoint, etc.) Annotations may provide a nice way to introduce new forms of analyses to scientific computing/HPC Annotations are widely used in Java-like languages They can carry useful information that can include domain knowledge E.g., whether a value is distance or temperature and what its units are Annotations may be inferred from comments in code or descriptive variable names Test cases for scientific applications can be generated automatically Testing techniques can also be adapted to scientific computing/HPC An example specific problem is when testing must use reduced workloads How to create test inputs that are reduced but provide a similar confidence to actual inputs? 
Separation of intention from invention / adaptation: \begin{itemize} \item Synthesize concepts for intention not "end system" code? \begin{itemize} \item Invention for high-level program, adaptation for fine-tuning \end{itemize} \item Help HPC programmers comprehend what is synthesized (so they trust it) \item Help HPC programmers with task completion \begin{itemize} \item If we understand intent, this might not be too hard? \end{itemize} \item Understand behaviors of HPC community to understand what bottlenecks \end{itemize} Need for always-constructible structural representations Aim to “train” and “educate” junior HPC programmers via synthesis \fi \subsection{What are aggressive short-term, medium-term, and long-term opportunities and goals?} A number of opportunities exist to apply program synthesis techniques to scientific-computing problems, and these opportunities will expand with time. \subsubsection{Short Term (1--2 Years)} \begin{itemize} \item \textbf{Defining Challenge and Benchmark Problems: } The community working to develop new program synthesis technologies can use challenge problems to direct their long-term aims and enable conversations with the scientific programming community. In order to measure progress toward addressing those challenge problems, establish concrete examples, and enable comparison between systems, collections of benchmark problems should also be developed. Separate challenge and benchmark collections can be developed for different classes of problems (e.g., programming-language translation, specification-driven synthesis, and example-driven synthesis). 
\item \textbf{Interactive Synthesis, Repair, and Debugging: } Constrained techniques, such as proposing simple hole fillings \cite{DBLP:journals/pacmpl/LubinCOC20,An.AugmentedExampleSynthesis.POPL20} and local source-code edits to correct user-identified mismatches between expected and observed application behaviors, can be incorporated into the integrated development environments used to develop scientific software. \item \textbf{Smarter Code Templates: } Synthesis tools can provide assistance with generating boilerplate code, applying design patterns, and performing other largely repetitive programming tasks. Some of these patterns occur more frequently in scientific computing than in other domains, such as halo exchange in physical simulations. \item \textbf{Numerical Precision, Accuracy, and Stability: } Synthesis tools can start addressing situations in which multiple options exist for algorithmic variants and for the precision used to represent numerical quantities. Automatic selection between these options, based on user-supplied metrics, including performance, accuracy, and stability, should be possible. A fundamental requirement is to guard synthesis via rigorous and scalable roundoff error estimation methods~\cite{das2020satire}. \item \textbf{Superoptimization and High-Quality Compilation: } Synthesis systems can use exhaustive search techniques effectively in restricted domains to find the best algorithmic compositions and code generation for particular systems. Superoptimization in restricted domains can apply to library call sequences and other high-level interface generation tasks. Compilation improvements will include more helpful and more accurate user feedback regarding where and which code annotations would be beneficial.
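As a concrete illustration of the roundoff-error concerns raised above, compensated (Kahan) summation is a classical technique that a precision-aware synthesizer might select; a minimal Python sketch (the function name is illustrative, not a tool mentioned in this report):

```python
def kahan_sum(xs):
    """Compensated (Kahan) summation: carry a running correction term
    so that low-order bits lost in each addition are re-injected,
    reducing accumulated roundoff relative to naive summation."""
    total = 0.0
    comp = 0.0                      # running compensation for lost bits
    for x in xs:
        y = x - comp                # apply previously lost low-order bits
        t = total + y               # big + small: low-order bits of y are lost
        comp = (t - total) - y      # algebraically zero; captures the loss
        total = t
    return total
```

The same pattern generalizes to dot products and stencil accumulations, which is why rigorous roundoff estimation is a natural guard for synthesized numerical kernels.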
\item \textbf{Knowledge Extraction: } Analysis and verification tools will extract ever more precise and relevant high-level specifications from existing implementations, often referred to as lifting, and use these specifications to enable verification, interface generation, and other tasks~\cite{iyer:2020:PSG}. \item \textbf{Intelligent Searching: } Intelligent code search engines can be constructed that account for code structure, naming, comments, associated documents (e.g., academic papers), and other metadata. Finding code examples and other existing solutions to related problems can significantly increase programmer productivity. \item \textbf{Synthesis via Component Assembly:} The large-scale application frameworks in the past two decades have used componentization and assembly for a modest level of high-level program construction. The key idea is to have an infrastructural framework that becomes a backbone that model and algorithmic components of the solution system can plug into as needed. While this methodology by itself would not suffice for the multilevel parallelism that we are facing now, the concept of assembly from components can still provide an easy way of attacking some of the performance portability challenges by incorporating variable granularities in how componentization occurs. Past frameworks assumed components at the level of separate standalone capabilities in a quest to shield the science and core numerical algorithms from the details of infrastructure and assembly. Allowing componentization within such standalone capabilities (in a sense letting some of the infrastructural aspects intrude into the science sections of the code) can be helpful in reducing replicated code for different platforms. For example, one can view any function as having a declaration block, blocks of arithmetic expressions, blocks of logic and control, and blocks of explicit data movements that can each become a component. 
A subset of these components can have multiple implementations if needed or, better still, be synthesized from higher-level expressions. A tool that does not care whether the implementations exist as alternatives or are synthesized will be relatively simple and general purpose and can have a huge impact on productivity and maintainability, while at the same time reducing the complexity burden on other synthesis tools in the toolchain. \item \textbf{Benchmarks:} Developing a set of easy-to-use benchmarks and well-defined metrics for comparison will be critical for advancing the algorithms for program generation. These benchmarks should reduce or hide the overhead required for researchers from other areas to develop algorithms. For example, autotuning can benefit greatly from applied math and optimization researchers, but currently no easy-to-use framework allows these researchers to test algorithms. \end{itemize} \subsubsection{Medium Term (3--5 Years)} \begin{itemize} \item \textbf{Synthesis of Test Cases: } Based on user-specified priorities, background domain knowledge, and source analysis, synthesis tools can generate test cases for an application and its subcomponents. These span the granularity space from unit tests through whole-application integration tests and, moreover, can include various kinds of randomized testing (e.g., ``fuzz testing'') and the generation of specific, fixed tests. \item \textbf{Parallelization and Programming Models: } Synthesis tools can assist with converting serial code to parallel code and converting between different parallel-programming models. Parallelism exists at multiple levels, including distributed-memory parallelism (e.g., expressed using MPI) and intranode parallelism (e.g., expressed using OpenMP, SYCL, or Kokkos).
\item \textbf{Optimized Coupling Code: } Complex applications often require different subcomponents to communicate efficiently, and the ``glue code'' needed between different components is often tedious to write. Synthesis tools can create this kind of code automatically and over the medium term can create customized bindings that limit unnecessary copying and format conversions. Over the longer term, these customized data structure choices can permeate and be optimized over the entire application. \item \textbf{Performance Portability: } Generating code that works well on a particular target architecture is challenging, but generating code that performs well across many different target architectures adds an additional layer of complexity. Synthesis should be able to address this combined challenge of generating performance-portable code (using, e.g., OpenMP, SYCL, or Kokkos) that has good performance across a wide array of different platforms. \item \textbf{Autotuning:} Autotuning is becoming a proven technology to achieve high performance and performance portability. Despite several promising results, however, a number of challenges remain that we need to overcome for a wider adoption \cite{balaprakash2018autotuning}. Autotuning should be made seamless and easy to use from the application developer's perspective. This task involves a wide range of advancements from automated kernel extraction and large-scale search algorithms, to reducing the computationally expensive nature of the autotuning process. Modeling objectives such as runtime, power, and energy as functions of application and platform characteristics will play a central role. These models will be used to quantify meaningful differences across the decision space and to offer a convenient mechanism for exposing near-optimal spots in the decision space. 
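The empirical-search loop at the heart of such an autotuner can be illustrated with a deliberately tiny sketch (all names are illustrative; production autotuners search far larger decision spaces with model-based strategies and careful timing methodology):

```python
import time

def autotune(variants, args, repeats=3):
    """Toy autotuner: time each candidate code variant on representative
    inputs (best of `repeats` runs to damp timing noise) and return the
    fastest one together with its measured time."""
    best, best_time = None, float("inf")
    for variant in variants:
        elapsed = float("inf")
        for _ in range(repeats):
            start = time.perf_counter()
            variant(*args)
            elapsed = min(elapsed, time.perf_counter() - start)
        if elapsed < best_time:
            best, best_time = variant, elapsed
    return best, best_time
```

Replacing the exhaustive loop with a search over a parameterized decision space (tile sizes, unroll factors, scheduling choices) and the timer with runtime, power, or energy models yields the seamless autotuning workflow envisioned above.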
\end{itemize} \subsubsection{Long Term} \begin{itemize} \item \textbf{Solving Challenge Problems: } Challenge problems should be solvable by using composable, widely available tools. These tools should be capable of incorporating background knowledge from a wide variety of domains and of producing efficient, verifiable solutions in a reasonable amount of time. Where appropriate, the code will use sensible identifier names and otherwise be readable and maintainable. \item \textbf{Intentional Programming: } Synthesis tools can operate using high-level mathematical and natural language specifications, largely automatic but eliciting key feedback from human scientists, working within a common framework that supports the tooling ecosystem. \item \textbf{Lifelong Learning: } Synthesis tools will use an iterative refinement scheme, learning from user feedback and automatically improving themselves as time goes on. Synthesis tools will be able to create new abstractions for different domains and evolve them over time. Recent developments in reinforcement learning can be an effective vessel for lifelong learning. \item \textbf{Understanding Legacy Code: } Lifting, or the extraction of high-level features from concrete implementations, can work over large bodies of code. Synthesis tools will use lifting processes to understand legacy code, including any necessary use of background knowledge, and can interface with it or translate it to other forms. \item \textbf{Full-Application Verification: } Full applications, including library dependencies, can be verified, symbolically, statistically, or otherwise, with high confidence. The verification procedures can be driven automatically based on user-provided priorities and intelligent source-code analysis. \item \textbf{Proxy Generation:} Proxy applications, representing user-specified aspects of a full application, can be automatically generated by synthesis tools. 
These proxies will include appropriate source-code comments and documentation. \item \textbf{End-to-End Automation for Autotuning:} Autotuning needs to be part of the compiler tool chain. The process of autotuning should not involve any manual intervention. We need to develop autonomous, intelligent autotuning technology that can surpass human capability to accelerate computational science on extreme-scale computing platforms. \end{itemize} \iffalse Short Term \begin{itemize} \item APIs, focused efforts, incremental progress on existing successes \item Define Challenge Problems in this space \item Encourage cross collaborations \end{itemize} Medium Term \begin{itemize} \item Develop tools that are applied to more than one application space \item Tools for convergence of HPC and ML/DL \item Get buy-in from users \end{itemize} Long Term \begin{itemize} \item Portable tools for full applications \item Verification of full applications \end{itemize} Goals \begin{itemize} \item Validate programs - automatically generate tests. Generate tests from specification. \item Coordination and communication language - the glue code to make the HPC application work. Can we synthesize this code? \item Iterative refinement approach to synthesis; model incorrect first then learns from the feedback process. Lifelong learning. \end{itemize} Good short-term goal: Identification of challenge problems and concrete benchmarks Short-term goal might be to focus on interactive repair/debugging rather than full-fledged synthesis Short term \begin{itemize} \item Synthesize to multiple architectures (medium?) 
\item Address legacy components \end{itemize} Medium Term \begin{itemize} \item Understand numerical accuracy, precision, stability of key kernels more formally, systematically \item Redesign code generation to include numerical accuracy and stability (e.g., multiple code versions that are tested) \item Address legacy applications \end{itemize} Long term \begin{itemize} \item Composability of synthesized codes \item Retire legacy applications :) \end{itemize} Short term: \begin{itemize} \item superoptimization (around DSLs (around numerical computing)) \item Automatic precision tuning. \item Synthesize boiler-plate code or design patterns such as halo exchange communication pattern in MPI. \item Translating simple legacy code. \end{itemize} Medium term: \begin{itemize} \item Resynthesize MPI code \item Auto parallelization \item Automatic generation of unit tests. \end{itemize} Immediate, medium term goals \begin{itemize} \item Instruction level optimization (should be doable) in the short term. \item Common framework: Intent expressing -> universal backends to all the hardware -> Math libraries \end{itemize} Long term goals: \begin{itemize} \item Generating long code snippets, solving computational goals \item Translating legacy code across paradigms, dealing with performance/precision, and generating idiomatic code. Synthesizing code that uses sensible identifier names (and otherwise be readable/maintainable). 
\end{itemize} 1-2 years: \begin{itemize} \item accelerate compiler construction for new SC/HPC domains, based on disentangling the functional specification, accuracy, mapping to space-time \end{itemize} 3-5 years: \begin{itemize} \item porting/recreating libraries with interactive synthesis tools \end{itemize} > 5 years: \begin{itemize} \item learn abstractions for new domains \end{itemize} Short Term \begin{itemize} \item Legacy code lifting \begin{itemize} \item Target a big-impact low-effort example \item Embed PS practitioners with HPC team \end{itemize} \item Synthesis of ML models as HPC proxies \item Define some HPC-Synthesis “Challenge Problems” \item Related: Incorporate ML into HPC applications \end{itemize} Medium and Long Term \begin{itemize} \item Human-in-the-loop \begin{itemize} \item Follow the successes of the theorem proving workbenches \end{itemize} \item Scale short term effort to larger scale codes \begin{itemize} \item Goal is to address problem complexity beyond current human capabilities \end{itemize} \end{itemize} Short term (1-2 years) \begin{itemize} \item Synthesis of API sequences for scientific libraries (with adapter/glue code) \item Compiler feedback on where users can provide functional hints/assertions that can lead to improved performance \end{itemize} Medium term (3-5 years) \begin{itemize} \item Compiler redesign with more flexibility in IRs and optimization sequences \item Dynamic code generation for specialized (sparse) data structures \item Generation of test cases and reduced codes for more productive debugging \end{itemize} Long term (> 5 years) \begin{itemize} \item Generating mini-apps from full apps \item Generating reference apps from science-based specifications e.g., PDEs \item Generating new components/libraries/frameworks for code reuse \end{itemize} How to integrate synthesized with other code (short/medium) \begin{itemize} \item Synthesis must compose with other code \item Question of defining the interface at 
boundaries \item Global optimization problem \end{itemize} DARPA MUSE program - something similar for this domain (short) \begin{itemize} \item Corpus of Java methods for synthesis \item Search for similar existing methods \item Goal: existing code may have some clever trick you can use \item Must verify correctness before using the code \end{itemize} Huge advantage to a large, very large, library of algorithms (long) \begin{itemize} \item Should be easy to add to the library \end{itemize} Natural languages for expressing intent (medium/long) \begin{itemize} \item GPT-3 can do some subtractions, but only up to a certain size \item Not advanced enough to find underlying mathematical structure \end{itemize} Accuracy, Performance, and (other objectives) (medium/long) \begin{itemize} \item There’s no complete correctness \item Parameter ranges super important \item More contextual knowledge: faster, less hardware, better algorithm \item Measure the set of good-enough contexts \item Very hard to get context, not just specifications \item Accuracy specifications basically impossible to elicit \item Tools that measure / verify stability? \item Running old code and comparing to new code \item Still reverting to testing \item Static tools often over-flag \end{itemize} Numerical bugs in synthesized code (short/medium/long) \begin{itemize} \item Validated by 30-years of cross-testing with the real world \item How do you know the new code is good \item Is it even numerically stable? \item How do you verify that the code is accurate \item More building trust than verification \item Many layers of approximation \item Can’t even verify any one of them, let alone all of them \end{itemize} short term (even variable names are indicative of types, temp, energy, etc.) 
Short: \begin{itemize} \item Step 1: Synthesize libraries that HPC people trust (helpful for long-term adoption) \item "Need finding" survey (plug-ins may make this easier than before), find ways to make people "not hate" synthesis \end{itemize} Mid: \begin{itemize} \item Step 2: Transforming systems using synthesized libraries \item Deeper user studies to understand what can help HPC community more productivity (e.g., check-ins, small refactors, bug fixes) \end{itemize} Long: \begin{itemize} \item Step 3: Composition and decomposition of previously synthesized systems for broader uses / adoptions \end{itemize} \fi
\section{Introduction} \label{sec:into} Relativistic hydrodynamics (RHD) plays a leading role in astrophysics, nuclear physics, etc. RHD is necessary in situations where the local velocity of the flow is close to the speed of light in vacuum, or where the local internal energy density is comparable to (or larger than) the local rest-mass density of the fluid. This paper is concerned with developing higher-order accurate numerical schemes for the 1D and 2D special RHD equations. The $d$-dimensional governing equations of the special RHDs form a first-order quasilinear hyperbolic system. In the laboratory frame, they can be written in the divergence form \begin{align}\label{eq:rhd-eq001} \frac{\partial \vec U}{\partial t} +\sum_{i=1}^d \frac{\partial \vec F_i(\vec U)}{\partial x_i}=0, \end{align} where $d=1$ or 2, and $\vec U$ and $\vec F_i(\vec U)$ denote the conservative vector and the flux in the $x_i$-direction, respectively, defined by \begin{align} \vec U=(D,\vec m, E)^T, \quad \vec F_i(\vec U)=(D v_i, v_i \vec m+ p \vec e_i, m_i)^T, \quad i=1,\cdots,d, \end{align} with the mass density $D = \rho W$, the momentum density (row) vector $\vec m = D hW\vec v$, the energy density $E=D h W - p$, and the row vector $\vec e_i$ denoting the $i$-th row of the unit matrix of size $d$. Here $\rho$ is the rest-mass density, $v_i$ denotes the fluid velocity in the $x_i$-direction, $p$ is the gas pressure, $W=1/\sqrt{1- v^2}$ is the Lorentz factor with $v:=\left(v_1^2+\cdots+v_d^2\right)^{1/2}$, and $h$ is the specific enthalpy defined by $$h = 1 + e + \frac{p}{\rho},$$ in units in which the speed of light $c$ is equal to one, where $e$ is the specific internal energy. Throughout the paper, the equation of state (EOS) will be restricted to the $\Gamma$-law \begin{equation}\label{eq:EOS} p = (\Gamma-1) \rho e, \end{equation} where the adiabatic index $\Gamma \in (1,2]$.
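For concreteness, the map from the primitive variables $(\rho,\vec v,p)$ to the conserved vector $\vec U=(D,\vec m,E)^T$ under the $\Gamma$-law EOS can be sketched as follows (a minimal Python sketch of the definitions above; the function name is illustrative):

```python
import math

def primitive_to_conserved(rho, v, p, Gamma):
    """Convert primitive variables (rho, v, p) to the conserved vector
    U = (D, m, E) of special RHD with the Gamma-law EOS, in units c = 1."""
    v = [float(c) for c in v]
    v2 = sum(c * c for c in v)
    assert v2 < 1.0, "flow speed must stay below the speed of light (c = 1)"
    W = 1.0 / math.sqrt(1.0 - v2)        # Lorentz factor
    e = p / ((Gamma - 1.0) * rho)        # specific internal energy from the EOS
    h = 1.0 + e + p / rho                # specific enthalpy
    D = rho * W                          # mass density
    m = [D * h * W * c for c in v]       # momentum density vector
    E = D * h * W - p                    # energy density
    return D, m, E
```

The inverse map (recovering $(\rho,\vec v,p)$ from $\vec U$) has no closed form and requires a nonlinear root-finding step, which is one source of the analytical difficulty noted below.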
The restriction $\Gamma\le 2$ is required by the compressibility assumptions and causality in the theory of relativity (the sound speed must not exceed the speed of light $c=1$). The RHD equations \eqref{eq:rhd-eq001} are highly nonlinear, so their analytical treatment is extremely difficult, and numerical computation has become a major tool in studying RHD. The pioneering numerical work dates back to the Lagrangian finite difference code via artificial viscosity for the spherically symmetric general RHD equations \cite{May-White1966,May-White1967}. Multi-dimensional RHD equations were first solved in \cite{Wilson:1972} by using the Eulerian finite difference method with the artificial viscosity technique. Later, modern shock-capturing methods were extended to the RHD (including RMHD) equations. Representative methods include the HLL (Harten-Lax-van Leer) scheme \cite{Zanna:2003}, HLLC (HLL contact) schemes \cite{MignoneHLLCRMHD,Honkkila:2007}, the Riemann solver of \cite{Balsara1994}, approximate Riemann solvers based on local linearization \cite{GodunovRMHD,Koldoba:2002}, second-order GRP (generalized Riemann problem) schemes \cite{Yang-He-Tang:2011,Yang-Tang:2012,Wu-Tang:2016}, a third-order GRP scheme \cite{Wu-Yang-Tang:2014b}, the local evolution Galerkin method \cite{WuEGRHD}, discontinuous Galerkin (DG) methods \cite{Zhao-Tang:2017a,Zhao-Tang:2017b}, gas-kinetic schemes (GKS) \cite{QamarKinetic2004,Chen-Kuang-Tang:2017}, adaptive mesh refinement methods \cite{Anderson:2006,Host:2008}, and moving mesh methods \cite{HeAdaptiveRHD,HeAdaptiveRMHD}, etc. Recently, higher-order accurate physical-constraints-preserving (PCP) WENO (weighted essentially non-oscillatory) and DG schemes were developed for the special RHD equations \cite{Wu-Tang-JCP2015,Wu-Tang-RHD2016,QinShuYang-JCP2016}. They were built on a study of the admissible state set of special RHD.
The admissible state set and PCP schemes for special ideal RMHD were also studied for the first time in \cite{Wu-Tang-RMHD2016}, where the importance of divergence-free fields in achieving PCP methods was revealed. Those works were successfully extended to the special RHDs with a general equation of state \cite{Wu-Tang-RMHD2017b,Wu-Tang-RHD2016} and to the general RHDs \cite{Wu-PRD2017}. In comparison with second-order shock-capturing schemes, higher-order methods can provide more accurate solutions, but they are less robust and more complicated. For most existing higher-order methods, a Runge-Kutta time discretization is usually used to achieve higher-order temporal accuracy. For example, a four-stage fourth-order Runge-Kutta method (see e.g. \cite{Zhao-Tang:2017b}) is used to achieve fourth-order accuracy in time. If each stage of the time discretization needs to call the Riemann solver or the resolution of the local GRP, then the shock-capturing scheme with multi-stage time discretization for \eqref{eq:rhd-eq001} is very time-consuming. Recently, based on the time-dependent flux function of the GRP, a two-stage fourth-order accurate time discretization was developed for Lax-Wendroff (LW) type flow solvers, particularly applied to hyperbolic conservation laws \cite{LI-DU:2016}. Such a two-stage LW time stepping method also provides an alternative framework for the development of a fourth-order GKS with a second-order flux function \cite{PXLL:2016}. The aim of this paper is to study the two-stage fourth-order accurate time discretization \cite{LI-DU:2016} and its application to the special RHD equations \eqref{eq:rhd-eq001}. Based on our analysis, new two-stage fourth-order accurate time discretizations can be proposed.
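To illustrate the two-stage fourth-order time stepping of \cite{LI-DU:2016} studied below (with $\alpha=1/3$: first $u^*=u^n+\frac{\tau}{2}L(u^n)+\frac{\tau^2}{8}\partial_tL(u^n)$, then $u^{n+1}=u^n+\tau L(u^n)+\frac{\tau^2}{6}\big(\partial_tL(u^n)+2\,\partial_tL(u^*)\big)$), here is a minimal Python sketch on the scalar model problem $u_t=-u$; the helper names are ours. For linear $L$ the update reproduces the fourth-order Taylor polynomial of the exact amplification factor, and the measured global convergence order is four.

```python
import math

def step(u, tau, L, Lt):
    """One two-stage fourth-order step for u_t = L(u); Lt(u) = d/dt L(u)."""
    # Stage 1: intermediate value near t_n + tau/2
    us = u + 0.5 * tau * L(u) + 0.125 * tau**2 * Lt(u)
    # Stage 2: fourth-order update
    return u + tau * L(u) + tau**2 / 6.0 * (Lt(u) + 2.0 * Lt(us))

def integrate(u0, T, n, L, Lt):
    u, tau = u0, T / n
    for _ in range(n):
        u = step(u, tau, L, Lt)
    return u

# model problem u_t = -u with exact solution exp(-t)
L  = lambda u: -u
Lt = lambda u: u            # d/dt L(u) = -u_t = u

def err(n):
    return abs(integrate(1.0, 1.0, n, L, Lt) - math.exp(-1.0))

order = math.log2(err(20) / err(40))   # close to 4
```

Only two calls to $\partial_t L$ per step are needed, which is the efficiency advantage over a four-stage Runge-Kutta method when each stage requires a (generalized) Riemann problem resolution.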
With the aid of the direct Eulerian GRP methods and the analytical resolution of the local ``quasi 1D'' GRP, those two-stage fourth-order accurate time discretizations can be conveniently implemented for the special RHD equations. Their performance, accuracy, and robustness are demonstrated by numerical experiments. The paper is organized as follows. Section \ref{sec:scheme} studies the two-stage fourth-order accurate time discretizations and applies them to the special RHD equations. Section \ref{sec:example} conducts several numerical experiments to demonstrate the accuracy and efficiency of the proposed methods. Conclusions are given in Section \ref{Section-conclusion}. \section{Numerical methods} \label{sec:scheme} In this section, we study the two-stage fourth-order accurate time discretization \cite{LI-DU:2016} and then propose new two-stage fourth-order accurate time discretizations. With the aid of the direct Eulerian GRP methods, those two-stage time discretizations can be implemented for the special RHD equations \eqref{eq:rhd-eq001}. \subsection{Two-stage fourth-order time discretizations}\label{subsec:timedis} Consider the time-dependent differential equation \begin{align}\label{eq:ode01} u_t=L(u),\ t>0, \end{align} which can be a semi-discrete scheme for the conservation laws \eqref{eq:rhd-eq001}. Assume that the solution $u$ of \eqref{eq:ode01} is a fully smooth function of $t$ and that $L(u)$ is also fully smooth, and partition the time interval by $t_n=n\tau$, where $\tau$ denotes the time stepsize and $n\in \mathbb Z$.
The Taylor series expansion of $u$ in $t$ reads \begin{align}\nonumber u^{n+1}=&\Big(u +\tau u_t +\frac{\tau^2}{2!} u_{tt} + \frac{\tau^3}{3!} u_{ttt}+\frac{\tau^4}{4!} u_{tttt}\Big )^n+{\mathcal O}(\tau^5) \\ \label{eq:add-decomp} =&\Big(u +\tau u_t +\frac{\tau^2}{6} u_{tt}\Big)^n +2 \frac{\tau^2}{6} \Big( \big( u + \frac{\tau}{2} u_{t}+\frac{\tau^2}{8} u_{tt}\big)_{tt} \Big )^n+{\mathcal O}(\tau^5). \end{align} Substituting \eqref{eq:ode01} into \eqref{eq:add-decomp} gives \begin{align}\nonumber u^{n+1}=&\Big(u +\tau L(u) +\frac{\tau^2}{6} \partial_t L(u)\Big)^n + 2 \frac{\tau^2}{6} \Big( \big( u + \frac{\tau}{2} L(u)+\frac{\tau^2}{8} \partial_t L(u)\big)_{tt} \Big )^n+{\mathcal O}(\tau^5) \\ \label{eq:add-decomp02} =&\Big(u +\tau L(u) +\frac{\tau^2}{6} \partial_t L(u)\Big)^n + 2 \frac{\tau^2}{6} \left( (u^*)_{tt} \right)^n+{\mathcal O}(\tau^5), \end{align} where \begin{align*} u^*:= u + \frac{1}{2} \tau L(u) + \frac{1}{8} \tau^2 L_t(u) =u\left(t_n+\frac{\tau}2\right)+{\mathcal O}(\tau^3). \end{align*} Because \begin{align*} u^*_t=& u_t + \frac{1}{2} \tau L_t(u) + \frac{1}{8} \tau^2 L_{tt}(u) =L(u)+\frac{1}{2} \tau L_u L(u)+\frac{1}{8} \tau^2 \left( (L_u)^2L+ L_{uu} L^2 \right), \\ L(u^*)=&L(u)+L_u \left(\frac{1}{2} \tau L(u) + \frac{1}{8} \tau^2 L_t(u)\right) +\frac12 L_{uu} \left(\frac{1}{2} \tau L(u) + \frac{1}{8} \tau^2 L_t(u)\right)^2 +\cdots, \end{align*} one has \begin{align*} u^*_t=L(u^*)+{\mathcal O}(\tau^3), \ u^*_{tt}=L_t(u^*)+{\mathcal O}(\tau^3). \end{align*} Combining the second equation with \eqref{eq:add-decomp02} gives \begin{align*} u^{n+1} =&\Big(u +\tau L(u) +\frac{\tau^2}{6} \partial_t L(u)\Big)^n + 2 \frac{\tau^2}{6} \left( L_t(u^*) \right)^n+{\mathcal O}(\tau^5). \end{align*} The above discussion yields the two-stage fourth-order accurate time discretization of \eqref{eq:ode01} \cite{LI-DU:2016}: \begin{itemize} \item[Step 1.]
Compute the intermediate value \begin{equation} \label{eq:1stage} u^* = u^n + \frac{1}{2} \tau L(u^n) + \frac{1}{8} \tau^2 \frac{\pa}{\pa t} L(u^n), \end{equation} \item[Step 2.] Evolve the solution at $t_{n+1}$ via \begin{equation} u^{n+1} = u^n + \tau L(u^n) + \frac{1}{6} \tau^2\left( \frac{\pa}{\pa t} L(u^n) + 2 \frac{\pa}{\pa t} L(u^*) \right). \end{equation} \end{itemize} Obviously, the additive decomposition in \eqref{eq:add-decomp} is not unique. For example, it can be replaced with a more general decomposition \begin{align} u^{n+1} =&\Big(u +\tau L(u) +\frac{\alpha\tau^2}{2} \partial_t L(u)\Big)^n + \frac{(1-\alpha)\tau^2}{2} \Big( \big( \tilde{u}^*\big)_{tt} \Big )^n+ {\mathcal O}(\tau^5), \label{EQ:decom03}\end{align} with $\alpha\neq 1$ and \begin{align}\label{EQ:decom03a} \tilde{u}^*:= u+ \frac{\tau}{3(1-\alpha)} L(u)+\frac{\tau^2}{12(1-\alpha)} \partial_t L(u). \end{align} If $\alpha=\frac13$, then \eqref{EQ:decom03} reduces to the additive decomposition \eqref{eq:add-decomp} of the two-stage fourth-order time discretization \cite{LI-DU:2016}. The identity \eqref{EQ:decom03a} implies \begin{align}\label{EQ:decom03b} \tilde{u}^*_t=& L(u)+ \frac{\tau}{3(1-\alpha)} L_t(u)+\frac{\tau^2}{12(1-\alpha)} \partial_{tt} L(u). \end{align} Comparing \eqref{EQ:decom03b} with the Taylor series expansion \begin{align*} L(\tilde{u}^*)=& L(u)+ L_u\left( \frac{\tau}{3(1-\alpha)} L +\frac{\tau^2}{12(1-\alpha)} L_t(u)\right)+\frac12 L_{uu} \left( \frac{\tau}{3(1-\alpha)} L +\frac{\tau^2}{12(1-\alpha)} L_t(u)\right)^2 +\cdots \end{align*} gives \begin{align*} \tilde{u}^*_t=L(\tilde{u}^*)+\frac{\tau^2}{12(1-\alpha)} \left(1-\frac{2}{3(1-\alpha)}\right) L_{uu} (u_t)^2+{\mathcal O}(\tau^3). \end{align*} If \begin{align}\label{EQ:decom03z} 1-\frac{2}{3(1-\alpha)}=C \tau^p, \ p\geq 1, \end{align} where $C$ is independent of $\tau$, then \begin{align*} \tilde{u}^*_t=L(\tilde{u}^*)+{\mathcal O}(\tau^3). \end{align*} Therefore, if $\alpha=\alpha(\hat{\tau})$ is a differentiable function of $\hat{\tau}$ satisfying $\alpha(0)=1/3$ and $\alpha\neq 1$, and $\hat{\tau}=C \tau^p$, then \eqref{EQ:decom03z} does hold. For example, we may choose $\alpha=(1-6\hat{\tau})/(3-6\hat{\tau})$ with $\hat{\tau}\neq 1/2$. In this case, one has \begin{align*} \tilde{u}^*= u\left(t_n+\frac{\tau}{3(1-\alpha)}\right)+{\mathcal O}(\tau^3), \end{align*} and similarly, from \eqref{EQ:decom03a} and the Taylor series expansion of $L_t\left(\tilde{u}^*\right)$ at $u$, one can get \begin{align}\label{EQ:00178} \tilde{u}^*_{tt}= L_t\left(\tilde{u}^*\right)+{\mathcal O}(\tau^3). \end{align} Substituting \eqref{EQ:00178} into \eqref{EQ:decom03} gives \begin{align}\nonumber u^{n+1} =&\Big(u +\tau L(u) +\frac{\alpha\tau^2}{2} \partial_t L(u)\Big)^n + \frac{(1-\alpha)\tau^2}{2} \Big( \partial_t L(\tilde{u}^*) \Big )^n+{\mathcal O}(\tau^5). \label{EQ:decom02}\end{align} In conclusion, when $\alpha=\alpha(\hat{\tau})$ is a differentiable function of $\hat{\tau}=C \tau^p$ satisfying $\alpha(0)=1/3$ and $\alpha\neq 1$, where $p\geq 1$ and $C$ is independent of $\tau$, the additive decomposition \eqref{EQ:decom03} yields the following new two-stage fourth-order accurate time discretization: \begin{itemize} \item[Step 1.] Compute the intermediate value \begin{equation} \label{eq:1stage02} u^* = u^n + \frac{1}{3(1-\al)} \tau L(u^n) + \frac{\tau^2 }{12(1-\al)} \frac{\pa}{\pa t} L(u^n), \end{equation} \item[Step 2.] Evolve the solution at $t_{n+1}$ via \begin{equation} u^{n+1} = u^n + \tau L(u^n) + \frac{\al\tau^2}{2} \frac{\pa}{\pa t} L(u^n) + \frac{(1-\al)\tau^2}{2} \frac{\pa}{\pa t} L(u^*) . \end{equation} \end{itemize} \subsection{Application of two-stage time discretizations to 1D RHD equations} \label{subsec:method-1d} In this section, we apply the above two-stage fourth-order time discretizations to the 1D RHD equations, i.e. \eqref{eq:rhd-eq001} with $d=1$. For convenience, the symbols $x_1$ and $v_1$ are replaced with $x$ and $u$, respectively, and a uniform partition of the spatial domain is given by $I_j = (x_{j-\frac{1}{2}} ,\, x_{j+\frac{1}{2}})$ with $\Delta x=x_{j+\frac{1}{2}}-x_{j-\frac{1}{2}}$. For the given ``initial'' approximate cell averages $\{\overline{\vec U}^n_j\}$ at $t_n$, we want to reconstruct the WENO values of $\vec U$ and $\partial_x\vec U$ at the cell boundaries, denoted by $\vec U^{\pm,n}_{j+\frac12}$ and $(\partial_x \vec U)^{\pm,n}_{j+\frac12}$. The initial reconstruction procedure is as follows: \begin{description} \item[(1)] Use the standard 5th-order WENO reconstruction \cite{Jiang-Shu1996} to get $\vec U^{\pm,n}_{j+\frac12}$ with the aid of the characteristic decomposition. If $\vec U^{\pm,n}_{j+\frac12}$ does not belong to the admissible state set of the 1D RHD equations \cite{Wu-Tang-JCP2015}, then we directly set $\vec U^{\pm,n}_{j+\frac12} = \overline{\vec U}_j^n$ in order to avoid nonphysical states. \item[(2)] Calculate $(\overline{\partial_x \vec U})_j^n = \frac{1}{\Delta x}( \vec U^{-,n}_{j+\frac12} - \vec U^{+,n}_{j-\frac12} )$, which approximates the cell average of $\partial_x\vec U$ over the cell $I_j$, and then use the above WENO reconstruction again to get $(\partial_x \vec U)^{\pm,n}_{j+\frac12}$. \end{description} Such an initial reconstruction is also used at $t_*=t_n+\tau/(3-3\alpha)$, where $\alpha=\alpha(\hat{\tau})$ is a differentiable function of $\hat{\tau}$ satisfying $\alpha(0)=1/3$ and $\alpha\neq 1$, and $\hat{\tau}=C \tau^p$ with $p\geq 1$ and $C$ independent of $\tau$. The two-stage fourth-order time discretizations in Section \ref{subsec:timedis} can be applied to the 1D RHD equations by the following steps. \begin{itemize} \item[Step 1.]
For the reconstructed data $\vec U^{\pm,n}_{j+\frac12}$ and $(\partial_x \vec U)^{\pm,n}_{j+\frac12}$, following the GRP method \cite{Yang-He-Tang:2011}, solve the Riemann problem of \begin{equation} \label{eq:GRP1D} \vec U_t + \vec F_1(\vec U)_x = 0,\ t>t_n, \end{equation} to get $\vec U_{j+\frac12}^{RP,n}$ and then resolve analytically the GRP of \eqref{eq:GRP1D} to obtain the value of $(\partial \vec U/\partial t)_{j+\frac12}^n$. \item[Step 2.] Compute the intermediate values $\{\overline{\vec U}_j^*\}$ by \begin{equation} \overline{\vec U}_j^* = \overline{\vec U}^n_{j} + \frac{\tau}{3(1-\alpha)} L_j (\overline{\vec U}^n) + \frac{\tau^2}{12(1-\alpha)} \pa_t L_j(\overline{\vec U}^n), \end{equation} where the terms $L_j (\overline{\vec U}^n)$ and $\pa_t L_j(\overline{\vec U}^n)$ are given by \begin{equation} L_j(\overline{\vec U}^n) = -\frac{1}{\Delta x} \left( \vec F_1(\vec U_{j+\frac{1}{2}}^{RP,n}) - \vec F_1(\vec U_{j-\frac{1}{2}}^{RP,n})\right), \end{equation} and \begin{equation}\label{eq:1dLt} \pa_t L_j(\overline{\vec U}^n) = -\frac{1}{\Delta x} \left( (\pa_t \vec F_1)_{j+\frac{1}{2}}^n - (\pa_t \vec F_1)_{j-\frac{1}{2}}^n\right), \end{equation} with $$(\pa_t \vec F_1)_{j\pm \frac{1}{2}}^n= \frac{\partial \vec F_1}{\partial \vec U} \left( \vec U_{j\pm \frac{1}{2}}^{RP,n} \right) \cdot \left(\frac{\partial \vec U}{\partial t}\right)_{j\pm \frac12}^n. $$ \item[Step 3.] Reconstruct the values $\vec U^{\pm,*}_{j+\frac12}$ and $(\partial_x \vec U)^{\pm,*}_{j+\frac12}$ from $\{\overline{\vec U}_j^*\}$ by the above initial reconstruction procedure, and then resolve analytically the local GRP of \eqref{eq:GRP1D} to get $\vec U_{j+\frac12}^{RP,*}$ and $(\partial \vec U/\partial t)_{j+\frac12}^*$. \item[Step 4.]
Evolve the solution at $t_{n+1} = t_n + \tau$ by \begin{equation} \overline{\vec U}_j^{n+1} = \overline{\vec U}_j^n + \tau L_j( \overline{\vec U}^n) + \frac{\alpha\tau^2}{2} \pa_t L_j(\overline{\vec U}^n) + \frac{(1-\alpha)\tau^2}{2} \pa_t L_j(\overline{\vec U}^*), \end{equation} where \begin{equation} \pa_t L_j(\overline{\vec U}^*) = -\frac{1}{\Delta x} \left( (\pa_t \vec F_1)_{j+\frac{1}{2}}^* - (\pa_t \vec F_1)_{j-\frac{1}{2}}^*\right), \end{equation} with $$(\pa_t \vec F_1)_{j\pm \frac{1}{2}}^*= \frac{\partial \vec F_1}{\partial \vec U} \left( \vec U_{j\pm \frac{1}{2}}^{RP,*}\right ) \cdot \left(\frac{\partial \vec U}{\partial t}\right)_{j\pm \frac12}^*. $$ \end{itemize} \subsection{Application of two-stage time discretizations to 2D RHD equations} In this section, we apply the two-stage fourth-order time discretizations to the 2D RHD equations, i.e. \eqref{eq:rhd-eq001} with $d=2$, with the aid of the analytical resolution of the local ``quasi 1D'' GRP and an adaptive primitive-conservative scheme. The latter, given in \cite{E.F.Toro:2013}, is used to reduce the spurious solution generated by the conservative scheme across the contact discontinuity; see Example \ref{example2.1}. Similarly, the symbols $(x_1, x_2)$ and $(v_1,v_2)$ are replaced with $(x,y)$ and $(u,v)$, respectively, and a uniform partition of the spatial domain is given by $I_{jk} = (x_{j-\frac{1}{2}} ,\, x_{j+\frac{1}{2}})\times (y_{k-\frac{1}{2}} ,\, y_{k+\frac{1}{2}})$ with $\Delta x=x_{j+\frac{1}{2}}-x_{j-\frac{1}{2}}$ and $\Delta y=y_{k+\frac{1}{2}}-y_{k-\frac{1}{2}}$. \begin{example} \label{example2.1}\rm Because of the nonlinearity of \eqref{eq:rhd-eq001}, when a conservative scheme is used, a spurious solution across the contact discontinuity, a well-known phenomenon in multi-fluid systems, can arise even for a single material. It is similar to the phenomenon mentioned in \cite{E.F.Toro:2013}.
To clarify that, let us solve the Riemann problem of \eqref{eq:rhd-eq001} with the initial data \begin{equation} \label{eq:example} (\rho , u , v, p)(x,y,0) \; = \; \begin{cases} (0.5, -0.5, 0.5, 5 ) , & x>0.5, \\ (0.5, -0.5, -0.5, 5) , & x<0.5. \end{cases} \end{equation} The computational domain is taken as $[0,1]\times [0,1]$. Fig. \ref{fig:001} gives the solutions obtained by using the 2D (first-order, conservative) Godunov method, and Fig. \ref{fig:example001b} gives the solutions obtained by using the 2D two-stage fourth-order conservative method. Obvious oscillations near the contact discontinuity are observed; in other words, spurious solutions are generated near the contact discontinuity, and this is easy to verify theoretically. To overcome this difficulty, the generalized Osher-type scheme in an adaptive primitive-conservative framework \cite{E.F.Toro:2013} can be employed to avoid or reduce the above spurious solutions at the expense of conservation. Figs. \ref{fig:example001c} and \ref{fig:example001d} display, respectively, much better solutions, obtained by the adaptive primitive-conservative scheme with the reconstructions of the characteristic and primitive variables, than those in Figs. \ref{fig:001} and \ref{fig:example001b}, in which the generalized Osher-type scheme is adaptively used to solve the RHD equations \eqref{eq:rhd-eq001} in the equivalent primitive form \begin{equation}\label{eq:RHD2d-prim} \pa_t \vec V + \widetilde{\vec A}(\vec V) \pa_x \vec V +\widetilde{\vec B}(\vec V) \pa_y \vec V = 0, \end{equation} where $\vec V = (\rho,\, u,\, v,\, p)^T$ and \begin{equation} \label{eq:matrixA} \widetilde{\vec A}(\vec V) = u \cdot \vec I_4 + \begin{pmatrix} 0 & \dfrac{\rho}{1-(u^2+v^2)c_s^2} & 0 & \dfrac{-u}{W^2h[1-(u^2+v^2)c_s^2]} \\ 0 & \dfrac{-uc_s^2}{W^2[1-(u^2+v^2)c_s^2]} & 0 & \dfrac{H}{\rho h W^2[1-(u^2+v^2)c_s^2]} \\ 0 & \dfrac{-vc_s^2}{W^2[1-(u^2+v^2)c_s^2]} & 0 & \dfrac{-uv(1-c_s^2)}{\rho h W^2[1-(u^2+v^2)c_s^2]} \\ 0 & \dfrac{\rho h c_s^2}{1-(u^2+v^2)c_s^2} & 0 & \dfrac{-uc_s^2}{W^2[1-(u^2+v^2)c_s^2]} \end{pmatrix}, \end{equation} with $H = 1-u^2 - v^2c_s^2$ and $c_s^2 = \frac{\G p} { \rho h}$. The matrix $\widetilde{\vec B}(\vec V)$ can be obtained from $\widetilde{\vec A}$ by exchanging $u$ and $v$, and then the second and third rows and the second and third columns.
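The spurious states across the contact can be checked directly: averaging the two conserved states of \eqref{eq:example} and converting back to primitive variables yields a pressure far from the common value $p=5$. The following Python sketch does this; $\Gamma=5/3$ is our assumption (the example does not state the adiabatic index), and the conserved-to-primitive inversion uses a simple bisection on the pressure rather than the paper's solver.

```python
import math

GAMMA = 5.0 / 3.0   # assumed adiabatic index (not stated in the example)

def prim_to_cons(rho, u, v, p):
    # conserved vector (D, m1, m2, E) in units with c = 1
    W2 = 1.0 / (1.0 - u * u - v * v)
    h = 1.0 + GAMMA / (GAMMA - 1.0) * p / rho     # Gamma-law enthalpy
    return (rho * math.sqrt(W2), rho * h * W2 * u,
            rho * h * W2 * v, rho * h * W2 - p)

def cons_to_prim(D, m1, m2, E):
    # recover the pressure by bisection on the algebraic residual
    def res(p):
        v2 = (m1 * m1 + m2 * m2) / (E + p) ** 2
        W2 = 1.0 / (1.0 - v2)
        rho = D / math.sqrt(W2)
        return (rho + GAMMA / (GAMMA - 1.0) * p) * W2 - (E + p)
    lo, hi = 1e-12, 1e3
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if res(lo) * res(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    p = 0.5 * (lo + hi)
    u, v = m1 / (E + p), m2 / (E + p)
    return D * math.sqrt(1.0 - u * u - v * v), u, v, p

UL = prim_to_cons(0.5, -0.5, -0.5, 5.0)    # state for x < 0.5
UR = prim_to_cons(0.5, -0.5,  0.5, 5.0)    # state for x > 0.5
Uavg = tuple(0.5 * (a + b) for a, b in zip(UL, UR))
rho_a, u_a, v_a, p_a = cons_to_prim(*Uavg)  # p_a is far from 5
```

Each pure state round-trips to $p=5$, while the averaged conserved state yields a much larger pressure (close to 10 here): a conservative average across the contact leaves the constant-pressure manifold, which is the source of the oscillations in Figs. \ref{fig:001} and \ref{fig:example001b}.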
% % \begin{figure}[htbp] \centering \setlength{\abovecaptionskip}{0.cm} \setlength{\belowcaptionskip}{-0.cm} \subfigure[$\rho$]{ \begin{minipage}[t]{0.22\textwidth} \centering \includegraphics[width=\textwidth]{img/d} \end{minipage} } \subfigure[$u$]{ \begin{minipage}[t]{0.22\textwidth} \centering \includegraphics[width=\textwidth]{img/u} \end{minipage} } \subfigure[$v$]{ \begin{minipage}[t]{0.2\textwidth} \centering \includegraphics[width=\textwidth]{img/v} \end{minipage} } \subfigure[$p$]{ \begin{minipage}[t]{0.22\textwidth} \centering \includegraphics[width=\textwidth]{img/p} \end{minipage} } \caption{\small Example \ref{example2.1}: The solutions at $t = 0.4$ obtained by the first-order Godunov method along the line $y = 0.5$ with $400 \times 40$ uniform cells.}\label{fig:001} \end{figure} \begin{figure}[htbp] \centering \setlength{\abovecaptionskip}{0.cm} \setlength{\belowcaptionskip}{-0.cm} \subfigure[$\rho$]{ \begin{minipage}[t]{0.22\textwidth} \centering \includegraphics[width=\textwidth]{img/dCon} \end{minipage} } \subfigure[$u$]{ \begin{minipage}[t]{0.22\textwidth} \centering \includegraphics[width=\textwidth]{img/uCon} \end{minipage} } \subfigure[$v$]{ \begin{minipage}[t]{0.2\textwidth} \centering \includegraphics[width=\textwidth]{img/vCon} \end{minipage} } \subfigure[$p$]{ \begin{minipage}[t]{0.22\textwidth} \centering \includegraphics[width=\textwidth]{img/pCon} \end{minipage} } \caption{\small Same as Fig. 
\ref{fig:001} except for two-stage fourth-order conservative scheme {with the reconstructed characteristic variables}.}\label{fig:example001b} \end{figure} % \begin{figure}[htbp] \centering \setlength{\abovecaptionskip}{0.cm} \setlength{\belowcaptionskip}{-0.cm} \subfigure[$\rho$]{ \begin{minipage}[t]{0.22\textwidth} \centering \includegraphics[width=\textwidth]{img/dDOT} \end{minipage} } \subfigure[$u$]{ \begin{minipage}[t]{0.22\textwidth} \centering \includegraphics[width=\textwidth]{img/uDOT} \end{minipage} } \subfigure[$v$]{ \begin{minipage}[t]{0.2\textwidth} \centering \includegraphics[width=\textwidth]{img/vDOT} \end{minipage} } \subfigure[$p$]{ \begin{minipage}[t]{0.22\textwidth} \centering \includegraphics[width=\textwidth]{img/pDOT} \end{minipage} } \caption{\small Same as Fig. \ref{fig:001} except for {two-stage fourth-order} adaptive primitive-conservative scheme {with the reconstructed characteristic variables}.}\label{fig:example001c} \end{figure} \begin{figure}[htbp] \centering \setlength{\abovecaptionskip}{0.cm} \setlength{\belowcaptionskip}{-0.cm} \subfigure[$\rho$]{ \begin{minipage}[t]{0.22\textwidth} \centering \includegraphics[width=\textwidth]{DOT/eps_U/dDOT} \end{minipage} } \subfigure[$u$]{ \begin{minipage}[t]{0.22\textwidth} \centering \includegraphics[width=\textwidth]{DOT/eps_U/uDOT} \end{minipage} } \subfigure[$v$]{ \begin{minipage}[t]{0.2\textwidth} \centering \includegraphics[width=\textwidth]{DOT/eps_U/vDOT} \end{minipage} } \subfigure[$p$]{ \begin{minipage}[t]{0.22\textwidth} \centering \includegraphics[width=\textwidth]{DOT/eps_U/pDOT} \end{minipage} } \caption{\small { Same as Fig. 
\ref{fig:example001c} except for the reconstructed primitive variables}.}\label{fig:example001d} \end{figure} \end{example} With the given ``initial'' cell-average data $\{{\overline{\vec U}}^n_{jk}\}$, in the $x$-direction we want to reconstruct $\vec U^{\pm,n}_{j+\frac12,k_l}$, $(\partial_x \vec U)^{\pm,n}_{j+\frac12,k_l}$ and $(\partial_y \vec F_2)^{\pm,n}_{j+\frac12,k_l}$, where $ \vec U^{\pm,n}_{j+\frac12,k_l} \approx \vec U(x_{j+\frac12}\pm 0, y_{k_l}^G, t_n)$, $(\partial_x \vec U)^{\pm,n}_{j+\frac12,k_l} \approx (\partial_x \vec U)(x_{j+\frac12}\pm 0, y_{k_l}^G, t_n)$, and $(\partial_y \vec F_2)^{\pm,n}_{j+\frac12,k_l} \approx (\partial_y \vec F_2)(x_{j+\frac12}\pm 0, y_{k_l}^G, t_n)$. Here $y_{k_l}^G \in (y_{k-\frac{1}{2}}, y_{k+\frac{1}{2}})$ denotes the associated Gauss-Legendre point, $l=1,2,\cdots,K$. The procedure is as follows: \begin{description} \item[(1)] Calculate $\vec U^{\pm,n}_{j+\frac12,k_l}$ and $(\partial_x \vec U)^{\pm,n}_{j+\frac12,k_l}$ by the following two steps: \begin{description} \item[$-$] For each $j$, use the standard 5th-order WENO technique \cite{Jiang-Shu1996} to reconstruct the approximate value of $\vec U$ at the point $y_{k_l}^G$, denoted by $\overline{\vec U}_{j,k_l}^n$, which approximates $\frac{1}{\Delta x}\int_{x_{j-\frac12}}^{x_{j+\frac12}} \vec U(x,y_{k_l}^G,t_n) \; dx$, $l=1,2,\cdots,K$. \item[$-$] Reconstruct $\vec U^{\pm,n}_{j+\frac12,k_l}$ and $(\partial_x \vec U)^{\pm,n}_{j+\frac12,k_l}$ by the initial reconstruction procedure in Section \ref{subsec:method-1d} and the data $\{\overline{\vec U}_{j,k_l}^n\}$. \end{description} \item[(2)] Calculate $(\partial_y \vec F_2)^{\pm,n}_{j+\frac12,k_l}$ as follows: \begin{description} \item[$-$] Calculate $\{ {\vec F}_2({\overline{\vec U}}^n_{jk}) \}$ and then, for each $j$, use those data and the 5th-order WENO technique to reconstruct $(\overline{\vec F}_2)_{j,k+\frac12}^{\pm,n}$, approximating $\frac{1}{\Delta x} \int_{x_{j-\frac12}}^{x_{j+\frac12}} \vec F_2(x,y_{k+\frac12}\pm 0,t_n) \; dx$. \item[$-$] Calculate $ {(\overline{\pa_y \vec F_2})}^n_{j,k} = \frac{1}{\Delta y} \big((\overline{\vec F}_2)_{j,k+\frac12}^{-,n} - (\overline{\vec F}_2)_{j,k-\frac12}^{+,n} \big)$, and then use those data and the 5th-order WENO technique to reconstruct $(\overline{\pa_y \vec F_2})_{j,k_l}^n$ at the point $y_{k_l}^G$. Here $(\overline{\pa_y \vec F_2})_{j,k_l}^n \approx \frac{1}{\Delta x} \int_{x_{j-\frac12}}^{x_{j+\frac12}} \pa_y \vec F_2(x,y_{k_l}^G,t_n) \; dx$ and $l=1,2,\cdots,K$. \item[$-$] Use the data $\{(\overline{\pa_y \vec F_2})_{j,k_l}^n\}$ and the 5th-order WENO technique to get $(\partial_y \vec F_2)^{\pm,n}_{j+\frac12,k_l}$. \end{description} \end{description} Such a reconstruction is also used at $t_*=t_n+\tau/(3-3\alpha)$, where $\alpha=\alpha(\hat{\tau})$ is a differentiable function of $\hat{\tau}$ satisfying $\alpha(0)=1/3$ and $\alpha\neq 1$, and $\hat{\tau}=C \tau^p$ with $p\geq 1$ and $C$ independent of $\tau$. The two-stage fourth-order time discretizations in Section \ref{subsec:timedis} can be applied to the 2D RHD equations by the following steps. \begin{itemize} \item[Step 1.]
In the $x$-direction, solve the local Riemann problem \begin{equation} \begin{cases} \vec U_t + \vec F_1(\vec U)_x = 0,\\ \vec U(x,y_{k_l}^G,t_n) = \begin{cases} \vec U^{-,n}_{j+\frac12,k_l}, & x < x_{j+\frac{1}{2}},\\ \vec U^{+,n}_{j+\frac12,k_l}, & x > x_{j+\frac{1}{2}}, \end{cases} \end{cases} \end{equation} to get $\vec U_{j+\frac{1}{2},k_l}^{RP,n}$ and $\vec V_{j+\frac{1}{2},k_l}^{RP,n}$, and resolve the local ``quasi 1D'' GRP of \begin{equation} \label{eq:quasi1D} \vec U_t + \vec F_1(\vec U)_x = -\widehat{(\partial_y \vec F_2)}^n_{j+\frac12,k_l},\ t>t_n, \end{equation} to obtain $\left(\frac{\pa}{\pa t}\vec U\right)_{j+\frac{1}{2},k_l}^n$ and $\left(\frac{\pa}{\pa t}\vec V\right)_{j+\frac{1}{2},k_l}^n$, where \begin{equation*} \widehat{(\partial_y \vec F_2)}^n_{j+\frac12,k_l} = \vec R\vec I^+\vec R^- (\partial_y \vec F_2)^{-,n}_{j+\frac12,k_l} + \vec R\vec I^-\vec R^- (\partial_y \vec F_2)^{+,n}_{j+\frac12,k_l}, \end{equation*} and \begin{equation*} \vec A =\frac{\pa \vec F_1}{\pa \vec U}(\vec U_{j+\frac{1}{2},k_l}^{RP,n}) = \vec R \vec\Lambda \vec R^-, \quad \vec \Lambda = \mbox{diag}\{\la_i\}, \quad \vec I^\pm = \frac{1}{2}\mbox{diag}\{1\pm \mbox{sign}(\la_i)\}. \end{equation*} The analytical resolution of the ``quasi 1D'' GRP is given in Appendix \ref{Appendix:001}. Similarly, solve the Riemann problem and resolve the ``quasi 1D'' GRP in the $y$-direction to get $\vec U_{j_l,k+\frac{1}{2}}^{RP,n}$, $\left(\frac{\pa}{\pa t}\vec U\right)_{j_l,k+\frac{1}{2}}^n $, $\vec V_{j_l,k+\frac{1}{2}}^{RP,n}$ and $\left(\frac{\pa}{\pa t}\vec V\right)_{j_l,k+\frac{1}{2}}^n$. \item[Step 2.] Compute the intermediate solutions $\overline{\vec U}_{jk}^*$ or $\overline{\vec V}_{jk}^*$ at $t^*$ with the adaptive procedure \cite[Section 3.3]{E.F.Toro:2013}, whereby the conservative scheme is applied only to the cells in which shock waves are involved and the primitive scheme is used elsewhere, to address the issue mentioned in Example \ref{example2.1}.
With the help of $\vec U_{j\pm \frac{1}{2},k_l}^{RP,n}$ and $\vec U_{j_l,k\pm\frac{1}{2}}^{RP,n}$, the pressures $p_{j\pm \frac{1}{2},k_l}^n$ and $p_{j_l,k\pm\frac{1}{2}}^n$ and the fastest shock speeds $s_{j+\frac{1}{2},k_l}^{n,L}$, $s_{j-\frac{1}{2},k_l}^{n,R}$, $s_{j_l,k+\frac{1}{2}}^{n,L}$, $s_{j_l,k-\frac{1}{2}}^{n,R}$ are first obtained, and then we proceed as follows. \begin{itemize} \item If \begin{equation*} \begin{cases} \frac{p_{j+\frac{1}{2},k_l}^n}{p_{jk}^n} > P_{\mbox{sw}}, \\ s_{j+\frac{1}{2},k_l}^{n,L} < 0, \end{cases} \mbox{or}\; \begin{cases} \frac{p_{j-\frac{1}{2},k_l}^n}{p_{jk}^n} > P_{\mbox{sw}}, \\ s_{j-\frac{1}{2},k_l}^{n,R} > 0, \end{cases} \mbox{or}\; \begin{cases} \frac{p_{j_l,k+\frac{1}{2}}^n}{p_{jk}^n} > P_{\mbox{sw}}, \\ s_{j_l,k+\frac{1}{2}}^{n,L} < 0, \end{cases} \mbox{or}\; \begin{cases} \frac{p_{j_l,k-\frac{1}{2}}^n}{p_{jk}^n} > P_{\mbox{sw}}, \\ s_{j_l,k-\frac{1}{2}}^{n,R} > 0, \end{cases} \end{equation*} then the cell $I_{jk}$ is marked and the solution in $I_{jk}$ is evolved by the conservative scheme \begin{equation*} \overline{\vec U}_{jk}^* = \overline{\vec U}^n_{jk} + \frac{\tau}{3(1-\alpha)} L_{jk} (\overline{\vec U}^n) + \frac{\tau^2}{12 (1-\alpha)} \pa_t L_{jk}(\overline{\vec U}^n), \end{equation*} where \begin{align*} L_{jk}(\overline{\vec U}^n) = & - \frac{1}{\Delta x} \left( \sum_{l=1}^K \om_l \vec F_1(\vec U_{j+\frac{1}{2},k_l}^{RP,n}) - \sum_{l=1}^K \om_l \vec F_1(\vec U_{j-\frac{1}{2},k_l}^{RP,n}) \right) \; \\ & - \;\frac{1}{\Delta y} \left( \sum_{l=1}^K \om_l \vec F_2(\vec U_{j_l,k+\frac{1}{2}}^{RP,n}) - \sum_{l=1}^K \om_l\vec F_2(\vec U_{j_l,k-\frac{1}{2}}^{RP,n}) \right), \end{align*} the term $\pa_t L_{jk}(\overline{\vec U}^n)$ is given similarly to \eqref{eq:1dLt}, and $P_{\mbox{sw}}=1+\epsilon$ denotes the shock sensing parameter.
\item Otherwise, the cell $I_{jk}$ is marked to be updated by the non-conservative scheme \begin{equation*} \overline{\vec V}_{jk}^* = \overline{\vec V}^n_{jk} + \frac{\tau}{3(1-\alpha)} \widetilde{L}_{jk} (\overline{\vec V}^n) + \frac{\tau^2}{12 (1-\alpha)} \pa_t \widetilde{L}_{jk}(\overline{\vec V}^n), \end{equation*} with \begin{align*} -\widetilde{L}_{jk} (\overline{\vec V}^n) = & \frac{1}{\Delta x} \sum_{l=1}^K \om_l \left( \int_{\vec V^{-,n}_{j-\frac{1}{2},k_l}}^{\vec V_{j-\frac{1}{2},k_l}^{RP,n}} \widetilde{\vec A}^+ \;d\vec V + \int^{\vec V^{+,n}_{j+\frac{1}{2},k_l}}_{\vec V_{j+\frac{1}{2},k_l}^{RP,n}} \widetilde{\vec A}^- \;d\vec V + \int_{\vec V_{j-\frac{1}{2},k_l}^{RP,n}}^{\vec V_{j+\frac{1}{2},k_l}^{RP,n}} \widetilde{\vec A} \;d\vec V \right) \\ + & \frac{1}{\Delta y} \sum_{l=1}^K \om_l \left( \int_{\vec V^{-,n}_{j_l,k-\frac{1}{2}}}^{\vec V_{j_l,k-\frac{1}{2}}^{RP,n}} \widetilde{\vec B}^+ \;d\vec V + \int^{\vec V^{+,n}_{j_l,k+\frac{1}{2}}}_{\vec V_{j_l,k+\frac{1}{2}}^{RP,n}} \widetilde{\vec B}^- \;d\vec V + \int_{\vec V_{j_l,k-\frac{1}{2}}^{RP,n}}^{\vec V_{j_l,k+\frac{1}{2}}^{RP,n}} \widetilde{\vec B} \;d\vec V \right), \end{align*} and \begin{align*} -\pa_t \widetilde{L}_{jk} (\overline{\vec V}^n) = & \frac{1}{\Delta x} \sum_{l=1}^K \om_l \left( \widetilde{\vec A}(\vec V_{j+\frac{1}{2},k_l}^{RP,n}) \cdot \pa_t \vec V_{j+\frac{1}{2},k_l}^n - \widetilde{\vec A} (\vec V_{j-\frac{1}{2},k_l}^{RP,n}) \cdot \pa_t \vec V_{j-\frac{1}{2},k_l}^n \right) \\ + & \frac{1}{\Delta y} \sum_{l=1}^K \om_l \left( \widetilde{\vec B}(\vec V_{j_l,k+\frac{1}{2}}^{RP,n}) \cdot \pa_t \vec V_{j_l,k+\frac{1}{2}}^n - \widetilde{\vec B} (\vec V_{j_l,k-\frac{1}{2}}^{RP,n})\cdot \pa_t \vec V_{j_l,k-\frac{1}{2}}^n \right), \end{align*} where $\vec V^{\pm,n}_{j+\frac{1}{2},k_l}$ and $\vec V^{\pm,n}_{j_l,k+\frac{1}{2}}$ are obtained from $\vec U^{\pm,n}_{j+\frac{1}{2},k_l}$ and $\vec U^{\pm,n}_{j_l,k+\frac{1}{2}}$, respectively. The above integrals are evaluated by using a numerical quadrature, such as the Gauss-Legendre quadrature, along a simple canonical path defined by \begin{equation} \vec \Psi(s; \vec V_L, \vec V_R) = \vec V_L + s(\vec V_R - \vec V_L), \quad s \in [0,1]. \end{equation} \end{itemize} \item[Step 3.] With the ``initial'' data $\{\overline{\vec U}^*_{jk}\}$, reconstruct the values $\vec U^{\pm,*}_{j+\frac12,k_l}, (\partial_x \vec U)^{\pm,*}_{j+\frac12,k_l}, (\partial_y \vec F_2)^{\pm,*}_{j+\frac12,k_l}$ and $\vec U^{\pm,*}_{j_l,k+\frac12}, (\partial_x \vec U)^{\pm,*}_{j_l,k+\frac12}, (\partial_x \vec F_1)^{\pm,*}_{j_l,k+\frac12}$. Then, similarly to Step 1, compute $\vec U_{j+\frac{1}{2},k_l}^{RP,*}$, $\left(\frac{\pa}{\pa t}\vec U\right)_{j+\frac{1}{2},k_l}^* $, $\vec U_{j_l,k+\frac{1}{2}}^{RP,*}$ and $\left(\frac{\pa}{\pa t}\vec U\right)_{j_l,k+\frac{1}{2}}^* $. \item[Step 4.] Evolve the solution $\overline{\vec U}_{jk}^{n+1}$ or $\overline{\vec V}_{jk}^{n+1}$ at $t_{n+1} = t_n + \tau$ by the adaptive primitive-conservative scheme in Step 2 with \begin{equation} \overline{\vec U}_{jk}^{n+1} = \overline{\vec U}_{jk}^n + \tau L_{jk}( \overline{\vec U}^n) + \frac{\alpha\tau^2}{2} \pa_t L_{jk}(\overline{\vec U}^n) + \frac{(1-\alpha)\tau^2}{2} \pa_t L_{jk}(\overline{\vec U}^*), \end{equation} and \begin{equation} \overline{\vec V}_{jk}^{n+1} = \overline{\vec V}_{jk}^n + \tau \widetilde{L}_{jk}( \overline{\vec V}^n) + \frac{\alpha\tau^2}{2} \pa_t \widetilde{L}_{jk}(\overline{\vec V}^n) + \frac{(1-\alpha)\tau^2}{2} \pa_t \widetilde{L}_{jk}(\overline{\vec V}^*).
\end{equation} % \end{itemize} \section{Numerical results} \label{sec:example} {In this section, several one-dimensional and two-dimensional tests are presented to demonstrate the performance of our methods.} Unless otherwise stated, the time stepsizes for the 1D and 2D schemes are respectively chosen as \begin{equation*} \tau = \frac{\mu \Delta x}{\max_{\ell, j}\{ |\la_\ell^1( \overline{\vec U}_j^n )|\}}, \end{equation*} and \begin{equation*} \tau = \frac{\mu}{ \max_{\ell,j,k}\{ |\la_\ell^1 \left( \overline{\vec U}_{jk}^n \right) | \} /{\Delta x} + \max_{\ell,j,k}\{ | \la_\ell^2\left( \overline{\vec U}_{jk}^n \right) | \} /{\Delta y} }, \end{equation*} where $\la_\ell^1$ ({resp.} $\la_\ell^2$) is the $\ell$th eigenvalue of the 2D RHD equations in the direction $x$ ({resp.} $y$), $\ell=1,2,3,4$. {The} parameter $\al$ is taken as $\frac13$, and the CFL number $\mu$ is taken as {$0.7$ and $0.5$} for the 1D and 2D problems, respectively. Our numerical experiments { show} that there is no obvious difference between $\alpha=\frac13$ {and $\alpha=(1-6 {\tau})/(3-6 {\tau})$ or $\alpha=\frac13+\tau$}. Here we take $K \geq 3$ in order to ensure that {the degree of} algebraic precision of the corresponding quadrature is at least 4. \subsection{One-dimensional case} \begin{example}[Smooth problem] \label{example3.1}\rm This problem is used to verify the numerical accuracy. The initial data are taken as \begin{equation*} (\rho , u , p)(x,0) \; = \;( 1+0.2\sin(2x), 0.2, 1), \ x\in [0,\pi], \end{equation*} and the periodic boundary condition is specified. The exact solutions can be given by \begin{equation*} \rho (x,t) \; = \; 1+0.2\sin\big(2(x-u t)\big), \ u(x,t) = 0.2,\ p(x,t) = 1. \end{equation*} In our computations, the adiabatic index is $\Gamma=5/3$, the time stepsize is {$ \tau = \frac{\mu \Delta x^{5/4}}{\max_{\ell, j}\{ |\la_\ell^1( \overline{\vec U}_j^n )|\}} $}, and the computational domain $[0,\pi]$ is divided into $N$ uniform cells.
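The convergence rates reported in the tables below are obtained from successive mesh halvings as $\log_2(e_N/e_{2N})$. A minimal Python sketch of this computation (the function name is illustrative; the error values are the $l^1$ errors of $\rho$ from the $\alpha=\frac13$ table):

```python
import math

def convergence_orders(errors):
    """Estimated order between successive refinements, assuming h is halved."""
    return [math.log(e_coarse / e_fine) / math.log(2.0)
            for e_coarse, e_fine in zip(errors, errors[1:])]

# l^1 errors of rho at t = 2 for N = 5, 10, 20, 40, 80, 160 (alpha = 1/3)
l1_errors = [2.8450e-02, 2.5393e-03, 1.1042e-04, 3.3904e-06, 1.0513e-07, 3.3151e-09]
orders = convergence_orders(l1_errors)
print([round(r, 4) for r in orders])  # rates approach 5, matching the order column
```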
Tables \ref{tabel:001}$-$\ref{table:001b} list the errors and convergence rates in $\rho$ at $t = 2$ obtained by using our 1D method with different $\alpha$. It is seen that the two-stage schemes {can} achieve the theoretical orders. \begin{table}[htpb] \centering \caption{The errors and convergence rates for the solution at $t = 2$. $\alpha=\frac13$. } \label{tabel:001} \begin{tabular}{ccccccc} \hline $N$ & $l^1$ error & order & $l^2$ error & order & $l^{\infty}$ error & order \\ \hline 5 &2.8450e-02 & - &3.2450e-02 & - &4.5761e-02 & -\\ 10 &2.5393e-03 & 3.4859 &2.9509e-03 & 3.4590 &3.8805e-03 & 3.5598 \\ 20 &1.1042e-04 & 4.5233 &1.3168e-04 & 4.4861 &1.9960e-04 & 4.2811 \\ 40 &3.3904e-06 & 5.0255 &4.0143e-06 & 5.0357 &7.2420e-06 & 4.7846 \\ 80 &1.0513e-07 & 5.0112 &1.2192e-07 & 5.0412 &2.1990e-07 & 5.0415 \\ 160 &3.3151e-09 & 4.9870 &3.7593e-09 & 5.0193 &6.9220e-09 & 4.9895 \\ \hline \end{tabular} \end{table} \begin{table}[htpb] \centering \caption{{Same as Table \ref{tabel:001} except for } $\alpha=\frac{1-6\tau}{3-6\tau}$.} \label{table:001a} \begin{tabular}{ccccccc} \hline $N$ & $l^1$ error & order & $l^2$ error & order & $l^{\infty}$ error & order \\ \hline 5 &2.8452e-02 & - &3.2453e-02 & - &4.5767e-02 & -\\ 10 &2.5392e-03 & 3.4861 &2.9509e-03 & 3.4591 &3.8805e-03 & 3.5600 \\ 20 &1.1042e-04 & 4.5233 &1.3168e-04 & 4.4861 &1.9960e-04 & 4.2810 \\ 40 &3.3904e-06 & 5.0255 &4.0143e-06 & 5.0357 &7.2420e-06 & 4.7846 \\ 80 &1.0513e-07 & 5.0112 &1.2192e-07 & 5.0412 &2.1990e-07 & 5.0415 \\ 160 &3.3151e-09 & 4.9870 &3.7593e-09 & 5.0193 &6.9221e-09 & 4.9895 \\ \hline \end{tabular} \end{table} \begin{table}[htpb] \centering \caption{{Same as Table \ref{tabel:001} except for } $\alpha=\frac13 + \tau$.
} \label{table:001b} \begin{tabular}{ccccccc} \hline $N$ & $l^1$ error & order & $l^2$ error & order & $l^{\infty}$ error & order \\ \hline 5 &2.8448e-02 & - &3.2447e-02 & - &4.5756e-02 & -\\ 10 &2.5393e-03 & 3.4858 &2.9510e-03 & 3.4588 &3.8805e-03 & 3.5596 \\ 20 &1.1042e-04 & 4.5233 &1.3168e-04 & 4.4861 &1.9960e-04 & 4.2811 \\ 40 &3.3904e-06 & 5.0255 &4.0143e-06 & 5.0357 &7.2420e-06 & 4.7846 \\ 80 &1.0513e-07 & 5.0112 &1.2192e-07 & 5.0412 &2.1990e-07 & 5.0415 \\ 160 &3.3151e-09 & 4.9870 &3.7593e-09 & 5.0193 &6.9224e-09 & 4.9894 \\ \hline \end{tabular} \end{table} \end{example} \begin{example}[Riemann problems] \label{example3.2}\rm This example { considers four} Riemann problems, whose initial data are given in Table \ref{table:002} with the initial discontinuity located at $x = 0.5$ in the computational domain $[0,\,1]$. The adiabatic index $\G$ is taken as $5/3$, but $4/3$ for the third problem. The numerical solutions (``$\circ$'') at $t = 0.4$ are displayed in Figs. \ref{fig:002}-\ref{fig:005} with 400 uniform cells, respectively. The exact solutions ({``solid line"}) with 2000 uniform cells are also provided for comparison. It is seen that the numerical solutions are in good agreement with the exact, and the shock and rarefaction waves and contact discontinuities are well captured, and the positivity of the density and the pressure can be well-preserved. However, there exist {slight} oscillations in the density behind the left-moving shock wave of {\tt RP3} and serious undershoots in the density at $x = 0.5$ {of} {\tt RP4}, similar to those in the literature, see e.g. \cite{Wu-Tang-JCP2015,Wu-Yang-Tang:2014,Yang-He-Tang:2011}{.} It is worth noting that no obvious oscillation is observed in the densities of {\tt RP3} obtained by the Runge-Kutta central DG methods \cite{Zhao-Tang:2017a} and {the} adaptive moving mesh method \cite{HeAdaptiveRHD}. \begin{table}[htpb] \centering \caption{ Initial data of four RPs. 
} \label{table:002} \begin{tabular}{c|c|ccc|c|c|ccc} \hline \multicolumn{2}{c|}{} & $\rho$ & $u$ & $p$ & \multicolumn{2}{c|}{} & $\rho$ & $u$ & $p$\\ \hline \hline \multirow{2}{*}{\tt RP1} & left state & 10 & 0 & 40/3 & \multirow{2}{*}{\tt RP2} & left state & 1 & 0 & $10^{3}$ \\ \cline{2-5} \cline{7-10} & right state & 1 & 0 & $10^{-6}$ & & right state & 1 & 0 & $10^{-2}$ \\ \hline \multirow{2}{*}{\tt RP3} & left state & 1 & 0.9 & 1 & \multirow{2}{*}{\tt RP4} & left state & 1 & $-0.7$ & 20\\ \cline{2-5} \cline{7-10} & right state & 1 & 0 & 10 & & right state & 1 & 0.7 & 20\\ \hline \end{tabular} \end{table} \begin{figure}[htbp] \setlength{\abovecaptionskip}{0.cm} \setlength{\belowcaptionskip}{-0.cm} \subfigure[$\rho/10$]{ \begin{minipage}[t]{0.3\textwidth} \centering \includegraphics[width=\textwidth]{img/1D/al1/eps/URP1d} \end{minipage} } \subfigure[$u$]{ \begin{minipage}[t]{0.3\textwidth} \centering \includegraphics[width=\textwidth]{img/1D/al1/eps/URP1u} \end{minipage} } \subfigure[$\frac{3}{40}p$]{ \begin{minipage}[t]{0.3\textwidth} \centering \includegraphics[width=\textwidth]{img/1D/al1/eps/URP1p} \end{minipage} } \caption{\small{{\tt RP1} in Example \ref{example3.2}: The solutions at $t = 0.4$. }} \label{fig:002} \end{figure} \begin{figure}[htbp] \setlength{\abovecaptionskip}{0.cm} \setlength{\belowcaptionskip}{-0.cm} \subfigure[$\rho/7$]{ \begin{minipage}[t]{0.3\textwidth} \centering \includegraphics[width=\textwidth]{img/1D/al1/eps/URP2d} \end{minipage} } \subfigure[$u$]{ \begin{minipage}[t]{0.3\textwidth} \centering \includegraphics[width=\textwidth]{img/1D/al1/eps/URP2u} \end{minipage} } \subfigure[$p/1000$]{ \begin{minipage}[t]{0.3\textwidth} \centering \includegraphics[width=\textwidth]{img/1D/al1/eps/URP2p} \end{minipage} } \caption{\small{{\tt RP2} in Example \ref{example3.2}: The solutions at $t = 0.4$. 
}} \label{fig:003} \end{figure} \begin{figure}[htbp] \setlength{\abovecaptionskip}{0.cm} \setlength{\belowcaptionskip}{-0.cm} \subfigure[$\rho/7$]{ \begin{minipage}[t]{0.3\textwidth} \centering \includegraphics[width=\textwidth]{img/1D/al1/eps/URP3d} \end{minipage} } \subfigure[$u$]{ \begin{minipage}[t]{0.3\textwidth} \centering \includegraphics[width=\textwidth]{img/1D/al1/eps/URP3u} \end{minipage} } \subfigure[$p/20$]{ \begin{minipage}[t]{0.3\textwidth} \centering \includegraphics[width=\textwidth]{img/1D/al1/eps/URP3p} \end{minipage} } \caption{\small{{\tt RP3} in Example \ref{example3.2}: The solutions at $t = 0.4$. }} \label{fig:004} \end{figure} \begin{figure}[htbp] \setlength{\abovecaptionskip}{0.cm} \setlength{\belowcaptionskip}{-0.cm} \subfigure[$\rho$]{ \begin{minipage}[t]{0.3\textwidth} \centering \includegraphics[width=\textwidth]{img/1D/al1/eps/URP4d} \end{minipage} } \subfigure[$u$]{ \begin{minipage}[t]{0.3\textwidth} \centering \includegraphics[width=\textwidth]{img/1D/al1/eps/URP4u} \end{minipage} } \subfigure[$p/20$]{ \begin{minipage}[t]{0.3\textwidth} \centering \includegraphics[width=\textwidth]{img/1D/al1/eps/URP4p} \end{minipage} } \caption{\small{{\tt RP4} in Example \ref{example3.2}: The solutions at $t = 0.4$. }}\label{fig:005} \end{figure} \end{example} \begin{example}[Density perturbation problem]\label{example3.3}\rm This is a more general Cauchy problem obtained by including a density perturbation in the initial data of corresponding Riemann problem in order to test the ability of shock-capturing schemes to resolve small scale flow features, which may give a good indication of the numerical (artificial) viscosity of the scheme. The initial data are given by \begin{equation*} (\rho , u , p)(x,0) = \begin{cases} (5 ,\, 0 ,\, 50 ) , & x < 0.5, \\ ( 2+0.3\sin(50x) ,\, 0 ,\,5) , & x>0.5. \end{cases} \end{equation*} The computational domain is taken as $[0,\,1]$ with the out-flow boundary conditions. Fig. 
\ref{fig:006} shows the solutions at $t = 0.35$ with 400 uniform cells and $\G = 5/3$, where the reference solution ({``solid line''}) is obtained with 2000 uniform cells. It can be seen that our scheme resolves the high frequency waves better than the third order GRP scheme \cite{Wu-Yang-Tang:2014}. \begin{figure}[htbp] \setlength{\abovecaptionskip}{0.cm} \setlength{\belowcaptionskip}{-0.cm} \subfigure[$\rho$]{ \begin{minipage}[t]{0.3\textwidth} \centering \includegraphics[width=\textwidth]{img/1D/al1/eps/U1DRP5d} \end{minipage} } \subfigure[$u$]{ \begin{minipage}[t]{0.3\textwidth} \centering \includegraphics[width=\textwidth]{img/1D/al1/eps/U1DRP5u} \end{minipage} } \subfigure[$p$]{ \begin{minipage}[t]{0.3\textwidth} \centering \includegraphics[width=\textwidth]{img/1D/al1/eps/U1DRP5p} \end{minipage} } \caption{\small{Example \ref{example3.3}: The solutions at $t = 0.35$.}} \label{fig:006} \end{figure} \end{example} \begin{example}[Collision of two blast waves]\label{example3.4}\rm The last 1D example simulates the collision of two strong relativistic blast waves. The initial data for this initial-boundary value problem consist of three constant states of an ideal gas with $\G = 1.4$, at rest in the domain [0,1] with outflow boundary conditions at $x = 0$ and 1. The initial data are given {by} \begin{equation*} (\rho , u , p)(x,0) = \begin{cases} (1 ,\, 0 ,\, 10^3 ) , & 0 \leq x < 0.1, \\ (1 ,\, 0 ,\, 10^{-2} ) , & 0.1 \leq x < 0.9, \\ (1 ,\, 0 ,\,10^2) , & 0.9 \leq x < 1.0. \end{cases} \end{equation*} Two strong blast waves develop and collide, producing a new contact discontinuity. Figs. \ref{fig:007}$-$\ref{fig:007b} show the close-up of solutions at $t = 0.43$ with 4000 uniform cells and different $\alpha$, where the exact solution (``solid line'') is obtained by the exact RP solver with 4000 uniform cells.
It is seen that our scheme can well resolve those strong discontinuities, and clearly capture the relativistic wave configurations generated by the collision of the two strong relativistic blast waves. \begin{figure}[htbp] \setlength{\abovecaptionskip}{0.cm} \setlength{\belowcaptionskip}{-0.cm} \subfigure[$\rho$]{ \begin{minipage}[t]{0.3\textwidth} \centering \includegraphics[width=\textwidth]{img/1D/al1/eps/U1DRP6d} \end{minipage} } \subfigure[$u$]{ \begin{minipage}[t]{0.3\textwidth} \centering \includegraphics[width=\textwidth]{img/1D/al1/eps/U1DRP6u} \end{minipage} } \subfigure[$p$]{ \begin{minipage}[t]{0.3\textwidth} \centering \includegraphics[width=\textwidth]{img/1D/al1/eps/U1DRP6p} \end{minipage} } \caption{\small{Example \ref{example3.4}: Close-up of the solutions at $t = 0.43$. $\alpha=\frac13$.}}\label{fig:007} \end{figure} \begin{figure}[htbp] \setlength{\abovecaptionskip}{0.cm} \setlength{\belowcaptionskip}{-0.cm} \subfigure[$\rho$]{ \begin{minipage}[t]{0.3\textwidth} \centering \includegraphics[width=\textwidth]{img/1D/al2/eps/U1DRP6d} \end{minipage} } \subfigure[$u$]{ \begin{minipage}[t]{0.3\textwidth} \centering \includegraphics[width=\textwidth]{img/1D/al2/eps/U1DRP6u} \end{minipage} } \subfigure[$p$]{ \begin{minipage}[t]{0.3\textwidth} \centering \includegraphics[width=\textwidth]{img/1D/al2/eps/U1DRP6p} \end{minipage} } \caption{\small{Same as Fig. 
\ref{fig:007} except for $\alpha=\frac{1-6\tau}{3-6\tau}$.}}\label{fig:007a} \end{figure} \begin{figure}[htbp] \setlength{\abovecaptionskip}{0.cm} \setlength{\belowcaptionskip}{-0.cm} \subfigure[$\rho$]{ \begin{minipage}[t]{0.3\textwidth} \centering \includegraphics[width=\textwidth]{img/1D/al3/eps/U1DRP6d} \end{minipage} } \subfigure[$u$]{ \begin{minipage}[t]{0.3\textwidth} \centering \includegraphics[width=\textwidth]{img/1D/al3/eps/U1DRP6u} \end{minipage} } \subfigure[$p$]{ \begin{minipage}[t]{0.3\textwidth} \centering \includegraphics[width=\textwidth]{img/1D/al3/eps/U1DRP6p} \end{minipage} } \caption{\small{Same as Fig. \ref{fig:007} except for $\alpha=\frac13 + \tau$. }}\label{fig:007b} \end{figure} \end{example} \subsection{Two-dimensional case} Unless otherwise stated, the adiabatic index $\G$ is taken as $5/3$ and the parameter $\epsilon$ in the adaptive switch procedure is specified as $0.05$, that is to say, $P_{\mbox{sw}}=1.05$. \begin{example}[Smooth problem]\label{example3.5}\rm The problem considered here describes a RHD sine wave propagating periodically in the domain $\Om = [0,2/ \sqrt{3}]\times [0,2]$ at an angle of $\al = 30^{\circ}$ with the $x$-axis. The initial data are taken as \begin{equation*} \begin{cases} \rho(x,y,0) = 1+0.2\sin(2\pi (x\cos \al + y\sin \al)), \\ u(x,y,0) = 0.2, \quad v(x,y,0) = 0.2, \quad p(x,y,0) = 1. \end{cases} \end{equation*} The exact solution can be given {by} \begin{equation*} \begin{cases} \rho(x,y,t) = 1+0.2\sin(2\pi ((x-ut)\cos \al + (y-vt)\sin \al)), \\ u(x,y,t) = 0.2, \quad v(x,y,t) = 0.2, \quad p(x,y,t) = 1. \end{cases} \end{equation*} {In our computations, $\tau = \frac{\mu}{ \max_{\ell,j,k}\{ |\la_\ell^1 \left( \overline{\vec U}_{jk}^n \right) | \} /{\Delta x^{5/4}} + \max_{\ell,j,k}\{ | \la_\ell^2\left( \overline{\vec U}_{jk}^n \right) | \} /{\Delta y^{5/4}} }$. 
} Tables \ref{table:003}$-$\ref{table:003b} list the errors and convergence rates in $\rho$ at $t = 2$ obtained by using our 2D scheme with $N\times N$ uniform cells and different $\alpha$. The results show that {our 2D two-stage schemes achieve the theoretical orders}. \begin{table}[htpb] \centering \caption{ The errors and convergence rates for the solution at $t = 2$. $\alpha=\frac13$. } \label{table:003} \begin{tabular}{ccccccc} \hline $N$ & $l^1$ error & order & $l^2$ error & order & $l^{\infty}$ error & order \\ \hline 5 &8.8639e-02 & - &6.4567e-02 & - &6.1366e-02 & -\\ 10 &7.3103e-03 & 3.6000 &5.2506e-03 & 3.6202 &4.8578e-03 & 3.6591 \\ 20 &3.4830e-04 & 4.3915 &2.6427e-04 & 4.3124 &2.7917e-04 & 4.1211 \\ 40 &1.0722e-05 & 5.0217 &8.2909e-06 & 4.9943 &9.4647e-06 & 4.8824 \\ 80 &3.3578e-07 & 4.9969 &2.5428e-07 & 5.0271 &2.9210e-07 & 5.0180 \\ 160 &1.0576e-08 & 4.9887 &7.8638e-09 & 5.0150 &9.2428e-09 & 4.9820 \\ \hline \end{tabular} \end{table} \begin{table}[htpb] \centering \caption{{Same as Table \ref{table:003} except for} $\alpha=\frac{1-6\tau}{3-6\tau}$.} \label{table:003a} \begin{tabular}{ccccccc} \hline $N$ & $l^1$ error & order & $l^2$ error & order & $l^{\infty}$ error & order \\ \hline 5 &8.8544e-02 & - &6.4503e-02 & - &6.1296e-02 & -\\ 10 &7.3094e-03 & 3.5986 &5.2467e-03 & 3.6199 &4.8472e-03 & 3.6606 \\ 20 &3.4800e-04 & 4.3926 &2.6414e-04 & 4.3120 &2.7981e-04 & 4.1146 \\ 40 &1.0709e-05 & 5.0222 &8.2864e-06 & 4.9944 &9.5006e-06 & 4.8803 \\ 80 &3.3590e-07 & 4.9946 &2.5419e-07 & 5.0268 &2.9340e-07 & 5.0171 \\ 160 &1.0589e-08 & 4.9875 &7.8733e-09 & 5.0128 &9.3046e-09 & 4.9788 \\ \hline \end{tabular} \end{table} \begin{table}[htpb] \centering \caption{{Same as Table \ref{table:003} except for} $\alpha=\frac13 + \tau$.
} \label{table:003b} \begin{tabular}{ccccccc} \hline $N$ & $l^1$ error & order & $l^2$ error & order & $l^{\infty}$ error & order \\ \hline 5 &8.8719e-02 & - &6.4621e-02 & - &6.1424e-02 & -\\ 10 &7.3109e-03 & 3.6011 &5.2537e-03 & 3.6206 &4.8660e-03 & 3.6580 \\ 20 &3.4852e-04 & 4.3907 &2.6437e-04 & 4.3127 &2.7868e-04 & 4.1261 \\ 40 &1.0731e-05 & 5.0214 &8.2950e-06 & 4.9942 &9.4376e-06 & 4.8840 \\ 80 &3.3618e-07 & 4.9964 &2.5446e-07 & 5.0267 &2.9111e-07 & 5.0188 \\ 160 &1.0593e-08 & 4.9881 &7.8775e-09 & 5.0136 &9.1963e-09 & 4.9844 \\ \hline \end{tabular} \end{table} \end{example} \begin{example}[Riemann problems]\label{example3.6}\rm This example solves three 2D Riemann problems to verify the capability of the 2D two-stage scheme in capturing the complex 2D relativistic wave configurations. The computational domain is taken as $[0,1]\times [0,1]$ and divided into $300 \times 300$ uniform cells. {The output solutions at $t = 0.4$ will be plotted with $30$ equally spaced contour lines}. The initial data of {\tt RP1} are given {by} \begin{equation*} (\rho , u , v, p)(x,0) = \begin{cases} (0.5 ,\, 0.5 ,\, -0.5 ,\, 0.5 ) , & x > 0.5,\,y>0.5, \\ (1 ,\, 0.5,\, 0.5 , \, 5) , & x<0.5,\,y>0.5, \\ (3 ,\, -0.5,\, 0.5 , \, 5) , & x<0.5,\,y<0.5, \\ (1.5 ,\, -0.5,\, -0.5 , \, 5) , & x>0.5,\,y<0.5. \end{cases} \end{equation*} It describes the interaction of four contact discontinuities (vortex sheets) with the same sign (the negative sign). Fig. \ref{fig:RP1} shows the contour of the density and pressure logarithms. The results show that the four initial vortex sheets interact with each other to form a spiral with the low density around the center of the domain as time increases, which is the typical cavitation phenomenon in gas dynamics. \end{example} \begin{figure}[htbp] \centering \includegraphics[width=0.36\textwidth]{img/2D/al1/eps/RP1d} \includegraphics[width=0.36\textwidth]{img/2D/al1/eps/RP1p} \caption{\small{{\tt RP1} of Example \ref{example3.6}: Left: $\log \rho$; right: $\log p$.
}} \label{fig:RP1} \end{figure} The initial data of {\tt RP2} are given {by} \begin{equation*} (\rho , u , v, p)(x,0) \; = \; \begin{cases} (1 ,\, 0 ,\, 0 ,\, 1 ) , & x > 0.5,\,y>0.5, \\ (0.5771 ,\, -0.3529,\, 0 , \, 0.4) , & x<0.5,\,y>0.5,\\ (1 ,\, -0.3529,\, -0.3529 , \, 1) , & x<0.5,\,y<0.5,\\ (0.5771 ,\, 0,\, -0.3529 , \, 0.4) , & x>0.5,\,y<0.5. \end{cases} \end{equation*} Fig. \ref{fig:RP2} shows the contour of the density { and pressure logarithms. The results show } that those four initial discontinuities first evolve as four rarefaction waves and then interact with each other and form two (almost parallel) curved shock waves perpendicular to the line $x = y$ as time increases. \begin{figure}[htbp] \centering \includegraphics[width=0.36\textwidth]{img/2D/al1/eps/RP2d} \includegraphics[width=0.36\textwidth]{img/2D/al1/eps/RP2p} \caption{\small{ {\tt RP2} of Example \ref{example3.6}: Left: { $\log \rho$; right: $\log p$.} }} \label{fig:RP2} \end{figure} The initial data of {\tt RP3} are given {by} \begin{equation*} (\rho , u , v, p)(x,0) \; = \; \begin{cases} (0.035145216124503 ,\, 0 ,\, 0 ,\, 0.162931056509027 ) , & x > 0.5,\,y>0.5, \\ (0.1 ,\, 0.7,\, 0.0 , \, 1.0) , & x<0.5,\,y>0.5, \\ (0.5 ,\, 0.0,\, 0.0 , \, 1.0) , & x<0.5,\,y<0.5, \\ (0.1 ,\, 0.0,\, 0.7 , \, 1.0) , & x>0.5,\,y<0.5, \end{cases} \end{equation*} where the left and bottom discontinuities are two contact discontinuities and the top and right are two shock waves with the speed of $0.9345632754$. Fig. \ref{fig:RP3} shows the contour of the density { and pressure logarithms}. We see that the four initial discontinuities interact with each other and form a ``mushroom cloud'' around the point $(0.5, 0.5)$ as $t$ increases.
\begin{figure}[htbp] \centering \includegraphics[width=0.36\textwidth]{img/2D/al1/eps/RP3d} \includegraphics[width=0.36\textwidth]{img/2D/al1/eps/RP3p} \caption{\small{{\tt RP3} of Example \ref{example3.6}: Left: { $\log \rho$; right: $\log p$.}}} \label{fig:RP3} \end{figure} \begin{example}[Double Mach reflection problem]\label{example3.9}\rm The double Mach reflection problem for the ideal relativistic fluid with the adiabatic index $\Gamma = 1.4$ within the domain $\Omega = [0, 4]\times[0, 1]$ has been widely used to test high-resolution shock-capturing schemes, see e.g. \cite{HeAdaptiveRHD,WuEGRHD,Yang-Tang:2012}. Initially, a right-moving oblique shock with speed $v_s = 0.4984$ is located at $(x, y) = (1/6, 0)$ and makes a $60^\circ$ angle with the $x$-axis. Thus its position at time $t$ may be given by $h(x,t) = \sqrt{3}(x-1/6) - 2v_s t$. The left and right states of the shock wave for the primitive variables are given by \begin{equation*} \vec V(x,y,0) \; = \; \begin{cases} \vec V_L , & y > h(x,0), \\ \vec V_R , & y < h(x,0), \end{cases} \end{equation*} with $\vec V_L = (8.564,0.4247\sin(\pi/3),-0.4247\cos(\pi/3),0.3808)^T$ and $\vec V_R = (1.4, 0.0,0.0,0.0025)^T$. The setup of boundary conditions can be found in \cite{HeAdaptiveRHD,WuEGRHD,Yang-Tang:2012}. Figs. \ref{fig:RP4}$-$\ref{fig:RP4b} give the contours of the density and pressure at time $t = 5.5$ with $640 \times 160$ uniform cells and different $\alpha$. We see that the complicated structure around the double Mach region can be clearly captured.
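The oblique-shock initial data above reduce to testing each point against the line $y = h(x,0)$. A minimal Python sketch of the point-wise assignment (the function and variable names are illustrative, not from the paper's code):

```python
import math

# Left (post-shock) and right (pre-shock) primitive states (rho, u, v, p)
V_L = (8.564, 0.4247 * math.sin(math.pi / 3), -0.4247 * math.cos(math.pi / 3), 0.3808)
V_R = (1.4, 0.0, 0.0, 0.0025)
v_s = 0.4984  # shock speed

def h(x, t):
    # 60-degree oblique shock through (1/6, 0): y = sqrt(3)(x - 1/6) - 2 v_s t
    return math.sqrt(3.0) * (x - 1.0 / 6.0) - 2.0 * v_s * t

def initial_state(x, y):
    # V_L above the shock line, V_R below it
    return V_L if y > h(x, 0.0) else V_R

print(initial_state(0.2, 0.9)[0])  # a point above the line gets rho = 8.564
```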
\begin{figure}[htbp] \setlength{\abovecaptionskip}{0.cm} \setlength{\belowcaptionskip}{-0.cm} \subfigure{ \begin{minipage}[t]{\textwidth} \centering \includegraphics[width=0.8\textwidth]{img/2D/al1/eps/RP4d1} \end{minipage} } \subfigure{ \begin{minipage}[t]{\textwidth} \centering \includegraphics[width=0.8\textwidth]{img/2D/al1/eps/RP4p} \end{minipage} } \caption{\small{Example \ref{example3.9}: {the contours of $\rho$ (top) and $p$ (bottom)} with $30$ equally spaced contour lines. $\alpha=\frac13$.}} \label{fig:RP4} \end{figure} \begin{figure}[htbp] \setlength{\abovecaptionskip}{0.cm} \setlength{\belowcaptionskip}{-0.cm} \subfigure{ \begin{minipage}[t]{\textwidth} \centering \includegraphics[width=0.8\textwidth]{img/2D/al2/eps/RP4d1} \end{minipage} } \subfigure{ \begin{minipage}[t]{\textwidth} \centering \includegraphics[width=0.8\textwidth]{img/2D/al2/eps/RP4p} \end{minipage} } \caption{\small Same as Fig. \ref{fig:RP4} except for $\alpha=\frac{1-6\tau}{3-6\tau}$.} \label{fig:RP4a} \end{figure} \begin{figure}[htbp] \setlength{\abovecaptionskip}{0.cm} \setlength{\belowcaptionskip}{-0.cm} \subfigure{ \begin{minipage}[t]{\textwidth} \centering \includegraphics[width=0.8\textwidth]{img/2D/al3/eps/RP4d1} \end{minipage} } \subfigure{ \begin{minipage}[t]{\textwidth} \centering \includegraphics[width=0.8\textwidth]{img/2D/al3/eps/RP4p} \end{minipage} } \caption{\small Same as Fig. \ref{fig:RP4} except for $\alpha=\frac13 + \tau$.} \label{fig:RP4b} \end{figure} \end{example} \begin{example}[Shock-bubble interaction problems]\label{example3.10}\rm The final example considers two shock-bubble interaction problems {within} the computational domain $[0,325]\times [0,90]$. Their detailed setup can be found in \cite{WuEGRHD}. 
For the first shock-bubble interaction problem, the left and right states of the planar shock wave moving left are given {by} \begin{equation*} \vec V(x,y,0) = \begin{cases} (1,0,0,0.05)^T , & x < 265, \\ (1.865225080631180,-0.196781107378299,0,0.15)^T , & x > 265, \end{cases} \end{equation*} and the bubble is described as $\vec V(x,y,0) = (0.1358,0,0,0.05)^T$ if $\sqrt{(x-215)^2+(y-45)^2} \leq 25$. The setup of the second shock-bubble problem is the same as the first, except that the initial state of the fluid in the bubble is replaced with $ \vec V(x,y,0) = (3.1538,0,0,0.05)^T$ { if } $\sqrt{(x-215)^2+(y-45)^2} \leq 25$. Fig. \ref{fig:RP5} gives the contour plots of the density at $t = 90, 180, 270, 360, 450$ {(from top to bottom)} of the first shock-bubble interaction problem, obtained by using our scheme with $325 \times 90$ uniform cells. Fig. \ref{fig:RP6} presents the contour plots of the density at several moments $t = 100, 200, 300, 400, 500$ (from top to bottom) of the second shock-bubble interaction problem, obtained by using our 2D two-stage scheme with $325 \times 90$ uniform cells. Those results show that the discontinuities and some small wave structures, including the curling of the bubble interface, are captured well and accurately, and at the same time, the multi-dimensional wave structures are also resolved clearly. Those plots also clearly display the dynamics of the interaction between the shock wave and the bubble and the obviously different wave patterns of the {interactions between those shock waves and the bubbles}.
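The piecewise-constant initial data just described amount to overlaying a circular bubble mask on the planar shock states. A minimal Python sketch for the first problem (names are illustrative, not from the paper's code):

```python
import math

# Primitive states (rho, u, v, p) of the first shock-bubble problem
V_PRE = (1.0, 0.0, 0.0, 0.05)                                # x < 265
V_POST = (1.865225080631180, -0.196781107378299, 0.0, 0.15)  # x > 265
V_BUBBLE = (0.1358, 0.0, 0.0, 0.05)                          # light bubble

def initial_state(x, y):
    # Bubble of radius 25 centered at (215, 45) overrides the shock states
    if math.hypot(x - 215.0, y - 45.0) <= 25.0:
        return V_BUBBLE
    return V_PRE if x < 265.0 else V_POST

print(initial_state(215.0, 45.0)[0])  # bubble center: rho = 0.1358
```

For the second problem only `V_BUBBLE` would change, to the heavy state $(3.1538,0,0,0.05)^T$.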
\end{example} \begin{figure}[htbp] \setlength{\abovecaptionskip}{0.cm} \setlength{\belowcaptionskip}{-0.cm} \subfigure{ \begin{minipage}[t]{\textwidth} \centering \includegraphics[width=0.8\textwidth]{img/2D/al1/eps/RP5d1I13} \end{minipage} } \subfigure{ \begin{minipage}[t]{\textwidth} \centering \includegraphics[width=0.8\textwidth]{img/2D/al1/eps/RP5d1I23} \end{minipage} } \subfigure{ \begin{minipage}[t]{\textwidth} \centering \includegraphics[width=0.8\textwidth]{img/2D/al1/eps/RP5d1I33} \end{minipage} } \subfigure{ \begin{minipage}[t]{\textwidth} \centering \includegraphics[width=0.8\textwidth]{img/2D/al1/eps/RP5d1I43} \end{minipage} } \subfigure{ \begin{minipage}[t]{\textwidth} \centering \includegraphics[width=0.8\textwidth]{img/2D/al1/eps/RP5d1I53} \end{minipage} } \caption{\small{The first problem of Example \ref{example3.10}: the contours of $ \rho$ {at $t = 90,180,270,360,450$} with $15$ equally spaced contour lines.}} \label{fig:RP5} \end{figure} \begin{figure}[htbp] \setlength{\abovecaptionskip}{0.cm} \setlength{\belowcaptionskip}{-0.cm} \subfigure{ \begin{minipage}[t]{\textwidth} \centering \includegraphics[width=0.8\textwidth]{img/2D/al1/eps/RP6d1I13} \end{minipage} } \subfigure{ \begin{minipage}[t]{\textwidth} \centering \includegraphics[width=0.8\textwidth]{img/2D/al1/eps/RP6d1I23} \end{minipage} } \subfigure{ \begin{minipage}[t]{\textwidth} \centering \includegraphics[width=0.8\textwidth]{img/2D/al1/eps/RP6d1I33} \end{minipage} } \subfigure{ \begin{minipage}[t]{\textwidth} \centering \includegraphics[width=0.8\textwidth]{img/2D/al1/eps/RP6d1I43} \end{minipage} } \subfigure{ \begin{minipage}[t]{\textwidth} \centering \includegraphics[width=0.8\textwidth]{img/2D/al1/eps/RP6d1I53} \end{minipage} } \caption{\small{The second problem of Example \ref{example3.10}: the contours of $ \rho$ { at $t=100,200,300,400,500$} with $15$ equally spaced contour lines.}} \label{fig:RP6} \end{figure} \section{Conclusions} \label{Section-conclusion} The paper studied 
the two-stage fourth-order accurate time discretization \cite{LI-DU:2016} and its application to the special relativistic hydrodynamical {(RHD)} equations. It was shown that new two-stage fourth-order accurate time discretizations can be constructed. The local ``quasi 1D'' GRP {(generalized Riemann problem)} of { the special RHD } equations was also analytically resolved. With the aid of the direct Eulerian {GRP} methods \cite{Yang-He-Tang:2011,Yang-Tang:2012} and the analytical resolution of the local ``quasi 1D'' GRP as well as the adaptive primitive-conservative scheme \cite{E.F.Toro:2013}, the two-stage fourth-order accurate time discretizations were successfully implemented for the 1D and 2D special RHD equations. The adaptive primitive-conservative scheme was used to reduce the spurious solutions generated by the conservative scheme across the contact discontinuity. Several numerical experiments were conducted to demonstrate the performance, accuracy, and robustness of our schemes.
\section{Introduction} The classical Poisson bracket defined on functions on ${{\mathbb{R}}^{2n}}$ is\cite{1,7,10} \[\left\{ {{f}_{1}},{{f}_{2}} \right\}= \frac{\partial {{f}_{1}}}{\partial {{q}^{i}}}\frac{\partial {{f}_{2}}}{\partial {{p}_{i}}}-\frac{\partial {{f}_{1}}}{\partial {{p}_{i}}}\frac{\partial {{f}_{2}}}{\partial {{q}^{i}}},~~~\forall {{f}_{j}}\in {{C}^{\infty }}\left( M,\mathbb{R} \right)\] On ${{\mathbb{R}}^{r}}$, such a structure is given by functions ${{J}_{ij}}\left( x \right)$ satisfying the following identities, which are also the conditions for the definition of the generalized Poisson bracket (GPB)\cite{2,5}: \begin{description} \item[(i)] Antisymmetry: ${{J}_{ij}}\left( x \right)=-{{J}_{ji}}\left( x \right)$. \item[(ii)] Jacobi identity: \begin{equation}\label{eq9} {{J}_{il}}\frac{\partial {{J}_{jk}}}{\partial {{x}_{l}}}+{{J}_{jl}}\frac{\partial {{J}_{ki}}}{\partial {{x}_{l}}}+{{J}_{kl}}\frac{\partial {{J}_{ij}}}{\partial {{x}_{l}}}=0 \end{equation}where $i,j,k=1,\ldots ,m$, and ${{J}_{ij}}\left( x \right)=\left\{ {{x}_{i}},{{x}_{j}} \right\}$. \end{description} The cosymplectic structure $J=\left( {{J}_{ij}} \right)$ gives rise to the following bivector. \begin{definition}\label{d4} A cosymplectic structure $J$ defines a bivector $\Lambda$ on a Poisson manifold $\left( P,\left\{ \cdot ,\cdot \right\} \right)$ such that \[\Lambda ={{J}_{ij}}\left( x \right){{\partial}_{i}}\otimes {{\partial}_{j}}=\frac{1}{2}{{J}_{ij}}{{\partial }_{i}}\wedge {{\partial }_{j}},~~{{J}_{ij}}=-{{J}_{ji}}\] Then there exists a homomorphic mapping $f\mapsto {{X}_{f}}=\left[ \Lambda ,f \right]={{J}_{ij}}{{\partial }_{i}}f{{\partial }_{j}}$ based on the Schouten bracket, for all $f\in {{C}^{\infty }}\left( M,\mathbb{R} \right)$, where the vector field ${{X}_{f}} \in {{C}^{\infty }}\left( TM \right)$.
\end{definition} (i) and (ii) imply that the bilinear operation \begin{equation}\label{eq1} \left\{ F,G \right\}={{{J}_{ij}}\left( x \right)}\frac{\partial F}{\partial {{x}_{i}}}\frac{\partial G}{\partial {{x}_{j}}} \end{equation} for $ F,G\in {{C}^{\infty }}\left( M,\mathbb{R} \right)$, is antisymmetric and satisfies the Jacobi identity; i.e., the algebra of functions ${{C}^{\infty }}\left( {{\mathbb{R}}^{r}} \right)$ becomes a Lie algebra.\cite{1} An abstractly defined Lie algebra structure $\left\{ , \right\}$ on ${{C}^{\infty }}\left( {{\mathbb{R}}^{r}} \right)$ arises in this way if and only if it satisfies the Leibniz identity $\left\{ FG,H \right\}=\left\{ F,H \right\}G+F\left\{ G,H \right\}$, and this enables us to define a Poisson structure on a manifold $P$ to be a Lie algebra structure $\left\{ , \right\}$ on ${{C}^{\infty }}\left(P \right)$ which satisfies the Leibniz identity. The functions ${J}_{ij}$ may then be seen as the components in local coordinates of an antisymmetric contravariant 2-tensor $J$; the Jacobi identity may be interpreted as the vanishing on $J$ of a certain natural quadratic differential operator of first order. Equation \eqref{eq9} is in fact the Jacobi identity; it forms a set of nonlinear partial differential equations that the structure functions must satisfy. In particular, any constant antisymmetric matrix obviously satisfies \eqref{eq9}. The generalized Poisson bracket \eqref{eq1} has four simple but crucial properties:\cite{2,4,10} \begin{enumerate} \item Antisymmetry: $\left\{ F,G \right\}=-\left\{ G,F \right\}$. \item Bilinearity: $\left\{ \lambda F+\mu G,K \right\}=\lambda \left\{ F,K \right\}+\mu \left\{ G,K \right\}$. $\left\{ F,G \right\}$ is real bilinear in $F$ and $G$. \item Jacobi identity: $\left\{ F,\left\{ G,K \right\} \right\}+\left\{ G,\left\{ K,F \right\} \right\}+\left\{ K,\left\{ F,G \right\} \right\}=0$.
\item Leibniz identity: $\left\{ F\cdot G,K \right\}=F\cdot \left\{ G,K \right\}+G\cdot \left\{ F,K \right\}$, where $F,G,K$ are elements of $C^{\infty}(P)$ and $\lambda$, $\mu$ are arbitrary real numbers. \end{enumerate} A manifold (that is, an $n$-dimensional smooth surface) $P$ together with a bracket operation on $F(P)$, the space of smooth functions on $P$, satisfying properties $1\sim 4$, is called a Poisson manifold, and the generalized Hamiltonian system (GHS) is defined on it for arbitrary dimensions. Any symplectic manifold is a Poisson manifold. The Poisson bracket is defined by the symplectic form. On a Poisson manifold $\left( P,\left\{ \cdot ,\cdot \right\} \right)$, \cite{3} associated to any function $H$ there is a vector field, denoted by $X_{ H}$ and called the Hamiltonian vector field of $H$, which has the property that for any smooth function $F:P\to \mathbb{R}$ we have the identity $$\left\langle dF,{{X}_{H}} \right\rangle =dF\cdot {{X}_{H}}=\left\{ F,H \right\}$$ where $dF$ is the differential of $F$ and $dF\cdot {{X}_{H}}$ denotes the derivative of $F$ in the direction ${{X}_{H}}$. We say that the vector field ${{X}_{H}}$ is generated by the function $H$, or that ${{X}_{H}}$ is the Hamiltonian vector field associated with $H$. \[{{X}_{H}}=\frac{\partial H}{\partial {{p}_{i}}}\frac{\partial }{\partial {{q}^{i}}}-\frac{\partial H}{\partial {{q}^{i}}}\frac{\partial }{\partial {{p}_{i}}}\] Assume first that $M$ is $n$-dimensional, and pick local coordinates $(q^{ 1} ,\cdots ,q^{ n} )$ on $M$. Since $(dq^{ 1} ,\cdots ,dq^{ n} )$ is a basis of ${{T}^{*}}_{q}M$, we can write any $\alpha \in {{T}^{*}}_{q}M$ as $\alpha= p_{ i} dq^{ i }$. This procedure defines induced local coordinates $(q^{ 1} ,\cdots ,q^{ n} ,p_{ 1} ,\cdots ,p_{ n} ) $ on ${{T}^{*}}M$. Define the canonical symplectic form on ${{T}^{*}}M$ by $\Omega= d{{p}_{i}}\wedge d{{q}^{i}}$. The interior product $i_{{X}_{H}}\Omega$ is given by ${{i}_{{{X}_{H}}}}\Omega =dH$\cite{3,7,8,9}.
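In canonical coordinates these objects are concrete enough to check numerically. A small Python sketch using central finite differences (the Hamiltonian and evaluation point are illustrative choices, not from the text):

```python
def d_dq(F, q, p, h=1e-6):
    return (F(q + h, p) - F(q - h, p)) / (2.0 * h)

def d_dp(F, q, p, h=1e-6):
    return (F(q, p + h) - F(q, p - h)) / (2.0 * h)

def poisson(F, G, q, p):
    # Canonical bracket {F,G} = dF/dq dG/dp - dF/dp dG/dq
    return d_dq(F, q, p) * d_dp(G, q, p) - d_dp(F, q, p) * d_dq(G, q, p)

def X_H(H, q, p):
    # Hamiltonian vector field X_H = (dH/dp, -dH/dq), i.e. Hamilton's equations
    return (d_dp(H, q, p), -d_dq(H, q, p))

H = lambda q, p: 0.5 * (p * p + q * q)  # harmonic oscillator (illustrative)
Q = lambda q, p: q
P = lambda q, p: p

print(round(poisson(Q, P, 0.3, 0.7), 6))  # {q,p} = 1
print(round(poisson(H, H, 0.3, 0.7), 6))  # {H,H} = 0, by antisymmetry
```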
\begin{proposition}[\cite{3}] Suppose that $\left( Z,\Omega \right)$ is a $2n$-dimensional symplectic vector space, and let $\left( {{q}^{i}},{{p}_{i}} \right)$ denote canonical coordinates, with respect to which $\Omega$ has matrix $J$. Then in this coordinate system, ${{X}_{H}}:Z\to Z$ is given by ${{X}_{H}}=\left( \frac{\partial H}{\partial {{p}_{i}}},-\frac{\partial H}{\partial {{q}^{i}}} \right)=J\nabla H$. Thus, Hamilton's equations in canonical coordinates are \begin{equation}\label{eq13} {\dot {q}}^{i}=\frac{\partial H}{\partial {{p}_{i}}},~~{\dot {p}}_{i}=-\frac{\partial H}{\partial {{q}^{i}}},~~i=1,\cdots ,n \end{equation} \end{proposition} \begin{definition}[\cite{3}]\label{d3} Let $(P,\Omega)$ be a symplectic manifold. A vector field $X$ on $P$ is called Hamiltonian if there is a function $H : P \to \mathbb{R}$ such that \[{{i}_{X}}\Omega =dH\]that is, for all $ v\in {{T}_{z}}P$, we have the identity ${{\Omega }_{z}}\left( X\left( z \right),v \right)=dH\left( z \right)\cdot v$. In this case we write $X_{ H}$ for $X$. The set of all Hamiltonian vector fields on $P$ is denoted by $X_{ Ham} (P)$. Hamilton's equations are the evolution equations ${\dot {z}}={{X}_{H}}\left( z \right)$. \end{definition} In finite dimensions, Hamilton's equations in canonical coordinates are \[{\dot {q}}^{i}=\frac{\partial H}{\partial {{p}_{i}}},~~{\dot {p}}_{i}=-\frac{\partial H}{\partial {{q}^{i}}},~~i=1,\cdots ,n\] \begin{theorem}[\cite{6}]\label{th2} Let $M$ be an $n$-dimensional $C^{\infty}$ manifold and $X\in {{C}^{\infty }}\left( TM \right)$. Then ${{L}_{X}}=d\circ {{i}_{X}}+{{i}_{X}}\circ d$; that is, for all $\omega \in {{C}^{\infty }}\left( {{\bigwedge }^{r}}{{T}^{*}}M \right)$, ${{L}_{X}}\omega =\left( d\circ {{i}_{X}}+{{i}_{X}}\circ d \right)\omega $.
\end{theorem} \begin{lemma}[\cite{6}]\label{lem2} Let $X$ be a vector field on the symplectic manifold $(P,\Omega)$ with 2-form $\Omega \in {{C}^{\infty }}\left( {{\bigwedge }^{2}}{{T}^{*}}M \right)$. If ${{L}_{X}}\Omega =0$ holds, then $X$ is a symplectic vector field on $(P,\Omega)$. \end{lemma} \begin{theorem}[\cite{6}]\label{t3} Let $M$ be an $n$-dimensional $C^{\infty}$ manifold and $X\in {{C}^{\infty }}\left( TM \right)$, and let ${{L}_{X}}:{{C}^{\infty }}\left( {{\otimes }^{r,s}}TM \right)\to {{C}^{\infty }}\left( {{\otimes }^{r,s}}TM \right)$, $\theta \mapsto {{L}_{X}}\theta $ satisfy \begin{enumerate} \item ${{L}_{X}}f=Xf,f\in {{C}^{\infty }}\left( M,\mathbb{R} \right)={{C}^{\infty }}\left( {{\otimes }^{0,0}}TM \right),$ \item ${{L}_{X}}Y=\left[ X,Y \right],Y\in {{C}^{\infty }}\left( TM \right)={{C}^{\infty }}\left( {{\otimes }^{1,0}}TM \right)$ \end{enumerate} \end{theorem} \begin{lemma}\label{lem3} Let ${{X}_{1}},{{X}_{2}},X\in {{C}^{\infty }}\left( TM \right),f\in {{C}^{\infty }}\left( M,\mathbb{R} \right),\theta ,\eta \in {{C}^{\infty }}\left( {{\otimes }^{0,s}}TM \right)$ be given; then \begin{align} {{i}_{X}}\left( \theta +\eta \right) &={{i}_{X}}\theta +{{i}_{X}}\eta ,~~{{i}_{X}}\left( f\theta \right)=f{{i}_{X}}\theta \notag\\ & {{i}_{{{X}_{1}}+{{X}_{2}}}}={{i}_{{{X}_{1}}}}+{{i}_{{{X}_{2}}}},~~{{i}_{fX}}=f{{i}_{X}} \notag \end{align} \end{lemma} \begin{theorem}[\cite{6}]\label{th4} Let $\omega \in {{C}^{\infty }}\left( {{\bigwedge }^{r}}{{T}^{*}}M \right)$ be given on a smooth manifold along with $Y,X\in {{C}^{\infty }}\left( TM \right)$; then \[d\omega \left( X,Y \right)=X\left\langle Y,\omega \right\rangle -Y\left\langle X,\omega \right\rangle -\left\langle \left[ X,Y \right],\omega \right\rangle \] \end{theorem} \begin{definition}[\cite{3}] Given a symplectic vector space $\left( Z,\Omega \right)$ and two functions $F,G:Z\to \mathbb{R}$, the generalized Poisson bracket $\left\{ F,G \right\}:Z\to \mathbb{R}$ of $F$ and $G$
is defined by $$\left\{ F,G \right\}\left( z \right)=\Omega \left( {{X}_{F}}\left( z \right),{{X}_{G}}\left( z \right) \right)$$ \end{definition} Using the definition of a Hamiltonian vector field, we find that equivalent expressions are $$\left\{ F,G \right\}\left( z \right)=dF\left( z \right)\cdot {{X}_{G}}\left( z \right)=-dG\left( z \right)\cdot {{X}_{F}}\left( z \right)$$ where we write ${{L}_{{{X}_{G}}}}F=dF\cdot {{X}_{G}}$ for the derivative of $F$ in the direction $X_{ G }$. Lie derivative notation: the Lie derivative of $f$ along $X$, ${{L}_{X}}f=df\cdot X$, is the directional derivative of $f$ in the direction $X$. The generalized Hamiltonian system (GHS) is defined as (see \cite{1,2,6,7}) \begin{equation}\label{eq7} {\dot{x}}=\frac{dx}{dt}=J\left( x \right)\nabla H\left( x \right),~~~x\in {{\mathbb{R}}^{m}} \end{equation}where $J\left( x \right)$ is the structure matrix and $\nabla H\left( x \right)$ is the gradient of the Hamiltonian function $H$; the structure matrix $J\left( x \right)$ satisfies conditions (i) and (ii). Using the Leibniz property of the GPB, Hamilton's equations can be further written as $\left\{ {{x}_{i}},H \right\}={{{J}_{ij}}\frac{\partial H}{\partial {{x}_{j}}}}$. Transformation of Hamiltonian systems: as in the vector space case, we have the following results. \begin{proposition}[\cite{3}]\label{p1} A diffeomorphism $\varphi: P_{ 1}\to P_{ 2}$ of symplectic manifolds is symplectic if and only if it satisfies\[{{\varphi }^{*}}{{X}_{H}}={{X}_{H\circ \varphi }}\] for all functions $H : U \to \mathbb{R}$ (such that $X_{ H}$ is defined) where $U$ is any open subset of $P_{ 2}$. \end{proposition} Thus, $\varphi$ preserves Poisson brackets if and only if ${{\varphi }^{*}}{{X}_{G}}={{X}_{G\circ \varphi }}$ for every $G:Z\to \mathbb{R}$. \begin{proposition}[\cite{3}] Let $X_{ H}$ be a Hamiltonian vector field on $Z$, with Hamiltonian $H$ and flow $\varphi_{t}$.
Then for $F:Z\to \mathbb{R}$, $$\frac{d}{dt}\left( F\circ {{\varphi }_{t}} \right)=\left\{ F\circ {{\varphi }_{t}},H \right\}=\left\{ F,H \right\}\circ {{\varphi }_{t}}$$ \end{proposition} \begin{corollary}[\cite{3}]\label{c2} Let $F,G:Z\to \mathbb{R}$. Then $F$ is constant along integral curves of $X_{ G}$ if and only if $G$ is constant along integral curves of $X_{ F}$, and this is true if and only if $\left\{ F,G \right\} = 0$. \end{corollary} \begin{lemma}[\cite{5}]\label{lem4} If $\left\{ f,g \right\} = 0$ holds for all $g\in {{C}^{\infty }}\left( M,\mathbb{R} \right)$, then $f$ is called a Casimir function on the Poisson manifold. \end{lemma}Clearly, a Casimir function has no corresponding Hamiltonian vector field: for all $ g\in {{C}^{\infty }}\left( M,\mathbb{R} \right)$, $\left\{ f,g \right\}=-{{X}_{f}}g=0$ gives rise to ${{X}_{f}}=0$, hence $f\in H^{0}(M,\mathbb{R})$. \begin{lemma}[\cite{6}]\label{d1} A smooth curve $\gamma :\left[ a,b \right]\to M$ which satisfies \[{\ddot {x}}^{i}\left( t \right)+\Gamma _{jk}^{i}\left( x\left( t \right) \right){\dot {x}}^{j}\left( t \right){\dot {x}}^{k}\left( t \right)=0,~~i=1,\cdots ,m\] is called a geodesic, and the equation is called the geodesic equation. \end{lemma} \begin{remark} The brackets in the introduction are GCP brackets; in the following discussion, the subscript GHS will be attached to show the differences. \end{remark} \section{Generalized Structural Poisson Bracket } Let $Z$ be a real Banach space, possibly infinite-dimensional, and let $\Omega: Z\times Z\to \mathbb{R}$ be a continuous bilinear form on $Z$. As is known, the nabla symbol $\nabla$ is the vector differential operator; hence we naturally obtain the extension of $\nabla$, denoted by the vector differential operator $D=\nabla +\nabla \chi\in {{\mathbb{R}}^{m}}$, namely the operator transformation $$\nabla \to D=\nabla +\nabla \chi $$ is made. Of course, the vector function is $A=\nabla \chi\in {{\mathbb{R}}^{m}}$.
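As a quick sanity check of the operator transformation $\nabla \to D=\nabla+\nabla\chi$, the sketch below (an illustration with arbitrary test functions, not taken from the text) confirms that $D_i$ acts as $\partial_i+A_if$ on a function $f$ and obeys the modified Leibniz rule $D_i(fg)=fD_ig+gD_if-A_ifg$, a direct consequence of the definition since the term $A_ifg$ would otherwise be counted twice:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2', real=True)
x = [x1, x2]
chi = x1**2 * x2                      # an arbitrary test structure function
A = [sp.diff(chi, xi) for xi in x]    # structural vector A = grad(chi)

def D(h, i):
    """Covariant derivative D_i h = d_i h + A_i h."""
    return sp.diff(h, x[i]) + A[i] * h

f, g = sp.sin(x1) + x2, sp.exp(x2)

# Modified Leibniz rule: D_i(fg) = f D_i(g) + g D_i(f) - A_i f g
for i in range(2):
    lhs = D(f * g, i)
    rhs = f * D(g, i) + g * D(f, i) - A[i] * f * g
    assert sp.simplify(lhs - rhs) == 0

# When chi is constant, A vanishes and D_i reduces to the partial derivative.
A0 = [sp.diff(sp.Integer(3), xi) for xi in x]
assert A0 == [0, 0]
```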
For its components, the covariant derivative operator is denoted by ${{D}_{i}}=\frac{\partial }{\partial {{x}_{i}}}+{{A}_{i}}$, where ${{A}_{i}}={{\partial }_{i}}\chi $ is the structural derivative, i.e., a component of the vector function $A$, with ${{\partial }_{i}}=\frac{\partial }{\partial {{x}_{i}}}$; here $A$ is a structural vector field derived from the structure function $\chi$. Actually, this is the derivative transformation $${{\partial }_{i}}\to {{D}_{i}}={{\partial }_{i}}+{{A}_{i}}$$ where the structural derivative $A_{i }$ is a component of the vector field $A$. The covariant derivative $D_{i }$ serves in this context as a generalization of the partial derivative $\partial _{i }$ which transforms covariantly under parallel transport. More precisely, given a function $f\in C^{\infty}$, $$\nabla f\to Df=\nabla f+f\nabla \chi=\nabla f+Af\in {{\mathbb{R}}^{m}}$$ is defined on the $m$-dimensional Poisson manifold, which is self-consistent and compatible. For the following discussions, we will use the GSPB bracket $\left\{ F,G \right\}={{J}_{ij}}{{D}_{i}}F{{D}_{j}}G$ without distinction. \begin{proposition}[Structure Matrix]\label{p2} A generalized Poisson structure is given by functions $J=\left ({{J}_{ij}}\left (x\right) \right)$ on ${{\mathbb{R}}^{m}}$ satisfying the identities \begin{enumerate} \item skew-symmetry: ${{J}_{ij}}\left( x \right)=-{{J}_{ji}}\left( x \right)$. \item Generalized Jacobi identity: \begin{equation}\label{eq4} {{J}_{il}}{{D}_{l}}{{J}_{jk}}+{{J}_{jl}}{{D}_{l}}{{J}_{ki}}+{{J}_{kl}}{{D}_{l}}{{J}_{ij}} =0 \end{equation} where ${{D}_{l}}={{\partial }_{l}}+{{A}_{l}}$ is the covariant derivative. \end{enumerate} \end{proposition} Skew-symmetry is used to preserve the structure; $J$ is called a cosymplectic structure and satisfies the generalized Jacobi identity. Equation \eqref{eq4} forms a set of nonlinear partial differential equations that the structural functions must satisfy.
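For the flat case $A_l=0$, the identity \eqref{eq4} reduces to the ordinary Jacobi identity, which can be checked symbolically on a standard non-constant structure matrix; the free rigid body below is a textbook example chosen for illustration (it does not appear in this paper):

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3', real=True)
x = [x1, x2, x3]
# Non-constant structure matrix of the free rigid body: J_ij = -eps_ijk x_k.
J = sp.Matrix([[0, -x3, x2], [x3, 0, -x1], [-x2, x1, 0]])

# Skew-symmetry
assert J.T == -J
# Ordinary Jacobi identity: J_il d_l J_jk + J_jl d_l J_ki + J_kl d_l J_ij = 0
for i in range(3):
    for j in range(3):
        for k in range(3):
            s = sum(J[i, l]*sp.diff(J[j, k], x[l])
                    + J[j, l]*sp.diff(J[k, i], x[l])
                    + J[k, l]*sp.diff(J[i, j], x[l]) for l in range(3))
            assert sp.simplify(s) == 0

# C = |x|^2/2 is a Casimir of this structure: J grad C = 0,
# so {C, H}_GHS = 0 for every Hamiltonian H.
C = (x1**2 + x2**2 + x3**2) / 2
gradC = sp.Matrix([sp.diff(C, xi) for xi in x])
assert sp.simplify(J * gradC) == sp.zeros(3, 1)
```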
Plugging the covariant derivative into \eqref{eq4} gives \begin{align}\label{la1} {{J}_{il}}{{D}_{l}}{{J}_{jk}}+{{J}_{jl}}{{D}_{l}}{{J}_{ki}}+{{J}_{kl}}{{D}_{l}}{{J}_{ij}} & =\left( {{J}_{il}}{{\partial }_{l}}{{J}_{jk}}+{{J}_{jl}}{{\partial }_{l}}{{J}_{ki}}+{{J}_{kl}}{{\partial }_{l}}{{J}_{ij}} \right)\notag\\ &\quad +{{A}_{l}}\left( {{J}_{il}}{{J}_{jk}}+{{J}_{jl}}{{J}_{ki}}+{{J}_{kl}}{{J}_{ij}} \right)=0 \end{align} Thus, when $D_{i}$ degenerates to $\partial_{i}$, the GJI reduces to the ordinary Jacobi identity; hence the GJI should be taken as the complete Jacobi identity, which can be rearranged as \begin{equation}\label{eq3} {{J}_{il}}{{\partial }_{l}}{{J}_{jk}}+{{J}_{jl}}{{\partial }_{l}}{{J}_{ki}}+{{J}_{kl}}{{\partial }_{l}}{{J}_{ij}}=-{{A}_{l}}\left( {{J}_{il}}{{J}_{jk}}+{{J}_{jl}}{{J}_{ki}}+{{J}_{kl}}{{J}_{ij}} \right) \end{equation} As a matter of fact, the GJI \eqref{eq4} represents structural conservation on a non-Euclidean manifold $M$ with ${A}_{l}$; when ${A}_{l}=0$, the GJI \eqref{la1} on non-Euclidean space reduces to the structural equations of conditions (i) and (ii) on flat Euclidean manifolds $M$, that is to say, $GCPB\rightarrow GPB$. Obviously, the structure-preserving equation \eqref{eq4} is a natural prolongation of \eqref{eq9}. Let us fix some notation: in the following discussions of the main results we omit the subscript GCHS in ${{\left\{ F,G \right\}}_{GCHS}}$ unless otherwise stated, namely ${{\left\{ F,G \right\}}_{GCHS}}\equiv{{\left\{ F,G \right\}}}$, and the GPB is denoted by ${{\left\{ F,G \right\}}_{GHS}}$. We will define one of the most important operators in the GCHS.
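Conversely, in three dimensions the $A_l$-dependent term of \eqref{la1} vanishes identically for any constant antisymmetric $J$, purely by antisymmetry, so such a $J$ satisfies the GJI for every structural vector $A$. A short symbolic check (the symbols $j_{12},j_{13},j_{23},a_1,a_2,a_3$ are arbitrary placeholders, not quantities from the paper):

```python
import sympy as sp

# A constant antisymmetric 3x3 structure matrix with symbolic entries,
# together with a symbolic structural vector A.
j12, j13, j23, a1, a2, a3 = sp.symbols('j12 j13 j23 a1 a2 a3', real=True)
J = sp.Matrix([[0, j12, j13], [-j12, 0, j23], [-j13, -j23, 0]])
A = [a1, a2, a3]

# For constant J the partial-derivative terms of the GJI vanish, so the
# identity reduces to A_l (J_il J_jk + J_jl J_ki + J_kl J_ij) = 0.
for i in range(3):
    for j in range(3):
        for k in range(3):
            s = sum(A[l] * (J[i, l]*J[j, k] + J[j, l]*J[k, i] + J[k, l]*J[i, j])
                    for l in range(3))
            assert sp.expand(s) == 0
```

In four or more dimensions the bracketed combination is no longer identically zero, so the GJI then genuinely constrains even a constant structure matrix.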
Based on definition \ref{d4}, for a given map $${{\lambda }}:{{\Lambda }^{1}}\left( M \right)\to {{C}^{\infty }}\left( M,\mathbb{R} \right),~~{{\lambda }}:df\mapsto {{X}_{f}}$$ such that ${{\alpha }_{i}}d{{x}^{i}}\to \left\langle {{\alpha }_{i}}d{{x}^{i}},\Lambda \right\rangle ={{J}_{ij}}\left( x \right){{\alpha }_{i}}\frac{\partial }{\partial {{x}^{j}}}$ and $df={{f}_{,i}}d{{x}^{i}}\to {{J}_{ij}}\left( x \right){{f}_{,i}}\frac{\partial }{\partial {{x}^{j}}}={{X}_{f}}$, we give the definition of the structural operator as a structural vector field. \begin{definition}[Structural Operator] Let $\left( Z,\Omega \right)$ be a symplectic vector space. A vector field $D:Z\to Z$ is given on $Z$, and $A=\nabla\chi$ is a structural vector field; the structural operator is defined as \[\widehat{S}\equiv {{A}^{T}}JD ={{J}_{ij}}{{A}_{i}}{{D}_{j}}={{J}_{ij}}{{A}_{i}}{{\partial }_{j}}=X_{\chi}\in T_{p}M\] where $D=\nabla +A$, $X_{\chi}$ is the structural vector field, and ${{J}_{ij}}{{A}_{i}}{{A}_{j}}=0$ has been used. \end{definition} The most peculiar point is that the structural operator $ \widehat{S}$, as a complete operator, only exists in non-Euclidean space; it does not exist in flat Euclidean space, and it involves a double summation yielding a complete functional form. \begin{definition}\label{d6} Let a vector field ${{X}_{f}}={{J}_{ij}}{{\partial }_{i}}f{{\partial }_{j}}\in {{T}_{p}}M$ be given on the Poisson manifold; then a vector transformation is given by $${{X}_{f}}\to X_{f}^{M}={{X}_{f}}+f{{X}_{\chi}}$$ for all ${{X}_{f}},{{X}_{\chi}}={{J}_{ij}}{{A}_{i}}{{\partial }_{j}}\in {{T}_{p}}M$. ${X_{f}^{M}}$ is a non-symplectic vector field, and the corresponding space is denoted by $\left( Z_{N},\Omega \right)$.
\end{definition} Note that the symplectic structure $\Omega$ on a symplectic manifold is a closed form, $d\Omega =0$; based on theorem \ref{th2}, hence \[{{L}_{X}}\Omega =\left( d\circ {{i}_{X}}+{{i}_{X}}\circ d \right)\Omega =d{{i}_{X}}\Omega=0\] A vector field $X$ satisfies the above equation, i.e., is symplectic, if and only if ${{i}_{X}}\Omega$ is a closed form. Then we can obtain the Lie derivative of the differential 2-form $\Omega$ along ${X_{f}^{M}}$: \[{{L}_{X_{f}^{M}}}\Omega =df\wedge d\chi \in {{C}^{\infty }}\left( {{\bigwedge }^{2}}{{T}^{*}}M \right)\]for all $f, \chi \in {{C}^{\infty }}\left( M,\mathbb{R} \right)$; according to lemma \ref{lem2}, this generally reveals that ${X_{f}^{M}}$ is not a symplectic vector field. It follows that the 2-form $\alpha={{L}_{X_{f}^{M}}}\Omega =d\xi\in {{C}^{\infty }}\left( {{\bigwedge }^{2}}{{T}^{*}}M \right)$, where we set $\xi =fd\chi$, and then $d{{L}_{X_{f}^{M}}}\Omega =0$; the form $\xi$ is called a potential form for $\alpha$, and ${{L}_{X_{f}^{M}}}\Omega=\alpha= d\xi$ is an exact form for the differential form $\xi$ of one lesser degree than $\alpha$. Because $d^{2} = 0$, the exact form ${{L}_{X_{f}^{M}}}\Omega$ is automatically closed. In particular, on a contractible domain, every closed form is exact by the Poincar\'{e} lemma. According to theorem \ref{th4}, for $\alpha\in {{C}^{\infty }}\left( {{\bigwedge }^{2}}{{T}^{*}}M \right)$, $\xi\in {{C}^{\infty }}\left( {{\bigwedge }^{1}}{{T}^{*}}M \right)$, we have \[\alpha \left( X,Y \right)=d\xi \left( X,Y \right)=X\left\langle Y,\xi \right\rangle -Y\left\langle X,\xi \right\rangle -\left\langle \left[ X,Y \right],\xi \right\rangle \] for all $Y,X\in {{C}^{\infty }}\left( TM \right)$.
One takes the covariant differential $\mathcal{D}f={{D}_{i}}fd{{x}^{i}}=df+fd\chi $ for all $f\in {{C}^{\infty }}\left( M,\mathbb{R} \right)$ to replace the ordinary differential $df$; with definition \ref{d4}, we can then define the generalized structural Poisson bracket below. \begin{definition}[GSPB]\label{d5} Let the vector field $X_{f}^{M}$ be given on the manifold $M$ along with the bivector $\Lambda$ such that \[\left\{ f,g \right\} \equiv \left\langle \mathcal{D}f\otimes \mathcal{D}g,\Lambda \right\rangle ={{D}^{T}}fJDg={{J}_{ij}}{{D}_{i}}f{{D}_{j}}g\] is called the generalized structural Poisson bracket for all $f,g\in {{C}^{\infty }}\left( M,\mathbb{R} \right)$, where $D=\nabla +A$ is the vector operator. \end{definition} The analytic expression of the GSPB is given by $\left\{ f,g \right\}={{D}^{T}}fJDg={{J}_{ij}}{{D}_{i}}f{{D}_{j}}g$ with the vector operator $D=\nabla +A$, \begin{align} \left\{ f,g \right\} & ={{D}^{T}}fJDg={{\nabla }^{T}}fJ\nabla g+f{{A}^{T}}J\nabla g+g{{\nabla }^{T}}fJA+gf{{A}^{T}}JA \notag\\ & ={{\left\{ f,g \right\}}_{GHS}}+f{{\left\{ \chi ,g \right\}}_{GHS}}-g{{\left\{ \chi ,f \right\}}_{GHS}} \notag \end{align}for all $f,g\in {{C}^{\infty }}\left( M,\mathbb{R} \right)$, where ${{\left\{ f,g \right\}}_{GHS}}={{\nabla }^{T}}fJ\nabla g,{{\left\{ \chi ,g \right\}}_{GHS}}={{A}^{T}}J\nabla g,{{A}^{T}}J\nabla f={{\left\{ \chi ,f \right\}}_{GHS}}$ and ${{A}^{T}}JA=0$; hence the structural operator is correspondingly rewritten as $\widehat{S}={{A}^{T}}JD={{A}^{T}}J\nabla $, and based on definition \ref{d5} the GSPB can be calculated as in the following theorem. \begin{theorem}\label{le5} The generalized structural Poisson bracket of two functions $f,g\in {{C}^{\infty }}\left( M,\mathbb{R} \right)$ is given by \[\left\{ f,g \right\}={{\left\{ f,g \right\}}_{GHS}}+{{X}_{\chi }}\left( f,g \right) \]where $\left\{ f,g \right\}=-\left\{ g,f \right\}$ is skew-symmetric, and ${{X}_{\chi }}\left( f,g \right)=f{{X}_{\chi }}g-g{{X}_{\chi }}f=-{{X}_{\chi }}\left( g,f \right)$ are
together defined. \begin{proof} As previously illustrated, the nabla symbol $\nabla$ is replaced by the vector differential operator $D$; then the generalized structural Poisson bracket of two given functions $f,g\in {{C}^{\infty }}\left( M,\mathbb{R} \right)$ is defined such that \begin{align} \left\{ f,g \right\} &={{D}^{T}}fJDg={{J}_{ij}}{{D}_{i}}f{{D}_{j}}g\notag\\ & ={{J}_{ij}}{{\partial }_{i}}f{{D}_{j}}g+f\widehat{S}g \notag \end{align}where $\widehat{S}\equiv {{J}_{ij}}{{A}_{i}}{{D}_{j}}={{J}_{ij}}{{A}_{i}}{{\partial }_{j}}$. Let a vector field ${{X}_{f}}={{J}_{ij}}{{\partial }_{i}}f{{\partial }_{j}}\in {{T}_{p}}M$ be given on the generalized Poisson manifold; then $${{X}_{f}}\to X_{f}^{M}={{X}_{f}}+f{{X}_{\chi}}$$ for all ${{X}_{f}},{{X}_{\chi}}={{J}_{ij}}{{A}_{i}}{{\partial }_{j}}$. Hence the generalized structural Poisson bracket is reexpressed as \[\left\{ f,g \right\}=X_{f}^{M}g+gX_{f}^{M}\chi=X_{f}^{M}g+g{{X}_{f}}\chi\] for all $f,g\in {{C}^{\infty }}\left( M,\mathbb{R} \right)$. Hence we obtain \begin{align} \left\{ f,g \right\} &= {{\left\{ f,g \right\}}_{GHS}}+f{{\left\{ \chi,g \right\}}_{GHS}}-g{{\left\{ \chi,f \right\}}_{GHS}}\notag \\ & ={{X}_{f}}g+f{{X}_{\chi}}g-g{{X}_{\chi}}f \notag \end{align} Here we denote the antisymmetric expression ${{X}_{\chi }}\left( f,g \right)=f{{X}_{\chi }}g-g{{X}_{\chi }}f=-{{X}_{\chi }}\left( g,f \right)$. \end{proof} \end{theorem} The GSPB can also be expressed as \[\left\{ f,g \right\}={{X}_{f}}g+g{{X}_{f}}\chi +f{{X}_{\chi }}g\] \begin{corollary}\label{c1} The identity ${{X}_{\chi}}\chi=0$ holds for all $\chi\in{{C}^{\infty }}\left( M,\mathbb{R} \right) $. \end{corollary} Apparently, theorem \ref{le5} implies the formal expressions \[\left\{ f,\cdot \right\}={{X}_{f}}+{{X}_{\chi }}\left( f,\cdot \right), ~~\left\{ \cdot ,g \right\}=-{{X}_{g}}+{{X}_{\chi }}\left( \cdot ,g \right)\] As we can see, ${{X}_{\chi }}\left( f,g \right)$ is entirely generated by the structural vector field ${{X}_{\chi}}$.
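The decomposition of theorem \ref{le5} can be verified symbolically; the sketch below (with arbitrary test functions $f$, $g$, $\chi$, our own illustration) checks the decomposition, the skew-symmetry, and corollary \ref{c1}:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2', real=True)
x = [x1, x2]
J = sp.Matrix([[0, 1], [-1, 0]])
chi = x1 * x2**2                          # an arbitrary test structure function
A = [sp.diff(chi, xi) for xi in x]        # A = grad(chi)

def D(h, i):
    """Covariant derivative D_i h = d_i h + A_i h."""
    return sp.diff(h, x[i]) + A[i] * h

def gspb(f, g):
    """GSPB {f,g} = J_ij D_i f D_j g."""
    return sp.expand(sum(J[i, j] * D(f, i) * D(g, j)
                         for i in range(2) for j in range(2)))

def ghs(f, g):
    """Ordinary GPB {f,g}_GHS = J_ij d_i f d_j g."""
    return sum(J[i, j] * sp.diff(f, x[i]) * sp.diff(g, x[j])
               for i in range(2) for j in range(2))

f, g = x1**2 + x2, sp.exp(x1) * x2
Xchi = lambda h: ghs(chi, h)              # structural vector field X_chi = {chi,.}_GHS

# Theorem decomposition: {f,g} = {f,g}_GHS + f X_chi(g) - g X_chi(f)
assert sp.simplify(gspb(f, g) - (ghs(f, g) + f*Xchi(g) - g*Xchi(f))) == 0
# Skew-symmetry of the GSPB
assert sp.simplify(gspb(f, g) + gspb(g, f)) == 0
# Corollary: X_chi(chi) = 0
assert sp.simplify(Xchi(chi)) == 0
```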
As a result of the GSPB, the following properties hold. \begin{theorem}\label{th3}\label{th5} For all $f,g,h \in {{C}^{\infty }}\left( M,\mathbb{R} \right)$, $\lambda,\mu \in \mathbb{R}$, the GSPB has the following important properties \begin{enumerate} \item Antisymmetry: $\left\{ f,g \right\}=-\left\{ g,f \right\}$. \item Bilinearity: $\left\{ \lambda f+\mu g,h \right\}=\lambda \left\{ f,h \right\}+\mu \left\{ g,h \right\}$. \item GJI: $\left\{ f,\left\{ g,h \right\} \right\}+\left\{ g,\left\{ h,f \right\} \right\}+\left\{ h,\left\{ f,g \right\} \right\}=0$. \item Generalized Leibniz identity: ${{\left\{ fg,h \right\}}}={{\left\{ fg,h \right\}}_{GHS}}+{{X}_{\chi }}\left( fg, h \right) $. \item Non-degeneracy: if $\left\{ f,g \right\}=0$, then ${{\left\{ f,g \right\}}_{GHS}}={{X}_{\chi }}\left( g,f \right)$. \end{enumerate} \end{theorem} Obviously, the nature of properties 1, 4 and 5 has undergone a fundamental change; according to their mathematical expressions, they are directly linked to the structural operator, and the symmetry has been greatly expanded. The Leibniz derivation law is no longer linear. The Jacobi identity deduces a general identity connected to the structural derivative. Property 2 is also complete under the GSPB. Transparently, property 5 in theorem \ref{th3} reveals that for $f,g \in {{C}^{\infty }}\left( M,\mathbb{R} \right)$, $\left\{ f,g \right\}=0$ leads to the result $f{{X}_{\chi }}g=X_{g}^{M}f$ or $X_{g}^{M}f+f{{X}_{g}}\chi =0$; the Casimir function has a corresponding Hamiltonian vector field $X_{H}$. Assuming ${{X}_{\chi }}g\ne 0$, we have $\frac{X_{f}^{M}}{{{X}_{f}}\chi }=-id$, or the identity map $\frac{X_{g}^{M}}{{{X}_{\chi }}g}=id$; obviously, this is one of the biggest differences from lemma \ref{lem4}. \begin{corollary}\label{c3} Let $f,H\in {{C}^{\infty }}\left( M,\mathbb{R} \right)$.
Then $f$ evolves as $f={{f}_{0}}{{e}^{-wt}}$ along the integral curves of $X_{H}^{M}$ if and only if $\left\{H, f\right\}=0$. \end{corollary} Obviously, corollary \ref{c3} is a generalization of corollary \ref{c2} and lemma \ref{lem4}, which classically correspond to the special case $w=0$ of corollary \ref{c3}. \begin{definition} A manifold $P$ endowed with a generalized structural Poisson bracket $\left\{ \cdot ,\cdot \right\}$ satisfying properties $1\sim 4$ on ${{C}^{\infty }}\left( P,\mathbb{R} \right)$ is called a generalized Poisson manifold $\left( P,S,\left\{ , \right\} \right)$. \end{definition} Specifically, the vector field ${X_{f}^{M}}$ defined in definition \ref{d6} is a non-symplectic vector field on the generalized Poisson manifold $\left( P,S,\left\{ , \right\} \right)$ satisfying theorem \ref{th5}; hence the generalized Hamiltonian vector field ${X_{H}^{M}}$ is a non-symplectic vector field on $\left( P,S,\left\{ , \right\} \right)$. \begin{definition} Let $\left\{ f,g \right\}={{X}_{f}}g+{{X}_{\chi }}\left( f,g \right)$ be given on the generalized Poisson manifold $\left( P,S,\left\{ , \right\} \right)$ such that \begin{align} & {{X}_{\chi }}\left( f,\cdot \right)=f{{X}_{\chi }}-{{X}_{\chi }}f \notag\\ & {{X}_{\chi }}\left( \cdot ,g \right)={{X}_{\chi }}g-g{{X}_{\chi }} \notag \end{align}for all $f,g\in C^{\infty}(P)$; these are respectively called the exterior clamp and the interior containing. \end{definition}Accordingly, one obtains the symmetric identity \[\widehat{S}\left( fg \right)={{X}_{\chi }}\left( f,g \right)+2g\widehat{S}f=\widehat{S}\left( gf \right)\] Theorem \ref{le5} indicates that the GSPB $\left\{ \cdot ,\cdot \right\}$ can be decomposed into two parts and \[\widehat{S}\left( FG \right)=\left\{ F,G \right\}-{{\left\{ F,G \right\}}_{GHS}}+2G\widehat{S}F\] Once we go back to flat Euclidean space, the structural operator $\widehat{S}$ disappears.
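The symmetric identity above is straightforward to confirm symbolically; a sketch with arbitrary test functions (an illustration, not part of the original text):

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2', real=True)
x = [x1, x2]
J = sp.Matrix([[0, 1], [-1, 0]])
chi = x1**3 + x1*x2                   # an arbitrary test structure function
A = [sp.diff(chi, xi) for xi in x]

def S(h):
    """Structural operator S = J_ij A_i d_j = X_chi."""
    return sum(J[i, j] * A[i] * sp.diff(h, x[j])
               for i in range(2) for j in range(2))

f, g = x1 * x2, sp.cos(x1) + x2**2
Xchi_fg = f*S(g) - g*S(f)             # X_chi(f, g)

# Symmetric identity: S(fg) = X_chi(f,g) + 2 g S(f) = S(gf)
assert sp.simplify(S(f*g) - (Xchi_fg + 2*g*S(f))) == 0
assert sp.simplify(S(f*g) - S(g*f)) == 0
```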
Namely, if $A=0$ holds, then ${{\left\{ F,G \right\}}}-{{\left\{ F,G \right\}}_{GHS}}=0$. The GSPB can also be written in matrix form: \[\left\{ F,G \right\}={{D}^{T}}FJDG={{\left\{ F,G \right\}}_{GHS}}+F{{A}^{T}}J\nabla G+G{{\nabla }^{T}}FJA\] This indicates that the GSPB still uses the structural matrix $J=\left( {{J}_{ij}} \right)$ to keep the structure invariant. The biggest difference lies in the differential form: the GSPB relies on the covariant derivative ${{D}_{l}}$, while the GPB depends on the ordinary derivative ${\partial }_{l}$; due to the existence of the structure function $\chi$, one has to consider the covariant derivative operator. So, one can evaluate the GSPB between the coordinates as shown in the following equation \begin{align} {{W}_{kl}} & =\left\{ {{x}_{k}},{{x}_{l}} \right\} ={{J}_{ij}}{{D}_{i}}{{x}_{k}}{{D}_{j}}{{x}_{l}}={{J}_{ij}}\left( {{\delta }_{ik}}+{{A}_{i}}{{x}_{k}} \right)\left( {{\delta }_{jl}}+{{A}_{j}}{{x}_{l}} \right) \notag\\ & ={{J}_{ij}}{{\delta }_{ik}}{{\delta }_{jl}}+{{x}_{k}}{{J}_{ij}}{{A}_{i}}{{\delta }_{jl}}+{{x}_{l}}{{J}_{ij}}{{A}_{j}}{{\delta }_{ik}} \notag\\ &={{J}_{kl}}+{{\mathcal{J}}_{kl}} \notag \end{align}where ${{\mathcal{J}}_{kl}}={{x}_{k}}{{b}_{l}}-{{x}_{l}}{{b}_{k}}=-{{\mathcal{J}}_{lk}}$, ${{\delta }_{ik}}$ is the Kronecker delta, ${{b}_{k}}={{J}_{jk}}{{A}_{j}}$, and ${{J}_{kl}}={{J}_{ij}}{{\delta }_{ik}}{{\delta }_{jl}}$. Applying the structural operator $\widehat{S}$ to ${x}_{l}$ yields $\widehat{S}{{x}_{l}}={{J}_{ij}}{{A}_{i}}{{D}_{j}}{{x}_{l}} ={{b}_{l}}$. Obviously, the antisymmetry ${{W}_{kl}}=-{{W}_{lk}}$ holds. Once the effect of the structural operator is removed, the GSPB degenerates into the GPB, in other words $\left\{ {{x}_{k}},{{x}_{j}} \right\}\to {{\left\{ {{x}_{k}},{{x}_{j}} \right\}}_{GHS}}$; and if $k=j$ holds for the GSPB between the coordinates, then it yields $\left\{ {{x}_{k}},{{x}_{k}} \right\}=0$.
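The coordinate bracket $W_{kl}=J_{kl}+x_kb_l-x_lb_k$ can be confirmed directly from the definition of the GSPB; a sketch in two dimensions with an arbitrary test structure function:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2', real=True)
x = [x1, x2]
J = sp.Matrix([[0, 1], [-1, 0]])
chi = sp.sin(x1) + x2                 # an arbitrary test structure function
A = [sp.diff(chi, xi) for xi in x]
# b_k = J_jk A_j
b = [sum(J[j, k] * A[j] for j in range(2)) for k in range(2)]

def gspb(f, g):
    Df = [sp.diff(f, x[i]) + A[i]*f for i in range(2)]
    Dg = [sp.diff(g, x[j]) + A[j]*g for j in range(2)]
    return sp.expand(sum(J[i, j]*Df[i]*Dg[j]
                         for i in range(2) for j in range(2)))

# W_kl = {x_k, x_l} = J_kl + x_k b_l - x_l b_k, antisymmetric in (k, l)
for k in range(2):
    for l in range(2):
        W = gspb(x[k], x[l])
        assert sp.simplify(W - (J[k, l] + x[k]*b[l] - x[l]*b[k])) == 0
        assert sp.simplify(W + gspb(x[l], x[k])) == 0
```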
\begin{definition} Given a non-symplectic vector space $\left( Z_{N},\Omega \right)$ and two functions $f,g:Z\to \mathbb{R}$ with the 2-form $\Omega \in {{C}^{\infty }}\left( {{\bigwedge }^{2}}{{T}^{*}}M \right)$, the generalized structural Poisson bracket $\left\{ f,g \right\}:Z\to \mathbb{R}$ of $f$ and $g$ is given by$$\left\{ f,g \right\}\left( x \right)=\Omega \left( X_{f}^{M}\left( x \right),X_{g}^{M}\left( x \right) \right)={{i}_{X_{f}^{M}}}\Omega \left( X_{g}^{M} \right)$$ where $X_{f}^{M}={{X}_{f}}+f{{X}_{\chi}}$. \begin{proof} The GSPB $\left\{ f,g \right\}\left( x \right)={{J}_{ij}}{{D}_{i}}f{{D}_{j}}g$ was previously defined for all $f,g\in C^{\infty}\left( M,\mathbb{R} \right)$; applying it to the definition gives $$\left\{ f,g \right\}\left( x \right)=\Omega \left( X_{f}^{M}\left( x \right),X_{g}^{M}\left( x \right) \right)=\left\langle \mathcal{D}f,X_{g}^{M}\left( x \right) \right\rangle =-\left\langle\mathcal{D}g,X_{f}^{M}\left( x \right) \right\rangle $$ More specifically, by using lemma \ref{lem3}, we further deduce the consequence as follows \begin{align} \left\{ f,g \right\}\left( x \right) &=\left\langle {{i}_{X_{f}^{M}}}\Omega ,X_{g}^{M}\left( x \right) \right\rangle =\left\langle {{i}_{{{X}_{f}}}}\Omega +f{{i}_{{{X}_{\chi}}}}\Omega ,X_{g}^{M}\left( x \right) \right\rangle \notag\\ & =\Omega \left( {{X}_{f}}\left( x \right),X_{g}^{M}\left( x \right) \right)+f\Omega \left( {{X}_{\chi}},X_{g}^{M}\left( x \right) \right) \notag \end{align} Substituting the vector field $X_{g}^{M}={{X}_{g}}+g{{X}_{\chi}}\in T_{p}Z$ into the GSPB above leads to the further step \begin{align} \left\{ f,g \right\}\left( x \right) &=\Omega \left( {{X}_{f}},{{X}_{g}}+g{{X}_{\chi }} \right)+f\Omega \left( {{X}_{\chi }},{{X}_{g}}+g{{X}_{\chi }} \right)\notag \\ & =\Omega \left( {{X}_{f}},{{X}_{g}} \right)+\Omega \left( {{X}_{\chi }},f{{X}_{g}}-g{{X}_{f}} \right) \notag\\ & ={{\left\{ f,g \right\}}_{GHS}}+f{{\left\{ \chi ,g \right\}}_{GHS}}-g{{\left\{ \chi ,f \right\}}_{GHS}}
\notag \end{align} where $\Omega \left( {{X}_{f}},{{X}_{g}} \right)={{\left\{ f,g \right\}}_{GHS}}$ and $f{{X}_{g}}-g{{X}_{f}}\in {{T}_{p}}Z$. \end{proof} \end{definition} \begin{theorem}\label{t2}\label{th6} Let $(P,\Omega)$ be a symplectic manifold. A vector field $X_{f}^{M}:Z\to Z$ on $P$ is said to be generated by a function $f$ if there is a function $f : P \to \mathbb{R}$ such that \[{{i}_{X_{f}^{M}}}\Omega =\mathcal{D}f=df+fd\chi\]that is, for all $ v\in {{T}_{x}}P$, we have the identity\[\frac{\mathcal{D}f}{dt}=\Omega \left( X_{f}^{M},v \right)=\mathcal{D}f\cdot v\] \begin{proof} For the vector field $X_{f}^{M}\in {{T}_{p}}M$, its interior product with respect to the 2-form $\Omega$ is given by $${{i}_{X_{f}^{M}}}\Omega =\mathcal{D}f ={{i}_{{{X}_{f}}}}\Omega +f{{i}_{{{X}_{\chi}}}}\Omega $$ With the help of definition \ref{d3}, one can deduce the consequences ${{i}_{{{X}_{f}}}}\Omega =df,{{i}_{{{X}_{\chi}}}}\Omega =d\chi$; we obtain ${{i}_{X_{f}^{M}}}\Omega =\mathcal{D}f=df+fd\chi$. According to the operation rule of the interior product, we have ${{i}_{X_{f}^{M}}}\Omega \left( v \right)=\Omega \left( X_{f}^{M},v \right)$; then the covariant evolution of the function $f$ is given by $$\frac{\mathcal{D}f}{dt}={{i}_{X_{f}^{M}}}\Omega \left( v \right)=\mathcal{D}f\cdot v=df\cdot v+fd\chi\cdot v=\frac{df}{dt}+fw$$where $df\cdot v=\frac{df}{dt},~~~d\chi\cdot v=w$. \end{proof} \end{theorem} \begin{definition} Let $\left( Z_{N},\Omega \right)$ be a non-symplectic vector space. A vector field $X_{H}^{M}:Z\to Z$ is called generalized Hamiltonian if $${{i}_{X_{H}^{M}}}\Omega =\mathcal{D}H\left( x \right)$$ for all $x\in Z$, for some $C^{ 1}$ function $H:Z\to \mathbb{R}$, and we call $H$ a generalized Hamiltonian function for the vector field $X_{H}^{M}$.
\end{definition} \section{S Dynamics and TGHS} Mathematically, on the basis of the foregoing foundation, we expect to establish the GCHS. There is no doubt that the GCHS is equivalent to the GSPB: the GSPB naturally deduces the GCHS, and they can be derived from each other. Additionally, the GCHS is completely built on the S dynamics and the nonlinear generalized Hamiltonian system; hence we need to define the concepts of SD and TGHS. \subsection{S Dynamics and TGHS} \begin{definition}[S Dynamics(SD)]Let $\left( Z,\Omega \right)$ be a symplectic vector space. A scalar field $w:Z\to \mathbb{R}$ is given on the manifold $M$, and $A=\nabla\chi$ is a structural vector field; \[w=\widehat{S}H\left( x \right)=-{{X}_{H}}\chi\left( x \right)\] is called the S dynamics along with the structural vector field ${{X}_{\chi }}$. \end{definition} Obviously, the vector field ${{X}_{\chi}}$ is completely generated by the structure function $\chi$; that is, ${{X}_{\chi}}$ is the structure vector field associated with $\chi$. In fact, here we have another expression for $w$, given by $w=b\left( x \right)\nabla H\left( x \right)={{\left\{ \chi,H \right\}}_{GHS}},~~~x\in {{\mathbb{R}}^{m}}$, where $b\left( x \right)=A^{T}\left( x \right)J\left( x \right)$ is a $1\times m$ matrix whose components are ${{b}_{j}}={{J}_{ij}}{{A}_{i}}$. The S dynamics can also be deduced from the GSPB as follows $$w=\left\{ 1,H \right\}=-X_{H}^{M}\chi =-{{X}_{H}}\chi =\widehat{S}H={{\left\{ \chi ,H \right\}}_{GHS}}={{X}_{\chi }}H$$ where ${{\left\{ 1,H \right\}}_{GHS}}=0$ is obvious. As a matter of fact, the S dynamics simply represents rotational mechanical effects. The $w$ is only associated with the S operator; mathematically, the S dynamics equation has no component expression, and it only describes curved spaces, being a characteristic quantity of non-Euclidean space.
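The chain of equalities $w=\{1,H\}=\widehat SH=\{\chi,H\}_{GHS}=-X_H\chi$ can be checked symbolically; a sketch with an arbitrary test Hamiltonian and structure function (an illustration, not from the text):

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2', real=True)
x = [x1, x2]
J = sp.Matrix([[0, 1], [-1, 0]])
chi = x1 * x2                 # test structure function (an arbitrary choice)
H = (x1**2 + x2**2) / 2       # test Hamiltonian (an arbitrary choice)
A = [sp.diff(chi, xi) for xi in x]

def ghs(f, g):
    return sum(J[i, j]*sp.diff(f, x[i])*sp.diff(g, x[j])
               for i in range(2) for j in range(2))

def gspb(f, g):
    Df = [sp.diff(f, x[i]) + A[i]*f for i in range(2)]
    Dg = [sp.diff(g, x[j]) + A[j]*g for j in range(2)]
    return sum(J[i, j]*Df[i]*Dg[j] for i in range(2) for j in range(2))

w = ghs(chi, H)               # S dynamics w = {chi, H}_GHS = X_chi H
# w can also be read off from the GSPB as {1, H} ...
assert sp.simplify(gspb(sp.Integer(1), H) - w) == 0
# ... and equals -X_H chi = -{H, chi}_GHS
assert sp.simplify(w + ghs(H, chi)) == 0
```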
Strictly, the S dynamics is a brand new and independent Hamiltonian system similar to the GHS; it is completely derived from the structural operator in the Hamiltonian system. Let us generalize equation \eqref{eq7} to a non-Euclidean manifold. The TGHS should be substituted for the GHS to improve and perfect the original theoretical system. The nonlinear generalized Hamiltonian system is defined as follows: \begin{theorem}[TGHS] The thorough generalized Hamiltonian system on $\left( P,S,\left\{ \cdot ,\cdot \right\} \right)$ is defined as \begin{equation}\label{eq21} {\dot{x}}=\frac{dx}{dt}=J\left( x \right)DH\left( x \right),~~~x\in {{\mathbb{R}}^{m}} \end{equation} The component expression of the TGHS is ${\dot{x}_{k}}={{J}_{kj}}{{D}_{j}}H$. \end{theorem} Undoubtedly, the TGHS embodies the GHS together with the new term ${{J}_{kj}}{{A}_{j}}H$ derived from the structural function; this implies that the TGHS is a holonomic and valid theory to replace the GHS using the time operator $\frac{d}{dt}$. Furthermore, the S dynamics can be rewritten in the form $w=A^{T}\dot{x}={{A}_{j}}{\dot{x}_{j}}$ with the help of the fact ${{\left\{ \chi ,\chi \right\}}_{GHS}}=0$, where the structural derivative matrix is $A={{\left( {{A}_{1}},\cdots ,{{A}_{m}} \right)}^{T}}$ and $x={{\left( {{x}_{1}},\cdots ,{{x}_{m}} \right)}^{T}}$. Since the S dynamics and the TGHS are expressed in terms of the structural function, once $A$ is removed, the SD disappears and the TGHS degenerates. Accordingly, the equilibrium equation is $\frac{dx}{dt}={\dot{x}}=0$, or expressed as $J\left( x \right)\nabla H\left( x \right)+J\left( x \right)AH\left( x \right)=0$; if $J\left( x \right)$ is nondegenerate, then the equilibrium equation is rewritten as $DH\left( x \right)=0$. Explicitly, the TGHS is not compatible with the GSPB, so it is just a modification of the GHS.
\subsection{Covariant Time Operator} In order to achieve compatibility with the GSPB, one must generalize the dynamics to a general condition; a new covariant time operator is constructed by combining the dynamics of the time operator with the S dynamics. \begin{definition}[Covariant Time Operator(CTO)]Let $w={{X}_{\chi}}H\in {{C}^{\infty }}\left( M,\mathbb{R} \right)$ with the Hamiltonian $H$ be given on the generalized Poisson manifold; then the covariant time operator formally holds $$\frac{\mathcal{D}}{dt}=\frac{d}{dt}+w$$ for all functions on $\left( P,S, \left\{ , \right\} \right)$. \end{definition} The CTO can be shown as $\frac{\mathcal{D}}{dt}=\frac{d}{dt}+\left\{1,H \right\}$. Transparently, the S dynamics is directly linked to the Hamiltonian $H$ via the GSPB: \begin{align} \frac{\mathcal{D}}{dt}f &=\frac{d}{dt}f+wf=\left\{ f,H \right\}=\Omega \left(X_{f}^{M}, X_{H}^{M}\right)\notag \\ & ={{J}_{ij}}{{D}_{i}}f{{D}_{j}}H={{X}_{f}}H-H{{X}_{\chi }}f+f\widehat{S}H \notag \end{align}where, based on corollary \ref{c1}, ${{X}_{\chi }}\chi =0$, which leads to the equality $X_{H}^{M}\chi ={{X}_{H}}\chi $; and obviously we have the evolution operators as follows $$\frac{df}{dt}={{X}_{f}}H-H{{X}_{\chi }}f,~~~-X_{H}^{M}\chi =\widehat{S}H=w$$Hence the time operator is $\frac{d}{dt}=-{{X}_{H}}-H{{X}_{\chi }}$; in other words, $\frac{df}{dt}=-{{X}_{H}}f-H{{X}_{\chi }}f$. The covariant time operator greatly expands the scope of the time operator, so that we can study the evolution of various physical systems in a broader and more general mathematical space. Certainly, the extended part $w$ is an independent and complete dynamical system; it is derived from the interaction of the Hamiltonian function $H$ and the S operator, in other words, the dynamical function $w$ is completely deduced from the S operator acting on the Hamiltonian function. We can also deduce the covariant differential form $\mathcal{D}=d+\delta $, where the second part is $\delta =wdt$.
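The covariant evolution $\frac{\mathcal{D}f}{dt}=\frac{df}{dt}+wf=\{f,H\}$, split into the TGHE part $X_fH-HX_\chi f$ and the S-dynamics part $wf$, can be confirmed symbolically; a sketch with arbitrary test functions (an illustration, not from the text):

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2', real=True)
x = [x1, x2]
J = sp.Matrix([[0, 1], [-1, 0]])
chi, H, f = x1*x2, (x1**2 + x2**2)/2, x1 + x2**3   # arbitrary test functions
A = [sp.diff(chi, xi) for xi in x]

def ghs(u, v):
    return sum(J[i, j]*sp.diff(u, x[i])*sp.diff(v, x[j])
               for i in range(2) for j in range(2))

def gspb(u, v):
    Du = [sp.diff(u, x[i]) + A[i]*u for i in range(2)]
    Dv = [sp.diff(v, x[j]) + A[j]*v for j in range(2)]
    return sum(J[i, j]*Du[i]*Dv[j] for i in range(2) for j in range(2))

w = ghs(chi, H)                          # S dynamics w = X_chi H
dfdt = ghs(f, H) - H * ghs(chi, f)       # TGHE part: X_f H - H X_chi f

# Covariant evolution: Df/dt = df/dt + w f = {f, H}
assert sp.simplify(dfdt + w*f - gspb(f, H)) == 0
```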
Then the CTO can be rewritten as $\frac{\mathcal{D}}{dt}=\frac{d}{dt}+\frac{\delta }{dt}$, and we also have the covariant differential forms $$\mathcal{D}f=df+\delta f ,~~\mathcal{D}\left( fg \right)=gdf+fdg+\delta \left( fg \right)$$ where $f,g\in {{C}^{\infty }}$. Accordingly, the time rate of change of any scalar function $f\in {{C}^{\infty}}\left( M,\mathbb{R} \right)$ on the generalized phase space can be computed from the CTO by the chain rule \[\frac{\mathcal{D}f}{dt}=\frac{df}{dt}+wf=\frac{\partial f}{\partial {{x}_{i}}}{\dot{x}_{i}}+wf\] This can be written compactly as $\frac{\mathcal{D}f}{dt}=\left\{ f,H \right\}$. \begin{theorem} The generalized equations of motion with the Hamiltonian $H$ on $\left( P,S, \left\{ , \right\} \right)$ are given by the covariant evolution \begin{align} \frac{\mathcal{D}}{dt}f &=\frac{d}{dt}f+wf=\left\{ f,H \right\}\notag \\ & ={{J}_{ij}}{{D}_{i}}f{{D}_{j}}H\notag \\ &={{X}_{f}}H-H{{X}_{\chi }}f+f\widehat{S}H \notag \end{align}in which we define the TGHE as $$\frac{d}{dt}f={{J}_{ij}}{{\partial }_{i}}f{{D}_{j}}H={{X}_{f}}H-H{{X}_{\chi }}f$$ which holds for all $f\in {{C}^{\infty }}$, together with the S dynamics $$w=\widehat{S}H=\left\{ 1,H \right\}$$ \end{theorem} \begin{theorem} Let $\left\{ d{{q}^{i}},i=1,\cdots ,n \right\}$ be a basis of ${{T}^{*}}_{q}Q$ on the generalized Poisson manifold $\left( P,S,\left\{ , \right\} \right)$, and let the 1-form $\alpha \in {{T}^{*}}_{q}Q$ be $\alpha= p_{ i} dq^{ i }$. In the induced local coordinates $\left\{ {{q}^{i}},{{p}_{i}};i=1,\cdots ,n \right\}$ on ${{T}^{*}}Q$, the symplectic 2-form on ${{T}^{*}}Q$ is given by $ \Omega =d{{p}_{i}}\wedge d{{q}^{i}}$, and the structure vector is defined by \[{{X}_{\chi }}={{A}_{i}}\frac{\partial }{\partial {{p}_{i}}}-{{b}_{i}}\frac{\partial }{\partial {{q}^{i}}}\] where the structural derivatives are ${{A}_{i}}={{X}_{{{p}_{i}}}}\chi $ and ${{b}_{i}}={{X}_{{{x}_{i}}}}\chi $, respectively.
\end{theorem} Hence we can define the generalized Hamiltonian vector field by means of the generalized structural Poisson bracket as follows. \begin{theorem} Suppose that $\left( Z_{N},\Omega \right)$ is a non-symplectic vector space. Then $X_{H}^{M}: Z\to Z$ given by $$X_{H}^{M}={{X}_{H}}+H{{X}_{\chi}}=\left( {{\widetilde{D}}_{{{p}_{i}}}}H,-{{D}_{i}}H \right)$$ is called the generalized Hamiltonian vector field, where ${{\widetilde{D}}_{{{p}_{i}}}}=\frac{\partial }{\partial {{p}_{i}}}+{{b}_{i}}$, ${{b}_{i}}={{\widehat{b}}_{i}}\chi$. Thus, the generalized Hamilton's equations in canonical coordinates are \begin{equation}\label{eq27} {\dot {q}}^{i}={{\widetilde{D}}_{{{p}_{i}}}}H,~~{\dot {p}}_{i}=-{{D}_{i}}H,~~i=1,\cdots ,n \end{equation} where ${\dot {q}}^{i}=\frac{d}{dt}{{q}^{i}}$. \end{theorem} Obviously, the generalized Hamilton's equations \eqref{eq27} are a reasonable extension of Hamilton's equations \eqref{eq13}. Specifically, consistent with the components $\left( {{\widetilde{D}}_{{{p}_{i}}}}H,-{{D}_{i}}H \right)$, the generalized Hamiltonian vector field is re-expressed as \[X_{H}^{M}={{X}_{H}}+H{{X}_{\chi }}={{\widetilde{D}}_{{{p}_{i}}}}H\frac{\partial }{\partial {{q}^{i}}}-{{D}_{i}}H\frac{\partial }{\partial {{p}_{i}}}\]and therefore the interior product with ${X_{H}^{M}}$ is given by ${{i}_{X_{H}^{M}}}\Omega =\mathcal{D}H=dH+Hd\chi$, as theorem \ref{t2} illustrates.
\section{GCHS} The structural derivative ${{A}_{i}}$ in the covariant derivative operator ${{D}_{i}}$ allows different physical and mathematical fields to be studied: different choices of ${{A}_{i}}$ correspond to different fields on the manifold $M$, and once the structural derivative ${{A}_{i}}$ is fixed, the corresponding generalized covariant Hamilton system follows. To realize compatibility and self-consistency between the GSPB and the GCHS, the GCHS on the generalized Poisson manifold $\left( P,S,\left\{ \cdot ,\cdot \right\} \right)$ with structural derivative vector $A$ is defined as follows: \begin{definition}[GCHS]\label{d2}The generalized covariant Hamilton system on $\left( P,S,\left\{ \cdot ,\cdot \right\} \right)$ is defined as \begin{equation}\label{eq23} \frac{\mathcal{D}x}{dt}=W\left( x \right)DH\left( x \right),~~~x\in {{\mathbb{R}}^{m}} \end{equation}where the $m\times m$ matrix is $ W=J+\mathcal{J}=-W^{T}$; in components, $\frac{\mathcal{D}{{x}_{k}}}{dt}={{W}_{kj}}{{D}_{j}}H$. \end{definition} In local coordinates $\left( {{x}_{1}},\cdots {{x}_{r}} \right)$, a generalized Poisson structure is determined by the component functions ${{W}_{ij}}\left( x \right)$ of $W$. Specifically, ${{W}_{kl}}={{J}_{kl}}+{{\mathcal{J}}_{kl}}$, where ${{J}_{kl}}={{\left\{ {{x}_{k}},{{x}_{l}} \right\}}_{GHS}}$ and ${{\mathcal{J}}_{kl}}={{X}_{\chi }}\left( {{x}_{k}},{{x}_{l}} \right)$. In terms of the bracket we have simply ${{W}_{kj}}={{\left\{ {{x}_{k}},{{x}_{j}} \right\}}}$; in other words, the generalized Poisson structure is specified once we give the bracket relations satisfied by the coordinate functions. Equation \eqref{eq23} can be expanded and written in the form $\frac{\mathcal{D}x}{dt}={\dot{x}}+wx$, where ${\dot{x}}=\frac{dx}{dt}=J\left( x \right)DH\left( x \right)$ describes the TGHS and $w$ depicts the S-dynamical effect. Consequently, the GCHS and the GSPB achieve compatibility and self-consistency in mathematics.
One can study the general topological and geometric properties of non-Euclidean space structures, and reveal the specific operation and details of Hamiltonian systems, through these additional structures. Using the GSPB, the equation of the GCHS reads $\frac{\mathcal{D}{{x}_{k}}}{dt} =\left\{ {{x}_{k}},{{x}_{j}} \right\}{{D}_{j}}H$. Apparently, the GCHS consists of the TGHS and the SD term. Actually, equation \eqref{eq23} is the true Hamiltonian dynamical system, with $w$ as the necessary kinetic parameter; at the same time, it reveals the non-kinetic defects of the GHS and the severe restrictions on its application. Obviously, the equilibrium equation of the GCHS is $\frac{\mathcal{D}x}{dt}=0$, that is, ${\dot{x}}+wx=0$, whose formal solution is $x={{x}_{0}}{{e}^{-wt}}$, where ${x}_{0}$ is the initial position; solutions with $x(t)=0$ are called zero solutions. \begin{theorem}The GCHS on $\left( P,S,\left\{ \cdot ,\cdot \right\} \right)$ in component form is \begin{equation}\label{eq10} \frac{\mathcal{D}{{x}_{k}}}{dt}=\left\{ {{x}_{k}},H\right\}={\dot{x}_{k}}+{{x}_{k}}w \end{equation}for all $x\in P$. \begin{proof} By the form of the GSPB, one can easily obtain \begin{align} \left\{{{x}_{k}}, H\right\} &={{J}_{ij}} {{D}_{i}}{{x}_{k}} {{D}_{j}}H ={{J}_{kj}}{{D}_{j}}H+{{x}_{k}}\widehat{S}H \notag\\ & ={\dot{x}_{k}}+{{x}_{k}}w \notag \end{align}where Kronecker's delta is ${{\delta }_{ik}}={{\partial }_{i}}{{x}_{k}}=\left\{ \begin{matrix} 1,i=k \\ 0,i\ne k \\ \end{matrix} \right.$; equivalently, the GCHS can also be expressed as $\frac{\mathcal{D}{{x}_{k}}}{dt}=-\widehat{S}\left( {{x}_{k}}H \right)+{{\left\{{{x}_{k}},H \right\}}_{GHS}}+2{x}_{k}\widehat{S}H$. \end{proof} \end{theorem} The essential difference between the GHS and the GCHS is whether the system is associated with the structural function; obviously, if the condition $w=0$ holds, the GSPB degenerates into the usual GPB. Apparently, \eqref{eq10} is a covariant expression.
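For constant $w$, the formal solution $x={{x}_{0}}{{e}^{-wt}}$ of the equilibrium equation ${\dot{x}}+wx=0$ can be verified symbolically. A minimal sketch, assuming $w$ does not depend on time:

```python
import sympy as sp

# Check that x(t) = x0 * exp(-w*t) satisfies x' + w*x = 0
# under the assumption that the S dynamics w is constant in time.
t, w, x0 = sp.symbols('t w x0')
x = x0 * sp.exp(-w * t)
residual = sp.diff(x, t) + w * x
assert sp.simplify(residual) == 0
```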
\begin{corollary} The structural operator $\widehat{S}$ induces the following relations \[{{b}_{k}}=\widehat{S}{{x}_{k}},{{A}_{k}}=\widehat{S}{{p}_{k}},w=\widehat{S}H\] in terms of the position ${x}_{k}$, the momentum ${p}_{k}$, and the Hamiltonian $H$, respectively. \end{corollary} \subsection{Covariant Momentum} \begin{corollary}\label{amm} The covariant momentum on $\left( P,S,\left\{ \cdot ,\cdot \right\} \right)$ is $p=m\frac{\mathcal{D}x}{dt}$, where $m$ is the mass of the object. \end{corollary} The component form in corollary \ref{amm} can be written as ${{p}_{k}}=m\frac{\mathcal{D}{{x}_{k}}}{dt}$; its time rate of change yields the force ${{F}_{k}} =\frac{\mathcal{D}{{p}_{k}}}{dt}$. \begin{definition}[XP]\label{d7} Let ${{\xi }_{ik}}={{\partial }_{i}}{{p}_{k}}=\left\{ \begin{matrix} {{\xi }_{kk}},k=i \\ 0,k\ne i \\ \end{matrix} \right.$; then \[{{J}_{ji}}{{\xi }_{jk}}={{\rho }_{ik}}=\left\{ \begin{matrix} -1,k=i \\ 0,k\ne i \\ \end{matrix} \right.=-{{\delta }_{ik}}\] \end{definition}where ${{\delta }_{ik}}={{J}_{ij}}{{\xi }_{jk}}$. Therefore, the identity ${{\rho }_{jk}}+{{\delta }_{jk}}=0$ holds. \begin{theorem} The covariant evolution of the momentum vector $p={{p}_{k}}{{e}_{k}}$ is \begin{equation} \frac{\mathcal{D}}{dt}{{p}_{k}}=\left\{{{p}_{k}}, H \right\}=-{{D}_{k}}H+{{p}_{k}}w \end{equation} where the time rate of change is then ${\dot{p}_{k}}=-{{D}_{k}}H$.
\begin{proof} By the GSPB, one can obtain \begin{align} \frac{\mathcal{D}}{dt}{{p}_{k}} & =\left\{ {{p}_{k}},H \right\}=\Omega \left( X_{{{p}_{k}}}^{M}, X_{H}^{M}\right)={{J}_{ij}}{{D}_{i}}{{p}_{k}} {{D}_{j}}H \notag \end{align} Using definition \ref{d7}, furthermore, \begin{align} & {{J}_{ij}}{{D}_{i}}{{p}_{k}}{{D}_{j}}H \notag\\ & ={{J}_{ij}}{{\partial }_{i}}{{p}_{k}}{{D}_{j}}H+{{p}_{k}}\widehat{S}H \notag\\ & ={{J}_{ij}}{{\xi }_{ik}}{{D}_{j}}H+{{p}_{k}}\widehat{S}H \notag\\ & ={{\rho }_{jk}}{{D}_{j}}H+{{p}_{k}}w \notag\\ & =-{{\delta }_{jk}}{{D}_{j}}H+{{p}_{k}}w \notag\\ & =-{{D}_{k}}H+{{p}_{k}}w \notag \end{align}Hence it yields $\frac{\mathcal{D}}{dt}{{p}_{k}} =-{{D}_{k}}H+{{p}_{k}}w$, where ${{J}_{ij}}{{D}_{i}}{{p}_{k}}={{\rho }_{jk}}+{{p}_{k}}{{b}_{j}}=-{{\delta }_{jk}}+{{p}_{k}}{{b}_{j}}$. \end{proof} \end{theorem} \subsection{The Covariant Canonical Equations} As a consequence of the generalized structural Poisson bracket, one obtains $\frac{\mathcal{D}F}{dt}=\left\{ F ,H \right\}=\frac{dF}{dt}+wF$, where the first (TGHS) part by definition takes the form \[\frac{dF}{dt}={{\partial }_{i}}F{{J}_{ij}}{{D}_{j}}H={{\nabla }^{T}}FJDH\] Therefore, if $\frac{\mathcal{D}F}{dt}=0$ holds, the equilibrium equation is $\left\{ F,H \right\}=\frac{dF}{dt}+wF=0$, whose formal solution is $F={{F}_{0}}{{e}^{-wt}}$, where ${{F}_{0}}$ is the initial value of the function $F$. \begin{theorem}\label{lemm} The covariant canonical equations on $\left( P,S,\left\{ , \right\} \right)$ are \[\frac{\mathcal{D}{{x}_{k}}}{dt}=\left\{{{x}_{k}} ,H \right\},~~\frac{\mathcal{D}{{p}_{k}}}{dt}=\left\{{{p}_{k}} ,H\right\}\] where $\frac{\mathcal{D}}{dt}$ is the CTO. \end{theorem} According to definition \ref{d2}, one can define the acceleration flow based on the GCHS. Essentially, the acceleration flow is the second order of the GCHS, and it is a second-order differential equation.
\subsection{Acceleration Flow} \begin{definition}[Acceleration Flow]\label{af} The acceleration flow on $\left( P,S,\left\{ , \right\} \right)$ is defined as \begin{equation}\label{eq22} a=\frac{{{\mathcal{D}}^{2}}x}{d{{t}^{2}}}=\ddot{x} +2w{\dot{x}}+x\beta,~~~x\in {{\mathbb{R}}^{m}} \end{equation} where $\ddot{x} =\frac{{{d}^{2}}x}{d{{t}^{2}}},\beta ={{w}^{2}}+\frac{dw}{dt}$, ${\dot{x}}=J\left( x \right)DH\left( x \right)$; its component expression is ${{a}_{i}}=\ddot{x}_{i} +2w\dot{x}_{i}+{{x}_{i}}\beta $, and Newton's second law is expressed as $F=m\frac{{{\mathcal{D}}^{2}}x}{d{{t}^{2}}}$, where $m$ is the mass of the object. \end{definition} Obviously, definition \ref{af} also reveals the inherent problems and critical limitations of the GHS. Conversely, the GCHS, as a universal and complete theoretical system, can remedy the defects of the GHS. More importantly, the acceleration flow is derived directly and completely from the GCHS; together with the GCHS it achieves self-consistency between the Hamiltonian structure and classical mechanics, and the theories are compatible with each other. This vital point goes far beyond the reach of the GHS. The expression in corollary \ref{amm} is equivalent to Newton's second law ${{F}_{k}}=m{{a}_{k}}$. As a result, if $\frac{dw}{dt}=0$ holds, then $w={{w}_{0}}$ and the acceleration flow can be written as \begin{equation}\label{eq25} a=\ddot{x}+2{{w}_{0}}\dot{x}+x{{w}_{0}}^{2} \end{equation} Equation \eqref{eq22} is a second-order linear ordinary differential equation. The equilibrium equation of \eqref{eq22} is $a=0$; specifically, it reads $\ddot{x}+2w{\dot{x}}+x\beta =0$, whose characteristic equation is ${{\lambda }^{2}}+2w\lambda +\beta =0 $, with discriminant $\Delta =4{{w}^{2}}-4\beta =-4\frac{dw}{dt}$ and roots ${{\lambda }_{1,2}}=-w\pm \sqrt{-\frac{dw}{dt}}$. It is transparent that the discriminant $-\frac{\Delta }{4}=\frac{dw}{dt}$ is decided only by whether the S dynamics $w$ changes as time flows.
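When $\frac{dw}{dt}=0$, the characteristic equation has the double root ${{\lambda }_{1,2}}=-{{w}_{0}}$, so the general solution of the equilibrium equation $\ddot{x}+2{{w}_{0}}\dot{x}+x{{w}_{0}}^{2}=0$ is $\left( {{c}_{1}}+{{c}_{2}}t \right){{e}^{-{{w}_{0}}t}}$. A minimal symbolic check of this claim:

```python
import sympy as sp

# Verify that (c1 + c2*t)*exp(-w0*t) solves x'' + 2*w0*x' + w0**2 * x = 0,
# the equilibrium equation of the acceleration flow for constant w = w0.
t, w0, c1, c2 = sp.symbols('t w0 c1 c2')
x = (c1 + c2 * t) * sp.exp(-w0 * t)
residual = sp.diff(x, t, 2) + 2 * w0 * sp.diff(x, t) + w0**2 * x
assert sp.simplify(residual) == 0
```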
\section{GCHS on the Riemann Manifold} In Riemannian geometry, the Levi-Civita connection is a specific connection on the tangent bundle of a manifold. More specifically, it is the torsion-free metric connection, i.e., the torsion-free connection on the tangent bundle preserving a given Riemannian metric. Let $M$ be a differentiable manifold of dimension $m$. A Riemannian metric on $M$ is a family of inner products $ g\colon T_{p}M\times T_{p}M\longrightarrow \mathbb{R},~~p\in M$ such that, for all differentiable vector fields $X,Y$ on $M$, $ p\mapsto g(X(p),Y(p))$ defines a smooth function $M\to \mathbb{R}$. The metric tensor can be written in terms of the dual basis $(dx^{1}, \cdots, dx^{n})$ of the cotangent bundle as ${ g=g_{ij}\,\mathrm {d} x^{i}\otimes \mathrm {d} x^{j}}$. Endowed with this metric, the differentiable manifold $(M, g)$ is a Riemannian manifold. In a local coordinate system $\left( U,{{x}_{i}} \right)$, the connection $\nabla$ gives the Christoffel symbols, so the structural derivative ${{A}_{i}}$ is now expressed as a special case of the Christoffel symbols. Accordingly, the structure function $\chi$ on $(M, g)$ takes the form $\chi=\log \sqrt{g}$. \begin{theorem}\label{th1} The GCHS on $(M, g)$ can be expressed as \begin{align} \frac{\mathcal{D}{{x}_{k}}}{dt} & ={{J}_{kj}}\frac{\partial H\left( x \right)}{\partial {{x}_{j}}}+{{J}_{kj}}\Gamma _{ji}^{i}H\left( x \right)+{{x}_{k}}{{J}_{ij}}\Gamma _{il}^{l}\frac{\partial H\left( x \right)}{\partial {{x}_{j}}}\notag \end{align} \begin{proof} Plugging the Levi-Civita connection into the GCHS, \[\frac{\mathcal{D}{{x}_{k}}}{dt}={{J}_{kj}}{{\partial }_{j}}H+{{J}_{kj}}{{A}_{j}}H+{{x}_{k}}{{J}_{ij}}{{A}_{i}}{{D}_{j}}H\left( x \right)\] which proves the theorem. \end{proof} \end{theorem} Clearly, theorem \ref{th1} shows that the GCHS is the truly general dynamical system: it contains new terms linked only to the structural function, of which the GHS is just an ordinary part.
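The identification ${{A}_{i}}={{\partial }_{i}}\log \sqrt{g}=\Gamma _{li}^{l}$ that underlies theorem \ref{th1} is the standard contracted-Christoffel identity. A minimal symbolic check for the polar-coordinate metric $g=\operatorname{diag}(1,{{r}^{2}})$, an illustrative choice not taken from the text:

```python
import sympy as sp

# For g = diag(1, r^2) (polar coordinates), check that the structural
# derivative A_r = d/dr log(sqrt(det g)) equals the contracted Christoffel
# symbol Gamma^l_{l r} = (1/2) g^{lm} d/dr g_{lm}; both equal 1/r.
r = sp.symbols('r', positive=True)
g = sp.diag(1, r**2)
A_r = sp.diff(sp.log(sp.sqrt(g.det())), r)   # structural derivative A_r
g_inv = g.inv()
Gamma_r = sp.Rational(1, 2) * sum(
    g_inv[l, m] * sp.diff(g[l, m], r)
    for l in range(2) for m in range(2)
)
assert sp.simplify(A_r - Gamma_r) == 0
```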
The covariant momentum is thus \begin{align} {{p}_{k}}=m{{J}_{kj}}\frac{\partial H\left( x \right)}{\partial {{x}_{j}}}+m{{J}_{kj}}\Gamma _{jl}^{l}H+m{{x}_{k}}{{J}_{ij}}\Gamma _{li}^{l}\frac{\partial H\left( x \right)}{\partial {{x}_{j}}} \notag \end{align} \begin{corollary}\label{lem} The S dynamics on $(M, g)$ is given by the equation\[w=\widehat{S}H\left( x \right)={{J}_{ij}}\Gamma _{li}^{l}\frac{\partial H\left( x \right)}{\partial {{x}_{j}}}\] \end{corollary} Incidentally, corollary \ref{lem} makes clear that the S dynamics is related only to the structure function $\chi$, and the TGHS is expressed in the following corollary. \begin{corollary}\label{lem1} The TGHS on $(M, g)$ has the representation \[\dot{x}_{k} =\frac{d{{x}_{k}}}{dt}={{J}_{kj}}{{\partial }_{j}}H+{{J}_{kj}}\Gamma _{lj}^{l}H\] \end{corollary} Clearly, corollary \ref{lem1} contains the GHS within the time rate of change as one whole; only in this form can stability problems, development trends, and the like be described. Therefore, only corollary \ref{lem1} can describe the Riemann manifold with the Levi-Civita connection, capturing the various properties of Hamiltonian dynamics on the Riemann manifold. As a consequence, the equilibrium equation of the GCHS on $(M, g)$ is a set of nonlinear partial differential equations\[{{J}_{kj}}\frac{\partial H\left( x \right)}{\partial {{x}_{j}}}+{{x}_{k}}{{J}_{ij}}\Gamma _{li}^{l}\frac{\partial H\left( x \right)}{\partial {{x}_{j}}}+{{J}_{kj}}\Gamma _{jl}^{l}H\left( x \right)=0\] It can be denoted as ${{B}_{k}}+{{V}_{k}}H\left( x \right)=0$, where ${{V}_{k}}={{J}_{kj}}\frac{\partial \log \sqrt{g}}{\partial {{x}_{j}}}$ and ${{B}_{k}}={{J}_{kj}}\frac{\partial H\left( x \right)}{\partial {{x}_{j}}}+{{x}_{k}}{{J}_{ij}}\Gamma _{li}^{l}\frac{\partial H\left( x \right)}{\partial {{x}_{j}}}$. Apparently, the term ${{V}_{k}}$ is connected mainly to the structural derivative ${A}_{j}$.
The second order of the GCHS is given by \begin{theorem}\label{1a} The acceleration flow on $(M, g)$ of the GCHS is \[{{a}_{k}}=\frac{{{d}^{2}}{{x}_{k}}}{d{{t}^{2}}}+2w{{J}_{kj}}{{\partial }_{j}}H+2w{{J}_{kj}}\Gamma _{jl}^{l}H+{{x}_{k}}{{w}^{2}}+{{x}_{k}}\frac{dw}{dt} \] where $\ddot{x}_{k}=\frac{d}{dt}\dot{x}_{k}$ is the second-order time derivative. \end{theorem}Classically, the GHS has only the single term $\frac{d}{dt}\left( {{J}_{kj}}{{\partial }_{j}}H \right)$ to depict the acceleration flow; it cannot describe the Riemann manifold with the Levi-Civita connection as a mechanical system. In theorem \ref{1a}, the acceleration flow $\frac{d}{dt}\left( {{J}_{kj}}{{\partial }_{j}}H \right)$ of the GHS is hidden in the second-order time derivative. The mechanical equilibrium equation is ${{a}_{k}}=0$; specifically, if $\frac{dw}{dt}=0$ holds, the geodesic equation on the Riemann manifold takes the form \[\frac{{{d}^{2}}{{x}_{k}}}{d{{t}^{2}}}+2w_{0}{{J}_{kj}}{{\partial }_{j}}H+2w_{0}{{J}_{kj}}\Gamma _{jl}^{l}H+{{x}_{k}}{{w_{0}}^{2}}=0\] \section*{Acknowledgements} The first author would like to express his gratitude to all those who helped him during the writing of this thesis. Firstly, the deepest love to his Dad, Chao Wang, who always supports him in his studies; secondly, he would like to express his heartfelt gratitude to Prof. Xiaohua Zhao, Prof. Jibin Li, Hongxia Wang, Yang Liu, and Huifang Du for all their kindness and help. His sincere appreciation also goes to friends Daoyi Peng, Ran Li, David Mitchell, Michael Kachingwe, Rojas Baltazart Minja, and Berhanu Yohannes Melsew for their kindness and help.
\section{INTRODUCTION\label{intro}} The diffuse Galactic synchrotron radiation provides a continuous background of radio emission which is intrinsically highly (up to $\approx$70\%) linearly polarized. This radiation is Faraday-rotated from the point of emission as it propagates through warm ionized gas interwoven with magnetic fields in the disk of the Galaxy; i.e., the angle $\theta$ of the polarized component of the emission is rotated at wavelength $\lambda$\,[m] by \begin{equation} \Delta\theta = RM\,\lambda^2\ [\rm{rad}], \end{equation} where $RM$ is the rotation measure [$\rm{rad}\,\rm{m}^{-2}$] and depends on the line-of-sight component of the magnetic field, $B_{\|}$\,$[\mu\rm{G}]$, the thermal electron density, $n_e$\,$[\rm{cm}^{-3}]$, and the path length, $dl$\,$[\rm{pc}]$, as \begin{equation} RM = 0.81\int{B_{\|}\,n_e\,dl\ [\rm{rad}\,\rm{m}^{-2}]}. \end{equation} High-resolution radio polarization images at frequencies $\lesssim$3~GHz reveal the turbulent imprint of Faraday rotation on the diffuse polarized emission (e.g., \citealt{Wieringa+1993}; \citealt{Gray+1999}; \citealt{Gaensler+2001}; \citealt{Uyaniker+2003}; \citealt*{HaverkornKd2003a,HaverkornKd2003c}; \citealt{Haverkorn+2006b}; \citealt{Schnitzeler+2007}). The turbulent nature of the imprint is the product of the random component of the Galactic magnetic field and irregular electron-density distributions in the general interstellar medium (ISM). Detailed studies and modeling of the diffuse Galactic emission (e.g., \citealt{Spoelstra1984}; \citealt*{HaverkornKd2004b}) as well as statistical analyses of the $RM$s of polarized extragalactic sources \citep{Haverkorn+2006a} indicate that the scale size, or ``cell'' size, for variations in the magnetized ISM range from $\sim$15~pc to 100~pc. Depth depolarization then results from the averaging of nonparallel polarization vectors from emission at different cells along the line-of-sight. 
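As a back-of-the-envelope illustration of equations (1) and (2), the rotation measure for a uniform slab reduces to a simple product; the following sketch uses assumed, purely illustrative values for $B_{\|}$, $n_e$, and the path length (they are not measurements from this paper):

```python
# Uniform-slab estimate of the rotation measure (Eq. 2) and the resulting
# rotation of the polarization angle at 21 cm (Eq. 1).
# All input values below are assumed for illustration only.
B_par = 5.0    # line-of-sight magnetic field B_parallel [microgauss]
n_e = 0.05     # thermal electron density [cm^-3]
L = 100.0      # path length [pc]

RM = 0.81 * B_par * n_e * L       # [rad m^-2]; the integral reduces to a product
lam = 0.21                        # wavelength at 1420 MHz [m]
delta_theta = RM * lam**2         # polarization-angle rotation [rad]
```

For these values the slab gives $RM \approx 20$~$\rm{rad}\,\rm{m}^{-2}$ and rotates the polarization angle by roughly 0.9~rad at 21~cm, illustrating why Faraday structure is so prominent at frequencies $\lesssim$3~GHz.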
Smaller-scale variations are also apparent in polarization images. Depolarization filaments, or ``canals,'' with a width corresponding to one beam of the observing instrument, indicate the presence of very sharp gradients in $RM$ (see, e.g., \citealt{Gaensler+2001}; \citealt{Uyaniker+2003}; \citealt*{HaverkornKd2004a}). If the gradient is so steep across the beam as to cause differential rotation of the polarization angle of $\sim$90\arcdeg, then complete depolarization occurs. Beam depolarization in images produced by aperture synthesis telescopes (typically with arcminute resolution) suggests a scale length for $RM$ structures in the magnetized ISM of less than 1~pc. Interpreting the structure seen in radio polarization images of the Galactic plane is largely left to modeling \citep[e.g.,][]{HaverkornKd2004b}, as there is generally little correlation between the polarization structures seen in these images and the emission structures seen in total intensity images. Nevertheless, objects of known distance can be used to estimate the line-of-sight distribution of the magnetized ISM, revealing, for example, whether observed polarization structures are generated behind the known object, in the region between the object and the Sun, or perhaps in the immediate vicinity of the object \citep[see][]{Gray+1999,UyanikerL2002b}. Moreover, if small-scale depolarization structures can be isolated to a known object, then it may be possible to probe directly the properties of the magnetized ISM within the structures. \ion{H}{2} regions and supernova remnants (SNRs) are the two most prevalent (discrete) constituents of the ISM as seen at radio wavelengths, and each class of object is detected in radio polarization images up to a limiting distance determined by the ``polarization horizon'' \citep*[see][]{KothesL2004}. 
\ion{H}{2} regions are detected by way of their depolarizing effects on background diffuse emission, while SNRs are simultaneously a source of polarized synchrotron emission and a Faraday ``screen'' which depolarizes background emission. However, neither class of object is a particularly good probe of the magnetized ISM. \ion{H}{2} regions have very high electron densities and turbulent motions which produce tangled magnetic fields, a combination which leads to virtually complete beam depolarization across the region. For SNRs, complex models are needed to describe the physical parameters in the shock front at the interface between the rapidly expanding SNR and the ISM. Another class of object which may potentially serve as a better probe of the magnetized ISM is planetary nebulae (PNe). Young PNe are relatively strong thermal radio emitters, resulting from high electron densities \citep[see, e.g.,][]{Bains+2003}, but they are also very compact ($\ll$1~pc) and not yet interacting with the ISM. On the other hand, the shells of many old PNe are observed at optical wavelengths to interact with the ISM \citep[see][]{TweedyK1996}. Moreover, the ISM magnetic field appears to play a significant role in shaping the shells, as evidenced primarily by the visible ``striping'' or filamentary structure of shell gases \citep*[see][]{TweedyMN1995,SokerZ1997}. Theoretical treatments show that interactions between PNe and the ISM are an important consideration in the evolution of PN systems moving at even modest speeds ($\gtrsim$5~$\rm{km}\,\rm{s}^{-1}$) with respect to the ISM \citep*[see, e.g.,][]{SokerD1997,WareingZO2007}. If the conditions are right in the interaction region between the PN and the ISM, we may expect to see the Faraday signature of old PNe in radio polarization images. Such a signature has been identified for the nearby PN Sharpless~2-216 (\mbox{Sh\,2-216}) and was first described by one of us in \citet{Uyaniker2004}. 
In this paper, we describe in detail the Faraday-rotation structure in the shell of \mbox{Sh\,2-216}. We present radio polarization images at 1420~MHz of a $2.5\arcdeg\times2.5\arcdeg$ region of the Galactic plane around the position of \mbox{Sh\,2-216}. The images are taken from the Canadian Galactic Plane Survey (CGPS). In \S~\ref{s216}, we summarize the pertinent properties of \mbox{Sh\,2-216}. In \S~\ref{obs}, we describe briefly the preparation of the images. In \S~\ref{results}, we describe the structures observed on the visible disk of \mbox{Sh\,2-216}\@ in both polarized intensity and polarization angle, and estimate the $RM$ through the shell of \mbox{Sh\,2-216}. In \S~\ref{discuss}, we derive the magnetic field in the shell of \mbox{Sh\,2-216}, and interpret the structures and $RM$s in the context of an interaction between the PN and the ISM. We also discuss the possibility that the observed structures are produced by the stellar field of the host white dwarf or its progenitor. Finally, in \S~\ref{concl}, we summarize our conclusions. \section{THE PLANETARY NEBULA SH\,2-216 \label{s216}} \mbox{Sh\,2-216}\@ is the closest known PN. At a distance of 129~pc \citep{Harris+2007}, its 1.7$\arcdeg$ angular diameter translates to a physical diameter of 3.8~pc, making it also one of the largest and oldest PNe. The most conspicuous feature of \mbox{Sh\,2-216}\@ at optical wavelengths is its bright eastern rim, denoting an interaction between the expanding and moving PN and the ISM. 
The location of the interaction region\footnote{We refer to the bright eastern rim as ``the interaction region'' throughout the paper, though other parts of the shell of \mbox{Sh\,2-216}\@ may also be interacting with the ISM.} appears to be consistent with the observed displacement of the host white dwarf from the center of the PN; i.e., the enhanced emission in this region is a consequence of the additive velocity of the nebular expansion in all directions and the underlying eastward motion of the PN system relative to the ISM. The expansion velocity is very low \citep[$<4~\rm{km}\,\rm{s}^{-1}$;][]{Reynolds1985}, indicating that the ISM pressure (with dynamic, magnetic and cosmic-ray components) is nearly equal to the PN ram pressure. The velocity (on the plane of the sky) of the system relative to the ISM is estimated also to be $\sim$4~$\rm{km}\,\rm{s}^{-1}$ \citep{TweedyMN1995}. The thin filamentary structures observed in $\rm{H}\alpha$ in the interaction region, together with the more subtle, and wider, filamentary structure observed in \ion{N}{2} across the face of the PN, qualitatively suggest that the ISM magnetic field is shaping the morphology of \mbox{Sh\,2-216}\@ \citep{TweedyMN1995}. Using estimates for the electron density within the PN ($n_e \sim 5~\rm{cm}^{-3}$) and the mean ISM magnetic field ($B \sim 5~\mu\rm{G}$), \citet{TweedyMN1995} show that the ISM magnetic pressure is about twice that of the dynamic pressure, and thus likely a dominant factor in the shaping. The strength of the PN magnetic field is expected to be negligible at large radii ($r \sim 1$~pc) from the host white dwarf, assuming the field decreases as $B \sim r^{-2}$ \citep*[see, e.g.,][]{VlemmingsDL2002}, but may be amplified significantly within filaments due to compression \citep[see][]{Soker2002,HugginsM2005}. We discuss the strength and orientation of the magnetic field in the outermost regions of \mbox{Sh\,2-216}\@ in \S~\ref{discuss}. 
\section{OBSERVATIONS AND IMAGE PREPARATION \label{obs}} The radio polarization data presented in this paper were obtained at 1420~MHz ($\lambda = 21$\,cm) as part of the CGPS \citep{Taylor+2003} using the synthesis telescope (ST) at the Dominion Radio Astrophysical Observatory (DRAO). The ST is described in detail by \citet{Landecker+2000}. Images are produced in each CGPS field for the two orthogonal linear polarization states, Stokes-Q ($Q$) and Stokes-U ($U$), as well as Stokes-I (total intensity), from data in each of four 7.5~MHz continuum bands centered on 1406.65, 1414.15, 1426.65 and 1434.15~MHz, respectively\footnote{Note that the frequency corresponding to the midpoint of the four continuum bands is 1420.4~MHz, the neutral hydrogen spin-flip frequency. The 5.0~MHz band about this frequency is allocated to the 256-channel spectrometer \citep[see][]{Taylor+2003}}. Images in Stokes-V, nominally representing circular polarization, are presently dominated by instrumental errors, and are of significantly lower value. The DRAO ST is sensitive at 1420~MHz to emission from structures with angular sizes of $\sim$1\arcdeg\@ (corresponding to the shortest, 12.9~m, baseline of the ST) down to the resolution limit of $\sim$1\arcmin\@ (corresponding to the longest, 617.1~m, baseline). In total intensity, data from the Effelsberg 21-cm Radio Continuum Survey \citep{ReichRF1997} are added to the band-averaged ST data to provide information on the largest spatial scales \citep[see][]{Taylor+2003}. In $Q$ and $U$, data from two single-antenna surveys of the northern sky at $\sim$1.4~GHz, namely the DRAO-26m survey \citep{Wolleben+2006} and the Effelsberg Medium Latitude Survey, are added to the band-averaged ST data \citep[see][]{Landecker+2008}. All CGPS images presented in this paper were produced using band-averaged data. CGPS data calibration and processing procedures are described in detail in \citet{Taylor+2003}. 
Here we summarize for the reader the general practice, emphasizing procedures related specifically to polarization. The complex antenna gains for the pointing centers in each CGPS field are calibrated by observing a compact calibrator source, either 3C~147 or 3C~295, at the start and end of each observing session. The polarization angle is calibrated using the polarization calibrator source 3C~286. Amplitude and phase variations encountered during individual observing sessions (on time scales down to $\sim$2~hr) are determined during the processing of the total intensity data, and are applied also to the $Q$ and $U$ data. Additional processing, needed to remove the effects of strong sources both inside and outside the primary beam of the ST antennas, is accomplished using routines developed especially for the DRAO ST \citep[see][]{Willis1999}. The instrumental polarization, which is corrected on-axis in the sequence above, varies across the primary beam of the ST antennas due to cross-polarization of the (nominally orthogonal) receiver feeds and the effects of the feed support struts \citep[see][]{Ng+2005}. The result is ``leakage'' of unpolarized radiation, seen in total intensity, into $Q$ and $U$. We have employed two different methods at DRAO to correct for the wide-field instrumental polarization. Both methods were used to calibrate the various CGPS fields appearing to some degree in the $2.5\arcdeg\times2.5\arcdeg$ region presented in this paper. In the first method, we derived the ``average'' leakage pattern in $Q$ and $U$ across the primary beam of the ST antennas, and subtracted from each of the $Q$ and $U$ images the ``leakage image'' for total intensity into $Q$ and $U$, respectively \citep[see][]{Taylor+2003}. 
In the second (and newer) method, we derived the leakage patterns for each of the ST antennas separately, and subtracted the complex leakage pattern created by each pair of antennas directly from the $Q$ and $U$ visibility data \citep[see][]{Reid+2008}. The residual instrumental polarization error after the on-axis and wide-field calibration is similar for each method in each processed field, increasing from $\sim$0.3\% root-mean-square (rms) at the field pointing center to $\sim$1\% at the field edge ($\rho = 75\arcmin$). The rms error is reduced further in the mosaicing process. The newer wide-field correction significantly reduces artifacts in $Q$ and $U$ around bright total-intensity sources (i.e., with flux densities $\gtrsim$100~mJy). No artifacts are seen above the estimated $\sim$0.34~$\rm{mJy}\,\rm{beam}^{-1}$ ($\sim$0.086~K) noise level in the $2.5\arcdeg\times2.5\arcdeg$ region of the CGPS presented in this paper. \section{RADIO POLARIZATION IMAGES OF PN \mbox{Sh\,2-216} \label{results}} In Figure~\ref{radandoptimages} we show images of the $2.5\arcdeg\times2.5\arcdeg$ region around PN \mbox{Sh\,2-216}\@ in both optical intensity at R-band ($\lambda = 6570$\,\AA) and total radio intensity at 1420~MHz ($\lambda = 21$\,cm). The optical image is taken from the Digitized Sky Survey (DSS) and the radio image from the CGPS. The images are presented in Galactic coordinates and centered on the position of the host white dwarf LS\,V\,$46\arcdeg21$ \citep[$l = 158.49\arcdeg$, $b = +0.47\arcdeg$; see][]{Kerber+2003}. Note that the center of the visible disk of \mbox{Sh\,2-216}\@ is offset $\approx$24\arcmin\@ to the Galactic south-west of the white dwarf \citep[see][]{TweedyMN1995}. For the optical image, we adjusted the range of intensities to highlight extended emission. For the total radio intensity image, we removed point sources leaving only extended emission.
There is a clear enhancement in both images across much of the face of \mbox{Sh\,2-216}, relative to the surroundings, but the enhancement is most intense along the (Galactic) north-eastern rim; i.e., in the interaction region between \mbox{Sh\,2-216}\@ and the ISM. Treating the enhancement in the radio image as thermal emission from the shell of \mbox{Sh\,2-216}, we can estimate the thermal electron density in the interaction region of the PN (see \S~\ref{discuss}). In Figure~\ref{piandpaimages} we show the polarized intensity ($P = \sqrt{Q^2+U^2-(1.2\sigma)^2}$, where the last term gives explicitly the noise bias correction) and polarization angle ($\theta_P = \frac{1}{2}\arctan\left(U/Q\right)$) images at 1420~MHz for the $2.5\arcdeg\times2.5\arcdeg$ region in the CGPS around PN \mbox{Sh\,2-216}. The polarization images contain several interesting features on a variety of angular scales. The most notable feature is a low-polarized-intensity arc $\sim$0.15\arcdeg\@ wide and $\sim$0.7\arcdeg\@ in length, coinciding with the north-east portion of the visible disk of \mbox{Sh\,2-216}. The reduced intensity and distinct shape of the arc indicate that its appearance is due to the effects of Faraday rotation: specifically (1) localized beam depolarization within the arc of the background diffuse synchrotron emission, and/or (2) cancellation of the background emission, Faraday-rotated within the arc, by foreground synchrotron emission. For background/foreground cancellation to play a significant role, the polarized foreground emission must be a reasonable percentage of the total polarized emission in the direction of \mbox{Sh\,2-216}. Galactic models for synchrotron emission predict for the 129~pc foreground toward \mbox{Sh\,2-216}\@ only $\sim$0.06~K at 1420~MHz \citep*[e.g.,][]{BeuermannKB1985}. 
Even if this emission is highly (i.e., $\sim$70\%) polarized, we expect just $\sim$0.04~K of polarized emission in the foreground; i.e., $\lesssim$10\% of the $0.47 \pm 0.06$~K total polarized emission seen outside the visible disk of \mbox{Sh\,2-216}\@ (and at $b > +0.5\arcdeg$). We conclude, therefore, that beam depolarization plays a larger role in reducing polarized emission over the arc than background/foreground cancellation. Structural variations observed within the arc in both polarized intensity and polarization angle on angular scales down to the resolution limit ($\sim$1\arcmin) further suggest that beam depolarization is responsible for the appearance of this feature. Since the length and location of the arc are very similar to the optically bright rim denoting the PN-ISM interaction region, it would seem there is a physical connection between the conditions and processes in this region which give rise to enhanced optical emission and those which lead to sharp gradients in $RM$. We discuss the $RM$ structure within the arc in \S~\ref{arc}. Aside from the prominent north-east arc, do we see other signatures of \mbox{Sh\,2-216}\@ in the polarization images? The circle representing the visible disk of \mbox{Sh\,2-216}\@ in Figure~\ref{piandpaimages} draws our attention to two suggestive details in the northern half ($b > +0.5\arcdeg$) of the images: (1) the appearance of a second low-polarized-intensity ``arc'' $\sim$0.2\arcdeg\@ wide and $\sim$0.4\arcdeg\@ in length, located in the north-west portion of the visible disk of \mbox{Sh\,2-216}; and (2) the increased range of polarization angles on the visible disk of \mbox{Sh\,2-216}\@ compared to the surroundings. The small-scale structural variations in polarization angle within the north-west arc indicate that beam depolarization is responsible to at least a moderate degree for the reduced emission in this feature. 
If this second arc is indeed associated with \mbox{Sh\,2-216}, then the conditions for sharp $RM$ gradients in the shell of the PN may not be confined to the optically-identified interaction region. Moreover, a comparison of the range of polarization angles seen inside the visible disk of \mbox{Sh\,2-216}\@ ($-82\arcdeg$ to $+27\arcdeg$, $\rm{rms}\approx17\arcdeg$) with those seen outside ($-17\arcdeg$ to $+32\arcdeg$, $\rm{rms}\approx6\arcdeg$) suggests that the conditions for moderate $RM$s are present throughout the shell of \mbox{Sh\,2-216}. We present a simple model for the observed polarization structures on the visible disk of \mbox{Sh\,2-216}\@ in \S~\ref{discuss}. In contrast to the smaller-scale structures seen on the visible disk of \mbox{Sh\,2-216}\@ in the northern half of the images ($b > +0.5\arcdeg$), the southern half of the images ($b < +0.5\arcdeg$) is dominated by ``bands'' of relatively low polarized intensity, 0.1\arcdeg--0.3\arcdeg\@ wide, which stretch approximately east-west across the region. The bands have no counterpart in total intensity. The boundary between north and south is clearly marked in the polarization angle image by a jagged line over which the angle changes very rapidly. On close inspection of the polarized intensity image, this line corresponds to a narrow ($\sim$1\arcmin) channel within the northernmost band of virtually zero polarized emission. Changes in the polarization angle across ``cells'' $<3\arcmin$ in size are seen, to differing degrees, throughout the bands. The bands appear to be part of a large-scale complex which depolarizes the background diffuse emission, most likely before it reaches the position of \mbox{Sh\,2-216}. In the less likely scenario that the complex sits between the Sun and \mbox{Sh\,2-216}, any polarization signature imprinted on the background by \mbox{Sh\,2-216}\@ is lost. 
In either case, the positioning of the bands on the sky south of the north-east arc associated with \mbox{Sh\,2-216}, and other apparent features on the northernmost portion of the visible disk of the PN, would seem to be fortuitous. \subsection{Rotation-Measure Structure in the North-East Arc \label{arc}} The polarization angle of the relatively bright emission outside the visible disk of \mbox{Sh\,2-216}\@ (and at $b > +0.5\arcdeg$) has a mean value of $+7$\arcdeg\@ and rms variations of only 6\arcdeg\@. Along the prominent north-east arc, the polarization angle is observed to change rapidly across the perimeters of roughly elliptical ``knots.'' The angle inside the perimeters changes more slowly and, indeed, plateaus at the centers of the knots. In Figure~\ref{zoompaimage} we show a small $0.4\arcdeg\times0.4\arcdeg$ region in polarization angle around the north-east arc and identify eight discrete knots. We define as the center of each knot the position of the pixel showing the maximum clockwise (see below) deviation from the background ($+7\arcdeg \pm 6\arcdeg$) value. In Table~\ref{knotangles} we give the mean value of the polarization angle in each knot. The mean was estimated over an area corresponding to the area of the resolving beam (10 pixels, see Figure~\ref{zoompaimage}), excluding, in the cases of knots 2 and 4, pixels which differed from the 10-pixel mean by more than 2$\sigma$. Table~\ref{knotangles} shows that the polarization angle of the emission emerging from the knots is rotated significantly with respect to the background emission. The weighted mean polarization angle at the centers of the knots is $+78\arcdeg \pm 22\arcdeg$. If we assume that emission with polarization angle $+7\arcdeg \pm 6\arcdeg$ is incident on the far side of each knot, and for the moment ignore foreground emission, then the incident emission is Faraday rotated in the knots by $\Delta\theta = -109\arcdeg \pm 23\arcdeg$. 
We infer negative, i.e., clockwise, rotation by tracing polarization angles from the outside edge of the arc to the center of any knot. The trace shows that the polarization angle (first) decreases through negative values. At five of the eight knot perimeters, the polarization angle jumps from $-90\arcdeg$ to $+90\arcdeg$, and then continues to decrease to its center value. Since foreground emission probably cannot be ignored at the $\sim$10\% level, we must estimate the maximum deviation expected in the observed polarization angle if, by chance, the foreground emission is rotated 45\arcdeg\@ relative to the emission emerging from the knots. (Note that foreground emission rotated 90\arcdeg\@ relative to the background leads to a maximum reduction in polarized intensity, but no net rotation in polarization angle.) Assuming complete beam depolarization at the perimeters of at least some of the knots, in particular knots 1 and 4, we estimate the polarized foreground emission to be $0.046 \pm 0.012$~K, consistent with the values predicted by Galactic synchrotron models. Using $0.177 \pm 0.044$~K for the mean observed (i.e., emerging plus foreground) polarized emission at the centers of the knots (see Table~\ref{knotangles}), we estimate a maximum deviation of 7\arcdeg. Adding this in quadrature to the 23\arcdeg\@ statistical uncertainty gives a standard error for the measured rotation through the knots of 24\arcdeg. For a center wavelength of 21.12~cm (see \S~\ref{obs}), a rotation of $-109\arcdeg \pm 24\arcdeg$ gives (via Equation 1) $RM = -43 \pm 10$~$\rm{rad}\,\rm{m}^{-2}$. The emission emerging from the centers of the knots, where the polarization angles plateau, is likely higher than the $0.177 \pm 0.044$~K value given above, but polarization-angle variations over the resolving beam lead to reduced polarized intensity even inside the knot perimeters. 
While these variations are reflected in the range of angles observed in each knot (see Table~\ref{knotangles}), we nevertheless believe our estimated mean $RM$ reflects a real systematic rotation of the background emission as it passes through the knots. The consistent clockwise rotation of the polarization angle observed in moving from outside the north-east arc toward the center of any knot strengthens this assertion. We tried to estimate the $RM$ through the knots using the four-band data from the ST but failed. Given the typical uncertainty in polarization angle in the band-averaged image, and noting that $RM \approx -43$~$\rm{rad}\,\rm{m}^{-2}$ gives a difference in rotation angle of only $\sim$5\arcdeg\@ over 27.5~MHz ($\Delta\lambda = 0.41$~cm), the failure is not surprising. \section{DISCUSSION \label{discuss}} We can derive the line-of-sight component of the magnetic field through the knots in the north-east arc of \mbox{Sh\,2-216}\@ using $-43 \pm 10\ \rm{rad}\,\rm{m}^{-2}$ as an estimate of the $RM$ in the knots, and using estimates of the thermal electron density in and path length through the interaction region (see Equation 2). The electron density in \mbox{Sh\,2-216}\@ can be calculated from emission measure ($EM = \int{{n_e}^2\,dl}$) determinations, made independently at optical and radio wavelengths. Based on their measured H$\alpha$ intensity and gas temperature ($T_e = 9400 \pm 1100$~K), \citet{Reynolds1985} estimates a mean value for the emission measure over \mbox{Sh\,2-216}\@ of $EM \approx 42$~$\rm{cm}^{-6}$\,pc. Using a brightness temperature of $T_b = 0.11 \pm 0.02$~K for the thermal radio emission in the interaction region of \mbox{Sh\,2-216}\@ (obtained via comparison of on-source and off-source temperatures in Figure~\ref{radandoptimages}$b$), and the same gas temperature, we estimate a value for the interaction region of $EM = 69 \pm 13$~$\rm{cm}^{-6}$\,pc. 
If we assume for the moment that electrons are uniformly distributed over the approximately spherical volume of \mbox{Sh\,2-216}, and use an average path length through the sphere of $\Delta l = \frac{4}{3} R_{\rm{PN}} \approx 2.5$~pc, then the optically-determined $EM$ gives a mean electron density over \mbox{Sh\,2-216}\@ of $n_e \approx 4.1$~$\rm{cm}^{-3}$. In some contrast, the radio-intensity-determined $EM$ gives, for a path length\footnote{The path length $\Delta l = 1.1 \pm 0.3$~pc corresponds to the mean of the line-of-sight chord lengths through a 1.9-pc radius sphere at the positions of the eight knots.} through the interaction region of $\Delta l = 1.1 \pm 0.3$~pc, $n_e = 7.9 \pm 1.3$~$\rm{cm}^{-3}$. The factor $\sim$2 increase in the electron density in the interaction region compared to the mean value over the entire PN is reasonable, since material is stacking up in the interaction region \citep[see][]{TweedyMN1995}. However, $n_e = 7.9 \pm 1.3$~$\rm{cm}^{-3}$ still represents a mean over the interaction region. The H$\alpha$ images of \citet{TweedyMN1995} show small-scale filamentary structures in the interaction region with localized factor 1.5--2 enhancements in $EM$ relative to the mean. The filaments correspond approximately in both location and size to the polarization-angle knots. Since we cannot confirm the physical association between the filaments and the knots, we conservatively assume an $EM$-enhancement of $1.5 \pm 0.5$, and estimate the electron density in the knots to be $n_e = 9.7 \pm 2.3$~$\rm{cm}^{-3}$. Using $RM = -43 \pm 10$~$\rm{rad}\,\rm{m}^{-2}$, $n_e = 9.7 \pm 2.3$~$\rm{cm}^{-3}$ and $\Delta l = 1.1 \pm 0.3$~pc, we derive a line-of-sight magnetic field through the knots in the interaction region of $B_{\|} = 5.0 \pm 2.0$~$\mu\rm{G}$. Since the $RM$ is negative, this field is directed into the plane of the sky. 
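For the reader who wishes to retrace these estimates, the central values follow from a short chain of arithmetic. We assume here the conventional forms of Equations 1 and 2, $\Delta\theta = RM\,\lambda^{2}$ and $RM = 0.81\int n_e B_{\|}\,dl$ (with $n_e$ in $\rm{cm}^{-3}$, $B_{\|}$ in $\mu\rm{G}$, and path length in pc):
\begin{eqnarray*}
RM & = & \Delta\theta/\lambda^{2} = (-1.90\ \rm{rad})/(0.2112\ \rm{m})^{2} \approx -43\ \rm{rad}\,\rm{m}^{-2}, \\
n_e & = & \sqrt{1.5\,EM/\Delta l} = \sqrt{(1.5)(69\ \rm{cm}^{-6}\,\rm{pc})/(1.1\ \rm{pc})} \approx 9.7\ \rm{cm}^{-3}, \\
B_{\|} & = & RM/(0.81\,n_e\,\Delta l) = -43/[(0.81)(9.7)(1.1)]\ \mu\rm{G} \approx -5.0\ \mu\rm{G},
\end{eqnarray*}
where $\Delta\theta = -109\arcdeg = -1.90$~rad, and the quoted uncertainties follow from propagating the errors on $\Delta\theta$, $EM$, the $EM$-enhancement factor, and $\Delta l$.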
\subsection{An ISM Origin for the Magnetic Field in the Shell of \mbox{Sh\,2-216} \label{ismfield}} Is a $\sim$5~$\mu\rm{G}$ line-of-sight magnetic field reasonable for the ISM around \mbox{Sh\,2-216}? Since there is no direct measurement of the ISM magnetic field around \mbox{Sh\,2-216}, we estimate the local field from what is known generally about the Galactic magnetic field. The Galactic magnetic field is concentrated in the disk and has two components \citep[see, e.g.,][]{Beck+1996}: a large-scale or regular component ($B_{reg}$), which follows the spiral arms, and a small-scale or random component ($B_{ran}$). $B_{reg}$ in the local spiral arm is found, using polarized radio sources and the polarization of starlight, to be directed toward $l \approx 85\arcdeg$ \citep[e.g.,][]{RandL1994,Heiles1996a,BrownT2001}; i.e., clockwise as viewed from the Galactic north pole. Fluctuation cell sizes for $B_{ran}$ are estimated to be 50--100~pc \citep[see][]{RandK1989,OhnoS1993}. The ratio of the strengths of the random and regular components of the Galactic field, $B_{ran}/B_{reg}$, can be obtained directly from starlight polarization data and synchrotron polarization data using the model presented in \citet{Burn1966}. The starlight polarization data of \citet{Fosalba+2002} give for a large sample of stars covering all Galactic longitudes $B_{ran}/B_{reg} \approx 1.3$. Using stars from the sample of \citet{MathewsonF1970} in the range $120\arcdeg < l < 180\arcdeg$, and a modified version of the Burn model, \citet{Heiles1996b} finds $B_{ran}/B_{reg} \approx 1.5$. For synchrotron emission just north of the visible disk of \mbox{Sh\,2-216}\@ ($l = 158.5\arcdeg$), we measure a fractional linear polarization of $p = 0.27 \pm 0.03$, close to the maximum value found by \citet{Spoelstra1984} for the diffuse emission in the Galactic plane. 
With a spectral index $\alpha = -0.44 \pm 0.04$ ($S \propto \nu^{\alpha}$) between 408~MHz and 1420~MHz for the synchrotron emission in the CGPS region around \mbox{Sh\,2-216}, we get for the intrinsic value of the fractional linear polarization $p_{max} = 0.68 \pm 0.01$ \citep[see][]{GinzburgS1965}, and thus obtain $B_{ran}/B_{reg} = 1.51 \pm 0.13$, consistent with the \citet{Heiles1996b} starlight estimate. Using a value $B_{tot} = 4.2$~$\mu\rm{G}$ for the average azimuthal field strength ($B_{tot}^2 = B_{reg}^2 + B_{ran}^2$) in the local arm \citep{Heiles1996b} and $B_{ran}/B_{reg} = 1.51$, we estimate $B_{reg} \approx 2.3$~$\mu\rm{G}$ and $B_{ran} \approx 3.5$~$\mu\rm{G}$. The maximum magnetic field at any point in the local arm is then achieved if, by chance alignment, the random field lies parallel to the regular field; i.e., $B_{max} = B_{reg} + B_{ran} \approx 5.8$~$\mu\rm{G}$. At the longitude of \mbox{Sh\,2-216}, both the average field ($B_{tot} = 4.2$~$\mu\rm{G}$) and maximum possible field ($B_{max} = 5.8$~$\mu\rm{G}$) lie largely in the plane of the sky, and run from Galactic east to west. The maximum field along the line-of-sight, where $B_{\|reg} \approx 0.6$~$\mu\rm{G}$, is $B_{\|max} = B_{\|reg} + B_{ran} \approx 4.1$~$\mu\rm{G}$, directed into the plane of the sky. In light of this brief overview, we conclude that an intrinsic $\sim$5~$\mu\rm{G}$ line-of-sight magnetic field in the ISM at the position of \mbox{Sh\,2-216}\@ is unlikely. Nevertheless, our observations can be used to comment further on the structure of a proposed ISM field in the interaction region as well as other locations in the shell of \mbox{Sh\,2-216}. Our estimate of the line-of-sight magnetic field in the interaction region is based on the maximum $RM$ as seen through knots in our polarization angle image. The $RM$s outside the knots are apparently much lower. 
The sharp $RM$ gradients over the knot perimeters must be the result of a rapid change in either the electron density or the line-of-sight magnetic field, or both. As we previously noted, the polarization-angle knots appear to be associated with narrow H$\alpha$ filaments observed in the interaction region. The sharp edges of the filaments, which denote a rapid change in $EM$ (and thus electron density), naturally explain $RM$ gradients across knot perimeters. The magnetic field need not change significantly across the interaction region. Realistically, however, the magnetic field is probably affected by turbulence in the hot gas (see \S~\ref{simplemodel}). If we look west of the north-east arc, toward the center of the visible disk of \mbox{Sh\,2-216}, we continue to see polarization angles significantly different from the $+7\arcdeg \pm 6\arcdeg$ observed outside the PN (see Figure~\ref{piandpaimages}$b$). Though we do not see prominent structures in this ``interior'' region, we do see some localized polarization-angle structures. These structures are roughly coincident with low-level enhancements in H$\alpha$ and \ion{N}{2} \citep[see][]{TweedyMN1995} and total radio intensity (see Figure~\ref{radandoptimages}$b$). Localized electron-density enhancements may therefore be responsible for both the knots in the interaction region and the more extended structures seen across the western portion of the face of \mbox{Sh\,2-216}. Indeed, these two apparently different structures may arise from similar underlying structures, seen edge-on in the case of the knots, and face-on in the case of the extended structures \citep[see, e.g.,][]{TweedyMN1995}. The increased path length through the shell in the interaction region would explain, at least in part, why the $EM$s and $RM$s in the filaments/knots are larger than those in the extended structures across the face. 
A decrease in the line-of-sight component of the magnetic field, moving west from the north-east edge of \mbox{Sh\,2-216}\@ toward the center of the visible disk, could also account for some of the difference (see \S~\ref{simplemodel}). A second low-polarized-intensity arc appears at the north-west edge of the visible disk of \mbox{Sh\,2-216}\@ (see Figure~\ref{piandpaimages}$a$). Small-scale variations in polarization angle within this arc indicate, as in the north-east arc, the presence of sharp $RM$ gradients. However, unlike the north-east arc, there are no plateaus (i.e., multi-pixel regions of roughly constant polarization angle) over which we can confidently estimate some deviation from the outside $+7\arcdeg \pm 6\arcdeg$. Consequently, we have no means of estimating the magnitude of the $RM$ through this arc. Nevertheless, there is some indication of the sign of the $RM$. Moving south from the edge of the north-west arc, the polarization angle (on average) increases, implying positive $RM$s. To substantiate this finding, we broke the north-west arc into three north-south slices, and used the approach of \citet{WollebenR2004} to estimate $RM$ together with three other parameters (degree of depolarization, foreground polarized intensity and foreground polarization angle). We found positive $RM$s for each slice, even when we varied the other parameters away from their ``best-fit'' values. The positive $RM$s indicate that the magnetic field in the north-west arc is directed out of the plane of the sky. If the north-west arc is associated with \mbox{Sh\,2-216}, and the ISM field is responsible for the observed $RM$s in both the north-east and north-west arcs, then the intrinsic field must be deflected around the PN. 
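Schematically, such a single-screen model treats the observed complex polarization on each slice as a depolarized, Faraday-rotated background plus an unrotated foreground. The following form is our own shorthand, intended only to indicate the roles of the four fitted parameters, and does not necessarily reproduce the exact parameterization of \citet{WollebenR2004}:
\begin{displaymath}
P_{\rm obs} = f\,p_{\rm bg}\,e^{2i(\theta_{\rm bg} + RM\,\lambda^{2})} + p_{\rm fg}\,e^{2i\theta_{\rm fg}},
\end{displaymath}
where $f$ is the degree of depolarization, $RM$ the rotation measure of the screen, $p_{\rm fg}$ and $\theta_{\rm fg}$ the foreground polarized intensity and angle, and $p_{\rm bg}\,e^{2i\theta_{\rm bg}}$ the background polarization measured off the arc. In this convention a positive $RM$ rotates the polarization angle counter-clockwise (to larger values), consistent with the behavior observed moving south from the edge of the north-west arc.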
\subsubsection{A simple model for the ISM magnetic field around \mbox{Sh\,2-216} \label{simplemodel}} For the ISM magnetic field to simultaneously account for the negative $RM$s observed in the north-east arc and the positive $RM$s observed in the north-west arc, the intrinsic field must bend significantly around the shell of \mbox{Sh\,2-216}\@ such that it has a $\sim$5~$\mu\rm{G}$ line-of-sight component into the sky on the east edge of the PN and a non-zero line-of-sight component out of the sky on the west edge. This is exactly what we might expect in the following scenario (see Figure~\ref{modelpic}): The intrinsic ISM field around \mbox{Sh\,2-216}\@ is described by the 4.2~$\mu\rm{G}$ azimuthal component of the Galactic magnetic field \citep[see][]{Heiles1996b}, which, at $l = 158.5\arcdeg$, runs in the local arm from Galactic east to west and intersects the plane of the sky at $15.5\arcdeg \pm 4\arcdeg$ \citep{BrownT2001}. The intrinsic field is compressed and deflected by the expanding and moving PN, since it can diffuse only slowly into the partially ionized shell \citep[see][]{SokerD1997}. The three-dimensional motion of \mbox{Sh\,2-216}\@ is fully characterized by the (Galactic) north-west-directed motion of the host white dwarf \citep{CudworthR1985,TweedyMN1995} and a line-of-sight motion into the plane of the sky (see below). The motion in the plane of the sky is of less importance for our observations than the motion along the line-of-sight, though the full three-dimensional picture is important for interpreting the alignment of the filaments in the interaction region and wider structures across the face of \mbox{Sh\,2-216}\@ \citep[see][]{TweedyMN1995}. At the far side of \mbox{Sh\,2-216}, the line-of-sight motion drags the intrinsic field away from the observer. 
The result is a deflected field around \mbox{Sh\,2-216}\@ which has on the east edge of the PN a line-of-sight component directed into the sky and on the west edge a line-of-sight component directed out of the sky. In the center portion of the PN, the field lies largely in the plane of the sky. The line-of-sight component of the field on the east edge is slightly larger in strength than the intrinsic field itself, while that on the west side is lower. We estimate the line-of-sight motion of \mbox{Sh\,2-216}\@ by consulting the Wisconsin H$\alpha$ Mapper (WHAM) survey \citep{Haffner+2003}. The WHAM data show that the H$\alpha$ emission from \mbox{Sh\,2-216}\@ peaks at velocity $+5 \pm 1$~$\rm{km}\,\rm{s}^{-1}$ relative to the local standard of rest. (Note that the sign for velocity is opposite that of $RM$; i.e., a positive velocity signifies motion into the plane of the sky while a positive $RM$ signifies a magnetic field out of the plane of the sky). The emission surrounding \mbox{Sh\,2-216}\@ peaks at velocities near zero, indicating that the $+5 \pm 1$~$\rm{km}\,\rm{s}^{-1}$ is indeed relative to the surrounding ISM. The magnetic field described by our model represents only the ``smooth'' component of the deflected ISM field. The magnetic field in the shell of the PN will also have a turbulent component due to motions in the hot gas. The turbulent component contributes in part to the beam depolarization we observe in the interaction region. The smooth component is responsible for the systematic $RM$ observed through the knots. \subsubsection{Are old PNe good potential probes of the magnetized ISM? \label{potentialasprobe}} We have asked in this section whether or not the magnetic field derived from the observed $RM$s is reasonable for the ISM around \mbox{Sh\,2-216}. If we were, instead, to concede that the derived field is native to the ISM, then we could ask the question: Are old PNe, such as \mbox{Sh\,2-216}, good probes of the intrinsic ISM field? 
The simple qualitative model presented above for \mbox{Sh\,2-216}\ suggests that we can learn something about the intrinsic field surrounding this PN. With a more comprehensive (three-dimensional) model of the PN-ISM interaction, it may be possible to work out quite accurately the strength and orientation of the intrinsic field. Since a large percentage of old PNe show PN-ISM interactions similar to \mbox{Sh\,2-216}\@ \citep[see][]{TweedyK1996}, it is perhaps reasonable to assume that the conditions for detectable Faraday rotation are present in the shells of many old PNe in the nearby Galaxy. Given a good model of the interaction in each case, it should then be possible to determine the intrinsic field at many locations. There are two significant drawbacks to consider before we declare old PNe good potential probes of the magnetized ISM: (1) It may not be possible in many cases to construct a good model of the PN-ISM interaction, due either to large uncertainties in the physical parameters (e.g., ISM and PN particle densities, space velocity of the PN) or to the overall complexity of the interaction. The qualitative model presented above for \mbox{Sh\,2-216}\@ does not comment on either the degree to which the intrinsic field around the PN is compressed or the maximum angle with which the intrinsic field is deflected. Both of these quantities are necessary in order to more accurately determine the intrinsic field from the line-of-sight field. Three-dimensional magneto-hydrodynamic (MHD) simulations can perhaps be used to demonstrate the sensitivity of the intrinsic-field determination to various parameters. (2) The Faraday signature of the PN is at the mercy of fluctuations in the warm ionized ISM as well as turbulent structures (e.g., \ion{H}{2} regions) which may lie along the line-of-sight. The dark bands that run through the approximate midpoint of \mbox{Sh\,2-216}\@ (almost) completely depolarize the diffuse background emission. 
If \mbox{Sh\,2-216}\@ were located $\sim$0.5\arcdeg\@ south of its actual position, then the distinct signature of the north-east arc would be destroyed. Given the prevalence of turbulence in the ISM, this point is a significant concern. In fact, of the six PNe in the CGPS region known to interact with the ISM, only two, including \mbox{Sh\,2-216}, are seen in polarization. (The other, namely DeHt\,5, is the subject of a subsequent paper.) Targeted polarimetric observations of old PNe at sub-arcminute resolution and at multiple frequencies in the range 1--3~GHz are necessary to better establish the potential of these objects as good probes of the magnetized ISM. \subsection{A Stellar Origin for the Magnetic Field in the Shell of \mbox{Sh\,2-216} \label{stellarfield}} Can a $\sim$5~$\mu\rm{G}$ magnetic field in the interaction region be attributed to the host white dwarf near the center of \mbox{Sh\,2-216}? The magnetic fields of white dwarfs have been measured via spectropolarimetric observations of optical absorption lines, but only recently have the observations had the sensitivity to detect kilogauss fields. The studies thus far have focused on either fully evolved (compact) white dwarfs \citep{AznarCuadrado+2004} or central stars of relatively young PNe which are still transitioning to white dwarfs \citep*{JordanWO2005}. In the case of evolved white dwarfs, \citet{AznarCuadrado+2004} found for a sample of 12 stars only three which had detectable magnetic fields in the range 2--4~kG, a detection rate of 25\%. On the other hand, \citet{JordanWO2005} found for each of a selection of four transition stars magnetic fields of 1--3~kG, a detection rate of 100\%. 
Though the number statistics for both cases are relatively poor, this pair of results suggests that the magnetic fields of transition stars are present not in their degenerate cores but rather their extended envelopes, since magnetic flux is apparently lost during white dwarf evolution (i.e., during collapse from stellar radii in the \citealt{JordanWO2005} sample of 0.14--0.3\,$R_{\sun}$ to white dwarf radii of $\approx$0.012\,$R_{\sun}$). The magnetic fields of the central stars of old PNe have not been measured. Given their intermediate radius, e.g., 0.05\,$R_{\sun}$ for the ``central'' star in \mbox{Sh\,2-216}\@ \citep[based on the luminosity and effective temperature given in][]{Rauch+2007}, measurements of the magnetic fields of the central stars of old PNe could lead to an improved understanding of field evolution in white dwarfs. We point out for completeness that a small fraction ($\sim$10\%) of white dwarfs are observed to have magnetic fields at the 1~MG level or higher \citep[see][]{Liebert+2003}, but these stars tend to have masses ($\approx$0.9\,$M_{\sun}$) much higher than typical white dwarf masses (0.48--0.65\,$M_{\sun}$), and may come from magnetized progenitors such as peculiar (Ap) stars \citep[see][]{Liebert1998,Liebert+2003}. For present purposes, we assume the magnetic field in the envelope around the contracting ``central'' star in \mbox{Sh\,2-216}\@ to be accurately represented by the magnetic field ($B_{\rm{avg}} = 1.8$~kG) measured at the radii ($r_{\rm{avg}} = 0.21\,R_{\sun}$) of the central stars in the \citet{JordanWO2005} sample. With some knowledge of the large-scale magnetic field geometry, we can then estimate the field at large radii; namely, in the shell of \mbox{Sh\,2-216}. Unfortunately, at this time, neither observations nor theory form a complete picture of the magnetic fields in PNe. 
Magnetic field measurements of maser spots in precursor (AGB) circumstellar envelopes suggest a radial dependence of the field \citep[$B \sim r^{-2}$; see][]{VlemmingsDL2002}, while measurements for the supergiant VX\,Sgr show a poloidal dependence \citep*[$B \sim r^{-3}$; see][]{VlemmingsLD2005}. In contrast, the geometry of filamentary structures observed by \citet{HugginsM2005} in three PNe, as well as measurements by \citet*{VlemmingsDI2006} of the magnetic field structure in the jet emanating from AGB star W43A, suggest the dominance of toroidal fields ($B \sim r^{-1}$), consistent with the theoretical framework of \citet{ChevalierL1994}. If either radial or poloidal geometries hold for \mbox{Sh\,2-216}, then the magnetic field in the interaction region ($\approx$1.0~pc from the host white dwarf) would fall well below our $RM$-estimated $\sim$5~$\mu\rm{G}$ line-of-sight field. On the other hand, if a toroidal field holds, then the magnetic field in the interaction region could be $\sim$8~$\mu\rm{G}$. Given the spherical symmetry of the shell of \mbox{Sh\,2-216}, a large-scale toroidal magnetic field for this PN, invoked generally to explain non-spherical (e.g., bipolar, elliptical) symmetries in young PNe, is unlikely. However, localized enhancements of the internal magnetic field, due to compression in dense knots or filaments \citep[see, e.g.,][]{Soker2002,SokerK2003}, are possible. Thus we cannot completely dismiss the possibility of a $\sim$5~$\mu\rm{G}$ internal field in the shell of \mbox{Sh\,2-216}. Detailed MHD simulations need to be done in order to better understand the magnetic field geometry in PNe. \section{CONCLUSIONS \label{concl}} Here we give a summary of our results and conclusions: 1. We presented 1420~MHz polarization images for the $2.5\arcdeg\times2.5\arcdeg$ region in the CGPS around the PN \mbox{Sh\,2-216}. 2. 
A low-polarized-intensity arc, $0.2\arcdeg \times 0.7\arcdeg$ in size, appears in the north-east portion of the visible disk of \mbox{Sh\,2-216}. The arc is coincident with the optically-identified interaction region between the PN and the ISM. 3. A second low-polarized-intensity arc appears in the north-west portion of the visible disk of \mbox{Sh\,2-216}. 4. The north-east arc contains structural variations down to the $\sim$1\arcmin\@ resolution limit in both polarized intensity and polarization angle. Several polarization-angle ``knots'' appear along the arc. 5. Via comparison of the polarization angles at the centers of the knots in the north-east arc and the mean polarization angle outside \mbox{Sh\,2-216}\@ (and above $b \simeq +0.5\arcdeg$), we estimated the $RM$ through the knots to be $-43 \pm 10\ \rm{rad}\,\rm{m}^{-2}$. 6. Using this estimate for the $RM$ and an estimate of the electron density in the shell of \mbox{Sh\,2-216}, we derived a line-of-sight magnetic field in the interaction region of $5.0 \pm 2.0$~$\mu\rm{G}$. 7. We believe it more likely that the derived magnetic field is interstellar than stellar, though we cannot completely dismiss the latter possibility. We interpret our observations via a simple model which qualitatively describes the ISM magnetic field around \mbox{Sh\,2-216}. 8. It is unclear whether old PNe like \mbox{Sh\,2-216}\@ could be useful probes of the magnetized ISM. Targeted polarimetric observations at high resolution ($<$1\arcmin), and possibly at multiple frequencies in the range 1--3~GHz, may help separate the signatures of more PNe from the turbulent ISM. \acknowledgements We thank an anonymous referee for a constructive review of the paper and for comments helpful in the preparation of the final manuscript. R.R.R.\@ would like to thank Maik Wolleben for applying his Faraday screen model to our data and for insightful discussions. 
The Canadian Galactic Plane Survey is a Canadian project with international partners, and is supported by a grant from NSERC. The Dominion Radio Astrophysical Observatory is operated as a national facility by the National Research Council of Canada. This research is based in part on observations with the 100-m telescope of the MPIfR at Effelsberg. The Second Palomar Observatory Sky Survey (POSS-II) was made by the California Institute of Technology with funds from the National Science Foundation, the National Geographic Society, the Sloan Foundation, the Samuel Oschin Foundation, and the Eastman Kodak Corporation. The Wisconsin H-Alpha Mapper is funded by the National Science Foundation. \bibliographystyle{apj}
\section*{Figure Captions} \begin{description} \item[Fig. 1] Fermi surface of the noninteracting $tt'$-Hubbard model, $t'= 0.3 t$, $E_F = -1.2t$, hole doping = 0.27. Solid circles indicate saddle points. \item[Fig. 2a] Superconducting vertex correlation function $\chi(R)$ (Eq. (3)), plotted versus distance $R$, $12\times 12$ lattice, 106 electrons (doping 0.264), $\Theta = 8/t$, $t'= 0.286t$, $U = 2t$. Inset shows the correlation function on an expanded scale; the horizontal dashed line is the average plateau value. Error bars are less than the width of the points. \item[Fig. 2b] QMC calculation of the plateau vs. $U$, for an $8\times 8$ lattice, 50 electrons (doping 0.22), $\Theta = 8/t$, $t'=0.22t$. \item[Fig. 3] Filled points: QMC calculation of the plateau value $\chi^{pl}$ of the superconducting vertex correlation function (Eq. (3)) versus $t'$, for a $10\times 10$ lattice, 74 electrons (doping 0.26), $\Theta = 8/t$, $U = 2t$; the error bar is an average. Open points: fbcs calculation (Eq. (4)) with $J=0.055t$ and cutoff $\omega_c = 0.2 t$. \item[Fig. 4] Superconducting vertex correlation $\chi^{pl}$ in the onsite-s channel for attractive $U=-0.3$ versus superconducting vertex correlation $\chi^{pl}$ in the $d_{x^2-y^2}$-channel for the repulsive Hubbard model ($U=2$). Filling and $t'$ are the same for both values of $U$ and are indicated in Table I. \end{description} \newpage \begin{table}[h] \begin{tabular}{ccccc} $N_L$ & $N_e$ & $\omega_c$ & $\chi^{pl}$ & $J$ \\ 36 & 26 & 0.3 t & 1.377E-03 & 0.12 t \\ 64 & 50 & 0.25 t & 0.648E-03 & 0.15 t \\ 100 & 82 & 0.25 t & 0.491E-03 & 0.15 t \\ 144 & 122 & 0.25 t & 0.332E-03 & 0.15 t \end{tabular} \label{tab1} \end{table} \begin{description} \item[Table I] Scaling, $t'=0.22 t$, $U=2t$. \end{description} \end{document}
\section{Introduction} Power control is especially crucial in a large-scale multiuser wireless network where interference is the main limiting factor in achieving high network throughput. A large volume of work, led by the pioneering results in \cite{JZANDER93,SAGRVDJGJZ93,JZANDER92,GFZM93}, has contributed to the design of optimal centralized or distributed power control schemes that can provide a certain quality of service (QoS). A general framework for power control was thoroughly examined in \cite{RDYATES95} for a broad class of systems, where it is shown that if the interference function is standard, a distributed and iterative (continuous) power control algorithm converges to the minimum power solution. Although such continuous power control schemes are technically sound, they have to be discretized in practice since transmit power in a digital handset can only be updated at discrete levels \cite{MAZRJZ98}. For instance, the downlink and uplink transmit power in an IS-95 system may vary from 12 to 85 dB at steps of 0.5 dB \cite{Qualcomm92}. As such, how to design and implement discrete power control in wireless communication systems remains a key problem. In an ad hoc network, a discrete power control (DPC) scheme is preferably developed in a distributed fashion to reduce control overhead, which usually results in suboptimal schemes, especially when the network size is large. In recent years, applying the Poisson point process (PPP) to model random node locations in large-scale networks has been shown to be a valid and analytically tractable approach \cite{WeberTC,FBBBPM06,MHJGAFBODMF10}. However, the power control problem in such a framework may not be completely tractable, since the complex distribution of the interference exacerbates the analyses of outage probability, network throughput, etc. In this paper, we aim at developing a simple and tractable DPC scheme in such a PPP-based ad hoc network framework.
More generally, we consider a Poisson cluster process (PCP)\footnote{The phenomena of PCP-based node distribution can be observed in many different kinds of wireless networks, such as clustered sensor networks, mobile ad hoc networks, small cell and heterogeneous cellular networks in a large city, etc. \cite{OYMKSR06,JYYPHJC06,CHLJGA11,RKGMH09}.} to model the distributions of transmitters and receivers in a clustered ad hoc network: Transmitters form a homogeneous PPP of intensity $\lambda$, and each of them is associated with a random number of receivers in a circular cluster that is tessellated into $N$-layer annuli. \subsection{Previous Work} Representative works on distributed power control in wireless ad hoc networks can be found in \cite{SARHKSVKSKD01,TEAE04,CWSKKL05,VKPRK05}, which usually are not designed for discrete implementation. A distributed DPC scheme cannot be simply realized by discretizing a continuous distributed power control scheme, since the resulting DPC schemes may not retain the convergence and uniqueness properties \cite{MAZRJZ98}. Therefore, DPC needs its own problem formulation and analysis. For example, in \cite{SLKZRJZ99} the authors studied the joint optimization problem of discrete power and rate control. The problem of minimizing the sum power subject to signal-to-noise ratio constraints was considered in \cite{CWUDPB01}. Meanwhile, game-theoretic formulations of distributed DPC are popular. In \cite{MHRPMPEC04}, a game-theoretic formulation for non-cooperative power control with discrete power levels and channel fading states is proposed, while \cite{YXRC08} formulated the distributed DPC problem as a utility-based $N$-person nonzero-sum game with a stochastic iterative process. Reference \cite{EAKAIMGMBJPAS09} investigates a dynamic discrete power control scheme in uplink cellular networks in which the transmit power level of a user is chosen based on the available channel state information.
Although the above schemes succeed in achieving a certain level of power optimality, they are unable to provide tractable analytical performance metrics, such as the outage probability and network throughput. In addition, their results are mainly restricted to small network topologies, so that useful insights into the behavior of large-scale networks can hardly be obtained. In the framework of Poisson-distributed ad hoc networks, a few heuristic power control algorithms have been studied, most of which aim at combating the fading effect. For example, the channel-inversion power control studied in \cite{SWJGANJ07} sets the transmit power as the inverse of the channel gain between a transmitter and its intended receiver. For some fading distributions like Rayleigh fading, the inverse channel gain can be infinitely large, which is infeasible to implement. Another similar power control scheme, called fractional power control, is a modified version of channel-inversion power control; its idea is to make the transmit power a partially inverse function of the fading channel gain \cite{NJSPWJGA08}. These channel-aware power control schemes require the knowledge of instantaneous fading gains at every time slot, and thus their performance may significantly degrade under erroneous channel estimation. The ALOHA-type random on-off power control policies and delay-optimal power control policies in a Poisson-distributed wireless network are studied in \cite{XZMH0912} and \cite{XZMH1012}, respectively. None of these prior power control schemes for Poisson-distributed wireless networks is discrete, and thus implementing them in a discrete way undermines their original idea of combating/canceling fading. In addition, the signal reception quality can be remarkably affected by the transmission distance, which means that an efficient DPC scheme should be distance-aware. This is the core idea of the proposed $N$-layer DPC scheme in this paper.
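The distance-aware selection rule just described can be sketched in a few lines. The helper names, layer radii, and power values below are illustrative placeholders, not quantities from the paper (the actual layer tessellation and power set are derived later):

```python
import bisect

def layer_index(r, radii):
    """Index of the annulus containing distance r, given the increasing
    outer radii of the N layers (assumes r <= radii[-1])."""
    return bisect.bisect_left(radii, r)

def select_power(r, radii, powers):
    """Distance-aware discrete power selection: use powers[i] when the
    intended receiver at distance r falls in layer i."""
    return powers[layer_index(r, radii)]
```

A receiver in an outer annulus is simply assigned the power associated with that layer, so the transmit power grows with the transmission distance.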
\subsection{Contributions} Our first contribution is to identify under what conditions the DPC scheme strictly outperforms the case of no power control\footnote{Throughout this paper, no power control means that all transmitters always use the same constant power for transmission.}. A fundamental constraint on the discrete power levels and their selection probabilities is then derived, which ensures that such a DPC design leads to strictly better performance in terms of the outage probability and mean signal-to-interference ratio (SIR). This constraint is built on the geometric conservation property of a homogeneous PPP, leading to a better outage-free spatial reuse factor, which measures how many transmitters per unit area, on average, can transmit simultaneously without outage. Motivated by the fact that the received signal power heavily depends on the transmission distance, an $N$-layer DPC scheme is proposed for a cluster that is tessellated into $N$-layer annuli, where a suitable discrete power level is chosen from an $N$-tuple power set according to the layer in which the intended receiver is located. To evaluate the throughput performance of this DPC scheme, the metric of transmission capacity (TC) proposed in~\cite{SWXYJGAGDV05,SWJGANJ10} is used after appropriate modification. Our second contribution is to characterize the outage probability of each layer in a cluster with the proposed $N$-layer power control and then use it to show that the proposed scheme is essentially ``location-dependent'' when it achieves the upper and lower bounds on the maximum contention intensity. This location-dependent characteristic enables the $N$-layer discrete power control to achieve power saving, interference reduction, and throughput fairness.
Since the bounds on the maximum contention intensity are explicitly established, the corresponding TC can also be easily bounded, which indicates how the $N$-layer discrete power control can monotonically increase TC if it is properly devised. Analytical and simulation results both show that the bounds on the achievable outage probability and spatial reuse factor are better than those of other existing power control schemes. Our third contribution is outlined as follows. The location-dependent characteristic of the $N$-layer DPC scheme can be generalized to a power control scaling law, i.e., for an intended receiver located at the $i$th layer of a cluster, the transmit power $P_i\in\Theta\left(\eta^{-\frac{\alpha}{2}}_i\right)$ should be used, where $\alpha>2$ is the path loss exponent and $\eta_i$ is the probability of selecting power $P_i$, which usually depends on the area of the $i$th layer. This power control scaling law can not only balance the interference across $N$ different layers, but also reveal how the upper bound of $N$ and the spatial reuse factor change with $\eta_i$. With this power control scaling law, some optimization problems, such as minimizing the sum power over all $P_i$'s or minimizing the mean outage probability over $N$, can be easily formulated. Finally, two examples with different distributions of intended receivers are discussed, whose simulation results show that the proposed $N$-layer DPC can achieve a significantly higher TC than other power control schemes. \section{System Model and Preliminaries}\label{sec:model} \subsection{Poisson-Clustered Network Model and Geometric Conservation Property} In this paper, we consider an infinitely large wireless ad hoc network where transmitters are independently and randomly distributed on the plane $\mathbb{R}^2$, forming a homogeneous PPP $\Phi$ of intensity $\lambda$, which gives the average number of transmitting nodes per unit area.
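As a quick illustration of this model (a minimal sketch, not part of the paper; the window size and intensity in the usage below are arbitrary), a homogeneous PPP of intensity $\lambda$ can be sampled in a finite square window by first drawing a Poisson-distributed point count and then placing the points uniformly:

```python
import math
import random

def sample_ppp(lam, side, rng):
    """Sample a homogeneous PPP of intensity lam on [0, side] x [0, side].

    The point count is Poisson(lam * side^2); conditioned on the count,
    the points are i.i.d. uniform in the window."""
    mean = lam * side * side
    # Knuth's inversion sampler for the Poisson count (fine for moderate means)
    threshold, n, p = math.exp(-mean), 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            break
        n += 1
    return [(rng.uniform(0.0, side), rng.uniform(0.0, side)) for _ in range(n)]
```

Averaging the point count over many draws recovers $\lambda\cdot\text{side}^2$, consistent with $\lambda$ being the average number of nodes per unit area.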
Each transmitter can have a random number of candidate receivers that are uniformly and randomly distributed in a cluster with a common distribution, independent of the transmitters' spatial distribution. Hence, all the nodes in the network can be viewed to form a Poisson cluster process (PCP) -- A parent (transmitter) node is associated with some daughter (receiver) nodes\footnote{Note that each cluster could contain other transmitters and unintended receivers in addition to its own transmitter and intended receivers.}. The marked transmitter point process $\Phi$ can be expressed as \begin{equation} \Phi\defn\{(X_i,P_i,H_i): X_i\in\mathcal{B}_i, P_i,H_i\geq 0, i\in\mathbb{N}\}, \end{equation} where $X_i$ denotes transmitter $i$ and its location, $P_i$ represents the transmit power of $X_i$, $\mathcal{B}_i$ is the cluster that $X_i$ belongs to, and $H_i$ is the fading channel gain from $X_i$ to its selected receiver $Y_i\in\mathcal{B}_i$. Also, the network is assumed to be interference-limited and operating with a slotted Aloha protocol\footnote{With the slotted Aloha protocol, the interference received by each receiver in the network is merely generated by the transmitting nodes in the current time slot. The interference generated in the previous time slot is not received.}. A communication link from one node to another in the network experiences path loss and Rayleigh fading. The fading channel power gains of all links are i.i.d. exponential random variables with unit mean and variance. Without loss of generality, transmitter $X_0$ is assumed to be located at the origin and it selects one of the candidate receivers in cluster $\mathcal{B}_0$ for transmission. Thus, we call node $X_0$ the reference transmitter and perform the analysis by conditioning on its receiver (called reference receiver). 
According to the Slivnyak theorem \cite{Stoyan,Baccllibook}, the statistics of signal reception seen by the reference receiver are the same as those seen by the receiver of any other transmitter-receiver pair. The signal-to-interference ratio (SIR) at the reference receiver can be written as \begin{equation}\label{Eqn:DefnSIR} \mathrm{SIR}_0 (P_0)=\frac{P_0H_0}{R^{\alpha}I_0}, \end{equation} where $R$ is the (random) distance from transmitter $X_0$ to its selected receiver $Y_0$, $(\mathtt{distance})^{-\alpha}$ is the path loss model\footnote{This path-loss model is unreasonable for the near-field nodes with $\|X\|<1$; but we still use it for $\|X\|<1$ since it only makes a negligible effect on our outage probability results \cite{FBBBPM06,SWJGANJ07}.} with path loss exponent $\alpha>2$, and $I_0$ denotes the interference at $Y_0$ given by $$I_0=\sum_{ X_k \in \Phi \setminus X_0 }{P_k H_{k0} \|X_k - Y_0\|^{-\alpha}},$$ where $\|X_k-Y_0\|$ is the Euclidean distance between interfering transmitter $X_k$ and $Y_0$, $H_{k0}$ is the fading gain from $X_k$ to $Y_0$, and $P_k$ denotes the transmit power of $X_k$. In order to have a successful signal reception at receiver $Y_0$, the $\text{SIR}$ has to be no less than a predesignated threshold $\beta$; otherwise an outage occurs. Without loss of generality, the outage probability for transmissions using power $P_0$ is thus defined as $\mathbb{P}[\mathrm{SIR}_0(P_0)<\beta]$. A homogeneous PPP has a nice conservation property, which describes how uniformly scaling the node positions changes the node intensity~\cite{Stoyan}. Here we give the conservation property in the Poisson cluster process (PCP) context with the following lemma.
\begin{lemma}[The Geometric Conservation Property of a PCP] \label{Lam:ConserversionPropertyPCP} Assume that for each transmitter, the average number of intended receivers in the cluster is $\omega$ and thus all the nodes in the network also form a homogeneous PPP $\Pi$ with intensity $\omega\lambda$. Let $\mathbf{T}: \mathbb{R}^2 \rightarrow \mathbb{R}^2$ be a non-singular transformation matrix in $\mathbb{R}^2$. Then $\mathbf{T}(\Pi)\defn\{\mathbf{T}Z_i:Z_i\in\Pi\}$ is also a homogeneous PPP with intensity $\omega\lambda/\sqrt{\det(\mathbf{T}^{\emph{\textsf{T}}}\mathbf{T})}$. \end{lemma} \begin{IEEEproof} The void probability of a point process in a bounded Borel set $\mathcal{A}\subset \mathbb{R}^2$ is the probability that $\mathcal{A}$ does not contain any points of the process. Since $\Pi$ is a homogeneous PPP, its void probability is given by \begin{equation}\label{Eqn:VoidProb} \mathbb{P}[\Pi(\mathcal{A})=0]=\exp(-\omega\lambda \mu(\mathcal{A})), \end{equation} where $\mu(\cdot)$ is a Lebesgue measure in $\mathbb{R}^2$. Since the void probability completely characterizes the statistics of a PPP, we only need to show that the void probability of $\mathbf{T}(\Pi(\mathcal{A}))$ is given by \begin{equation} \mathbb{P}[\mathbf{T}(\Pi(\mathcal{A}))=0]=\exp\left(-\omega\lambda/\sqrt{\det(\mathbf{T}^{\textsf{T}}\mathbf{T})}\mu(\mathbf{T}(\mathcal{A}))\right). \end{equation} Recall the result from vector calculus that the absolute value of the determinant of a matrix is equal to the volume of the parallelepiped that is spanned by the vectors of the matrix. Therefore, the $2$-dimensional volume of $\mathbf{T}(\mathcal{A})$ is given by $\mu(\mathbf{T}(\mathcal{A}))=\sqrt{\det(\mathbf{T}^{\textsf{T}}\mathbf{T})}\mu(\mathcal{A})$. 
Suppose $\mathbf{T}(\Pi)$ has intensity $\lambda^{\dag}$ and its void probability within the volume of $\mathbf{T}(\mathcal{A})$ is \begin{align*} \mathbb{P}[\mathbf{T}(\Pi(\mathcal{A}))=0]&=\mathbb{P}[\Pi(\mathcal{A})=0]\\ &=\exp\left(-\lambda^{\dag}\sqrt{\det(\mathbf{T}^{\textsf{T}}\mathbf{T})}\mu(\mathcal{A})\right). \end{align*} Then by comparing the above equation with \eqref{Eqn:VoidProb}, it follows that $\lambda^{\dag} = \omega\lambda/\sqrt{\det(\mathbf{T}^{\textsf{T}}\mathbf{T})}$. \end{IEEEproof} As a special case, if $\mathbf{T}=\sqrt{a}\mathbf{I}_2$, where $\mathbf{I}_2$ is the $2\times 2$ identity matrix and $a>0$ is a constant, the intensity of $\mathbf{T}(\Pi)$ changes to $\frac{\omega\lambda}{a}$. Lemma \ref{Lam:ConserversionPropertyPCP} can be used to eliminate the inconsistency in the distribution of the interference induced by the multiple transmit power levels adopted in the network, as shown in the following subsection. \subsection{Why Discrete Power Control?} As aforementioned, discrete power control is preferable for implementation in practice. There are also two main motivations for adopting discrete power control even from a theoretical point of view. First of all, we show that if a transmitter can control its discrete powers appropriately, its receiver is able to achieve a lower outage probability compared with no power control. \begin{theorem}\label{Thm:AvgSIRInq} Consider a special case in the PCP-based network where each cluster contains one transmitter-receiver pair. Each transmitter has $N$ constant power options from the discrete power control set $\mathcal{P}\defn\{P_1, P_2, \cdots, P_N\}$. Suppose each transmitter independently selects its own transmit power and the probability of selecting $P_j\in\mathcal{P}$ is $\eta_j$.
The average SIR achieved by transmitters using $N$ discrete powers is strictly greater than that achieved by transmitters using a single constant power if \begin{equation}\label{Eqn:AvgSIRIneq} \sum_{j=1}^{N}\eta_j^{\frac{\alpha}{2}}\left(\frac{P_j}{P_i}\right)< \frac{1}{\rho_0},\quad \forall\, i\in\{1, 2, \ldots, N\}, \end{equation} where $\rho_0\defn \mathbb{E}[I_0(1)]\mathbb{E}[I^{-1}_0(1)]\geq 1$ is a function of intensity $\lambda$ and path loss exponent $\alpha$, and $I_0(\nu)\defn \nu (\sum_{X_i\in\Phi\setminus X_0}H_{i0}\|X_i-Y_0\|^{-\alpha})$ denotes the interference at $Y_0$ induced by all interferers in $\Phi$ using transmit power $\nu$. Most importantly, condition \eqref{Eqn:AvgSIRIneq} also ensures that the outage probability achieved by transmitters using $N$ discrete powers is strictly smaller than that achieved by transmitters using a single constant power. \end{theorem} \begin{IEEEproof} See Appendix \ref{App:ProofAvgSIRInq}. \end{IEEEproof} \begin{remark} The inequality in \eqref{Eqn:AvgSIRIneq} ensures that discrete power control has a better performance in terms of the average SIR and outage probability than no power control. It can be relaxed to $\sum_{j=1}^{N}\eta_j^{\frac{\alpha}{2}}\left(\frac{P_j}{P_i}\right)< 1$ if we only require a lower outage probability (i.e., no SIR requirement). \end{remark} \begin{remark} The average of the interference $I_0$, $\mathbb{E}[I_0]$, is unbounded since the path loss model $\|\cdot\|^{-\alpha}$ is not well-defined for very nearby interferers and diverges at distance zero. To obtain a bounded $\rho_0$, we define $\mathbb{E}[I_0(\nu)]\defn 2\pi \lambda\nu \int_1^{\infty} r^{1-\alpha}\dif r=\frac{2\pi\lambda\nu}{\alpha-2}$, which is obtained by applying the Campbell theorem \cite{Stoyan} and ignoring the interference contributed by the interferers within the disc with a center at the origin and unit radius.
\end{remark} Theorem \ref{Thm:AvgSIRInq} indicates that using multiple discrete power levels outperforms using no power control if the inequality constraint in \eqref{Eqn:AvgSIRIneq} is satisfied. This is due to the fact that the inequality in \eqref{Eqn:AvgSIRIneq} essentially ensures that the interference generated by multiple transmit powers is not greater than that generated by a single power. In other words, if we use several discrete transmit powers in the network, a lower outage probability can be attained if those discrete power values and the associated probabilities are properly devised to satisfy \eqref{Eqn:AvgSIRIneq}. For example, if the power control set $\mathcal{P}=\{P_1,P_2\}$ has two elements, with $P_0$, $P_1$ and $P_2$ distinct, \eqref{Eqn:AvgSIRIneq} can be simplified as \begin{equation}\label{Eqn:TwoPowRatIneq} \eta_1^{-\frac{\alpha}{2}}\left(\frac{1}{\rho_0}-\eta_2^{\frac{\alpha}{2}}\right)> \frac{P_1}{P_2} > \eta_2^{\frac{\alpha}{2}}\left(\frac{1}{\rho_0}-\eta_1^{\frac{\alpha}{2}}\right)^{-1}. \end{equation} This result is illustrated in Fig. \ref{fig:RegionDPC} for $\alpha=3.5$ and $\rho_0\approx 1.29$, where the shaded region represents where two discrete powers strictly outperform a single power in terms of outage. Fig. \ref{fig:OutProbTwoDisPow} illustrates the two outage probabilities and the average outage probability for $R=20$m, $\alpha=3.5$, $\beta=1$, $\eta_1=0.4$, $\eta_2=0.6$, and power ratio $\frac{P_1}{P_2}=1.5$ satisfying \eqref{Eqn:TwoPowRatIneq}, where the two outage probabilities and the average outage probability are $\mathbb{P}[\mathrm{SIR}_0(P_1)<\beta]$, $\mathbb{P}[\mathrm{SIR}_0(P_2)<\beta]$ and $\eta_1\mathbb{P}[\mathrm{SIR}_0(P_1)<\beta]+\eta_2\mathbb{P}[\mathrm{SIR}_0(P_2)<\beta]$, respectively. Note that the simulation result for the single power case does not depend on what constant power is used, since the SIR in \eqref{Eqn:DefnSIR} does not depend on the transmit power in the no power control scheme.
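The worked example above can be checked mechanically. Below is a minimal sketch (not from the paper) of the sufficient condition in Theorem 1: it verifies $\sum_{j}\eta_j^{\alpha/2}(P_j/P_i)<1/\rho_0$ for every $i$, with $\rho_0$ supplied by the caller, since $\rho_0$ depends on $\lambda$ and $\alpha$ through the interference statistics:

```python
def dpc_outperforms(powers, probs, alpha, rho0):
    """Check the sufficient condition of Theorem 1:
    sum_j probs[j]**(alpha/2) * (powers[j] / powers[i]) < 1/rho0 for all i,
    where rho0 = E[I_0(1)] * E[I_0(1)^{-1}] >= 1."""
    assert abs(sum(probs) - 1.0) < 1e-9, "selection probabilities must sum to 1"
    for p_i in powers:
        total = sum(eta ** (alpha / 2.0) * (p_j / p_i)
                    for eta, p_j in zip(probs, powers))
        if total >= 1.0 / rho0:
            return False
    return True
```

With the text's example values ($\alpha=3.5$, $\rho_0\approx1.29$, $\eta_1=0.4$, $\eta_2=0.6$, $P_1/P_2=1.5$) the condition holds, while a badly chosen ratio such as $P_1/P_2=0.2$ violates it.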
As we see, all the outage probabilities with two discrete powers are (much) lower than that with a single power. Moreover, the inequality in \eqref{Eqn:AvgSIRIneq} makes the average SIR with DPC higher than the average SIR without power control, which results in a higher channel capacity bound on average. \begin{figure}[t!] \centering \includegraphics[width=3.6in,height=2.6in]{PowRatReg.eps} \caption{The available region of $\frac{P_1}{P_2}$ for $\alpha=3.5$, $\lambda=0.0005$ and $\mathbb{E}[I_0(1)]\mathbb{E}[I_0^{-1}(1)]\approx1.29$. Two discrete powers outperform a single constant power in terms of the average SIR and outage probability if their ratio is within the colored region.} \label{fig:RegionDPC} \end{figure} \begin{figure}[t!] \centering \includegraphics[width=3.6in,height=2.6in]{OutProbTwoDisPow.eps} \caption{The outage probabilities of using two discrete powers and a single power for $\alpha=3.5$, $R=20$m, $\beta=1$, $\eta_1=0.4$ and $\eta_2=0.6$. The ratio of the two discrete powers is $\frac{P_1}{P_2}=1.5$.} \label{fig:OutProbTwoDisPow} \end{figure} Another interesting observation that can be drawn from \eqref{Eqn:AvgSIRIneq} is that it reveals a simple method to design those discrete power values. For example, we can consider $P_i\in\Theta\left(\eta_i^{-\frac{\alpha}{2}}\right)$, which results in $\min_i\{P_i\}\in\Omega(N)$, that is, $\max_i\{\eta_i\}\in O\left( N^{-\frac{2}{\alpha}}\right)$ \footnote{Throughout this paper, we slightly relax standard asymptotic notations to denote our scaling results: $O(\cdot)$, $\Omega(\cdot)$ and $\Theta(\cdot)$ correspond to (asymptotic) upper, lower, and tight bounds, respectively.
For instance, given two real-valued functions $f(x)$ and $g(x)$, we use $f(x)\in\Theta(g(x))$ to mean that there exist two positive constants $c_1$ and $c_2$ such that $c_1g(x)\leq f(x)\leq c_2 g(x)$ for all $x\in\mathbb{R}$, i.e., $x$ does not have to become infinitely large or small for $c_1g(x)\leq f(x)\leq c_2 g(x)$ to hold.}. Thus, the minimum required power can be determined by $N$ and $\lambda$, and we are able to know the minimum number of discrete powers needed once the node intensity and the power $\min_i\{P_i\}$ are known. Usually, selecting the transmit power depends on the channel gain condition, such that the probabilities $\{\eta_i\}$ are related to some \textit{uncontrollable} network parameters such as the distributions of channel fading and node locations. This implies that the discrete power levels can be specified in terms of certain network parameters. From a \textit{spatial reuse} point of view, we can also explain why using discrete power control can do better. Since the outage probability can be written as $\mathbb{P}\left[(P_0H_0/\beta I_0)^{\frac{1}{\alpha}}<R\right]$, there is no outage once the transmission distance is less than or equal to $(P_0H_0/\beta I_0)^{\frac{1}{\alpha}}$, which is called the \textit{maximum transmission distance without outage}. Motivated by the similar concept of spatial reuse defined in \cite{FBBBPM06} and the maximum transmission distance without outage, we define the outage-free spatial reuse factor as follows.
\begin{definition}[Outage-Free Spatial Reuse Factor] The (outage-free) spatial reuse factor $\delta_0$ for transmitter $X_0$ with power $P_0$ is defined by \begin{equation}\label{Eqn:DefnSpatialReuse} \delta_0\defn \dfrac{\mathbb{E}\left[\pi (P_0H_0/\beta I_0)^{\frac{2}{\alpha}}\lambda\right]}{\mathbb{E}[\pi D^2_0 \lambda]}=\pi\lambda\,\mathbb{E}\left[\left(\frac{P_0H_0}{I_0\beta}\right)^{\frac{2}{\alpha}}\right], \end{equation} where $D_0$ is the nearest distance between two transmitters and its pdf is $f_{D_0}(x)=2\pi\lambda x e^{-\pi\lambda x^2}$ and $\mathbb{E}[D_0^2]=\frac{1}{\pi\lambda}$. \end{definition} \noindent According to \eqref{Eqn:DefnSpatialReuse}, the physical meaning of the spatial reuse factor can be interpreted as the average number of transmitting nodes that can coexist in the defined maximum outage-free (circular) transmission area. The larger the spatial reuse factor is, the higher the effective network throughput per unit area is. Note that for the case of no power control, $\delta_0$ becomes \begin{equation} \delta_0^{\textrm{np}}=\pi\lambda \Gamma\left(1+\frac{2}{\alpha}\right)\beta^{-\frac{2}{\alpha}}\mathbb{E}\left[I_0^{-\frac{2}{\alpha}}(1)\right], \end{equation} where $\Gamma(x)=\int_{0}^{\infty}t^{x-1}e^{-t}\dif t$ is the Gamma function and $\mathbb{E}\left[I^{-\frac{2}{\alpha}}_0(1)\right]$ is lower-bounded by $(\mathbb{E}[I_0(1)])^{-\frac{2}{\alpha}}=\left(\frac{\alpha-2}{2\pi\lambda}\right)^{\frac{2}{\alpha}}$. That means $\mathbb{E}\left[I^{-\frac{2}{\alpha}}_0(1)\right]\in\Omega(\lambda^{-\frac{2}{\alpha}})$ and thus the spatial reuse factor for no power control is $\delta^{\textrm{np}}_0=\frac{\delta_0}{P_0} \in\Omega(\lambda^{1-\frac{2}{\alpha}})$. Thus $\delta^{\textrm{np}}_0$ increases when $\lambda$ increases, which means the shrinking speed of the average outage-free area is slower than that of the average area without any transmitters. 
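The lower bound $\mathbb{E}\left[I_0^{-\frac{2}{\alpha}}(1)\right]\geq(\mathbb{E}[I_0(1)])^{-\frac{2}{\alpha}}$ used above, together with the Campbell mean $\mathbb{E}[I_0(1)]=\frac{2\pi\lambda}{\alpha-2}$, can be checked with a small Monte Carlo sketch (illustrative, not from the paper; interferers are truncated to $1<r<L$, matching the text's convention of ignoring near-field interferers inside the unit disc):

```python
import math
import random

def _poisson(rng, mean):
    # Knuth's inversion sampler; adequate for the moderate means used here
    threshold, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def interference_samples(lam, alpha, L=100.0, n_samples=40000, seed=9):
    """Draw samples of I_0(1): the aggregate interference at the origin from
    a PPP of intensity lam on the annulus 1 < r < L, with unit transmit
    power, i.i.d. unit-mean exponential fading, and path loss r^(-alpha)."""
    rng = random.Random(seed)
    area = math.pi * (L * L - 1.0)
    samples = []
    for _ in range(n_samples):
        total = 0.0
        for _ in range(_poisson(rng, lam * area)):
            # uniform point in the annulus: r^2 is uniform on (1, L^2)
            r = math.sqrt(1.0 + rng.random() * (L * L - 1.0))
            total += rng.expovariate(1.0) * r ** (-alpha)
        samples.append(total)
    return samples
```

For $\lambda=5\times10^{-4}$ and $\alpha=3.5$, the sample mean approaches $2\pi\lambda/(\alpha-2)\approx2.1\times10^{-3}$, and the sample mean of $I^{-2/\alpha}$ (over nonzero samples) exceeds $(\mathbb{E}[I])^{-2/\alpha}$, as Jensen's inequality requires.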
In order to increase the spatial reuse factor, we can appropriately control transmit power. The following lemma will show how the spatial reuse factor under a DPC can be increased. \begin{lemma}\label{Lem:SpatialReuse} In a Poisson-distributed wireless network with transmitter intensity $\lambda$, each transmitter independently selects power $P_i$ from power set $\mathcal{P}=\{P_1, P_2, \ldots, P_N\}$ with probability $\eta_i$. If all discrete powers and their corresponding selected probabilities satisfy \eqref{Eqn:AvgSIRIneq}, the spatial reuse factor induced by transmitters with discrete power $P_i$ is $\delta^{\textrm{dp}}_{0_i}\defn \mathbb{E}[(H_0/\beta(I_0/P_i))^{\frac{2}{\alpha}}]/\mathbb{E}[D_0^2]$ that is greater than $\delta^{\textrm{np}}_0$. The average spatial reuse factor with discrete power control $\mathcal{P}$ is defined as \begin{equation}\label{Eqn:AvgSpaReuFac} \delta^{\textrm{dp}}_0\defn \sum_{i=1}^{N}\eta_i\delta^{\textrm{dp}}_{0_i}, \end{equation} and thus $\delta^{\textrm{dp}}_0>\delta_0^{\textrm{np}}$ since $\delta^{\textrm{dp}}_{0_i}>\delta_0^{\textrm{np}}$ for all $i$. \end{lemma} \begin{IEEEproof} First consider the case of no power control and the maximum transmission distance without outage in this case, which is $(H_0/\beta I_0(1))^{\frac{1}{\alpha}}$. By definition, the spatial reuse factor $\delta^{\textrm{np}}_0$ is given by \begin{equation} \delta^{\textrm{np}}_0=\lambda\pi\Gamma\left(1+\frac{2}{\alpha}\right)\beta^{-\frac{2}{\alpha}}\mathbb{E}\left[I_0^{-\frac{2}{\alpha}}(1)\right]. 
\end{equation} Now consider that transmitter $X_j\in\Phi$ uses discrete power $P_j\in\mathcal{P}$ with probability $\eta_j$ and thus the receiver $Y_0$ of transmitter $X_0$ using power $P_i$ experiences the following interference normalized by $P_i$ \begin{align*} \frac{I_0}{P_i}&= \sum_{j=1}^{N}\frac{P_j}{P_i}\sum_{X_k\in\Phi_j}H_{k0}\|X_k\|^{-\alpha}\stackrel{d}{=} \sum_{j=1}^{N}\sum_{X_k\in\Phi'_j}H_{k0}\|X_k\|^{-\alpha}\\ &\stackrel{d}{=}\sum_{j=1}^{N}\eta_j^{\frac{\alpha}{2}}\left(\frac{P_j}{P_i}\right)\sum_{X_m\in\hat{\Phi}_j}H_{m0}\|X_m\|^{-\alpha}, \end{align*} where $\Phi'_j$ is a PPP of intensity $\lambda\eta_j(P_j/P_i)^{\frac{2}{\alpha}}$ and $\hat{\Phi}_j$ is a PPP of intensity $\lambda$. Whereas the spatial reuse factor $\delta_{0_i}$ induced by $X_0$ with power $P_i$ can be equivalently defined as \begin{align*} \delta^{\textrm{dp}}_{0_i}&\defn \mathbb{E}\left[\left(\frac{\beta \sum_{j=1}^{N}\sum_{X_k\in\Phi'_j} H_{k0}\|X_k\|^{-\alpha}}{H_0}\right)^{-\frac{2}{\alpha}}\right]/\left(\mathbb{E}[D^2_0]\right)\\ &\geq\delta^{\textrm{np}}_0\left[\sum_{j=1}^{N}\eta_j^{\frac{\alpha}{2}}\left(\frac{P_j}{P_i}\right)\right]^{-\frac{2}{\alpha}}\frac{(\mathbb{E}[I_0(1)])^{-\frac{2}{\alpha}}}{\mathbb{E}\left[I^{-\frac{2}{\alpha}}_0(1)\right]} \end{align*} \begin{align} &= \delta^{\textrm{np}}_0\left[\rho_0\sum_{j=1}^{N}\eta_j^{\frac{\alpha}{2}}\left(\frac{P_j}{P_i}\right)\right]^{-\frac{2}{\alpha}}\frac{(\mathbb{E}[I^{-1}_0(1)])^{\frac{2}{\alpha}}}{\mathbb{E}\left[I^{-\frac{2}{\alpha}}_0(1)\right]}.\label{Eqn:LowBouSpaReuPowi} \end{align} Since $(\mathbb{E}[I^{-1}_0(1)])^{\frac{2}{\alpha}}\geq \mathbb{E}\left[I^{-\frac{2}{\alpha}}_0(1)\right]$, we can make sure $\delta^{\textrm{dp}}_{0_i}> \delta_0^{\textrm{np}}$ whenever $\rho_0\sum_{j=1}^{N}\eta_j^{\frac{\alpha}{2}}\left(\frac{P_j}{P_i}\right)<1$. Thus it follows that $\delta^{\textrm{dp}}_{0_i}>\delta^{\textrm{np}}_0$ if the condition in \eqref{Eqn:AvgSIRIneq} is satisfied. 
Substituting the above result for $\delta^{\textrm{dp}}_{0_i}$ into the definition of $\delta_0^{\textrm{dp}}$ leads to \eqref{Eqn:AvgSpaReuFac}. \end{IEEEproof} For spatial reuse, the inequality in \eqref{Eqn:AvgSIRIneq} thus ensures that the discrete powers and their corresponding probabilities effectively reduce the scaling of the transmitter intensity. This point can be further illustrated by taking a closer look at the average spatial reuse factor $\delta^{\textrm{dp}}_0$ in \eqref{Eqn:LowBouSpaReuPowi} via the following form: $$\delta^{\textrm{dp}}_0> \lambda\pi\Gamma\left(1+\frac{2}{\alpha}\right)\beta^{-\frac{2}{\alpha}}\mathbb{E}\left[\tilde{I}_0^{-\frac{2}{\alpha}}(1)\right],$$ where $\tilde{I}_0(1)$ is the interference generated by a transmitter PPP with unit constant transmit power and intensity $\lambda \left[\rho_0\sum_{j=1}^{N}\eta_j^{\frac{\alpha}{2}}\left(\frac{P_j}{P_i}\right)\right]^{\frac{2}{\alpha}}\mathbb{E}\left[I^{-\frac{2}{\alpha}}_0(1)\right]/(\mathbb{E}[I^{-1}_0(1)])^{\frac{2}{\alpha}}$, which is smaller than $\lambda$. Hence, the average number of coexisting transmitters without outage per unit area is increased. \begin{figure}[t!] \centering \includegraphics[width=3.6in,height=2.75in]{SpaReuFac.eps} \caption{The spatial reuse factors for the discrete power control and no power control schemes. The network parameters for simulation are $\alpha=3.5$, $\frac{P_1}{P_2}=1.5$, $\eta_1=0.4$ and $\eta_2=0.6$.} \label{fig:SpaResFac} \end{figure} Although the spatial reuse factor characterizes how effectively space is used for simultaneous successful transmissions, it fails to characterize the temporal transmission efficiency of a communication link. Reducing the outage probability certainly increases the temporal transmission efficiency, since it results in fewer retransmissions.
Surprisingly, here we see that the condition in \eqref{Eqn:AvgSIRIneq} is able to guarantee a better spatial reuse factor as well as a lower outage probability. That is, both spatial and temporal transmission efficiencies can be enhanced if all discrete powers and their corresponding probabilities satisfy \eqref{Eqn:AvgSIRIneq}. Therefore, \eqref{Eqn:AvgSIRIneq} is the fundamental requirement to ensure that discrete power control is strictly superior to no power control. Simulation results showing that the spatial reuse factors with two discrete powers are superior to the spatial reuse factor with a single power are given in Fig. \ref{fig:SpaResFac}, assuming $\alpha=3.5$, $\frac{P_1}{P_2}=1.5$, $\eta_1=0.4$ and $\eta_2=0.6$. Finally, \eqref{Eqn:AvgSIRIneq} also suggests a simple discrete power design approach. For example, we can adopt $P_i\in\Theta\left(\eta_i^{-\frac{\alpha}{2}}\right)$ as the power design in the case of reducing outage probability, and then \eqref{Eqn:AvgSIRIneq} gives $\min_i\{P_i\}\in \Omega(N^{\frac{\alpha}{2}})$, i.e., $\max_i\{\eta_i\}\in O\left(\frac{1}{N}\right)$. The required $N$ and discrete powers $\{P_i\}$ can be properly chosen once the probabilities $\{\eta_i\}$ related to network parameters are determined. In Section \ref{Sec:NlayerDPC}, we will show that the DPC scaling law $P_i\in\Theta\left(\eta_i^{-\frac{\alpha}{2}}\right)$ is a general expression for increasing TC with $N$-layer DPC. \section{$N$-Layer Discrete Power Control}\label{Sec:NlayerDPC} Since signal power decays heavily over the transmission distance, it is natural to consider an $N$-layer DPC scheme that is devised based on the transmission distance to the intended receiver in a cluster, i.e., we consider a cluster tessellated into $N$-layer annuli, and each time a transmitter selects one receiver at a certain layer of the cluster for service.
If the selected receiver is at the $i$th layer, power $P_i$ is used for transmission at transmitter $X_0$, where the outage probability at receiver $Y_0$ is given by \begin{equation}\label{eq:def+OP} q_i \defn \mathbb{P}[\mathrm{SIR}_0(P_i)<\beta],\quad i\in[1,\ldots,N]. \end{equation} This outage probability for a receiver located at the $i$th layer can be used to define the following transmission capacity in the $N$-layer DPC context. \begin{definition}[Transmission Capacity with $N$-layer DPC] The transmission capacity for the $N$-layer DPC scheme is defined by \begin{equation}\label{eq:DefnTC} C^{\textrm{dp}}_{\epsilon} \defn \gamma\, \lambda^{\textrm{dp}}_{\epsilon}\sum_{i=1}^N \eta_i \left[1-q_i(\lambda^{\textrm{dp}}_{\epsilon})\right], \end{equation} where $\eta_i$ denotes the fraction of intended receivers being served in the $i$th layer, $\gamma$ is the transmission rate per unit bandwidth of each communication link, and $\lambda^{\textrm{dp}}_{\epsilon}$, called the maximum contention intensity, is given by \begin{equation} \lambda^{\textrm{dp}}_{\epsilon}\defn \sup\left\{\lambda : \max_{i\in\{1, 2, \cdots, N\}} q_i(\lambda)\leq \epsilon\right\}, \end{equation} where $\epsilon$ denotes the upper bound on the outage probability and is usually a small number. \end{definition} \noindent Note that the transmission capacity defined in \eqref{eq:DefnTC} represents the area spectrum efficiency of the $N$-layer DPC, which is different from and actually a generalized form of the transmission capacity originally proposed in \cite{SWJGANJ07} for the point-to-point communication scenario. It reduces to the original one when each cluster only contains one intended receiver and there is no power control. Suppose that the distance $R$ from a transmitter to its intended receiver in a cluster is a random variable whose probability density function (pdf) and cumulative distribution function (cdf) are denoted by $f_{R}(r)$ and $F_R(r)$, respectively.
Our $N$-layer DPC scheme uses a transmit power based on the layer at which the selected intended receiver is located. Let the maximum transmission distance in a cluster $\mathcal{B}$ be quantized into $N$ intervals, i.e., $\{\mathcal{L}_i, i=1,2,\ldots, N\}$, where $\mathcal{L}_i$ is the $i$th interval with $\bigcup_{i=1}^N \mathcal{L}_i \subseteq \mathcal{B}$, and receivers are at layer $i$ if the distances from their transmitter are in interval $\mathcal{L}_i$. A transmitter transmits to its layer-$i$ receivers with the $i$th transmit power chosen from power set $\mathcal{P}=\{P_1, P_2, \ldots, P_N\}$. Then the average outage probability of the layer-$i$ receivers is given in the following theorem. \begin{theorem}\label{Thm:OutProbLayi} The average outage probability at the layer-$i$ receivers is given by \begin{eqnarray} \label{Eqn:OutProbLayer_i} q_i= 1-\mathbb{E}\left[e^{-\lambda T_i\beta^{\frac{2}{\alpha}}R^2}\bigg|R\in \mathcal{L}_i\right], \end{eqnarray} where $T_i = \kappa_{\alpha}\sum_{j=1}^{N}{\eta_j \left(\frac{P_j}{P_i}\right)^{\frac{2}{\alpha}}}$ and $\eta_i =\mathbb{P}[R\in\mathcal{L}_i]$. \end{theorem} \begin{IEEEproof} See Appendix \ref{App:ProofOutProbLayi}. \end{IEEEproof} For a general distance distribution, the result in \eqref{Eqn:OutProbLayer_i} cannot be further reduced to a closed-form expression. As a special case, consider the scenario in which receivers are uniformly distributed around their transmitter in a circular cluster of radius $s$. In this case, the cdf and pdf of distance $R$ become \begin{eqnarray*} F_R(r) = \frac{r^2}{s^{2}} \quad\text{and}\quad f_R(r) = \frac{2r}{s^2}, \text{ respectively}.
\end{eqnarray*} Substituting the above $F_R(r)$ and $f_R(r)$ into \eqref{Eqn:OutProbLayer_i} and applying $\eta_i= \mathbb{P}[R\in\mathcal{L}_i]=\frac{1}{s^2}[(\sup(\mathcal{L}_i))^2-(\inf(\mathcal{L}_i))^2]$, the average outage probability for the layer-$i$ receivers becomes \begin{eqnarray}\label{Eqn:OutProbUniDisR} q_i = 1- \frac{ e^{-\lambda T_i\beta^{\frac{2}{\alpha}}(\inf(\mathcal{L}_i))^2 } - e^{-\lambda T_i\beta^{\frac{2}{\alpha}}(\sup(\mathcal{L}_i))^2}}{ \lambda T_i\beta^{\frac{2}{\alpha}}[(\sup(\mathcal{L}_i))^2-(\inf(\mathcal{L}_i))^2]}, \end{eqnarray} which can be approximated by \begin{equation}\label{Eqn:OutProbUniDisRApp} q_i \approx \frac{1}{2}\lambda T_i\beta^{\frac{2}{\alpha}}\left[(\sup(\mathcal{L}_i))^2+(\inf(\mathcal{L}_i))^2\right] \end{equation} when $\lambda$ is small. In addition, an important implication that can be grasped from Theorem \ref{Thm:OutProbLayi} is that the optimal power control that maximizes $\lambda_{\epsilon}^{\mathrm{dp}}$ depends on the distribution of the receiver distance $R$. The following theorem shows that there exists a (location-dependent) $N$-layer DPC scheme that could achieve (tight) upper and lower bounds on $\lambda_{\epsilon}^{\textrm{dp}}$. \begin{theorem}\label{Thm:PowerAlloBoundLambda} If all intended receivers in each cluster of radius $s$ are uniformly distributed and the power ratio of $P_j$ to $P_i$ is set as \begin{eqnarray}\label{eq:cont+lower+power} \frac{P_j}{P_i} = \left[\frac{(\sup(\mathcal{L}_j))^2+(\inf(\mathcal{L}_j))^2}{(\sup(\mathcal{L}_i))^2+(\inf(\mathcal{L}_i))^2}\right]^{\frac{\alpha}{2}}, \end{eqnarray} the lower bound on $\lambda^{\textrm{dp}}_{\epsilon}$ given as \begin{equation}\label{eq:cont+lambda+lb} \underline{\lambda}_{\epsilon}^{\textrm{dp}}=\frac{2\epsilon s^2}{\kappa_{\alpha}\beta^{\frac{2}{\alpha}}\sum_{j=1}^{N}[(\sup(\mathcal{L}_j))^4-(\inf(\mathcal{L}_j))^4]} \end{equation} could be achieved. 
If the following power ratio constraints \begin{equation}\label{eq:cont+upper+power} \frac{P_j}{P_i} = \left[\frac{\inf(\mathcal{L}_j)}{\inf(\mathcal{L}_i)}\right]^{\alpha} \end{equation} are satisfied for all $i\neq j$, the upper bound on $\lambda^{\textrm{dp}}_{\epsilon}$ given as \begin{equation}\label{eq:cont+lambda+ub} \overline{\lambda}_{\epsilon}^{\textrm{dp}}=\frac{\epsilon s^2}{(1-\epsilon)\kappa_{\alpha} \beta^{\frac{2}{\alpha}} \sum_{j=1}^{N}[(\sup(\mathcal{L}_j)\inf(\mathcal{L}_j))^2-(\inf(\mathcal{L}_j))^4] } \end{equation} could be achieved. \end{theorem} \begin{IEEEproof} According to the proof of Theorem \ref{Thm:OutProbLayi}, the outage probability associated with layer $i$ is given by \begin{eqnarray} q_i = \frac{1}{\eta_i s^2}\int_{\mathcal{L}_i}\left(1-e^{- \lambda T_i\beta^{\frac{2}{\alpha}}r^2 }\right) \dif r^2. \end{eqnarray} By utilizing the fact that $\frac{x}{1+x}\leq 1-e^{-x}\leq x$ for $x\geq 0$, the outage probability $q_i$ is upper-bounded by \begin{align} q_i &\leq \frac{2\lambda T_i\beta^{\frac{2}{\alpha}}}{\eta_is^2}\int_{\mathcal{L}_i} r^3 \dif r\nonumber\\ &= \frac{\lambda T_i\beta^{\frac{2}{\alpha}}}{2 }\left[(\sup(\mathcal{L}_i))^2+(\inf(\mathcal{L}_i))^2\right]\defn \overline{q}_i. \label{Eqn:UppBound_q_i} \end{align} Since $q_i$ is a continuous and monotonically increasing function of the intensity $\lambda$, the maximum contention intensity $\lambda^{\mathrm{dp}}_{\epsilon}$ that makes $q_i$ equal to $\epsilon$ must exist. As a result, the intensity defined as \begin{eqnarray} \underline{\lambda}^{\textrm{dp}}_i \defn \sup \{\lambda: \overline{q}_{i}(\lambda)= \epsilon, 1 \leq i \leq N \}, \end{eqnarray} satisfies $\overline{q}_i = \epsilon$ and is indeed a lower bound on the maximum contention intensity.
Hence, it follows that \begin{align}\label{Eqn:LowBoundIntensityLayer_i} \underline{\lambda}^{\textrm{dp}}_i= \frac{2 \epsilon }{\beta^{\frac{2}{\alpha}}T_i\left[(\sup(\mathcal{L}_i))^2+(\inf(\mathcal{L}_i))^2\right]}, \end{align} for all $i \in \left[1, 2, \ldots, N\right]$. Next we explain that the maximum $\lambda$ satisfying $\max_{i}\overline{q}_i = \epsilon $ is attained when all $\overline{q}_i$ are equal. Define $\underline{\lambda}^{\textrm{dp}}_{\epsilon}\defn \min_{i}\{\underline{\lambda}_i^{\textrm{dp}} \}$, which is the intensity that makes the outage probability at each layer less than or equal to $\epsilon$. Since $\lambda_{\epsilon}^{\mathrm{dp}} \geq \underline{\lambda}_i^{\textrm{dp}}$ for all $i \in [1, 2, \ldots,N]$, it follows that $\lambda_{\epsilon}^{\mathrm{dp}}\geq\underline{\lambda}^{\textrm{dp}}_{\epsilon}$ by definition. Thus $ \underline{\lambda}_{\epsilon}^{\textrm{dp}} $ can be maximized up to $\lambda^{\mathrm{dp}}_{\epsilon}$ if all $\overline{q}_i$'s are equal to $\epsilon$, i.e., $\overline{q}_1= \overline{q}_2 = \cdots = \overline{q}_N=\epsilon$. This equality constraint results in the following power ratio condition: \begin{eqnarray} \frac{P_j}{P_i} = \left[\frac{(\sup(\mathcal{L}_j))^2+(\inf(\mathcal{L}_j))^2}{(\sup(\mathcal{L}_i))^2+(\inf(\mathcal{L}_i))^2}\right]^{\frac{\alpha}{2}}. \end{eqnarray} By substituting the above power ratio into \eqref{Eqn:LowBoundIntensityLayer_i}, $\underline{\lambda}_{\epsilon}^{\mathrm{dp}}$ given in \eqref{eq:cont+lambda+lb} is obtained. 
To obtain an upper bound on the maximum contention intensity, we use $1-e^{-x}\geq\frac{x}{1+x}$ to find the lower bound on the outage probability at layer $i$ as \begin{align} q_i &\geq \frac{1}{\eta_is^2}\int_{\mathcal{L}_i}\frac{\lambda T_i\beta^{\frac{2}{\alpha}}r^2}{1+\lambda T_i\beta^{\frac{2}{\alpha}}r^2}\dif r^2\nonumber \\ &= 1-\frac{1}{\eta_i s^2}\int_{\mathcal{L}_i}\frac{1}{1+\lambda T_i\beta^{\frac{2}{\alpha}}r^2}\dif r^2 \end{align} \begin{align} &= 1- \frac{1}{\eta_i s^2 \lambda T_i\beta^{\frac{2}{\alpha}}}\ln\left[\frac{1+\lambda \beta^{\frac{2}{\alpha}}T_i(\sup(\mathcal{L}_i))^2}{1+\lambda \beta^{\frac{2}{\alpha}}T_i(\inf(\mathcal{L}_i))^2}\right]\nonumber\\ &\stackrel{(c)}{\geq} 1-\frac{1}{\eta_i s^2}\left[\frac{(\sup(\mathcal{L}_i))^2-(\inf(\mathcal{L}_i))^2}{1+\lambda \beta^{\frac{2}{\alpha}}T_i(\inf(\mathcal{L}_i))^2}\right] \nonumber\\ &= \frac{\lambda \beta^{\frac{2}{\alpha}}T_i(\inf(\mathcal{L}_i))^2}{1+\lambda \beta^{\frac{2}{\alpha}}T_i(\inf(\mathcal{L}_i))^2}\defn \underline{q}_i, \label{Eqn:LowBound_qi} \end{align} where $(c)$ follows from $\ln\left(\frac{1+x}{1+y}\right)\leq \frac{x-y}{1+y}$ for $x>y>0$. Thus $\lambda$ that satisfies $\underline{q}_i =\epsilon$ provides an upper bound on $\lambda_{\epsilon}^{\mathrm{dp}}$. That means \begin{eqnarray} \overline{\lambda}^{\textrm{dp}}_i = \sup\{\lambda : \underline{q}_i(\lambda)= \epsilon\}\geq \lambda_{\epsilon}^{\mathrm{dp}},\,\,i\in\{1, 2, \ldots, N\}. \end{eqnarray} Similarly, we can argue that $\overline{\lambda}_{\epsilon}^{\mathrm{dp}}=\max_i \{\overline{\lambda}_i^{\mathrm{dp}}\}$ is maximized provided that all $ \underline{q}_i $'s are equal to $\epsilon$. This leads to the power ratio in \eqref{eq:cont+upper+power}. By substituting \eqref{eq:cont+upper+power} into $\underline{q}_i = \epsilon$, $\overline{\lambda}_{\epsilon}^{\mathrm{dp}}$ can be characterized in \eqref{eq:cont+lambda+ub}. 
\end{IEEEproof} Since $\epsilon$ and the node intensity are fairly small in most practical situations, the upper bound in \eqref{Eqn:UppBound_q_i} and lower bound in \eqref{Eqn:LowBound_qi} on $q_i$ are very tight for all $i$'s since they both approach $\lambda\beta^{\frac{2}{\alpha}} T_i(\inf(\mathcal{L}_i))^2$ as $\lambda$ becomes very small, and thus the bounds in \eqref{eq:cont+lambda+lb} and \eqref{eq:cont+lambda+ub} are quite tight as well. Hence, the power ratios given in \eqref{eq:cont+lower+power} and \eqref{eq:cont+upper+power} could be said to nearly achieve $\lambda^{\mathrm{dp}}_{\epsilon}$ for a given small $\epsilon$ since they achieve the tight bounds on $\lambda^{\mathrm{dp}}_{\epsilon}$. Moreover, those power ratios could achieve network-wise throughput fairness since they have the effect of balancing the outage probabilities across all layers such that the average throughputs of receivers in different layers are balanced to almost the same value. In other words, the throughput degradation problem between remote and nearby receivers hardly exists. If no power control is used, the average outage probability at the $i$th layer becomes \begin{align*} q_i^{\textrm{np}}(\lambda) = 1-\frac{1}{\eta_i}\int_{\mathcal{L}_i}{ \left(\exp\left\{-\lambda \beta^{\frac{2}{\alpha}}\kappa_{\alpha}r^2\right\}\right) \frac{2r}{s^2} \dif r} \end{align*} \begin{align} =& 1-\frac{1}{\lambda\beta^{\frac{2}{\alpha}}\kappa_{\alpha}\eta_i s^2}\exp\left\{-\lambda \beta^{\frac{2}{\alpha}}\kappa_{\alpha}(\inf(\mathcal{L}_i))^2\right\}\cdot\nonumber\\ &\left(1-\exp\left\{-\lambda \beta^{\frac{2}{\alpha}}\kappa_{\alpha} s^2\eta_i\right\}\right)\label{eq:cont+npc+qi} \end{align} when the intended receivers are uniformly distributed in a cluster.
Then \eqref{eq:cont+npc+qi} is lower-bounded as \begin{align} q_i^{\textrm{np}}\geq & \frac{\kappa_{\alpha}\beta^{\frac{2}{\alpha}}\lambda \left(\inf(\mathcal{L}_i)\right)^2}{1+\kappa_{\alpha}\beta^{\frac{2}{\alpha}}\lambda \left(\inf(\mathcal{L}_i)\right)^2}\nonumber\\ =&\frac{T_i\beta^{\frac{2}{\alpha}}\lambda \left(\inf(\mathcal{L}_i)\right)^2}{\sum_{j=1}^{N}\eta_j\left(\frac{P_j}{P_i}\right)^{\frac{2}{\alpha}}+T_i\beta^{\frac{2}{\alpha}}\lambda \left(\inf(\mathcal{L}_i)\right)^2}. \end{align} Recall that $\frac{T_i}{\kappa_{\alpha}}=\sum_{j=1}^{N}\eta_j\left(\frac{P_j}{P_i}\right)^{\frac{2}{\alpha}}$ and this term is due to discrete power control. Therefore, if we let $\sum_{j=1}^{N}\eta_j\left(\frac{P_j}{P_i}\right)^{\frac{2}{\alpha}}<\frac{1}{\rho_0}$ for all $i\in\{1, 2, \ldots, N\}$, then the condition in \eqref{Eqn:AvgSIRIneq} is automatically satisfied and the lower bound $\underline{q}_i$ in the proof of Theorem \ref{Thm:PowerAlloBoundLambda} is smaller than the lower bound on $q_i^{\textrm{np}}$ above. The upper bound $\overline{q}_i$ in the proof of Theorem \ref{Thm:PowerAlloBoundLambda} can be shown to be smaller than the lower bound on $q_i^{\textrm{np}}$ for most practical cases (i.e., $N\geq 2$ and small $\lambda$). So the outage probability performance of the DPC scheme in Theorem \ref{Thm:PowerAlloBoundLambda} is better than that of no power control, so that a larger transmission capacity could be achieved by the discrete power control. Fig. \ref{Fig:OutProbUppLowBound} shows the simulation results of the outage probabilities for different power control schemes. As can be seen, the bounds corresponding to discrete power control are actually fairly tight when $\lambda$ is small ($\lambda\leq 10^{-4}$). More importantly, the upper bound is much lower than the outage probability achieved by all other power control schemes, which verifies that our discrete power control indeed can boost transmission capacity.
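As a side check, the sandwich $\underline{q}_i\leq q_i\leq\overline{q}_i$ used in the proof of Theorem \ref{Thm:PowerAlloBoundLambda} can be verified numerically. The sketch below (an illustration, not the simulation code behind the figures) folds the product $\lambda T_i\beta^{\frac{2}{\alpha}}$ into a single parameter $x$ and evaluates the closed form in \eqref{Eqn:OutProbUniDisR} against both bounds for equal-width layers:

```python
import math

def q_exact(x, a, b):
    """Layer outage from Eqn:OutProbUniDisR with x = lambda*T_i*beta^(2/alpha),
    a = inf(L_i), b = sup(L_i), for receivers uniformly distributed in the cluster."""
    return 1.0 - (math.exp(-x * a ** 2) - math.exp(-x * b ** 2)) / (x * (b ** 2 - a ** 2))

def q_upper(x, a, b):
    # Upper bound of Eqn:UppBound_q_i: x*(sup^2 + inf^2)/2.
    return 0.5 * x * (a ** 2 + b ** 2)

def q_lower(x, a, b):
    # Lower bound of Eqn:LowBound_qi: x*inf^2/(1 + x*inf^2).
    return x * a ** 2 / (1.0 + x * a ** 2)

# A cluster of radius s = 15 m cut into N = 5 equal-width layers.
s, N = 15.0, 5
for i in range(1, N + 1):
    a, b = (i - 1) * s / N, i * s / N
    for x in (1e-4, 1e-3, 1e-2):
        q = q_exact(x, a, b)
        assert q_lower(x, a, b) <= q <= q_upper(x, a, b)
```

Both bounds reduce to $x\,(\inf(\mathcal{L}_i))^2$ to first order when the layers are thin, which is where the sandwich is tightest.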
\begin{figure}[!t] \centering \includegraphics[width=3.6in,height=2.75in]{OutProbUppLowBound.eps} \caption{The outage probabilities for different power control schemes. The network parameters for simulation are: $\alpha =3.5$, $\beta =1$. The transmit power for each transmitter $X_i$ using fractional power control is $1/\sqrt{H_i}$, while each transmitter $X_i$ using channel-inversion power control has transmit power $1/H_i$. The transmission distance for fractional power control and channel-inversion power control is a random variable uniformly distributed in [1m, 20m] while the transmission distance 20m is quantized into $N=5$ layers for discrete power control.} \label{Fig:OutProbUppLowBound} \end{figure} The DPC scheme in Theorem \ref{Thm:PowerAlloBoundLambda} is essentially location-dependent. Nonetheless, it can be summarized in a simple scaling form, as shown in the following theorem. \begin{theorem}\label{Thm:PowConScaLaw} In a PCP-based ad hoc network, suppose all intended receivers in a cluster of radius $s$ are uniformly distributed. The optimal $N$-layer discrete power control that achieves the maximum contention intensity and better spatial reuse has the following scaling law \begin{equation}\label{Eqn:DPCscaling} P_i \in \Theta\left(\eta_i^{-\frac{\alpha}{2}}\right), \quad \forall i\in[1, 2, \ldots, N], \end{equation} where $\eta_i=\frac{1}{s^2}[(\sup(\mathcal{L}_i))^2-(\inf(\mathcal{L}_i))^2]$. With this power control scaling law, the cardinality of discrete power set $\mathcal{P}$ has the following scaling behavior \begin{equation}\label{Eqn:UppScaLawN} N\in O\left(\min_i\eta^{-\frac{\alpha}{2}}_i\right), \end{equation} whereas the spatial reuse factor $\delta^{\textrm{dp}}_{0_i}$ becomes \begin{equation}\label{Eqn:ScaLawSRF} \delta^{\textrm{dp}}_{0_i}\in\Theta\left(\frac{\delta^{\textrm{np}}_0}{N\eta_i}\right)\,\,\text{ or }\,\,\delta^{\textrm{dp}}_{0_i}\in\Omega\left(\left(\frac{\lambda}{\eta_i}\right)^{1-\frac{2}{\alpha}}\right).
\end{equation} \end{theorem} \begin{IEEEproof} See Appendix \ref{App:ProofPowConScaLaw}. \end{IEEEproof} \begin{remark} Note that the power control scaling law in \eqref{Eqn:DPCscaling} and other scaling results are built based on the assumption that receivers are uniformly distributed in a cluster. If receivers are not uniformly distributed, these scaling results may no longer hold. \end{remark} Several further important observations can be drawn from Theorem \ref{Thm:PowConScaLaw}, and they are specified in the following: \begin{description} \item[(i)] \textbf{Interference balancing}: The power control scaling law in \eqref{Eqn:DPCscaling} reflects an interesting result that a large power should be used if its selection probability is small. This intuitively makes sense since such power control balances the different interferences generated by different discrete powers and thus reduces the total interference. \item[(ii)] \textbf{Design of optimal discrete power control}: The power control scaling law in \eqref{Eqn:DPCscaling} can also be used to formulate an optimal discrete power design problem. For example, consider each discrete power specified by the form $ P_i=c_i\eta_i^{-\frac{\alpha}{2}}$ where $c_i$ is an unknown constant that needs to be designed and the upper bound for $P_i$ is $P_{\max}$. Here we choose to minimize the sum of the transmit powers $\sum_{i=1}^{N}P_i$ subject to some constraints. That is, \begin{align} \min\limits_{\{c_i\}} \sum_{i=1}^{N} c_i\eta_i^{-\frac{\alpha}{2}} \end{align} \begin{align} \text{subject to } \sum_{j=1}^{N}c_j^{\frac{2}{\alpha}}\leq \frac{c^{\frac{2}{\alpha}}_i}{\rho_0\eta_i} , \label{Eqn:ConOutProbDPC} \end{align} \begin{align} \hspace{20mm} 0< c_i\leq \eta_i^{\frac{\alpha}{2}} P_{\max}.
\label{Eqn:TxPowCon} \end{align} where constraint \eqref{Eqn:ConOutProbDPC} is motivated by combining $T_i\leq \kappa_{\alpha}$ and constraint \eqref{Eqn:AvgSIRIneq}, and it ensures that the discrete power control has a lower outage probability. Constraint \eqref{Eqn:TxPowCon} is just a practical power constraint for a transmitter. This is a convex optimization problem and its solution is \begin{equation}\label{Eqn:OptDisPowCoe} \hspace{-.3in}c_i=\min\left\{\left(\frac{2}{\alpha}\sum_{j=1}^{N}\frac{\rho_0\eta_j-1}{2-\rho_0\eta_j}\right)^{\frac{\alpha}{\alpha-2}}\eta^{\frac{\alpha^2}{2(\alpha-2)}}_i,\eta^{\frac{\alpha}{2}}_i\right\} P_{\max}, \end{equation} $\quad i\in\{1, 2, \ldots, N\}$. Thus the optimal discrete power control is given by \begin{equation}\label{Eqn:OptDisPow} P^*_i=\min\left\{\left(\frac{2\eta_i}{\alpha}\sum_{j=1}^{N}\frac{\rho_0\eta_j-1}{2-\rho_0\eta_j}\right)^{\frac{\alpha}{\alpha-2}}, 1\right\}P_{\max}, \end{equation} $i\in\{1, 2, \ldots, N\}$. \item[(iii)] \textbf{The optimal cardinality of power set $\mathcal{P}$}: The scaling result of the upper bound on $N$ in \eqref{Eqn:UppScaLawN} provides a clue as to how large $N$ should be. In addition, according to the proof of Theorem \ref{Thm:PowerAlloBoundLambda}, minimizing $T_i/\kappa_{\alpha}$ is roughly equivalent to minimizing the outage probability $q_i$ since both upper and lower bounds of $q_i$ are monotonically increasing functions of $T_i/\kappa_{\alpha}$. Since $T_i/\kappa_{\alpha}=\sum_{j=1}^{N}\eta_j\left(\frac{P_j}{P_i}\right)^{\frac{2}{\alpha}}\leq 1$ for all $i\in\{1, 2, \ldots, N\}$, $\sum_{j=1}^{N}\eta_j=1$ and $T_i/\kappa_{\alpha}=1$ for $N=1$, the optimal value of $N$, denoted by $N^*$, can be found by minimizing the average outage probability $\sum_{i=1}^{N}\eta_i q_i$ subject to $\sum_{i=1}^{N}\eta_i=1$.
Since this objective function $\sum_{i=1}^{N}\eta_iq_i$ is too complex to be effectively handled, we can instead use $\sum_{i=1}^{N}\eta_i\frac{T_i}{\kappa_{\alpha}}[(\sup(\mathcal{L}_i))^2+(\inf(\mathcal{L}_i))^2]$ since the (tight) upper bound on $q_i$ is dominated by $T_i[(\sup(\mathcal{L}_i))^2+(\inf(\mathcal{L}_i))^2]$, i.e., finding $N^*$ by solving the following optimization problem: \begin{eqnarray*} &&\hspace{-.45in}\min\limits_{N} \left(\sum_{j=1}^{N}c_j^{\frac{2}{\alpha}}\right)\left(\sum_{i=1}^{N}\frac{\eta^2_i}{c_i^{\frac{2}{\alpha}}} \left[(\sup(\mathcal{L}_i))^2+(\inf(\mathcal{L}_i))^2\right] \right)\label{Eqn:OptProbN} \\ && \text{subject to }\sum_{i=1}^{N}\eta_i=1. \end{eqnarray*} Since $\{\sup(\mathcal{L}_i)\}$, $\{\inf(\mathcal{L}_i)\}$, and $\{\eta_i\}$ can be determined by a predesignated cluster-partitioning rule for a given $N$, and $\{c_i\}$ can be obtained by substituting the values of $N$ and $\{\eta_i\}$ into \eqref{Eqn:OptDisPowCoe}, all variables in this optimization problem can be determined for a given $N$. Thus $N^*$ can be found by searching for the positive integer that minimizes the objective (cost) function in the above optimization problem. \end{description} \section{Simulation Examples of $N$-Layer Discrete Power Control} In this section, we will study two cases of $N$-layer discrete power control. First, a simple single-intended-receiver scenario is considered. Namely, each transmitter in its cluster only has one intended receiver that is distributed over $N$ different locations with certain probabilities. Next, we consider the scenario of a transmitter having multiple intended receivers. That is, each transmitter has a random number of intended receivers in a cluster that also form a homogeneous PPP. The objective of investigating these two cases is to demonstrate that our DPC scheme significantly outperforms other power control schemes already proposed in Poisson-distributed ad hoc networks.
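As a concrete illustration of the design rule in \eqref{Eqn:OptDisPow} from the previous section, the sketch below computes the discrete powers for a hypothetical parameter set. It is only a sketch: the value of $\rho_0$ is an assumed input, and the example keeps every $\rho_0\eta_j$ strictly between 1 and 2 so that each summand in the sum is positive and the fractional power is real-valued.

```python
def optimal_powers(etas, alpha, rho0, p_max=1.0):
    """Evaluate Eqn:OptDisPow:
    P_i = min{ ((2*eta_i/alpha) * sum_j (rho0*eta_j - 1)/(2 - rho0*eta_j))^(alpha/(alpha-2)), 1 } * p_max.
    Assumes alpha > 2 and 1 < rho0*eta_j < 2 for every j (hypothetical regime)."""
    s = sum((rho0 * e - 1.0) / (2.0 - rho0 * e) for e in etas)
    exponent = alpha / (alpha - 2.0)
    return [min((2.0 * e * s / alpha) ** exponent, 1.0) * p_max for e in etas]

# Three layers with comparable selection probabilities (hypothetical values of eta_i and rho0).
powers = optimal_powers([0.30, 0.33, 0.37], alpha=3.5, rho0=4.0)
assert all(0.0 < p <= 1.0 for p in powers)   # respects the power constraint P_i <= P_max
assert powers[0] < powers[1] < powers[2]     # grows with eta_i, as P_i* ~ eta_i^(alpha/(alpha-2))
```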
\subsection{Single Intended Receiver with $N$ Random Locations in a Cluster} Consider that each transmitter has only one intended receiver in a cluster and the random distance $R$ between the transmitter and the receiver takes one of $N$ discrete values in the set $\{r_1,r_2,\ldots,r_N\}$ with the probability mass function $\mathbb{P}[R=r_i]= \eta_i$. Without loss of generality, we assume that $r_1 < r_2 < \cdots < r_N<s$. The receivers with distance $r_i$ away from their transmitter are also called the layer-$i$ receivers. In other words, the transmitters with the receivers at layer $\mathcal{L}_i$ all have the same transmission distance $r_i$. At each time slot, the transmitter uses power $P_i$ if its intended receiver is located in layer $i$. With the discrete power set $\mathcal{P}$, the outage probability associated with the layer-$i$ receiver is \begin{eqnarray} \label{Eqn:OutProbFixDis} q_i = 1-\exp \left(-\lambda T_i \beta^{\frac{2}{\alpha}} r_i^2 \right), \end{eqnarray} which is easily obtained by considering a deterministic $R=r_i$ in \eqref{Eqn:OutProbLayer_i}. The optimal power control scheme that achieves the maximum contention intensity and transmission capacity is shown in the following theorem. \begin{theorem} \label{Thm:MainResFixDisLoc} Suppose an intended receiver in a cluster has $N$ discrete random locations $\{r_1, r_2,\ldots, r_N\}$ and each location $r_i$ has a corresponding probability $\eta_i$. Then the following maximum contention intensity \begin{eqnarray} \label{Eqn:UppBonMaxConIntFixDisLoc} \lambda^{\textrm{dp}}_{\epsilon} = \frac{-\log{\left(1-\epsilon\right) }}{ \kappa_{\alpha}\beta^{\frac{2}{\alpha}}\sum_{i=1}^{N}\eta_i r_i^2} \end{eqnarray} is achieved with the following optimal discrete power control \begin{eqnarray} \label{Eqn:OptPowConFixNdis} \frac{P_j}{P_i}=\left(\frac{r_j}{r_i}\right)^{\alpha},\,\, \text{for all }i\neq j.
\end{eqnarray} The corresponding transmission capacity is given by \begin{eqnarray} \label{eq:disc+pc+TC} C_{\epsilon}^{\textrm{dp}}= \frac{-\gamma\left(1-\epsilon\right)\log{\left(1-\epsilon\right)}}{\kappa_{\alpha}\beta^{\frac{2}{\alpha}}\sum_{i=1}^{N}\eta_i r_i^2}, \end{eqnarray} and it is strictly greater than the transmission capacity of no power control if \begin{eqnarray} \label{Eqn:ConHigTCbyDPC} \frac{r_N^2}{\sum_{i=1}^{N}{\eta_i r_i^2}} > \sum_{i=1}^{N}{\eta_i\left(1-\epsilon\right)^{\frac{r_i^2}{r_N^2}-1}}. \end{eqnarray} \end{theorem} \begin{IEEEproof} See Appendix \ref{App:ProofMainResFixDisLoc}. \end{IEEEproof} Note that the optimal DPC in \eqref{Eqn:OptPowConFixNdis} is equivalent to \eqref{Eqn:DPCscaling} if $\eta_i=\frac{r_i^2}{\sum_{j=1}^{N}r_j^2}$ and $\frac{P_i}{r^{\alpha}_i}=(\sum_{i=1}^{N}r_i^2)^{\frac{\alpha}{2}}$. Thus the optimal power control \eqref{Eqn:OptPowConFixNdis} only depends on the receiver locations provided that probabilities $\{\eta_i\}$ are independent of the receiver locations. Discrete power control has the benefit of increasing the maximum contention intensity \eqref{Eqn:UppBonMaxConIntFixDisLoc} since the term $\sum_{i=1}^{N}\eta_ir_i^2$ is always smaller than $r^2_N$. The condition $\sum_{i=1}^{N}\eta_i\left(\frac{r_i}{r_N}\right)^2<1$ actually corresponds to the condition of having a lower outage probability mentioned in Remark 1. The condition of improving TC for discrete power control is given in \eqref{Eqn:ConHigTCbyDPC} and for small $\epsilon$ it can be reduced to $\sum_{i=1}^{N}\eta_i\left(\frac{r^2_i}{r^2_N}\right) \lesssim 1$, which always holds. Hence, the optimal discrete control scheme in \eqref{Eqn:OptPowConFixNdis} is able to increase the transmission capacity for small $\epsilon$. \begin{figure}[!t] \centering \includegraphics[width=3.6in,height=2.75in]{TCPC2.eps} \caption{The simulation results of transmission capacities for different power control schemes. 
The network parameters for simulation are $\alpha =3.5$, $\beta =1$, $s=15$m and the intended receiver is equally likely at 3m, 6m, 9m, 12m, and 15m away from its transmitter.} \label{fig:TCPC2} \end{figure} \begin{figure}[!t] \centering \includegraphics[width=3.6in,height=2.75in]{SpaReuFac2.eps} \caption{The simulation results of spatial reuse factors for different power control schemes. The network parameters for simulation are $\alpha =3.5$, $s=15$m and $\eta_i=\frac{1}{5}$. The intended receivers of a transmitter are equally likely to be located at 3m, 6m, 9m, 12m, and 15m away from their transmitter.} \label{fig:SpaReuFac2} \end{figure} We now present some numerical simulations regarding the results in Theorem \ref{Thm:MainResFixDisLoc} by assuming $r_i=\frac{i}{N}s$ and $\eta_i=\frac{1}{N}$ for all $i$. Under this assumption, we have $\frac{P_i}{P_j}=\frac{i^{\alpha}}{j^{\alpha}}$ and $\sum_{i=1}^{N}\eta_i(\frac{r_i}{r_N})^2=\frac{1}{6}(1+\frac{1}{N})(2+\frac{1}{N})$. Thus the condition in \eqref{Eqn:ConHigTCbyDPC} for small $\epsilon$ reduces to requiring $\sum_{i=1}^{N}\eta_i\left(\frac{r^2_i}{r^2_N}\right)=\frac{(1+1/N)(2+1/N)}{6}<1$, which always holds, and this sum approaches its minimum of $\frac{1}{3}$ as $N$ gets large. This means that the discrete power control under this setting can achieve nearly 3 times the transmission capacity of no power control if $N$ is large and $\epsilon$ is small. However, an excessively large $N$ is impractical and suffers from diminishing returns. Fig. \ref{fig:TCPC2} illustrates the effects of the proposed optimal discrete power control on enhancing the transmission capacity when the radius of a cluster $s$ is 15m and it is segmented into $5$ equal lengths of 3m, i.e., $r_i=3i$ and $\eta_i=\frac{1}{5}$. The transmission capacities achieved by no power control, channel-inversion, and fractional power control schemes are also illustrated for comparison.
The transmit powers for each transmitter $X_i$ using fractional power control and each transmitter $X_j$ using channel-inversion power control are $1/\sqrt{H_i}$ and $1/H_j$, respectively. As we see, $N$-layer discrete power control significantly outperforms all other power control schemes in terms of transmission capacity, and increasing $N$ can increase TC. However, using a large $N$ produces little additional TC benefit, and $N=15$ appears to be sufficient in this case. Similar observations can also be made from the simulation results of the spatial reuse factors in Fig. \ref{fig:SpaReuFac2}. \subsection{Multiple Intended Receivers Uniformly Distributed in a Cluster} Now we investigate and simulate the case in which each transmitter has multiple intended receivers uniformly distributed in its cluster with $N$ layers, and at each time slot the transmitter independently selects one of the intended receivers to transmit with probability $\eta_i$ if the selected receiver is at the $i$th layer. Reference \cite{CHLJGA11} showed that the selected receivers also form a homogeneous PPP of intensity $\lambda$. Each cluster is layered by segmenting the cluster radius $s$ into $N$ equal lengths of $\frac{s}{N}$, such that the $i$th layer is the annulus with inner radius of $\frac{(i-1)s}{N}$ and outer radius of $\frac{is}{N}$, and thus the probability of the selected receiver being in the $i$th layer is $\eta_i=\frac{2i-1}{N^2}$. Note that $\eta_i$ increases with the index $i$, so that the intended receivers in a farther layer can be selected for service more often.
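The layer probabilities $\eta_i=\frac{2i-1}{N^2}$ form a valid distribution for any $N$, which can be checked directly (a small sanity sketch, not the simulation code used for the figures):

```python
def layer_probabilities(N):
    """eta_i = (2i-1)/N^2: the area of the i-th equal-width annulus divided by
    the cluster area, so a uniformly placed receiver falls in layer i w.p. eta_i."""
    return [(2 * i - 1) / N ** 2 for i in range(1, N + 1)]

for N in (2, 5, 20):
    etas = layer_probabilities(N)
    assert abs(sum(etas) - 1.0) < 1e-12                 # sums to one
    assert all(x < y for x, y in zip(etas, etas[1:]))   # farther layers are selected more often
```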
According to the discrete optimal power in Theorem \ref{Thm:PowerAlloBoundLambda}, we know that the following power ratio \begin{equation} \frac{P_j}{P_i}=\left[\frac{(2j-1)(j^2+(j-1)^2)}{(2i-1)(i^2+(i-1)^2)}\right]^{\frac{\alpha}{2}}\left(\frac{\eta_i}{\eta_j}\right)^{\frac{\alpha}{2}} \end{equation} can achieve the following lower bound on TC \begin{equation} \underline{C}^{\textrm{dp}}_{\epsilon}= \frac{2\gamma \epsilon (1-\epsilon)}{\kappa_{\alpha}\beta^{\frac{2}{\alpha}}s^2}. \end{equation} Also, the following power ratio $$\frac{P_j}{P_i}=\left(\frac{j-1}{i-1}\right)^{\alpha}=\left[\frac{(2j-1)(j-1)^2}{(2i-1)(i-1)^2}\right]^{\frac{\alpha}{2}}\left(\frac{\eta_i}{\eta_j}\right)^{\frac{\alpha}{2}}$$ for $i, j\neq 1$ and $\inf(\mathcal{L}_1)>0$ can achieve the following upper bound on TC \begin{eqnarray} \overline{C}^{\textrm{dp}}_{\epsilon} = \frac{2\gamma \epsilon}{\kappa_{\alpha}\beta^{\frac{2}{\alpha}}s^2 \left(1-\frac{4}{3N}+\frac{1}{3N^3}\right)},\,\,N>1. \end{eqnarray} Thus, we can choose $P_i=c_i\eta_i^{-\frac{\alpha}{2}}$ where $(2i-1)(i-1)^2<c^{\frac{2}{\alpha}}_i< (2i-1)(i^2+(i-1)^2)$. Note that the lower bound $\underline{C}^{\textrm{dp}}_{\epsilon}$ is exactly twice the TC achieved by no power control for small $\epsilon$, which certainly indicates that using discrete power control can achieve a larger TC than no power control\footnote{The transmit power for no power control (and others) is always set according to the worst-case transmission distance $s$ since there is always a possibility that an intended receiver is located at $s$.}. In addition, comparing $\overline{C}_{\epsilon}^{\textrm{dp}}$ with $\underline{C}^{\textrm{dp}}_{\epsilon}$ reveals that using a very large $N$ should be avoided since $\overline{C}_{\epsilon}^{\textrm{dp}}\approx \underline{C}^{\textrm{dp}}_{\epsilon}$ as $N\gg 1$ and $\epsilon\ll 1$, and $\overline{C}_{\epsilon}^{\textrm{dp}}$ is maximized at $N=2$, where it is more than 5 times the TC of no power control.
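The claim that the upper bound is maximized at $N=2$ can be reproduced from the denominator $1-\frac{4}{3N}+\frac{1}{3N^3}$ alone, since the prefactor $2\gamma\epsilon/(\kappa_{\alpha}\beta^{\frac{2}{\alpha}}s^2)$ does not depend on $N$ (a quick numerical sketch):

```python
def ub_denominator(N):
    """Denominator of the TC upper bound; the bound itself scales as 1/ub_denominator(N)."""
    return 1.0 - 4.0 / (3.0 * N) + 1.0 / (3.0 * N ** 3)

# N = 2 minimizes the denominator, hence maximizes the upper bound:
assert abs(ub_denominator(2) - 0.375) < 1e-12
assert all(ub_denominator(2) < ub_denominator(N) for N in range(3, 50))
# In units where the no-power-control TC is 1 for small epsilon, the bound at N = 2 is
# 2 / 0.375 = 16/3, i.e., more than 5 times larger.
assert 2.0 / ub_denominator(2) > 5.0
```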
The simulation results of transmission capacities for discrete power control $P_i=c_i\eta^{-\frac{\alpha}{2}}_i$ with parameters $\eta_i=\frac{2i-1}{N^2}$ and $c_i=\left(\frac{3}{2}(2i-1)(i-1)^2\right)^{\frac{\alpha}{2}}$ and different values of $N$ are shown in Fig.~\ref{fig:TCPC3}. As expected, all $N$-layer discrete power controls achieve at least twice the TC of the other power control schemes, and a higher value of $N$ leads to a higher TC. The maximum TC for $N$-layer power control is attained as $N$ goes to infinity; however, Fig. \ref{fig:TCPC3} shows that $N=20$ is good enough for approaching the maximum TC. Fig. \ref{fig:TCPC4} shows the simulation results of the optimal discrete power scheme in \eqref{Eqn:OptDisPow} with $P_{\max}=1$ and the same network parameters as used in Fig. \ref{fig:TCPC3}. The transmission capacities for different values of $N$ in Fig. \ref{fig:TCPC4} are very similar to those in Fig. \ref{fig:TCPC3}, but the sum of the powers used in Fig. \ref{fig:TCPC4} is only about $75\%\sim 80\%$ of the sum of the discrete powers used in Fig. \ref{fig:TCPC3}. Thus using \eqref{Eqn:OptDisPow} reduces the power cost while keeping the same level of TC performance. \begin{figure}[!t] \centering \includegraphics[width=3.6in,height=2.75in]{TCPC3.eps} \caption{The simulation results of transmission capacities for different power control schemes. The network parameters for simulation are $s=15$m, $\alpha =3.5$ and $\beta =1$. The intended receivers of a transmitter are uniformly distributed in a cluster.} \label{fig:TCPC3} \end{figure} \begin{figure}[!t] \centering \includegraphics[width=3.6in,height=2.75in]{TCPC4.eps} \caption{The simulation results of transmission capacities for optimal discrete power control and other power control schemes. The network parameters for simulation are $s=15$m, $\alpha =3.5$ and $\beta =1$.
The intended receivers of a transmitter are uniformly distributed in a cluster.} \label{fig:TCPC4} \end{figure} \section{Conclusion} The $N$-layer DPC scheme proposed in this paper is mainly motivated by the fact that practical power control in a digital device is of a discrete nature. We first show that in a Poisson-distributed network, a discrete power control is able to perform strictly better than no power control in the sense of the mean SIR and outage probability, provided that some constraints on the discrete powers and their selection probabilities are satisfied. In particular, we design an $N$-layer DPC scheme in which $N$ discrete powers can be used by the transmitters, and the discrete power used by a transmitter depends on the layer of its $N$-layer cluster in which the intended receiver is located. In order to evaluate the performance of the proposed discrete power control, the transmission capacity and outage-free spatial reuse factor are redefined. The average outage probability of each layer is derived, which is the foundation of developing the optimal discrete power control scaling law $P_i=\Theta\left(\eta_i^{-\frac{\alpha}{2}}\right)$. The optimization methods of choosing the discrete powers and the cardinality of the power set are also discussed. Finally, two simulation examples are presented to show that the proposed $N$-layer discrete power control is able to achieve a larger transmission capacity and spatial reuse factor than no power control and other existing power control schemes.
\section{Introduction } \Pb\ is the heaviest easily accessible doubly magic nucleus. It is an ideal test laboratory to study the shell model in detail. For a wide range in excitation energy one-particle one-hole excitations are dominant; only at excitation energies above $E_x$=5.3\,MeV do four-quasiparticle excitations resulting from collective two-phonon octupole modes \cite{Yates96c,Valn2001} start to contribute. The structure of the observed states is in agreement with theoretical expectations up to $E_x$=4.5\,MeV \cite{Rad1996,Schr1997,Valn2001}. At higher energies, despite impressive experimental research \cite{NDS1971,NDS1986}, not all states expected from the shell model have been detected. Many spin assignments are still ambiguous, and in addition there are more states than expected in the 1-particle 1-hole frame. Much information has been obtained by inelastic proton scattering via isobaric analog resonances (IAR-pp'). IAR-pp' is a selective reaction, sensitive only to the neutron particle-hole components of the structure. In this way the observed cross sections, their \excFs\ and \angDs, provide direct information on quantum numbers and amplitudes of the respective \ph\ \cfgs. Early measurements of inelastic proton scattering via IAR \cite{1967MO25,1967RI13,1968BO16,1968CR05,1968VO02,1968WH02, Zai1968,1969RI10,1970KU13} provided detailed information about the complex mixture of the neutron \ph\ \cfgs. For many years little work has been done in this field; we mention, however, the recent work done with the EUROBALL cluster detector \cite{Rad1996}. The main reason is the need for an energy re\-so\-lu\-tion of better than 5\,keV in the spectra. At excitation energies $E_x$=4.2-4.8\,MeV in \Pb\ there are a few doublets with a spacing below 10\,keV. In the region $E_x$=5-6\,MeV the average distance of the known levels is already less than 10\,keV.
The work done in the 1960s by \cite{Zai1968,1968WH02,1969RI10,1970KU13} improved the energy re\-so\-lu\-tion from 35 to 18\,keV; some data were obtained using a magnetic spectrograph with 9-13\,keV re\-so\-lu\-tion \cite{1967MO25,1969RI10}. The present status of the Q3D facility at M\"unchen \cite{LMUrep2000p70,LMUrep2000p71,Wirth1999,RalfsQ2005} allows one to take (p,~p') spectra with a re\-so\-lu\-tion of about 3\,keV within an energy span of around 1\,MeV and with high counting statistics on all known IAR in \Bi\ and up to excitation energies of at least 8\,MeV. The IAR-pp' data are complemented by high statistics \PbS(d,~p) neutron transfer spectra, where the observed transfer quantum numbers are identical to those of the neutron particles coupled to the \pOhlb\ hole configuration of the target \PbS\ in its ground state. In this paper we concentrate on the energy range $E_x$=4.5-5.3\,MeV, a region of considerable level density (at least 35 levels). We identify all members of the shell model \cfgs\ \iEhlb\fFhlb\ and \iEhlb\pThlb. Because of the large value of the orbital angular momentum, the multiplet states based on the \iEhlb\ neutron particle are only weakly excited. It is one further step towards the goal of complete spectroscopy, as started in the early attempt \cite{AB1973} to derive ``complete wave functions'' and the residual interaction among 1-particle 1-hole \ph\ \cfgs\ from \expt al data alone. \section{Shell model } In order to describe the structure of the excited states in \Pb, we restrict the shell model wave function to the 1-particle 1-hole \cfgs, neglecting 2-particle 2-hole and higher \cfgs. In the restricted shell model for \Pb\ a state $|\alpha\,I>$ is described by a superposition of \ph\ \cfgs\ built from neutrons $\nu$ and protons $\pi$ relative to the $0^{+}$ g.s.
of \Pb, see Fig.~\ref{IAR.scenario} for the neutron particle and hole \cfgs\ $LJ,\nu$ and $lj,\nu$, \begin{eqnarray} \label{eq.state} |\alpha I> = \sum_{LJ} \sum_{lj} c_{LJ,lj}^{\alpha\,I,\nu} |LJ{,\nu}> \otimes|lj{,\nu}> + \nonumber\\ \sum_{LJ} \sum_{lj} c_{LJ,lj}^{\alpha\,I,\pi} |LJ{,\pi}> \otimes|lj{,\pi}>. \end{eqnarray} Here we characterize a state $|\alpha\, I>$ by its spin (always given together with the parity), $\alpha$ denoting the excitation energy $E_x$ and other quantum numbers. The excitation energy is often given as a {\it label} by using \cite{Schr1997} where known and omitting fractions of keV, {\it so an adopted value of the energy may differ by 1~keV}. If we restrict ourselves to this ansatz, the amplitudes $c_{LJ,lj}^{\alpha\,I,(\nu,\pi)}$ represent a unitary transformation of the shell model \ph\ \cfgs\ to the real states $|\alpha\,I>$. We introduce the shorthand notation $|LJ,\nu>$ for the neutron particle in the 6th shell with angular momentum $L$ and spin $J$ and similarly $|lj,\nu>$ for the neutron hole in the 5th shell, $|LJ,\pi>$ for the proton particle in the 5th shell, and $|lj,\pi>$ for the proton hole in the 4th shell. We often omit the label $\nu$ and simply write e.g. $\dFhlb$ since we will essentially discuss only neutron \ph\ \cfgs\ in this paper. From the context e.g. the meaning of the neutron particle $|LJ,\nu>=6\,\dFhlb$ can be distinguished from the proton hole $|lj,\pi>=4\,\dFhlb$. In the schematic shell model (SSM) the residual interaction is taken to be zero. The splitting of the multiplets in the full shell model depends on the strength of the diagonal and nondiagonal matrix elements of the residual interaction in \Pb\ and on the relative separation of the undisturbed \cfgs\ in the SSM; matrix elements of the order of magnitude of some tens of keV are expected \cite{AB1973}. A rather pure structure will show up only in the case of an isolated multiplet.
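The interplay of SSM energies and a nondiagonal residual-interaction matrix element can be illustrated with a toy two-configuration diagonalization; the numbers below (the two SSM energies quoted later for the \iEhlb\ multiplets and a 30\,keV coupling) are purely illustrative, not fitted values from this work.

```python
import numpy as np

# Toy two-configuration mixing: two unperturbed SSM energies (keV) coupled
# by a nondiagonal residual-interaction matrix element of a few tens of keV.
E1, E2, V = 4780.0, 5108.0, 30.0          # illustrative values only
H = np.array([[E1, V],
              [V, E2]])

energies, c = np.linalg.eigh(H)           # columns of c = amplitudes c_{LJ,lj}

# The amplitudes form a unitary (here orthogonal) transformation ...
assert np.allclose(c.T @ c, np.eye(2))
# ... the trace, i.e. the sum of level energies, is conserved ...
assert np.isclose(energies.sum(), E1 + E2)
# ... and the levels repel: the splitting exceeds the unperturbed one.
assert energies[1] - energies[0] > E2 - E1
```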
Actually, however, the lowest 20 states in \Pb\ ($E_x<$4.5\,MeV) are heavily mixed since here the \hNhlb\sOhlb\ proton and the \gNhlb\fFhlb\ neutron \cfgs\ have almost the same SSM energy, and similarly the \hNhlb\dThlb\ proton and the \gNhlb\pThlb, \iEhlb\pOhlb\ neutron \cfgs. An early attempt \cite{AB1973} determined the matrix elements of the effective residual interaction among \ph\ \cfgs\ in \Pb\ from the \cfg\ mixing in the lowest 20~states. However, some spin assignments and identifications of the states below $E_x$=4.50\,MeV were essentially settled by the later work of \cite{Mai1983}. In 1982, an update of the fit was done by one of us (A.~H.); the results are shown in appendix~A. There is a remarkable agreement with shell model calculations by \cite{Rej1999}. In contrast, the two multiplets built from the \iEhlb\ neutron particle and the \fFhlb\ and \pThlb\ neutron holes, predicted at SSM energies $E_x$=4.780 and 5.108\,MeV, respectively, are expected to be less mixed, at least for the high spin members ($I=5,6,7,8$). \begin{figure}[htb] \caption[IAR scenario] {\label{IAR.scenario}% Sketch of the IAR-pp' scenario for \Pb(p,~p')$\Pb^{*}$ (scale of proton energy $E_p,E_{p'}$ at left). A single IAR state with spin $LJ$=\iEhlb\ as the second member of the isobaric analog multiplet $[\PbN,\Bi,\cdots]$ is exemplified with {\it one} \cfg\ \iEhlb\fFhlb, but all 45 excess neutrons $lj,\nu$ including $\iEhlb$ participate equally, see Eq.~\ref{eq.IAR.sum}. The \excFs\ of all IAR are shown \cite{1968WH02}; the two weakest IAR \iEhlb, \jFhlb\ are barely visible, therefore they are enhanced by a factor~10 (thick curves at left). The energies of the particle and hole \cfgs\ $|LJ,\nu>$, $|lj,\nu>$ are taken from \cite{NDS1986}. The \peneTra\ of the Coulomb barrier can be estimated from the comparison of the maxima for the \gNhlb, \gShlb\ (drawn) and the \dFhlb, \dThlb\ (dotted) IAR.
Similarly the \peneTra\ of the outgoing particles $lj,\nu$ can be seen from the comparison of the mean cross sections for the \ph\ \cfgs\ $|\iEhlb,\nu>\otimes|lj,\nu>$ with spins $I=J-j,\dots,J+j$ calculated from the s.p. widths derived in this work (lower left). } \resizebox{\hsize}{12.50cm} {\includegraphics[angle=00]{PAPfigs/1IAR-scenario.ps}} \end{figure} \section{Selective reactions } Spectroscopic information about \ph\ \cfgs\ has been derived, in addition to IAR-pp', from particle transfer reactions \PbS(d,~p), \Bi\dHe, and \Bi(t,~$\alpha$) \cite{Mai1983,Schr1997,Valn2001} and from transitions due to the electromagnetic \cite{Rad1996,Rej1999} or the weak interaction \cite{NDS1986}. IAR-pp' allows one to identify the neutron components $|LJ{,\nu}>\otimes|lj{,\nu}>$ of \ph\ states. The quantum number of the selected IAR is identical to the quantum number of the neutron particle configuration $|LJ{,\nu}>$; the angular distribution of the inelastically detected protons carries the information on the coherent contribution of the excess neutrons $|lj{,\nu}>$. \subsection{IAR-pp' } We discuss the inelastic proton scattering via isobaric analog resonances ({\it IAR-pp'\,}) on a spin~0 target, ${|0^{+}g.s.>} \rightarrow IAR(LJ) \rightarrow {|\alpha\,I>} $, here specifically \Pb(p,~p') proceeding via one of the lowest, well isolated IAR in \Bi. The wave function of an IAR in \Bi\ with spin $LJ$ may be represented by \begin{eqnarray} \label{eq.IAR.doorway} |\Psi_{LJ}^{IAR}(\Bi)> = \nonumber\\ {\frac{1}{\sqrt{2T_0+1}}} T_{-} |LJ,\nu>\otimes|\Pb(0^{+}\, g.s.)> \end{eqnarray} where $T_0=(N-Z)/2$ is the isospin of the g.s. of \Pb.
The isospin lowering operator $T_{-}$ acts on all excess neutrons, hence we have \begin{eqnarray} \label{eq.IAR.sum} |\Psi_{LJ}^{IAR}(\Bi)> = \nonumber\\ {\frac{1}{\sqrt{2T_0+1}}} |LJ,\pi> \otimes|\Pb(0^{+}\, g.s.)> + \nonumber\\ \sum_{lj} \sqrt{\frac{2j +1}{2T_0+1}} (|lj^{+1},\pi> \otimes |lj^{-1},\nu >)_{0^{+}} \nonumber\\ \otimes |LJ,\nu> \otimes |\Pb(0^{+}\, g.s.)> . \end{eqnarray} Evidently the outgoing proton either leaves \Pb\ in its g.s. (elastic scattering) or creates a {\it neutron} \ph\ \cfg $|LJ,\nu>\otimes|lj,\nu>$ as shown in the sketch Fig.~\ref{IAR.scenario} for one specific example. \subsection{\AngDs\ of \Pb(p,~p') } The IAR are described as Breit Wigner like resonance terms, their partial decay widths depend on the mixing coefficients $c_{LJ,lj}^{\alpha\,I}$ and on \peneTra\ effects. The resonance scattering is nicely described in the book of Bohr\&Mottelson \cite{BM1969} in a general manner. The differential cross section of the \Pb(p,~p') reaction on top of an isolated IAR ($E_p=E^{res}_{LJ}$) proceeding to a state with neutron \ph\ \cfgs\ $|LJ>\otimes|lj,\nu>$ is described \cite{Heu1969} by \begin{eqnarray} \label{eq.diff.c.s} {\frac{{d\sigma_{LJ}^{\alpha\,I}}} {d\Omega}}(\Theta) = {\frac{\hbar^2 }{4\mu_0 }} {\frac{(2I+1)}{(2J+1)}} {\frac{\Gamma^{s.p.}_{LJ}}{{E^{res}_{LJ}(\Gamma^{tot}_{LJ})^2}}} \,\times \nonumber\\ \sum_{lj}\sum_{l'j'} a^{IK}_{LJ,lj,l'j'} P_K(cos(\Theta)) {c_{LJ,lj}^{\alpha\,I}\sqrt{\Gamma^{s.p.}_{lj}}} \nonumber\\ cos(\xi^{s.p.}_{lj}- \xi^{s.p.}_{l'j'}) {c_{LJ,l'j'}^{\alpha\,I}\sqrt{\Gamma^{s.p.}_{l'j'}}} \end{eqnarray} where $\xi^{s.p.}_{lj}$ are phases derived from theory \cite{1971CL02} and $\mu_0=m(p)m(\Pb)/(m(p)+m(\Pb))$ is the reduced mass. 
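The normalization of the analog-state decomposition in Eq.~\ref{eq.IAR.sum} can be checked with a few lines of arithmetic: the squared coefficients, one proton term plus $(2j+1)/(2T_0+1)$ per excess-neutron orbital, must sum to one. The orbital list in the sketch below is the standard neutron filling for $82<N\le 126$ (an assumption of this illustration, not data taken from this paper).

```python
# Excess-neutron orbitals of 208Pb (82 < N <= 126) with degeneracies 2j+1.
# The isospin-lowering operator T_- spreads the analog state over all of
# them plus the pure proton-particle term, each squared coefficient
# carrying a factor 1/(2*T0 + 1).
orbitals_2j_plus_1 = {
    "1h9/2": 10, "2f7/2": 8, "1i13/2": 14,
    "3p3/2": 4, "2f5/2": 6, "3p1/2": 2,
}
n_excess = sum(orbitals_2j_plus_1.values())
assert n_excess == 44                     # N - Z = 126 - 82 excess neutrons
T0 = n_excess / 2                         # isospin of the 208Pb g.s.

# squared amplitudes: one proton term + one term per neutron orbital
norm = (1 + sum(orbitals_2j_plus_1.values())) / (2 * T0 + 1)
assert abs(norm - 1.0) < 1e-12            # 45 terms of weight 1/45 in total
```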
The factors $a^{IK}_{LJ,lj,l'j'}$ arise from the recoupling of the angular momenta $L,l$ and spins $J,j$ to $I,K$ \begin{eqnarray} a^{IK}_{LJ,lj,l'j'} = (-)^{(I+2J)} W(jJj'J,IK) \,\times \nonumber\\ \bar Z(LJLJ,{\frac{1}{2}} K) \bar Z(ljl'j',{\frac{1}{2}} K), \end{eqnarray} where $K\le min(2L,2J,max(2l),max(2j))$ is even; the recoupling \coefs\ $W, \bar Z$ are defined by \cite{1952BB,Edm1957}, see appendix~B. The component with $K=0$ represents the mean cross section $\sigma^{\alpha\,I}_{LJ}$; for a state $|\alpha\,I>$ it is just the sum of the \cfg\ strengths $|{c_{LJ,lj}^{\alpha\,I}}|^2$ weighted by the s.p. widths, \begin{eqnarray} \label{eq.avg.c.s} \sigma^{\alpha\,I}_{LJ} = {\frac{\hbar^2}{4\mu_0 E^{res}_{LJ}}} {\frac{(2I+1)}{(2J+1)}} {\frac{\Gamma^{s.p.}_{LJ}}{{(\Gamma^{tot}_{LJ})^2}}} \sum_{lj} |{c_{LJ,lj}^{\alpha\,I}\sqrt{\Gamma^{s.p.}_{lj}}}|^2. \end{eqnarray} For a multiplet of states $|\alpha\,I>$ with spins $I=J-j,\cdots,J+j$ consisting mainly of one \cfg\ $|LJ>\otimes|lj>$, the angle averaged (mean) cross sections $\sigma^{\alpha\,I}_{LJ}$ on top of a specific IAR $LJ$ should be simply related to the spin factor $2I+1$, neglecting contributions from other IAR. In general several \cfgs\ have to be considered; the formula describing the \angD\ of the IAR-pp' reaction (Eq.~\ref{eq.diff.c.s}) comprises a sum of products of coherent amplitudes $c_{LJ,lj}^{\alpha\,I}$. Hence the relative phases of the amplitudes can be determined. Each pure neutron \ph\ \cfg\ $|(LJ{,\nu}>\otimes|lj{,\nu}>)_I>$ has a characteristic \angD\ $\sum_K a_K P_K(\Theta)$ of even Legendre polynomials (appendix~B). Small admixtures of other neutron \ph\ \cfgs\ sometimes change the values $a_K$ considerably, however. The highest spin of each \cfg\ $|LJ>\otimes|lj>$ produces a deep minimum of the \angD\ at $90^\circ$, which is the more pronounced the higher the angular momenta $LJ,lj$ are. Similarly, for the lowest spin a deep minimum of the \angD\ at $90^\circ$ is found.
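The $K=0$ bookkeeping, mean cross sections within a pure multiplet scaling with $2I+1$, has a simple consistency check; the sketch below (ours, for illustration) uses the \iEhlb\fFhlb\ case ($J=11/2$, $j=5/2$) discussed later in this paper.

```python
from fractions import Fraction

# Mean cross sections on top of an IAR LJ for a pure configuration
# |LJ> x |lj> scale with the spin factor (2I+1) for I = |J-j|, ..., J+j.
J, j = Fraction(11, 2), Fraction(5, 2)

spins = [abs(J - j) + k for k in range(int(J + j - abs(J - j)) + 1)]
assert spins == [3, 4, 5, 6, 7, 8]        # members of the i11/2 x f5/2 multiplet

weights = [2 * I + 1 for I in spins]      # relative mean cross sections
# Summed over the whole multiplet, the spin factors exhaust (2J+1)(2j+1):
assert sum(weights) == (2 * J + 1) * (2 * j + 1) == 72
```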
This gives the chance to assign spins rather firmly in certain cases. Unfortunately we could not measure at scattering angles beyond $115^\circ$ for technical reasons. IAR-pp' \angDs\ should be symmetric around $90^\circ$ in the absence of direct-(p,~p') contributions. Hence an \angD\ rising towards forward angles is sometimes difficult to interpret. In case a group of states represents a rather complete subset of \ph\ \cfgs, the \coefs\ $c_{LJ,lj}^{\alpha\,I,(\pi,\nu)}$ of the unitary transformation matrix may be determined from the analysis of IAR-pp' together with the ortho-normality and sum-rule relations. Often there are fewer free parameters to be fitted than the IAR-pp' data provide. So in principle, amplitudes of proton \ph\ \cfgs\ can be determined \cite{AB1973}. Crucial for such an analysis is the correct identification of all relevant states and firm spin and parity assignments. \begin{table}[htb] \caption[IAR parameters] {\label{ratio.spWid}% Parameters for IAR in \Bi. For some IAR $LJ$ and some outgoing waves $lj$ new values are derived (rightmost column), see appendix~C. The energy dependence of the \peneTra\ for the escape widths $\Gamma_{LJ}^{s.p.}$ can be globally approximated by Eq.~\ref{eq.peneTra} in the region 10\,MeV$<E_{p'}<12$\,MeV.
} \begin{tabular}{|c ccc c| c |} \hline $LJ$ &$E_{LJ}^{res}$ & $\Gamma_{LJ}^{tot}$& $\Gamma_{LJ}^{s.p.}$& $R_{LJ}$ &$\Gamma_{LJ}^{s.p.}$ \\ & MeV & keV & keV & &keV\\ &\cite{1968WH02}&\cite{1968WH02}&\cite{1968WH02} &\cite{1968WH02} & \\ \hline \gNhlb & 14.918$\pm$.006 & 253$\pm$10 & 20$\pm$ 1 & 8 & \\ \iEhlb & 15.716$\pm$.010 & 224$\pm$20 & ~2$\pm$0.8& 1 & ~2.2$\pm$0.3~~ \\ \jFhlb & 16.336$\pm$.015 & 201$\pm$25 & &0.4${}^a$& ~0.7$\pm$0.3 \footnote{ from a preliminary analysis of the 4610, 4860, 4867 states with spins $8^{+}, 8^{+}, 7^{+}$} \\ \dFhlb & 16.496$\pm$.008 & 308$\pm$~8 & 45$\pm$ 5 & 12 & \\ \sOhlb & 16.965$\pm$.014 & 319$\pm$15 & 45$\pm$ 8 & 11 & \\ \dThlb & 17.430$\pm$.010 & 288$\pm$20 & 35$\pm$10 & (20) \footnote{doublet IAR, definition of $R_{LJ}$ valid for isolated IAR only} & \\ \gShlb & 17.476$\pm$.010 & 279$\pm$20 & 45$\pm$10 & (20) ${}^b$ &\\ \hline $lj$ &$E_{lj}^{p'}$ & &$\Gamma_{lj}^{s.p.}$ &&$\Gamma_{lj}^{s.p.}$\\ & MeV & & keV &&keV \\ &\cite{1969RI10} \footnote{ $E_{lj}^{p'} = E_{LJ}^{res}-E_{LJ,lj}^{SSM}$ corresponds to the SSM energy of the \ph\ \cfg\ $|LJ>\otimes|lj>$, see Fig.~\ref{IAR.scenario}. } & &\cite{1969RI10} && \\ \hline \pOhlb & 11.49 & &28.6$\pm$3& &28.6 \footnote{ This value was not adjusted since the systematic errors of the absolute cross section are about 10-20\%. They can be reduced by a more complete evaluation of our IAR-pp' data \cite{AHwwwHOME}. }~~~ \\ \fFhlb & 10.92 & &~4.2$\pm$0.4& & ~~~5.2$\pm$0.4~~ \\ \pThlb & 10.59 & &15.8$\pm$1.5& &~~14.6$\pm$0.5~~ \\ \fShlb & ~9.15 & &~0.6~~~~~ & &~~~0.55$\pm$0.1 \footnote{ from a preliminary analysis of the 5935 state identified to contain most of the \gNhlb\fShlb\ $8^{-}$ \cfg}\\ \hline \end{tabular} \end{table} \subsection{Energy dependence of the s.p. widths } The s.p. widths strongly depend on the angular momentum $L$ of the IAR since the outgoing particle has to penetrate the Coulomb barrier. 
Fig.~\ref{IAR.scenario} gives an impression of the relative values of the \peneTra. The \iEhlb\ IAR with $l=6$ has the weakest \peneTra\ of all positive parity IAR we measured. We define a \peneTra\ ratio \begin{eqnarray} \label{eq.ratio} R_{LJ}= {\frac {\Gamma^{s.p}_{LJ}/(\Gamma^{tot}_{LJ})^2} {\Gamma^{s.p}_{\iEhlb}/(\Gamma^{tot}_{\iEhlb})^2} }; \end{eqnarray} it compares the cross section on some IAR $LJ$ to that on the \iEhlb\ IAR. In fact it essentially accounts for the different \peneTra\ of the particle populating each IAR. Using the data and analysis of \cite{1968WH02,1969RI10} we derive values $R_{LJ}$=8, 12,~11 for the $LJ$=\gNhlb, \dFhlb, \sOhlb\ IAR, respectively, see Tab.~\ref{ratio.spWid}. For the doublet \dThlb+\gShlb\ IAR we assume a factor 20, but we note that the given equations are valid for isolated IAR \cite{Heu1969a,Latz1979} only. \subsection{Overlapping IAR } Eqs. \ref{eq.diff.c.s},~\ref{eq.avg.c.s} are valid for isolated IAR only. The lowest IAR in \Bi\ are well isolated, but the \iEhlb\ IAR is rather weak, as can be seen in Fig.~\ref{IAR.scenario} where it is enhanced by a factor~10 in order to make it visible at all. Hence the tails of the neighbouring \gNhlb\ and \dFhlb\ IAR may interfere with the \iEhlb\ IAR. (The \peneTra\ ratios are $R_{\gNhlb}, R_{\dFhlb}$=8,~12.) Following the formula for \excFs\ given by \cite{1968WH02}, the \gNhlb, \dFhlb\ IAR have decayed by a factor 40,~25, respectively, from the top of the IAR ($E_p$=14.920, 16.495\,MeV, respectively) to $E_p$=15.720\,MeV, the resonance energy of the \iEhlb\ IAR. Since in IAR-pp' the amplitudes are relevant, the population on top of the \iEhlb\ IAR by the neighbouring IAR can still be considerable. However, the influence of the \gNhlb\ IAR may be neglected since the relative amplitude is only of the order of 15\%, while for the \dFhlb\ IAR the relative amplitude is still about~50\%.
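The quoted ratios $R_{LJ}$=8, 12,~11 follow directly from the widths listed in Tab.~\ref{ratio.spWid}; the short sketch below (our own check script) reproduces them from Eq.~\ref{eq.ratio}.

```python
# R_LJ = (Gamma_sp / Gamma_tot^2) for IAR LJ, normalized to the weak
# i11/2 IAR.  Widths in keV, taken from the table of IAR parameters.
widths = {                     # LJ: (Gamma_sp, Gamma_tot)
    "g9/2":  (20.0, 253.0),
    "i11/2": (2.0, 224.0),
    "d5/2":  (45.0, 308.0),
    "s1/2":  (45.0, 319.0),
}

def R(LJ: str) -> float:
    sp, tot = widths[LJ]
    sp0, tot0 = widths["i11/2"]
    return (sp / tot**2) / (sp0 / tot0**2)

# Reproduces the tabulated rounded values R = 8, 12, 11:
assert round(R("g9/2")) == 8
assert round(R("d5/2")) == 12
assert round(R("s1/2")) == 11
```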
Yet for the \iEhlb\fFhlb, \iEhlb\pThlb\ multiplets being considered, this problem does not apply since there must be an allowed entrance channel. For the higher spins only the \cfgs\ \dFhlb\hNhlb, \dFhlb\hEhlb\ may contribute, but the \peneTra\ of the outgoing $l=5$ particle is 10 and 50 times lower than for \fFhlb\ and \pThlb, respectively. In addition any contribution of these \cfgs\ is expected to be small. Only for a $3^{-}$ state may the entrance channel \dFhlb\pOhlb\ eventually contribute. We conclude that IAR-pp' is a method able to detect and analyze even weakly excited neutron \ph\ states. \begin{table}[htb] \caption[Q3D Parameters IAR-pp' and \PbS(d,~p)] {\label{Q3D.params.pp}% Parameters for the \Pb(p,~p') \expt. Targets enriched in \Pb\ to 99.85\% were used. The thicknesses of the targets T2, T3, T4 were 98, 245, 353\,$\mu\,g/cm^2$; the thickness of target T1 was determined as 104\,$\mu\,g/cm^2$ by comparison to the other targets. } \begin{tabular}{|c|llcc| l|} \hline IAR &$E_p [MeV]$ &$E_x [MeV]$ &$\Theta$ &targets & \# runs \\ \hline \gNhlb &14.920 &3.85 - 6.2& $48^\circ$ - $115^\circ$ & T1 - T4 & 57 \footnote{ 2\,runs at $\Theta=54^\circ,90^\circ$ covering $E_x$=2.1-3.85\,MeV} \footnote{ 1\,run at $\Theta=58^\circ$ covering $E_x$=6.2-6.65\,MeV} \\ \iEhlb &15.720 &4.05 - 5.85& $20^\circ$ - $115^\circ$ & T1 - T3 & 44 \footnote{ 3\,runs at $\Theta=105^\circ,115^\circ$ covering $E_x$=3.85 - 4.05\,MeV,} \footnote{ 1\,run at $\Theta=105^\circ$ covering $E_x$=5.85 - 6.18\,MeV } \\ \jFhlb &16.355 \footnote{in addition 16.290, 16.380, 16.290} &4.55 - 6.0& $66^\circ$ - $115^\circ$ & T2 - T3& 22 \\ \dFhlb &16.495 &3.73 - 6.9& $36^\circ$ - $115^\circ$ & T1 - T4 & 39 \footnote{ 1\,run at $\Theta=48^\circ$ covering $E_x$=3.65 - 3.73\,MeV} \footnote{ 3\,runs at $\Theta=48^\circ,84^\circ$ covering $E_x$=6.9 - 7.4\,MeV} \\ \sOhlb &16.960 &5.00 - 6.9& $48^\circ$ - $115^\circ$ & T2 - T4 & 12 \footnote{ 1\,run at $\Theta=84^\circ$ for $E_x$=3.65 - 5.0\,MeV} \footnote{ 1\,run
at $\Theta=115^\circ$ for $E_x$=6.9 - 7.2\,MeV} \\ \dThlb+\gShlb&17.480 \footnote{in addition 17.590, 17.610, 17.720} &5.54 - 6.8& $84^\circ,115^\circ$ & T2 - T3 & 12 \footnote{ 2\,runs at $\Theta=84^\circ$ covering $E_x$=4.7 - 5.54\,MeV} \footnote{ 2\,runs at $\Theta=84^\circ$ covering $E_x$=6.8 - 7.2\,MeV} \\ \hline \end{tabular} \end{table} \begin{table}[htb] \caption[Q3D Parameters \PbS(d,~p)] {\label{Q3D.params.dp}% Parameters for the \PbS(d,~p) \expt\ with the Q3D facility. The deuteron energy was $E_d$=22.000\,MeV as in \cite{Valn2001}. A target enriched in \PbS\ to 99.86$\pm$.04\% was used. The slits perpendicular to the scattering angle were kept open, $\Delta\Phi=\pm3^\circ$. } \begin{tabular}{|ccc|c|} \hline $E_x$ &scattering & slit & \# runs \\ $[MeV]$ & angle $\Theta$& opening $\Delta\Theta$&\\ \hline 3.5 - 5.2 &$20^\circ$ & $\pm0.9^\circ$& 1\\ 3.1 - 7.9 &$20^\circ$ & $\pm1.5^\circ$& 3\\ \hline 3.1 - 5.5 &$25^\circ$ & $\pm0.9^\circ$& 3\\ 3.1 - 7.9 &$25^\circ$ & $\pm1.5^\circ$& 3\\ \hline 3.1 - 5.1 &$30^\circ$ & $\pm0.6^\circ$& 1\\ 3.1 - 5.2 &$30^\circ$ & $\pm0.9^\circ$& 1\\ 5.7 - 8.0 &$30^\circ$ & $\pm0.9^\circ$& 2\\ 3.1 - 8.0 &$30^\circ$ & $\pm1.5^\circ$& 6\\ \hline \end{tabular} \end{table} \section{Experiments } We performed \expt s on \Pb(p,~p') and \PbS(d,~p). The high Q-value of the reaction \Bi\dHe\ prohibited any reasonable \expt\ with the Q3D facility due to the restricted energy range of the accelerator. The data are evaluated with the help of the computer code GASPAN \cite{Rie2005}. It allows the deconvolution of spectra into a set of peaks with Gaussian shapes of individual widths and exponential tails on a background of polynomial shape. The energy calibration accounts for the quadratic dependence on the channel due to the effect of the magnetic field. Here we report on results leading to the detection of the main components of the \iEhlb\fFhlb\ and the \iEhlb\pThlb\ multiplets in \Pb.
Other data are being evaluated; the raw data (together with excerpts from the runbook) can be accessed at \cite{AHwwwHOME}. A preliminary analysis is in agreement with the data from refs.~\cite{1967MO25,1967RI13,1968BO16,1968CR05,1968VO02,1968WH02, Zai1968,1969RI10,1970KU13} obtained in the 1960s. Of course, because of the much higher resolution many levels are resolved into doublets. An important difference is the better energy calibration: the earlier energies deviate linearly by about 5-10\,keV over the range 4-6\,MeV; mostly they are about 0.2\% too low. \subsection{\Pb(p,~p') \expt\ with the Q3D facility } The \Pb(p,~p') \expt\ was performed with a proton beam from the M\"unchen HVEC MP Tandem accelerator using the Q3D magnetic spectrograph. The bright Stern-Gerlach polarized ion source was used with unpolarized hydrogen \cite{LMUrep2000p70,RalfsQ2005}. At beam intensities of about 900\,nA, the target was wobbled with a period of 2~sec to avoid damage to the lead target. The proton energies were chosen according to \cite{1968WH02} to match the tops of the seven lowest IAR in \Bi, namely the \gNhlb, \iEhlb, \jFhlb, \dFhlb, \sOhlb\ IAR and the doublet-IAR \gShlb+\dThlb; some more energies slightly off-resonance were chosen, see Tab.~\ref{Q3D.params.pp}. The analyzed particles were detected in an ASIC-supported cathode strip detector \cite{LMUrep2000p71,Wirth1999}. With an active length of 890~mm it produces spectra where the position of a line is determined to better than 0.1~mm without systematic errors. With a few exceptions the slits of the magnetic spectrograph were kept open, $\Delta\Theta=\pm3^\circ$, $\Delta\phi=\pm3^\circ$. \subsection{Experiments on \PbS(d,~p) } A weak excitation by \PbS(d,~p) may help to decide some spin and \cfg\ assignments. Therefore, we measured the reaction \PbS(d,~p) with the goal of detecting spectroscopic factors (S.F.) as low as possible.
We performed two measurements, one with the Buechner spectrograph at Heidelberg (no longer existing) at a large backward angle in order to eliminate any contamination from light nuclei in the spectrum, and another \expt\ with the Q3D facility at M\"unchen where the deuteron energy was chosen to match \cite{Valn2001}. {\it (a) Buechner spectrograph.} In 1969, using the Heidelberg Tandem van de Graaff accelerator, two of us (A.~H., P. von~B.) did a deep exposure of \PbS(d,~p) with the Buechner magnetic spectrograph, gathering 6\,mCb of the deuteron beam in more than 30 hours. The scattering angle was chosen as $\Theta=130^\circ$. The target was enriched to 92\%. A short exposure was done to position the line from the 3.708\,MeV $5^{-}$ state properly. The energy range was 3.65\,MeV$<E_x<5.15$\,MeV. These data were crucial for the fit shown in \cite{AB1973} and are still useful, albeit with a re\-so\-lu\-tion of only 12\,keV. They have been reevaluated with the GASPAN code \cite{Rie2005}. {\it (b) Q3D spectrograph.} The study of the reaction \PbS(d,~p) was performed with a deuteron beam from the M\"unchen HVEC MP Tandem accelerator. The high performance of the Q3D facility allowed us to take 18~spectra with superior resolution within 30~hours at beam intensities of about 600\,nA. Tab.~\ref{Q3D.params.dp} shows the parameters relevant to the data taking. In order to detect even minor contaminations (e.g. from \Nax,\ClF,\ClS) we measured at scattering angles $\Theta=20^\circ,25^\circ,30^\circ$ with different slit openings, see Tab.\ref{Q3D.params.dp}. We achieved a peak-to-valley ratio of better than $1:10^{-4}$, which allows the detection of S.F. as low as a few $10^{-3}$ in favorable cases. By this means, the amounts of the impurity isotopes \PbX, \Pb\ could be measured as 0.028$\pm$.003\%, 0.11$\pm$.03\%, respectively. The 5292, 4610\,keV levels in \Pb\ (Tab.~\ref{allStates.other.1},~\ref{allStates.other.2}) are known to be populated by $l=0$ and $l=5$ transfer, respectively.
The measurement at three scattering angles allows one to discriminate the transfer of an $l=0$ or $l=5$ neutron by virtue of the steeply rising slope of the \angD. This gives a chance to determine the $l$-value for some levels. Other $l$-values have about equal cross sections for $\Theta=20^\circ,25^\circ,30^\circ$. \begin{figure}[htb] \caption[Spectra for $E_x$=4.6-5.0\,MeV] {\label{Pb_pp_46.50}% Spectra of \Pb(p,~p') for $E_x$=4.6-5.0\,MeV taken at $\Theta=58^\circ,72^\circ,54^\circ$ on the \gNhlb, \iEhlb, \dFhlb\ IAR, with targets T3, T2, T2 (see caption of Tab.~\ref{Q3D.params.pp}), respectively. Six levels resonate at $E_p$=15.72\,MeV on top of the \iEhlb\ IAR (filled black); the doublet at 4709, 4711\,keV is resolved by the computer code GASPAN only. The energies of the \iEhlb\fFhlb\ multiplet are given in the middle panel and shown by bars above and below; in the lower panel the spins are given, too. The counting interval is proportional to $\sqrt{E_x}$ and one step corresponds to about 0.3\,keV. } \resizebox{\hsize}{05.05cm} {\includegraphics[angle=00]{PAPfigs/1p1_4670.5000.0.ps}} \resizebox{\hsize}{05.05cm} {\includegraphics[angle=00]{PAPfigs/1p1_4670.5000.1.ps}} \resizebox{\hsize}{05.05cm} {\includegraphics[angle=00]{PAPfigs/1p1_4670.5000.2.ps}} \end{figure} \begin{figure}[htb] \caption[Spectra for $E_x$=5.0-5.3\,MeV] {\label{Pb_pp_50.53}% Spectra of \Pb(p,~p') for the \iEhlb\pThlb\ multiplet in the region $E_x$=5.0-5.3\,MeV. The energies of the multiplet are given in the middle panel, the spins in the lower panel. For other details refer to Fig.~\ref{Pb_pp_46.50} and the text.
} \resizebox{\hsize}{05.05cm} {\includegraphics[angle=00]{PAPfigs/1p1_5020.5295.0.ps}} \resizebox{\hsize}{05.05cm} {\includegraphics[angle=00]{PAPfigs/1p1_5020.5295.1.ps}} \resizebox{\hsize}{05.05cm} {\includegraphics[angle=00]{PAPfigs/1p1_5020.5295.2.ps}} \end{figure} \begin{table}[htb] \caption[Energies for other states than \iEhlb\fFhlb,\pThlb\ -- part~1] {\label{allStates.other.1}% Energies for levels excited by \Pb(p,~p') but {\it not} bearing the main strength of the \cfgs\ \iEhlb\fFhlb, \iEhlb\pThlb. Not all of them are also detected in the \PbS(d,~p) \expt. The energies from \Pb(p,~p') are not yet finally evaluated; therefore they are given only as labels with a precision of around 1\,keV. The dominant excitations by specific IAR are shown; for strong excitations the energy label is printed in {\bf boldface}, for weak excitations an important IAR is given in parentheses. Energies and mean cross sections $\sigma^{\alpha I}_{LJ}$ ($\Theta=20^\circ,25^\circ,30^\circ$) derived from \PbS(d,~p) are shown. Spins from \cite{Schr1997} and energies from \cite{Valn2001,Schr1997,Rad1996} are given for comparison.
} \begin{tabular}{|cc cc|rrr c|} \hline $E_x$ &main &$E_x$&$\sigma(25^\circ)$ &$E_x$ &$E_x$ &$E_x$ &spin\\ &IAR &keV &$\mu b/sr$ &keV &keV &keV &\\ (p,~p')& &(d,~p)& (d,~p) &\cite{Schr1997}&\cite{Valn2001}&\cite{Rad1996}&\cite{Schr1997}\\ \hline {\bf 5292 } & \sOhlb & 5292.1&637 & 5292.000& 5292.6& 5292.7&$ 1^{-}$ \\% 5292.2 & & 0.2& & 0.200& 1.5& 0.1& \\% 0.6 5280 & \sOhlb & 5280.3&210 & 5280.322& 5281.3& 5280.3&$ 0^{-}$ \\% 5279.9 & & 0.2& & 0.080& 1.5& 0.1& \\% 0.5 {\bf 5245 } & \dFhlb & 5245.3&713 & 5245.280& 5245.6& 5245.4&$ 3^{-}$ \\% 5245.1 & & 0.2& & 0.060& 1.5& 0.1& \\% 0.5 5239 & (\iEhlb)& 5239.5& 10 & 5239.350& 5240.8& --~~~~&$ 0^{+}$ \\% 5239.6 & & 0.7& & 0.360& 1.5& & \\% 0.6 5214 & \dFhlb & 5214.0& 45 & 5213.000& 5215.6& --~~~~&$ 6^{+}$ \\% 5214.4 & & 0.3& & 0.200& 1.5& & \\% 0.5 5195 & \jFhlb & 5195.0&17 & 5195.340& 5194.3& --~~~~&$ 7^{+}$ \\% 5194.9 & & 0.3& & 0.140& 0.6& & \\% 0.5 5194 & \jFhlb & -- &$< 5$& 5193.400& --~~~~& --~~~~&$ 5^{+}$ \\% 5193.9 & & & & 0.150& & & \\% 0.4 {\bf 5127 } & \dFhlb & 5127.4&682 & 5127.420& 5127.1& --~~~~& $2,3^{-}$ \\% 5127.3 & & 0.3& & 0.090& 0.6& & \\% 0.4 5105 & (\iEhlb)& -- &$< 5$& --~~~~ & 5103.3& --~~~~&$ $ \\% 5105. 
& & & & & 1.5& & \\% 1.0 5093 & \jFhlb & 5093.2& 14 & 5093.110& 5094.3& --~~~~&$ 8^{+}$ \\% 5092.9 & & 0.5& & 0.200& 1.5& & \\% 0.6 5069 & \dFhlb & -- &$< 5$& 5069.380& 5068.5& --~~~~&$ 10^{+}$ \\% 5069.2 & & & & 0.130& 1.5& & \\% 1.6 5037 & \dFhlb & 5037.4&1200 & 5037.520& 5037.2& 5037.0&$ 2^{-}$ \\% 5037.4 & & 0.2& & 0.050& 0.6& 0.1& \\% 0.4 5010.5& (\jFhlb)& -- &$< 5$& 5010.550& 5010.0& --~~~~&$ 9^{+}$ \\% 5010.5 & & & & 0.090& 0.6& & \\% 0.7 \hline \end{tabular} \end{table} \begin{table}[htb \caption[Energies for other states than \iEhlb\fFhlb,\pThlb -- part.2 ] {\label{allStates.other.2}% $\cdots$ continuing Tab.~\ref{allStates.other.1} } \begin{tabular}{|cc|cc|rrr|c|} \hline $E_x$ &main &$E_x$&$\sigma(25^\circ)$ &$E_x$ &$E_x$ &$E_x$ &spin\\ &IAR &keV &$\mu b/sr$ &keV &keV &keV &\\ (p,~p')& &(d,~p)& (d,~p) &\cite{Schr1997}&\cite{Valn2001}&\cite{Rad1996}&\cite{Schr1997}\\ \hline 4992 & (\jFhlb)& 4992.5&10 & --~~~~ & 4992.7& --~~~~&$ $ \\% 4992.6 & & 0.6& & & 0.6& & \\% 0.7 {\bf 4974} & \dFhlb & 4973.9&1350 & 4974.037& 4974.2& 4973.8&$ 3^{-}$ \\% 4973.9 & & 0.2& & 0.040& 0.6& 0.1& \\% 0.2 4953 & (\dFhlb)& -- &$< 5$& 4953.320& 4952.2& --~~~~&$ 3^{-}$ \\% 4953.2 & & & & 0.230& 0.3& & \\% 1.1 4937 & \dFhlb & 4937.4&33 & 4937.550& 4937.1& 4935.1&$ 3^{-}$ \\% 4937.3 & & 0.4& & 0.230& 0.3& 0.2& \\% 0.6 4928 & (\jFhlb)& -- &$< 5$& --~~~~ & 4928.1& --~~~~&$ $ \\% 4928.0 & & & & & 1.5& & \\% 0.5 4911 & (\dFhlb)& 4911.7& 6 & --~~~~ & 4910.6& --~~~~&$ $ \\% 4911.5 & & 0.5& & & 1.5& & \\% 0.7 4909 & (\jFhlb)& -- &$< 5$& --~~~~ & --~~~~& --~~~~&$ $ \\% 4909.8 & & & & & & & \\% 0.8 4895 & \jFhlb & -- &$< 5$& 4895.277& 4894.8& --~~~~&$ 10^{+}$ \\% 4895.4 & & & & 0.080& 1.5& & \\% 0.7 4867 & \jFhlb & 4868.1& 95 & 4867.816& 4866.9& --~~~~&$ 7^{+}$ \\% 4867.7 & & 0.2& & 0.080& 1.5& & \\% 0.4 4860 & \jFhlb & 4860.8& 35 & 4860.840& 4859.8& --~~~~&$ 8^{+}$ \\% 4860.8 & & 0.3& & 0.080& 1.5& & \\% 0.5 4841 & \dFhlb & 4841.7&22 & 4841.400& 4841.7& 4842.1&$ 1^{-}$ \\% 4841.3 & & 0.4& & 
0.100& 0.3& 0.1& \\% 0.5 4610 & \jFhlb & 4610.7&66 & 4610.795& 4610.8& 4610.5&$ 8^{+}$ \\% 4610.7 & & 0.3& & 0.070& 0.5& 0.3& \\% 0.7 \hline \end{tabular} \end{table} \subsection{Typical spectra for \Pb(p,~p') } In Fig.~\ref{Pb_pp_46.50}, \ref{Pb_pp_50.53} we show some spectra for \Pb(p,~p') taken on the \iEhlb\ IAR. For comparison, spectra taken on the \gNhlb, \dFhlb\ IAR are displayed, too. In total we measured nearly 200 spectra. We will discuss the excitation of the levels at $E_x$=4680, 4698, 4761, 4918, 5275\,keV and the clearly resolved multiplet at $E_x$=5075, 5079, 5085\,keV (black fill-out in the spectra taken on the \iEhlb\ IAR, bars on the other IAR). The 4709, 4711 doublet is resolved with the help of the computer code GASPAN \cite{Rie2005}; the distance is found to be 1.9\,keV with an average re\-so\-lu\-tion of somewhat less than 3.0\,keV FWHM for the spectra, see Fig.~\ref{Pb_pp_46.50}. The line contents could be measured quite well using a special option of GASPAN (fixed level distances), yielding usable \angDs. Some excitations belong to well-known levels (Tab.~\ref{allStates.other.1},~\ref{allStates.other.2}). A few weak lines are also clearly identified, among them the $0^{+}$ state at 5239\,keV identified by \cite{Yates96c} to have the 2-particle 2-hole structure $ |2614\,{\rm keV}\, 3^{-}>\otimes |2614\,{\rm keV}\, 3^{-}>$, the $0^{-}$ state at 5280\,keV separated from the 5276\,keV $8^{-}$ state by only 4\,keV, and the 4860, 4867\,keV doublet with spins $8^{+}$,~$7^{+}$ strongly excited on the \jFhlb\ IAR. In the spectra shown (Fig.~\ref{Pb_pp_46.50}, \ref{Pb_pp_50.53}) only a few contamination lines are present; prominent contamination lines start at $E_x\approx\ $5.29\,MeV for the spectra taken both on the \gNhlb\ and the \dFhlb\ IAR. A weak, kinematically broadened contamination line is visible in the region 4.76-4.82\,MeV on the \iEhlb\ IAR.
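The fixed-distance decomposition just described can be illustrated with a short script. This is only a sketch (GASPAN itself is not reproduced here): the 1.9\,keV distance and the $\approx$3\,keV FWHM follow the text, whereas the two line heights are invented.

```python
# Sketch only -- this is NOT GASPAN, merely an illustration of its
# "fixed level distance" option mentioned in the text: two Gaussian
# lines of common width whose separation is frozen at 1.9 keV.
# Position, width and distance follow the text; the amplitudes are invented.
import numpy as np
from scipy.optimize import curve_fit

FWHM = 3.0                                    # keV, typical resolution
SIGMA = FWHM / (2.0 * np.sqrt(2.0 * np.log(2.0)))
DIST = 1.9                                    # keV, fixed doublet distance


def doublet(E, E0, a1, a2):
    """Two Gaussians of common width SIGMA, separated by the fixed DIST."""
    g1 = a1 * np.exp(-0.5 * ((E - E0) / SIGMA) ** 2)
    g2 = a2 * np.exp(-0.5 * ((E - E0 - DIST) / SIGMA) ** 2)
    return g1 + g2


# synthetic, noise-free spectrum around the 4709/4711 keV doublet
E = np.linspace(4700.0, 4720.0, 400)
counts = doublet(E, 4709.4, 10.0, 15.0)

# only three free parameters: the common position and the two line contents
popt, _ = curve_fit(doublet, E, counts, p0=(4708.0, 5.0, 5.0))
E0_fit, a1_fit, a2_fit = popt
```

Freezing the distance leaves the two line contents well determined even though the peaks overlap within the experimental resolution.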
Most levels in discussion are excited most strongly on the \iEhlb\ IAR; the only exception is the 4698\,keV $3^{-}$ state. On the \iEhlb\ IAR, the levels at 4680, 4761, 4918, 5079\,keV are excited at least four times more strongly than on any other~IAR. \section{Data analysis } \subsection{Excitation energies from \Pb(p,~p') } In Tab.~\ref{allStates.info} we show the excitation energies derived from our measurements of \Pb(p,~p'). The spectra were calibrated by using around 40 reference energies below $E_x$=6.0\,MeV and about 25 more reference energies up to $E_x$=7.5\,MeV, mainly from \cite{Schr1997} but also from \cite{Rad1996,Valn2001}; see Tab.~\ref{allStates.other.1},~\ref{allStates.other.2} for the region of interest. We avoided using reference values in cases where the identification was unclear due to a multiplet structure or where the cross section was low. In addition to the quadratic dependence of the energy on the channel number in the Q3D spectra, a secondary fit by a third-order polynomial improved the energy calibration considerably, see \cite{LMUrep2004}. The excitation energies determined from the IAR-pp' measurement, with errors of about 0.5\,keV in general, compare well with \cite{Schr1997,Rad1996,Valn2001} within the given errors; the only exception is the 5075 level with a discrepancy of about two standard errors. \subsection{Excitation functions of \Pb(p,~p') } With a few exceptions, we did not measure \excFs, but selected the energies of all known IAR only, see Tab.~\ref{Q3D.params.pp}. IAR often excite the states rather selectively, so we can determine excitation functions in a schematic manner. For some levels \excFs\ were measured in the 1960s \cite{Zai1968,1968WH02}. We will mention them where appropriate.
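The two-step energy calibration of the preceding subsection amounts to a polynomial fit of reference energies versus channel number. As a minimal sketch (the channel/energy pairs below are invented for illustration; only the procedure follows the text):

```python
# Sketch of the two-step calibration: energy as a polynomial in channel
# number, fitted to reference lines.  The channel/energy pairs are
# invented for illustration.
import numpy as np

channel = np.array([150.0, 520.0, 980.0, 1460.0, 1990.0, 2540.0, 3080.0])
e_ref = np.array([4610.8, 4841.4, 5037.5, 5245.3, 5480.2, 5702.9, 5947.1])  # keV

quad = np.polyfit(channel, e_ref, deg=2)     # quadratic energy(channel)
cubic = np.polyfit(channel, e_ref, deg=3)    # refined third-order fit

res_quad = e_ref - np.polyval(quad, channel)
res_cubic = e_ref - np.polyval(cubic, channel)

# enlarging the polynomial space can only lower the summed squared residual
sse_quad = float(np.sum(res_quad ** 2))
sse_cubic = float(np.sum(res_cubic ** 2))
```

With the real set of $\sim$40 reference lines, the residuals of the third-order fit quantify the calibration quality over the whole focal plane.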
The \angDs\ were fitted by even Legendre polynomials \begin{eqnarray} \label{eq.mean.c.s} {\frac{{d\sigma_{LJ}^{\alpha\,I}}} {d\Omega}}(\Theta) = \sum_K A_K P_K(\cos\Theta) \end{eqnarray} Odd Legendre polynomials need not be included since the direct-(p,~p') reaction does not contribute much in most cases. The angle averaged (mean) cross section is derived for each IAR $LJ$ and each state $|\alpha\,I>$ as $\sigma^{\alpha\,I}_{LJ}=A_0$. We quote neither the errors of $\sigma^{\alpha\,I}_{LJ}$ nor the values $A_K$ for $K>0$ since the evaluation can be further improved. The errors of the mean cross sections are about 5-20\%. In Fig.~\ref{IARschemiE.f5.p3} we show the \excFs\ in a schematic manner. For each of the ten states in discussion and for each IAR the mean cross section $\sigma^{\alpha\,I}_{LJ}$ is shown. All levels in discussion show a pronounced excitation by the \iEhlb\ IAR. They have weak counterparts on all other IAR. In reality, due to the low \peneTra\ of the \iEhlb\ particle, the cross sections for the \gNhlb, \dFhlb, \sOhlb\ IAR must be reduced by the \peneTra\ ratio $R_{LJ}$=8,~12,~11,~(20) (Eq.~\ref{eq.ratio} and Tab.~\ref{ratio.spWid}) in relation to the \iEhlb\ IAR, see Tab.~\ref{allStates.info}. Taking into account these values, Fig.~\ref{IARschemiE.f5.p3} demonstrates that the ten states are rather pure. \begin{figure}[htb] \caption[Schematic \excFs\ i11 f5 ] {\label{IARschemiE.f5.p3}% Angle averaged (mean) cross section $\sigma^{\alpha\,I}_{LJ}$ for states containing most of the \iEhlb\fFhlb\ strength ({\it upper panel}) and \iEhlb\pThlb\ strength ({\it lower panel}). The value $\sigma^{\alpha\,I}_{LJ}$ for each state is shown relative to the maximum of all cross sections of either multiplet; the maxima are set equal (upper panel: 4761 $6^{-}$, lower panel: 5085 $7^{-}$, see Tab.~\ref{allStates.info}). At the left and right sides the energy labels and the spins are given.
In order to obtain partial widths (Eq.~\ref{eq.avg.c.s}), for each IAR $LJ$ the mean cross section must be reduced by the \peneTra\ ratio $R_{LJ}$ given at the bottom. } \resizebox{\hsize}{08.4cm} {\includegraphics[angle=00]{PAPfigs/3plt_f5.ps}} } \resizebox{\hsize}{07.2cm} {\includegraphics[angle=00]{PAPfigs/3plt_p3.ps}} } \end{figure} \subsection{\AngDs\ of \Pb(p,~p') } In Fig.~% \ref{IARangDis.iEf5.45}, \ref{IARangDis.iEf5.678} and \ref{IARangDis.iEp3} we show the \angDs\ for some members of the \iEhlb\fFhlb\ multiplet and all members of the \iEhlb\pThlb\ multiplet. The cross sections are shown on a logarithmic scale in $\mu b/sr$; the scale of the scattering angles is $0^\circ<\Theta<120^\circ$; the highest angle at which we could measure was $115^\circ$; below $20^\circ$ the spectra became unusable due to increasing slit scattering. The spin assignment is discussed below. Calculations for the pure \ph\ \cfgs\ by Eq.~\ref{eq.diff.c.s} are inserted for the \angDs\ (dotted line) and the angle averaged (mean) cross section $\sigma^{\alpha I}_{LJ}$ (dashed line). The absolute value of the calculated \angDs\ has been adjusted to an approximate best fit for the $8^{-}$ state of the \iEhlb\fFhlb\ group and for the $7^{-}$ state of the \iEhlb\pThlb\ group, yielding a more precise value of $\Gamma^{s.p.}_{\iEhlb}$. For the states with other spins no adjustment has been made except for the energy dependence of the \peneTra\ (Eq.~\ref{eq.sigmaCorr},~\ref{eq.peneTra}). For the \iEhlb\pThlb\ group there is a general agreement of the mean cross section with the calculation, whereas for the \iEhlb\fFhlb\ group only the states with the highest spins $7^{-},8^{-}$ agree with the expectation of a rather pure \cfg. The 4698 level has a cross section about ten times higher than expected. For the $4^{-},6^{-}$ states the shape of the \angDs\ agrees with the expectation of a rather pure \cfg, but the angle averaged (mean) cross section is around 50\% higher.
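The extraction of the angle averaged (mean) cross sections via the even-Legendre fit of Eq.~\ref{eq.mean.c.s} can be sketched numerically; the coefficients below are invented, only the usable angular range follows the text.

```python
# Sketch of the even-Legendre analysis: fit Legendre coefficients A_K
# to an angular distribution and read off the angle averaged (mean)
# cross section as sigma = A_0.  The coefficients are invented; the
# angular range (20..115 deg) follows the text.
import numpy as np
from numpy.polynomial import legendre

theta = np.deg2rad(np.arange(20.0, 116.0, 5.0))  # usable range 20..115 deg
x = np.cos(theta)

# invented even coefficients A_0, A_2, A_4 (odd terms absent)
A_true = np.array([16.0, 0.0, 4.0, 0.0, -2.0])
dsigma = legendre.legval(x, A_true)              # mu b/sr

A_fit = legendre.legfit(x, dsigma, deg=4)
mean_cross_section = A_fit[0]                    # sigma^{alpha I}_{LJ} = A_0
```

Since the synthetic distribution contains no odd components, the fit with the full basis up to $K=4$ recovers the even coefficients exactly.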
For the $5^{-}$ state the \angD\ ({\it not shown}) deviates from a \iEhlb\fFhlb\ distribution at forward angles $\Theta<60^\circ$ by up to a factor of~4. Note that the \angDs\ of the members with the highest spin $I=J+j$ for the \cfgs\ \iEhlb\pThlb\ and \iEhlb\fFhlb\ (Fig.~\ref{Pb_pp_50.53}: 5085 $7^{-}$, Fig.~\ref{Pb_pp_46.50}: 4918 $8^{-}$) are similar; both show the characteristic minimum at $\Theta=90^\circ$. As expected for the lowest spin $I=J-j$, the 5276 $4^{-}$ state (Fig.~\ref{Pb_pp_50.53}) exhibits a characteristic forward peaking similar to that for the highest spin $I=J+j$. \begin{figure}[htb] \caption[\AngDs\ i11 f5 for 4- 6- ] {\label{IARangDis.iEf5.45}% \AngDs\ for the 4.71\,MeV doublet partner with spin $4^{-}$ and the state with spin $6^{-}$. The mean cross section for a pure \cfg\ \iEhlb\fFhlb\ calculated by Eq.~\ref{eq.avg.c.s} is shown by a dashed line; the corresponding \angDs\ calculated by Eq.~\ref{eq.diff.c.s} are shown by the dotted curve. Both calculated curves are corrected for the energy-dependent \peneTra\ by Eq.~\ref{eq.sigmaCorr},~\ref{eq.peneTra}. } \resizebox{\hsize}{05.5cm} {\includegraphics[angle=00]{PAPfigs/1iEf5.ps} {\includegraphics[angle=00]{PAPfigs/3iEf5.ps} } \end{figure} \begin{figure}[htb] \caption[\AngDs\ i11 f5 for 7- 8- ] {\label{IARangDis.iEf5.678}% \AngDs\ for the \iEhlb\fFhlb\ states with spins $7^{-},8^{-}$. For the 4918 state \cite{1968WH02} measured an \excF\ at a scattering angle of $\Theta=158^\circ$. The cross section on top of the IAR with $\sigma^{\alpha I}_{LJ}=20\pm2\mu b/sr$ agrees with the value near $\Theta=22^\circ$ assuming symmetry around $\Theta=90^\circ$. For other details see Fig.~\ref{IARangDis.iEf5.45}. } \resizebox{\hsize}{05.5cm} {\includegraphics[angle=00]{PAPfigs/4iEf5.ps} {\includegraphics[angle=00]{PAPfigs/5iEf5.ps} } \end{figure} \begin{figure}[htb] \caption[\AngDs\ i11 p3 ] {\label{IARangDis.iEp3}% \AngDs\ for the \iEhlb\pThlb\ states with spins $4^{-},5^{-},6^{-},7^{-}$.
For details see Fig.~\ref{IARangDis.iEf5.45}. } \resizebox{\hsize}{05.5cm} {\includegraphics[angle=00]{PAPfigs/3iEp3.ps} {\includegraphics[angle=00]{PAPfigs/0iEp3.ps} } \resizebox{\hsize}{05.5cm} {\includegraphics[angle=00]{PAPfigs/1iEp3.ps} {\includegraphics[angle=00]{PAPfigs/2iEp3.ps} } \end{figure} \subsection{Data from \PbS(d,~p) } Tab.~\ref{allStates.info} gives the results from our \PbS(d,~p) measurement for the ten levels in discussion. The precision of the excitation energies is slightly better than that from the IAR-pp' measurement. This may be partly explained by satellite lines due to an atomic effect which deteriorates the \Pb(p,~p') but not the \PbS(d,~p) spectra \cite{LMUrep2003}. The energy of the 5075 level with a deviation of about 2\,$\sigma$ from \cite{Schr1997} agrees with the result from the IAR-pp' measurement. Some levels have a vanishing \PbS(d,~p) cross section, especially the 4680, 4918, 5085\,keV levels. In Tab.~\ref{allStates.other.1},~\ref{allStates.other.2} we add the information derived from the Q3D \expt\ on \PbS(d,~p) for the region 4.5\,MeV$<E_x<$5.3\,MeV for levels {\it not} belonging to the ten states in discussion. \section{Results and discussion } A key assumption of the shell model is the existence of rather pure 1-particle 1-hole excitations if the spacing of the model \cfgs\ is higher than the average matrix element of the residual interaction. We verified this assumption for multiplets excited by the weakest positive parity IAR in \Bi. The extremely weak excitation of the lowest 2-particle 2-hole states (especially the 5239 $0^{+}$ state) adds confidence in the shell model. In the SSM four multiplets \iEhlb\pOhlb, \iEhlb\fFhlb, \iEhlb\pThlb, \iEhlb\fShlb\ are expected to be built with the $\iEhlb$ particle at energies $E_x$=4.210, 4.780, 5.108, 6.550\,MeV, respectively, see Fig.~\ref{IAR.scenario}. 
The goal of this paper is the identification of the \iEhlb\fFhlb, \iEhlb\pThlb\ neutron \ph\ multiplets; the states containing the major strength of the \cfg\ \iEhlb\pOhlb\ are known \cite{AB1973,Schr1997,Valn2001}, see also Tab.~\ref{QfipStri}; for the \iEhlb\fShlb\ group no measurement has been done (Tab.~\ref{Q3D.params.pp}). We encounter several problems with the IAR-pp' method: \begin{itemize} \item the s.p. widths $\Gamma_{lj}^{s.p.}$ for the outgoing particles ($lj=$\pOhlb, \fFhlb, \pThlb) are only known to about 10\%, \item the energy dependence of the s.p. widths is rather strong and its slope is not well known. In the region of interest a systematic error of around 20\% has to be assumed, \item the mean cross section $\sigma^{\alpha I}_{LJ}$ of a state bearing the main strength of a \cfg\ with angular momentum $l$ is strongly affected by the presence of a slight admixture of a \cfg\ with lower angular momentum $l-2$ due to the higher \peneTra, \item the anisotropy of the \angD\ is highly sensitive to the mixture of the \cfgs. This is especially true for a small admixture of a \cfg\ $|lj>$ with $j=l+1/2$ to a \cfg\ with $j=l-1/2$. In rare cases the anisotropy \coefs\ $a_K/a_0,\, K=2,4,6,8$ allow one to determine the relative mixing of \cfgs\ $|LJ>\otimes|lj>$ with $l=1,3,5$, $j=l\pm1/2$. \item the \angDs\ of states with natural parity often exhibit strong forward peaking via the direct-(p,~p') reaction, \item the s.p. widths $\Gamma_{\iEhlb}^{s.p.}$, $\Gamma_{\jFhlb}^{s.p.}$ of the two weakest IAR are only known to 70\%. \end{itemize} \begin{figure}[htb] \caption[States excited on the \iEhlb\ IAR ] {\label{ergbns.centroid}% In the {\bf upper panel}, the centroid excitation energy (Eq.~\ref{eq.centroid}) and the total \cfg\ strength $\sum_I |c^{I}_{LJ,lj}|^2$ are shown. The centroid energies agree with the energies $E_x$~= 4.210, 4.780, 5.108\,MeV of the SSM for the three \cfgs\ \iEhlb\pOhlb, \iEhlb\fFhlb, \iEhlb\pThlb.
The total \cfg\ strengths are close to unity using the s.p. widths from Tab.~\ref{ratio.spWid}. In the {\bf lower panel} the excitation energies $E_x$ and the partial strengths $|c_{LJ,lj}|^2$ for the states bearing the main strength of the \pOhlb, \fFhlb, \pThlb\ \cfgs\ are shown. The cross sections $\sigma^{\alpha I}_{LJ}$ from Tab.~\ref{allStates.info} are converted to partial strengths by Eq.~\ref{eq.avg.c.s} with s.p. widths from Tab.~\ref{ratio.spWid} and corrected for the energy dependence of the \peneTra\ (Eq.~\ref{eq.sigmaCorr},~\ref{eq.peneTra}). For the \iEhlb\pOhlb\ multiplet the {\it sum} of the partial strengths of the three $5^{-}$ and the three $6^{-}$ states at 4.0\,MeV$<E_x<$4.5\,MeV is shown. The value for the 4698\,$3^{-}$ state (left out in determining the centroid energy) is reduced by a factor~3. Both this $3^{-}$ and the neighbouring $5^{-}$ state are affected by the direct-(p,~p') reaction, yielding a much larger value than unity. The SSM expects a value $|c_{LJ,lj}|^2=1$ (dotted line). } \resizebox{\hsize}{12.05cm} {\includegraphics[angle=00]{PAPfigs/1centroid_i11.ps}} } \end{figure} \subsection{Centroid Energy} The states strongly excited by the \iEhlb\ IAR can be grouped into three parts: the first part at $E_x\approx\ $\,4.2\,MeV belongs mainly to the group of states strongly excited by the \gNhlb\ IAR; the second part at $E_x\approx\ $\,4.6-4.8\,MeV (except for the 4698 $3^{-}$ state) and the third part at $E_x\approx\ $5.1\,MeV are excited strongly by no other IAR. The numbers of states in the second and third groups are six and four, respectively. (In the following discussion, the 4698 $3^{-}$ state is omitted since it is affected by a large direct (p,~p') contribution starting at least at scattering angles $\Theta<115^\circ$, the maximum angle for the Q3D magnetic spectrograph in its current configuration.)
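The centroid construction detailed in the following can be sketched numerically. As weights we use the penetrability-corrected cross sections $\tilde\sigma^{\alpha I}_{LJ}$ on top of the \iEhlb\ IAR from Tab.~\ref{allStates.info} for the \iEhlb\fFhlb\ group, omitting the 4698 $3^{-}$ state as in the text; this is an illustration, not the actual evaluation.

```python
# Numerical sketch of the centroid construction for the i13/2 f5/2
# group: the penetrability-corrected mean cross sections serve as
# weights of the excitation energies.  Energies and corrected cross
# sections are those listed for the i13/2 IAR in the table; the
# 4698 3^- state is left out, as in the text.
import numpy as np

E_x = np.array([4680.3, 4709.4, 4711.2, 4761.9, 4918.8])   # keV
sigma_corr = np.array([16.0, 11.0, 15.0, 19.0, 15.0])      # mu b/sr

centroid = float(np.sum(sigma_corr * E_x) / np.sum(sigma_corr))
# compare with the SSM energy of 4780 keV for this multiplet
```

The weighted mean comes out close to the SSM energy of the \iEhlb\fFhlb\ multiplet, which is the essence of the argument made below.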
We derive the centroid energies from the excitation energies $E_x$ and the angle averaged (mean) cross sections $\sigma^{\alpha I}_{LJ}$ of the remaining five and four states given in Tab.~\ref{allStates.info}. First, we correct the mean \expt al cross sections for the large change of the \peneTra\ for the outgoing particles across the range of excitation energies, \begin{eqnarray} \label{eq.sigmaCorr} \tilde\sigma^{\alpha I}_{LJ} = p^2(E_x^{\alpha I}(LJ)) \sigma^{\alpha I}_{LJ}. \end{eqnarray} The energy dependence of the \peneTra\ is calculated \cite{1971CL02} and can be linearly approximated by \begin{eqnarray} \label{eq.peneTra} p(E_x^{\alpha I}(LJ))= 1 + 3.5\, {\frac{ E_x^{\alpha I}(LJ) - E_{LJ,lj}^{SSM} }{ E_{LJ,lj}^{SSM}}}. \end{eqnarray} The approximation is reasonable near the SSM excitation energy of the \ph\ \cfgs\ $|LJ>\otimes|lj>$ for all relevant values of $lj$. (The slope varies between 2.0 and 6.0 for 8\,MeV$<E_{p'}<14$\,MeV and $l=1,3,5$; for higher $l$-values the slope becomes steeper.) We then calculate the centroid energy as the weighted mean \begin{eqnarray} \label{eq.centroid} <E_x(LJ)> = {\frac{ \sum_{\alpha I} \tilde\sigma^{\alpha I}_{LJ}\, E_x^{\alpha I}(LJ) }{ \sum_{\alpha I} \tilde\sigma^{\alpha I}_{LJ} }} \end{eqnarray} Fig.~\ref{ergbns.centroid} ({\it upper panel}) shows the centroid energies; clearly they coincide with the prediction of the SSM. We note that the adjustment of the s.p. widths discussed in appendix~C does not affect the values of the centroid energies much. The ratio of the sums of the angle averaged (mean) cross sections $\tilde\sigma^{\alpha I}_{LJ}$ (converted to \cfg\ strengths by use of Eq.~\ref{eq.avg.c.s}) for the groups related to the \pOhlb, \fFhlb, \pThlb\ particles compares well with the calculated ratio derived from the s.p. widths of Tab.~\ref{ratio.spWid}. We note that the shown deviations of the \cfg\ strengths from unity are already lessened by improved s.p.
widths as discussed in appendix~C; using the values from \cite{1968WH02,1969RI10} the deviations are larger, but still in the range 10-30\%. Both the agreement of the centroid energies and the approximate agreement of the \cfg\ strengths with the SSM expectation favour the identification of the states shown in Fig.~\ref{Pb_pp_46.50}-\ref{ergbns.centroid} and Tab.~\ref{allStates.info} as the members of the \iEhlb\fFhlb\ and \iEhlb\pThlb\ multiplets. \subsection{ Proton \ph\ \cfgs } IAR-pp' is sensitive to neutron \ph\ \cfgs\ only (Eq.~\ref{eq.IAR.sum}). Yet with robust values of the s.p. widths and reasonable functions for the energy dependence of the \peneTra, a missing \cfg\ strength can be determined. If the unitarity of the truncated \cfg\ space can be trusted, even amplitudes of proton \ph\ \cfgs\ can be determined by this means. An example is given in appendix~A. The \cfgs\ \fShlb\sOhlb, \iEhlb\pThlb\ have similar SSM energies of 5.011, 5.108\,MeV, respectively, including the Coulomb shift $\Delta_C=-0.30\pm0.02$\,MeV (appendix~A). Hence an admixture of the proton \ph\ \cfg\ \fShlb\sOhlb\ to the neutron \ph\ multiplet can be expected for the state with spin $4^{-}$ at $E_x$=5.276\,MeV. It does not change the \angD\ of IAR-pp', but only reduces the mean cross section. We derive an upper limit of 20\% for the \fShlb\sOhlb\ component. Similarly the \cfg\ \fShlb\dThlb\ with the SSM energy 5.462\,MeV may change the structure of the states with a dominant \iEhlb\pThlb\ \cfg. Evidently the states with spins $6^{-},7^{-}$ are not affected due to the high spin; for the states with spins $4^{-},5^{-}$ we derive upper limits of 20\% for the \fShlb\dThlb\ component. The principle of unitarity for a rather complete set of shell model \cfgs\ can be used to predict one $4^{-}$ state with dominant \cfg\ \fShlb\sOhlb\ in the region $E_x=5.0\pm0.2$\,MeV, as has been done successfully for the N=82 nucleus \Ce\ \cite{Heu1969,GNPH140CeIII}.
Several candidates can be found (Tab.~\ref{allStates.other.1},~\ref{allStates.other.2}). They should be weak on all IAR except possibly the \iEhlb\ IAR and have weak \PbS(d,~p) and vanishing \Bi\dHe\ cross sections. \subsection{The \iEhlb\pOhlb\ \ph\ multiplet } The \iEhlb\pOhlb\ strength for spin $5^{-}$ is split up into three fractions, whereas the $6^{-}$ strength is mainly contained in one state. The lowest $6^{-}$ state at $E_x$=3919\,keV contains less than 1\% of the \iEhlb\pOhlb\ strength, as shown especially by the absence of a detectable \PbS(d,~p) cross section both with the Buechner and the Q3D \expt. The centroid energies agree well with the prediction of the SSM, see Fig.~\ref{ergbns.centroid}. The ratio of the total strengths for the $5^{-}$ and $6^{-}$ states does not follow the ratio 11:13 expected from the SSM. This hints at a considerable part of the \iEhlb\pOhlb\ strength residing in a higher $6^{-}$ state. Pure \iEhlb\pOhlb\ states should have isotropic \angDs, but all six \angDs\ deviate from isotropy. Small admixtures of other \cfgs\ like \iEhlb\fFhlb, \iEhlb\pThlb, \iEhlb\fShlb\ may explain the anisotropy. For the fit shown in appendix~A only a few data for the \cfg\ \iEhlb\pOhlb\ were used, namely the 4206 state assumed to bear the overwhelming strength and a roughly equal partition of the $\iEhlb\pOhlb\, 5^{-}$ strength into the 4125, 4180, 4296 states. The mean cross sections, now determined more precisely, are in general agreement with the fit. This gives confidence in the fitting procedure. The 4206 state has already been identified by \cite{1968WH02} and used to determine the total width of the \iEhlb\ IAR, see Tab.~\ref{ratio.spWid}. \subsection{The \iEhlb\fFhlb\ \ph\ multiplet } Tab.~\ref{allStates.info} (upper part) gives the mean cross sections for the states containing the major part of the \iEhlb\fFhlb\ \cfg, see also Fig.~\ref{ergbns.centroid}.
The states at 4680, 4698, 4709, 4711, 4761\,keV have rather firm spin assignments with spins $7^{-}$, $3^{-}$, $5^{-}$, $4^{-}$, $6^{-}$ according to \cite{Schr1997}. These states -- except for the $3^{-}$ state at $E_x$=4698\,keV again -- have an \angD\ which can be explained by a rather pure \iEhlb\fFhlb\ \cfg, see Fig.~\ref{IARangDis.iEf5.45}, \ref{IARangDis.iEf5.678}. We approximated the \angD\ by calculations for the pure \cfg\ with a common factor. According to the theory \cite{Heu1969} this factor is described by the total width $\Gamma_{\iEhlb}^{tot}$, which has been measured by \cite{1968WH02}, and the s.p. widths $\Gamma_{\iEhlb}^{s.p.}$, $\Gamma_{\fFhlb}^{s.p.}$, $\Gamma_{\pThlb}^{s.p.}$ determined by \cite{1969RI10}; the energy dependence is calculated \cite{1971CL02}. In total the systematic uncertainty is about 20\%. We used the adjusted values for the s.p. widths (Tab.~\ref{ratio.spWid}). We remark that the determination of the s.p. widths is complicated since they can be determined only as the product of $\Gamma_{LJ}^{s.p.}$ for the IAR and $\Gamma_{lj}^{s.p.}$ for the outgoing particles, see Eq.~\ref{eq.diff.c.s}; in addition the energy dependence of the \peneTras\ is not well known. Since the states with the main \cfg\ \iEhlb\fFhlb\ and spins $4^{-}$, $5^{-}$, $6^{-}$, $7^{-}$ may mix with the \cfgs\ \iEhlb\pThlb, which have a much larger s.p. width $\Gamma^{s.p.}$ due to the $l$=1 wave instead of $l$=3, even a small admixture changes the \angD\ considerably. Seemingly this is the case for the $4^{-},6^{-}$ states at $E_x$=4711, 4761\,keV, respectively, see Fig.~\ref{IARangDis.iEf5.45}, \ref{IARangDis.iEf5.678}; hence the strength derived from the mean cross section according to Eq.~\ref{eq.avg.c.s} is larger than unity. {\it (a) 4918 $8^{-}$}. The state at $E_x$=4918\,keV is excited solely by the \iEhlb\ IAR. A detection on the \jFhlb\ and \dFhlb\ IAR yields cross sections a factor of 10 lower, see Fig.~\ref{IARschemiE.f5.p3}.
The \PbS(d,~p) cross section is vanishingly small, see Tab.~\ref{allStates.info}. The agreement of the \angD\ with the calculation for a pure \iEhlb\fFhlb\ \cfg\ is remarkable, see Fig.~\ref{IARangDis.iEf5.678}. The slight deviation may be interpreted as an admixture of the \cfg\ \iEhlb\fShlb. The absence of a sizable excitation by any other IAR corroborates the spin assignment. At a scattering angle of $158^\circ$, the resonance rises by a factor of 18.0 over the direct background \cite{1968WH02}. This fact corroborates the assignment of an unnatural parity. {\it (b) 4680 $7^{-}$}. The \angD\ agrees with a pure \iEhlb\fFhlb\ \cfg; the slight deviation may be explained by an admixture of \iEhlb\fShlb. {\it (c) 4711 $4^{-}$}. A weak \iEhlb\pThlb\ admixture explains the augmented cross section (Fig.~\ref{IARangDis.iEf5.45}) by the much higher \peneTra\ of the \pThlb\ particle. A weak excitation by the \PbS(d,~p) reaction is consistent with a sizable excitation by the \gNhlb\ IAR (Fig.~\ref{IARschemiE.f5.p3}). {\it (d) 4761 $6^{-}$}. A fraction of about 10\% of the $6^{-}$ \iEhlb\pOhlb\ strength in the 4761\,keV state explains both the augmented mean cross section (Fig.~\ref{IARangDis.iEf5.678}) and the discrepancy found while discussing the \iEhlb\pOhlb\ strength above. It is consistent with the detected \PbS(d,~p) reaction; both the Buechner and the Q3D data can be explained by a 0.10$\pm$0.04 \iEhlb\pOhlb\ admixture. {\it (e) 4709 $5^{-}$}. The deviation of the \angD\ at forward angles can be explained by a direct-(p,~p') component; it is consistent with the assignment of natural parity. Seemingly an admixture of \gNhlb\pOhlb\ or \iEhlb\pOhlb\ is small; this is consistent with the smaller cross section for \PbS(d,~p) in relation to the 4711 doublet member. {\it (f) 4698 $3^{-}$}.
The 4698 $3^{-}$ state is excited at forward angles ten times more strongly than predicted by the SSM, but from the \excFs\ of Ref.~\cite{Zai1968} we derive upper limits for the backward angles $150^\circ,170^\circ$ which are consistent with the pure \iEhlb\fFhlb\ \cfg, in contrast to the forward angles symmetric to $90^\circ$. The state is also known to have sizable \gNhlb\pThlb\ and \dFhlb\pOhlb\ components \cite{AB1973}. The excitation by the \PbS(d,~p) reaction is consistent with a rather strong \dFhlb\pOhlb\ component. A possible feeding on top of the \iEhlb\ IAR via the exit channel \dFhlb\pOhlb\ may change the \angD\ due to the higher \peneTra\ of the outgoing \pOhlb\ particle. The rather strong direct-(p,~p') component contributes in addition. Therefore the interpretation of this state is complicated. The \angD\ for the 4.70\,MeV level shown by \cite{1968WH02,Zai1968} is interpreted incorrectly by the authors. Namely, the re\-so\-lu\-tion of about 35\,keV was insufficient to resolve this state from the neighbouring multiplet at 4680, 4709, 4711\,keV. So the strong excitation by the \iEhlb\ IAR is not due to the excitation of the $3^{-}$ state alone, but at least equally to the $7^{-}$, $5^{-}$, $4^{-}$ multiplet around it. Thus the surprise about the strong excitation of the ``4.692\,MeV'' level \cite{1969RI10} is resolved. \subsection{The \iEhlb\pThlb\ \ph\ multiplet } The \angDs\ of the four states containing most of the \iEhlb\pThlb\ strength are shown in Fig.~\ref{IARangDis.iEp3}. Calculations for spins $4^{-}$, $5^{-}$, $6^{-}$, $7^{-}$ are inserted. Tab.~\ref{allStates.info} (lower part) gives the mean cross sections for the triplet levels at 5075, 5079, 5085 and the 5276 level. Comparing the cross sections for the resolved triplet levels at $\Theta=90^\circ,22^\circ$ to the data points at $\Theta=90^\circ,158^\circ$ (i.e. symmetric to $90^\circ$) from the \excF\ of the 5.071\,MeV level unresolved by \cite{1968WH02}, we find agreement within 10\%.
This shows that the direct-(p,~p') contribution is low. {\it (a) 5085 $7^{-}$}. The 5085 state is assumed to have spin $7^{-}$ \cite{Schr1997}. Its \angD\ is well fitted by assuming a pure \iEhlb\pThlb\ \cfg\ and is very similar to that of the \iEhlb\fFhlb\ \cfg\ with the highest spin $I=J+j$. A preliminary analysis of more data designates the $8^{-}$ member of the \gNhlb\fShlb\ multiplet as a state at $E_x$=5936\,keV. It exhibits a similarly steep rise of the \angD\ towards forward angles, indicating a rather pure neutron \ph\ \cfg\ with spin $I=J+j$, too. {\it (b) 5075 $5^{-}$, 5079 $6^{-}$}. The 5075, 5079 states of the triplet are assigned spins $5^{-}$,~$6^{-}$, respectively. The reverse spin assignment fits worse, since the mean cross section of the 5079 level is about 20\% higher, see Tab.~\ref{allStates.info}. The 5075 and 5085 states are excited sizably on both the \dFhlb\ and \sOhlb\ IAR (the 5085 state also on the \dThlb+\gShlb\ doublet IAR). This may be due to considerable direct-(p,~p') cross sections and corroborates the assignment of natural-parity spins, in contrast to the low cross section of the 5079 state on all other IAR, see Fig.~\ref{IARschemiE.f5.p3}. In the \excFs\ for the scattering angles $90^\circ,158^\circ$ the cross section rises by factors of 10.0 and 13.7 over the direct background, respectively. This fact may hint at a small contribution from direct-(p,~p') for the $7^{-}$ state. {\it (c) 5276 $4^{-}$}. The missing $4^{-}$ member is identified as the 5276\,keV state. It is strongly excited by the \iEhlb\ IAR, but only weakly on all other IAR, see Fig.~\ref{IARschemiE.f5.p3}. The \angD\ is well described by a pure \iEhlb\pThlb\ \cfg, see Fig.~\ref{IARangDis.iEp3}. The cross section is somewhat higher than expected, see Fig.~\ref{IARangDis.iEf5.678}. Only a complete fit of all levels participating in the \cfg\ mixing with $|\gNhlb>\otimes|lj>$ and $|\iEhlb>\otimes|lj>$ and another readjustment of the s.p. widths will solve the problem.
{\it(d) Information from \PbS(d,~p). } The 4680 $7^{-}$, 4918 $8^{-}$, 5085 $7^{-}$ states have vanishing \PbS(d,~p) cross sections corroborating the spin and \cfg\ assignments. We explain the excitation by the \PbS(d,~p) reaction for the 5075 $5^{-}$ state by a weak \iEhlb\pOhlb\ admixture, for the 5079 $6^{-}$ state by a weak \iEhlb\pOhlb\ admixture, for the 5276 $4^{-}$ state by a weak \gShlb\pOhlb\ admixture. The \gShlb\pOhlb\ admixture of the 5276 $4^{-}$ state is corroborated by the excitation on the \dThlb+\gShlb\ doublet IAR, see Fig.~\ref{IARschemiE.f5.p3}. \setlength\LTleft{0pt} \setlength\LTright{0pt} \begin{longtable*}{@{\extracolsep{05pt}}p{25pt}p{30pt}p{12pt}p{15pt}p{25pt}p{35pt}p{25pt}p{25pt}p{33pt}p{25pt}p{33pt}p{30pt}p{55pt}} \caption{% Energies, spins and cross sections for the ten states in the range 4.6\,MeV$<E_x<$5.3\,MeV discussed in the text with \cfgs\ \iEhlb\fFhlb\ (upper part), \iEhlb\pThlb\ (lower part). The energies and the mean cross sections $\sigma^{\alpha I}_{LJ}$ are determined from the \Pb(p,~p') \expt. The cross sections $\tilde\sigma^{\alpha I}_{LJ}$ on top of the \iEhlb\ IAR are corrected for the energy dependence of the \peneTra\ by Eq.~\ref{eq.sigmaCorr}, the cross sections on top of the \gNhlb, \dFhlb\ IAR are reduced by the \peneTra\ ratio $R_{\gNhlb}, R_{\dFhlb}$=8,~12 (Eq.~\ref{eq.ratio}). Cross sections for pure \iEhlb\fFhlb\ and \iEhlb\pThlb\ \cfgs\ are calculated from Eq.~\ref{eq.avg.c.s} with $|c_{\iEhlb\fFhlb}|^2=1$ respectively $|c_{\iEhlb\pThlb}|^2=1$ using s.p. widths from Tab.~\ref{ratio.spWid} (col.~11). From \PbS(d,~p) performed with the Q3D facility, energies (col.~2) and cross sections $\sigma(\approx25^\circ)$ are derived, too. Spins from \cite{Schr1997} and energies from \cite{Valn2001,Schr1997,Rad1996} are given for comparison. 
} \endfirsthead \multicolumn{11}{c}{Energies, spins and cross sections for the ten states continued \dots}\\ \hline \hline $E_x$ &$E_x$ &spin &spin&$E_x$ &$E_x$ &$E_x$ & $\sigma(25^\circ)$ & $ {{\sigma^{\alpha I}_{LJ}}/{R_{LJ}}} $& $ {{\tilde\sigma^{\alpha I}_{LJ}} } $& $ {{\sigma^{\alpha I}_{LJ}} } $& $ {{\sigma^{\alpha I}_{LJ}}/{R_{LJ}}} $& ~~~remark\\ keV &keV & & &keV&keV&keV &$\mu b/sr$&$\mu b/sr$&$\mu b/sr$&$\mu b/sr$&$\mu b/sr$ \\ (p,~p') & (d,~p) & & & & & & (d,~p)&(p,~p') &(p,~p')&(p,~p')&(p,~p') \\ &&&& &&& & on \gNhlb&on \iEhlb &on \iEhlb &on \dFhlb \\ \hline (a) &(a) &(a) & \cite{Schr1997} &\cite{Valn2001}&\cite{Schr1997}&\cite{Rad1996}& (a)&(a) &(a)&calcul.&(a) &(a) this work \\ \hline \hline \endhead \endfoot \endlastfoot \multicolumn{11}{c}{}\\ \hline \hline $E_x$ &$E_x$ &spin &spin&$E_x$ &$E_x$ &$E_x$ & $\sigma(25^\circ)$ & $ {{\sigma^{\alpha I}_{LJ}}/{R_{LJ}}} $& $ {{\tilde\sigma^{\alpha I}_{LJ}} } $& $ {{\sigma^{\alpha I}_{LJ}} } $& $ {{\sigma^{\alpha I}_{LJ}}/{R_{LJ}}} $& ~~~remark\\ keV &keV & & &keV&keV&keV &$\mu b/sr$&$\mu b/sr$&$\mu b/sr$&$\mu b/sr$&$\mu b/sr$ \\ (p,~p') & (d,~p) & & & & & & (d,~p)&(p,~p') &(p,~p')&(p,~p')&(p,~p') \\ &&&& &&& & on \gNhlb&on \iEhlb &on \iEhlb &on \dFhlb \\ \hline (a) &(a) &(a) & \cite{Schr1997} &\cite{Valn2001}&\cite{Schr1997}&\cite{Rad1996}& (a)&(a) &(a)&calcul.&(a) &(a) this work \\ \hline 4680.3 & ~~~-- &$7^{-}$ &($7^{-}$) &4680.7 & 4680.310 & -- &$<2$ & 0.2 & 16 & 16.2 & 0.3 \\ $\pm$0.5 & & & &$\pm$0.5 &$\pm$0.250 & & & & & & \\ 4698.5 &4698.40 &$3^{-}$ & $3^{-}$ &4698.4 & 4698.375 & 4697.9 & 800 & 1.2 & 45 (b) & 6.6 & 1.6 & (b)~see~text \\ $\pm$0.3 &$\pm$0.15 & & &$\pm$0.5 &$\pm$0.040 &$\pm$0.1 & & & & & \\ 4709.4 & ~~~-- &$5^{-}$ &($5^{-}$) &4709.5 & 4709.409 & -- & 10 & 1.7 & 11 & 10.5 & 1.4 \\ $\pm$0.8 & & & &$\pm$3.5 &$\pm$0.250 & & & & & & \\ 4711.2 &4711.0 &$4^{-}$ & $4^{-}$ & -- & 4711.300 & -- & 15 & 0.5 & 15 & 8.6 & 0.9 \\ $\pm$0.8 &$\pm$0.6 & & & &$\pm$0.750 & & & & & & \\ 4761.9 &4762.1 
&$6^{-}$ & $6^{-}$ &4761.8 & 4761.800 & -- & 7 & 0.5 & 19 & 12.4 & 0.3 \\ $\pm$0.5 &$\pm$0.4 & & &$\pm$0.5 &$\pm$0.250 & & & & & & \\ 4918.8 & ~~~-- &$8^{-}$ & &4917.6 & -- & -- &$<2$ & 0.1 & 15 & 16.2 (c) & 0.2 & (c) adapted,\\ $\pm$0.4 & & & &$\pm$1.5 & & & & & & & & see text\\ \hline 5074.6 &5074.8 &$5^{-}$ & -- &5073.7 & 5075.800 & -- & 9 & 0.6 & 32 & 34 & 0.9 \\ $\pm$ .5 &$\pm$0.4 & & &$\pm$1.5 &$\pm$0.200 & & & & & & \\ 5079.8 &5079.8 &$6^{-}$ & & -- & -- & -- & 5 & 0.5 & 40 & 40 & 0.5 \\ $\pm$ .6 &$\pm$0.7 & & & & & & & & & & \\ 5085.3 & ~~~-- &$7^{-}$ &($7^{-}$) &5084.7 & 5085.550 & 5085.7 &$<2$ & 0.5 & 46 & 46 (d) & 1.0 &(d) adapted, \\ $\pm$ .4 & & & &$\pm$1.5 &$\pm$0.250 &$\pm$0.2 & & & & & & see text\\ 5276.4 &5276.2 &$4^{-}$ & &5277.1 & -- & -- & 70 & 0.3 & 26 & 27 & 0.5 \\ $\pm$ .5 &$\pm$0.2 & & &$\pm$1.5 & & & & & & & \\ \hline \hline \label{allStates.info}% \end{longtable*} \section{Conclusion } The shell model is verified to explain the structure of 30 negative parity states in \Pb\ below $E_x$=5.3\,MeV by 1-particle 1-hole \cfgs, 10 states more than in the first derivation of matrix elements of the residual interaction by \cite{AB1973}. The states containing the major strength of the \cfg\ \iEhlb\pOhlb\ were measured. Amplitudes of the \cfg\ \iEhlb\pOhlb\ obtained by an update of the fit done by \cite{AB1973} are verified to be approximately correct, see appendix~A. The ten states containing the major strength of the multiplets \iEhlb\fFhlb\ and \iEhlb\pThlb\ are identified. All states have one dominant shell model neutron \ph\ \cfg\ except for the \iEhlb\fFhlb\ member with the lowest spin $3^{-}$. Some minor admixtures of other \cfgs\ derived from the analysis of \Pb(p,~p') are consistent with results from \PbS(d,~p). 
The detection of the members with the highest spins from the \iEhlb\fFhlb\ and \iEhlb\pThlb\ group (and of \gNhlb\fShlb, too) gives hope of finding even the \iEhlb\fShlb\ group members with spins $8^{-}$ and $9^{-}$, expected at an energy $E_x=$6.550\,MeV, which might be rather pure. The clear identification of the \iEhlb\fFhlb\ and \iEhlb\pThlb\ group and the more carefully measured \angDs\ of the states containing \iEhlb\pOhlb\ strength will help to refine the derivation of a more complete shell model transition matrix extending the \cfg\ space up to $E_x\approx\ $5.2\,MeV, similarly to what was done by \cite{AB1973}, at least for the higher spins. Admixtures of proton \ph\ \cfgs\ can in principle be obtained by assuming a rather complete subshell closure and observing the ortho-normality rule and sum-rule relations; this becomes more relevant since for the higher proton \ph\ \cfgs\ there is no target for transfer reactions of the type \Bi\dHe. The goal of determining matrix elements of the effective residual interaction among \ph\ \cfgs\ in \Pb\ can be approached better than was done by \cite{AB1973} due to the higher quality and larger amount of \expt al data. The evaluation of existing data from our measurements of \Pb(p,~p') and \PbS(d,~p) will make it possible to extend the \cfg\ space eventually up to $E_x\approx$\,6.1\,MeV. \setlength\LTleft{0000pt} \setlength\LTright{000pt} \begin{longtable*}{@{\extracolsep{05pt}}p{35pt}p{25pt}p{35pt}p{35pt}p{35pt}p{35pt}p{35pt}p{35pt}p{35pt}p{35pt}p{35pt}} \caption{Update of the fit from \cite{AB1973}: Unitary transformation $||c_{LJ,lj}^{\alpha\,I,(\nu,\pi)}||$ of the shell model \ph\ \cfgs\ below $E_x=$\,4.5\,MeV to the states $|\alpha\,I>$ with spins $4^{-},5^{-},6^{-}$. The errors of the amplitudes are of the order of 0.01 for amplitudes close to $|c|=1$ and up to 0.20 for $|c|\approx0$.
\quad {\it Footnote: (a)} A sizable admixture in the order of about $|c|=0.1$ of \gNhlb\hNhlb\ and \gNhlb\hEhlb\ is needed to fit the \angD; it depends on the component $a_8$ from the fit of the \angD\ by $ d\sigma(\Theta) /d\Omega = \sum_K^{0,2,4,6,8} a_K P_K(cos(\Theta))$. } \endfirsthead \multicolumn{09}{d}{Update\ of\ fit\ continued}\\ \hline \hline $E_x$ & spin& \gNhlb\pOhlb &\gNhlb\fFhlb &\gNhlb\pThlb &\gNhlb\fShlb & \iEhlb\pOhlb & \hNhlb\sOhlb &\hNhlb\dThlb \\ \hline \endhead \endfoot \endlastfoot \multicolumn{09}{d}{}\\ \hline \hline $E_x(keV)$ & spin& \gNhlb\pOhlb &\gNhlb\fFhlb &\gNhlb\pThlb &\gNhlb\fShlb & \iEhlb\pOhlb & \hNhlb\sOhlb &\hNhlb\dThlb \\ \hline 3475 &$4^{-}$ & +.985& +.060& --.280& --.013& & +.050& --.176\\ 3946 &$4^{-}$ & --.050& +.293& --.030& ~~.000& & +.937& +.118\\ 3995 &$4^{-}$ & --.110& +.984& --.065& ~~.000& & --.389& +.018\\ 4262 &$4^{-}$ & +.050& +.070& +.569& ~~.000& & +.182& +.559\\ 4383 &$4^{-}$ & ~~.000& +.037& +.863& ~~.000& & +.100& --.349\\ \hline 3192 &$5^{-}$ & +.780& +.350& +.220& --.100& --.150& --.230& +.170\\ 3709 &$5^{-}$ & --.430& +.491& --.130& ~~.000& --.300& --.520& +.400\\ 3960 &$5^{-}$ & --.010& +.690& ~~.000& ~~.000& ~~.000& +.720& ~~.000\\ 4125 &$5^{-}$ & --.050& --.340& +.270& ~~.000& --.440& +.330& +.610\\ 4180 &$5^{-}$ & ~~.000& +.015& +.590& ~~.000& +.720& --.130& +.250\\ 4292 &$5^{-}$ & --.050& +.150& +.620& ~~.000& --.400& --.100& --.600\\ \hline 3919 ${}^a$ &$6^{-}$ && +.981 & --.062 & +.119& +.110 && +.137 \\ 4206 &$6^{-}$ && --.222 & +.045 & --.010& +.960 && --.326 \\ 4383 &$6^{-}$ && --.207 & --.338 & ~~.000& +.235 && +.900 \\ 4480 &$6^{-}$ && --.083 & +.905 & --.032& +.231 && +.401 \\ \hline \hline \label{QfipStri}% \end{longtable*} {
\section{Introduction} \vspace{-1em} Soon after its release, ImageNet~\cite{ILSVRC15:rus} became the de facto standard dataset for performance benchmarking in the field of computer vision, primarily thanks to the diverse set of images and classes it contains. This diversity allowed for research on various vision tasks, including, but not limited to, classification~\cite{Alexnet,VGG}, segmentation~\cite{SegNet,Fcn8s}, and localization~\cite{mask_rcnn,faster_rcnn}. Although the tasks put forward during the introduction of ImageNet were considered to be some of the hardest problems to address in the field of computer vision, a number of deep neural networks (DNNs) were, in recent years, able to achieve super-human results on many of these challenges, thus effectively \say{solving} the aforementioned problems~\cite{dosovitskiy2021an}. However, research efforts that make use of ImageNet are not limited to the performance-oriented tasks mentioned before. Indeed, thanks to the diverse set of images it contains, ImageNet enabled a large number of research efforts beyond its initial scope, allowing researchers to experiment with model interpretability~\cite{grad_cam,guided_backprop}, model calibration~\cite{guo2017calibration}, object relations~\cite{RussakovskyFeiFei}, fairness~\cite{yang2019fairer}, and many other topics. One research field that was enriched by the availability of ImageNet is the field of study that focuses on adversarial examples. In this context, the term \say{adversarial examples} refers to meticulously created data points that come with a malicious intent, aimed at deceiving models that are performing a pre-defined task, steering the prediction outcome in favor of the adversary~\cite{biggio2013evasion,LBFGS}. 
Although adversarial examples are a threat for predictive models in domains other than the domain of computer vision~\cite{carlini2018audio,ozbulak2021investigating}, the latter is acknowledged to be the one that suffers the most from adversarial examples, since an adversarial example created from a genuine image, through the use of adversarial perturbation, often looks the same as its unperturbed counterpart~\cite{Goodfellow-expharnessing,mcdaniel2016machine}. This makes it, in most cases, impossible to detect adversarial examples by visually inspecting images. Although the vulnerability of DNNs to adversarial examples in the image domain was originally mostly evaluated through the usage of two datasets, namely MNIST~\cite{lecun1998gradient} and CIFAR~\cite{CIFAR}, the authors of~\cite{DBLP:journalsCarliniW17} revealed that methods derived through the usage of one of these datasets do not necessarily generalize to other datasets. In particular, compared to ImageNet, both of the aforementioned datasets contain images with a smaller resolution and a lower number of classes. As a result, most of the research efforts in recent years started to favor ImageNet over MNIST and CIFAR~\cite{croce2019sparse,guo2019simple_black_black_box,su2018adversarial_activation_functions,xu2018structured_new_local_adv_attack}. From the perspective of adversarial evaluation, ImageNet does not only allow for most, if not all, of the research work that was performed using the previously mentioned datasets, it also enables a wide range of additional research topics in the area of adversariality, such as investigations with regards to regional perturbation~\cite{LAVAN}, color channels~\cite{shamsabadi2020colorfool,xu2018structured_new_local_adv_attack}, and defenses that use certain properties of natural images~\cite{total_variation_defense}. 
However, as demonstrated in this paper, ImageNet has a major shortcoming when it comes to evaluating adversarial attacks, especially in model-to-model transferability scenarios: a large number of synsets/classes in ImageNet are semantically highly similar to one another. Different from previous research efforts that mostly focus on generating more effective adversarial perturbations or evaluating adversarial defenses, we investigate a topic that is yet to be touched upon: untargeted misclassification classes for adversarial examples. Specifically, with the help of two of the most frequently used adversarial attacks and seven unique DNN architectures, including two recently proposed vision transformer architectures, we present a large-scale study that solely focuses on model-to-model adversarial transferability and misclassification classes in the context of ImageNet, resulting in the following contributions: \quad\textbullet\,\,In model-to-model transferability scenarios, we demonstrate that a large portion of adversarial examples are classified into the top-5 predictions obtained for their source image counterparts. \quad\textbullet\,\,With the help of the ImageNet class hierarchy, we show that adversarial examples created from certain synset collections are mostly misclassified into classes belonging to the same collections (e.g., a dog breed is misclassified as another dog breed). \quad\textbullet\,\,Interestingly, we can make the two aforementioned observations consistently for all of the evaluated models, as well as for both adversarial attacks. As a result, we discuss the necessity of evaluating misclassification classes when experimenting with adversarial attacks and untargeted misclassification in the context of ImageNet. 
\vspace{-0.5em} \section{Adversarial attacks} \vspace{-1em} Given an $M$-class classification problem, a data point $\bm{x}\in \mathbb{R}^d$ and its categorical association $\bm{y} \in \mathbb{R}^M$ associated with a correct class $k$ ($y_k = 1$ and $y_m = 0 \,, \forall \, m \in \{1,\ldots, M\} \setminus \{k\}$) are used to train a machine learning model represented by $\theta$. Let $g(\theta, \bm{x}) \in \mathbb{R}^M$ represent the prediction (logit) produced by the model $\theta$ for a data point $\bm{x}$. This data point is then assigned to the class that contains the largest output value: $G(\theta, \bm{x}) = \arg \max (g(\theta, \bm{x}))$. When $G(\theta, \bm{x}) = \arg \max (\bm{y})$, this prediction is recognized as the correct one. For the given setting, a perturbation $\Delta$ bounded by an $L_p$ ball centered at $\bm{x}$ with radius $\epsilon$ is said to be an \textit{adversarial perturbation} if $G(\theta, \bm{x}) \neq G(\theta, \bm{x} + \Delta)$. In this case, $\hat{\bm{x}} = \bm{x} + \Delta$ is said to be an \textit{adversarial example}. Adversarial examples can be highly \textit{transferable}: an adversarial sample that fools a certain classifier can also fool completely different classifiers that have been trained for the same task~\cite{cheng2019improvinge_black_black_box,demontis2019transferability,DBLP:journals/corr/PapernotMG16}. This property, which is called transferability of adversarial examples, is a popular metric for assessing the effectiveness of a particular attack. Let $\theta_1$ and $\theta_2$ represent two DNNs and let $\bm{x}$, $k$, and $\hat{\bm{x}}_1$ be a genuine image, the correct class of this image, and a corresponding adversarial example, respectively, with the adversarial example generated from this genuine image using an attack that targets a class $c$ by leveraging the DNN represented by $\theta_1$.
If $G(\theta_1, \hat{\bm{x}}_1) = G(\theta_2, \hat{\bm{x}}_1) = c$ and $G(\theta_{\{1,2\}}, \bm{x}) = k$, then the adversarial example is said to have achieved \textit{targeted adversarial transferability} to the model $\theta_2$. If $G(\theta_1, \hat{\bm{x}}_1) =c$ but $G(\theta_2, \hat{\bm{x}}_1) \notin \{c, k\}$, the adversarial example in question is classified into a class that is different from the targeted one ($c$) and the correct one ($k$). In cases like this, an adversarial example is said to have achieved \textit{untargeted adversarial transferability}. In the context of ImageNet, the success of targeted transferability for adversarial examples is known to be dramatically lower than the success of untargeted transferability~\cite{su2018robustness_18_imagenet_models_evaluation}. As a result, many studies that propose a novel attack or perform a large-scale analysis of model-to-model transferability use untargeted transferability when showcasing the effectiveness of attacks, without evaluating the classes that adversarial examples are classified into~\cite{croce2019sparse,guo2019simple_black_black_box,xu2018structured_new_local_adv_attack}. Therefore, in this work, we investigate the success of untargeted adversarial transferability and the characteristics of misclassification classes. \begin{figure}[t!] \centering \includegraphics[width=0.4\linewidth]{transferability_matrix/PGD_Detailed_all_ims_transferability_matrix.pdf}\quad\quad\quad \includegraphics[width=0.4\linewidth]{transferability_matrix/CW_Detailed_all_ims_transferability_matrix.pdf} \caption{Number (percentage) of source images that became adversarial examples with PGD (\textit{left}) and CW (\textit{right}).
Adversarial examples are generated by the models listed along the $y$-axis and tested by the models listed along the $x$-axis.} \label{fig:transferability_matrix_untargeted} \end{figure} \vspace{-1em} \section{Methodology} \vspace{-0.5em} \textbf{Models}\,\textendash\,In order to evaluate a variety of model-to-model adversarial transferability scenarios, we employ the following architectures: AlexNet~\cite{Alexnet}, SqueezeNet~\cite{squeezenet}, VGG-16~\cite{VGG}, ResNet-50~\cite{resnet}, and DenseNet-121~\cite{densenet}, as well as two recently proposed vision transformer architectures, namely ViT-Base$/16-224$ and ViT-Large$/16-224$~\cite{dosovitskiy2021an}. \textbf{Data}\,\textendash\,For our adversarial attacks (see further in this section), we use images from the ImageNet validation set as inputs. Hereafter, these unperturbed input images will be referred to as \textit{source images}. In order to perform a trustworthy analysis of adversarial transferability, we ensure that all source images are correctly classified by all employed models. To that end, we filter out all images incorrectly classified by at least one model, leaving us with $19,025$ source images to work with. \textbf{ImageNet hierarchy}\,\textendash\,Classes in ImageNet are organized according to the WordNet hierarchy~\cite{WordNet,ILSVRC15:rus}, grouping classes into various collections depending on their semantic meaning. We use the aforementioned hierarchy in order to measure intra-collection adversarial misclassifications. In that respect, an intra-collection misclassification is when an adversarial example created from a source image that belongs to a class under a collection is misclassified into a class under the same collection (e.g., an image belonging to a cat breed misclassified as another breed of cat is an intra-collection misclassification for the \textit{Feline} collection). More details about the ImageNet hierarchy are given in the supplementary material (see Figure I). 
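The intra-collection test described above can be sketched in a few lines. The parent map below is a tiny, hand-coded fragment of the hierarchy, made up purely for illustration; the actual hierarchy we use is the full WordNet-based ImageNet hierarchy:

```python
# Sketch of the intra-collection misclassification check. The PARENT map is a
# hypothetical, hand-coded fragment of the class hierarchy (illustration only).
PARENT = {
    "tabby": "feline",
    "siamese_cat": "feline",
    "beagle": "canine",
    "feline": "mammalian",
    "canine": "mammalian",
    "mammalian": "vertebrate",
    "vertebrate": "organism",
}

def ancestors(label):
    """Return the set of collections that contain `label`."""
    result = set()
    while label in PARENT:
        label = PARENT[label]
        result.add(label)
    return result

def is_intra_collection(source_label, predicted_label, collection):
    """True if both the source class and the predicted class fall under
    the given collection."""
    return (collection in ancestors(source_label)
            and collection in ancestors(predicted_label))

# A tabby misclassified as a Siamese cat is an intra-collection
# misclassification for the Feline collection ...
print(is_intra_collection("tabby", "siamese_cat", "feline"))  # True
# ... but a tabby misclassified as a beagle is not.
print(is_intra_collection("tabby", "beagle", "feline"))       # False
```

In our experiments, the same ancestor walk is applied to the WordNet synset identifiers of the source class and the predicted class of each adversarial example.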
\textbf{Attacks}\,\textendash\,We use the adversarial examples generated for our previous study~\cite{ozbulak_selection}, where those adversarial examples are generated using two of the most commonly used attacks: Projected Gradient Descent (PGD)~\cite{PGD_attack} and Carlini \& Wagner's attack (CW)~\cite{CW_Attack}. PGD can be seen as a generalization of $L_{\infty}$ attacks~\cite{Goodfellow-expharnessing,IFGS}, aiming at finding an adversarial example $\hat{\bm{x}}$ that satisfies $||\hat{\bm{x}} - \bm{x}||_{\infty} < \epsilon$. The adversarial example is iteratively generated as follows: \begin{align} \hat{\bm{x}}^{(n+1)} = \Pi_{\epsilon}\Big(\hat{\bm{x}}^{(n)} - \alpha \ \text{sign} \big(\nabla_x J(g(\theta, \hat{\bm{x}}^{(n)})_c) \big)\Big) \,, \end{align} with $\hat{\bm{x}}^{(1)} = \bm{x}$, $c$ the selected class, and $J(\cdot)$ the cross-entropy loss. We use PGD with $50$ iterations and set $\epsilon$ to $38/255$. We adopt this constraint as the maximum perturbation-size bound in order to be able to produce a large number of adversarial examples that achieve model-to-model transferability. CW, on the other hand, is a complex attack that incorporates $L_2$ norm minimization: \begin{align} \text{minimize} \quad & ||\bm{x} - (\bm{x} + \Delta)||_{2}^{2} + \; f(\bm{x} + \Delta)\,. \end{align} In the paper introducing CW~\cite{CW_Attack}, multiple loss functions (i.e., $f$) are discussed. However, in later works, the creators of CW prefer to make use of the loss function that is constructed as follows: \begin{align} \label{eq:CW_2} f (\bm{x}) = \max \big( \max \{ g(\theta, \bm{x})_i : i\neq c\} - g(\theta, \bm{x})_c, - \kappa \big) \,, \end{align} where this loss compares the predicted logit value of target class $c$ with the predicted logit value of the next-most-likely class $i$.
The constant $\kappa$ can be used to adjust the \textit{strength} of the produced adversarial examples (for our experiments, we use $\kappa=20$ and the settings described in~\cite{CW_Attack} and~\cite{ozbulak_selection}). We keep executing the attacks until a source image becomes an adversarial example or until the attacks reach a maximum number of iterations. At each iteration, we examine whether or not the images under consideration became adversarial examples for the aforementioned models. \begin{figure}[t!] \begin{tikzpicture} \centering \node[inner sep=0pt] (a) at (0, 0) {\includegraphics[width=0.89\linewidth]{top_n_ims/all_all_classes_topK.pdf}}; \node[align=center,rotate=90] at (-6.75, 0) {\scriptsize Adversarial examples\\\scriptsize misclassified into\\\scriptsize top-K classes}; \node[align=left] at (0.25, -1) {\scriptsize ImageNet class (sorted according to y-axis)}; \end{tikzpicture} \caption{Number of adversarial examples, given per class, that are classified into the top$-\{2,3,4,5\}$ classes predicted for their underlying source images.} \label{fig:top5_misclassifications} \end{figure} \section{Experiments} \vspace{-0.5em} Leveraging the attacks described above and through the usage of $19,025$ source images that are correctly classified by the models employed, we create $289,244$ adversarial examples, where $173,549$ of those adversarial examples are generated with PGD and $115,695$ with CW. Detailed untargeted model-to-model transferability successes of those adversarial examples can be found in Figure~\ref{fig:transferability_matrix_untargeted}. 
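For reference, the PGD update of the previous section can be sketched as follows. A made-up linear two-class model stands in for the DNN $g(\theta,\cdot)$, so the gradient of the cross-entropy loss has a closed form; this is a toy illustration, not the implementation used in our experiments:

```python
import math

# Toy two-class linear model: logits z_i = w_i . x (hypothetical weights).
W = [[1.0, -2.0, 0.5],   # class 0
     [-1.5, 1.0, 2.0]]   # class 1

def logits(x):
    return [sum(wi * xi for wi, xi in zip(w, x)) for w in W]

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def grad_ce(x, c):
    # Gradient of the cross-entropy J(g(theta, x)_c) with respect to x:
    # sum_i (p_i - 1[i = c]) * w_i, with p = softmax(logits(x)).
    p = softmax(logits(x))
    return [sum((p[i] - (1.0 if i == c else 0.0)) * W[i][d] for i in range(len(W)))
            for d in range(len(x))]

def sign(v):
    return 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)

def pgd(x, c, eps, alpha, steps):
    # x^(n+1) = Pi_eps( x^(n) - alpha * sign(grad J) ), targeting class c,
    # where Pi_eps clips the perturbation to the L-infinity ball of radius eps.
    x_hat = list(x)
    for _ in range(steps):
        g = grad_ce(x_hat, c)
        x_hat = [xh - alpha * sign(gd) for xh, gd in zip(x_hat, g)]
        x_hat = [xi + max(-eps, min(eps, xh - xi)) for xi, xh in zip(x, x_hat)]
    return x_hat

x = [0.2, 0.1, -0.3]
p = softmax(logits(x))
source_class = p.index(max(p))          # the toy model predicts class 0 here
target = 1 - source_class
adv = pgd(x, target, eps=0.5, alpha=0.1, steps=20)
p_adv = softmax(logits(adv))
adv_class = p_adv.index(max(p_adv))     # flipped to the target class
```

The projection step guarantees $||\hat{\bm{x}} - \bm{x}||_{\infty} \le \epsilon$ at every iteration, matching the constraint stated above.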
To investigate misclassifications into semantically similar classes, we first look at the adversarial examples that are misclassified into one of the top-5 classes predicted for their source images; since all source images are correctly classified, the first of these five classes is the correct one, while the remaining four are the classes deemed most likely by the model under consideration. Figure~\ref{fig:top5_misclassifications} displays, for each class, the number of adversarial examples whose predictions changed into one of the top-5 classes described above. Specifically, we observe that $215,717$ (approximately $71\%$) adversarial examples are predicted into one of the top-5 predictions of their unperturbed source images, where the classes in the top-5 are often highly similar to the correct predictions for the source images the adversarial examples are generated from (see Figure~II in the supplementary material). Although this graph hints that a large portion of untargeted adversarial transferability successes are (plausible) misclassifications rather than adversarial successes, on its own, it does not provide enough evidence to make such a claim. To solidify this observation, we expand on misclassifications and utilize the ImageNet class hierarchy. In Table~\ref{tbl:main_pgd_cw_short}, we provide the count and the percentage of adversarial examples originating from a number of collections under the \textit{Organism} branch of the hierarchy, together with their intra-collection misclassification rates. Table~\ref{tbl:main_pgd_cw_short} covers all adversarial examples that achieved adversarial transferability to any of the models with either attack. Naturally, the larger the collection, the higher the intra-collection misclassification rate will be.
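The top-5 membership measurement described above amounts to a simple membership test per adversarial example. A minimal sketch, using made-up toy records in place of our actual predictions:

```python
# Sketch of the top-5 measurement: for each adversarial example, test whether
# its predicted class lies among the five most-likely classes of its source
# image. The records below are hypothetical toy data for illustration.
records = [
    # (top-5 classes predicted for the source image, adversarial prediction)
    (["tabby", "siamese_cat", "lynx", "tiger_cat", "egyptian_cat"], "lynx"),
    (["beagle", "basset", "foxhound", "whippet", "saluki"], "basset"),
    (["jeep", "minivan", "pickup", "limousine", "cab"], "snowplow"),
]

hits = sum(adv_class in top5 for top5, adv_class in records)
rate = hits / len(records)
print(f"{hits}/{len(records)} adversarial examples fall into the source top-5")
```

Aggregating this count per source class yields the per-class figures reported above.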
For example, a source image taken from the \textit{Organism} collection has $409$ other classes that may contribute to intra-collection misclassification. However, even for smaller, more granular collections such as the \textit{Bird} collection, which only contains $59$ classes, we observe that adversarial examples are more often than not misclassified into classes in the same collection. Furthermore, a number of collections such as \textit{Canine}, \textit{Bird}, \textit{Reptilian}, and \textit{Arthropod} stand out among other collections for having remarkably high intra-collection misclassification rates. For example, $84\%$ of all adversarial examples that originate from a canine (i.e., dog) image are misclassified as another breed of canine. In Table~\ref{tbl:main_pgd_cw_short}, we also provide misclassifications into the top-3 and the top-5 classes for adversarial examples that originate from source images taken from individual collections. As can be seen, the observations we made when evaluating all adversarial examples also hold true for individual collections, where most of the adversarial examples in those collections have a misclassification rate of about $60\%$ and $70\%$ for the top-3 and the top-5 classes, respectively. To make matters worse, we can even see trends similar to the aforementioned observations when we filter adversarial examples for individual attacks and when we investigate misclassifications on a model-to-model basis, demonstrating that our observations are not specific to a single model or to one of the attacks. Extended results covering more collections and individual models/attacks can be found in the supplementary material (Table~I to Table~V). \begin{table}[t!] \centering \caption{For the adversarial examples that achieved model-to-model transferability, intra-collection misclassifications and misclassifications into the top-\{3,5\} prediction classes in the target models are provided.
The results for the adversarial examples are grouped into collections according to the classes of their source image origins.} \scriptsize \begin{tabular}{llccccccc} \cmidrule[0.25pt]{1-9} \multirow{4}{*}{\shortstack{Hierarchy}} & \multirow{4}{*}{\shortstack{Collection}} & \multirow{4}{*}{\shortstack{Classes\\in collection}} & \multirow{4}{*}{\shortstack{Source\\images\\in collection}} & \multirow{4}{*}{\shortstack{Adversarial\\examples\\originating\\from collection}} & \multicolumn{2}{c}{\multirow{3}{*}{\shortstack{Intra-collection\\misclassifications}}} & \multicolumn{2}{c}{\multirow{3}{*}{\shortstack{Misclassification\\into top-K\\classes}}} \\ ~ & ~ & ~ \\ ~ & ~ & ~ \\ \cmidrule[0.25pt]{6-9} ~ & ~ & ~ & ~ & ~ & Count & \% & Top-3 & Top-5 \\ \cmidrule[0.25pt]{1-9} ~ & All & 1000 & 19,025 & 289,244 & 289,244 & 100.0\% & 59.6\% & 71.1\% \\ \cmidrule[0.25pt]{1-9} 1 & Organism & 410 & 9,390 & 147,621 & 132,865 & \bf 90.0\% & 61.2\% & 72.8\% \\ 1.1 & Creature & 398 & 9,009 & 143,996 & 130,409 & \bf 90.6\% & 61.4\% & 73.1\% \\ 1.1.1 & Domesticated animal & 123 & 2,316 & 50,036 & 41,978 & \bf 83.9\% & 63.4\% & 75.6\% \\ 1.1.2 & Vertebrate & 337 & 7,692 & 126,913 & 112,828 & \bf 88.9\% & 61.3\% & 73.2\% \\ 1.1.2.1 & Mammalian & 218 & 4,665 & 89,004 & 76,351 & \bf 85.8\% & 61.4\% & 73.5\% \\ 1.1.2.1.1 & Primate & 20 & 475 & 9,333 & 5,301 & \bf 56.8\% & 58.9\% & 70.4\% \\ 1.1.2.1.2 & Hoofed mammal & 17 & 419 & 6,206 & 2,751 & 44.3\% & 58.4\% & 71.6\% \\ 1.1.2.1.3 & Feline & 13 & 319 & 3,895 & 1,998 & \bf 51.3\% & 64.3\% & 75.9\% \\ 1.1.2.1.4 & Canine & 130 & 2,502 & 53,294 & 45,089 & \bf 84.6\% & 63.5\% & 75.7\% \\ 1.1.2.2 & Aquatic vertebrate & 16 & 366 & 5,355 & 2,383 & 44.5\% & 65.0\% & 75.6\% \\ 1.1.2.3 & Bird & 59 & 1,937 & 22,402 & 15,993 & \bf 71.4\% & 59.8\% & 71.3\% \\ 1.1.2.4 & Reptilian & 36 & 547 & 7,635 & 4,795 & \bf 62.8\% & 63.8\% & 75.2\% \\ 1.1.2.4.1 & Saurian & 11 & 188 & 2,416 & 1,050 & 43.5\% & 58.4\% & 71.1\% \\ 1.1.2.4.2 & Serpent & 17 & 223 & 3,202 
& 1,700 & \bf 53.1\% & 67.0\% & 77.1\% \\ 1.1.3 & Invertebrate & 61 & 1,317 & 17,083 & 10,698 & \bf 62.6\% & 61.9\% & 72.3\% \\ 1.1.3.1 & Arthropod & 47 & 1,018 & 13,200 & 8,863 & \bf 67.1\% & 63.1\% & 73.5\% \\ 1.1.3.1.1 & Insect & 27 & 652 & 7,850 & 4,468 & \bf 56.9\% & 59.9\% & 70.5\% \\ 1.1.3.1.2 & Arachnoid & 9 & 189 & 2,824 & 1,476 & \bf 52.3\% & 69.7\% & 79.5\% \\ 1.1.3.1.3 & Crustacean & 9 & 137 & 2,035 & 955 & 46.9\% & 70.0\% & 80.1\% \\ \cmidrule[1pt]{1-9} \end{tabular} \label{tbl:main_pgd_cw_short} \vspace{-2em} \end{table} \vspace{-0.5em} \section{Conclusions and outlook} \vspace{-0.5em} In the context of a classification problem, what differentiates an adversarial success from a plausible misclassification? If an adversarial example is misclassified into a class that is highly similar to the class of its unperturbed origin, should it still be considered an adversarial success? In this case, how should we measure the similarity between the classes? The aforementioned questions are not trivial to answer, and different answers may find different logical explanations depending on the context of the evaluation performed. However, given that the threat of adversarial examples is evaluated from the perspective of security, does a semantically similar misclassification that has been made in the context of ImageNet (e.g., a brown dog breed misclassified as another brown dog breed) carry the same weight as a lethal misclassification in the context of self-driving cars (e.g., a road sign misclassification leading to an accident)? Finding answers to the questions presented above requires meticulous investigations on the topic of misclassification classes, where these investigations should involve various threat scenarios, similar to the work presented in~\cite{schwinn2021exploring,zhang2020understanding,zhao2020success}. 
In this paper, we took one of the first steps in analyzing misclassification classes in the context of ImageNet, with the help of large-scale experiments and the ImageNet class hierarchy, showing that a large number of untargeted adversarial misclassifications in model-to-model transferability scenarios are, in fact, plausible misclassifications. In particular, we observe that categories under the \textit{Organism} branch have considerably high intra-collection misclassifications compared to classes in the \textit{Artifact} branch. To aid future work on this topic in the context of ImageNet, we share an easy-to-use class hierarchy of ImageNet, as well as other resources, in the following repository: {\color{magenta}\url{https://github.com/utkuozbulak/imagenet-adversarial-image-evaluation}}. \clearpage \bibliographystyle{abbrv}
\section{Introduction} \label{sec:introduction} \PARstart{T}{he} COVID-19 pandemic has rapidly become one of the biggest global health challenges of recent years. The disease spreads at a fast pace: the reproduction number of COVID-19 ranged from $2.24$ to $3.58$ during the first months of the pandemic \cite{zhao2020preliminary}, meaning that, on average, an infected person transmitted the disease to $2$ or more people. As a result, the number of COVID-19 infections dramatically increased from just a hundred cases in January --almost all of them concentrated in China-- to more than $43$ million in November, spread all around the world \cite{ECDC:2020}. COVID-19 is caused by the coronavirus SARS-COV2, a virus that belongs to the same family as the agents of other respiratory disorders such as the \textit{Severe Acute Respiratory Syndrome} (SARS) and the \textit{Middle East Respiratory Syndrome} (MERS). The symptomatology of COVID-19 is diverse and arises after an incubation period of around $5.2$ days. Symptoms might include fever, dry cough, and fatigue, although headache, haemoptysis, diarrhoea, dyspnoea, and lymphopenia are also reported \cite{rothan2020epidemiology,chen2020epidemiological}. In severe cases, an \textit{Acute Respiratory Distress Syndrome} (ARDS) might develop from the underlying pneumonia associated with COVID-19. For the most serious cases, the estimated period from the onset of the disease to death ranged from 6 to 41 days (with a median of 14 days), depending on the age of the patient and the status of the patient's immune system \cite{rothan2020epidemiology}. Once SARS-COV2 reaches the host at the lung, it enters its cells through a protein called ACE2, which serves as the ``opening'' of the cell lock.
After the genetic material of the virus has multiplied, the infected cell produces proteins that complement the viral structure to produce new viruses. Then, the virus destroys the infected cell, leaves it and infects new cells. The destroyed cells produce radiological lesions \cite{pan2020initial,pan2020imaging,zhou2020coronavirus} such as consolidations and nodules in the lungs, which are observable in the form of ground-glass opacity regions in the XR images (Fig. \ref{rx-COVID-19}). These lesions are more noticeable in patients assessed $5$ or more days after the onset of the disease, and especially in those older than $50$ \cite{song2020emerging}. Findings also suggest that patients recovered from COVID-19 have developed pulmonary fibrosis \cite{Hosseiny2020}, in which the connective tissue of the lung gets inflamed. This leads to a pathological proliferation of the connective tissue between the alveoli and the surrounding blood vessels. Given the aforementioned, radiological imaging techniques --using plain chest \textit{X-Ray} (XR) and/or thorax \textit{Computed Tomography} (CT)-- have become crucial diagnosis and evaluation tools to identify and assess the severity of the infection. Since the declaration of the COVID-19 pandemic by the World Health Organization, four key areas were identified to reduce the impact of the disease in the world: to prepare and be ready; detect, protect, and treat; reduce transmission; and innovate and learn \cite{WHO_pandemic:2020}. Concerning the area of detection, significant efforts have been made to improve the diagnostic procedures of COVID-19.
To date, the gold standard in the clinic is still a molecular diagnostic test based on a \textit{polymerase chain reaction} (PCR), which is precise but time-consuming, requires specialized personnel and laboratories, and is in general limited by the capacities and resources of the health systems. This poses difficulties due to the rapid rate of growth of the disease. An alternative to PCR is the rapid tests, such as those based on \textit{real-time reverse transcriptase-polymerase chain reaction} (RT-PCR), as they can be more rapidly deployed, decrease the load of the specialized laboratories, require less specialized personnel and provide faster diagnosis compared to traditional PCR. Other tests, such as those based on antigens, are now available, but are mainly used for massive testing (i.e., for non-clinical applications) due to a higher chance of missing an active infection. In contrast with RT-PCR, which detects the virus's genetic material, antigen tests identify specific proteins on the surface of the virus, requiring a higher viral load, which significantly shortens the period of sensitivity. In clinical practice, the RT-PCR test is usually complemented with a chest XR, in such a manner that the combined analysis reduces the significant number of false negatives and, at the same time, brings additional information about the extent and severity of the disease. In addition to that, thorax CT is also used as a second-line method for evaluation. Although the evaluation with CT provides more accurate results in early stages and has been shown to have greater sensitivity and specificity \cite{ai2020correlation}, XR imaging has become the standard in the screening protocols, since it is fast, minimally invasive, low-cost, and requires simpler logistics for its implementation.
In the search for rapid, more objective, accurate and sensitive procedures, which could complement the diagnosis and assessment of the disorder, a trend of research has emerged to employ clinical features extracted from thorax CT or chest XR for automatic detection purposes. A potential benefit of studying the radiological images also comes from the potentiality of medical imaging to characterize pneumonic states even in asymptomatic populations \cite{chan2020familial}, although more research is needed in this field, as the absence of findings in infected patients has also been reported \cite{li2020chest}. The consolidation of such technology will permit a speedy and accurate diagnosis of COVID-19, decreasing the pressure on microbiological laboratories in charge of the PCR tests, and providing more objective means of assessing the severity of the disease. To this end, techniques based on deep learning have been employed to characterize XR images with promising results. Although it would be desirable to employ CT for detection purposes, some major drawbacks are often present, including higher costs, a more time-consuming procedure, the necessity of thorough hygienic protocols not to spread infections, and the requirement of specialized equipment that might not be readily available in hospitals or health centres. By contrast, XR is available as a first screening test in many hospitals or health centres, at lower expenses and with less time-consuming imaging procedures. Several approaches for COVID-19 detection based on chest XR images and different deep learning architectures have been published in the last few months, reporting classification accuracies around 90\% or higher. However, the central analysis in most of those works has focused on variations of the network architectures, and less attention has been paid to the variability factors that a real solution should tackle before it can be deployed in the medical setting. In this sense, no analyses have been provided to demonstrate the reliability of the predictions made by the networks, which acquires special relevance in the context of medical solutions. Moreover, most of the works in the state of the art have validated their results with datasets containing dozens or a few hundreds of COVID-19 samples, limiting the impact of the proposed solutions. With these antecedents in mind, this paper uses a deep learning algorithm based on CNN, data augmentation and regularization techniques to handle data imbalance, for the discrimination between COVID-19, controls and other types of pneumonia. The methods are tested with the largest corpus to date known by the authors. Three different sets of experiments were carried out in the search for the most suitable and coherent approach. To this end, the paper also uses explainability techniques to gain insight into the manner in which the neural network learns, and interpretability in terms of the overlap between the regions of interest selected by the network and those that are more likely affected by COVID-19. A critical analysis of factors that affect the performance of automatic systems based on deep learning is also carried out. This paper is organized as follows: Section \ref{background} presents some background and antecedents on the use of deep learning for COVID-19 detection; Section \ref{sec:methodology} presents the methodology; Section \ref{sec:results} presents the results obtained; whereas Section \ref{sec:disscon} presents the discussion and main conclusions of this paper.
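The imbalance handling mentioned above can be illustrated with a minimal sketch. The following NumPy snippet (illustrative only; the class counts are hypothetical, not those of the corpus used in this work) shows one common option, a cross-entropy loss weighted inversely to class frequency:

```python
import numpy as np

def inverse_frequency_weights(counts):
    """Per-class weights inversely proportional to class frequency.
    A perfectly balanced dataset yields weight 1.0 for every class."""
    counts = np.asarray(counts, dtype=float)
    return counts.sum() / (len(counts) * counts)

def weighted_cross_entropy(probs, labels, weights):
    """Mean class-weighted cross-entropy for a batch.
    probs: (N, C) predicted probabilities; labels: (N,) integer class ids."""
    eps = 1e-12
    picked = probs[np.arange(len(labels)), labels]
    return float(np.mean(weights[labels] * -np.log(picked + eps)))

# Hypothetical counts for controls, pneumonia and COVID-19 (minority class):
# the minority class receives the largest weight.
w = inverse_frequency_weights([8000, 5500, 350])
```

Unlike resampling or data augmentation, such weighting leaves the data untouched and only rescales each sample's contribution to the loss.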
\begin{figure*}[!hp] \setlength{\tempheight}{0.18\textheight} \settowidth{\tempwidth}{\includegraphics[height=\tempheight]{images/CRXNIH__0__n__DX__PA__19169__5__00019169_017.png}} \centering \hspace{0.1cm} \columnname{Control}\hspace{0.4cm} \columnname{Pneumonia}\hspace{0.3cm} \columnname{COVID-19}\\ \rowname{Raw image} \subfloat[] { \label{rx-control} \includegraphics[width=0.25\textwidth]{images/CRXNIH__0__n__DX__PA__19169__5__00019169_017.png} } \subfloat[] { \label{rx-Pneumonia} \includegraphics[width=0.25\textwidth]{images/CRXNIH__2__n__DX__PA__2439__10__00002439_010.png} } \subfloat[] { \label{rx-COVID-19} \includegraphics[width=0.25\textwidth]{images/HM2__1__n__AP__CR__2049__17__1.3.51.0.7.1724101731.55385.64073.43842.45459.6508.63735.DC3.png} } \\ \rowname{Exp 1} \subfloat[] { \label{Exp1-control} \includegraphics[width=0.25\textwidth]{images/cam_CRXNIH__0__n__DX__PA__19169__5__00019169_017 2.png} } \subfloat[] { \label{Exp1-Pneumonia} \includegraphics[width=0.25\textwidth]{images/cam_CRXNIH__2__n__DX__PA__2439__10__00002439_010 2.png} } \subfloat[] { \label{Exp1-COVID-19} \includegraphics[width=0.25\textwidth]{images/cam_HM2__1__n__AP__CR__2049__17__1.3.51.0.7.1724101731.55385.64073.43842.45459.6508.63735.DC3 2.png} } \\ \rowname{Exp 2} \subfloat[] { \label{Exp2-control} \includegraphics[width=0.25\textwidth]{images/cam_CRXNIH__0__n__DX__PA__19169__5__00019169_017 3.png} } \subfloat[] { \label{Exp2-Pneumonia} \includegraphics[width=0.25\textwidth]{images/cam_CRXNIH__2__n__DX__PA__2439__10__00002439_010 3.png} } \subfloat[] { \label{Exp2-COVID-19} \includegraphics[width=0.25\textwidth]{images/cam_HM2__1__n__AP__CR__2049__17__1.3.51.0.7.1724101731.55385.64073.43842.45459.6508.63735.DC3 3.png} } \\ \rowname{Exp 3} \subfloat[] { \label{Exp3-control} \includegraphics[width=0.25\textwidth]{images/cam_CRXNIH__0__n__DX__PA__19169__5__00019169_017.png} } \subfloat[] { \label{Exp3-Pneumonia} 
\includegraphics[width=0.25\textwidth]{images/cam_CRXNIH__2__n__DX__PA__2439__10__00002439_010.png} } \subfloat[] { \label{Exp3-COVID-19} \includegraphics[width=0.25\textwidth]{images/cam_HM2__1__n__AP__CR__2049__17__1.3.51.0.7.1724101731.55385.64073.43842.45459.6508.63735.DC3.png} } \caption[]{ Experiments considered in the paper. \textbf{First row:} raw chest XR images belonging to the control, pneumonia, and COVID-19 classes. \textbf{Second row:} Grad-CAM activation mapping for the XR images. Despite the high accuracy of the methods, in some cases the model focuses its attention on areas different from the lungs. \textbf{Third row:} Grad-CAM activation mapping after zooming in, cropping to a squared region of interest and resizing. Zooming in to the region of interest forces the model to focus its attention on the lungs, but errors are still present. \textbf{Fourth row:} Grad-CAM activation mapping after a zooming and segmentation procedure. Zooming in and segmenting force the model to focus its attention on the lungs. The black background represents the mask introduced by the segmentation procedure. } \label{fig:XR-examples} \end{figure*} \section{Background} \label{background} A large body of research has emerged on the use of \textit{Artificial Intelligence} (AI) for the detection of different respiratory diseases using plain XR images. For instance, in \cite{rajpurkar2017chexnet} authors developed a $121$-layer \textit{Convolutional Neural Network} (CNN) architecture, called Chexnet, which was trained with a dataset of $100,000$ XR images for the detection of different types of pneumonia. The study reports an area under the \textit{Receiver Operating Characteristic} (ROC) curve of $0.76$ in a multiclass scenario composed of $14$ classes.
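The preprocessing variants compared in Fig. \ref{fig:XR-examples} --cropping to a squared region of interest around the lungs and masking out everything outside the lung fields-- can be sketched as follows, assuming a binary lung mask produced by a separate segmentation step (the image and mask below are synthetic placeholders):

```python
import numpy as np

def crop_to_square_roi(image, mask, margin=8):
    """Crop image and mask to a square bounding box centred on the lung mask."""
    ys, xs = np.nonzero(mask)
    top, bottom = ys.min(), ys.max()
    left, right = xs.min(), xs.max()
    side = max(bottom - top, right - left) + 2 * margin
    cy, cx = (top + bottom) // 2, (left + right) // 2
    half = side // 2
    y0, x0 = max(cy - half, 0), max(cx - half, 0)
    y1 = min(y0 + side, image.shape[0])
    x1 = min(x0 + side, image.shape[1])
    return image[y0:y1, x0:x1], mask[y0:y1, x0:x1]

def apply_mask(image, mask):
    """Zero out everything outside the lung fields (black background)."""
    return np.where(mask > 0, image, 0)

# Synthetic 256x256 "XR" with a mock lung region
img = np.random.default_rng(0).integers(0, 255, (256, 256))
lung = np.zeros((256, 256), dtype=np.uint8)
lung[60:200, 40:220] = 1
roi, roi_mask = crop_to_square_roi(img, lung)
segmented = apply_mask(roi, roi_mask)
```

A final resize of `roi` (or `segmented`) to the network's input resolution would complete the pipeline.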
Directly related to the COVID-19 detection, three CNN architectures (ResNet50, InceptionV3 and InceptionResNetV2) were considered in \cite{narin2020automatic}, using a database of just $50$ controls and $50$ COVID-19 patients. The best accuracy ($98\%$) was obtained with ResNet50. In \cite{hemdan2020covidx}, seven different deep CNN models were tested using a corpus of $50$ controls and $25$ COVID-19 patients. The best results were attained with the VGG19 and DenseNet models, obtaining F1-scores of $0.89$ and $0.91$ for controls and patients. The COVID-Net architecture was proposed in \cite{wang2020covid}. The net was trained with an open repository, called COVIDx, composed of $13,975$ XR images, although only $358$ --coming from $266$ patients-- belonged to the COVID-19 class. The attained accuracy was $93.3\%$. In \cite{zhang2020covid}, a deep anomaly detection algorithm was employed for the detection of COVID-19 in a corpus of $100$ COVID-19 images (taken from $70$ patients) and $1,431$ control images (taken from $1,008$ patients). A sensitivity of $96\%$ and a specificity of $70\%$ were obtained. In \cite{Islam2020}, a combination of a CNN for feature extraction and a \textit{Long Short Term Memory Network} (LSTM) for classification was used for automatic detection purposes. The model was trained with a corpus gathered from different sources, consisting of $4,575$ XR images: $1,525$ of COVID-19 (although $912$ come from a repository applying data augmentation), $1,525$ of pneumonia, and $1,525$ of controls. In a $5$-folds cross-validation scheme, a $99$\% accuracy was reported.
In \cite{Civit-Masot2020}, the VGG16 network was used for classification, employing a database of $132$ COVID-19, $132$ control and $132$ pneumonia images. Following a hold-out validation, about $100$\% accuracy was obtained identifying COVID-19, being lower on the other classes. By using transfer-learning based on the Xception network, authors in \cite{NarayanDas2020} adapted a model for the classification of COVID-19. Experiments were carried out on a database of $127$ COVID-19, $500$ controls and $500$ patients with pneumonia gathered from different sources, attaining about $97$\% accuracy. A similar approach, followed in \cite{Ozturk2020}, used the same corpus for the binary classification of COVID-19 and controls, and for the multi-class classification of COVID-19, controls and pneumonia. With a modification of the Darknet model for transfer-learning, and a $5$-folds cross-validation, a $98$\% accuracy in binary classification and $87$\% in multi-class classification were obtained. Another Xception transfer-learning-based approach was presented in \cite{Khan2020}, but considering two multi-class classification tasks: i) controls vs. COVID-19 vs. viral pneumonia vs. bacterial pneumonia; ii) controls vs. COVID-19 vs. pneumonia. To deal with the imbalance of the corpus, undersampling was used to randomly discard registers from the larger classes, obtaining $290$ COVID-19, $310$ controls, $330$ bacterial pneumonia and $327$ viral pneumonia chest XR images. The reported accuracy in the $4$-class problem was $89$\%, and $94$\% in the $3$-class scenario.
Moreover, in a $3$-class cross-database experiment, the accuracy was $90$\%. In \cite{Minaee2020}, four CNN networks (ResNet18, ResNet50, SqueezeNet, and DenseNet-121) were used for transfer learning. Experiments were performed on a database of $184$ COVID-19 and $5,000$ no-finding and pneumonia images. Reported results indicate a sensitivity of about $98$\% and a specificity of $93$\%. In \cite{Apostolopoulos2020}, five state-of-the-art CNN systems --VGG19, MobileNetV2, Inception, Xception, InceptionResNetV2-- were tested in a transfer-learning setting to identify COVID-19 from control and pneumonia images. Experiments were carried out in two partitions: one of $224$ COVID-19, $700$ bacterial pneumonia and $504$ control images; and another that considered the previous normal and COVID-19 data, but included $714$ cases of bacterial and viral pneumonia. The MobileNetV2 net attained the best results, with $96$\% and $94$\% accuracy in the $2$- and $3$-class classification, respectively. In \cite{Apostolopoulos2020b}, the MobileNetV2 net was trained from scratch, and compared to one net based on transfer-learning and to another based on hybrid feature extraction with fine-tuning. Experiments performed on a dataset of $3,905$ XR images of $6$ diseases indicated that training from scratch outperforms the other approaches, attaining $87$\% accuracy in the multi-class classification and $99$\% in the detection of COVID-19. A system, also grounded on the InceptionNet and transfer-learning, was presented in \cite{Das2020}. Experiments were performed on $6$ partitions of XR images with COVID-19, pneumonia, tuberculosis and controls. Reported results indicate $99$\% accuracy, in a $10$-folds cross-validation scheme, in the classification of COVID-19 from the other classes. In \cite{Togacar2020}, fuzzy colour techniques were used as a pre-processing stage to remove noise and enhance XR images in a $3$-class classification setting (COVID-19, pneumonia and controls). The pre-processed images and the original ones were stacked. Then, two CNN models were used to extract features: MobileNetV2 and SqueezeNet. A feature selection technique based on social mimic optimization and a Support Vector Machine (SVM) were used. Experiments were performed on a corpus of $295$ COVID-19, $65$ controls and $98$ pneumonia XR images, attaining about $99$\% accuracy. Given the limited amount of COVID-19 images, some approaches have focused on generating artificial data to train better models. In \cite{Waheed2020}, an auxiliary \textit{Generative Adversarial Network} (GAN) was used to produce artificial COVID-19 XR images from a database of $403$ COVID-19 and $1,124$ controls. Results indicated that data augmentation increased accuracy from $85$\% to $95$\% on the VGG16 net. Similarly, in \cite{Loey2020}, GAN was used to augment a database of $307$ images belonging to four classes: controls, COVID-19, bacterial and viral pneumonia. Different CNN models were tested in a transfer-learning-based setting, including Alexnet, Googlenet, and Restnet18.
The best results were obtained with Googlenet, achieving $99$\% in a multi-class classification approach. In \cite{Toraman2020}, a CNN based on capsule networks (CapsNet) was used for binary (COVID-19 vs. controls) and multi-class classification (COVID-19 vs. pneumonia vs. controls). Experiments were performed on a dataset of $231$ COVID-19, $1,050$ pneumonia and $1,050$ control XR images. Data augmentation was used to increase the number of COVID-19 images to $1,050$. In a $10$-folds cross-validation scheme, $97$\% accuracy for binary classification and $84$\% for multi-class classification were achieved. The CovXNet architecture, based on depth-wise dilated convolution networks, was proposed in \cite{Mahmud2020}. In a first stage, pneumonia (viral and bacterial) and control images were employed for pretraining. Then, a refined model for COVID-19 is obtained using transfer learning. In experiments using two databases, $97$\% accuracy was achieved for COVID-19 vs. controls, and $90$\% for COVID-19 vs. controls vs. bacterial and viral cases of pneumonia. In \cite{Oh2020}, an easy-to-train neural network with a limited number of training parameters was presented. To this end, patch phenomena found on XR images were studied (bilateral involvement, peripheral distribution and ground-glass opacification) to develop a lung segmentation and a patch-based neural network that distinguished COVID-19 from controls. The basis of the system was the ResNet18 network. Saliency maps were also used to produce interpretable results.
In experiments performed on a database of controls ($191$), bacterial pneumonia ($54$), tuberculosis ($57$) and viral pneumonia ($20$), about $89$\% accuracy was obtained. Likewise, interpretable results were reported in terms of large correlations between the activation zones of the saliency maps and the radiological findings found in the XR images. In addition to that, the authors indicate that when the lung segmentation approach was not considered, the system's accuracy decreased to about $80$\%. In \cite{Altan2020}, 2D curvelet transformations were used to extract features from XR images. A feature selection algorithm based on meta-heuristics was used to find the most relevant characteristics, while a CNN model based on EfficientNet-B0 was used for classification. Experiments were carried out on a database of $1,341$ controls, $219$ COVID-19, and $1,345$ viral pneumonia images, and $99$\% classification accuracy was achieved with the proposed approach. Multi-class and hierarchical classification of different types of diseases producing pneumonia (with $7$ labels and $14$ label paths), including COVID-19, were explored in \cite{Pereira2020}. Since the database of $1,144$ XR images was heavily imbalanced, different resampling techniques were considered.
By following a transfer-learning approach based on a CNN architecture to extract features, and a hold-out validation with $5$ different classification techniques, a macro-avg F1-score of $0.65$ and an F1-score of $0.89$ were obtained for the multi-class and hierarchical classification scenarios, respectively. In \cite{Brunese2020}, a three-phase approach is presented: i) to detect the presence of pneumonia; ii) to classify between COVID-19 and pneumonia; and iii) to highlight regions of interest in the XR images. The proposed system utilized a database of $250$ images of COVID-19 patients, $2,753$ with other pulmonary diseases and $3,520$ controls. By using a transfer-learning system based on VGG16, about $0.97$ accuracy was reported. A CNN-hierarchical approach using decision trees (based on ResNet18) was presented in \cite{Yoo2020}, in which a first tree classified XR images into the normal or pathological classes; the second identified tuberculosis; and the third, COVID-19. Experiments were carried out on $3$ partitions obtained after having gathered images from different sources and data augmentation. The accuracy for each one of the decision trees --starting from the first-- was about $98$\%, $80$\%, and $95$\%, respectively.
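Random undersampling, as applied in \cite{Khan2020} to discard registers from the majority classes, reduces every class to the size of the smallest one. A minimal NumPy sketch (the label vector below is hypothetical, not the actual corpus):

```python
import numpy as np

def random_undersample(labels, rng=None):
    """Return sample indices keeping, per class, as many samples as the
    smallest class has; the surplus is discarded at random."""
    rng = np.random.default_rng(rng)
    labels = np.asarray(labels)
    classes, counts = np.unique(labels, return_counts=True)
    n_keep = counts.min()
    kept = [rng.choice(np.flatnonzero(labels == c), size=n_keep, replace=False)
            for c in classes]
    return np.sort(np.concatenate(kept))

# Hypothetical imbalanced label vector: 0=control, 1=pneumonia, 2=COVID-19
y = np.array([0] * 500 + [1] * 330 + [2] * 290)
idx = random_undersample(y, rng=0)  # y[idx] holds 290 samples per class
```

The obvious cost of this strategy is that data from the majority classes is thrown away, which is why loss weighting or data augmentation are often preferred when the minority class is very small.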
\DIFaddbegin \subsection*{\DIFadd{Issues affecting results in the literature}} \begin{table*}[htbp] \caption{\DIFaddFL{Summary of the literature in the field}} \begin{tabular}{@{}lp{5.5cm}lllllp{2cm}l@{}} \toprule \multirow{2}{*}{\textbf{Ref.}} & \multirow{2}{*}{\textbf{Architecture}} & \multicolumn{3}{c}{\textbf{Number of cases}} & \multirow{2}{*}{\textbf{Classes}} & \multirow{2}{*}{\textbf{\begin{tabular}[c]{@{}l@{}}Performance\\ metrics\end{tabular}}} & \multirow{2}{*}{\textbf{\begin{tabular}[c]{@{}l@{}}Lung\\segment.\end{tabular}}} & \multirow{2}{*}{\textbf{Explainable}} \\ \cmidrule\DIFaddFL{(lr)}{\DIFaddFL{3-5}} & & \textbf{\DIFaddFL{COVID-19}} & \textbf{\DIFaddFL{Controls}} & \textbf{\DIFaddFL{Others}} & & & & \\ \midrule \DIFaddFL{\mbox \cite{narin2020automatic} }\hspace{0pt }& \DIFaddFL{InceptionV3, InceptionResNetV2, ResNet50 }& \DIFaddFL{50 }& \DIFaddFL{50 }& \DIFaddFL{-- }& \DIFaddFL{2 }& \DIFaddFL{Acc=98\% }& \DIFaddFL{N }& \DIFaddFL{N }\\ \DIFaddFL{\mbox \cite{hemdan2020covidx} }\hspace{0pt }& \DIFaddFL{VGG19, DenseNet }& \DIFaddFL{25 }& \DIFaddFL{50 }& \DIFaddFL{-- }& \DIFaddFL{2 }& \DIFaddFL{AvF1=0.90 }& \DIFaddFL{N }& \DIFaddFL{N }\\ \DIFaddFL{\mbox \cite{wang2020covid} }\hspace{0pt }& \DIFaddFL{COVID-Net, ResNet50, VGG19 }& \DIFaddFL{358 }& \DIFaddFL{8066 }& \DIFaddFL{5538 }& \DIFaddFL{3 }& \DIFaddFL{Acc=93.3\% }& \DIFaddFL{N }& \DIFaddFL{N }\\ \DIFaddFL{\mbox \cite{zhang2020covid} }\hspace{0pt }& \DIFaddFL{EfficientNet }& \DIFaddFL{100 }& \DIFaddFL{1431 }& \DIFaddFL{-- }& \DIFaddFL{2 }& \DIFaddFL{Se=96\% Sp=70\% }& \DIFaddFL{N }& \DIFaddFL{N }\\ \DIFaddFL{\mbox \cite{Islam2020} }\hspace{0pt }& \DIFaddFL{CNN + LSTM }& \DIFaddFL{1525* }& \DIFaddFL{1525 }& \DIFaddFL{1525 }& \DIFaddFL{3 }& \DIFaddFL{Acc=99\% }& \DIFaddFL{N }& \DIFaddFL{N }\\ \DIFaddFL{\mbox \cite{NarayanDas2020} }\hspace{0pt }& \DIFaddFL{Xception }& \DIFaddFL{127 }& \DIFaddFL{500 }& \DIFaddFL{500 }& \DIFaddFL{3 }& \DIFaddFL{Acc=97\% }& \DIFaddFL{N }& \DIFaddFL{N }\\ \DIFaddFL{\mbox 
\cite{Ozturk2020} }\hspace{0pt }& \DIFaddFL{Darknet }& \DIFaddFL{127 }& \DIFaddFL{500 }& \DIFaddFL{500 }& \DIFaddFL{3 }& \DIFaddFL{Acc=87\% }& \DIFaddFL{N }& \DIFaddFL{N }\\ \DIFaddFL{\mbox \cite{Khan2020} }\hspace{0pt }& \DIFaddFL{Xception }& \DIFaddFL{290 }& \DIFaddFL{310 }& \DIFaddFL{657 }& \DIFaddFL{3 }& \DIFaddFL{Acc=93\% }& \DIFaddFL{N }& \DIFaddFL{N }\\ \DIFaddFL{\mbox \cite{Apostolopoulos2020} }\hspace{0pt }& \DIFaddFL{VGG19, MobileNetV2, Inception, Xception, InceptionResNetV2 }& \DIFaddFL{224 }& \DIFaddFL{504 }& \DIFaddFL{700 }& \DIFaddFL{3 }& \DIFaddFL{Acc=94\% }& \DIFaddFL{N }& \DIFaddFL{N }\\ \DIFaddFL{\mbox \cite{Togacar2020} }\hspace{0pt }& \DIFaddFL{MobileNetV2, SqueezeNet }& \DIFaddFL{295 }& \DIFaddFL{65 }& \DIFaddFL{98 }& \DIFaddFL{3 }& \DIFaddFL{Acc=99\% }& \DIFaddFL{N }& \DIFaddFL{N }\\ \DIFaddFL{\mbox \cite{Das2020} }\hspace{0pt }& \DIFaddFL{Inception }& \DIFaddFL{162 }& \DIFaddFL{2003 }& \DIFaddFL{4650 }& \DIFaddFL{3 }& \DIFaddFL{Acc=99\% }& \DIFaddFL{N }& \DIFaddFL{N }\\ \DIFaddFL{\mbox \cite{Pereira2020} }\hspace{0pt }& \DIFaddFL{Inception-V3 }& \DIFaddFL{90 }& \DIFaddFL{1000 }& \DIFaddFL{687 }& \DIFaddFL{7 }& \DIFaddFL{AvF1=0.65 }& \DIFaddFL{N }& \DIFaddFL{N }\\ \DIFaddFL{\mbox \cite{Waheed2020} }\hspace{0pt }& \DIFaddFL{VGG16 }& \DIFaddFL{403 }& \DIFaddFL{1124 }& \DIFaddFL{-- }& \DIFaddFL{2 }& \DIFaddFL{Acc=95\% }& \DIFaddFL{N }& \DIFaddFL{N }\\ \DIFaddFL{\mbox \cite{Loey2020} }\hspace{0pt }& \DIFaddFL{Alexnet, Googlenet, Restnet18 }& \DIFaddFL{69 }& \DIFaddFL{79 }& \DIFaddFL{158 }& \DIFaddFL{4 }& \DIFaddFL{Acc=99\% }& \DIFaddFL{N }& \DIFaddFL{N }\\ \DIFaddFL{\mbox \cite{Toraman2020} }\hspace{0pt }& \DIFaddFL{Capsnet }& \DIFaddFL{231 }& \DIFaddFL{1050 }& \DIFaddFL{1050 }& \DIFaddFL{3 }& \DIFaddFL{Acc=84\% }& \DIFaddFL{N }& \DIFaddFL{N }\\ \DIFaddFL{\mbox \cite{Mahmud2020} }\hspace{0pt }& \DIFaddFL{CovXNet }& \DIFaddFL{305 }& \DIFaddFL{305 }& \DIFaddFL{610 }& \DIFaddFL{4 }& \DIFaddFL{Acc=90.2\% }& \DIFaddFL{N }& \DIFaddFL{Y }\\ 
\cite{Oh2020} & ResNet18 & 180 & 191 & 131 & 4 & Acc=89\% & Y & Y \\
\cite{Civit-Masot2020} & VGG16 & 132 & 132 & 132 & 3 & AvF1=0.85 & N & N \\
\cite{Yoo2020} & ResNet18 & 162 & 585 & 585 & 3 & Acc=95\% & N & N \\
\cite{Minaee2020} & ResNet18, ResNet50, SqueezeNet, DenseNet121 & 184 & 2400 & 2600 & 2 & Se=98\% Sp=92.9\% & N & N \\
\cite{Brunese2020} & VGG16 & 250 & 3520 & 2753 & 4 & Acc=97\% & N & N \\
\cite{Altan2020} & EfficientNet-B & 219 & 1341 & 1345 & 3 & Acc=99\% & N & N \\ \bottomrule
\end{tabular}

* 912 coming from a repository of data-augmented images.
\label{tab:SoASummary}
\end{table*}

Table \ref{tab:SoASummary} presents a summary of the state of the art in the automatic detection of COVID-19 based on XR images and deep learning. Despite the excellent results reported, the review reveals that some of the proposed systems suffer from shortcomings that affect the conclusions that can be extracted from them, limiting their transferability to the clinical environment. Likewise, there exist variability factors that have not been studied in depth in these papers and which can be regarded as important.

For instance, one of the issues that most affects the reviewed systems for detecting COVID-19 from plain chest XR images is the use of very limited datasets, which compromises their generalization capabilities. Indeed, to the authors' knowledge, the paper employing the largest COVID-19 database to date considers $1,525$ XR images gathered from different sources. However, of these, $912$ belong to a data-augmented repository, which does not include additional information about the initial number of files or the number of augmented images. In general terms, most of the works employ fewer than $300$ COVID-19 XR images, and some systems use as few as $50$ images. This is understandable, since some of these works were published at the onset of the pandemic, when the number of available registers was limited.

On the other hand, a good balance in the age of the patients is considered important to prevent the model from learning age-specific features. However, several previous works have used XR images from children to populate the pneumonia class\footnote{First efforts used the RSNA Pneumonia Detection Challenge dataset, which is focused on the detection of pneumonia cases in children. \url{https://www.kaggle.com/c/rsna-pneumonia-detection-challenge/overview}}. This might be biasing the results, given the age differences with respect to COVID-19 patients.

Although many works in the literature report a good performance in the detection of COVID-19, most of the approaches follow a brute-force strategy, exploiting the potential of deep learning to correlate with the outputs (i.e. the class labels) but providing low interpretability and explainability of the process. It is therefore unclear whether the good results are due to the actual capability of the systems to extract information related to the pathology, or due to their capability to learn other aspects that bias and compromise the results. As a matter of example, just one of the works reported in the literature follows a strategy that forces the network to focus on the most significant areas of interest for COVID-19 detection \cite{Oh2020}. It does so by proposing a methodology based on a semantic segmentation of the lungs. In the remaining cases, it is unclear whether the models are analyzing the lungs, or whether they are categorizing based on other information available in the image, which might be useful for classification purposes but might lack diagnostic interest. This is relevant since, in all the analyzed works, the pneumonia and control classes come from a certain repository, whereas others, such as COVID-19, come from a combination of sources and repositories. Having classes with different recording conditions might certainly affect the results and, as such, a critical study of this aspect is needed. Along the same lines, other variability issues, such as the sensor technology employed, the type of projection used, the sex of the patients, and even their age, require a thorough study.

Finally, the review revealed that most of the published papers showed an excellent correlation with the disease but low interpretability and explainability (see Table \ref{tab:SoASummary}). Indeed, in clinical practice, it is often more desirable to obtain interpretable results that correlate with pathological conditions, or with a certain demographic or physiological variable, than a black-box system that simply states a binary or multiclass decision. From the revision of the literature, only \cite{Oh2020} and \cite{Mahmud2020} partially addressed this aspect. Thus, further research on this topic is needed.

With these ideas in mind, this paper addresses these aspects by training and testing with a wide corpus of XR images, proposing and comparing two strategies to preprocess the images, analyzing the effect of some variability factors, and providing some insights towards more explainable and interpretable results. The major goal is to present a critical overview of these aspects, since they might be affecting the modelling capabilities of deep learning systems for the detection of COVID-19.

\section{Methodology}
\label{sec:methodology}
The design methodology is presented in this section. The procedure followed to train the neural network is described first, along with the process followed to create the dataset. The network and the source code to train it are available at \url{https://github.com/jdariasl/COVIDNET}, so results can be readily reproduced by other researchers.

\subsection{The network}
\label{sec:network}
The core of the system is a deep CNN based on the COVID-Net\footnote{Following the PyTorch implementation available at \url{https://github.com/IliasPap/COVIDNet}} proposed in \cite{wang2020covid}. Some modifications were made to include regularization components in the last two dense layers and a weighted categorical cross-entropy loss function in order to compensate for the class imbalance. The structure of the network was also refactored in order to allow gradient-based localization estimations \cite{Selvaraju_2019}, which are used after training in the search for an explainable model.

The network was trained with the corpus described in Section \ref{sec:corpus} using the Adam optimizer with a learning rate policy in which the learning rate decreases when learning stagnates for a period of time (i.e., 'patience'). The following hyperparameters were used for training: learning rate=$2^{-5}$, number of epochs=$24$, batch size=$32$, factor=$0.5$, patience=$3$. Furthermore, data augmentation for the pneumonia and COVID-19 classes was leveraged with the following augmentation types: horizontal flip, Gaussian noise with a variance of $0.015$, rotation, elastic deformation, and scaling. The variant of the COVID-Net was built and evaluated using the PyTorch library \cite{NEURIPS2019_9015}.

The CNN features from each image are concatenated by a flatten operation and the resulting feature map is fed to three fully connected layers to generate a probability score for each class. The first two fully connected layers include dropout regularization of $0.3$ and ReLU activation functions. Dropout was necessary because the original network tended to overfit from the very beginning of the training phase. The input layer of the network rescales the images keeping the aspect ratio, with the shortest dimension scaled to $224$ pixels.
Then, the input image is cropped to a square of $224 \times 224$ pixels located in the centre of the image. Images are normalized using a z-score function with parameters $mean=[0.485, 0.456, 0.406]$ and $std=[0.229, 0.224, 0.225]$ for each of the three RGB channels, respectively. Even though we are working with grayscale images, the network architecture was designed to be pre-trained on a general-purpose database including coloured images; this characteristic was kept in case it would be necessary to use some transfer learning strategy in the future. The output layer of the network provides a score for each of the three classes (i.e. control, pneumonia, or COVID-19), which is converted into three probability estimates (in the range $[0, 1]$) using a softmax activation function. The final decision about the class membership is made according to the highest of the three probability estimates.

\subsection{The corpus}
\label{sec:corpus}
The corpora used in the paper have been compiled from a set of \textit{Posterior-Anterior} (PA) and \textit{Anterior-Posterior} (AP) XR images from different public sources. The compilation contains images from participants without any observable pathology (controls or no findings), pneumonia, and COVID-19 cases. After the compilation, two subsets of images were generated, i.e. training and testing. Table \ref{tab:classDistribution} contains the number of images per subset and class. Overall, the corpus contains more than $70,000$ XR images, including more than $8,500$ images belonging to COVID-19 patients.
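Given this imbalance, the weighted categorical cross-entropy mentioned in Section \ref{sec:network} needs per-class weights. As an illustrative sketch (the exact weighting scheme is not detailed in the text; inverse class frequency is assumed here), the weights can be derived from the training counts of Table \ref{tab:classDistribution}:

```python
# Sketch: inverse-frequency class weights for a weighted categorical
# cross-entropy. Inverse frequency is an assumption; the paper does not
# specify the exact weighting scheme used.
train_counts = {"control": 45022, "pneumonia": 21707, "covid19": 7716}

total = sum(train_counts.values())
n_classes = len(train_counts)

# w_c = N / (C * N_c): under-represented classes get larger weights.
weights = {c: total / (n_classes * n) for c, n in train_counts.items()}

for c, w in sorted(weights.items(), key=lambda kv: kv[1]):
    print(f"{c}: {w:.3f}")
```

In PyTorch, such weights would typically be passed via the \texttt{weight} argument of \texttt{torch.nn.CrossEntropyLoss}.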
\begin{table}[htbp]
\centering
\caption{Number of images per class for training and testing subsets}
\begin{tabular}{@{}cccc@{}}
\toprule
Subset & \multicolumn{1}{l}{Control} & \multicolumn{1}{l}{Pneumonia} & \multicolumn{1}{l}{COVID-19} \\ \midrule
Training & 45022 & 21707 & 7716 \\
Testing & 4961 & 2407 & 857 \\ \bottomrule
\end{tabular}%
\label{tab:classDistribution}%
\end{table}%

The repositories of XR images employed to create the corpus used in this paper are presented next. Most of these contain solely registers of controls and pneumonia patients. Only the most recent repositories include samples of COVID-19 XR images. In all cases, the annotations were made by a specialist, as indicated by the authors of the repositories. The COVID-19 class is modelled by compiling images coming from three open data collection initiatives: the HM Hospitales COVID \cite{Hospitales2020}, BIMCV-COVID19 \cite{vay2020bimcv} and Actualmed COVID-19 \cite{Actualmed2020} chest XR datasets. The final result of the compilation process is a subset of $8,573$ images from more than $3,600$ patients at different stages of the disease\footnote{Figures at the time the datasets were downloaded. The datasets are still open, and more data might be available in the near future.}. Table \ref{tab:DemographicDistribution} summarizes the most significant characteristics of the datasets used to create the corpus, which are presented next:

\subsubsection{HM Hospitales COVID-19 dataset}
This dataset was compiled by HM Hospitales \cite{Hospitales2020}. It contains all the available clinical information about anonymous patients with the SARS-CoV-2 virus who were treated in different centres belonging to this company since the beginning of the pandemic in Madrid, Spain.
The corpus contains the anonymized records of $2,310$ patients. The dataset contains several radiological studies for each patient, corresponding to different stages of the disease. A total of $5,560$ XR images are available in the dataset, with an average of $2.4$ image studies per subject, often taken in intervals of two or more days. The histogram of the patients' age is highly coherent with the demographics of COVID-19 in Spain (see Table \ref{tab:DemographicDistribution} for more details). Only patients with at least one positive PCR test or positive immunological tests for SARS-CoV-2 were included in the study. The Data Science Commission and the Research Ethics Committee of HM Hospitales approved the current research study and the use of the data for this purpose.

\subsubsection{BIMCV COVID19 dataset}
The BIMCV COVID19 dataset \cite{vay2020bimcv} is a large dataset with chest radiological studies (XR and CT) of COVID-19 patients, along with their pathologies, results of PCR and immunological tests, and radiological reports. It was recorded by the Valencian Region Medical Image Bank (BIMCV) in Spain. The dataset contains the anonymized studies of patients with at least one positive PCR test or positive immunological tests for SARS-CoV-2 in the period between February 26th and April 18th, 2020. The corpus is composed of a total of $3,013$ XR images, with an average of $1.9$ image studies per subject, taken at intervals of approximately two or more days. The histogram of the patients' age is highly coherent with the demographics of COVID-19 in Spain (Table \ref{tab:DemographicDistribution}). Only patients with at least one positive PCR test or positive immunological tests for SARS-CoV-2 were included in the study.

\begin{table*}[htbp]
\centering
\caption{Demographic data of the datasets used. Only those labels confirmed are reported}
\label{tab:DemographicDistribution}
\begin{tabular}{@{}lllllllll@{}}
\toprule
 & Mean age $\pm$ std & \# Males/\# Females & \# Images & AP/PA & DX/CR & COVID-19 & Pneumonia & Control \\ \midrule
HM Hospitales & 67.8 $\pm$ 15.7 & 3703/1857 * & 5560 & 5018/542 & 1264/4296 & Y & N & N \\
BIMCV & 62.4 $\pm$ 16.7 & 1527/1486 ** & 3013 & 1171/1217 & 1145/1868 & Y & N & N \\
ACT & -- & -- & 188 & 30/155 & 126/59 & Y & N & Y \\
ChinaSet & 35.4 $\pm$ 14.8 & 449/213 & 662 & 0/662 & 662/0 & N & Y & Y \\
Montgomery & 51.9 $\pm$ 2.41 & 63/74 & 138 & 0/138 & 0/138 & N & Y & Y \\
CRX8 & 45.75 $\pm$ 16.83 & 34760/27030 & 61790 & 21860/39930 & 61790/0 & N & Y & Y \\
CheXpert & 62.38 $\pm$ 18.62 & 2697/1926 & 4623 & 3432/1191 & -- & N & Y & N \\
MIMIC & -- & -- & 16399 & 10850/5549 & -- & N & Y & N \\ \bottomrule
 & * 1377/929 patients & ** 727/626 patients & & & & & &
\end{tabular}
\end{table*}%

\subsubsection{Actualmed set (ACT)}
The Actualmed COVID-19 Chest XR dataset initiative \cite{Actualmed2020} contains a series of XR images compiled by Actualmed and Universitat Jaume I (Spain). The dataset contains COVID-19 and control XR images, but no information is given about the place or date of recording or about the demographics. However, a metadata file is included. It contains an anonymized descriptor to distinguish among patients, and information about the XR modality, the type of view, and the class to which each image belongs.

\subsubsection{China Set - The Shenzhen set}
The set was created by the National Library of Medicine, Maryland, USA, in collaboration with the Shenzhen No.3 People's Hospital at Guangdong Medical College in Shenzhen, China \cite{jaeger2014two}. The dataset contains normal and abnormal chest XR images with manifestations of tuberculosis, and includes the associated radiologist readings.

\subsubsection{The Montgomery set}
This dataset was created by the National Library of Medicine in collaboration with the Department of Health and Human Services, Montgomery County, Maryland, USA. It contains data from XR images collected under Montgomery County's tuberculosis screening program \cite{jaeger2014two, jaeger2013automatic}.

\subsubsection{ChestX-ray8 dataset (CRX8)}
The ChestX-ray8 dataset \cite{wang2017chestx} contains $112,120$ images from $14$ common thorax disease categories from $30,805$ unique patients, compiled by the National Institutes of Health (NIH). For this study, the images labelled with 'no radiological findings' were used as part of the control class, whereas the images annotated as 'pneumonia' were used for the pneumonia class.

\subsubsection{CheXpert dataset}
CheXpert \cite{irvin2019chexpert} is a dataset of XR images created for the automated evaluation of medical imaging competitions, and contains chest XR examinations carried out in Stanford Hospital over $15$ years. For this study, we selected $4,623$ pneumonia images using those annotated as 'pneumonia', with and without additional comorbidities. These comorbidities were never caused by COVID-19. The motivation to include pneumonia with comorbidities was to increase the number of pneumonia examples in the final compilation for this study, increasing the variability of this cluster.

\subsubsection{MIMIC-CXR Database}
MIMIC-CXR \cite{Johnson2019} is an open dataset compiled from 2011 to 2016, comprising de-identified chest XR images from patients admitted to the Beth Israel Deaconess Medical Center. In our study, we employed its images for the pneumonia class. The labels were obtained from the agreement of the two methods indicated in \cite{Johnson2019}. The dataset reports no information about gender or age; thus, we assume that the demographics are similar to those of the CheXpert dataset, and to those of pneumonia \cite{Ramirez2017}.
\subsection{Image Pre-processing}
XR images were converted to uncompressed grayscale '.png' files, encoded with 16 bits, and preprocessed using the DICOM \textit{WindowCenter} and \textit{WindowWidth} tags (when needed). All images were converted to a \textit{Monochrome 2} photometric interpretation. Initially, the images were not re-scaled, to avoid loss of resolution in later processing stages. Only AP and PA views were selected. No differentiation was made between erect (either standing or sitting) and decubitus positions. This information was inferred by a careful analysis of the DICOM tags, but also required manual checking due to certain labelling errors.

\subsection{Experiments}
The corpus collected from the aforementioned databases was processed to compile three different datasets of equal size to the initial one. Each of these datasets was used to run a different set of experiments.

\subsubsection{Experiment 1. Raw data}
The first experiment was run using the raw data extracted from the different datasets. Each image is kept with the original aspect ratio. Only a histogram equalization was applied.

\subsubsection{Experiment 2. Cropped image}
The second experiment consists of preprocessing the images by zooming in, cropping to a squared region of interest, and resizing to a squared image (aspect ratio $1:1$). The process is summarized in the following steps:
\begin{enumerate}
\item Lungs are segmented from the original image using a U-Net semantic segmentation algorithm\footnote{Following the Keras implementation available at \url{https://github.com/imlab-uiip/lung-segmentation-2d}}. The algorithm used reports \textit{Intersection-Over-Union} (IoU) and Dice similarity coefficient scores of 0.971 and 0.985, respectively.
\item A black mask is extracted to identify the external boundaries of the lungs.
\item The mask is used to create two sequences, adding the grey levels of the rows and columns, respectively. These two sequences provide four boundary points, which define two segments of different lengths in the horizontal and vertical dimensions.
\item The sequences of added grey levels in the vertical and horizontal dimensions of the mask are used to identify a squared region of interest associated with the lungs, taking advantage of the higher added values outside the lungs (Fig. \ref{fig:Mascara}). The squared region is obtained by identifying the middle point of each of the identified segments and cropping in both dimensions using the length of the longest of the two segments.
\item The original image is cropped with a squared template placed in the centre of the matrix, using the information obtained in the previous step. No mask is placed over the image.
\item Histogram equalization of the image obtained.
\end{enumerate}
This process is carried out to decrease the variability of the data, to make the training process of the network simpler, and to ensure that the region of significant interest is in the centre of the image with no areas cut.

\begin{figure}[h!]
\centering
\includegraphics[width=0.4\textwidth]{images/mascara.eps}
\caption{Identification of the squared region of interest. Plots in the top and left represent the normalized accumulated grey level in the vertical and horizontal dimensions, respectively.}
\label{fig:Mascara}
\end{figure}

\subsubsection{Experiment 3. Lung segmentation}
The third experiment consists of preprocessing the images by masking, zooming in, cropping to a squared region of interest, and resizing to a squared image (aspect ratio $1:1$).
The process is summarized in the following steps:
\begin{enumerate}
\item Lungs are segmented from the original image using the same semantic segmentation algorithm used in experiment 2.
\item An external black mask is extracted to identify the external boundaries of the lungs.
\item The mask is used to create two sequences, adding the grey levels of the rows and columns, respectively.
\item The sequences of added grey levels in the vertical and horizontal dimensions of the mask are used to identify a squared region of interest associated with the lungs, taking advantage of the higher added values outside them (Fig. \ref{fig:Mascara}).
\item The original image is cropped with a squared template placed in the centre of the image.
\item The mask is dilated with a $5 \times 5$ pixels kernel, and it is superimposed on the image.
\item Histogram equalization is applied only to the segmented area (i.e. the area corresponding to the lungs).
\end{enumerate}
This preprocessing makes the training of the network much simpler and forces the network to focus its attention on the lung region, removing external characteristics (like the sternum) that might influence the obtained results.

\begin{table*}[!ht]
\centering
\caption{Performance measures for the three experiments considered in the paper}
\label{tab:NumericResults}
\begin{tabular}{@{}lclccccc@{}}
\toprule
\multirow{2}{*}{\textbf{Experiment}} & \multirow{2}{*}{\textbf{Class}} & \multicolumn{6}{c}{\textbf{Measures}} \\ \cmidrule(l){3-8}
 & & \textbf{PPV} & \textbf{Recall} & \textbf{F1} & \textbf{Acc} & \textbf{BAcc} & \textbf{GMR} \\ \midrule
\multirow{3}{*}{\textbf{Exp. 1}} & \textit{Pneumonia} & 92.53 $\pm$ 1.13 & 94.20 $\pm$ 1.43 & 93.35 $\pm$ 0.68 & \multirow{3}{*}{91.67 $\pm$ 2.56} & \multirow{3}{*}{94.43 $\pm$ 1.36} & \multirow{3}{*}{93.00 $\pm$ 1.00} \\
 & \textit{Control} & 93.35 $\pm$ 0.68 & 96.56 $\pm$ 0.50 & 97.24 $\pm$ 0.23 & & & \\
 & \textit{COVID-19} & 91.67 $\pm$ 2.56 & 94.43 $\pm$ 1.36 & 93.00 $\pm$ 1.00 & & & \\ \midrule
\multirow{3}{*}{\textbf{Exp. 2}} & \textit{Pneumonia} & 84.02 $\pm$ 1.16 & 85.75 $\pm$ 1.46 & 84.86 $\pm$ 0.51 & \multirow{3}{*}{87.64 $\pm$ 0.74} & \multirow{3}{*}{81.35 $\pm$ 2.70} & \multirow{3}{*}{81.36 $\pm$ 0.42} \\
 & \textit{Control} & 93.62 $\pm$ 0.76 & 92.67 $\pm$ 0.69 & 93.14 $\pm$ 0.25 & & & \\
 & \textit{COVID-19} & 81.60 $\pm$ 3.33 & 81.35 $\pm$ 2.70 & 81.36 $\pm$ 0.42 & & & \\ \midrule
\multirow{3}{*}{\textbf{Exp. 3}} & \textit{Pneumonia} & 85.26 $\pm$ 0.73 & 85.26 $\pm$ 0.73 & 87.42 $\pm$ 0.27 & \multirow{3}{*}{91.53 $\pm$ 0.20} & \multirow{3}{*}{87.64 $\pm$ 0.74} & \multirow{3}{*}{87.37 $\pm$ 0.84} \\
 & \textit{Control} & 96.99 $\pm$ 0.17 & 94.48 $\pm$ 0.24 & 95.72 $\pm$ 0.15 & & & \\
 & \textit{COVID-19} & 78.52 $\pm$ 2.08 & 78.73 $\pm$ 2.80 & 78.57 $\pm$ 1.15 & & & \\ \bottomrule
\end{tabular}
\end{table*}

\begin{figure*}[!h]
\setlength{\tempheight}{0.18\textheight}
\settowidth{\tempwidth}{\includegraphics[height=\tempheight]{images/ROC_Original.png}}
\centering
\hspace{\baselineskip}
\columnname{Exp. 1}\hfil\vspace{-0.3cm}
\columnname{Exp. 2}\hfil
\columnname{Exp. 3}\hfil
\subfloat[]{\label{ROC_original}\includegraphics[width=0.33\textwidth]{images/ROC_Original.png}}
\subfloat[]{\label{ROC_Cropped}\includegraphics[width=0.33\textwidth]{images/ROC_Cropped.png}}
\subfloat[]{\label{ROC_CropSeg}\includegraphics[width=0.33\textwidth]{images/ROC_Segmented.png}} \\
\subfloat[]{\label{CM_original}\includegraphics[width=0.33\textwidth]{images/CM_Exp3_OrgImages_Cumulative.png}}
\subfloat[]{\label{CM_Cropped}\includegraphics[width=0.33\textwidth]{images/CM_Exp3_CroppedImages_Cumulative.png}}
\subfloat[]{\label{CM_CropSeg}\includegraphics[width=0.33\textwidth]{images/CM_Exp3_CroppedSegmentedImages_Cumulative.png}}
\caption{ROC curves and confusion matrices for each one of the experiments, considering each one of the classes separately. \textbf{Top:} ROC curves. \textbf{Bottom:} Normalized confusion matrices. \textbf{Left:} Original images (experiment 1). \textbf{Center:} Cropped images (experiment 2). \textbf{Right:} Segmented images (experiment 3).}
\label{fig:ROC_CMatrix}
\end{figure*}

\begin{figure}[h!]
\centering
\includegraphics[width=0.4\textwidth]{images/Merged_Three.png}
\caption{Average ROC curves for each experiment, including AUC values.}
\label{fig:ROCThree}
\end{figure}

\subsection{Identification of the areas of significant interest for the classification}
The areas of significant interest used by the CNN for discrimination purposes are identified using a qualitative analysis based on \textit{Gradient-weighted Class Activation Mapping} (Grad-CAM) \cite{Selvaraju_2019}. This is an explainability method that provides insights into how deep neural networks learn, pointing to the most significant areas of interest for decision-making purposes.
The method uses the gradients of any target class, flowing into the final convolutional layer, to produce a coarse localization map that highlights the most important regions in the image for identifying the class. The result of this method is a heat map like those presented in Fig. \ref{fig:XR-examples}, in which the colour encodes the importance of each pixel in differentiating among classes.

\section{Results}
\label{sec:results}
The model has been quantitatively evaluated computing the test \textit{Positive Predictive Value} (PPV), \textit{Recall}, \textit{F1-score} (F1), \textit{Accuracy} (Acc), \textit{Balanced Accuracy} (BAcc), \textit{Geometric Mean Recall} (GMR) and \textit{Area Under the ROC Curve} (AUC) for each of the three classes in the corpus previously described in Section \ref{sec:corpus}. The performance of the models is assessed using an independent testing set, which has not been used during development. A $5$-fold cross-validation procedure has been used to evaluate the obtained results (training/test balance: 90/10\%).

The performance of the CNN network on the three experiments considered in this paper is summarized in Table \ref{tab:NumericResults}. Likewise, the ROC curves per class for each of the experiments, and the corresponding confusion matrices, are presented in Fig. \ref{fig:ROC_CMatrix}. The global ROC curve displayed in Fig. \ref{fig:ROCThree} for each experiment summarizes the global performance of the experiments.
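As a reference for how the aggregate figures relate to the per-class ones, BAcc (the arithmetic mean of the per-class recalls) and GMR (their geometric mean) can be computed from a confusion matrix as sketched below; the counts are illustrative only, not the actual test results:

```python
# Sketch: BAcc and GMR from a 3x3 confusion matrix
# (rows = true class, columns = predicted class).
# The counts below are illustrative, not the paper's results.
cm = [
    [800, 150, 50],    # pneumonia
    [100, 4700, 161],  # control
    [60, 40, 757],     # covid-19
]

# Per-class recall: correct predictions over the true-class total.
recalls = [row[i] / sum(row) for i, row in enumerate(cm)]

bacc = sum(recalls) / len(recalls)   # arithmetic mean of recalls
gmr = 1.0
for r in recalls:
    gmr *= r
gmr **= 1.0 / len(recalls)           # geometric mean of recalls

acc = sum(cm[i][i] for i in range(3)) / sum(map(sum, cm))
print(f"Acc={acc:.3f}  BAcc={bacc:.3f}  GMR={gmr:.3f}")
```

By construction, GMR never exceeds BAcc, and it penalizes a single weak class (e.g. COVID-19) more strongly, which is why both are reported alongside the plain accuracy.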
Considering experiment 1, and although slightly higher for controls, the detection performance remains almost similar for all classes (the PPV ranges from $91$-$93\%$) (Table \ref{tab:NumericResults}). The remaining measures per class follow the same trend, with similar figures but better numbers for the controls. ROC curves and confusion matrices of Fig. \ref{ROC_original} and Fig. \ref{CM_original} point out that the largest source of confusion for COVID-19 is the pneumonia class. The ROC curves for each one of the classes reach in all cases AUC values larger than $0.99$, which, in principle, is considered excellent. In terms of global performance, the system achieves an Acc of $91\%$ and a BAcc of $94\%$ (Table \ref{tab:NumericResults}). This is also supported by the average ROC curve of Fig. \ref{fig:ROCThree}, which reveals the excellent performance of the network and the almost perfect behaviour of the ROC curve. Deviations are small for the three classes. When experiment 2 is considered, a decrease in the performance per class is observed in comparison to experiment 1. In this case, the PPV ranges from $81$-$93\%$ (Table \ref{tab:NumericResults}), with a similar trend for the remaining figures of merit. ROC curves and confusion matrices in Fig. \ref{ROC_Cropped} and Fig. \ref{CM_Cropped} report AUC values in the range $0.96$-$0.99$, and an overlapping of the COVID-19 class mostly with pneumonia.
The global performance of the system --presented in the ROC curve of Fig. \ref{fig:ROCThree} and Table \ref{tab:NumericResults}-- yields an AUC of $0.98$, an Acc of $87\%$ and a BAcc of $81\%$. Finally, for experiment 3, PPV ranges from $78$-$96\%$ (Table \ref{tab:NumericResults}). In this case, the results are slightly worse than those of experiment 2, with the COVID-19 class presenting the worst performance among all the tests. According to Fig. \ref{ROC_CropSeg}, AUCs range from $0.94$ to $0.98$. The confusion matrix in Fig. \ref{CM_CropSeg} reports a large level of confusion for the COVID-19 class, which is labelled as pneumonia $18\%$ of the times. In terms of global performance, the system reaches an Acc of $91\%$ and a BAcc of $87\%$ (Table \ref{tab:NumericResults}). These results are consistent with the average AUC of $0.97$ shown in Fig. \ref{fig:ROCThree}. \subsection{Explainability and interpretability of the models} The regions of interest identified by the network were analyzed qualitatively using Grad-CAM activation maps \cite{Selvaraju_2019}. Results shown by the activation maps permit the identification of the most significant areas in the image, highlighting the zones of interest that the network is using to discriminate.
In this regard, Fig. \ref{fig:XR-examples} presents examples of the Grad-CAM of a control, a pneumonia, and a COVID-19 patient, for each of the three experiments considered in the paper. It is important to note that the activation maps are providing overall information about the behaviour of the network, pointing to the most significant areas of interest, but the whole image is supposed to be contributing to the classification process to a certain extent. The second row in Fig. \ref{fig:XR-examples} shows several prototypical results applying the Grad-CAM techniques to experiment 1. The examples show the areas of significant interest for a control, pneumonia and COVID-19 patient. The results suggest that the detection of pneumonia or COVID-19 is often carried out based on information that is outside the expected area of interest, i.e. the lung area. In the examples provided, the network focuses on the corners of the XR image or in areas around the diaphragm. In part, this is likely due to the metadata which is frequently stamped on the corners of the XR images.
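Conceptually, a Grad-CAM map is a ReLU-filtered, gradient-weighted sum of the last convolutional feature maps. A framework-agnostic NumPy sketch of that combination step, using random arrays as stand-ins for the real network activations and gradients:

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Combine conv feature maps into a coarse localization map.

    feature_maps: (K, H, W) activations of the last conv layer.
    gradients:    (K, H, W) gradients of the target class score
                  with respect to those activations.
    """
    # Global average pooling of the gradients -> one weight per map.
    weights = gradients.mean(axis=(1, 2))                          # (K,)
    # Weighted sum of the maps, then ReLU to keep positive evidence only.
    cam = np.maximum((weights[:, None, None] * feature_maps).sum(axis=0), 0)
    # Normalize to [0, 1] so the map can be rendered as a heat map.
    if cam.max() > 0:
        cam /= cam.max()
    return cam

rng = np.random.default_rng(0)
A = rng.random((64, 7, 7))        # stand-in feature maps
dA = rng.normal(size=(64, 7, 7))  # stand-in gradients
heatmap = grad_cam(A, dA)         # (7, 7); upsampled to image size in practice
```

The resulting low-resolution map is then resized to the input image and overlaid as the colour coding shown in Fig. \ref{fig:XR-examples}.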
The Grad-CAM plots corresponding to experiment 2 (third row of Fig. \ref{fig:XR-examples}) indicate that the model still points towards areas which are different from the lungs, but to a lesser extent. Finally, the Grad-CAM of experiment 3 (fourth row of Fig. \ref{fig:XR-examples}) presents the areas of interest where the segmentation procedure is carried out. In this case, the network is forced to look at the lungs, and therefore this scenario is supposed to be more realistic and more prone to generalizing, as artifacts that might bias the results are somehow discarded. On the other hand, for visualization purposes, and in order to interpret the separability capabilities of the system, a t-SNE embedding is used to project the high dimensional data of the layer adjacent to the output of the network to a 2-dimensional space. Results are presented in Fig. \ref{fig:t-SNE_Plots} for each of the three experiments considered in the paper. \begin{figure*}[th!] \setlength{\tempheight}{0.18\textheight} \settowidth{\tempwidth}{\includegraphics[height=\tempheight]{images/Exp3_OrgImages_t-SNE_training_v2.png}} \centering \hspace{\baselineskip} \columnname{Exp. 1}\hfil \columnname{Exp. 2}\hfil \columnname{Exp.
3}\\ \rowname{Training data} \subfloat[] { \label{t-SNE Original Trai} \includegraphics[width=0.3\textwidth]{images/Exp3_OrgImages_t-SNE_training_v2.png} } \subfloat[] { \label{t-SNE Cropped Trai} \includegraphics[width=0.3\textwidth]{images/Exp3_CroppedImages_t-SNE_training_v2.png} } \subfloat[] { \label{t-SNE CropSeg Trai} \includegraphics[width=0.3\textwidth]{images/Exp3_CroppedSegmentedImages_t-SNE_training_v3.png} } \\ \rowname{Test data} \subfloat[] { \label{t-SNE Original Test} \includegraphics[width=0.3\textwidth]{images/Exp3_OrgImages_t-SNE_test_v2.png} } \subfloat[] { \label{t-SNE Cropped Test} \includegraphics[width=0.3\textwidth]{images/Exp3_CroppedImages_t-SNE_test_v2.png} } \subfloat[] { \label{t-SNE CropSeg Test} \includegraphics[width=0.3\textwidth]{images/Exp3_CroppedSegmentedImages_t-SNE_test_v3.png} } \caption{Mapping of the high-dimensional data of the layer adjacent to the output into a two dimensional plot. \textbf{Top:} Output network embedding using t-SNE for the training data. \textbf{Bottom:} Output network embedding using t-SNE for the testing data. \textbf{Left:} Original images (experiment 1). \textbf{Center:} Cropped Images (experiment 2). \textbf{Right:} Segmented images (experiment 3). } \label{fig:t-SNE_Plots} \end{figure*} \begin{figure*}[th!] \setlength{\tempheight}{0.18\textheight} \settowidth{\tempwidth}{\includegraphics[height=\tempheight]{images/Exp3_OrgImages_t-SNE_training_v2.png}} \centering \hspace{\baselineskip} \columnname{Exp. 1}\hfil \columnname{Exp. 2}\hfil \columnname{Exp. 
3}\\ \rowname{Training data} \subfloat[] { \label{t-SNE Original Trai 2} \includegraphics[width=0.3\textwidth]{images/Exp3_OrgImages_t-SNE_train_v3_DB_Nolegend.png} } \subfloat[] { \label{t-SNE Cropped Trai 2} \includegraphics[width=0.3\textwidth]{images/Exp3_CroppedImages_t-SNE_train_v3_DB_Nolegend.png} } \subfloat[] { \label{t-SNE CropSeg Trai 2} \includegraphics[width=0.3\textwidth]{images/Exp3_CroppedSegmentedImages_t-SNE_train_v3_DB_Nolegend.png} } \\ \rowname{Test data} \subfloat[] { \label{t-SNE Original Test 2} \includegraphics[width=0.3\textwidth]{images/Exp3_OrgImages_t-SNE_test_v3_DB_Nolegend.png} } \subfloat[] { \label{t-SNE Cropped Test 2} \includegraphics[width=0.3\textwidth]{images/Exp3_CroppedImages_t-SNE_test_v3_DB_Nolegend.png} } \subfloat[] { \label{t-SNE CropSeg Test 2} \includegraphics[width=0.3\textwidth]{images/Exp3_CroppedSegmentedImages_t-SNE_test_v3_DB_Nolegend.png} } \\ { \includegraphics[scale=0.3]{images/t-SNE_v3_DB_legend.png} } \caption{Mapping of the high-dimensional data of the layer adjacent to the output into a two dimensional plot. \textbf{Top:} Output network embedding using t-SNE for the training data. \textbf{Bottom:} Output network embedding using t-SNE for the testing data. \textbf{Left:} Original images (experiment 1). \textbf{Center:} Cropped Images (experiment 2). \textbf{Right:} Segmented images (experiment 3). Labels correspond to data sets and classes.} \label{fig:t-SNE_Plots_v2} \end{figure*} Fig. \ref{fig:t-SNE_Plots} indicates that a good separability exists for all the classes in both training and testing data, and for all experiments. The boundaries of the normal cluster are very well defined in the three experiments, whereas pneumonia and COVID-19 are more spread, overlapping with adjacent classes.
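A projection of this kind can be reproduced with, for instance, scikit-learn's `TSNE` applied to the activations of the layer adjacent to the output; in the sketch below random features stand in for the real activations, and the parameter values are illustrative:

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
# Stand-in for the penultimate-layer activations of 150 test images.
features = rng.random((150, 64))

# Non-linear projection of the high-dimensional activations to 2-D.
embedding = TSNE(n_components=2, perplexity=30,
                 init="pca", random_state=0).fit_transform(features)
# embedding has shape (150, 2); each row is one point in the scatter plot.
```

Note that t-SNE preserves local neighbourhoods rather than global distances, so distances between well-separated clusters in these plots should not be over-interpreted.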
In general terms, the t-SNE plots demonstrate the ability of the network to learn a mapping from the input data to the desired labels. However, despite the shape differences found for the three experiments, no additional conclusions can be extracted. \subsection{Potential variability factors affecting the system} There are several variability factors which might be biasing the results, namely: the projection (PA vs. AP); the technology of the detector (\textit{Computed Radiography} (CR) vs. \textit{Digital Radiography} (DX)); the gender of the patients; the age; potential specificities of the dataset; or having trained with several images per patient. The use of several images per patient represents a certain risk of data leak in the COVID-19 class due to its underlying imbalance. However, our initial hypothesis is that using several images per COVID-19 patient obtained at different instants in time (with days of difference) would increase the variability of the dataset, and thus that source of bias would be disregarded.
Indeed, the evolution of the associated lesions often found in COVID-19 is considered fast, in such a manner that very different images are obtained in a time interval as short as one or two days of the evolution. Also, since every single exploration is framed differently, or sometimes even taken with different machines and/or projections, the potential bias is expected to be minimized. Concerning the type of projection, and to evaluate its effectiveness, the system has been studied taking into account this potential variability factor, which is considered to be one of the most significant. In particular, Table \ref{tab:NumericResults2} presents the outcomes after accounting for the influence of the XR projection (PA/AP) in the performance of the system. In general terms, the system demonstrates consistency with respect to the projection used, and differences are mainly attributable to smaller training and testing sets. However, significant differences are shown for projection PA in class COVID-19/experiment 3, decreasing the F1 down to $65.61\%$. The reason for the unexpected drop in performance is unknown, but likely attributable to an underrepresented class in the corpus (see Table \ref{tab:DemographicDistribution}). Besides, Table \ref{tab:variability_factors} shows --for the three experiments under evaluation and for the COVID-19 class-- the error distribution with respect to the sex of the patient, the technology of the detector, the dataset and the projection.
For the four variability factors enumerated, results show that the error distribution committed by the system follows --with minor deviations-- the existing proportion of the samples in the corpus. These results suggest that there is no clear bias with respect to these potential variability factors, at least for the COVID-19 class, which is considered the worst case due to its underrepresentation. Similar results would be expected for the control and pneumonia classes, but these results are not provided due to the lack of certain labels in some of the datasets used (see Table \ref{tab:DemographicDistribution}). Concerning age, the datasets used are reasonably well balanced (Table \ref{tab:DemographicDistribution}), but with a certain bias in the normal class: COVID-19 and pneumonia classes have very similar average ages, but controls have a lower mean age. Our assumption has been that age differences are not significantly affecting the results, but the mentioned difference might explain why the normal cluster in Fig. \ref{fig:t-SNE_Plots} is less spread than the other two. In any case, no specific age biases have been found in the errors committed by the system. An additional study was also carried out to evaluate the influence of potential specificities of the different datasets used to compile the corpus (i.e. the variability of the results with respect to the datasets merged to build the corpus). This variability factor is evaluated in Fig.
\ref{fig:t-SNE_Plots_v2} using different t-SNE plots (one for each experiment, in a similar way to Fig. \ref{fig:t-SNE_Plots}) but differentiating the corresponding cluster for each dataset and class. Results for the different datasets and classes are clearly merged or are adjacent in the same cluster. However, several datasets report a lower variability for certain classes (i.e. variability in terms of scattering). This is especially clear in the Chexpert and NIH pneumonia sets, which are successfully merged with the corresponding class but appear clearly clustered, suggesting that these datasets have certain unknown specific characteristics different to those of the complementary datasets. The model has been able to manage this aspect, but it is a factor to be analyzed in further studies. \begin{table*}[!ht] \centering \caption{Performance measures considering the XR projection (PA/AP)} \label{tab:NumericResults2} \begin{tabular}{@{}lclccccc@{}} \toprule \multirow{2}{*}{\textbf{Experiment}} & \multirow{2}{*}{\textbf{Class}} & \multicolumn{3}{c}{\textbf{PA}} & \multicolumn{3}{c}{\textbf{AP}} \\ \cmidrule(l){3-8} & & \textbf{PPV} & \textbf{Recall} & \textbf{F1} & \textbf{PPV} & \textbf{Recall} & \textbf{F1} \\ \midrule \multirow{3}{*}{\textbf{Exp. 1}} & \textit{Pneumonia} & 91.25 $\pm$ 1.22 & 92.78 $\pm$ 1.58 & 92.00 $\pm$ 0.93 & 94.70 $\pm$ 0.79 & 96.28 $\pm$ 1.10 & 95.48 $\pm$ 0.50 \\ & \textit{Control} & 98.54 $\pm$ 0.33 & 97.83 $\pm$ 0.23 & 98.18 $\pm$ 0.14 & 97.87 $\pm$ 0.28 & 95.46 $\pm$ 0.87 & 96.65 $\pm$ 0.43 \\ & \textit{COVID-19} & 84.06 $\pm$ 3.94 & 88.91 $\pm$ 2.31 & 86.33 $\pm$ 1.80 & 95.13 $\pm$ 2.46 & 97.18 $\pm$ 0.94 & 96.12 $\pm$ 1.06 \\ \midrule \multirow{3}{*}{\textbf{Exp.
2}} & \textit{Pneumonia} & 81.77 $\pm$ 1.79 & 79.17 $\pm$ 2.38 & 80.41 $\pm$ 1.16 & 87.39 $\pm$ 1.66 & 90.78 $\pm$ 1.21 & 89.03 $\pm$ 0.71 \\ & \textit{Control} & 94.81 $\pm$ 0.46 & 95.56 $\pm$ 0.61 & 95.33 $\pm$ 0.16 & 92.79 $\pm$ 1.53 & 88.15 $\pm$ 1.61 & 90.38 $\pm$ 0.32 \\ & \textit{COVID-19} & 73.72 $\pm$ 2.37 & 68.82 $\pm$ 5.20 & 71.01 $\pm$ 2.27 & 84.96 $\pm$ 2.27 & 87.63 $\pm$ 2.04 & 86.23 $\pm$ 0.86 \\ \midrule \multirow{3}{*}{\textbf{Exp. 3}} & \textit{Pneumonia} & 84.07 $\pm$ 1.72 & 87.19 $\pm$ 1.66 & 85.57 $\pm$ 0.53 & 87.39 $\pm$ 0.97 & 81.66 $\pm$ 1.12 & 89.47 $\pm$ 0.41 \\ & \textit{Control} & 97.88 $\pm$ 0.36 & 97.08 $\pm$ 0.21 & 97.48 $\pm$ 0.19 & 96.03 $\pm$ 0.81 & 90.65 $\pm$ 0.87 & 93.26 $\pm$ 0.47 \\ & \textit{COVID-19} & 66.68 $\pm$ 4.82 & 65.23 $\pm$ 4.73 & 65.61 $\pm$ 1.05 & 81.82 $\pm$ 3.07 & 83.62 $\pm$ 2.14 & 82.65 $\pm$ 1.28 \\ \bottomrule \end{tabular} \end{table*} \begin{table}[ht!] \centering \caption{Percentage of testing samples and error distribution with respect to several potential variability factors for the COVID-19 class. (\% in hits represents the percentage of samples of every factor under analysis in the correctly predicted set.)} \label{tab:variability_factors} \begin{tabular}{lccccc} \toprule \multirow{2}{*}{\textbf{Factor}} & \multirow{2}{*}{\textbf{Types}} & \multirow{2}{*}{\textbf{\% in test}} & \multicolumn{3}{c}{\textbf{\% in hits}} \\ \cmidrule(l){4-6} & & & \textbf{Exp. 1} & \textbf{Exp. 2} & \textbf{Exp. 3} \\ \hline \multirow{2}{*}{\textbf{Projection}} & AP & 79 & 80.0 & 82.6 & 82.7 \\ & PA & 21 & 20.0 & 17.4 & 17.3 \\ \hline \multirow{2}{*}{\textbf{Sensor}} & DX & 22 & 22.0 & 23.3 & 23.6 \\ & CR & 78 & 78.0 & 76.7 & 76.4 \\ \hline \multirow{2}{*}{\textbf{Sex}} & M & 64 & 64.0 & 65.4 & 65.2 \\ & F & 36 & 36.0 & 34.6 & 34.8 \\ \hline \multirow{3}{*}{\textbf{DB}} & BMICV & 30 & 28.7 & 26.6 & 26.6 \\ & HM & 69 & 71.0 & 72.7 & 73.1 \\ & ACT & 1 & 0.3 & 0.7 & 0.3 \\ \bottomrule \end{tabular} \end{table} \section{Discussion and Conclusions} \label{sec:disscon} This study evaluates a deep learning model for the detection of COVID-19 from XR images. The paper provides additional evidence to the state of the art, supporting the potentiality of deep learning techniques to accurately categorize XR images corresponding to control, pneumonia, and COVID-19 patients (Fig. \ref{fig:XR-examples}). These three classes were chosen under the assumption that they can support clinicians on making better decisions, establishing potential differential strategies to handle patients depending on their cause of infection \cite{wang2020covid}.
However, the main goal of the paper was not to demonstrate the suitability of deep learning for categorizing XR images, but to make a thoughtful evaluation of the results and of different preprocessing approaches, searching for better explainability and/or interpretability of the results, while providing evidence of potential effects that might bias results. The model relies on the COVID-Net network, which has served as a basis for the development of a more refined architecture. This network has been chosen due to its tailored characteristics and given the previous good results reported by other researchers. The COVID-Net was trained with a corpus compiled using data gathered from different sources: the control and pneumonia classes --with $49,983$ and $24,114$ samples respectively-- were collected from the ACT, Chinaset, Montgomery, CRX8, CheXpert and MIMIC datasets; and the COVID-19 class was collected from the information available at the BIMCV, ACT, and HM Hospitales datasets. Although the COVID-19 class only contains $8,573$ chest XR images, the developers of the data sources are continuously adding new cases to the respective repositories, so the number of samples is expected to grow in the future. Despite the imbalance of the COVID-19 class, up to date, and to the authors' knowledge, this is the largest compilation of COVID-19 images based on open repositories. Despite that, the number of COVID-19 XR images is still considered small in comparison to the other two classes, and therefore, it was necessary to compensate for the class imbalance by modifying the network architecture, including regularization components in the last two dense layers. To this end, a weighted categorical cross-entropy loss function was used to compensate for this effect.
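A weighted categorical cross-entropy of this kind can be sketched as follows; the class weights and predictions below are illustrative (in practice the weights would typically be derived from the inverse class frequencies of the training corpus):

```python
import numpy as np

def weighted_categorical_crossentropy(y_true, y_pred, class_weights, eps=1e-7):
    """Mean cross-entropy with a per-class weight applied to each sample.

    y_true: (N, C) one-hot labels; y_pred: (N, C) predicted probabilities;
    class_weights: (C,) weight per class (larger for rarer classes).
    """
    y_pred = np.clip(y_pred, eps, 1.0)  # numerical safety for log()
    per_sample = -(class_weights * y_true * np.log(y_pred)).sum(axis=1)
    return per_sample.mean()

# Rarer classes (e.g. COVID-19, third column) weigh more in the loss.
w = np.array([0.5, 1.0, 3.0])
y_true = np.array([[1, 0, 0], [0, 0, 1]], dtype=float)
y_pred = np.array([[0.8, 0.1, 0.1], [0.2, 0.2, 0.6]])
loss = weighted_categorical_crossentropy(y_true, y_pred, w)
```

The effect is that a misclassified sample of the minority class contributes proportionally more to the gradient, counteracting the imbalance during training.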
Likewise, data augmentation techniques were used for the pneumonia and COVID-19 classes to automatically generate more samples for these two underrepresented classes. We stand on the fact that automatic diagnosis is much more than a classification exercise, meaning that many factors have to be kept in mind to bring these techniques to the clinical practice. In this respect, there is a classic assumption in the literature that the associated heat maps --calculated with techniques such as Grad-CAM-- provide a clinical interpretation of the results, which is unclear in practice. In light of the results shown in the heat maps depicted in Fig. \ref{fig:HetMap}, we show that experiment 1 must be carefully interpreted. Despite the high-performance metrics obtained in experiment 1, the significant areas identified by the network point towards certain areas with no clear interest for the diagnosis, such as the corners of the images, the sternum, the clavicles, etc. From a clinical point of view, this is clearly biasing the results. It means that other approaches are necessary to force the network to focus on the lungs area. In this respect, we have developed and compared the results with two preprocessing approaches based on cropping the images and segmenting the lungs area (experiment 2 and experiment 3). Again, given the heat maps corresponding to experiment 2, we also see explainability problems similar to those enumerated for experiment 1. Reducing the area of interest to that proposed in experiment 2 significantly decreases the performance of the system due to the removal of the metadata that usually appear in the top left and/or right corner, and to the removal of areas which are of interest to categorize the images but have no interest from the diagnosis point of view. However, while comparing experiments 2 and 3, performance results improve in the third approach, which focuses on the same region of interest but with a mask that forces the network to see only the lungs. Thus, results obtained in experiments 2 and 3 suggest that eliminating the needless features extracted from the background or non-related regions improves the results. Besides, the third approach (experiment 3) provides more explainable and interpretable results, with the network focusing its attention only in the area of interest for the disease. The gain in explainability of the last method is still at the cost of a lower accuracy with respect to experiment 1, but the improvement in explainability and interpretability is considered critical to translate these techniques to the clinical setting. Despite the decrease in performance, the proposed method in experiment 3 has provided promising results, with an Acc of $91.53\%$, a BAcc of $87.6\%$, a GMR of $87.37\%$ and an AUC of $0.97$. Performance results obtained are in line with those presented in \cite{wang2020covid}, which reports sensitivities of $95$, $94$ and $91$ for control, pneumonia and COVID-19 classes respectively --also modelling with the COVID-Net in a scenario similar to the one in experiment 1--, but training with a much smaller corpus of $358$ XR images from $266$ COVID-19 patients, $8,066$ controls, and $5,538$ XR images belonging to patients with different types of pneumonia. The paper also critically evaluates the effect of several variability factors that might compromise the performance of the network. For instance, the effect of the projection (PA/AP) was evaluated by retraining the network and checking the outcomes. This effect is important, given that PA projections are often practised in erect positions to better observe the pulmonary ways, and as such, are expected to be examined in healthy or slightly affected patients. In contrast, AP projections are often preferred for patients confined in bed, and as such are expected to be practised in the most severe cases. Since AP projections are common in COVID-19 patients, blood is expected to flow more to the lungs' apices than when standing; thus, not taking this variability factor into account may result in a misdiagnosis of pulmonary congestion \cite{burlacu2020curbing}. Indeed, the obtained results have highlighted the importance of taking this factor into account when designing the training corpus, as PPV has decreased for PA projections in our experiments with the COVID-19 images. This is probably due to an underrepresentation of this class (Table \ref{tab:NumericResults2}), which would require a further specific analysis when designing future corpora. On the other hand, results have shown that the error distribution for the COVID-19 class follows a proportion similar to the percentage of images available in the corpus while categorizing by gender, the technology of the detector, the projection and/or the dataset. These results suggest no significant bias with respect to these potential variability factors, at least for the COVID-19 class, which is the least represented one. An analysis of how the clusters of classes were distributed is also presented in Fig. \ref{fig:t-SNE_Plots}, demonstrating how well each class is differentiated. These plots help to identify existing overlap among classes (especially that present between pneumonia and COVID-19, and to a lesser extent between controls and pneumonia). Similarly, since the corpus used to train the network was built around several datasets, a new set of t-SNE plots was produced, but differentiating according to each of the subsets that were used for training (Fig. \ref{fig:t-SNE_Plots_v2}). This test served to evaluate the influence of potential specific characteristics of each dataset in the training procedure and, hence, possible sources of confusion that arise due to particularities of the corpora that are tested. The plots suggest that in general terms the different datasets are correctly merged together, but with some exceptions. This fact suggests that there might be certain unknown characteristics in the datasets used, which cluster the images belonging to the same dataset together.
The COVID-Net has also demonstrated to be a good starting point for the characterization of the disease. Indeed, the outcomes of the paper suggest the possibility of automatically identifying the lung lesions associated with a COVID-19 infection (see Fig. \ref{fig:XR-examples}) by analyzing the Grad-CAM mappings of experiment 3, providing an explainable justification about the way the network works. However, the interpretation of the heat maps obtained for the control class must be carried out carefully. Whereas the areas of significant interest for the pneumonia and COVID-19 classes are supposed to point to potential lesions (i.e. with higher density and/or with different textures in contrast to controls), the areas of significant interest for the classification in the control group are supposed to correspond to a sort of complement, potentially highlighting less dense areas, and thus do not imply the presence of any kind of lesion in the lungs. Likewise, and in comparison to the performance achieved by a human evaluator differentiating pneumonia from COVID-19, the system developed in the third experiment attains comparable results. Indeed, in \cite{bai2020performance} the ability of seven radiologists to correctly differentiate pneumonia and COVID-19 from XR images was put to the test.
The results indicated that the radiologists achieved sensitivities ranging from $70\%$ to $97\%$ (mean $80\%$), and specificities ranging from $7\%$ to $100\%$ (mean $70\%$). These results suggest a potential use in a supervised clinical environment. COVID-19 is still a new disease and much remains to be studied. The use of deep learning techniques could potentially help to understand the mechanisms by which SARS-CoV2 attacks the lungs and alveoli, and how the disease evolves through its different stages. Although there is some empirical evidence on the evolution of COVID-19 --based on observations made by radiologists \cite{pan2020imaging}--, the employment of automatic techniques based on machine learning would help to analyze data massively, to guide research onto certain paths or to extract conclusions faster. But more interpretable and explainable methods are required to go one step further. In line with the previous comment, and based on the empirical evidence regarding the evolution of the disease, it has been stated that during early stages of the disease ground-glass shadows, pulmonary consolidation and nodules, and local consolidation in the centre with peripheral ground-glass density are often observed; once the disease evolves, the consolidations reduce their density, resembling a ground-glass opacity, which can derive in a ``white lung'' if the disease worsens or in a minimization of the opacities if the course of the disease improves \cite{pan2020imaging}. In this manner, if any of these characteristic behaviours were automatically identified, it would be possible to stratify the stage of the disorder according to its severity.
Not only that, but computing the extent of the ground-glass opacities or densities would also be useful to assess the severity of the infection or to evaluate the evolution of the disease. The assessment of the infection extent has been previously tested in other CT studies of COVID-19 \cite{yang2020chest}, but using manual procedures based on the observation of the images. Solutions like the one discussed in this paper are intended to support a much faster diagnosis and to alleviate the workload of radiologists and specialists, but not to substitute their assessment. A rigorous validation would open the door to integrating these algorithms in desktop applications or cloud servers for their use in the clinical environment. Thus, their use, maintenance and update would be simple and cost-effective, and would reduce healthcare costs, improve the accuracy of the diagnosis and shorten the response time \cite{topol2019deep}. In any case, the deployment of these algorithms is not exempt from controversies: hosting the AI models in a cloud service would entail the upload of the images, which might be subject to national and/or international regulations and constraints to ensure privacy \cite{he2019practical}. \bibliographystyle{IEEEtran} \balance \section{Introduction} \label{sec:introduction} \IEEEPARstart{T}{he} COVID-19 pandemic has rapidly become one of the biggest world health challenges in recent years. The disease spreads at a fast pace: the reproduction number of COVID-19 ranged from $2.24$ to $3.58$ during the first months of the pandemic \cite{zhao2020preliminary}, meaning that, on average, an infected person transmitted the disease to $2$ or more people.
As a result, the number of COVID-19 infections dramatically increased from just a hundred cases in January --almost all of them concentrated in China-- to more than $43$ million in November, spread all around the world \cite{ECDC:2020}. COVID-19 is caused by the coronavirus SARS-CoV2, a virus that belongs to the same family as those causing other respiratory disorders such as the \textit{Severe Acute Respiratory Syndrome} (SARS) and the \textit{Middle East Respiratory Syndrome} (MERS). The symptomatology of COVID-19 is diverse and arises after an incubation period of around $5.2$ days. Symptoms might include fever, dry cough, and fatigue; although headache, haemoptysis, diarrhoea, dyspnoea, and lymphopenia are also reported \cite{rothan2020epidemiology,chen2020epidemiological}. In severe cases, an \textit{Acute Respiratory Distress Syndrome} (ARDS) might be developed as a result of the underlying pneumonia associated with COVID-19. For the most serious cases, the estimated period from the onset of the disease to death ranged from 6 to 41 days (with a median of 14 days), depending on the age of the patient and the status of the patient's immune system \cite{rothan2020epidemiology}. Once SARS-CoV2 reaches the host's lungs, it gets into the cells through a protein called ACE2, which serves as the ``opening'' of the cell lock. After the genetic material of the virus has multiplied, the infected cell produces proteins that complement the viral structure to produce new viruses. Then, the virus destroys the infected cell, leaves it and infects new cells. The destroyed cells produce radiological lesions \cite{pan2020initial,pan2020imaging,zhou2020coronavirus} such as consolidations and nodules in the lungs, which are observable in the form of ground-glass opacity regions in the XR images (Fig. \ref{rx-COVID-19}). These lesions are more noticeable in patients assessed $5$ or more days after the onset of the disease, and especially in those older than $50$ \cite{song2020emerging}.
Findings also suggest that patients recovered from COVID-19 have developed pulmonary fibrosis \cite{Hosseiny2020}, in which the connective tissue of the lung gets inflamed. This leads to a pathological proliferation of the connective tissue between the alveoli and the surrounding blood vessels. Given the aforementioned, radiological imaging techniques --using plain chest \textit{X-Ray} (XR) and/or thorax \textit{Computer Tomography} (CT)-- have become crucial diagnostic and evaluation tools to identify and assess the severity of the infection. Since the declaration of the COVID-19 pandemic by the World Health Organization, four key areas were identified to reduce the impact of the disease in the world: to prepare and be ready; detect, protect, and treat; reduce transmission; and innovate and learn \cite{WHO_pandemic:2020}. Concerning the area of detection, great efforts have been made to improve the diagnostic procedures for COVID-19. To date, the gold standard in the clinic is still a molecular diagnostic test based on a \textit{polymerase chain reaction} (PCR), which is precise but time-consuming, requires specialized personnel and laboratories, and is in general limited by the capacities and resources of the health systems. This poses difficulties due to the rapid rate of growth of the disease. An alternative to PCR is the rapid tests such as those based on \textit{real-time reverse transcriptase-polymerase chain reaction} (RT-PCR), as they can be more rapidly deployed, decrease the load of the specialized laboratories, require less specialized personnel and provide a faster diagnosis compared to traditional PCR. Other tests, such as those based on antigens, are now available, but are mainly used for massive testing (i.e. for non-clinical applications) due to a higher chance of missing an active infection.
In contrast with RT-PCR, which detects the virus's genetic material, antigen tests identify specific proteins on the surface of the virus, requiring a higher viral load, which significantly shortens the period of sensitivity. In clinical practice, the RT-PCR test is usually complemented with a chest XR, in such a manner that the combined analysis reduces the significant number of false negatives and, at the same time, brings additional information about the extent and severity of the disease. In addition to that, thorax CT is also used as a second-row method for evaluation. Although the evaluation with CT provides more accurate results in early stages and has been shown to have greater sensitivity and specificity \cite{ai2020correlation}, XR imaging has become the standard in screening protocols, since it is fast, minimally invasive, low-cost, and requires simpler logistics for its implementation. In the search for rapid, more objective, accurate and sensitive procedures which could complement the diagnosis and assessment of the disorder, a trend of research has emerged to employ clinical features extracted from thorax CT or chest XR for automatic detection purposes. A potential benefit of studying the radiological images also comes from the potential of medical imaging to characterize pneumonic states even in asymptomatic population \cite{chan2020familial}, although more research is needed in this field, as the lack of findings in infected patients has also been reported \cite{li2020chest}. The consolidation of such technology would permit a speedy and accurate diagnosis of COVID-19, decreasing the pressure on the microbiological laboratories in charge of the PCR tests, and providing more objective means of assessing the severity of the disease. To this end, techniques based on deep learning have been employed to characterize XR images, with promising results.
Although it would be desirable to employ CT for detection purposes, some major drawbacks are often present, including higher costs, a more time-consuming procedure, the necessity of thorough hygienic protocols not to spread infections, and the requirement of specialized equipment that might not be readily available in hospitals or health centres. By contrast, XR is available as a first screening test in many hospitals or health centres, at lower expense and with less time-consuming imaging procedures. Several approaches for COVID-19 detection based on chest XR images and different deep learning architectures have been published in the last few months, reporting classification accuracies around 90\% or higher. However, the central analysis in most of those works has focused on variations of network architectures, and less attention has been paid to the variability factors that a real solution should tackle before it can be deployed in the medical setting. In this sense, no analyses have been provided to demonstrate the reliability of the predictions made by the networks, which in the context of medical solutions acquires special relevance. Moreover, most of the works in the state of the art have validated their results with datasets containing dozens or a few hundred COVID-19 samples, limiting the impact of the proposed solutions. With these antecedents in mind, this paper uses a deep learning algorithm based on a CNN, together with data augmentation and regularization techniques to handle data imbalance, for the discrimination between COVID-19, controls and other types of pneumonia. The methods are tested with the largest corpus to date known by the authors. Three different sets of experiments were carried out in the search for the most suitable and coherent approach.
To this end, the paper also uses explainability techniques to gain insight into the manner in which the neural network learns, and interpretability in terms of the overlap between the regions of interest selected by the network and those that are more likely affected by COVID-19. A critical analysis of the factors that affect the performance of automatic systems based on deep learning is also carried out. This paper is organized as follows: Section \ref{background} presents some background and antecedents on the use of deep learning for COVID-19 detection; Section \ref{sec:methodology} presents the methodology; Section \ref{sec:results} presents the results obtained; whereas Section \ref{sec:disscon} presents the discussions and main conclusions of this paper. \begin{figure*}[!hp] \setlength{\tempheight}{0.18\textheight} \settowidth{\tempwidth}{\includegraphics[height=\tempheight]{images/CRXNIH__0__n__DX__PA__19169__5__00019169_017.png}} \centering \hspace{0.1cm} \columnname{Control}\hspace{0.4cm} \columnname{Pneumonia}\hspace{0.3cm} \columnname{COVID-19}\\ \rowname{Raw image} \subfloat[] { \label{rx-control} \includegraphics[width=0.25\textwidth]{images/CRXNIH__0__n__DX__PA__19169__5__00019169_017.png} } \subfloat[] { \label{rx-Pneumonia} \includegraphics[width=0.25\textwidth]{images/CRXNIH__2__n__DX__PA__2439__10__00002439_010.png} } \subfloat[] { \label{rx-COVID-19} \includegraphics[width=0.25\textwidth]{images/HM2__1__n__AP__CR__2049__17__1.3.51.0.7.1724101731.55385.64073.43842.45459.6508.63735.DC3.png} } \\ \rowname{Exp 1} \subfloat[] { \label{Exp1-control} \includegraphics[width=0.25\textwidth]{images/cam_CRXNIH__0__n__DX__PA__19169__5__00019169_017_2.png} } \subfloat[] { \label{Exp1-Pneumonia} \includegraphics[width=0.25\textwidth]{images/cam_CRXNIH__2__n__DX__PA__2439__10__00002439_010_2.png} } \subfloat[] { \label{Exp1-COVID-19}
\includegraphics[width=0.25\textwidth]{images/cam_HM2__1__n__AP__CR__2049__17__1.3.51.0.7.1724101731.55385.64073.43842.45459.6508.63735.DC3_2.png} } \\ \rowname{Exp 2} \subfloat[] { \label{Exp2-control} \includegraphics[width=0.25\textwidth]{images/cam_CRXNIH__0__n__DX__PA__19169__5__00019169_017_3.png} } \subfloat[] { \label{Exp2-Pneumonia} \includegraphics[width=0.25\textwidth]{images/cam_CRXNIH__2__n__DX__PA__2439__10__00002439_010_3.png} } \subfloat[] { \label{Exp2-COVID-19} \includegraphics[width=0.25\textwidth]{images/cam_HM2__1__n__AP__CR__2049__17__1.3.51.0.7.1724101731.55385.64073.43842.45459.6508.63735.DC3_3.png} } \\ \rowname{Exp 3} \subfloat[] { \label{Exp3-control} \includegraphics[width=0.25\textwidth]{images/cam_CRXNIH__0__n__DX__PA__19169__5__00019169_017.png} } \subfloat[] { \label{Exp3-Pneumonia} \includegraphics[width=0.25\textwidth]{images/cam_CRXNIH__2__n__DX__PA__2439__10__00002439_010.png} } \subfloat[] { \label{Exp3-COVID-19} \includegraphics[width=0.25\textwidth]{images/cam_HM2__1__n__AP__CR__2049__17__1.3.51.0.7.1724101731.55385.64073.43842.45459.6508.63735.DC3.png} } \caption[]{ Experiments considered in the paper. \textbf{First row:} raw chest XR images belonging to the control, pneumonia, and COVID-19 classes. \textbf{Second row:} Grad-CAM activation mapping for the XR images. Despite the high accuracy of the methods, in some cases the model focuses its attention on areas different from the lungs. \textbf{Third row:} Grad-CAM activation mapping after zooming in, cropping to a squared region of interest and resizing. Zooming to the region of interest forces the model to focus its attention on the lungs, but errors are still present. \textbf{Fourth row:} Grad-CAM activation mapping after a zooming and segmentation procedure. Zooming in and segmenting force the model to focus its attention on the lungs. The black background represents the mask introduced by the segmentation procedure.
} \label{fig:XR-examples} \end{figure*} \section{Background} \label{background} A large body of research has emerged on the use of \textit{Artificial Intelligence} (AI) for the detection of different respiratory diseases using plain XR images. For instance, in \cite{rajpurkar2017chexnet} the authors developed a $121$-layer \textit{Convolutional Neural Network} (CNN) architecture, called CheXNet, which was trained with a dataset of $100,000$ XR images for the detection of different types of pneumonia. The study reports an area under the \textit{Receiver Operating Characteristic} (ROC) curve of $0.76$ in a multiclass scenario composed of $14$ classes. Directly related to COVID-19 detection, three CNN architectures (ResNet50, InceptionV3 and InceptionResNetV2) were considered in \cite{narin2020automatic}, using a database of just $50$ controls and $50$ COVID-19 patients. The best accuracy ($98\%$) was obtained with ResNet50. In \cite{hemdan2020covidx}, seven different deep CNN models were tested using a corpus of $50$ controls and $25$ COVID-19 patients. The best results were attained with the VGG19 and DenseNet models, obtaining F1-scores of $0.89$ and $0.91$ for controls and patients, respectively. The COVID-Net architecture was proposed in \cite{wang2020covid}. The net was trained with an open repository, called COVIDx, composed of $13,975$ XR images, although only $358$ --coming from $266$ patients-- belonged to the COVID-19 class. The attained accuracy was $93.3\%$. In \cite{zhang2020covid}, a deep anomaly detection algorithm was employed for the detection of COVID-19 on a corpus of $100$ COVID-19 images (taken from $70$ patients) and $1,431$ control images (taken from $1,008$ patients). A sensitivity of $96\%$ and a specificity of $70\%$ were obtained. In \cite{Islam2020}, a combination of a CNN for feature extraction and a \textit{Long Short Term Memory Network} (LSTM) for classification was used for automatic detection purposes.
The model was trained with a corpus gathered from different sources, consisting of $4,575$ XR images: $1,525$ of COVID-19 (although $912$ come from a repository applying data augmentation), $1,525$ of pneumonia, and $1,525$ of controls. In a $5$-fold cross-validation scheme, a $99$\% accuracy was reported. In \cite{Civit-Masot2020}, the VGG16 network was used for classification, employing a database of $132$ COVID-19, $132$ control and $132$ pneumonia images. Following a hold-out validation, about $100$\% accuracy was obtained identifying COVID-19, with lower values for the other classes. By using transfer learning based on the Xception network, the authors in \cite{NarayanDas2020} adapted a model for the classification of COVID-19. Experiments were carried out on a database of $127$ COVID-19, $500$ controls and $500$ patients with pneumonia gathered from different sources, attaining about $97$\% accuracy. A similar approach, followed in \cite{Ozturk2020}, used the same corpus for the binary classification of COVID-19 and controls, and for the multi-class classification of COVID-19, controls and pneumonia. With a modification of the Darknet model for transfer learning, and a $5$-fold cross-validation, a $98$\% accuracy in binary classification and an $87$\% accuracy in multi-class classification were obtained. Another Xception transfer-learning-based approach was presented in \cite{Khan2020}, but considering two multi-class classification tasks: i) controls vs. COVID-19 vs. viral pneumonia vs. bacterial pneumonia; ii) controls vs. COVID-19 vs. pneumonia. To deal with the imbalance of the corpus, undersampling was used to randomly discard registers from the larger classes, obtaining $290$ COVID-19, $310$ control, $330$ bacterial pneumonia and $327$ viral pneumonia chest XR images. The reported accuracy was $89$\% in the $4$-class problem, and $94$\% in the $3$-class scenario. Moreover, in a $3$-class cross-database experiment, the accuracy was $90$\%.
In \cite{Minaee2020}, four CNN networks (ResNet18, ResNet50, SqueezeNet, and DenseNet-121) were used for transfer learning. Experiments were performed on a database of $184$ COVID-19 and $5,000$ no-finding and pneumonia images. Reported results indicate a sensitivity of about $98$\% and a specificity of $93$\%. In \cite{Apostolopoulos2020}, five state-of-the-art CNN systems --VGG19, MobileNetV2, Inception, Xception, InceptionResNetV2-- were tested in a transfer-learning setting to identify COVID-19 from control and pneumonia images. Experiments were carried out on two partitions: one of $224$ COVID-19, $700$ bacterial pneumonia and $504$ control images; and another that considered the previous normal and COVID-19 data, but included $714$ cases of bacterial and viral pneumonia. The MobileNetV2 net attained the best results, with $96$\% and $94$\% accuracy in the 2- and 3-class classification, respectively. In \cite{Apostolopoulos2020b}, the MobileNetV2 net was trained from scratch, and compared to one net based on transfer learning and to another based on hybrid feature extraction with fine-tuning. Experiments performed on a dataset of $3,905$ XR images of $6$ diseases indicated that training from scratch outperforms the other approaches, attaining $87$\% accuracy in the multi-class classification and $99$\% in the detection of COVID-19. A system, also grounded on the InceptionNet and transfer learning, was presented in \cite{Das2020}. Experiments were performed on $6$ partitions of XR images with COVID-19, pneumonia, tuberculosis and controls. Reported results indicate $99$\% accuracy, in a $10$-fold cross-validation scheme, in the classification of COVID-19 against the other classes. In \cite{Togacar2020}, fuzzy colour techniques were used as a pre-processing stage to remove noise and enhance XR images in a 3-class classification setting (COVID-19, pneumonia and controls). The pre-processed images and the original ones were stacked.
Then, two CNN models were used to extract features: MobileNetV2 and SqueezeNet. A feature selection technique based on social mimic optimization and a \textit{Support Vector Machine} (SVM) were used. Experiments were performed on a corpus of $295$ COVID-19, $65$ control and $98$ pneumonia XR images, attaining about $99$\% accuracy. Given the limited amount of COVID-19 images, some approaches have focused on generating artificial data to train better models. In \cite{Waheed2020}, an auxiliary \textit{Generative Adversarial Network} (GAN) was used to produce artificial COVID-19 XR images from a database of $403$ COVID-19 and $1,124$ controls. Results indicated that data augmentation increased the accuracy from $85$\% to $95$\% on the VGG16 net. Similarly, in \cite{Loey2020}, a GAN was used to augment a database of $307$ images belonging to four classes: controls, COVID-19, bacterial and viral pneumonia. Different CNN models were tested in a transfer-learning-based setting, including AlexNet, GoogLeNet, and ResNet18. The best results were obtained with GoogLeNet, achieving $99$\% accuracy in a multi-class classification approach. In \cite{Toraman2020}, a CNN based on capsule networks (CapsNet) was used for binary (COVID-19 vs. controls) and multi-class classification (COVID-19 vs. pneumonia vs. controls). Experiments were performed on a dataset of $231$ COVID-19, $1,050$ pneumonia and $1,050$ control XR images. Data augmentation was used to increase the number of COVID-19 images to $1,050$. On a $10$-fold cross-validation scheme, $97$\% accuracy for binary classification and $84$\% for multi-class classification were achieved. The CovXNet architecture, based on depth-wise dilated convolution networks, was proposed in \cite{Mahmud2020}. In a first stage, pneumonia (viral and bacterial) and control images were employed for pretraining. Then, a refined model for COVID-19 was obtained using transfer learning. In experiments using two databases, $97$\% accuracy was achieved for COVID-19 vs.
controls, and $90$\% for COVID-19 vs. controls vs. bacterial and viral cases of pneumonia. In \cite{Oh2020}, an easy-to-train neural network with a limited number of training parameters was presented. To this end, patch phenomena found in XR images were studied (bilateral involvement, peripheral distribution and ground-glass opacification) to develop a lung segmentation and a patch-based neural network that distinguished COVID-19 from controls. The basis of the system was the ResNet18 network. Saliency maps were also used to produce interpretable results. In experiments performed on a database of controls ($191$), bacterial pneumonia ($54$), tuberculosis ($57$) and viral pneumonia ($20$), about $89$\% accuracy was obtained. Likewise, interpretable results were reported in terms of large correlations between the activation zones of the saliency maps and the radiological findings found in the XR images. In addition, the authors indicate that when the lung segmentation approach was not considered, the system's accuracy decreased to about $80$\%. In \cite{Altan2020}, 2D curvelet transformations were used to extract features from XR images. A feature selection algorithm based on meta-heuristics was used to find the most relevant characteristics, while a CNN model based on EfficientNet-B0 was used for classification. Experiments were carried out on a database of $1,341$ controls, $219$ COVID-19, and $1,345$ viral pneumonia images, and $99$\% classification accuracy was achieved with the proposed approach. Multi-class and hierarchical classification of different types of diseases producing pneumonia (with $7$ labels and $14$ label paths), including COVID-19, were explored in \cite{Pereira2020}. Since the database of $1,144$ XR images was heavily imbalanced, different resampling techniques were considered.
By following a transfer-learning approach based on a CNN architecture to extract features, and a hold-out validation with $5$ different classification techniques, a macro-avg F1-score of $0.65$ and an F1-score of $0.89$ were obtained for the multi-class and hierarchical classification scenarios, respectively. In \cite{Brunese2020}, a three-phase approach is presented: i) to detect the presence of pneumonia; ii) to classify between COVID-19 and pneumonia; and iii) to highlight regions of interest in the XR images. The proposed system utilized a database of $250$ images of COVID-19 patients, $2,753$ with other pulmonary diseases and $3,520$ controls. By using a transfer-learning system based on VGG16, an accuracy of about $97$\% was reported. A hierarchical CNN approach using decision trees (based on ResNet18) was presented in \cite{Yoo2020}, in which a first tree classified XR images into the normal or pathological classes, the second identified tuberculosis, and the third COVID-19. Experiments were carried out on $3$ partitions obtained after gathering images from different sources and applying data augmentation. The accuracy for each of the decision trees --starting from the first-- was about $98$\%, $80$\%, and $95$\%, respectively. \subsection*{Issues affecting results in the literature} Table \ref{tab:SoASummary} presents a summary of the state of the art in the automatic detection of COVID-19 based on XR images and deep learning. Despite the excellent results reported, the review reveals that some of the proposed systems suffer from certain shortcomings that affect the conclusions that can be extracted from them, limiting the possibility of transferring them to the clinical environment. Likewise, there exist variability factors that have not been deeply studied in these papers and which can be regarded as important.
For instance, one of the issues that most affects the reviewed systems to detect COVID-19 from plain chest XR images is the use of very limited datasets, which compromises their generalization capabilities. Indeed, to the authors' knowledge, to date, the paper employing the largest database of COVID-19 considers $1,525$ XR images gathered from different sources. However, of these, $912$ belong to a data-augmented repository, which does not include additional information about the initial number of files or the number of augmented images. In general terms, most of the works employ fewer than $300$ COVID-19 XR images, with some systems using as few as $50$ images. This is, however, understandable, since some of these works were published at the onset of the pandemic, when the number of available registers was limited. On the other hand, a good balance in the age of the patients is considered important to prevent the model from learning age-specific features. However, several previous works have used XR images from children to populate the pneumonia class\footnote{First efforts used the RSNA Pneumonia Detection Challenge dataset, which is focused on the detection of pneumonia cases in children. https://www.kaggle.com/c/rsna-pneumonia-detection-challenge/overview}. This might be biasing the results, given the age differences with respect to COVID-19 patients. Although many works in the literature report a good performance in the detection of COVID-19, most of the approaches follow a brute-force approach exploiting the potential of deep learning to correlate with the outputs (i.e. the class labels), but provide low interpretability and explainability of the process. It means that it is unclear whether the good results are due to the actual capability of the system to extract information related to the pathology, or to its capability to learn other aspects that bias and compromise the results.
As a matter of example, just one of the works reported in the literature follows a strategy that forces the network to focus on the most significant areas of interest for COVID-19 detection \cite{Oh2020}. It does so by proposing a methodology based on a semantic segmentation of the lungs. In the remaining cases, it is unclear whether the models are analyzing the lungs, or whether they are categorizing based on any other information available, which might be useful for classification purposes but might lack diagnostic interest. This is relevant because, in all the analyzed works in the literature, the pneumonia and control classes come from a certain repository, whereas others, such as COVID-19, come from a combination of sources and repositories. Having classes with different recording conditions might certainly affect the results, and as such, a critical study of this aspect is needed. In the same line, other variability issues, such as the sensor technology employed, the type of projection used, the sex of the patients, and even their age, require a thorough study. Finally, the review revealed that most of the published papers showed excellent correlation with the disease but low interpretability and explainability (see Table \ref{tab:SoASummary}). Indeed, in clinical practice, it is often more desirable to obtain interpretable results that correlate with pathological conditions, or with a certain demographic or physiological variable, than a black-box system that simply states a binary or multi-class decision. From the revision of the literature, only \cite{Oh2020} and \cite{Mahmud2020} partially addressed this aspect. Thus, further research on this topic is needed. With these ideas in mind, this paper addresses these aspects by training and testing with a wide corpus of XR images, proposing and comparing two strategies to preprocess the images, analyzing the effect of some variability factors, and providing some insights towards more explainable and interpretable results.
The major goal is presenting a critical overview of these aspects since they might be affecting the modelling capabilities of the deep learning systems for the detection of COVID-19. \begin{table*}[htbp] \caption{Summary of the literature in the field} \begin{tabular}{@{}lp{5.5cm}lllllp{2cm}l@{}} \toprule \multirow{2}{*}{\textbf{Ref.}} & \multirow{2}{*}{\textbf{Architecture}} & \multicolumn{3}{c}{\textbf{Number of cases}} & \multirow{2}{*}{\textbf{Classes}} & \multirow{2}{*}{\textbf{\begin{tabular}[c]{@{}l@{}}Performance\\ metrics\end{tabular}}} & \multirow{2}{*}{\textbf{\begin{tabular}[c]{@{}l@{}}Lung\\segment.\end{tabular}}} & \multirow{2}{*}{\textbf{Explainable}} \\ \cmidrule(lr){3-5} & & \textbf{COVID-19} & \textbf{Controls} & \textbf{Others} & & & & \\ \midrule \cite{narin2020automatic} & InceptionV3, InceptionResNetV2, ResNet50 & 50 & 50 & -- & 2 & Acc=98\% & N & N \\ \cite{hemdan2020covidx} & VGG19, DenseNet & 25 & 50 & -- & 2 & AvF1=0.90 & N & N \\ \cite{wang2020covid} & COVID-Net, ResNet50, VGG19 & 358 & 8066 & 5538 & 3 & Acc=93.3\% & N & N \\ \cite{zhang2020covid} & EfficientNet & 100 & 1431 & -- & 2 & Se=96\% Sp=70\% & N & N \\ \cite{Islam2020} & CNN + LSTM & 1525* & 1525 & 1525 & 3 & Acc=99\% & N & N \\ \cite{NarayanDas2020} & Xception & 127 & 500 & 500 & 3 & Acc=97\% & N & N \\ \cite{Ozturk2020} & Darknet & 127 & 500 & 500 & 3 & Acc=87\% & N & N \\ \cite{Khan2020} & Xception & 290 & 310 & 657 & 3 & Acc=93\% & N & N \\ \cite{Apostolopoulos2020} & VGG19, MobileNetV2, Inception, Xception, InceptionResNetV2 & 224 & 504 & 700 & 3 & Acc=94\% & N & N \\ \cite{Togacar2020} & MobileNetV2, SqueezeNet & 295 & 65 & 98 & 3 & Acc=99\% & N & N \\ \cite{Das2020} & Inception & 162 & 2003 & 4650 & 3 & Acc=99\% & N & N \\ \cite{Pereira2020} & Inception-V3 & 90 & 1000 & 687 & 7 & AvF1=0.65 & N & N \\ \cite{Waheed2020} & VGG16 & 403 & 1124 & -- & 2 & Acc=95\% & N & N \\ \cite{Loey2020} & Alexnet, Googlenet, Restnet18 & 69 & 79 & 158 & 4 & Acc=99\% & N & N \\ 
\cite{Toraman2020} & Capsnet & 231 & 1050 & 1050 & 3 & Acc=84\% & N & N \\ \cite{Mahmud2020} & CovXNet & 305 & 305 & 610 & 4 & Acc=90.2\% & N & Y \\ \cite{Oh2020} & ResNet18 & 180 & 191 & 131 & 4 & Acc=89\% & Y & Y \\ \cite{Civit-Masot2020} & VGG16 & 132 & 132 & 132 & 3 & AvF1=0.85 & N & N \\ \cite{Yoo2020} & ResNet18 & 162 & 585 & 585 & 3 & Acc=95\% & N & N \\ \cite{Minaee2020} & ResNet18, ResNet50, SqueezeNet, DenseNet121 & 184 & 2400 & 2600 & 2 & Se=98\% Sp=92.9\% & N & N \\ \cite{Brunese2020} & VGG16 & 250 & 3520 & 2753 & 4 & Acc=97\% & N & N \\ \cite{Altan2020} & EfficientNet-B & 219 & 1341 & 1345 & 3 & Acc=99\% & N & N \\ \bottomrule \end{tabular} * 912 coming from a repository of data-augmented images. \label{tab:SoASummary}% \end{table*} \section{Methodology} \label{sec:methodology} This section presents the design methodology. First, the procedure followed to train the neural network is described, together with the process followed to create the dataset. The network and the source code to train it are available at \url{https://github.com/jdariasl/COVIDNET}, so results can be readily reproduced by other researchers. \subsection{The network} \label{sec:network} The core of the system is a deep CNN based on the COVID-Net\footnote{Following the PyTorch implementation available at \url{https://github.com/IliasPap/COVIDNet}} proposed in \cite{wang2020covid}. Some modifications were made to include regularization components in the last two dense layers and a weighted categorical cross-entropy loss function to compensate for the class imbalance. The structure of the network was also refactored to allow gradient-based localization estimations \cite{Selvaraju_2019}, which are used after training in the search for an explainable model.
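As an illustration of the weighted categorical cross-entropy used to compensate for the class imbalance (in PyTorch this corresponds to passing a per-class \texttt{weight} tensor to \texttt{nn.CrossEntropyLoss}; the numpy sketch below is illustrative and not the paper's exact code):

```python
import numpy as np

def weighted_cross_entropy(logits, labels, class_weights):
    """Weighted categorical cross-entropy over a batch.

    logits: (N, C) raw scores; labels: (N,) integer class ids;
    class_weights: (C,) weights compensating the class imbalance.
    """
    # Numerically stable softmax.
    z = logits - logits.max(axis=1, keepdims=True)
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    # Per-sample negative log-likelihood, scaled by the class weight.
    n = len(labels)
    w = class_weights[labels]
    nll = -w * np.log(probs[np.arange(n), labels] + 1e-12)
    # Weighted mean, as in PyTorch's nn.CrossEntropyLoss(weight=...).
    return nll.sum() / w.sum()

# Demo: confident, correct predictions yield a small loss.
demo_loss = weighted_cross_entropy(
    np.array([[5.0, 0.0, 0.0], [0.0, 5.0, 0.0]]),
    np.array([0, 1]),
    np.ones(3))
```

Note that the loss is normalized by the sum of the sample weights, so a uniform rescaling of the class weights leaves it unchanged.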
The network was trained with the corpus described in \ref{sec:corpus} using the Adam optimizer with a reduce-on-plateau learning rate policy: the learning rate decreases when learning stagnates for a period of time (the `patience'). The following hyperparameters were used for training: learning rate=$2^{-5}$, number of epochs=$24$, batch size=$32$, factor=$0.5$, patience=$3$. Furthermore, data augmentation for the pneumonia and COVID-19 classes was leveraged with the following augmentation types: horizontal flip, Gaussian noise with a variance of $0.015$, rotation, elastic deformation, and scaling. The variant of the COVID-Net was built and evaluated using the PyTorch library \cite{NEURIPS2019_9015}. The CNN features from each image are concatenated by a flatten operation and the resulting feature map is fed to three fully connected layers to generate a probability score for each class. The first two fully connected layers include dropout regularization of $0.3$ and ReLU activation functions. Dropout was necessary because the original network tended to overfit from the very beginning of the training phase. The input layer of the network rescales the images keeping the aspect ratio, with the shortest dimension scaled to $224$ pixels. Then, the input image is cropped to a square of $224 \times 224$ pixels located in the centre of the image. Images are normalized using a z-score function with parameters $mean=[0.485, 0.456, 0.406]$ and $std=[0.229, 0.224, 0.225]$ for each of the three RGB channels, respectively. Even though we are working with grayscale images, the network architecture was designed to be pre-trained on a general-purpose database including coloured images; this characteristic was kept in case it becomes necessary to use some transfer learning strategy in the future. The output layer of the network provides a score for each of the three classes (i.e.
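In a PyTorch pipeline, the input stage described above is typically expressed with torchvision transforms (\texttt{Resize(224)}, \texttt{CenterCrop(224)}, \texttt{Normalize(...)}); the dependency-free sketch below illustrates the crop and z-score steps only (the aspect-preserving resize is omitted):

```python
import numpy as np

# Per-channel normalization statistics from the paper's z-score step.
MEAN = np.array([0.485, 0.456, 0.406])
STD = np.array([0.229, 0.224, 0.225])

def center_crop(img, size=224):
    """Crop a (H, W, C) image to a size x size square at the centre."""
    h, w = img.shape[:2]
    top = (h - size) // 2
    left = (w - size) // 2
    return img[top:top + size, left:left + size]

def normalize(img):
    """Per-channel z-score with the statistics defined above."""
    return (img - MEAN) / STD

# Demo on a dummy 300 x 260 RGB image.
demo = center_crop(np.ones((300, 260, 3)), size=224)
```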
control, pneumonia, or COVID-19), which is converted into three probability estimates --in the range $[0, 1]$-- using a softmax activation function. The final decision about class membership is made according to the highest of the three probability estimates. \subsection{The corpus} \label{sec:corpus} The corpora used in the paper have been compiled from a set of \textit{Posterior-Anterior} (PA) and \textit{Anterior-Posterior} (AP) XR images from different public sources. The compilation contains images from participants without any observable pathology (controls or no findings), pneumonia cases, and COVID-19 cases. After the compilation, two subsets of images were generated, i.e., training and testing. Table \ref{tab:classDistribution} contains the number of images per subset and class. Overall, the corpus contains more than $70,000$ XR images, including more than $8,500$ images belonging to COVID-19 patients. \begin{table}[htbp] \centering \caption{Number of images per class for training and testing subsets} \begin{tabular}{@{}cccc@{}} \toprule {Subset} & \multicolumn{1}{l}{Control} & \multicolumn{1}{l}{Pneumonia} & \multicolumn{1}{l}{COVID-19} \\ \midrule Training & 45022 & 21707 & 7716 \\ Testing & 4961 & 2407 & 857 \\ \bottomrule \end{tabular}% \label{tab:classDistribution}% \end{table}% The repositories of XR images employed to create the corpus used in this paper are presented next. Most of these contain solely records of controls and pneumonia patients. Only the most recent repositories include samples of COVID-19 XR images. In all cases, the annotations were made by a specialist, as indicated by the authors of the repositories. The COVID-19 class is modelled by compiling images coming from three open data collection initiatives: the HM Hospitales COVID \cite{Hospitales2020}, BIMCV-COVID19 \cite{vay2020bimcv} and Actualmed COVID-19 \cite{Actualmed2020} chest XR datasets.
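Given the class distribution in Table \ref{tab:classDistribution}, the per-class weights for the weighted loss can be derived, for instance, from the inverse class frequencies; this is an illustrative choice, since the paper does not detail the exact weighting scheme:

```python
import numpy as np

# Training counts per class (control, pneumonia, COVID-19).
counts = np.array([45022, 21707, 7716])

# Illustrative scheme (an assumption, not necessarily the paper's):
# weights inversely proportional to the class frequency, normalized so
# that sum_i w_i * n_i equals the total number of training images.
weights = counts.sum() / (len(counts) * counts)
```

Under this scheme the underrepresented COVID-19 class receives the largest weight, so its errors contribute proportionally more to the loss.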
The final result of the compilation process is a subset of $8,573$ images from more than $3,600$ patients at different stages of the disease\footnote{Figures at the time the datasets were downloaded. The datasets are still open, and more data might become available in the near future.}. Table \ref{tab:DemographicDistribution} summarizes the most significant characteristics of the datasets used to create the corpus, which are presented next: \subsubsection{HM Hospitales COVID-19 dataset} This dataset was compiled by HM Hospitales \cite{Hospitales2020}. It contains all the available clinical information about anonymous patients with the SARS-CoV-2 virus who were treated in different centres belonging to this company since the beginning of the pandemic in Madrid, Spain. The corpus contains the anonymized records of $2,310$ patients, and collects the different interactions in the COVID-19 treatment process including, among many other records, information on diagnoses, admissions, diagnostic imaging tests (XR and CT), treatments, laboratory results, and discharge or death. The information is organized according to its content, all of it linked by a unique identifier. The dataset contains several radiological studies for each patient, corresponding to different stages of the disease. Images are stored in the standard DICOM format. A total of $5,560$ XR images are available in the dataset, with an average of $2.4$ image studies per subject, often taken at intervals of two or more days. The histogram of the patients' ages is highly consistent with the demographics of COVID-19 in Spain (see Table \ref{tab:DemographicDistribution} for more details). Images were acquired with diverse devices, using different technologies, configurations, positions of the patient, and views. Only patients with at least one positive PCR test or positive immunological test for SARS-CoV-2 were included in the study.
The Data Science Commission and the Research Ethics Committee of HM Hospitales approved the current research study and the use of the data for this purpose. \subsubsection{BIMCV COVID19 dataset} The BIMCV COVID19 dataset \cite{vay2020bimcv} is a large dataset with chest radiological studies (XR and CT) of COVID-19 patients, along with their pathologies, results of PCR and immunological tests, and radiological reports. It was recorded by the Valencian Region Medical Image Bank (BIMCV) in Spain. The dataset contains the anonymized studies of patients with at least one positive PCR test or positive immunological test for SARS-CoV-2 in the period between February 26th and April 18th, 2020. Patients were identified by querying the Health Information Systems of 11 different hospitals in the Valencian Region, Spain. Studies were acquired using more than 20 different devices, with different technologies, configurations, positions of the patient, and views. The corpus is composed of a total of $3,013$ XR images, with an average of $1.9$ image studies per subject, taken at intervals of approximately two or more days. The histogram of the patients' ages is highly consistent with the demographics of COVID-19 in Spain (Table \ref{tab:DemographicDistribution}). All images are labelled with the technology of the sensor, but not all of them with the projection. \begin{table*}[htbp] \centering \caption{Demographic data of the datasets used.
Only those labels that were confirmed are reported}\label{tab:DemographicDistribution}% \begin{tabular}{@{}lllllllll@{}} \toprule & Mean age $\pm$ std & \# Males/\# Females & \# Images & AP/PA & DX/CR & COVID-19 & Pneumonia & Control \\ \midrule HM Hospitales & 67.8 $\pm$ 15.7 & 3703/1857 * & 5560 & 5018/542 & 1264/4296 & Y & N & N \\ BIMCV & 62.4 $\pm$ 16.7 & 1527/1486 ** & 3013 & 1171/1217 & 1145/1868 & Y & N & N \\ ACT & -- & -- & 188 & 30/155 & 126/59 & Y & N & Y \\ ChinaSet & 35.4 $\pm$ 14.8 & 449/213 & 662 & 0/662 & 662/0 & N & Y & Y \\ Montgomery & 51.9 $\pm$ 2.41 & 63/74 & 138 & 0/138 & 0/138 & N & Y & Y \\ CRX8 & 45.75 $\pm$ 16.83 & 34760/27030 & 61790 & 21860/39930 & 61790/0 & N & Y & Y \\ CheXpert & 62.38 $\pm$ 18.62 & 2697/1926 & 4623 & 3432/1191 & -- & N & Y & N \\ MIMIC & -- & -- & 16399 & 10850/5549 & -- & N & Y & N \\\bottomrule & * 1377/929 patients & ** 727/626 patients & & & & & & \end{tabular} \end{table*}% \subsubsection{Actualmed set (ACT)} The Actualmed COVID-19 Chest XR dataset initiative \cite{Actualmed2020} contains a series of XR images compiled by Actualmed and Universitat Jaume I (Spain). The dataset contains COVID-19 and control XR images, but no information is given about the place or date of recording or about the demographics. However, a metadata file is included. It contains an anonymized descriptor to distinguish among patients, and information about the XR modality, the type of view, and the class to which the image belongs. \subsubsection{China Set - The Shenzhen set} This set was created by the National Library of Medicine, Maryland, USA, in collaboration with the Shenzhen No.3 People's Hospital at Guangdong Medical College in Shenzhen, China \cite{jaeger2014two}. The chest XR images were gathered from out-patient clinics and were captured as part of the daily routine using Philips DX Digital Diagnose systems.
The dataset contains normal and abnormal chest XR images with manifestations of tuberculosis, and includes the associated radiologist readings. \subsubsection{The Montgomery set} This dataset was created by the National Library of Medicine in collaboration with the Department of Health and Human Services, Montgomery County, Maryland, USA. It contains XR images collected under Montgomery County's tuberculosis screening program \cite{jaeger2014two, jaeger2013automatic}. It comprises a series of images of controls and tuberculosis patients, captured with a Eureka stationary X-ray machine (CR). All images are de-identified and available in DICOM format. The set covers a wide range of abnormalities, including effusions and miliary patterns. \subsubsection{ChestX-ray8 dataset (CRX8)} The ChestX-ray8 dataset \cite{wang2017chestx} contains $112,120$ images covering $14$ common thorax disease categories from $30,805$ unique patients, compiled by the National Institutes of Health (NIH). Natural language processing was used to extract the disease labels from the associated radiological reports. The labels are expected to be more than $90\%$ accurate and suitable for weakly-supervised learning. For this study, the images labelled with `no radiological findings' were used as part of the control class, whereas the images annotated as `pneumonia' were used for the pneumonia class. In total, $61,790$ images were used. Images annotated as pneumonia with other comorbidities were not included. \subsubsection{CheXpert dataset} CheXpert \cite{irvin2019chexpert} is a dataset of XR images created for the automated evaluation of medical imaging, and contains chest XR examinations carried out at Stanford Hospital over a period of $15$ years. For this study, we selected $4,623$ pneumonia images, using those annotated as `pneumonia' with and without additional comorbidities. These comorbidities were never caused by COVID-19.
The motivation to include pneumonia with comorbidities was to increase the number of pneumonia examples in the final compilation for this study, increasing the variability of this cluster. \subsubsection{MIMIC-CXR Database} MIMIC-CXR \cite{Johnson2019} is an open dataset compiled from 2011 to 2016, comprising de-identified chest XR images from patients admitted to the Beth Israel Deaconess Medical Center. The dataset contains $371,920$ XR images associated with $227,943$ imaging studies. Each imaging study can contain one or more images, but is most often associated with two: a frontal and a lateral view. The dataset is complemented with free-text radiology reports. In our study we employed the images for the pneumonia class. The labels were obtained from the agreement of the two methods indicated in \cite{Johnson2019}. The dataset reports no information about gender or age; thus, we assume that its demographics are similar to those of the CheXpert dataset and to those typical of pneumonia \cite{Ramirez2017}. \subsection{Image Pre-processing} XR images were converted to uncompressed grayscale '.png' files, encoded with 16 bits, and preprocessed using the DICOM \textit{WindowCenter} and \textit{WindowWidth} attributes (when needed). All images were converted to a \textit{Monochrome 2} photometric interpretation. Initially, the images were not re-scaled, to avoid loss of resolution in later processing stages. Only AP and PA views were selected. No differentiation was made between erect (either standing or sitting) and decubitus positions. This information was inferred by a careful analysis of the DICOM tags, but also required manual checking due to certain labelling errors. \subsection{Experiments} The corpus collected from the aforementioned databases was processed to compile three different datasets, each of equal size to the initial one. Each of these datasets was used to run a different set of experiments. \subsubsection{Experiment 1.
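The windowing step based on the DICOM \textit{WindowCenter}/\textit{WindowWidth} attributes can be sketched as a linear mapping onto the 16-bit range. This is a simplified linear VOI LUT: in practice the attributes are read with a DICOM library such as pydicom, and the DICOM standard also defines a sigmoid variant and slightly different edge handling:

```python
import numpy as np

def apply_window(pixels, center, width):
    """Map raw DICOM pixel values to a 16-bit grayscale range using
    the WindowCenter/WindowWidth attributes (simplified linear VOI
    LUT). Values outside the window are clipped to its edges.
    """
    lo = center - width / 2.0
    hi = center + width / 2.0
    clipped = np.clip(pixels.astype(np.float64), lo, hi)
    return ((clipped - lo) / (hi - lo) * 65535.0).astype(np.uint16)

# Demo: a window centred at 50 with width 100 spans raw values 0-100.
out = apply_window(np.array([0, 50, 100]), center=50, width=100)
```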
Raw data} The first experiment was run using the raw data extracted from the different datasets. Each image keeps its original aspect ratio. Only histogram equalization was applied. \subsubsection{Experiment 2. Cropped image} The second experiment consists of preprocessing the images by zooming in, cropping to a squared region of interest, and resizing to a squared image (aspect ratio $1:1$). The process is summarized in the following steps: \begin{enumerate} \item Lungs are segmented from the original image using a U-Net semantic segmentation algorithm\footnote{Following the Keras implementation available at \url{https://github.com/imlab-uiip/lung-segmentation-2d}}. The algorithm used reports \textit{Intersection-Over-Union} (IoU) and Dice similarity coefficient scores of 0.971 and 0.985, respectively. \item A black mask is extracted to identify the external boundaries of the lungs. \item The mask is used to create two sequences, adding the grey levels of the rows and columns respectively. These two sequences provide four boundary points, which define two segments of different lengths in the horizontal and vertical dimensions. \item The sequences of added grey levels in the vertical and horizontal dimensions of the mask are used to identify a squared region of interest associated with the lungs, taking advantage of the higher added values outside the lungs (Fig. \ref{fig:Mascara}). The process to obtain the squared region requires identifying the middle point of each of the identified segments and cropping in both dimensions using the length of the longest of these two segments. \item The original image is cropped with a squared template placed in the centre of the matrix using the information obtained in the previous step. No mask is placed over the image. \item Histogram equalization is applied to the resulting image.
\end{enumerate} This process is carried out to decrease the variability of the data, to make the training process of the network simpler, and to ensure that the region of significant interest is in the centre of the image with no areas cut off. \begin{figure}[h!] \centering \includegraphics[width=0.4\textwidth]{images/mascara.png} \caption{Identification of the squared region of interest. Plots on the top and left represent the normalized accumulated grey level in the vertical and horizontal dimensions, respectively. } \label{fig:Mascara} \end{figure} \subsubsection{Experiment 3. Lung segmentation} The third experiment consists of preprocessing the images by masking, zooming in, cropping to a squared region of interest, and resizing to a squared image (aspect ratio $1:1$). The process is summarized in the following steps: \begin{enumerate} \item Lungs are segmented from the original image using the same semantic segmentation algorithm used in experiment 2. \item An external black mask is extracted to identify the external boundaries of the lungs. \item The mask is used to create two sequences, adding the grey levels of the rows and columns respectively. \item The sequences of added grey levels in the vertical and horizontal dimensions of the mask are used to identify a squared region of interest associated with the lungs, taking advantage of the higher added values outside them (Fig. \ref{fig:Mascara}). \item The original image is cropped with a squared template placed in the centre of the image. \item The mask is dilated with a $5 \times 5$ pixels kernel, and superimposed on the image. \item Histogram equalization is applied only to the segmented area (i.e. the area corresponding to the lungs). \end{enumerate} This preprocessing makes the training of the network much simpler and forces the network to focus its attention on the lung region, removing external characteristics --like the sternum-- that might influence the obtained results.
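The boundary-point and square-cropping steps above can be sketched as follows. The sketch assumes a binary mask with ones inside the lung region (the paper works with the complementary black mask, summing grey levels, but the recovered square window is the same):

```python
import numpy as np

def square_roi(mask):
    """Return a square crop window (top, left, side) centred on the
    lungs, given a binary mask with 1 inside the lung region.

    The row/column sums give the four boundary points; the square side
    is the length of the longest of the two segments. In practice the
    window must still be clamped to the image borders.
    """
    rows = np.nonzero(mask.sum(axis=1))[0]
    cols = np.nonzero(mask.sum(axis=0))[0]
    top, bottom = rows[0], rows[-1]
    left, right = cols[0], cols[-1]
    side = int(max(bottom - top, right - left)) + 1
    cy, cx = (top + bottom) // 2, (left + right) // 2
    return int(cy - side // 2), int(cx - side // 2), side

# Demo: a rectangular "lung" region inside a 100 x 100 mask.
mask = np.zeros((100, 100))
mask[20:60, 10:90] = 1
top, left, side = square_roi(mask)
```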
\begin{table*}[!ht] \centering \caption{Performance measures for the three experiments considered in the paper}\label{tab:NumericResults} \begin{tabular}{@{}lclccccc@{}} \toprule \multirow{2}{*}{\textbf{Experiment}} & \multirow{2}{*}{\textbf{Class}} & \multicolumn{6}{c}{\textbf{Measures}} \\ \cmidrule(l){3-8} & & \textbf{PPV} & \textbf{Recall} & \textbf{F1} & \textbf{Acc} & \textbf{BAcc} & \textbf{GMR} \\ \midrule \multirow{3}{*}{\textbf{Exp. 1}} & \textit{Pneumonia} & 92.53 $\pm$ 1.13 & 94.20 $\pm$ 1.43 & 93.35 $\pm$ 0.68 & \multirow{3}{*}{91.67 $\pm$ 2.56} & \multirow{3}{*}{94.43 $\pm$ 1.36} & \multirow{3}{*}{93.00 $\pm$ 1.00} \\ & \textit{Control} & 93.35 $\pm$ 0.68 & 96.56 $\pm$ 0.50 & 97.24 $\pm$ 0.23 & & & \\ & \textit{COVID-19} & 91.67 $\pm$ 2.56 & 94.43 $\pm$ 1.36 & 93.00 $\pm$ 1.00 & & & \\ \midrule \multirow{3}{*}{\textbf{Exp. 2}} & \textit{Pneumonia} & 84.02 $\pm$ 1.16 & 85.75 $\pm$ 1.46 & 84.86 $\pm$ 0.51 & \multirow{3}{*}{87.64 $\pm$ 0.74} & \multirow{3}{*}{81.35 $\pm$ 2.70} & \multirow{3}{*}{81.36 $\pm$ 0.42} \\ & \textit{Control} & 93.62 $\pm$ 0.76 & 92.67 $\pm$ 0.69 & 93.14 $\pm$ 0.25 & & & \\ & \textit{COVID-19} & 81.60 $\pm$ 3.33 & 81.35 $\pm$ 2.70 & 81.36 $\pm$ 0.42 & & & \\ \midrule \multirow{3}{*}{\textbf{Exp. 3}} & \textit{Pneumonia} & 85.26 $\pm$ 0.73 & 85.26 $\pm$ 0.73 & 87.42 $\pm$ 0.27 & \multirow{3}{*}{91.53 $\pm$ 0.20} & \multirow{3}{*}{87.64 $\pm$ 0.74} & \multirow{3}{*}{87.37 $\pm$ 0.84} \\ & \textit{Control} & 96.99 $\pm$ 0.17 & 94.48 $\pm$ 0.24 & 95.72 $\pm$ 0.15 & & & \\ & \textit{COVID-19} & 78.52 $\pm$ 2.08 & 78.73 $\pm$ 2.80 & 78.57 $\pm$ 1.15 & & & \\ \bottomrule \end{tabular} \end{table*} \begin{figure*}[!h] \setlength{\tempheight}{0.18\textheight} \settowidth{\tempwidth}{\includegraphics[height=\tempheight]{images/ROC_Original.png}} \centering \hspace{\baselineskip} \columnname{Exp. 1}\hfil\vspace{-0.3cm} \columnname{Exp. 2}\hfil \columnname{Exp. 
3}\hfil \subfloat[] { \label{ROC_original} \includegraphics[width=0.33\textwidth]{images/ROC_Original.png} } \subfloat[] { \label{ROC_Cropped} \includegraphics[width=0.33\textwidth]{images/ROC_Cropped.png} } \subfloat[] { \label{ROC_CropSeg} \includegraphics[width=0.33\textwidth]{images/ROC_Segmented.png} } \\ \subfloat[] { \label{CM_original} \includegraphics[width=0.33\textwidth]{images/CM_Exp3_OrgImages_Cumulative.png} } \subfloat[] { \label{CM_Cropped} \includegraphics[width=0.33\textwidth]{images/CM_Exp3_CroppedImages_Cumulative.png} } \subfloat[] { \label{CM_CropSeg} \includegraphics[width=0.33\textwidth]{images/CM_Exp3_CroppedSegmentedImages_Cumulative.png} } \caption{ROC curves and confusion matrices for each one of the experiments, considering each one of the classes separately. \textbf{Top:} ROC curves. \textbf{Bottom:} Normalized confusion matrices. \textbf{Left:} Original images (experiment 1). \textbf{Center:} Cropped Images (experiment 2). \textbf{Right:} Segmented images (experiment 3). } \label{fig:ROC_CMatrix} \end{figure*} \begin{figure}[h!] \centering \includegraphics[width=0.4\textwidth]{images/Merged_Three.png} \caption{Average ROC curves for each experiment, including AUC values.} \label{fig:ROCThree} \end{figure} \subsection{Identification of the areas of significant interest for the classification} The areas of significant interest used by the CNN for discrimination purposes are identified using a qualitative analysis based on \textit{Gradient-weighted Class Activation Mapping} (Grad-CAM) \cite{Selvaraju_2019}. This explainability method provides insights into how deep neural networks learn, pointing to the most significant areas of interest for decision-making purposes. The method uses the gradients of any target class, flowing into the final convolutional layer, to produce a coarse localization map that highlights the regions of the image that are most important for identifying the class.
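Given the activations of the final convolutional layer and the gradients of the target class score with respect to them (captured in PyTorch with forward/backward hooks), the map computation itself reduces to a few operations; a minimal numpy sketch:

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM localization map from the final conv layer.

    activations: (C, H, W) feature maps A^k; gradients: (C, H, W)
    dY_c/dA^k for the target class. The channel weights are the
    global-average-pooled gradients; the map is ReLU(sum_k w_k A^k),
    later upsampled to the input size and rendered as a heat map.
    """
    weights = gradients.mean(axis=(1, 2))             # alpha_k
    cam = np.tensordot(weights, activations, axes=1)  # weighted sum
    cam = np.maximum(cam, 0.0)                        # ReLU
    if cam.max() > 0:
        cam /= cam.max()                              # scale to [0, 1]
    return cam
```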
The result of this method is a heat map like those presented in Fig. \ref{fig:XR-examples}, in which the colour encodes the importance of each pixel in differentiating among classes. \section{Results} \label{sec:results} The model has been quantitatively evaluated by computing the test \textit{Positive Predictive Value} (PPV), \textit{Recall}, \textit{F1-score} (F1), \textit{Accuracy} (Acc), \textit{Balanced Accuracy} (BAcc), \textit{Geometric Mean Recall} (GMR) and \textit{Area Under the ROC Curve} (AUC) for each of the three classes in the corpus previously described in section \ref{sec:corpus}. The performance of the models is assessed using an independent testing set, which has not been used during development. A $5$-fold cross-validation procedure has been used to evaluate the obtained results (training/test balance: 90/10\%). The performance of the CNN network in the three experiments considered in this paper is summarized in Table \ref{tab:NumericResults}. Likewise, the ROC curves per class for each of the experiments, and the corresponding confusion matrices, are presented in Fig. \ref{fig:ROC_CMatrix}. The global ROC curve displayed in Fig. \ref{fig:ROCThree} for each experiment summarizes the global performance of the experiments. Considering experiment 1, and although slightly higher for controls, the detection performance remains similar for all classes (the PPV ranges from $91$-$93\%$) (Table \ref{tab:NumericResults}). The remaining measures per class follow the same trend, with similar figures but better numbers for the controls. The ROC curves and confusion matrices of Fig. \ref{fig:ROC_CMatrix}a and Fig. \ref{fig:ROC_CMatrix}d point out that the largest source of confusion for COVID-19 is the pneumonia class. The ROC curves for each one of the classes reach in all cases AUC values larger than $0.99$, which, in principle, is considered excellent.
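The balanced measures above can be computed directly from a confusion matrix; a generic sketch, with BAcc as the mean per-class recall and GMR as its geometric mean (both more informative than plain accuracy under class imbalance):

```python
import numpy as np

def balanced_metrics(cm):
    """Per-class recall, balanced accuracy (mean recall) and geometric
    mean recall, from a confusion matrix cm[true, predicted]."""
    recalls = np.diag(cm) / cm.sum(axis=1)
    bacc = recalls.mean()
    gmr = recalls.prod() ** (1.0 / len(recalls))
    return recalls, bacc, gmr

# Demo: an imbalanced binary problem where accuracy looks flattering.
cm = np.array([[90, 10],
               [5, 5]])
recalls, bacc, gmr = balanced_metrics(cm)
```

In the demo, plain accuracy is $95/110 \approx 0.86$, while BAcc is $0.7$: the minority class is recalled only half of the time, which the balanced measures expose.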
In terms of global performance, the system achieves an Acc of $91\%$ and a BAcc of $94\%$ (Table \ref{tab:NumericResults}). This is also supported by the average ROC curve of Fig. \ref{fig:ROCThree}, which reveals the excellent performance of the network and the almost perfect behaviour of the ROC curve. Deviations are small for the three classes. When experiment 2 is considered, a decrease in the performance per class is observed in comparison to experiment 1. In this case, the PPV ranges from $81$-$93\%$ (Table \ref{tab:NumericResults}), with a similar trend for the remaining figures of merit. The ROC curves and confusion matrices in Fig. \ref{ROC_Cropped} and Fig. \ref{CM_Cropped} report AUC values in the range $0.96$-$0.99$, and an overlapping of the COVID-19 class mostly with pneumonia. The global performance of the system --presented in the ROC curve of Fig. \ref{fig:ROCThree} and Table \ref{tab:NumericResults}-- yields an AUC of $0.98$, an Acc of $87\%$ and a BAcc of $81\%$. Finally, for experiment 3, the PPV ranges from $78\%$-$96\%$ (Table \ref{tab:NumericResults}). In this case, the results are slightly worse than those of experiment 2, with the COVID-19 class presenting the worst performance among all the tests. According to Fig. \ref{ROC_CropSeg}, AUCs range from $0.94$ to $0.98$. The confusion matrix in Fig. \ref{CM_CropSeg} reports a large level of confusion, with the COVID-19 class being labelled as pneumonia $18\%$ of the time. In terms of global performance, the system reaches an Acc of $91\%$ and a BAcc of $87\%$ (Table \ref{tab:NumericResults}). These results are consistent with the average AUC of $0.97$ shown in Fig. \ref{fig:ROCThree}. \subsection{Explainability and interpretability of the models} The regions of interest identified by the network were analyzed qualitatively using Grad-CAM activation maps \cite{Selvaraju_2019}.
The results shown by the activation maps permit the identification of the most significant areas in the image, highlighting the zones of interest that the network uses to discriminate. In this regard, Fig. \ref{fig:XR-examples} presents examples of the Grad-CAM of a control, a pneumonia, and a COVID-19 patient, for each of the three experiments considered in the paper. It is important to note that the activation maps provide overall information about the behaviour of the network, pointing to the most significant areas of interest, but the whole image is assumed to contribute to the classification process to a certain extent. The second row in Fig. \ref{fig:XR-examples} shows several prototypical results of applying the Grad-CAM technique to experiment 1. The examples show the areas of significant interest for a control, a pneumonia, and a COVID-19 patient. The results suggest that the detection of pneumonia or COVID-19 is often carried out based on information that lies outside the expected area of interest, i.e. the lung area. In the examples provided, the network focuses on the corners of the XR image or on areas around the diaphragm. In part, this is likely due to the metadata that is frequently stamped on the corners of XR images. The Grad-CAM plots corresponding to experiment 2 (third row of Fig. \ref{fig:XR-examples}) indicate that the model still points towards areas other than the lungs, but to a lesser extent. Finally, the Grad-CAM of experiment 3 (fourth row of Fig. \ref{fig:XR-examples}) presents the areas of interest once the segmentation procedure has been carried out. In this case, the network is forced to look at the lungs; therefore, this scenario is expected to be more realistic and more likely to generalize, since artifacts that might bias the results are largely discarded.
On the other hand, for visualization purposes, and in order to interpret the separability capabilities of the system, a t-SNE embedding is used to project the high-dimensional data of the layer adjacent to the output of the network onto a 2-dimensional space. Results are presented in Fig. \ref{fig:t-SNE_Plots} for each of the three experiments considered in the paper. \begin{figure*}[th!] \setlength{\tempheight}{0.18\textheight} \settowidth{\tempwidth}{\includegraphics[height=\tempheight]{images/Exp3_OrgImages_t-SNE_training_v2.png}} \centering \hspace{\baselineskip} \columnname{Exp. 1}\hfil \columnname{Exp. 2}\hfil \columnname{Exp. 3}\\ \rowname{Training data} \subfloat[] { \label{t-SNE Original Trai} \includegraphics[width=0.3\textwidth]{images/Exp3_OrgImages_t-SNE_training_v2.png} } \subfloat[] { \label{t-SNE Cropped Trai} \includegraphics[width=0.3\textwidth]{images/Exp3_CroppedImages_t-SNE_training_v2.png} } \subfloat[] { \label{t-SNE CropSeg Trai} \includegraphics[width=0.3\textwidth]{images/Exp3_CroppedSegmentedImages_t-SNE_training_v3.png} } \\ \rowname{Test data} \subfloat[] { \label{t-SNE Original Test} \includegraphics[width=0.3\textwidth]{images/Exp3_OrgImages_t-SNE_test_v2.png} } \subfloat[] { \label{t-SNE Cropped Test} \includegraphics[width=0.3\textwidth]{images/Exp3_CroppedImages_t-SNE_test_v2.png} } \subfloat[] { \label{t-SNE CropSeg Test} \includegraphics[width=0.3\textwidth]{images/Exp3_CroppedSegmentedImages_t-SNE_test_v3.png} } \caption{Mapping of the high-dimensional data of the layer adjacent to the output into a two-dimensional plot. \textbf{Top:} Output network embedding using t-SNE for the training data. \textbf{Bottom:} Output network embedding using t-SNE for the testing data. \textbf{Left:} Original images (experiment 1). \textbf{Center:} Cropped Images (experiment 2). \textbf{Right:} Segmented images (experiment 3). } \label{fig:t-SNE_Plots} \end{figure*} \begin{figure*}[th!]
\setlength{\tempheight}{0.18\textheight} \settowidth{\tempwidth}{\includegraphics[height=\tempheight]{images/Exp3_OrgImages_t-SNE_training_v2.png}} \centering \hspace{\baselineskip} \columnname{Exp. 1}\hfil \columnname{Exp. 2}\hfil \columnname{Exp. 3}\\ \rowname{Training data} \subfloat[] { \label{t-SNE Original Trai 2} \includegraphics[width=0.3\textwidth]{images/Exp3_OrgImages_t-SNE_train_v3_DB_Nolegend.png} } \subfloat[] { \label{t-SNE Cropped Trai 2} \includegraphics[width=0.3\textwidth]{images/Exp3_CroppedImages_t-SNE_train_v3_DB_Nolegend.png} } \subfloat[] { \label{t-SNE CropSeg Trai 2} \includegraphics[width=0.3\textwidth]{images/Exp3_CroppedSegmentedImages_t-SNE_train_v3_DB_Nolegend.png} } \\ \rowname{Test data} \subfloat[] { \label{t-SNE Original Test 2} \includegraphics[width=0.3\textwidth]{images/Exp3_OrgImages_t-SNE_test_v3_DB_Nolegend.png} } \subfloat[] { \label{t-SNE Cropped Test 2} \includegraphics[width=0.3\textwidth]{images/Exp3_CroppedImages_t-SNE_test_v3_DB_Nolegend.png} } \subfloat[] { \label{t-SNE CropSeg Test 2} \includegraphics[width=0.3\textwidth]{images/Exp3_CroppedSegmentedImages_t-SNE_test_v3_DB_Nolegend.png} } \\ { \includegraphics[scale=0.3]{images/t-SNE_v3_DB_legend.png} } \caption{Mapping of the high-dimensional data of the layer adjacent to the output into a two dimensional plot. \textbf{Top:} Output network embedding using t-SNE for the training data. \textbf{Bottom:} Output network embedding using t-SNE for the testing data. \textbf{Left:} Original images (experiment 1). \textbf{Center:} Cropped Images (experiment 2). \textbf{Right:} Segmented images (experiment 3). Labels correspond to data sets and classes.} \label{fig:t-SNE_Plots_v2} \end{figure*} Fig. \ref{fig:t-SNE_Plots} indicates that a good separability exists for all the classes in both training and testing data, and for all experiments. 
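The two-dimensional projections can be obtained with scikit-learn; the sketch below uses random features as a stand-in for the activations of the layer adjacent to the output (the feature dimension and t-SNE settings are illustrative, as the paper does not specify them):

```python
import numpy as np
from sklearn.manifold import TSNE

# Stand-in for the penultimate-layer features: in the paper these are
# extracted from the layer adjacent to the network output.
rng = np.random.default_rng(0)
features = rng.normal(size=(150, 64))

# Project the high-dimensional features onto a 2-D space for plotting.
embedding = TSNE(n_components=2, perplexity=30,
                 init="pca", random_state=0).fit_transform(features)
```

Each row of `embedding` can then be scattered and coloured by class (and, as in the second t-SNE figure, by source dataset) to inspect cluster separability.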
The boundaries of the normal cluster are very well defined in the three experiments, whereas pneumonia and COVID-19 are more spread, overlapping with adjacent classes. In general terms, the t-SNE plots demonstrate the ability of the network to learn a mapping from the input data to the desired labels. However, despite the shape differences found for the three experiments, no additional conclusions can be extracted. \subsection{Potential variability factors affecting the system} There are several variability factors which might be biasing the results, namely: the projection (PA vs. AP); the technology of the detector (\textit{Computed Radiography} (CR) vs. \textit{Digital Radiography} (DX)); the gender of the patients; the age; potential specificities of the dataset; or having trained with several images per patient. The use of several images per patient represents a certain risk of data leakage in the COVID-19 class due to its underlying imbalance. However, our initial hypothesis is that using several images per COVID-19 patient obtained at different instants in time (with days of difference) would increase the variability of the dataset, and thus this source of bias can be disregarded. Indeed, the evolution of the lesions often found in COVID-19 is considered fast, in such a manner that very different images are obtained in a time interval as short as one or two days of the evolution. Also, since every single exploration is framed differently, or sometimes even taken with different machines and/or projections, the potential bias is expected to be minimized. Concerning the type of projection, and to evaluate its influence, the system has been studied taking into account this potential variability factor, which is considered to be one of the most significant. In particular, Table \ref{tab:NumericResults2} presents the outcomes after accounting for the influence of the XR projection (PA/AP) on the performance of the system.
In general terms, the system demonstrates consistency with respect to the projection used, and differences are mainly attributable to smaller training and testing sets. However, significant differences are shown for projection PA in class COVID-19/experiment 3, where the F1 score decreases to $65.61$\%. The reason for this unexpected drop in performance is unknown, but it is likely attributable to an underrepresented class in the corpus (see Table \ref{tab:DemographicDistribution}). Besides, Table \ref{tab:variability_factors} shows --for the three experiments under evaluation and for the COVID-19 class-- the error distribution with respect to the sex of the patient, the technology of the detector, the dataset and the projection. For the four variability factors enumerated, results show that the distribution of the errors committed by the system follows --with minor deviations-- the existing proportion of the samples in the corpus. These results suggest that there is no clear bias with respect to these potential variability factors, at least for the COVID-19 class, which is considered the worst case due to its underrepresentation. Similar results would be expected for the control and pneumonia classes, but these results are not provided due to the lack of certain labels in some of the datasets used (see Table \ref{tab:DemographicDistribution}). Concerning age, the datasets used are reasonably well balanced (Table \ref{tab:DemographicDistribution}), but with a certain bias in the normal class: the COVID-19 and pneumonia classes have very similar average ages, but controls have a lower mean age. Our assumption has been that age differences do not significantly affect the results, but the mentioned difference might explain why the normal cluster in Fig. \ref{fig:t-SNE_Plots} is less spread than the other two. In any case, no specific age biases have been found in the errors committed by the system.
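The per-factor comparison behind Table \ref{tab:variability_factors} can be sketched as follows; the factor values and hit flags below are toy stand-ins for the real test-set metadata, chosen only to mirror the AP/PA proportions:

```python
from collections import Counter

def factor_distribution(values):
    """Percentage of each factor level in a list of per-sample values."""
    counts = Counter(values)
    total = len(values)
    return {level: 100.0 * c / total for level, c in counts.items()}

def hit_distribution(values, correct):
    """Percentage of each factor level among correctly predicted samples."""
    hits = [v for v, ok in zip(values, correct) if ok]
    return factor_distribution(hits)

# Toy data: projection of each COVID-19 test sample and whether it was a hit.
projection = ["AP"] * 79 + ["PA"] * 21
correct    = [True] * 75 + [False] * 4 + [True] * 20 + [False]

in_test = factor_distribution(projection)           # share of each level in the test set
in_hits = hit_distribution(projection, correct)     # share of each level among the hits
```

If the two dictionaries are close for every factor level, the errors follow the corpus proportions, i.e. no evident bias with respect to that factor.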
An additional study was also carried out to evaluate the influence of potential specificities of the different datasets used to compile the corpus (i.e. the variability of the results with respect to the datasets merged to build the corpus). This variability factor is evaluated in Fig. \ref{fig:t-SNE_Plots_v2} using different t-SNE plots (one for each experiment, in a similar way as in Fig. \ref{fig:t-SNE_Plots}) but differentiating the corresponding cluster for each dataset and class. Results for the different datasets and classes are clearly merged or are adjacent in the same cluster. However, several datasets report a lower variability for certain classes (i.e. variability in terms of scattering). This is especially clear in the CheXpert and NIH pneumonia sets, which are successfully merged with the corresponding class, but appear clearly clustered, suggesting that these datasets have certain unknown specific characteristics different from those of the complementary datasets. The model has been able to manage this aspect, but it is a factor to be analyzed in further studies. \begin{table*}[!ht] \centering \caption{Performance measures considering the XR projection (PA/AP)}\label{tab:NumericResults2} \begin{tabular}{@{}lclccccc@{}} \toprule \multirow{2}{*}{\textbf{Experiment}} & \multirow{2}{*}{\textbf{Class}} & \multicolumn{3}{c}{\textbf{PA}} & \multicolumn{3}{c}{\textbf{AP}} \\ \cmidrule(l){3-8} & & \textbf{PPV} & \textbf{Recall} & \textbf{F1} & \textbf{PPV} & \textbf{Recall} & \textbf{F1} \\ \midrule \multirow{3}{*}{\textbf{Exp.
1}} & \textit{Pneumonia} & 91.25 $\pm$ 1.22 & 92.78 $\pm$ 1.58 & 92.00 $\pm$ 0.93 & 94.70 $\pm$ 0.79 & 96.28 $\pm$ 1.10 & 95.48 $\pm$ 0.50 \\ & \textit{Control} & 98.54 $\pm$ 0.33 & 97.83 $\pm$ 0.23 & 98.18 $\pm$ 0.14 & 97.87 $\pm$ 0.28 & 95.46 $\pm$ 0.87 & 96.65 $\pm$ 0.43 \\ & \textit{COVID-19} & 84.06 $\pm$ 3.94 & 88.91 $\pm$ 2.31 & 86.33 $\pm$ 1.80 & 95.13 $\pm$ 2.46 & 97.18 $\pm$ 0.94 & 96.12 $\pm$ 1.06 \\ \midrule \multirow{3}{*}{\textbf{Exp. 2}} & \textit{Pneumonia} & 81.77 $\pm$ 1.79 & 79.17 $\pm$ 2.38 & 80.41 $\pm$ 1.16 & 87.39 $\pm$ 1.66 & 90.78 $\pm$ 1.21 & 89.03 $\pm$ 0.71 \\ & \textit{Control} & 94.81 $\pm$ 0.46 & 95.56 $\pm$ 0.61 & 95.33 $\pm$ 0.16 & 92.79 $\pm$ 1.53 & 88.15 $\pm$ 1.61 & 90.38 $\pm$ 0.32 \\ & \textit{COVID-19} & 73.72 $\pm$ 2.37 & 68.82 $\pm$ 5.20 & 71.01 $\pm$ 2.27 & 84.96 $\pm$ 2.27 & 87.63 $\pm$ 2.04 & 86.23 $\pm$ 0.86 \\ \midrule \multirow{3}{*}{\textbf{Exp. 3}} & \textit{Pneumonia} & 84.07 $\pm$ 1.72 & 87.19 $\pm$ 1.66 & 85.57 $\pm$ 0.53 & 87.39 $\pm$ 0.97 & 81.66 $\pm$ 1.12 & 89.47 $\pm$ 0.41 \\ & \textit{Control} & 97.88 $\pm$ 0.36 & 97.08$\pm$ 0.21 & 97.48$\pm$ 0.19 & 96.03 $\pm$ 0.81 & 90.65 $\pm$ 0.87 & 93.26 $\pm$ 0.47 \\ & \textit{COVID-19} & 66.68 $\pm$ 4.82 & 65.23 $\pm$ 4.73 & 65.61 $\pm$ 1.05 & 81.82 $\pm$ 3.07 & 83.62 $\pm$ 2.14 & 82.65 $\pm$ 1.28 \\ \bottomrule \end{tabular} \end{table*} \begin{table}[ht!] \centering \caption{Percentage of testing samples and error distribution with respect to several potential variability factors for the COVID-19 class. (\% in hits represents the percentage of samples of every factor under analysis in the correctly predicted set.) }\label{tab:variability_factors} \begin{tabular}{lccccc} \toprule \multirow{2}{*}{\textbf{Factor}} & \multirow{2}{*}{\textbf{Types}} & \multirow{2}{*}{\textbf{\% in test}} & \multicolumn{3}{c}{\textbf{\% in hits}} \\ \cmidrule(l){4-6} & & & \textbf{Exp. 1} & \textbf{Exp. 2} & \textbf{Exp. 
3} \\ \hline \multirow{2}{*}{\textbf{Projection}} & AP & 79 & 80.0 & 82.6 & 82.7 \\ & PA & 21 & 20.0 & 17.4 & 17.3 \\ \hline \multirow{2}{*}{\textbf{Sensor}} & DX & 22 & 22.0 & 23.3 & 23.6 \\ & CR & 78 & 78.0 & 76.7 & 76.4 \\ \hline \multirow{2}{*}{\textbf{Sex}} & M & 64 & 64.0 & 65.4 & 65.2 \\ & F & 36 & 36.0 & 34.6 & 34.8 \\ \hline \multirow{3}{*}{\textbf{DB}} & BMICV & 30 & 28.7 & 26.6 & 26.6 \\ & HM & 69 & 71.0 & 72.7 & 73.1 \\ & ACT & 1 & 0.3 & 0.7 & 0.3 \\ \bottomrule \end{tabular} \end{table} \section{Discussion and Conclusions} \label{sec:disscon} This study evaluates a deep learning model for the detection of COVID-19 from XR images. The paper provides additional evidence to the state of the art, supporting the potential of deep learning techniques to accurately categorize XR images corresponding to control, pneumonia, and COVID-19 patients (Fig. \ref{fig:XR-examples}). These three classes were chosen under the assumption that they can support clinicians in making better decisions, establishing potential differential strategies to handle patients depending on their cause of infection \cite{wang2020covid}. However, the main goal of the paper was not to demonstrate the suitability of deep learning for categorizing XR images, but to make a thoughtful evaluation of the results and of different preprocessing approaches, searching for better explainability and/or interpretability of the results, while providing evidence of potential effects that might bias the results. The model relies on the COVID-Net network, which has served as a basis for the development of a more refined architecture. This network has been chosen due to its tailored characteristics and given the previous good results reported by other researchers.
The COVID-Net was trained with a corpus compiled using data gathered from different sources: the control and pneumonia classes --with $49,983$ and $24,114$ samples respectively-- were collected from the ACT, Chinaset, Montgomery, CRX8, CheXpert and MIMIC datasets; and the COVID-19 class was collected from the information available at the BIMCV, ACT, and HM Hospitales datasets. Although the COVID-19 class only contains $8,573$ chest XR images, the developers of the data sources are continuously adding new cases to the respective repositories, so the number of samples is expected to grow in the future. Despite the imbalance of the COVID-19 class, up to date, and to the authors' knowledge, this is the largest compilation of COVID-19 images based on open repositories. Despite that, the number of COVID-19 XR images is still considered small in comparison to the other two classes, and therefore, it was necessary to compensate for the class imbalance by modifying the network architecture, including regularization components in the last two dense layers. To this end, a weighted categorical cross-entropy loss function was used to compensate for this effect. Likewise, data augmentation techniques were used for the pneumonia and COVID-19 classes to automatically generate more samples for these two underrepresented classes. We maintain that automatic diagnosis is much more than a classification exercise, meaning that many factors have to be kept in mind to bring these techniques to clinical practice. In this respect, there is a classic assumption in the literature that the associated heat maps --calculated with techniques such as Grad-CAM-- provide a clinical interpretation of the results, which is unclear in practice. In light of the results shown in the heat maps depicted in Fig. \ref{fig:XR-examples}, we show that experiment 1 must be interpreted carefully.
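The class-weighted loss mentioned above can be sketched in NumPy as follows; this is a minimal illustration of the idea, not the exact implementation or weights used for training:

```python
import numpy as np

def weighted_categorical_crossentropy(y_true, y_pred, class_weights, eps=1e-7):
    """Cross-entropy where each class contributes proportionally to its weight.

    y_true : (n, c) one-hot labels; y_pred : (n, c) predicted probabilities;
    class_weights : (c,) larger weights compensate underrepresented classes.
    """
    y_pred = np.clip(y_pred, eps, 1.0 - eps)   # numerical safety for log
    per_sample = -np.sum(class_weights * y_true * np.log(y_pred), axis=1)
    return float(np.mean(per_sample))

# Toy example with three classes (control, pneumonia, COVID-19); the COVID-19
# class gets a larger (illustrative) weight to counter its underrepresentation.
y_true = np.array([[1, 0, 0], [0, 0, 1]])
y_pred = np.array([[0.8, 0.1, 0.1], [0.2, 0.2, 0.6]])
w = np.array([1.0, 1.0, 3.0])
loss = weighted_categorical_crossentropy(y_true, y_pred, w)
```

With this weighting, an error on a COVID-19 sample costs three times as much as the same error on a control, pushing the optimizer to fit the minority class.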
Despite the high-performance metrics obtained in experiment 1, the significant areas identified by the network point towards regions with no clear interest for the diagnosis, such as the corners of the images, the sternum, the clavicles, etc. From a clinical point of view, this is clearly biasing the results. It means that other approaches are necessary to force the network to focus on the lungs area. In this respect, we have developed and compared the results with two preprocessing approaches based on cropping the images and segmenting the lungs area (experiments 2 and 3). Again, given the heat maps corresponding to experiment 2, we also see explainability problems similar to those enumerated for experiment 1. Reducing the area of interest to that proposed in experiment 2 significantly decreases the performance of the system due to the removal of the metadata that usually appear in the top left and/or right corner, and to the removal of areas which are of interest to categorize the images but have no interest from the diagnosis point of view. However, when comparing experiments 2 and 3, performance results improve in the third approach, which focuses on the same region of interest but with a mask that forces the network to see only the lungs. Thus, the results obtained in experiments 2 and 3 suggest that eliminating the needless features extracted from the background or non-related regions improves the results. Besides, the third approach (experiment 3) provides more explainable and interpretable results, with the network focusing its attention only on the area of interest for the disease. The gain in explainability of the last method comes at the cost of a lower accuracy with respect to experiment 1, but the improvement in explainability and interpretability is considered critical to translate these techniques to the clinical setting.
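The lung-focused preprocessing of experiment 3 (segmentation mask plus crop) can be sketched as follows; the binary lung mask is assumed to come from a separate segmentation model, and the arrays below are toy stand-ins:

```python
import numpy as np

def mask_and_crop(image, lung_mask):
    """Zero out everything outside the lungs and crop to the mask bounding box.

    image : 2-D grey-level XR; lung_mask : binary array of the same shape.
    """
    masked = image * (lung_mask > 0)              # remove background/metadata
    rows = np.any(lung_mask > 0, axis=1)
    cols = np.any(lung_mask > 0, axis=0)
    r0, r1 = np.where(rows)[0][[0, -1]]           # bounding box of the lungs
    c0, c1 = np.where(cols)[0][[0, -1]]
    return masked[r0:r1 + 1, c0:c1 + 1]

# Toy 6x6 "image" with a 3x2 "lung" region.
img = np.arange(36, dtype=float).reshape(6, 6)
mask = np.zeros((6, 6), dtype=int)
mask[1:4, 2:4] = 1
roi = mask_and_crop(img, mask)    # shape (3, 2), background removed
```

The network then only ever sees pixels inside the lung fields, which is what forces the attention maps onto the diagnostically relevant area.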
Despite the decrease in performance, the proposed method in experiment 3 has provided promising results, with an Acc of $91.53\%$, a BAcc of $87.6\%$, a GMR of $87.37\%$ and an AUC of $0.97$. The performance results obtained are in line with those presented in \cite{wang2020covid}, which reports sensitivities of $95\%$, $94\%$ and $91\%$ for the control, pneumonia and COVID-19 classes respectively --also modelling with the COVID-Net in a scenario similar to the one in experiment 1--, but training with a much smaller corpus of $358$ XR images from $266$ COVID-19 patients, $8,066$ controls, and $5,538$ XR images belonging to patients with different types of pneumonia. The paper also critically evaluates the effect of several variability factors that might compromise the performance of the network. For instance, the effect of the projection (PA/AP) was evaluated by retraining the network and checking the outcomes. This effect is important, given that PA projections are often practised in erect positions to better observe the pulmonary airways, and as such, are expected to be examined in healthy or slightly affected patients. In contrast, AP projections are often preferred for patients confined in bed, and as such are expected to be practised in the most severe cases. Since AP projections are common in COVID-19 patients, blood is expected to flow more to the lungs' apices than when standing; thus, not taking this variability factor into account may result in a misdiagnosis of pulmonary congestion \cite{burlacu2020curbing}. Indeed, the obtained results have highlighted the importance of taking this factor into account when designing the training corpus, as the PPV decreased for PA projections in our experiments with the COVID-19 images. This is probably due to an underrepresentation of this class (Table \ref{tab:NumericResults2}), which would require a further specific analysis when designing future corpora.
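For reference, the aggregate figures quoted above can be computed from a confusion matrix as sketched below; Acc is plain accuracy, BAcc the mean per-class recall, and GMR the geometric mean of the per-class recalls (the matrix itself is illustrative):

```python
import numpy as np

def summary_metrics(cm):
    """cm[i, j]: number of samples of true class i predicted as class j."""
    cm = np.asarray(cm, dtype=float)
    recalls = np.diag(cm) / cm.sum(axis=1)       # per-class sensitivity
    acc = np.trace(cm) / cm.sum()                # overall accuracy
    bacc = recalls.mean()                        # balanced accuracy
    gmr = float(np.prod(recalls) ** (1.0 / len(recalls)))  # geometric mean recall
    return acc, bacc, gmr

# Toy 3-class confusion matrix (control, pneumonia, COVID-19).
cm = [[90,  5,  5],
      [10, 80, 10],
      [ 5, 15, 80]]
acc, bacc, gmr = summary_metrics(cm)
```

Unlike plain accuracy, BAcc and GMR are not dominated by the majority class, which is why they are the more informative figures for the imbalanced COVID-19 class.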
On the other hand, results have shown that the error distribution for the COVID-19 class follows a proportion similar to the percentage of images available in the corpus when categorizing by gender, technology of the detector, projection and/or dataset. These results suggest no significant bias with respect to these potential variability factors, at least for the COVID-19 class, which is the least represented one. An analysis of how the clusters of classes were distributed is also presented in Fig. \ref{fig:t-SNE_Plots}, demonstrating how well each class is differentiated. These plots help to identify the existing overlap among classes (especially that present between pneumonia and COVID-19, and to a lesser extent between controls and pneumonia). Similarly, since the corpus used to train the network was built around several datasets, a new set of t-SNE plots was produced, but differentiating according to each of the subsets that were used for training (Fig. \ref{fig:t-SNE_Plots_v2}). This test served to evaluate the influence of potential specific characteristics of each dataset in the training procedure and, hence, possible sources of confusion that arise due to particularities of the corpora that are tested. The plots suggest that in general terms the different datasets are correctly merged together, but with some exceptions. This fact suggests that there might be certain unknown characteristics in the datasets used, which cluster the images belonging to the same dataset together. The COVID-Net has also demonstrated to be a good starting point for the characterization of the disease. Indeed, the outcomes of the paper suggest the possibility of automatically identifying the lung lesions associated with a COVID-19 infection (see Fig. \ref{fig:XR-examples}) by analyzing the Grad-CAM mappings of experiment 3, providing an explainable justification of the way the network works.
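Once the activations of the last convolutional layer and the gradient of the class score with respect to them are available, the Grad-CAM mapping used above reduces to a weighted sum followed by a ReLU; a framework-agnostic sketch with toy arrays:

```python
import numpy as np

def grad_cam(activations, gradients):
    """activations, gradients : (h, w, k) feature maps of the last conv layer
    and the gradient of the class score with respect to them."""
    # Channel weights: global-average-pool the gradients over the spatial axes.
    weights = gradients.mean(axis=(0, 1))                       # (k,)
    cam = np.tensordot(activations, weights, axes=([2], [0]))   # (h, w)
    cam = np.maximum(cam, 0.0)          # ReLU: keep only positive evidence
    if cam.max() > 0:
        cam = cam / cam.max()           # normalise to [0, 1]
    return cam

# Toy example: 4x4 spatial grid, 8 channels.
rng = np.random.default_rng(0)
acts = rng.random((4, 4, 8))
grads = rng.normal(size=(4, 4, 8))
heatmap = grad_cam(acts, grads)
```

In practice the low-resolution `heatmap` is upsampled to the XR size and overlaid on the image, which is how the attention maps of Fig. \ref{fig:XR-examples} are produced.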
However, the interpretation of the heat maps obtained for the control class must be carried out carefully. Whereas the areas of significant interest for the pneumonia and COVID-19 classes are supposed to point to potential lesions (i.e. with higher density and/or with different textures in contrast to controls), the areas of significant interest for the classification in the control group are supposed to correspond to a sort of complement, potentially highlighting less dense areas, and thus do not indicate the presence of any kind of lesion in the lungs. Likewise, in comparison to the performance achieved by a human evaluator differentiating pneumonia from COVID-19, the system developed in the third experiment attains comparable results. Indeed, in \cite{bai2020performance} the ability of seven radiologists to correctly differentiate pneumonia and COVID-19 from XR images was put to the test. The results indicated that the radiologists achieved sensitivities ranging from $97\%$ to $70\%$ (mean $80\%$), and specificities ranging from $7\%$ to $100\%$ (mean $70\%$). These results suggest a potential use in a supervised clinical environment. COVID-19 is still a new disease and much remains to be studied. The use of deep learning techniques would potentially help to understand the mechanisms by which SARS-CoV2 attacks the lungs and alveoli, and how it evolves during the different stages of the disease. Although there is some empirical evidence on the evolution of COVID-19 --based on observations made by radiologists \cite{pan2020imaging}--, the employment of automatic techniques based on machine learning would help to analyze data massively, to guide research along certain paths or to extract conclusions faster. But more interpretable and explainable methods are required to go one step forward.
In line with the previous comment, and based on the empirical evidence regarding the evolution of the disease, it has been stated that during the early stages of the disease, ground-glass shadows, pulmonary consolidation and nodules, and local consolidation in the centre with peripheral ground-glass density are often observed, but once the disease evolves, the consolidations reduce their density, resembling a ground-glass opacity, which can evolve into a ``white lung'' if the disease worsens or into a minimization of the opacities if the course of the disease improves \cite{pan2020imaging}. In this manner, if any of these characteristic behaviours is automatically identified, it would be possible to stratify the stage of the disorder according to its severity. Moreover, computing the extent of the ground-glass opacities or densities would also be useful to assess the severity of the infection or to evaluate the evolution of the disease. The assessment of the infection extent has been previously tested in other CT studies of COVID-19 \cite{yang2020chest}, but using manual procedures based on observation of the images. Solutions like the one discussed in this paper are intended to support a much faster diagnosis and to alleviate the workload of radiologists and specialists, but not to substitute their assessment. A rigorous validation would open the door to integrating these algorithms in desktop applications or cloud servers for their use in the clinical environment. Thus, their use, maintenance and update would be simple and cost-effective, and would reduce healthcare costs, improve the accuracy of the diagnosis and shorten the response time \cite{topol2019deep}. In any case, the deployment of these algorithms is not exempt from controversies: hosting the AI models in a cloud service would entail the upload of the images, which might be subject to national and/or international regulations and constraints to ensure privacy \cite{he2019practical}. \bibliographystyle{IEEEtran} \balance
\section{Introduction} \setlength{\parindent}{0cm} Many clustering methods generate a family of clusterings that depend on some user-defined parameters. The most prominent example is the $K$-means algorithm, where the investigator has to specify the number of clusters. Similarly, in hierarchical clustering, a whole family of clusterings is obtained, starting from the finest partition into singletons and ending in the coarsest clustering, i.e. a single cluster. Again, the investigator chooses the number of clusters based on the dendrogram. \smallskip All these methods come with a variety of suggestions on how to choose the optimal number of clusters. Some of these are rather heuristic in nature, while others have deep theoretical foundations. For the $K$-means algorithm these include the \textit{elbow method} or the \textit{average silhouette method} (\cite{bib:rousseeuw1987silhouettes}). Another solution is to use a \textit{score statistic} (a function which is intended to measure the quality of a clustering) and among different clusterings proposed by a given method choose the one that maximises the score statistic. Constructing score statistics is not a trivial task; one of the most popular choices is \textit{the gap statistic} (\cite{bib:tibshirani2001estimating}). \smallskip In this article we propose a new score statistic. It is derived as a limit of the first-order approximation to the posterior probability (up to the norming constant) in a Nonparametric Bayesian Mixture Model with the inverse Wishart distribution as a base measure for the within-group covariance matrices and the Gaussian distribution as a base measure for the cluster means and the component measure. In order to derive the limit we assume that the data is an independent sample from some `input' probability distribution on the observation space; this gives a method of assessing the compatibility of the \textit{partitions of the observation space} with the input distribution.
The score function is obtained by taking the empirical measure as the input distribution and tweaking it slightly so that it is well defined on all possible data clusterings. \subsection{Contribution and Results}\label{sec:ctr} Our main contribution is the formulation of a novel score function for clusterings, which is motivated theoretically and performs well on the analysed datasets. Suppose that we have a sequence of observations $x_1,\ldots,x_n\in\R^d$ and we believe that it consists of several groups and within every group the data is distributed according to some Gaussian distribution (with unknown mean and covariance matrix). The goal is to construct a simple function that measures how well a given clustering of the dataset corresponds to the assumption of Gaussian distribution within clusters. Our proposition is the following: for $I\subset[n]$ we define $\ov{\mathbf{x}_I}=\re{|I|}\sum_{i\in I} x_i$ and $\hat{\bV}_\mathbf{x}(I)=\re{|I|}\sum_{i\in I} (x_i-\ov{\mathbf{x}_I})(x_i-\ov{\mathbf{x}_I})^t$, and for notational simplicity denote $\hat{\bV}_\mathbf{x}:=\hat{\bV}_\mathbf{x}([n])$. For $\mathbf{x}=(x_1,\ldots,x_n)$ and $\cI$ -- a partition of $[n]=\{1,2,\ldots,n\}$ -- let \begin{equation}\label{eq:timon} \cD(\mathbf{x},\cI):= -\re{2}\sum_{I\in\cI} \frac{|I|}{n}\ln\det\Big(\frac{\hat{\bV}_\mathbf{x}}{|I|}+\hat{\bV}_\mathbf{x}(I)\Big) +\sum_{I\in\cI} \frac{|I|}{n}\ln\frac{|I|}{n}. \end{equation} It should be noted that if $\mathbf{x}$ is a realisation of a random independent sample $X_1,\ldots,X_n$ from some distribution $P$ on $\cX$, then the components of the formula \eqref{eq:timon} can be treated as empirical estimates of the relevant probabilities or the conditional covariance matrices. This is actually how \eqref{eq:timon} is obtained; we investigate the details in \Cref{sec:deriv}. This remark may also be convenient when dealing with large datasets where the exact computation of \eqref{eq:timon} could be time consuming.
In such a case we can approximate the variance components of \eqref{eq:timon} by using random samples from the clusters. \section{Score functions and the main formula} \subsection{Basic definitions}\label{sec:score} We start our presentation with a formal definition of a \textit{score function}, intended to measure the quality of the data clustering. \begin{ntn*} For $n\in\N$ let $[n]=\{1,\ldots,n\}$ and let $\Pi_n$ be the set of all partitions of $[n]$. Let $\cX=\R^d$ be the observation space. Let $\cO=\bigcup_{n=1}^\infty \cX^n\times \Pi_n$ be the set of all possible finite sequences of observations and their partitions and let $\ov{\R}=\R\cup\{-\infty,\infty\}$. \end{ntn*} \begin{dfn*} A \textit{clustering score function} is any function $\cS\colon \cO\to\ov{\R}$. \end{dfn*} \begin{dfn*} Let $\cS$ be a score function and let $\cF$ be a family of functions from $\cX$ to $\cX$. We say that $\cS$ is \emph{robust to $\cF$} if for every $\mathbf{x}=(x_1,\ldots,x_n)\in\cX^n$ and $\cI,\cJ\in\Pi_n$ and every $f\in\cF$ we have $\cS(\mathbf{x},\cI)\leq \cS(\mathbf{x},\cJ)$ if and only if $\cS(f(\mathbf{x}),\cI)\leq \cS(f(\mathbf{x}),\cJ)$, where $f(\mathbf{x})=\big(f(x_1),\ldots,f(x_n)\big)$. \end{dfn*} Hence robustness to $\cF$ means that if we apply any function $f\in\cF$ to all observations, the optimal clustering indicated by the score function will not change. If no prior knowledge about the clustering structure is available, a natural demand from a score function is to be robust to linear isomorphisms of $\cX$. In particular, it should be robust to scaling of the axes, since it would be strange if the result of applying the score function depended on the units used to measure the observations. For similar reasons, we expect a good score function to be robust to translations.
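As a concrete illustration, the score \eqref{eq:timon} admits a direct implementation, and its translation robustness can be checked numerically; a minimal NumPy sketch on illustrative data:

```python
import numpy as np

def d_score(x, partition):
    """Score D(x, I) of eq. (1): x is an (n, d) array, partition a list of
    index lists covering {0, ..., n-1}."""
    n, _ = x.shape
    v_global = np.cov(x, rowvar=False, bias=True)     # \hat{V}_x (biased, 1/n)
    score = 0.0
    for idx in partition:
        m = len(idx)
        xi = x[idx]
        # Within-cluster covariance \hat{V}_x(I); zero matrix for singletons.
        v_cluster = np.cov(xi, rowvar=False, bias=True) if m > 1 \
            else np.zeros_like(v_global)
        _, logdet = np.linalg.slogdet(v_global / m + v_cluster)
        score += -0.5 * (m / n) * logdet + (m / n) * np.log(m / n)
    return score

# Two well-separated Gaussian blobs: the 2-cluster partition should score
# higher than the trivial single-cluster one.
rng = np.random.default_rng(0)
x = np.vstack([rng.normal(0.0, 1.0, size=(50, 2)),
               rng.normal(10.0, 1.0, size=(50, 2))])
two = d_score(x, [list(range(50)), list(range(50, 100))])
one = d_score(x, [list(range(100))])
```

Shifting every observation by the same vector leaves both values unchanged, in line with the translation robustness discussed above.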
\smallskip Note that, on the other hand, robustness to \emph{all} linear transformations would be undesirable -- in particular, moving all points to the origin is a linear transformation and we do not expect any clusters to be seen after applying it. \begin{ntn*} Let $\cA$ and $\cB$ be two partitions of the same set. We say that $\cA$ is \emph{finer} than $\cB$ if for every $A\in\cA$ there exists $B\in\cB$ such that $A\subset B$. Equivalently, we say that $\cB$ is \emph{coarser} than $\cA$ and we write $\cA\preceq\cB$. \end{ntn*} \begin{dfn*} Let $\cS$ be a clustering score function. We say that it is \emph{non-decreasing} if for every $\mathbf{x}\in\cX^n$ and $\cI,\cJ\in\Pi_n$ such that $\cI\preceq \cJ$ we have $\cS(\mathbf{x},\cI)\leq \cS(\mathbf{x},\cJ)$. If $-\cS$ is non-decreasing then $\cS$ is \emph{non-increasing}. \end{dfn*} Clearly, no non-decreasing score function would be good for clustering purposes as it would assign the highest score to the clustering into one full cluster, regardless of the data. Similarly, a non-increasing function gives the highest score to the partition into singletons. It seems desirable for these two tendencies to interplay, and it is theoretically appealing to identify increasing and decreasing parts in a given score function. \subsection{Properties of the $\cD$ score function}\label{sec:properties} \begin{ntn*} To facilitate the notation in the remaining part of the text we use $|\Sigma|$ to denote the determinant of a square matrix $\Sigma$. \end{ntn*} \begin{dfn*} With the notation presented in \Cref{sec:ctr} we define \begin{equation}\label{eq:deltasigma} \cD_\Sigma(\mathbf{x},\cI):= -\re{2}\sum_{I\in\cI} \frac{|I|}{n}\ln\Big|\frac{\Sigma}{|I|}+\hat{\bV}_\mathbf{x}(I)\Big| +\sum_{I\in\cI} \frac{|I|}{n}\ln\frac{|I|}{n}. \end{equation} Then $\cD(\mathbf{x},\cI)=\cD_{\hat{\bV}_\mathbf{x}}(\mathbf{x},\cI)$ (which is equivalent to \eqref{eq:timon}). Moreover, we use $\cD_0$ to denote $\cD_\Sigma$ with $\Sigma$ being a matrix of zeroes.
\end{dfn*} \begin{proper} Let $x_1,\ldots,x_n\in\cX$ be such that $x_1,\ldots,x_n$ span $\cX$. Let $\mathbf{x}=(x_1,\ldots,x_n)$. Then $|\cD(\mathbf{x}, \cI)|<\infty$ for any $\cI\in\Pi_n$. \end{proper} \begin{proof} For any $v\in\R^d$ \begin{equation} v^t\left(\sum_{i\in I}(x_i-\ov{\mathbf{x}_I})(x_i-\ov{\mathbf{x}_I})^t\right)v= \sum_{i\in I}\big(v^t(x_i-\ov{\mathbf{x}_I})\big)^2\geq 0 \end{equation} and hence $\sum_{i\in I}(x_i-\ov{\mathbf{x}_I})(x_i-\ov{\mathbf{x}_I})^t$ is non-negative definite. Moreover, it follows from the assumptions that $\hat{\bV}_\mathbf{x}$ is positive definite. A sum of a non-negative definite matrix and a positive definite matrix is positive definite, so its determinant is positive. Therefore all the summands in \eqref{eq:timon} are finite and the proof follows. \end{proof} \begin{proper} The score function $\cD$ is robust to translations and linear isomorphisms. \end{proper} \begin{proof} It is easy to check that for any $\mathbf{x}\in\cX^n$, $\cI\in\Pi_n$ and any translation $T$ we have $\cD(\mathbf{x},\cI)=\cD\big(T(\mathbf{x}),\cI\big)$ and hence robustness to translations. \smallskip Let $L\colon\cX\to\cX$ be a linear automorphism, defined by $L(x)=Ax$, where $A$ is an invertible $d\times d$ matrix.
Then \begin{equation} \begin{split} \cD\big(L(\mathbf{x}),\cI\big)&= -\re{2}\sum_{I\in\cI} \frac{|I|}{n}\ln\Big|\re{|I|}A\hat{\bV}_\mathbf{x} A^t+\re{|I|}\sum_{i\in I}A(x_i-\ov{\mathbf{x}_I})(x_i-\ov{\mathbf{x}_I})^t A^t\Big| +\sum_{I\in\cI} \frac{|I|}{n}\ln\frac{|I|}{n}=\\ &= -\re{2}\sum_{I\in\cI} \frac{|I|}{n}\ln\Big|A\big(\re{|I|}\hat{\bV}_\mathbf{x} +\re{|I|}\sum_{i\in I}(x_i-\ov{\mathbf{x}_I})(x_i-\ov{\mathbf{x}_I})^t\big) A^t\Big| +\sum_{I\in\cI} \frac{|I|}{n}\ln\frac{|I|}{n}=\\ &= -\re{2}\sum_{I\in\cI} \frac{|I|}{n}\ln\Big(|A|\Big|\re{|I|}\hat{\bV}_\mathbf{x} +\re{|I|}\sum_{i\in I}(x_i-\ov{\mathbf{x}_I})(x_i-\ov{\mathbf{x}_I})^t\Big| |A^t|\Big) +\sum_{I\in\cI} \frac{|I|}{n}\ln\frac{|I|}{n}=\\ &= \cD(\mathbf{x},\cI)-\ln|A|, \end{split} \end{equation} which clearly implies robustness to linear isomorphisms. \end{proof} \begin{proper} \begin{enumerate}[(a)] \item $\sum_{I\in\cI} \frac{|I|}{n}\ln\frac{|I|}{n}$ is increasing \item $-\sum_{I\in\cI} \frac{|I|}{n}\ln\Big|\hat{\bV}_\mathbf{x}(I)\Big|$ is decreasing \item $-\sum_{I\in\cI} \frac{|I|}{n}\ln\Big|\frac{\Sigma}{|I|}\Big|$ is increasing \end{enumerate} \end{proper} \begin{proof} The proofs of parts (a) and (b) follow from \Cref{res:disagglo} by taking the empirical measure instead of $P$. Part (c) follows from (a) because \begin{equation} -\sum_{I\in\cI} \frac{|I|}{n}\ln\Big|\frac{\Sigma}{|I|}\Big| = d\sum_{I\in\cI} \frac{|I|}{n}\ln\frac{|I|}{n} + d\ln n - \ln|\Sigma|. \end{equation} \end{proof} \section{The derivation}\label{sec:deriv} In this section we give the theoretical foundations for considering the function $\cD$ as a clustering score function. We present a general formulation of a Bayesian Mixture Model and then we concentrate on the case where the data within clusters are distributed as Gaussians. We analyse the asymptotics of the formula for the (unnormalised) posterior in this model. In this way we concentrate on scoring the partitions of the observation space rather than the data themselves.
However, it is easy to switch to the score statistic by considering an empirical counterpart of $P$ instead of $P$; this yields $\cD_0$ (cf. \eqref{eq:deltasigma}). The general form of \eqref{eq:deltasigma} is constructed to prevent the function $\cD_0$ from assigning an infinite score to clusterings with very small clusters (of size less than the dimension of the observation space); on the other hand, when the clusters are large enough, $\cD$ approximates $\cD_0$. \subsection{Bayesian Mixture Models} Let $\Theta\subset\R^p$ be the parameter space and $\{ G_\theta\colon \theta\in\Theta \}$ be a family of probability measures on the observation space $\R^d$. Consider a prior distribution $\pi$ on $\Theta$. Let $\nu$ be a probability distribution on the $m$-dimensional simplex $\Delta^m=\{\bm{p}=(p_i)_{i=1}^m\colon \textrm{$\sum_{i=1}^m p_i=1$ and $p_i\geq 0$ for $i\leq m$}\}$ (where $m\in\N\cup\{\infty\}$). Let \begin{equation}\label{eq:bmm1} \begin{array}{rcl} \bm{p}=(p_i)_{i=1}^m&\sim& \nu \\ \bm{\theta}=(\theta_i)_{i=1}^m&\iid& \pi \\ \mathbf{x}=(x_1,\ldots,x_n) \cond \bm{p},\bm{\theta}&\iid& \sum_{i=1}^m p_i G_{\theta_i}. \end{array} \end{equation} This is a \textit{Bayesian Mixture Model}. If $G_\theta$ is a Gaussian distribution for all $\theta\in\Theta$, we say that \eqref{eq:bmm1} defines a \textit{Bayesian Mixture of Gaussians}. In this case a convenient choice of the parameter space is $\Theta=\R^d\times \cS^+_d$, where $\cS^+_d$ is the space of positive definite $d\times d$ matrices. Then for $\theta=(\mu,\Lambda)$ the distribution $G_\theta$ is the multivariate normal distribution $\cN(\mu,\Lambda)$.
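A generative sketch of a finite version of model \eqref{eq:bmm1} is given below; the symmetric Dirichlet taken for $\nu$, the Gaussian prior on the means, and the unit within-component covariances are all illustrative simplifications, not part of the model above:

```python
import numpy as np

def sample_gaussian_mixture(n, m, d, alpha=1.0, tau=5.0, rng=None):
    """Draw n points in R^d from an m-component Bayesian mixture of Gaussians.

    p ~ Dirichlet(alpha, ..., alpha) plays the role of nu; the component means
    mu_i ~ N(0, tau^2 I) play the role of the prior pi; unit covariances are
    used for simplicity.  Returns the data and the latent component labels.
    """
    rng = np.random.default_rng(rng)
    p = rng.dirichlet(np.full(m, alpha))       # mixture weights p ~ nu
    mu = rng.normal(scale=tau, size=(m, d))    # component parameters ~ pi
    labels = rng.choice(m, size=n, p=p)        # latent assignments
    x = mu[labels] + rng.normal(size=(n, d))   # observations x_i | label
    return x, labels

x, labels = sample_gaussian_mixture(n=200, m=4, d=2, rng=0)
```

In the notation of the next reformulation, the clusters are exactly the level sets of `labels`, i.e. the groups of indices that drew the same atom.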
A conjugate prior distribution $\pi$ on $\Theta$ is the \textit{Normal-inverse-Wishart} distribution, which is given by \begin{equation}\label{eq:NIWmodel} \begin{array}{rcl} \Lambda&\sim&\cW^{-1}(\eta_0+d+1,\eta_0\Sigma_0)\\ \mu\cond\Lambda&\sim&\Normal(\mu_0,\Lambda/\kappa_0) \end{array} \end{equation} Here $\cW^{-1}$ denotes the \textit{inverse Wishart} distribution and the hyperparameters are $\kappa_0,\eta_0>0$, $\mu_0\in\R^d$ and $\Sigma_0\in\cS^+_d$. This prior is listed in \cite{bib:Gelman2013bayesian} with slightly different hyperparameters, but we made this modification to obtain \begin{equation}\label{eq:NIW_expect} \begin{array}{rl} \E\Lambda&=\Sigma_0,\\ \bV(\mu)&=\E\bV(\mu\cond \Lambda)+\bV\E(\mu\cond\Lambda)=\E \Lambda/\kappa_0+\bV(\mu_0)=\Sigma_0/\kappa_0, \end{array} \end{equation} which gives a nice interpretation of the hyperparameters. \smallskip Formula \eqref{eq:bmm1} can model data clustering; clusters are defined by deciding which $G_{\theta_i}$ generated a given data point. In order to formally define the clusters, we need to rewrite \eqref{eq:bmm1} as \begin{equation}\label{eq:bmm2} \begin{array}{rcl} \bm{p}=(p_i)_{i=1}^m&\sim& \nu \\ \bm{\theta}=(\theta_i)_{i=1}^m&\iid& \pi \\ \bm{\phi}=(\phi_1,\ldots,\phi_n) \cond \bm{p},\bm{\theta}&\iid& \sum_{i=1}^m p_i \delta_{\theta_i}\\ x_i\cond \bm{p},\bm{\theta},\bm{\phi}&\sim& G_{\phi_i} \quad\textrm{ independently for all $i\leq n$.} \end{array} \end{equation} Then the clusters are the equivalence classes of the relation $i\sim j\equiv \phi_i=\phi_j$. In this way the distribution $\nu$ on the $m$-dimensional simplex \textit{generates} a probability distribution $\cP_{ \nu, n }$ on the partitions of the set $[n]$ into at most $m$ subsets. \begin{exmp} Let $V_1,V_2,\ldots\iid \Beta(1,\alpha)$, $p_1=V_1$, $p_k=V_k\prod_{i=1}^{k-1}(1-V_i)$ for $k>1$. Let $\nu$ be the distribution of $\bm{p}=(p_1,p_2,\ldots)$.
The probability on the space of partitions of $[n]$ that $\nu$ generates is the Generalized Polya Urn Scheme (\cite{bib:Blackwell1973ferguson}), also known as the Chinese Restaurant Process (\cite{bib:Aldous1985exchangeability}), with the probability weight given by \begin{equation} \cP_{ \nu, n }(\cI)=\frac{\alpha^{|\cI|}}{\alpha^{(n)}}\prod_{I\in\cI}(|I|-1)!, \end{equation} where $\alpha^{(n)}=\alpha(\alpha+1)\ldots (\alpha+n-1)$. \end{exmp} \begin{lem} Let $\nu$ be a probability distribution on $\Delta^m$ that generates a probability $\cP_{ \nu, n }$ on the partitions of $[n]$. Then for every partition $\cI$ of $[n]$ \begin{equation}\label{eq:pi_to_erp} \cP_{ \nu, n }(\cI)= \int_{\Delta^m} \sum_{\psi\colon \cI\stackrel{1-1}{\to} [m]} \prod_{I\in\cI} p_{\psi(I)}^{|I|} \d{\nu(\bm{p})} \end{equation} where the ``middle sum'' ranges over all injective functions from $\cI$ to $[m]$ (with the convention $[\infty]=\N$). \end{lem} \begin{proof} If $|\cI|>m$ then both sides of \eqref{eq:pi_to_erp} are 0. We now assume that $|\cI|\leq m$. Let us go back to \eqref{eq:bmm2} and suppose that the weights $\bm{p}=(p_i)_{i=1}^m$ and the atoms $\bm{\theta}=(\theta_i)_{i=1}^m$ are fixed. We need to determine the probability that $\bm{\phi}=(\phi_1,\ldots,\phi_n) \cond \bm{p},\bm{\theta}\iid \sum_{i=1}^m p_i \delta_{\theta_i}$ induces the partition $\cI$. This would mean that for every $I\in\cI$ all the values $\phi_i$ for $i\in I$ are equal to $\theta_j$ for some $j\leq m$; let $j=\psi(I)$. The values $\psi(I)$ must be different for different $I\in\cI$, otherwise $\cI$ would not be generated. The probability of the sequence $(\phi_1,\ldots,\phi_n)$ where $\phi_i=\theta_{\psi(I)}$ for $i\in I$ is equal to $\prod_{I\in\cI} p_{\psi(I)}^{|I|}$. Since any assignment of clusters to atoms is valid, for fixed $\bm{p}$ the probability of $\cI$ is equal to $\sum_{\psi\colon \cI\stackrel{1-1}{\to} [m]} \prod_{I\in\cI} p_{\psi(I)}^{|I|}$.
Since $\bm{p}\sim \nu$ is random, we have to integrate it out, and \eqref{eq:pi_to_erp} follows. \end{proof} Let $\cP_{ \nu, n }$ be the probability distribution on the space of partitions generated by $\nu$. We can formulate \eqref{eq:bmm1} as follows: first we generate the partition of observations into clusters, and then for every cluster we sample the actual observations from the relevant marginal distribution. Formally, \eqref{eq:bmm1} is equivalent to \begin{equation}\label{eq:bmm3} \begin{array}{rcl} \cI&\sim& \cP_{ \nu, n }\\ \mathbf{x}_I:=(x_i)_{i\in I}\cond \cI &\sim& f_{ |I| } \quad\textrm{ independently for all $I\in\cI$} \end{array} \end{equation} where for $\theta\sim \pi$, $k\in\N$ and $\mathbf{u}=(u_1,\ldots,u_k)\cond \theta\iid G_\theta$, $f_k$ is the marginal density of $\mathbf{u}$, i.e. \begin{equation} f_{ k }(u_1,\ldots,u_k):=\int_\Theta \pi(\theta)\prod_{i=1}^k g_\theta(u_i)\d{\theta}. \end{equation} ($g_\theta$ is the density of $G_\theta$). We stress the fact that the independent sampling on the `lower' level of \eqref{eq:bmm3} relates to the independence between clusters (conditioned on the random partition); within one cluster the observations are (marginally) dependent. To make the notation more concise we define \begin{equation} f(\mathbf{x}\cond \cI):= \prod_{I\in\cI} f_{|I|}(\mathbf{x}_I). \end{equation} Then \eqref{eq:bmm3} becomes \begin{equation}\label{eq:bmm3a} \begin{array}{rcl} \cI&\sim& \cP_{ \nu, n }\\ \mathbf{x}\cond \cI &\sim& f(\cdot\cond \cI). \end{array} \end{equation} Further analysis requires the exact formula for $f_k$; in our case it is straightforward to compute since $\pi$ and $G_\theta$ are conjugate. We state the result here for the reader's convenience. \begin{prop} \label{res:form_f_MC} Let $\theta=(\mu,\Lambda)$ have the distribution given by \eqref{eq:NIWmodel} and let $\mathbf{u}=(u_1,\ldots,u_k)\cond \theta\iid \cN(\mu,\Lambda)$.
Then the marginal distribution of $\mathbf{u}$ is given by \begin{equation}\label{eq:fMC} f_{k}(\mathbf{u}) = \frac{|\eta_0\Sigma_0|^{\nu_0/2}\kappa_0^{1/2}\Gamma_d\big({ \nu_k\over 2 }\big)} {\pi^{dk/2}\kappa_k^{1/2}\Gamma_d\big({ \nu_0\over 2 }\big)}\cdot \det\left(\Sigma(\mathbf{u})\right)^{-\nu_k/2}, \end{equation} where $\Gamma_d$ is the multivariate Gamma function and \begin{equation} \nu_k=\eta_0+d+1+k,\ \kappa_k=\kappa_0+k\quad\textrm{and} \end{equation} \vspace*{-5mm} \begin{equation}\label{eq:sigmafun} \Sigma(\mathbf{u})= \eta_0\Sigma_0+\sum_{i=1}^k (u_i-\ov{\mathbf{u}})(u_i-\ov{\mathbf{u}})^t+ \frac{\kappa_0 k}{\kappa_{k}}(\ov{\mathbf{u}}-\mu_0)(\ov{\mathbf{u}}-\mu_0)^t. \end{equation} \end{prop} \begin{proof} The proof follows from \cite{bib:Murphy2007conjugate}, equation (266). \end{proof} \subsection{The Induced Partition} Throughout this section $P$ is some fixed probability distribution on $\R^d$. \begin{dfn} We say that a family $\cA$ of $P$-measurable subsets of $\R^d$ is a \emph{$P$-partition} if \begin{itemize} \item $P\left(\bigcup_{A\in\cA} A\right)=1$ \item $P(A_1\cap A_2)=0$ for all $A_1,A_2\in\cA$, $A_1\neq A_2$. \end{itemize} \end{dfn} \begin{ntn*} Let $\cA$ be a $P$-partition of the observation space. Let $X_1,X_2,\ldots\iid P$ and for $n\in\N$ let $\cI^\cA_n=\{J^A_n \colon A\in\cA\}$ where $J^A_n=\{i\leq n\colon X_i\in A\}$ (if $J^A_n=\emptyset$, we do not include it in $\cI^\cA_n$). We say that $\cI^\cA_n$ is \emph{induced} by $\cA$. \end{ntn*} \begin{prop} Let $\cA$ be a $P$-partition of the observation space. Then $\cI^\cA_n$ is almost surely a partition of $[n]$. \end{prop} \begin{proof} The proof is straightforward and therefore omitted. \end{proof} \smallskip Let $E_P(A)=\E_P(X\cond X\in A)$ and $\bV_P(A)=\Var_P(X\cond X\in A)$, where $X\sim P$.
That means $E_P(A)$ and $\bV_P(A)$ are the conditional expected value and the conditional covariance matrix of $X$ given the event $X\in A$. For a family $\cA$ of sets with positive $P$ measure let \begin{equation} \cV_P(\cA)=\sum_{A\in\cA} P(A)\ln|\bV_P(A)|,\qquad \cH_P(\cA)=-\sum_{A\in\cA} P(A)\ln P(A), \end{equation} where $|\cdot|$ means determinant. Let \begin{equation}\label{eq:deltadef} \overline{\Delta}_P(\cA)=-\re{2}\cV_P(\cA)-\cH_P(\cA). \end{equation} It turns out that, modulo a constant, \eqref{eq:deltadef} describes the exponential rate of the (unnormalised) posterior probability in the Bayesian Mixture Model of the data clustering defined by $\cA$, when the data come as an iid sample from $P$. \begin{prop}\label{res:allapprox} $\sqrt[n]{\cP_{\nu,n}(\cI^\cA_n)\cdot f(X_{1:n}\cond \cI^\cA_n)}\approx (2e)^{-d/2}\exp\{\overline{\Delta}_P(\cA)\}$, where \begin{equation}\label{eq:mydelta} \overline{\Delta}_P(\cA) = -\re{2}\sum_{A\in\cA} P(A)\ln |\bV_P(A)| + \sum_{A\in \cA} P(A)\ln P(A) \end{equation} \end{prop} \begin{proof} The result follows from \Cref{res:likapprox} and \Cref{res:priorapprox}. \end{proof} It should be noted that \Cref{res:allapprox} does not depend on the form of the prior on probability measures. This prior is responsible for the `entropy' part of \eqref{eq:mydelta}. \smallskip The final goal is not to score the partitions of the observation space but clusterings of the data. A natural idea is to replace the distribution $P$ in \eqref{eq:deltadef} by its empirical counterpart. Let $\hat{P}_n=\re{n}\sum_{i\leq n} \delta_{x_i}$ be the empirical probability of $\mathbf{x}$. This is how $\cD_0$ is obtained. \smallskip The function $\cD_0$ would not be a good score statistic, because if $\cJ$ contains a cluster $J$ of size less than $d$ then $\sum_{j\in J}(x_j-\ov{x_J})(x_j-\ov{x_J})^t$ is singular and hence $\hat{\Delta}_\mathbf{x}(\cJ)=\infty$.
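This degeneracy is easy to reproduce numerically; the following sketch (ours, in NumPy, with illustrative sizes) shows the within-cluster scatter matrix of an undersized cluster losing rank:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 3
X = rng.normal(size=(10, d))
J = [0, 1]                          # a cluster with |J| = 2 < d points
Xc = X[J] - X[J].mean(axis=0)
S = Xc.T @ Xc                       # within-cluster scatter: rank <= |J| - 1 < d
sign, logdet = np.linalg.slogdet(S)
print(sign, logdet)                 # sign 0, logdet -inf: the empirical score blows up
```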
To circumvent this, one could add some positive definite matrix to the within-group covariance matrix -- in this way the relevant determinant will always be greater than zero. Since we would like to avoid any arbitrary constants in the score function, a natural idea is to use the covariance matrix of the whole dataset, $\hat{\bV}_\mathbf{x}=\sum_{i\leq n}(x_i-\ov{x})(x_i-\ov{x})^t$. This operation is also motivated by considering \textit{the adaptive model}, where the strength of the prior distribution increases linearly with the number of observations. The details of this approach are given in \Cref{sec:adaptive}. On the other hand, we do not want this modification to affect $\hat{\Delta}_\mathbf{x}$ significantly when the sizes of clusters are large and the empirical covariance matrices are good estimates of the theoretical ones. Therefore we decide to decrease the importance of the modification linearly with the cluster size. This gives \eqref{eq:timon}, which is a well-defined score statistic. \subsubsection{Auxiliary propositions} \begin{prop}\label{res:likapprox} Let $P$ be a probability distribution on $\R^d$ and let $\cA$ be a \emph{finite} $P$-partition of the observation space. Then $\lim_{n\to\infty} \sqrt[n]{f(X_{1:n}\cond \cI^\cA_n)}\stackrel{\textnormal{a.s.}}{=} (2e)^{-d/2}\prod_{A\in\cA} |\bV_P(A)|^{-P(A)/2} $ \end{prop} \smallskip Before we present the proof of \Cref{res:likapprox}, we formulate an auxiliary lemma that concerns the asymptotics of the function $\Gamma_d$. \begin{ntn*} If $(a_n)_{n=1}^\infty$ and $(b_n)_{n=1}^\infty$ are real sequences, we write $a_n\approx b_n$ if $\lim_{n\to\infty} \frac{a_n}{b_n}=1$. We write $a_n=o(b_n)$ if $\lim_{n\to\infty} \frac{a_n}{b_n}=0$. Similarly, if $a,b\colon \R\to\R$ are real functions, we write $a(x)\approx b(x)$ if $\lim_{x\to\infty} \frac{a(x)}{b(x)}=1$ and $a(x)=o\big(b(x)\big)$ if $\lim_{x\to\infty} \frac{a(x)}{b(x)}=0$. \end{ntn*} \begin{lem}\label{res:viper} Let $\alpha,\beta,a,b>0$.
If $a_n\approx \alpha n^a$ and $b_n-\beta=o\left(\re{n^b}\right)$ then $a_n^{b_n}\approx (\alpha n^a)^\beta$. \end{lem} \begin{proof} For sufficiently large $n$ we have $1<a_n<2\alpha n^a$ and $-\re{n^b}<b_n-\beta<\re{n^b}$, hence \begin{equation}\label{eq:banana} (2\alpha n^a)^{-\re{n^b}}<a_n^{-\re{n^b}}<a_n^{b_n-\beta}<a_n^{\re{n^b}}<(2\alpha n^a)^{\re{n^b}} \end{equation} Left- and right-hand side of \eqref{eq:banana} converge to 1, so $\lim_{n\to\infty} a_n^{b_n-\beta}=1$. The proof follows from $ \frac{a_n^{b_n}}{(\alpha n^a)^\beta}= \left(\frac{a_n}{\alpha n^a}\right)^\beta a_n^{b_n-\beta} $. \end{proof} \begin{lem}\label{lem:gammad} If $x_n\approx \lambda n$ and $x_n/n-\lambda = o\big(\re{n^a}\big)$ for some $a>0$ then $\sqrt[n]{\Gamma_d\left( { x_n }\right)}\approx (\lambda \frac{n}{e})^{\lambda d}$. \end{lem} \begin{proof} Recall Stirling's formula: $ \Gamma(x)\approx \sqrt{2\pi x}(\frac{x}{e})^x. $ It follows from \Cref{res:viper} that \begin{equation} \sqrt[n]{\Gamma( x_n)} \approx \left(\sqrt{ 2\pi x_n} \left( { x_n\over e } \right)^{x_n}\right)^{1/n} = (2\pi x_n)^{1/(2n)} \left( { x_n\over e } \right)^{x_n/n} \approx \left(\lambda \frac{n}{e}\right)^\lambda \end{equation} since $n^{1/n^a}\approx 1$. Note that for fixed $t>0$ we have $(x_n-t)\approx \lambda n$ and as a result \begin{equation} \sqrt[n]{\Gamma_d(x_n)}=\sqrt[n]{\pi^{d(d-1)/4}} \prod_{j=1}^d \sqrt[n]{\Gamma\left(x_n-\frac{j-1}{2}\right)} \approx \left(\lambda \frac{n}{e}\right)^{\lambda d}. \end{equation} \end{proof} \begin{proof}[Proof of \Cref{res:likapprox}] Note that $|J^A_n|$ is a random variable with distribution $\Bin(n,P(A))$ for all $A\in\cA$.
Due to the Law of Iterated Logarithm we have that almost surely $\big(|J^A_n|/n-P(A)\big)=o(n^{-1/2+\eps})$ for any $\eps>0$ and hence the assumptions of \Cref{lem:gammad} are almost surely satisfied, so \begin{equation} \sqrt[n]{\Gamma_d\left( { |J^A_n|+\nu_0 \over 2 }\right)}\stackrel{\textnormal{a.s.}}{\approx} \left(\frac{P(A)}{2}\cdot \frac{n}{e}\right)^{P(A) d/2}. \end{equation} Because $\cA$ is finite and $\sum_{A\in\cA} P(A)=1$, it means that \begin{equation} \begin{split} \sqrt[n]{\prod_{A\in\cA}\Gamma_d\left( { |J^A_n|+\nu_0\over 2 }\right)} &\stackrel{\textnormal{a.s.}}{\approx} \left( \prod_{A\in\cA}P(A)^{P(A)} \right)^{d/2} \left( { n\over 2e } \right)^{d/2}. \end{split} \end{equation} By the strong law of large numbers we have that \begin{equation} \sum_{i\in J^A_n}(x_i-\overline{\mathbf{x}_A})(x_i-\overline{\mathbf{x}_A})^t/|J^A_n|\stackrel{\textnormal{a.s.}}{\approx} \bV_P(A) \quad \textrm{for $A\in\cA$} \end{equation} and hence, by \eqref{eq:sigmafun}, for $A\in\cA$ \begin{equation} \begin{split} \big|\Sigma(\bX_{J^A_n})\big|/|J^A_n|^d&= \Big|\eta_0\Sigma_0/|J^A_n| + \sum_{i\in J^A_n} (x_i-\overline{\mathbf{x}_A})(x_i-\overline{\mathbf{x}_A})^t/|J^A_n| +\frac{\kappa_0}{\kappa_0+{|J^A_n|}}(\overline{\mathbf{x}_A}-\mu_0)(\overline{\mathbf{x}_A}-\mu_0)^t\Big|\stackrel{\textnormal{a.s.}}{\approx}\\ &\stackrel{\textnormal{a.s.}}{\approx} \Big|\sum_{i\in J^A_n} (x_i-\overline{\mathbf{x}_A})(x_i-\overline{\mathbf{x}_A})^t/|J^A_n| \Big|\stackrel{\textnormal{a.s.}}{\approx} |\bV_P(A)|\\ \end{split} \end{equation} Hence $|\Sigma(\bX_{J^A_n})|\stackrel{\textnormal{a.s.}}{\approx} |J^A_n|^d |\bV_P(A)|\stackrel{\textnormal{a.s.}}{\approx} n^d P(A)^d |\bV_P(A)|$.
Using the Law of Iterated Logarithm and \Cref{res:viper} again we get \begin{equation} \sqrt[n]{|\Sigma(\bX_{J^A_n})|^{-( |J^A_n|+\nu_0 )/2}} \stackrel{\textnormal{a.s.}}{\approx} ( P(A)^{P(A)} )^{-d/2} n^{-dP(A)/2}|\bV_P(A)|^{-P(A)/2} \end{equation} which means \begin{equation} \sqrt[n]{\prod_{A\in\cA} |\Sigma(\bX_{J^A_n})|^{-( |J^A_n|+\nu_0 )/2}}\stackrel{\textnormal{a.s.}}{\approx} \left( \prod_{A\in\cA} P(A)^{P(A)} \right)^{-d/2} n^{-d/2}\prod_{A\in\cA}|\bV_P(A)|^{-P(A)/2} \end{equation} and therefore \begin{equation} \begin{split} \sqrt[n]{f(X_{1:n}\cond \cI^\cA_n)}&\stackrel{\textnormal{a.s.}}{\approx} \left( \prod_{A\in\cA}P(A)^{P(A)} \right)^{d/2} \left( { n\over 2e } \right)^{d/2} \left( \prod_{A\in\cA} P(A)^{P(A)} \right)^{-d/2} n^{-d/2}\prod_{A\in\cA}|\bV_P(A)|^{-P(A)/2}=\\ &=(2e)^{-d/2}\prod_{A\in\cA}|\bV_P(A)|^{-P(A)/2} \end{split} \end{equation} \end{proof} \begin{prop}\label{res:priorapprox} Let $P$ be a probability distribution on $\R^d$ and let $\cA$ be a \emph{finite} $P$-partition of the observation space. Let $\cP_{ \nu, n }$ be a probability distribution on the partitions of $[n]$, generated by the probability distribution $\nu$ on $\Delta^\infty$. Then $\lim_{n\to\infty} \sqrt[n]{\cP_{\nu, n}(\cI^\cA_n)}\stackrel{\textnormal{a.s.}}{=} \prod_{A\in\cA} P(A)^{P(A)}$. \end{prop} \begin{proof} The proof is a direct consequence of the Law of Large Numbers and \Cref{res:huniv}. \end{proof} By \eqref{eq:deltadef}, $\overline{\Delta}_P$ consists of two components: $\cV_P$ and $\cH_P$. These two behave differently when two clusters are joined; the variance component is increasing whereas the entropy component is decreasing. \begin{prop}\label{res:disagglo} Let $\cA$ be a partition of $\R^d$ and let $A,B\in \cA$. Let $\cC$ be a partition obtained from $\cA$ by joining $A$ and $B$, i.e. $\cC=\cA\cup \{A\cup B\}\sm \{A,B\}$. Then \begin{enumerate}[(a)] \item $\cH_P(\cA)\geq \cH_P(\cC)$ \item $\cV_{P}(\cA)\leq \cV_{P}(\cC)$.
\end{enumerate} \end{prop} \begin{proof} Let $C=A\sqcup B$.\\ \textit{Part (a):} \begin{equation}\label{eq:thth} P(A)\ln P(A)+P(B)\ln P(B)-P(C)\ln P(C)= P(A)\ln \frac{P(A)}{P(C)} + P(B)\ln \frac{P(B)}{P(C)}\leq 0 \end{equation} and the proof follows. The last inequality in \eqref{eq:thth} comes from $P(A),P(B)\leq P(C)$. \begin{lem}\label{res:varsubadd} Let $A\cap B=\emptyset, C:=A\cup B$. Then \begin{equation} P(A)\bV_P(A)+P(B)\bV_P(B)\preceq P(C)\bV_P(C) \end{equation} where $\preceq$ is the L\"{o}wner partial order, i.e. $M_1\preceq M_2$ iff $M_2-M_1$ is non-negative definite. \end{lem} \begin{proof} Let $e_1(A)=\E X\1_A(X)$ and $e_2(A)=\E XX^t \1_A(X)$ where $X\sim P$. Then \begin{equation} \bV_P(A)=\frac{e_2(A)}{P(A)}-\frac{e_1(A)e_1(A)^t}{P(A)^2}. \end{equation} Note that the functions $P,e_1,e_2$ are additive, hence \begin{equation}\label{eq:fhvg} \begin{split} P(C)\bV_P(C)-P(A)\bV_P(A)-P(B)\bV_P(B) &=\\ &\hspace{-5cm}= \left(e_2(C)-\frac{e_1(C)e_1(C)^t}{P(C)}\right)- \left(e_2(A)-\frac{e_1(A)e_1(A)^t}{P(A)}\right)- \left(e_2(B)-\frac{e_1(B)e_1(B)^t}{P(B)}\right) =\\ &\hspace{-5cm}= \frac{e_1(A)e_1(A)^t}{P(A)}+\frac{e_1(B)e_1(B)^t}{P(B)}- \frac{e_1(C)e_1(C)^t}{P(C)} =\\ &\hspace{-5cm}= \frac{e_1(A)e_1(A)^t}{P(A)}+\frac{e_1(B)e_1(B)^t}{P(B)}- \frac{\big(e_1(A)+e_1(B)\big)\big(e_1(A)+e_1(B)\big)^t}{P(A)+P(B)} =\\ &\hspace{-5cm}= \frac{P(A)P(B)}{P(A)+P(B)}\left(\frac{e_1(A)}{ P(A) }-\frac{e_1(B)}{ P(B) }\right) \left(\frac{e_1(A)}{ P(A) }-\frac{e_1(B)}{ P(B) }\right)^t. \end{split} \end{equation} The last matrix in \eqref{eq:fhvg} is clearly non-negative definite and the proof follows. \end{proof} \begin{thm}\label{res:detcon} \textnormal{(Theorem 2.4.4 in \cite{bib:HornJohnson1990matrix})} The function $\ln \det(\cdot)$ is concave on the space of positive definite matrices.
\end{thm} \smallskip \textit{Proof of part (b):} \begin{equation} \begin{split} \frac{P(A)}{P(C)}\ln |\bV_P(A)|+ \frac{P(B)}{P(C)}\ln |\bV_P(B)| &\stackrel{\Cref{res:detcon}}{\leq} \ln\Big|\frac{P(A)}{P(C)}\bV_P(A)+\frac{P(B)}{P(C)}\bV_P(B)\Big|\leq\\ &\stackrel{\Cref{res:varsubadd}}{\leq} \ln|\bV_P(C)| \end{split} \end{equation} and the proof follows. \end{proof} \begin{thm}\label{res:huniv} Let $\cP_{ \nu, n }$ be a probability distribution on the partitions of $[n]$, generated by the probability distribution $\nu$ on $\Delta^\infty$. Fix $K\in\N$ and consider a sequence of partitions $(\cI_n)_{n\in\N}$, where $\cI_n=\{I_{n,1},\ldots,I_{n,K}\}$ is a partition of $[n]$ (it is possible that $I_{n,i}=\emptyset$ for some $i\leq K$). Assume that $|I_{n,k}|/n \to \alpha_k>0$ for $k\leq K$. Then \begin{equation} \lim_{n\to\infty}\sqrt[n]{\cP_{\nu,n}(\cI_n)} = \prod_{k=1}^K \alpha_k^{ \alpha_k } \end{equation} \end{thm} \begin{proof} First note that for sufficiently large $n$ we have $|I_{n,k}|\geq 1$ for all $k\leq K$. Then in \eqref{eq:pi_to_erp} we sum functions that depend on exactly $K$ coordinates of $\bm{p}$. Hence we can express \eqref{eq:pi_to_erp} in the form of an integral on the $K$-dimensional set ${\blacktriangle}^K=\{(p_1,\ldots,p_K)\colon \sum_{k=1}^K p_k\leq 1, \forall_{k\leq K}\ p_k\in[0,1]\}$ as \begin{equation} \cP_{\nu,n}(\cI_n)= \int_{{\blacktriangle}^K} \prod_{k=1}^K p_{k}^{|I_{n,k}|} \d{\nu_K(\bm{p})} \end{equation} where $\nu_K$ is a measure on ${\blacktriangle}^K$ defined by \begin{equation} \nu_K(A)=\sum_{\psi\colon[K]\stackrel{1-1}{\to}\N} \nu\big( (p_{\psi(1)},p_{\psi(2)},\ldots,p_{\psi(K)})\in A\big) \end{equation} for $A\subset {\blacktriangle}^K$, where $[K]=\{1,2,\ldots,K\}$.
Hence \begin{equation} \sqrt[n]{\cP_{\nu,n}(\cI_n)}= \sqrt[n]{ \int_{{\blacktriangle}^K} \prod_{k=1}^K p_{k}^{|I_{n,k}|} \d{\nu_K(\bm{p})} }=\norm{g_n}_{n} \end{equation} where $g_n(p_1,\ldots,p_K)=\prod_{k=1}^K p_{k}^{|I_{n,k}|/n}$ and $\norm{\cdot}_n$ is the norm in the $L^n({\blacktriangle}^K, \nu_K)$ space. \smallskip Since $\nu_K$ is not a finite measure on ${\blacktriangle}^K$, in the remaining part of the proof we will have to be careful that the functions we are considering belong to the space $L^n({\blacktriangle}^K, \nu_K)$ for sufficiently large $n$. \smallskip Let $g(p_1,\ldots,p_K)=\prod_{k=1}^K p_{k}^{\alpha_k}$ and let $h(p_1,\ldots,p_K)=\prod_{k=1}^K p_{k}$. Note that \begin{equation} \int_{{\blacktriangle}^K} h(\bm{p}) \d{\nu_K(\bm{p})}= \cP_{\nu,K}\Big(\big\{\{1\},\{2\},\ldots,\{K\}\big\}\Big)\leq 1. \end{equation} Moreover for $n>1/\min_k{\alpha_k}$ we have $g^n(\bm{p})\leq h(\bm{p})$ and therefore $g\in L^n({\blacktriangle}^K, \nu_K)$ for $n>1/\min_k{\alpha_k}$. Because $g$ is bounded by 1 we get \begin{equation} \norm{g}_n\to \norm{g}_\infty=\sup_{{\blacktriangle}^K} g=\prod_{k\leq K}\alpha_k^{\alpha_k} \end{equation} (the fact that $\norm{g}_\infty=\sup_{{\blacktriangle}^K} g=\prod_{k\leq K}\alpha_k^{\alpha_k}$ follows easily from applying the Lagrange multipliers). \smallskip We now prove that $\norm{g_n-g}_n\to 0$. It is not a straightforward consequence of the pointwise convergence of $g_n$ to $g$ since $\nu_K$ is not a finite measure on ${\blacktriangle}^K$. \smallskip Clearly, $( |I_{n,k}|/n-\alpha_k/2 )\to\alpha_k/2>0$ and hence $\norm{g_n g^{-1/2}- g^{1/2}}_\infty\to 0$ on ${\blacktriangle}^K$.\\ Let $N\in \N$ be chosen so that for $n>N$ we have $\norm{g_n g^{-1/2}-g^{1/2}}_\infty <\eps$ and $n\alpha_k\geq 2$ for $k\leq K$.
Then for $n>N$ \begin{equation} \begin{split} \norm{g_n-g}_n^n &= \int_{{\blacktriangle}^K} | g_n-g |^n \d{\nu_K(\bm{p})} = \int_{{\blacktriangle}^K} | g_n g^{-1/2}-g^{1/2} |^n g^{n/2} \d{\nu_K(\bm{p})}\leq \\ &\leq \eps^n \int_{{\blacktriangle}^K} g^{n/2} \d{\nu_K(\bm{p})} \leq \eps^n \int_{{\blacktriangle}^K} h \d{\nu_K(\bm{p})} \leq \eps^n, \end{split} \end{equation} hence $\norm{g_n-g}_n\to 0$. The result follows from the triangle inequality \begin{equation} \big|\norm{g_n}_n-\norm{g}_\infty\big|\leq \big|\norm{g_n}_n-\norm{g}_n\big|+ \big|\norm{g}_n-\norm{g}_\infty\big|\leq \norm{g_n - g}_n+ \big|\norm{g}_n-\norm{g}_\infty\big|. \end{equation} \end{proof} \begin{lem} Let $\alpha_i>0$ for $i\leq K$ and $\sum_{i=1}^K \alpha_i=1$. Let $g(p_1,\ldots,p_K)=\prod_{k=1}^K p_{k}^{\alpha_k}$. Then $\sup_{{\blacktriangle}^K} g=\prod_{k\leq K}\alpha_k^{\alpha_k}$. \end{lem} \begin{proof} As $\alpha_i>0$ for $i\leq K$, the function $g$ is continuous and, because ${\blacktriangle}^K$ is compact in $\R^K$, it achieves its extreme values. Let $\hat{\bm{p}}=(\hat{p}_1,\ldots,\hat{p}_K)\in{\blacktriangle}^K$ satisfy $g(\hat{\bm{p}})=\sup_{{\blacktriangle}^K} g$. Clearly, $\hat{\bm{p}}\in\Delta^K$. Indeed, otherwise $s=\sum_{i=1}^K \hat{p}_i<1$, $\hat{\bm{p}}/s\in {\blacktriangle}^K$ and $g(\hat{\bm{p}}/s)=g(\hat{\bm{p}})/s>g(\hat{\bm{p}})$, which contradicts the definition of $\hat{\bm{p}}$. Since $g$ is nonnegative on $\Delta^K$ and it is equal to 0 on the boundary of $\Delta^K$, we know that $\hat{\bm{p}}$ is in the interior of $\Delta^K$. The function $g$ is positive on the interior of $\Delta^K$, so by considering the function $\ln(g)$ and using the Lagrange multipliers, we get that $\hat{\bm{p}}$ satisfies \begin{equation} 0=(\alpha_i\ln p_i)'+\lambda=\frac{\alpha_i}{p_i}+\lambda \end{equation} for $i\leq K$ and some $\lambda\in\R$.
Hence the $p_i$'s are proportional to the $\alpha_i$'s, and because $\sum_{i=1}^K\alpha_i=1$, we get that $\hat{p_i}=\alpha_i$ and the proof follows. \end{proof} \section{Adaptive model}\label{sec:adaptive} We now allow the parameters of the model \eqref{eq:NIWmodel} to change with the number of observations. More precisely, we perform the substitution $\eta_0\mapsto \lambda n=:\eta_n$, so that the prior distribution of the within-group covariance matrix keeps its expected value $\Sigma_0$ while becoming increasingly concentrated around it. We investigate the limit formula for the posterior as $n$ goes to infinity. Note that in this case $\Sigma(\bX_{J^A_n})/|J^A_n|\to \frac{\lambda}{P(A)}\Sigma_0+\bV_P(A)$. \begin{equation}\label{eq:NIWmodelMod} \begin{array}{rcl} \Lambda&\sim&\cW^{-1}(\eta_n+d+1,\eta_n\Sigma_0)\\ \mu\cond\Lambda&\sim&\Normal(\mu_0,\Lambda/\kappa_0) \end{array} \end{equation} \begin{prop}\label{res:likapproxMod} Let $P$ be a probability distribution on $\R^d$ and let $\cA$ be a \emph{finite} $P$-partition of the observation space. Then \begin{equation}\label{eq:adapt} \begin{split} \sqrt[n]{f(X_{1:n}\cond \cI^\cA_n)}&\stackrel{\textnormal{a.s.}}{\approx} (2e)^{-(1+|\cA|\lambda)d/2}\prod_{A\in\cA}|\frac{\lambda}{P(A)+\lambda}\Sigma_0+\frac{P(A)}{P(A)+\lambda}\bV_P(A)|^{-\big(P(A)+\lambda\big)/2} \end{split} \end{equation} \end{prop} \begin{proof} Note that $|J^A_n|$ is a random variable with distribution $\Bin(n,P(A))$ for all $A\in\cA$. Due to the Law of Iterated Logarithm we have that almost surely $\big(|J^A_n|/n-P(A)\big)=o(n^{-1/2+\eps})$ for any $\eps>0$ and hence the assumptions of \Cref{lem:gammad} are almost surely satisfied, so \begin{equation} \sqrt[n]{\Gamma_d\left( { |J^A_n|+\eta_n \over 2 }\right)}\stackrel{\textnormal{a.s.}}{\approx} \left(\frac{P(A)+\lambda}{2}\cdot \frac{n}{e}\right)^{\big(P(A)+\lambda\big) d/2}.
\end{equation} Because $\cA$ is finite and $\sum_{A\in\cA} P(A)=1$, it means that \begin{equation} \begin{split} \sqrt[n]{\prod_{A\in\cA}\Gamma_d\left( { |J^A_n|+\eta_n\over 2 }\right)} &\stackrel{\textnormal{a.s.}}{\approx} \left( \prod_{A\in\cA}\big(P(A)+\lambda\big)^{P(A)+\lambda} \right)^{d/2} \left( { n\over 2e } \right)^{(1+|\cA|\lambda)d/2}. \end{split} \end{equation} By the strong law of large numbers we have that \begin{equation} \sum_{i\in J^A_n}(x_i-\overline{\mathbf{x}_A})(x_i-\overline{\mathbf{x}_A})^t/|J^A_n|\stackrel{\textnormal{a.s.}}{\approx} \bV_P(A) \quad \textrm{for $A\in\cA$} \end{equation} and hence, by \eqref{eq:sigmafun}, for $A\in\cA$ \begin{equation} \begin{split} \big|\Sigma(\bX_{J^A_n})\big|/|J^A_n|^d&= \Big|\eta_n\Sigma_0/|J^A_n| + \sum_{i\in J^A_n} (x_i-\overline{\mathbf{x}_A})(x_i-\overline{\mathbf{x}_A})^t/|J^A_n| +\frac{\kappa_0}{\kappa_0+{|J^A_n|}}(\overline{\mathbf{x}_A}-\mu_0)(\overline{\mathbf{x}_A}-\mu_0)^t\Big|\stackrel{\textnormal{a.s.}}{\approx}\\ &\stackrel{\textnormal{a.s.}}{\approx} \Big|\frac{\lambda}{P(A)}\Sigma_0+\sum_{i\in J^A_n} (x_i-\overline{\mathbf{x}_A})(x_i-\overline{\mathbf{x}_A})^t/|J^A_n| \Big|\stackrel{\textnormal{a.s.}}{\approx} \Big|\frac{\lambda}{P(A)}\Sigma_0+\bV_P(A)\Big|\\ \end{split} \end{equation} Hence $|\Sigma(\bX_{J^A_n})|\stackrel{\textnormal{a.s.}}{\approx} n^d \big(P(A)+\lambda\big)^d \big|\frac{\lambda}{P(A)+\lambda}\Sigma_0+\frac{P(A)}{P(A)+\lambda}\bV_P(A)\big|$. Using the Law of Iterated Logarithm and \Cref{res:viper} again we get \begin{equation} \sqrt[n]{|\Sigma(\bX_{J^A_n})|^{-( |J^A_n|+\eta_n )/2}} \stackrel{\textnormal{a.s.}}{\approx} \big( n( P(A)+\lambda )\big)^{-( P(A)+\lambda )d/2} \Big|\frac{\lambda}{P(A)+\lambda}\Sigma_0+\frac{P(A)}{P(A)+\lambda}\bV_P(A)\Big|^{-\big(P(A)+\lambda\big)/2} \end{equation} and \eqref{eq:adapt} follows. \end{proof} \section{Discussion} In this article we proposed a score function that can be used for choosing the number of clusters in popular clustering methods.
It is derived as a limit in a Bayesian Mixture Model of Gaussians. We derived some of its properties, though several questions remain unanswered. For example, it is interesting to ask what assumptions on $P$ should be made to ensure that the supremum of possible values of the $\overline{\Delta}$ function is finite. \bibliographystyle{named}
\section{Anisotropy computation} In this section, we provide a brief description of the methodology employed for computing the optimal anisotropy parameters $(\beta^{\star},\theta^{\star})$. An in-depth exposition can be found in our previous work \cite{CHAKRABORTY20221}. In order to compute the anisotropy parameters $(\beta^{\star},\theta^{\star})$, we employ the polynomial representation of the error estimator mentioned in~\cref{err_est}. In this article, we employ a scaled version of the standard inner product ($H^{1}(T_h) \times H(\mathrm{div},T_h)$) associated with the test space for convection-diffusion and diffusion problems. We call the corresponding test norm the scaled V-norm~\cite{Demkowicz2012a,CHAKRABORTY20221}, given by \begin{align} {\Vert (\psi_v,\bm{\psi}_{\tau}) \Vert}^2_{\mathbb{V},k} & = {\Vert \psi_{v} \Vert}^2_{2,k} + \sqrt{\vert k \vert} \, {\Vert \nabla \psi_v \Vert}^2_{2,k} + {\Vert \bm{\psi}_{\tau}\Vert}^2_{2,k} + \sqrt{\vert k \vert} \, {\Vert \nabla \cdot \bm{\psi}_{\tau}\Vert}^2_{2,k} \notag \\ &= \int_k {(\psi_{v}(\mathbf{x}))}^2+ \bm{\psi}_{\tau}(\mathbf{x}) \cdot\bm{\psi}_{\tau}(\mathbf{x}) + \sqrt{\vert k \vert} \big({\nabla \psi_{v}(\mathbf{x}) \cdot \nabla \psi_{v}(\mathbf{x})} + {(\nabla \cdot\bm{\psi}_{\tau}(\mathbf{x}))}^2\big)\,d\mathbf{x}. \label{scaled_norm_MN} \end{align} Here ${\vert k \vert}$ represents the volume of the element $k \, \in \, T_h$ and $(\psi_v,\bm{\psi}_{\tau})$ are the associated error-representation functions. The computation of the optimal anisotropy parameters $(\beta^{\star},\theta^{\star})$ only involves minimization of a bound on the element's energy error estimate (proposed in~\cite{CHAKRABORTY20221}, Section 4). This bound is given by an integral parameterized by $\beta$ and $\theta$. The minimum of the bound is sought iteratively by performing alternate searches in $\beta$ and $\theta$~\cite{Dolejsi2019}. \section{Discretization} This section briefly discusses the formulation of DPG schemes with optimal test functions.
These schemes can be interpreted as minimum residual methods, which allows a built-in error estimator to be derived in a straightforward way. Let $\mathbb{X}$ and $\mathbb{V}$ be Hilbert spaces over $\mathbb{R}$. We consider a well-posed variational boundary value problem using a continuous bilinear form $b:\mathbb{X} \times \mathbb{V} \rightarrow \mathbb{R}$. Here $\mathbb{X}$ is the approximation space, and $\mathbb{V}$ is the test space. Let $\mathbb{X}'$ and $\mathbb{V}'$ be the respective dual spaces of $\mathbb{X}$ and $\mathbb{V}$. For a given functional $\ell \in \mathbb{V}'$, the primal solution can be defined as $u^{\star} \in \mathbb{X}$ satisfying \begin{align} b(u^{\star},v) = \ell(v) \quad \forall \quad v\in \mathbb{V}. \label{blnr_a} \end{align} The bilinear form $b$ generates a continuous linear operator $\mathcal{B}: \mathbb{X} \rightarrow \mathbb{V}'$ such that \begin{align} \left\langle \mathcal{B}u,v \right\rangle = \left\langle u,\mathcal{B}'v \right\rangle = b(u,v) \quad {\forall} \quad u \in \mathbb{X},v\in\mathbb{V}. \end{align} Thus, $u^{\star} = {\mathcal{B}}^{-1}\ell$. Let $\mathbb{X}_h \subset \mathbb{X}$ be a finite dimensional subspace of $\mathbb{X}$. One can characterize the optimal solution $u_h^{opt} \in \mathbb{X}_h$ as the one with minimal error in an appropriately chosen norm. Since $u^{\star}$ is not accessible a priori, it cannot be invoked to define the optimality statement. Instead, the following minimum residual principle can be used \begin{align} u_h^{opt} = \argmin_{u_h \in \mathbb{X}_h} {\Vert \mathcal{B} u_h - \ell \Vert}^2_{\mathbb{V}'}.
\label{ip1} \end{align} On taking the Gateaux derivative of ${\Vert \mathcal{B} u_h - \ell \Vert}^2_{\mathbb{V}'}$ and using the first order optimality condition, the following variational equation is obtained: \begin{align} \left\langle \mathcal{B}u_h^{opt},{\mathcal{R}_\mathbb{V}}^{-1}\mathcal{B}u_h\right\rangle = \left\langle \ell, {\mathcal{R}_\mathbb{V}}^{-1}\mathcal{B}u_h\right\rangle \quad \forall u_h \in \mathbb{X}_h \label{optblnr_a}, \end{align} where ${\mathcal{R}}_\mathbb{V}: \mathbb{V} \rightarrow {\mathbb{V}}^{\prime}$ is the Riesz map. \Cref{optblnr_a} can be rewritten as \begin{align} b(u^{opt}_h,v_h) = \ell(v_h) \quad \forall v_h \in {\mathcal{R}_\mathbb{V}}^{-1}\mathcal{B}(\mathbb{X}_h). \end{align} On defining the energy inner product on $\mathbb{X}$ as $a(u,w) = \left\langle \mathcal{B}u, {\mathcal{R}_{\mathbb{V}}}^{-1}\mathcal{B}w \right\rangle$, we can define a norm on $\mathbb{X}$ as follows: \begin{align} \vertiii{u}^2_{\mathbb{X}} = a(u,u) = {\Vert \mathcal{B}u \Vert}^2_{\mathbb{V}'}. \end{align} Since $\mathbb{V}$ is infinite-dimensional, the inverse of the Riesz map cannot be computed exactly. In practice, the Riesz map is approximately inverted over an enriched finite dimensional subspace $\mathbb{V}_r \subset \mathbb{V}$ such that $M = \dim(\mathbb{V}_r) \geq \dim(\mathbb{X}_h) = N$ \footnote{In practice, the basis for $\mathbb{V}_r$ is enriched by adding higher-order polynomials to the basis of $\mathbb{X}_h$. Hence, if the basis functions of $\mathbb{X}_h$ are obtained using order $p$ polynomials, then we obtain the basis for $\mathbb{V}_r$ by employing polynomials of order $p + \delta p$. \label{fn_enrch}} \cite{VaziriAstaneh2018}. The approximation involves a symmetric Gram matrix which is induced by the inner product on $\mathbb{V}$.
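In algebraic terms, one common realization of this minimum residual principle is a system of normal equations: with $\mathsf{B}\in\mathbb{R}^{M\times N}$ the enriched stiffness matrix, $\mathsf{G}\in\mathbb{R}^{M\times M}$ the symmetric positive definite Gram matrix and $\mathsf{l}\in\mathbb{R}^{M}$ the load, the discrete solution satisfies $\mathsf{B}^{T}\mathsf{G}^{-1}\mathsf{B}\,\mathsf{u}=\mathsf{B}^{T}\mathsf{G}^{-1}\mathsf{l}$. A minimal sketch of this step (ours; random matrices stand in for actual finite element assembly):

```python
import numpy as np

# Synthetic stand-ins (ours) for the assembled matrices: M = dim(V_r) >= N = dim(X_h).
rng = np.random.default_rng(0)
M, N = 8, 5
A = rng.normal(size=(M, M))
G = A @ A.T + M * np.eye(M)         # SPD Gram matrix of the V_r inner product
B = rng.normal(size=(M, N))         # enriched "stiffness" matrix
l = rng.normal(size=M)              # load vector

Ginv_B = np.linalg.solve(G, B)      # columns represent the (approximate) optimal test functions
u = np.linalg.solve(B.T @ Ginv_B, Ginv_B.T @ l)

# First-order optimality: the residual B u - l is G^{-1}-orthogonal to Range(B).
gap = np.linalg.norm(B.T @ np.linalg.solve(G, B @ u - l))
print(gap)                          # ~0
```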
Thus, a practical implementation of the DPG method aims to find $u_h \in \mathbb{X}_h$ such that \begin{align} b(u_h,v_h) = \ell(v_h) \quad \forall v_h \in \mathbb{V}_h, \label{apprx_blnr2} \end{align} where $\mathbb{V}_h = \mathrm{Range}(F_r)$, and $F_r:\mathbb{X}_h \rightarrow \mathbb{V}_r$ represents the approximation of the mapping ${\mathcal{R}_\mathbb{V}}^{-1}\mathcal{B}$. Let $\varphi$ be the Riesz representation of the residual, i.e., $\mathcal{R}_\mathbb{V} \varphi = \mathcal{B}u_h - \ell$. Since it is not possible to compute the exact inverse of $\mathcal{R}_{\mathbb{V}}$, we need to approximate $\varphi$ using the finite-dimensional subspace $\mathbb{V}_r$, i.e., \begin{align} \left\langle \mathcal{R}_\mathbb{V} \varphi_h, \psi_i \right\rangle = {(\varphi_h,\psi_i)}_{\mathbb{V}} = \left\langle \mathcal{B}u_h - \ell, \psi_i \right\rangle \quad \forall \, i = 1,\dots, M, \label{oprt_errb} \end{align} where $\psi_1, \dots, \psi_M$ is any basis for $\mathbb{V}_r$ and $\varphi_h$ approximates $\varphi$. Let $\varphi_h = \sum_{j=1}^{M} \hat{c}_j \psi_j$. Then from~\cref{oprt_errb}, we have \begin{align*} \mathbb{G}{\bm{c}} = \bm{r}, \end{align*} where $\mathbb{G} \in \mathbb{R}^{M \times M}$ is the Gram matrix with $\mathbb{G}_{i,j} = (\psi_i,\psi_j)_{\mathbb{V}}$, $\bm{c} = (\hat{c}_1, \dots, \hat{c}_M)^T$, and $\bm{r} \in \mathbb{R}^M$ with ${r}_i = b(u_h,\psi_i) - \ell(\psi_i)$. Thus, the error in the energy norm is approximated by \begin{align} \vertiii{ u - u_h }_{\mathbb{X}} = {\Vert \ell - \mathcal{B}u_h \Vert}_{\mathbb{V}'} \approx {\Vert \varphi_h \Vert}_\mathbb{V}. \label{err_est} \end{align} The function $\varphi_h$ is also known as the error-representation function \cite{Demkowicz2012a,Zitelli2011}. In this article, we employ the ultra-weak variational formulation for our numerical experiments. Details about the algebraic structure of the corresponding linear system can be found in \cite{VaziriAstaneh2018}.
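As a small numerical illustration of~\cref{err_est}, the coefficients of the error-representation function follow from one Gram-matrix solve, and its $\mathbb{V}$-norm is the estimated energy error. The symmetric positive definite matrix below is a random stand-in, not actual finite element data:

```python
import numpy as np

# Illustrative stand-in data for the Gram system G c = r
rng = np.random.default_rng(1)
M = 6
A = rng.standard_normal((M, M))
G = A @ A.T + M * np.eye(M)      # Gram matrix G_ij = (psi_i, psi_j)_V
r = rng.standard_normal(M)       # r_i = b(u_h, psi_i) - ell(psi_i)

c = np.linalg.solve(G, r)        # coefficients of the error representation
energy_error = np.sqrt(c @ G @ c)  # ||phi_h||_V

# Equivalent dual-norm form of the same quantity: sqrt(r^T G^{-1} r)
assert np.isclose(energy_error, np.sqrt(r @ np.linalg.solve(G, r)))
```

The identity ${\Vert \varphi_h \Vert}^2_{\mathbb{V}} = \bm{c}^T\mathbb{G}\bm{c} = \bm{r}^T\mathbb{G}^{-1}\bm{r}$ checked in the last line is why the estimator is computable without ever forming ${\mathcal{R}_\mathbb{V}}^{-1}$ explicitly.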
The ultra-weak formulation for convection-diffusion and diffusion problems leads to a composite trial space, i.e., $U_h:=(u_h,\sigma_h,\hat{u}_h,\hat{\sigma}_h) \in \mathbb{X}_h$ \footnote{$\mathbb{X}_h \subset \mathbb{X} = L^2 \times {(L^2)}^d \times H^{\frac{1}{2}} \times H^{-\frac{1}{2}}$}, where $u_h$ approximates $u$, $\sigma_h$ approximates $\nabla u$, $\hat{u}_h$ approximates the trace of $u$ on the mesh skeleton, and $\hat{\sigma}_h$ approximates the normal flux on the mesh skeleton, with $u$ being the diffused or the convected quantity. When presenting numerical results in~\cref{results_hp}, we refer to the approximate energy norm error ${\Vert \varphi_h \Vert}_{\mathbb{V}}$ as ${\Vert U - U_h \Vert}_E$. Next, we briefly discuss the DPG-star method \cite{DEMKOWICZ2018}. We employ the DPG-star method to solve a compatible dual problem to obtain an error indicator that can drive goal-oriented adaptations. The DPG-star method has been previously used in this context, both for isotropic $h-$adaptation \cite{KeithGoal} and anisotropic $h-$adaptation~\cite{CHAKRABORTY20221}. Let the strong form of the primal problem be given by \begin{align} Lu = s \quad \mathrm{in}\ \Omega, \quad Au = g \quad \mathrm{on}\ \partial \Omega, \end{align} where $\Omega$ is the computational domain, $L$ is a linear differential operator, $s \in L^2(\Omega)$, $g \in L^2(\partial \Omega)$, and $A$ is a linear differential boundary operator on $\partial \Omega$. In this article, we deal with target functionals of the following form: \begin{align} J(u) = \int_{\Omega} j_{\Omega} u \, dx + \int_{\partial \Omega} j_{\partial \Omega} Cu \, ds, \label{tf} \end{align} where $j_{\Omega} \in L^2(\Omega)$, $j_{\partial \Omega} \in L^2(\partial \Omega)$, and $C$ is a boundary operator on $\partial \Omega$.
We assume that the target functional satisfies the compatibility condition for linear problems \cite{Adjcompat}, i.e., \begin{equation} {\left( {L}u,z \right)}_{\Omega} + {\left( Au,C^{\star}z \right)}_{\partial \Omega} = {\left( u, {{L}}^{\star}z \right)}_{\Omega} + {\left( Cu,A^{\star}z \right)}_{\partial \Omega}, \end{equation} where $L^{\star},A^{\star}$ and $C^{\star}$ are the adjoint operators of $L,A$ and $C$. The DPG-star method approximates the solution of the dual problem associated with the target functional in~\cref{tf}. The associated dual problem is given by \begin{align} {L}^{\star} z = j_{\Omega} \quad \mathrm{in}\ \Omega, \quad A^{\star}z = j_{\partial \Omega} \quad \mathrm{on}\ \partial \Omega. \end{align} An in-depth exposition of the DPG-star method can be found in \cite{DEMKOWICZ2018,KeithGoal}. The resulting error in the target functional can be bounded as: \begin{align} \vert J(u- u_h) \vert \leq {\Vert \mathcal{B} \Vert}^{-1} {\Vert \mathcal{B} u_h - \ell \Vert}_{\mathbb{V}'} {\Vert {\mathcal{B}'} z_h - J \Vert}_{\mathbb{X}'}. \label{err_bnd_impl} \end{align} In~\cref{err_bnd_impl}, ${\Vert \mathcal{B} u_h - \ell \Vert}_{\mathbb{V}'}$ is the well-studied energy error estimator for DPG methods, whereas for the residual of the dual problem, several error estimators have been proposed~\cite{DEMKOWICZ2018}.
In the present work, we use the DPG-star element error indicator proposed in section 5.2 of \cite{DEMKOWICZ2018} for convection-diffusion problems with the ultra-weak formulation: \begin{equation} \eta^{\star}_k = {\left({\Vert {{L}}^{\star} (v_{h,z},\bm{\tau}_{h,z}) - j_{\Omega} \Vert }_{L^2(k)}^2 + \sum_{e \in \partial k \setminus \partial \Omega} h_e {\Vert {\llbracket \bm{\tau}_{h,z} \cdot \bm{n} \rrbracket} \Vert}_{L^2(e)}^2 + \sum_{e \in \partial k} {h_e}^{-1} {\Vert {\llbracket v_{h,z} \rrbracket} \Vert}_{L^2(e)}^2 \right)}^{\frac{1}{2}}, \label{ele_dual_ind} \end{equation} where $(v_{h,z},\bm{\tau}_{h,z})$ represents the approximate adjoint solution, and $\llbracket \bm{\tau}_{h,z} \cdot \bm{n} \rrbracket$ and ${\llbracket v_{h,z} \rrbracket}$ represent the jumps across an edge $e$ of length $h_e$. We assume that the variational problem is well-posed and $\mathcal{B}$ is bounded from below. Thus, we can use the product of the energy error estimator and the DPG-star error indicator as a valid error indicator for the goal-oriented mesh adaptations. Since we are using the DPG-star method to solve the dual problem, we have replaced ${L}$ with ${L}^{\star}$ in the error estimator from section 5.2 of \cite{DEMKOWICZ2018} to obtain~\cref{ele_dual_ind}. \section{Continuous $hp-$mesh model} \label{hpadap} In this section, we present the procedure for $hp-$adaptations using the inbuilt error estimator that accompanies DPG schemes with optimal test functions. The procedure is a two-step process that comprises selecting the order of polynomial approximation, followed by a mesh density computation. Next, we reiterate an important assumption (see \cite{CHAKRABORTY20221, Dolejsi2017}) which is fundamental for the proposed methodology: \begin{assumption} Let ${T}_h$ be a given triangulation and $\eta_k$ be a local error estimate such that ${\eta}^2 = \sum_{k \in T_h} {\eta}^2_k$ and $\eta = O(h^s)$, i.e., the error estimate converges with order $s$~\cite{Dolejsi2017}.
We assume that the local error estimate $\eta_k$ scales\footnote{To motivate the scaling $\eta_k \propto {\vert k \vert}^{(s+1)}$, recall that the global error estimate $\eta^2$ scales as $h^{2s} \propto {\vert k \vert}^{s}$, since the method is of order $s$. At the same time, the contribution from each individual sub-element (obtained from local isotropic refinement) scales with an additional factor of $\vert k \vert$ because of the local domain of integration. For more details, see section 5 of \cite{Dolejsi2017}.} as $\eta_k = \overline{A}_k {\vert k \vert}^{(s+1)}$, where $\overline{A}_k$ depends upon the anisotropy of the element and the order of the method $s$, but is independent of $\vert k \vert$. Furthermore, the order of the method $s$ directly depends upon the polynomial degree of approximation $p$. \label{assump1} \end{assumption} Verification of~\cref{assump1} can be found in \cite{Dolejsi2017,VENDITTI200240}. Let $\left\{{T}_h\right\}_n$ be the sequence of triangulations employed. We intend to construct an error estimate which asymptotically achieves global equality with the inbuilt energy error estimate: \begin{align} \Vert U - U_h \Vert^2_{E,k} \approx e_d(\mathbf{x}_k) \vert k \vert \quad \text{for} \, \, \mathbf{x}_k \in k, \, k \in T_h, \label{origin} \end{align} where we define $e_d(\mathbf{x}_k)$ as the error density function. Thus, asymptotically ($h \rightarrow 0$), we have \begin{align*} \Vert U-U_h \Vert^2_{E,\Omega} = \sum_{k \in {T}_h} e_d(\mathbf{x}_k) \vert k \vert \rightarrow \int_{\Omega} e_d(\mathbf{x}) \, d \mathbf{x}. \end{align*} Using~\cref{assump1}, $\vert k \vert = \frac{\alpha}{d}$, and~\cref{origin}, we define the error density function as follows: \begin{equation} e_d(\mathbf{x},p(\mathbf{x})) := \overline{A}(\mathbf{x},p(\mathbf{x})){d(\mathbf{x})}^{-s(\mathbf{x})}\alpha^{s(\mathbf{x})}, \label{error_density} \end{equation} where $\alpha = \frac{3\sqrt{3}}{4}$.
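As a quick sanity check of the scaling in~\cref{error_density}, note that $e_d \vert k \vert$ with $\vert k \vert = \alpha/d$ reproduces the element-wise scaling $\overline{A}\,{\vert k \vert}^{s+1}$ of~\cref{assump1}. The sketch below verifies this with hypothetical values for $\overline{A}$, $d$, and $s$:

```python
import math

ALPHA = 3 * math.sqrt(3) / 4     # alpha = 3*sqrt(3)/4

def error_density(A_bar, d, s):
    """e_d = A_bar * d**(-s) * alpha**s, cf. the error density equation."""
    return A_bar * d ** (-s) * ALPHA ** s

# Hypothetical element data
A_bar, d, s = 2.0, 10.0, 3
k_area = ALPHA / d               # |k| = alpha / d
lhs = error_density(A_bar, d, s) * k_area   # e_d * |k|
rhs = A_bar * k_area ** (s + 1)             # A_bar * |k|^(s+1)
```

Both quantities agree exactly, since $e_d\,\vert k \vert = \overline{A}\,\alpha^{s}d^{-s}\cdot\alpha/d = \overline{A}\,(\alpha/d)^{s+1}$.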
This asymptotic continuous-mesh error model will be used in our $hp$-mesh optimization scheme below. Using the a priori established order of convergence of the energy estimate~\cite{Demkowicz2011a}, we can set $s(\mathbf{x}) = p(\mathbf{x})+1$. In an $hp-$adaptive scheme, $s$ can vary for various elements in the mesh. \subsection{Polynomial selection} \label{polyselec} To choose the order of polynomial approximation for an element, we solve the governing PDE locally over a patch surrounding the element. The boundary conditions for these local problems are obtained using the trace $(\hat{u}_h)$ for Dirichlet boundary conditions or the normal flux $(\hat{\sigma}_h)$ for Neumann boundary conditions (the traces or the normal fluxes utilized as the local boundary conditions are computed globally for the current polynomial distribution). For complex non-linear problems, choosing local boundary conditions is non-trivial and would need specific analysis. One such example would be the Euler equations, for which one can employ characteristic boundary conditions. In~\cref{local_patch}, we show an internal patch with Dirichlet boundary conditions for a scalar convection-diffusion problem. \begin{figure}[!h] \begin{center} \includegraphics[scale=0.7]{Images/patchimage.pdf} \end{center} \caption{Patch around an element $k$ over which local problems are solved. Edges marked in red represent Dirichlet boundaries of the patch.} \label{local_patch} \end{figure} These local problems are solved at three different polynomial orders, $p_k$\footnote{To have the same fidelity of the solution from the local solves at different polynomial orders, we solve the local problem at $p_k$ rather than using the solution obtained by solving the global system on the current $hp-$mesh.} and $p_k \pm 1$, where $p_k$ represents the current polynomial order in element $k \in T_h$.
Once the solution is computed, it is followed by computing the local energy error estimate ${\Vert U- U_h \Vert}_{E,k}$ for each polynomial order. Let $E_{p_k+i}$ and $N_{p_k + i}$ denote, respectively, the energy error and the number of degrees of freedom, computed at the polynomial order $p_k+i$, where $i = -1,0,1$. Next, we introduce a parameter $m_{p_k + i}$ which corresponds to the amount of uniform refinement or coarsening required by $p_k + i$ to achieve the same error as $p_k$. \begin{equation} m_{p_k+i} = {\left(\frac{E_{p_k + i}}{E_{p_k}}\right)}^{\frac{2}{s_i +1}} N_{p_k + i}. \label{ref_par} \end{equation} In~\cref{ref_par}, $s_i$ is the a priori rate of convergence for order $p_k + i$. For the energy norm, we have used $s_i = p_k + i + 1$. In order to choose the optimal polynomial order, we compute $m_{p_k+i}$ for $p_k - 1$ and $p_k + 1$. The optimal order ($p_{k,opt}$) is the one which achieves $E_{p_k}$ with the smallest value of $m_{p_k+i}$, i.e., \begin{equation} p_{k,opt} = \argmin_{i = -1,0,1} m_{p_k + i}. \end{equation} The rationale behind this approach is to find (if possible) a polynomial order, which is more efficient in terms of degrees of freedom required to achieve the same error level as $p_k$. For $i = 0$, the above computation is trivial. After we compute the polynomial distribution, we still need to compute the mesh density distribution. For this purpose, we target the continuous error estimate $E_c = \int_{\Omega} e_d(\mathbf{x}) \, d\mathbf{x}$ and solve a continuous minimization problem using calculus of variations. This is addressed in the next section. The enrichment $\delta p$ of the space $\mathbb{V}_r$ when performing $hp-$adaptations is one order higher compared to the enrichment utilized when performing $h-$adaptation \cite{CHAKRABORTY20221}. This increase in order of enrichment stems from the use of approximation spaces of (up to) order $p_k + 1$ while solving the local problem. 
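The selection logic of~\cref{ref_par} can be sketched as follows; the error and degree-of-freedom values are hypothetical stand-ins for the local-solve data:

```python
import numpy as np

def select_order(p_k, E, N, offsets=(-1, 0, 1)):
    """Pick the candidate order with the smallest equivalent cost m_{p_k+i}.

    E[i], N[i]: energy error and dofs of the local solve at order p_k + i.
    Uses the a priori rate s_i = p_k + i + 1.
    """
    m = {}
    for i in offsets:
        s_i = p_k + i + 1
        m[i] = (E[i] / E[0]) ** (2.0 / (s_i + 1)) * N[i]
    best = min(m, key=m.get)
    return p_k + best, m

# Hypothetical local-solve data: raising the order cuts the error enough
# to pay for the extra degrees of freedom, so p_opt = p_k + 1 here.
E = {-1: 4e-2, 0: 1e-2, 1: 4e-4}
N = {-1: 3, 0: 6, 1: 10}
p_opt, m = select_order(p_k=2, E=E, N=N)
```

For $i = 0$ the ratio is one and $m_{p_k}$ reduces to $N_{p_k}$, so the current order is only kept if neither neighbor reaches the same error level more cheaply.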
\subsection{Mesh density computation} \label{step_1} In order to generate the optimal density distribution at a fixed cost, we need to define the notion of cost. To that end, we define the mesh complexity $\mathcal{N}_{h,p}$ as \begin{align} \mathcal{N}_{h,p} = \int_{\Omega} w(p(\mathbf{x})) d(\mathbf{x}) \, d\mathbf{x}, \end{align} where \begin{align} w(p(\mathbf{x})) = \frac{(p(\mathbf{x})+1)(p(\mathbf{x})+2)}{2}. \end{align} The relation between the mesh complexity and the number of degrees of freedom $(N)$ can be computed as follows: \begin{align} \mathcal{N}_{h,p} = \int_{\Omega} w(\mathbf{x},p(\mathbf{x}))d(\mathbf{x})\, d\mathbf{x} \approx \sum_{k \in T_h} d(\mathbf{x}_k)w(\mathbf{x}_k,p_k) \vert k \vert = \sum_{k \in T_h} \alpha w(\mathbf{x}_k,p_k) = N(Ne,\mathbf{p}), \label{rel_cont_comp} \end{align} where $Ne$ is the number of elements and $\mathbf{p}$ is the polynomial distribution vector associated with the $hp-$mesh. Previously, we defined $\overline{A}(\mathbf{x})$ as a continuous analog of $\overline{A}_k$ from~\cref{assump1}. Similarly, we can treat the polynomial distribution vector $\mathbf{p}$ and introduce a continuous analog $p(\mathbf{x})$ for the continuous mesh model. The polynomial distribution vector $\mathbf{p}$ can be seen as a vector created from the snapshots of $p(\mathbf{x})$ in each element. The continuous analogs of $\mathbf{p}$ and $\overline{A}_k$ allow us to formulate a continuous optimization problem as follows: \begin{problem} Let $\mathcal{N}_{h,p}$ be the desired mesh complexity and $e_d(\mathbf{x},p(\mathbf{x}))$ be the error density. We seek a mesh density distribution $d(\mathbf{x}): \Omega \rightarrow \mathbb{R}^{+}$ for a given polynomial distribution $p(\mathbf{x}): \Omega \rightarrow \mathbb{Z}^{+}$ such that: \label{opt_prob} (a) $\mathcal{N}_{h,p} = \int_{\Omega} w(\mathbf{x},p(\mathbf{x})) d(\mathbf{x}) \, d\mathbf{x} \quad \text{with} \quad w(\mathbf{x},p(\mathbf{x})) = \frac{2(p(\mathbf{x})+1)(p(\mathbf{x})+2)}{3\sqrt{3}}$.
(b) $E_c =\int_{\Omega} e_d(\mathbf{x},p(\mathbf{x})) \, d\mathbf{x}$ is minimized. \end{problem} Using~\cref{error_density}, the continuous error estimate can be written as \begin{equation} E_c = \int_{\Omega} {\alpha}^{(p(\mathbf{x})+1)}{\overline{A}(\mathbf{x},p(\mathbf{x}))} {d(\mathbf{x})}^{-(p(\mathbf{x})+1)} \, d\mathbf{x}. \label{contglobalerror} \end{equation} In order to compute the minimum, we employ calculus of variations. Taking the variation of the complexity constraint in~\cref{opt_prob} with respect to $d(\mathbf{x})$ yields \begin{equation} \delta \mathcal{N}_{h,p} = \int_{\Omega} w(\mathbf{x},p(\mathbf{x})) \delta d(\mathbf{x}) d\mathbf{x} = 0. \label{constraint_var} \end{equation} Next, on taking the variation of the continuous error estimate with respect to the density, we obtain \begin{equation} \delta E_c = \int_{\Omega} -(p(\mathbf{x})+1) {\alpha}^{(p(\mathbf{x})+1)}{\overline{A}(\mathbf{x},p(\mathbf{x}))}{d}^{-(p(\mathbf{x})+2)} \delta d \, d\mathbf{x}. \label{variation_energy} \end{equation} From~\cref{constraint_var} and~\cref{variation_energy}, it is implied that \begin{equation} \frac{{(p(\mathbf{x})+1)\overline{A}(\mathbf{x},p(\mathbf{x}))}}{w(\mathbf{x},p(\mathbf{x}))} {\alpha}^{(p(\mathbf{x})+1)} {d(\mathbf{x})}^{-{(p(\mathbf{x})+2)}} = const. \label{comb_const} \end{equation} Solving~\cref{comb_const} for $d(\mathbf{x})$ yields \begin{equation} d^{\star}(\mathbf{x}) = {\left( \frac{(p(\mathbf{x})+1)\overline{A}(\mathbf{x},p(\mathbf{x})) {\alpha}^{(p(\mathbf{x})+1)}}{w(\mathbf{x},p(\mathbf{x}))} \right)}^{\frac{1}{(p(\mathbf{x})+2)}} {const}^{-\frac{1}{(p(\mathbf{x})+2)}}. \label{opt_den_c} \end{equation} Here, $const$ does not have a closed form solution. 
On substituting the expression for the optimal density from~\cref{opt_den_c} into the complexity constraint, we obtain \begin{align} \mathcal{N}_{h,p} = \int_{\Omega} w(\mathbf{x}){\left( \frac{(p(\mathbf{x})+1)\overline{A}(\mathbf{x},p(\mathbf{x})) {\alpha}^{(p(\mathbf{x})+1)}}{w(\mathbf{x},p(\mathbf{x}))} \right)}^{\frac{1}{(p(\mathbf{x})+2)}} {const}^{-\frac{1}{(p(\mathbf{x})+2)}} \, d\mathbf{x}. \label{bisec} \end{align} \Cref{bisec} is non-linear in $const$ due to the varying exponent. Thus, we need to employ numerical techniques to solve for $const$. We use a bisection method to compute this constant in our current work. Once $const$ is computed, we can substitute its value into~\cref{opt_den_c} to obtain the optimal density distribution $d^{\star}(\mathbf{x})$. The proposed $hp-$adaptive algorithm is a generalization of the $h-$adaptation scheme \cite{CHAKRABORTY20221}. If the variable polynomial distribution is substituted with $p(\mathbf{x}) = p$ throughout the domain, then $w(\mathbf{x})$ is constant and the proposed iterative algorithm reverts to the $h-$only adaptation method. Since we are in a discrete setting in terms of triangulation, the quantities $\overline{A}(\mathbf{x},p(\mathbf{x}))$, $p(\mathbf{x})$ and the optimal density $d^{\star}(\mathbf{x})$ are computed for each element $k \in T_h$. Thus, if $\mathbf{x}_k \in k$ and $k \in T_h$, using~\cref{assump1}, we have \begin{align} \overline{A}_{k,p_{k,opt}} = \frac{{\Vert U - U_h \Vert}^2_{E,k,p_{k,opt}}}{{\vert k \vert}^{p_{k,opt}+2}} \label{abar_el} \end{align} and \begin{equation} d^{\star}(\mathbf{x}_k) = {\left( \frac{(p_{k,opt}+1)\overline{A}_{k,p_{k,opt}} {\alpha}^{(p_{k,opt}+1)}}{w(\mathbf{x}_k,p_{k,opt})} \right)}^{\frac{1}{(p_{k,opt}+2)}} {const}^{-\frac{1}{(p_{k,opt}+2)}}, \label{opt_den_el} \end{equation} where ${\Vert U - U_h \Vert}_{E,k,p_{k,opt}}$ represents the energy error in element $k$ for the optimal polynomial order ($p_{k,opt}$) chosen via the process mentioned in~\cref{polyselec}.
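The bisection step for $const$ and the element-wise evaluation of~\cref{opt_den_el} can be sketched as follows. The per-element values of $\overline{A}_k$, $p_k$, and $\vert k \vert$ are hypothetical stand-ins, and since the discretized complexity is monotonically decreasing in $const$, a wide bracket suffices:

```python
import numpy as np

ALPHA = 3 * np.sqrt(3) / 4

def optimal_density(const, A, p):
    """Per-element optimal density for a given constant (cf. opt_den_el)."""
    w = 2 * (p + 1) * (p + 2) / (3 * np.sqrt(3))
    base = (p + 1) * A * ALPHA ** (p + 1) / w
    return base ** (1.0 / (p + 2)) * const ** (-1.0 / (p + 2))

def solve_const(A, p, area, N_target, lo=1e-12, hi=1e12, iters=200):
    """Bisection on the complexity constraint: complexity(const) is
    monotonically decreasing in const, so the bracket contains the root."""
    def complexity(const):
        w = 2 * (p + 1) * (p + 2) / (3 * np.sqrt(3))
        return np.sum(w * optimal_density(const, A, p) * area)
    for _ in range(iters):
        mid = np.sqrt(lo * hi)          # geometric midpoint for a wide bracket
        if complexity(mid) > N_target:
            lo = mid                    # const too small: density too high
        else:
            hi = mid
    return np.sqrt(lo * hi)

# Hypothetical element data: scaling factors A_k, orders p_k, areas |k|
A = np.array([1.0, 5.0, 0.5])
p = np.array([1, 2, 3])
area = np.array([0.2, 0.1, 0.3])
const = solve_const(A, p, area, N_target=50.0)
d_star = optimal_density(const, A, p)
```

After the solve, summing $w_k\,d^{\star}_k\,\vert k \vert$ over the elements recovers the prescribed complexity to within the bisection tolerance.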
\subsection{Goal-oriented adaptation} For goal-oriented adaptation, we use the product of the DPG-star error estimate ($\eta^{\star}_k$, see~\cref{ele_dual_ind}) and the energy estimate, i.e., $\eta_k = \eta^{\star}_k {\Vert U - U_h \Vert}_{E,k,p_{k,opt}}$, in~\cref{assump1} to compute $\overline{A}_{k,p_{k,opt}}$. The polynomial order selection process stays the same, as we only depend upon the primal variables and their effect on the energy error. For goal-oriented adaptations, we employ $s(\mathbf{x}) = p(\mathbf{x}) + 1$ in the density computations. As previously mentioned in section 5.1 of \cite{CHAKRABORTY20221},~\cref{assump1} requires sufficient regularity but, typically, the regularity of both the dual and the primal solutions cannot always be guaranteed. Hence, we pursue a pessimistic approach rather than setting $s(\mathbf{x}) = 2p(\mathbf{x}) + 1$ for convection-diffusion and diffusion problems. \section{Introduction} Automatic mesh adaptation algorithms are potent tools that aid the computation of efficient and accurate solutions of partial differential equations (PDEs). These algorithms can significantly increase the accuracy and computational efficiency by modifying the functional space in which the approximate solution is sought. These modifications can be roughly classified into three different categories: \begin{itemize} \item Modifying the geometrical properties of mesh elements ($h-$adaptation); \item modifying the order of the approximation space ($p-$adaptation); \item combining $h-$ and $p-$adaptation ($hp-$adaptation). \end{itemize} In this article, we focus on $hp-$adaptation. An automatic $hp-$mesh adaptation strategy attempts to produce a near-optimal distribution of element sizes ($h-$adaptivity) and combine it with an appropriate distribution of the local order of approximation ($p-$adaptivity) with minimal user input. Such an automatic mesh adaptation strategy faces the problem of locally choosing between $h-$refinement and $p-$refinement.
This issue has been addressed in the past in many isotropic mesh-adaptation methods~\cite{houston_adap,giani_adap,houston_adap2,jmhp,BABUSKA19905,LD_book}. In recent times, metric-based mesh adaptation has emerged as a promising technology for generating meshes, especially for computational fluid dynamics \cite{ceze2013, Leicht2008, LEICHT20107344}. Meshes comprised of simplices can be represented by a Riemannian metric field. In metric-based mesh adaptation techniques, the parameters defining the metric field are optimized to generate an appropriate mesh. In general, this optimization problem is discrete and defies any analytical solution. However, using the continuous mesh approach \cite{inria_a,inria_b}, one formulates a continuous analog of the discrete mesh optimization problem, thus allowing analytical techniques, such as calculus of variations, to be employed for optimization. Although the mesh generated from the continuous mesh model is not provably optimal, it typically has excellent approximation properties \cite{Dolejsi2017,Rangarajan2018,CHAKRABORTY20221,RANGARAJAN2020109321,Ar2021}. When these adaptive techniques are combined with higher-order approximation methods, they present themselves as very potent tools for attaining higher accuracy with reduced degrees of freedom~\cite{cfd2030}. The potency of metric-based adaptation methods stems from the flexibility they provide for the efficient discretization of the computational domain near difficulties such as singularities, interior layers, and boundary layers. Consult \cite{ringue_adap,RANGARAJAN2020109321,Dolejsi2018,Dahm_dissert} for recent developments in the context of metric-based adaptation for hybridized discontinuous Galerkin and discontinuous Galerkin discretizations. An efficient mesh adaptation algorithm is only a part of the machinery required for computing accurate numerical solutions of PDEs. It complements the numerical method employed to compute the approximate solution.
Typically, automatic mesh adaptation algorithms employ some a posteriori measure of error. These measures are computed using the approximate solution. Thus, the accuracy and stability of the underlying numerical scheme become indispensable for accurate mesh adaptations. In terms of robust and stable higher-order methods, discontinuous Petrov-Galerkin (DPG) schemes with optimal test functions have been a critical development over the past decade. Demkowicz and Gopalakrishnan first introduced the DPG methodology with optimal test functions in \cite{Demkowicz2010,dem_part2}. The core idea of this approach is to compute a discrete test space such that the best possible discrete inf-sup constant is achieved, thus ensuring higher numerical stability and accuracy. Apart from numerical accuracy and stability, these numerical schemes are accompanied by a residual-based error estimator. Due to the presence of this inbuilt error estimator, the DPG methodology with optimal test functions is tailor-made for mesh optimization \cite{Demkowicz2012a, KeithGoal, DEMKOWICZ2020}. This article proposes a metric-based anisotropic adaptation framework that uses the DPG inbuilt error estimator to generate meshes with optimized size and shape distributions, as well as an optimized polynomial degree distribution, thus complementing the optimal test space with a near-optimal approximation space. The article is structured as follows. First, section 2 briefly discusses DPG schemes with optimal test functions. Then, in section 3, we review the relationship between a metric field and a triangular mesh. The proposed method to optimize the local mesh anisotropy and the continuous $hp-$mesh model used to determine the optimal element size and polynomial distribution are discussed in section 4 and section 5, respectively. Finally, we provide numerical results in section 6 and concluding remarks in section 7.
\section{Conclusion and outlook} In this article, we present a continuous $hp-$mesh model that utilizes the inbuilt residual-based error estimator of DPG finite element schemes with optimal test functions. The model can drive both solution-based and target-based mesh adaptations. Moreover, the model can be extended to other minimal-residual finite element methods as long as one can localize the error estimate and obtain a polynomial representation of the error estimate. For non-linear problems, the boundary conditions needed for solving the local problems (see~\cref{polyselec}) on a patch need rigorous analysis. They will depend upon the nature of the non-linear problem and the linearization employed. An extension of the proposed methodology to generate optimal meshes for the compressible Navier-Stokes equations is currently being investigated and will be presented in future work. \section{Acknowledgment} Funding: This work was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) – 333849990/GRK2379 (IRTG Modern Inverse Problems). Declaration of interest: none. \section*{References} \section{Mesh representation using metric fields} Next, we provide a short description of mesh representation using Riemannian metric fields. Such a metric-based mesh representation provides a convenient way of encoding and manipulating a mesh~\cite{inria_a,inria_b}. In this work, we concern ourselves with meshes consisting of triangles, but the proposed ideas and formulations can be extended to tetrahedra \cite{Archk}. Let $T_h$ be a given triangulation. Then an element in $T_h$ can be characterized by a symmetric positive definite matrix. Let $k \in T_h$ represent an element with $e_i,\ i = 1,2,3$, being the vectorial representation of its edges.
Then, there exists a non-degenerate symmetric positive definite matrix \begin{align} \mathbb{M} = \begin{bmatrix} M_{1,1} & M_{1,2} \\ M_{2,1} & M_{2,2} \end{bmatrix} \end{align} such that the element is equilateral under the metric induced by $\mathbb{M}$, i.e., \begin{align*} e_i^T \mathbb{M} e_i = C \quad \forall \quad i=1,2,3, \end{align*} where $C > 0$ is a constant. The triangular element can be associated with its circumscribing ellipse, given by $\{\mathbf{x}\in\mathbb{R}^2: \mathbf{x}^T\mathbb{M}\mathbf{x} = 1\}$. This is illustrated in~\cref{ellip_ele}. \begin{figure}[h] \begin{center} \includegraphics[scale=1.0]{Images/slides-figure25.pdf} \caption{Circumscribing ellipse and triangle for $C = 3$.}\label{ellip_ele} \end{center} \end{figure} The eigenvectors and the eigenvalues of $\mathbb{M}$ encode information about the orientation ($\theta$) and the aspect ratio of the ellipse and, subsequently, the associated element. This is evident from the spectral decomposition of $\mathbb{M}$: \begin{align} \mathbb{M} = {\begin{bmatrix} \cos(\theta) & -\sin(\theta) \\ \sin(\theta) & \cos(\theta) \end{bmatrix}}^T\begin{bmatrix} \alpha_{1} & 0\\ 0 & \alpha_{2} \end{bmatrix}\begin{bmatrix} \cos(\theta) & -\sin(\theta) \\ \sin(\theta) & \cos(\theta) \end{bmatrix}. \end{align} The eigenvalues $\alpha_1 = \frac{1}{h_1^2}$ and $\alpha_2 = \frac{1}{h_2^2}$ are related to the principal axes of the ellipse. The area of the triangle is related to the principal axes of the circumscribing ellipse by \begin{align} \vert k \vert = \frac{3\sqrt{3}}{4} h_1 h_2 = \frac{3\sqrt{3}}{4d}, \end{align} where $d$ is called the local mesh density. Note that $d$ is proportional to the inverse of the area of the ellipse. The slenderness of the ellipse and the associated triangle is represented by the aspect ratio $\beta = \frac{h_1}{h_2} \geq 1$. With $\theta$, $\beta$, and $d$, one can describe the mesh element geometrically.
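For illustration, the geometric quantities $\theta$, $h_1$, $h_2$, $\beta$, and $d$ can be recovered from a given metric by a spectral decomposition. The following standalone sketch uses a hypothetical metric that is stretched by a factor of four along an axis rotated by $30$ degrees:

```python
import numpy as np

def metric_to_geometry(M):
    """Recover orientation theta, sizes (h1, h2), aspect ratio beta, and
    density d from a 2x2 SPD metric via its spectral decomposition."""
    evals, evecs = np.linalg.eigh(M)        # eigenvalues in ascending order
    h = 1.0 / np.sqrt(evals)                # h_i = 1 / sqrt(alpha_i)
    h1, h2 = max(h), min(h)                 # order so that beta = h1/h2 >= 1
    v = evecs[:, np.argmin(evals)]          # axis of the larger size h1
    theta = np.arctan2(v[1], v[0])
    d = 1.0 / (h1 * h2)                     # local mesh density
    return theta, h1, h2, h1 / h2, d

# Hypothetical metric: h1 = 4 along a 30-degree axis, h2 = 1 across it
th = np.pi / 6
R = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
M = R @ np.diag([1.0 / 16.0, 1.0]) @ R.T
theta, h1, h2, beta, d = metric_to_geometry(M)
```

The recovered angle is defined up to a sign flip of the eigenvector, i.e., modulo $\pi$, which is immaterial for an ellipse axis.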
Typically, most metric-based mesh adaptation algorithms generate a discrete metric field. Metric-based mesh generators suitably interpolate the discrete metric field to produce its continuous equivalent, which is further employed by the mesh generator to produce a metric-conforming mesh\footnote{A metric-conforming mesh contains elements which are nearly equilateral under the input metric field. In this article, we use BAMG \cite{Bamg} as the preferred metric-based mesh generator.}. Next, we recall the concept of an $hp-$mesh. For a given triangulation $T_h$, we associate an integer $p_k$ with every element $k \in T_h$, representing its polynomial order of approximation. These integers form a vector called the polynomial distribution vector: $\mathbf{p} = (p_k)_{k \in T_h}$. The triangulation $T_h$ and the associated polynomial distribution vector $\mathbf{p}$ form the $hp-$mesh $T_{h,p} := \{T_h,\mathbf{p}\}$. \section{Numerical results}\label{results_hp} \subsection{Test case I: boundary layer}\label{blhp} Sharp boundary layers are among the most frequently encountered features in fluid dynamics. Through this test case, we present the fidelity of the proposed algorithm in the presence of such boundary layers. In particular, we solve \begin{equation} \begin{aligned} \beta \cdot {\nabla}u-\epsilon{\nabla}^2u &= s \qquad&& \mathrm{in}\ \Omega = {(0,1)}^2, \\ u &= 0 && \mathrm{on} \ \partial \Omega, \end{aligned} \end{equation} where $\beta = {[1,1]}^T$ and $\epsilon > 0$. The source term $s(\mathbf{x})$ is chosen in such a manner that the exact solution is given by \begin{equation} u(\mathbf{x}) = \left( x + \frac{e^{\frac{x}{\epsilon}}-1}{1-e^{\frac{1}{\epsilon}}} \right) \left( y + \frac{e^{\frac{y}{\epsilon}}-1}{1-e^{\frac{1}{\epsilon}}} \right), \end{equation} where $\mathbf{x} = [x,y]^T$. The strength of the boundary layer is inversely proportional to $\epsilon$.
The $hp-$adaptation is initialized with a mesh comprised of $32$ elements and a constant polynomial order of $p_{initial} = 2$. Mesh complexity ($\mathcal{N}_{h,p}$) for the first adaptation cycle is computed as \begin{equation} \mathcal{N}_{h,p} = Ne \times \frac{\left(p_{initial}+1\right)\left(p_{initial}+2\right)}{2}\times \frac{3\sqrt{3}}{4}, \end{equation} where $Ne$ represents the number of elements in the mesh. Between each adaptation cycle, $\mathcal{N}_{h,p}$ is increased by $30 \%$. The choice of growth in $\mathcal{N}_{h,p}$ is arbitrary. A different choice in growth may result in different pre-asymptotic behavior but should produce a similar asymptotic result. \begin{figure}[h!] \begin{subfigure}[]{0.5\textwidth} \includegraphics[scale=0.22]{Data_hp/Boundary_layer⁩/MNS_0p5/Boundarylayerhp_1to10_eps_0p005_296el_6980ndof-eps-converted-to.pdf} \caption{} \end{subfigure} \hspace{0.2cm} \begin{subfigure}[]{0.5\textwidth} \includegraphics[scale=0.22]{Data_hp/Boundary_layer⁩/MNS_0p5/Boundarylayerhp_1to10_eps_0p005_296el_6980ndof_polyorder-eps-converted-to.pdf} \caption{} \end{subfigure} \caption{Boundary layer: (a) solution contour on an adapted mesh and (b) polynomial distribution on the same adapted mesh with $\epsilon = 0.005$ and 6980 degrees of freedom. } \label{fig:contourAndPdist} \end{figure} \begin{figure}[h!] 
\begin{subfigure}[b]{0.5\textwidth} \begin{tikzpicture}[scale=0.8] \begin{semilogyaxis}[xmin=5,xmax=50, ymin=1e-11,ymax=1,xlabel=\large{$\sqrt[3]{ndof}$},ylabel=\large{$||u-u_h||_{L^{2}(\Omega)}$},grid=major,legend style={at={(1,1)},anchor=north east,font=\tiny,rounded corners=2pt}] \addplot[color = blue,mark=square*] table[x= ndof, y=err_l2, col sep = comma] {Data_hp/Boundary_layer⁩/exp/L2_error_DPG_BL_0p005_p1_MN_0p5.txt}; \addplot [color = red,mark=square*] table[x= ndof, y=err_l2, col sep = comma] {Data_hp/Boundary_layer⁩/exp/L2_error_DPG_BL_0p005_p2_MN_0p5.txt}; \addplot [color = black,mark=square*] table[x= ndof, y=err_l2, col sep = comma] {Data_hp/Boundary_layer⁩/exp/L2_error_DPG_BL_0p005_p3_MN_0p5.txt}; \addplot [color = magenta,mark=square*] table[x= ndof, y=err_l2, col sep = comma] {Data_hp/Boundary_layer⁩/exp/L2_error_DPG_BL_0p005_p4_MN_0p5.txt}; \addplot [color = cyan,mark=square*] table[x= ndof, y=err_l2, col sep = comma] {Data_hp/Boundary_layer⁩/exp/L2_error_DPG_BL_0p005_p5_MN_0p5.txt}; \addplot [color =orange,mark=square*] table[x= ndof, y=err_l2, col sep = comma] {Data_hp/Boundary_layer⁩/exp/L2_error_BL_MNS_0p5_Hpnew.txt}; \legend{$p = 1$,$p = 2$,$p = 3$,$p = 4$,$p = 5$,$hp$} \end{semilogyaxis} \end{tikzpicture} \caption{} \end{subfigure} \begin{subfigure}[b]{0.5\textwidth} \begin{tikzpicture}[scale=0.8] \begin{semilogyaxis}[xmin=5,xmax=50, ymin=1e-11,ymax=1,xlabel=\large{$\sqrt[3]{ndof}$},ylabel=\large{$||U-U_h||_{E(\Omega)}$},grid=major,legend style={at={(1,1)},anchor=north east,font=\tiny,rounded corners=2pt}] \addplot[color = blue,mark=square*] table[x= ndof, y=EE, col sep = comma] {Data_hp/Boundary_layer⁩/exp/EE_error_DPG_BL_0p005_p1_MN_0p5.txt}; \addplot [color = red,mark=square*] table[x= ndof, y=EE, col sep = comma] {Data_hp/Boundary_layer⁩/exp/EE_error_DPG_BL_0p005_p2_MN_0p5.txt}; \addplot [color = black,mark=square*] table[x= ndof, y=EE, col sep = comma] {Data_hp/Boundary_layer⁩/exp/EE_error_DPG_BL_0p005_p3_MN_0p5.txt}; \addplot [color = 
magenta,mark=square*] table[x= ndof, y=EE, col sep = comma] {Data_hp/Boundary_layer⁩/exp/EE_error_DPG_BL_0p005_p4_MN_0p5.txt}; \addplot [color = cyan,mark=square*] table[x= ndof, y=EE, col sep = comma] {Data_hp/Boundary_layer⁩/exp/EE_error_DPG_BL_0p005_p5_MN_0p5.txt}; \addplot [color =orange,mark=square*] table[x= ndof, y=EE, col sep = comma] {Data_hp/Boundary_layer⁩/exp/EE_error_BL_MNS_0p5_Hpnew.txt}; \legend{$p = 1$,$p = 2$,$p = 3$,$p = 4$,$p = 5$,$hp$} \end{semilogyaxis} \end{tikzpicture} \caption{} \end{subfigure} \caption{Convergence plots of (a) $L^2$ error in $u_h$ and (b) energy error using scaled V-norm.} \label{convergence_BL_scaled_math_norm_hp} \end{figure} \begin{figure}[h!] \begin{subfigure}[b]{0.5\textwidth} \begin{tikzpicture}[scale=0.8] \begin{axis}[xmin=0,xmax=18, ymin=1,ymax=600,xlabel=\large{$Adaptations$},ylabel=\large{$Ne$},grid=major,legend style={at={(1,1)},anchor=north east,font=\tiny,rounded corners=2pt}] \addplot [color = red,mark=square*] table[x= adap, y=ne, col sep = comma] {Data_hp/Boundary_layer⁩/MNS_0p5/order_ne_adap_p1to10_0p1.txt}; \addplot [color = blue,mark=square*] table[x= adap, y=ne, col sep = comma] {Data_hp/Boundary_layer⁩/MNS_0p5/order_ne_adap_p1to10_0p01.txt}; \addplot [color = black,mark=square*] table[x= adap, y=ne, col sep = comma] {Data_hp/Boundary_layer⁩/MNS_0p5/order_ne_adap_p1to10_0p005.txt}; \legend{$\epsilon = 0.1$,$\epsilon = 0.01$,$\epsilon = 0.005$} \end{axis} \end{tikzpicture} \caption{} \end{subfigure} \begin{subfigure}[b]{0.5\textwidth} \begin{tikzpicture}[scale=0.8] \begin{axis}[xmin=0,xmax=18, ymin=1,ymax=15,xlabel=\large{$Adaptations$},ylabel=\large{$p_{avg}$},grid=major,legend style={at={(1,1)},anchor=north east,font=\tiny,rounded corners=2pt}] \addplot [color = red,mark=square*] table[x= adap, y=orderavg, col sep = comma] {Data_hp/Boundary_layer⁩/MNS_0p5/order_ne_adap_p1to10_0p1.txt}; \addplot [color =blue,mark=square*] table[x= adap, y=orderavg, col sep = comma] 
{Data_hp/Boundary_layer⁩/MNS_0p5/order_ne_adap_p1to10_0p01.txt}; \addplot [color = black,mark=square*] table[x= adap, y=orderavg, col sep = comma] {Data_hp/Boundary_layer⁩/MNS_0p5/order_ne_adap_p1to10_0p005.txt}; \legend{$\epsilon = 0.1$,$\epsilon = 0.01$,$\epsilon = 0.005$} \end{axis} \end{tikzpicture} \caption{} \end{subfigure} \caption{Evolution of (a) number of mesh elements and (b) average polynomial order with adaptations using the scaled V-norm at a fixed cost $\mathcal{N} = 3072$ for different values of $\epsilon$.} \label{hp_fixed_dof_ne_pavg} \end{figure} \begin{figure}[h!] \begin{subfigure}[b]{0.5\textwidth} \begin{tikzpicture}[scale=0.8] \begin{semilogyaxis}[xmin=5,xmax=30, ymin=1e-12,ymax=1,xlabel=\large{$\sqrt[3]{ndof}$},ylabel=\large{$||u-u_h||_{L^{2}(\Omega)}$},grid=major,legend style={at={(1,1)},anchor=north east,font=\tiny,rounded corners=2pt}] \addplot [color =blue,mark=square*] table[x= ndof, y=err_l2, col sep = comma] {Data_hp/Boundary_layer⁩/exp/L2_error_patchsolve_hp_1to20_DLS_128el_BL.txt}; \addplot [color =red,mark=square*] table[x= ndof, y=err_l2, col sep = comma] {Data_hp/Boundary_layer⁩/exp/L2_error_patchsolve_hp_1to20_DLS_randompolyorder_BL.txt}; \legend{$p_{initial} = 2$, $p_{initial} = Random$} \end{semilogyaxis} \end{tikzpicture} \caption{} \end{subfigure} \begin{subfigure}[b]{0.5\textwidth} \begin{tikzpicture}[scale=0.8] \begin{semilogyaxis}[xmin=5,xmax=30, ymin=1e-11,ymax=1,xlabel=\large{$\sqrt[3]{ndof}$},ylabel=\large{$||U-U_h||_{E(\Omega)}$},grid=major,legend style={at={(1,1)},anchor=north east,font=\tiny,rounded corners=2pt}] \addplot [color =blue,mark=square*] table[x= ndof, y=EE, col sep = comma] {Data_hp/Boundary_layer⁩/exp/EE_error_patchsolve_hp_1to20_DLS_128el_BL.txt}; \addplot [color =red,mark=square*] table[x= ndof, y=EE, col sep = comma] {Data_hp/Boundary_layer⁩/exp/EE_error_patchsolve_hp_1to20_DLS_randompolyorder_BL.txt}; \legend{$p_{initial} = 2$, $p_{initial} = Random$} \end{semilogyaxis} \end{tikzpicture} \caption{} 
\end{subfigure} \caption{Convergence plots of (a) $L^2$ error in $u_h$ and (b) energy error using scaled V-norm with different initial polynomial distributions.} \label{convergence_BL_randompolyorder} \end{figure} \begin{figure}[h!] \begin{subfigure}[b]{0.5\textwidth} \begin{tikzpicture}[scale=0.8] \begin{semilogyaxis}[xmin=5,xmax=30, ymin=1e-12,ymax=1,xlabel=\large{$\sqrt[3]{ndof}$},ylabel=\large{$||u-u_h||_{L^{2}(\Omega)}$},grid=major,legend style={at={(1,1)},anchor=north east,font=\tiny,rounded corners=2pt}] \addplot [color =red,mark=square*] table[x= ndof, y=err_l2, col sep = comma] {Data_hp/Boundary_layer⁩/exp/L2_error_BL_MNS_0p5_Hpnew.txt}; \addplot [color =blue,mark=square*] table[x= ndof, y=err_l2, col sep = comma] {Data_hp/Boundary_layer⁩/exp/L2_error_patchsolve_hp_1to20_DLS_128el_BL.txt}; \legend{$ne_{0} = 32$,$ne_{0} = 128$} \end{semilogyaxis} \end{tikzpicture} \caption{} \end{subfigure} \begin{subfigure}[b]{0.5\textwidth} \begin{tikzpicture}[scale=0.8] \begin{semilogyaxis}[xmin=5,xmax=30, ymin=1e-11,ymax=1,xlabel=\large{$\sqrt[3]{ndof}$},ylabel=\large{$||U-U_h||_{E(\Omega)}$},grid=major,legend style={at={(1,1)},anchor=north east,font=\tiny,rounded corners=2pt}] \addplot [color =red,mark=square*] table[x= ndof, y=EE, col sep = comma] {Data_hp/Boundary_layer⁩/exp/EE_error_BL_MNS_0p5_Hpnew.txt}; \addplot [color =blue,mark=square*] table[x= ndof, y=EE, col sep = comma] {Data_hp/Boundary_layer⁩/exp/EE_error_patchsolve_hp_1to20_DLS_128el_BL.txt}; \legend{$ne_{0} = 32$,$ne_{0} = 128$} \end{semilogyaxis} \end{tikzpicture} \caption{} \end{subfigure} \caption{Convergence plots of (a) $L^2$ error in $u_h$ and (b) energy error using scaled V-norm with different initial mesh.} \label{convergence_BL_diffmesh_hp} \end{figure} \Cref{fig:contourAndPdist} shows a contour plot of the solution obtained on an adapted mesh as well as the corresponding polynomial degree distribution. 
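As an illustration of the complexity bookkeeping described above, the following minimal Python sketch (not part of the authors' code; the function name is hypothetical) evaluates $\mathcal{N}_{h,p}$ for the initial mesh and applies the $30\%$ growth rule between adaptation cycles:

```python
import math

def mesh_complexity(ne, p):
    # N_{h,p} = Ne * (p+1)(p+2)/2 * 3*sqrt(3)/4 for ne triangles of uniform order p
    return ne * (p + 1) * (p + 2) / 2.0 * 3.0 * math.sqrt(3) / 4.0

# Initial configuration: 32 elements, p_initial = 2.
target = mesh_complexity(32, 2)

# Grow the target complexity by 30% between adaptation cycles.
targets = [target]
for _ in range(5):
    targets.append(1.3 * targets[-1])
```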
In~\cref{convergence_BL_scaled_math_norm_hp}, we present convergence results comparing the $h-$adaptation algorithm~\cite{CHAKRABORTY20221} with the proposed $hp-$adaptation algorithm. As $\mathcal{N}_{h,p}$ is increased, $hp-$adaptation converges faster than $h-$adaptation with any of the constant polynomial orders $p = 1$--$5$. For all convergence curves, we plot the error against $\sqrt[3]{ndof}$ to verify exponential convergence (${\Vert e \Vert}_{X} \approx C e^{-bN^{1/3}}$~\cite{Guo1986}). Initially, the adaptation is dominated by $h-$refinement; subsequently, once the boundary layer is resolved, $p-$refinement starts taking place in the boundary layers along with further $h-$refinements. The algorithm prefers higher-order polynomials in elements inside the boundary layer, whereas it prescribes $p = 2$ away from it. Indeed, away from the boundary layer the analytical solution is, up to machine precision, approximately quadratic, i.e., $u(\mathbf{x}) \approx xy$ \cite{Ar2021}. Since, under the assumption of shape-regular elements, one can show the equivalence of the scaled V-norm with the $L^2$ norm of the field variables~\cite{Demkowicz2011a}, we expect the polynomial distribution to reflect the local behavior of the solution variables. In the next numerical experiment, we perform the adaptation while keeping $\mathcal{N}_{h,p}$ constant. Here, we limit the maximum polynomial order in the $hp$ mesh to $p_{max}=10$; since $u \in C^{\infty}$, $p_{max}$ would otherwise increase indefinitely. Through this experiment, we intend to observe the interplay between $h-$ and $p-$refinements, which we measure by the average polynomial order ($p_{avg}$) in the mesh. The evolution of $p_{avg}$ over subsequent adaptations is shown in~\cref{hp_fixed_dof_ne_pavg} for different values of $\epsilon$.
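The exponential-convergence check above (fitting ${\Vert e \Vert}_{X} \approx C e^{-bN^{1/3}}$) amounts to verifying that the logarithm of the error is linear in $\sqrt[3]{ndof}$. A small sketch with synthetic data (the helper name is ours, not taken from the paper's code):

```python
import math

def fit_exponential_rate(ndofs, errors):
    # Least-squares fit of log(err) = log(C) - b * N^(1/3): a straight line
    # on the semilog-vs-cbrt(N) axes of the convergence plots indicates
    # exponential convergence; returns the decay rate b.
    xs = [n ** (1.0 / 3.0) for n in ndofs]
    ys = [math.log(e) for e in errors]
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
            sum((x - xbar) ** 2 for x in xs)
    return -slope  # b > 0 for an exponentially decaying error

# Synthetic data decaying exactly like C * exp(-0.8 * N^(1/3)):
ndofs = [1000, 3000, 8000, 20000]
errors = [2.0 * math.exp(-0.8 * n ** (1.0 / 3.0)) for n in ndofs]
```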
For $\epsilon = 0.005$, $h-$refinements dominate the initial adaptations; only after some initial $h-$refinement in the boundary layers do $p-$refinements start dominating alongside $h-$refinements. For larger $\epsilon$, the algorithm performs $p-$adaptation and $h-$adaptation simultaneously, as the boundary layers are then smooth enough that $p-$refinement does not require increased spatial resolution first. This interplay between the two adaptive processes is visible in the gradual decrease in the number of elements and the corresponding increase in $p_{avg}$ (see~\cref{hp_fixed_dof_ne_pavg}). In many adaptation strategies, initial conditions such as the initial mesh or the distribution of polynomial orders may affect the performance. In the proposed $hp-$adaptation algorithm, re-meshing permits arbitrary coarsening and refinement, which reduces the dependency of the current mesh on previous iterations. To demonstrate this, we present two instances of variations in the initial conditions. First, we use different initial meshes with the same initial polynomial order of approximation (see~\cref{convergence_BL_diffmesh_hp}). In the second instance, the same initial mesh is used, but with a random distribution of the initial orders of approximation (see~\cref{convergence_BL_randompolyorder}). We observe no deviation in the asymptotic behavior of the convergence plots, showing the robustness of the proposed methodology towards these perturbations. \subsection{Goal-oriented adaptation: Gaussian peak} Next, we present results showing the performance of the proposed $hp-$adaptation algorithm for goal-oriented adaptation. The primal problem is the same as that of~\cref{blhp}. We consider the solution-dependent (target) functional \begin{equation} J(u) = \int_\Omega j_{\Omega}(\mathbf{x}) u(\mathbf{x}) \, d\mathbf{x}, \label{volumetargethp} \end{equation} where \begin{equation} j_{\Omega}(\mathbf{x}) = e^{-\alpha \left({(x - x_c)}^2 + {(y - y_c)}^2\right)}.
\end{equation} The target is thus given by a weighted volume integral, where the weight is a Gaussian peak centered at $(x_c,y_c) = (0.99,0.5)$. We choose $\alpha = 1000$, leading to a strong localization of the peak (see~\cref{goalgaussian}). This leads to the dual problem \begin{align} -\beta \cdot {\nabla} \bar{\eta}-\epsilon{\nabla}^2 \bar{\eta} &= j_{\Omega}\qquad && \mathrm{in}\ \Omega = {(0,1)}^2, \label{adjstrngeqhp}\\ \bar{\eta} &= 0 && \mathrm{on}\ \partial \Omega, \label{adjbndhp} \end{align} where $\beta = {[1, \,1]}^T$ and $\epsilon > 0$. \begin{figure}[H] \begin{center} \includegraphics[scale=0.25]{Images/target_function_contour_3d_mod-eps-converted-to.pdf} \end{center} \caption{Contour showing $j_{\Omega}(\mathbf{x})$.} \label{goalgaussian} \end{figure} In~\cref{convergence_gaussian_peak_error_hp} and~\cref{convergence_gaussian_peak_dwr_hp}, we present convergence plots comparing $h-$adaptation \cite{CHAKRABORTY20221} to the proposed $hp-$adaptive method. We obtain exponential convergence for $hp-$adaptation, thus outperforming $h-$adaptation. We present a snapshot of one such adapted mesh in~\cref{adaptedmeshgaussianpeak}. Note that both $h-$ and $p-$refinements mainly take place in the right boundary layer. As previously observed in~\cref{blhp}, $h-$refinements precede $p-$refinements, following a similar trajectory. The difference here is that only the portion of the boundary layer that the algorithm deems necessary for resolving the target functional undergoes $hp-$refinement. The $hp-$convergence curves saturate near machine precision in~\cref{convergence_gaussian_peak_error_hp} and~\cref{convergence_gaussian_peak_dwr_hp}.
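The localization induced by $\alpha = 1000$ can be checked directly; a minimal sketch of the weight $j_{\Omega}$, with the constants as given above (function name hypothetical):

```python
import math

ALPHA, XC, YC = 1000.0, 0.99, 0.5

def j_omega(x, y):
    # Gaussian weight of the target functional J(u) = \int j_Omega(x) u(x) dx
    return math.exp(-ALPHA * ((x - XC) ** 2 + (y - YC) ** 2))

# At a distance of only 0.1 from the center the weight has already dropped
# by a factor exp(-10) ~ 4.5e-5, i.e. the peak is strongly localized.
```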
\begin{figure}[H] \begin{subfigure}[b]{0.5\textwidth} \begin{tikzpicture}[scale = 0.8] \begin{semilogyaxis}[xmin=3,xmax=30, ymin=1e-15,ymax=1,xlabel=\large{$\sqrt[3]{ndof}$},ylabel=\large{$\vert J(u) - J(u_h) \vert$},grid=major,legend style={at={(1,1)},anchor=north east,font=\tiny,rounded corners=2pt}] \addplot [color = blue,mark=square*,ultra thick] table[x=ndof, y=target_err,col sep = comma]{⁨Data_hp⁩/TargetBasedAdaptatioHp⁩/expconv/target_error_s_p_plus_1_gaussianpeak_eps_0p0005_alpha_1000_p1_exp.txt⁩}; \addplot [color = red,mark=square*,ultra thick] table[x=ndof, y=target_err,col sep = comma]{⁨Data_hp⁩/TargetBasedAdaptatioHp⁩/expconv/target_error_s_p_plus_1_gaussianpeak_eps_0p0005_alpha_1000_p2_exp.txt}; \addplot [color = black,mark=square*,ultra thick] table[x=ndof, y=target_err,col sep = comma]{⁨Data_hp⁩/TargetBasedAdaptatioHp/expconv/target_error_s_p_plus_1_gaussianpeak_eps_0p0005_alpha_1000_p3_exp.txt⁩}; \addplot [color = cyan,mark=square*,ultra thick] table[x=ndof, y=target_err,col sep = comma]{⁨Data_hp⁩/TargetBasedAdaptatioHp⁩/expconv/target_error_gaussian_peak_hp.txt⁩}; \legend{$P =1$,$P =2$,$P =3$,$hp$} \end{semilogyaxis} \end{tikzpicture} \caption{}\label{convergence_gaussian_peak_error_hp} \end{subfigure} \begin{subfigure}[b]{0.5\textwidth} \begin{tikzpicture}[scale = 0.8] \begin{semilogyaxis}[xmin=3,xmax=30, ymin=1e-15,ymax=1,xlabel=\large{$\sqrt[3]{ndof}$},ylabel=\large{$DWR$},grid=major,legend style={at={(1,1)},anchor=north east,font=\tiny,rounded corners=2pt}] \addplot [color = blue,mark=square*,ultra thick] table[x=ndof, y=DWR_est,col sep = comma]{⁨Data_hp⁩/TargetBasedAdaptatioHp⁩/expconv/target_error_s_p_plus_1_gaussianpeak_eps_0p0005_alpha_1000_p1_exp.txt⁩}; \addplot [color = red,mark=square*,ultra thick] table[x=ndof, y=DWR_est,col sep = comma]{⁨Data_hp⁩/TargetBasedAdaptatioHp⁩/expconv/target_error_s_p_plus_1_gaussianpeak_eps_0p0005_alpha_1000_p2_exp.txt}; \addplot [color = black,mark=square*,ultra thick] table[x=ndof, y=DWR_est,col sep = 
comma]{Data_hp/TargetBasedAdaptatioHp/expconv/target_error_s_p_plus_1_gaussianpeak_eps_0p0005_alpha_1000_p3_exp.txt}; \addplot [color = cyan,mark=square*,ultra thick] table[x=ndof, y=DWR_est,col sep = comma]{Data_hp/TargetBasedAdaptatioHp/expconv/target_error_gaussian_peak_hp.txt}; \legend{$P =1$,$P =2$,$P =3$,$hp$} \end{semilogyaxis} \end{tikzpicture} \caption{}\label{convergence_gaussian_peak_dwr_hp} \end{subfigure} \caption{Convergence plots for (a) error in the target functional and (b) dual weighted residual (DWR) using the scaled V-norm.} \label{convgaussianpeaksmoothing} \end{figure} Next, we perform a numerical experiment where we keep $\mathcal{N}_{h,p}$ fixed and then run the adaptations. This experiment examines the algorithm's capacity to distribute the DOFs optimally. It takes only a few adaptations to reach nearly machine precision in the error in the target functional. \begin{table}[h!] \captionsetup{justification=centering,margin=2cm} \centering \begin{tabular}{|c|c|c|c|} \hline Adaptation & Ndof & ${\vert J(u) - J(u_h) \vert}$ & $DWR$ \\ \hline 0 & 3072 & 1.3599e-04 & 2.55956e-05 \\ \hline 2 & 2698 & 2.62413e-09 & 1.16551e-09 \\ \hline 4 & 3271 & 1.2955e-11 & 1.69234e-11 \\ \hline 6 & 3045 & 2.92165e-12 & 2.71505e-12 \\ \hline 8 & 3276 & 3.98067e-14 & 1.31114e-13 \\ \hline \end{tabular} \caption{Adaptation vs. error for constant complexity using the scaled V-norm ($\mathcal{N}_{h,p} = \int_{\Omega} w(\mathbf{x}) d(\mathbf{x}) \, d\mathbf{x} = 3072.0$).} \label{fixed_cost_target_a} \end{table} \begin{figure}[h!]
\begin{center} \includegraphics[scale=0.22]{Data_hp/TargetBasedAdaptatioHp/gaussianpeak_eps0p005_hp_polynomialdist-eps-converted-to.pdf} \end{center} \caption{Polynomial distribution on an adapted mesh with 11513 degrees of freedom.} \label{adaptedmeshgaussianpeak} \end{figure} \subsection{Test case II: inverse tangent -- flux target} This numerical experiment compares $h-$adaptation~\cite{CHAKRABORTY20221} and the proposed $hp-$adaptation algorithm for a target given by a boundary integral. The governing PDE is the same as in the boundary layer test case (see~\cref{blhp}). Here, the source term $s(\mathbf{x})$ is chosen such that the exact solution is given by \begin{equation} u(\mathbf{x}) = \left( \tan^{-1}(\alpha(x - x_1)) + \tan^{-1}(\alpha (x_2 - x)) \right) \left( \tan^{-1}(\alpha(y - y_1)) + \tan^{-1}(\alpha (y_2 - y)) \right), \notag \end{equation} where $x_1 = y_1 = \frac{1}{3}$, $x_2 = y_2 = \frac{2}{3}$, $\alpha = 50$, and $\epsilon = 0.01$. The boundary condition is obtained from the exact solution. The target functional is given by \begin{equation} J({u}) = \int_{\partial \Omega} j_{\partial \Omega}(\mathbf{x}) {\nabla u} \cdot \mathbf{n} \, ds, \end{equation} where \begin{equation} j_{\partial \Omega} = \begin{cases} 1, & \mathbf{n} = \left( 1,0 \right) \\ 0, & \text{otherwise} \end{cases}. \end{equation} The dual problem has no volumetric source, and the weighting function $j_{\partial \Omega}(\mathbf{x})$ appears in the boundary condition for the dual problem: \begin{align} -\beta \cdot {\nabla} \bar{\eta}-\epsilon{\nabla}^2 \bar{\eta} &= 0 \qquad && \mathrm{in}\ \Omega = {(0,1)}^2, \label{adjstrngeqbhp}\\ \bar{\eta} &= j_{\partial \Omega} && \mathrm{on}\ \partial \Omega. \label{adjbndbhp} \end{align} In~\cref{adaptedmeshfluxhp}, we present the polynomial distribution of an $hp-$adapted mesh.
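For reference, the exact solution of this test case can be evaluated pointwise; a short sketch with the constants defined above (nothing here comes from the authors' implementation):

```python
import math

X1 = Y1 = 1.0 / 3.0
X2 = Y2 = 2.0 / 3.0
ALPHA = 50.0

def u_exact(x, y):
    # Product of two regularized "top-hat" profiles; each factor tends to
    # pi inside (1/3, 2/3) as alpha -> infinity, producing internal layers
    # of width ~ 1/alpha around x, y = 1/3 and 2/3.
    fx = math.atan(ALPHA * (x - X1)) + math.atan(ALPHA * (X2 - x))
    fy = math.atan(ALPHA * (y - Y1)) + math.atan(ALPHA * (Y2 - y))
    return fx * fy
```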
Since we are adapting to resolve the flux at $x = 1.0$ and the convection is in direction ${[1,1]}^T$, the majority of the $hp-$refinement takes place in the right lower diagonal half of the domain (Since the flux on the right boundary depends upon the inlet conditions of the bottom edge at $y=0$, the dual error estimate gives more weighting to the elements in this half.). In~\cref{convergence_flux_error_hp} and~\cref{convergence_flux_dwr_hp}, we present the convergence results. Again we observe exponential convergence in the case of $hp-$adaptations. \begin{figure}[h!] \begin{center} \includegraphics[scale=0.22]{Data_hp/TargetBasedAdaptatioHp/Polynomialdist_el549_nadarajah_square_jump-eps-converted-to.pdf} \end{center} \caption{Polynomial distribution on an adapted mesh with 15339 degrees of freedom.} \label{adaptedmeshfluxhp} \end{figure} \begin{figure}[h!] \begin{subfigure}[b]{0.5\textwidth} \begin{tikzpicture}[scale = 0.8] \begin{semilogyaxis}[xmin=3,xmax=40, ymin=1e-12,ymax=1,xlabel=\large{$\sqrt[3]{ndof}$},ylabel=\large{$\vert J(u) - J(u_h) \vert$},grid=major,legend style={at={(1,1)},anchor=north east,font=\tiny,rounded corners=2pt}] \addplot [color = blue,mark=square*,ultra thick] table[x=ndof, y=target_err,col sep = comma]{⁨Data⁩_hp/TargetBasedAdaptatioHp⁩/expconv/targetbasedadaptation_s_p_plus_1_squarejump_Nadarajah_regularized_alpha_50_rightbndflux_eps_0p01_p1.txt⁩}; \addplot [color = red,mark=square*,ultra thick] table[x=ndof, y=target_err,col sep = comma]{⁨Data⁩_hp/TargetBasedAdaptatioHp⁩/expconv/targetbasedadaptation_s_p_plus_1_squarejump_Nadarajah_regularized_alpha_50_rightbndflux_eps_0p01_p2.txt}; \addplot [color = black,mark=square*,ultra thick] table[x=ndof, y=target_err,col sep = comma]{⁨Data⁩_hp/TargetBasedAdaptatioHp/expconv/targetbasedadaptation_s_p_plus_1_squarejump_Nadarajah_regularized_alpha_50_rightbndflux_eps_0p01_p3.txt⁩}; \addplot [color = cyan,mark=square*,ultra thick] table[x=ndof, y=target_err,col sep = 
comma]{⁨Data⁩_hp/TargetBasedAdaptatioHp⁩/expconv/targetbasedadaptation_s_p_plus_1_squarejump_nadarajah_alpha50_hp.txt⁩}; \legend{$P =1$,$P =2$,$P =3$,$hp$} \end{semilogyaxis} \end{tikzpicture} \caption{}\label{convergence_flux_error_hp} \end{subfigure} \begin{subfigure}[b]{0.5\textwidth} \begin{tikzpicture}[scale = 0.8] \begin{semilogyaxis}[xmin=3,xmax=40, ymin=0.5e-12,ymax=1,xlabel=\large{$\sqrt[3]{ndof}$},ylabel=\large{$DWR$},grid=major,legend style={at={(1,1)},anchor=north east,font=\tiny,rounded corners=2pt}] \addplot [color = blue,mark=square*,ultra thick] table[x=ndof, y=DWR_est,col sep = comma]{⁨Data⁩_hp/TargetBasedAdaptatioHp⁩/expconv/targetbasedadaptation_s_p_plus_1_squarejump_Nadarajah_regularized_alpha_50_rightbndflux_eps_0p01_p1.txt⁩}; \addplot [color = red,mark=square*,ultra thick] table[x=ndof, y=DWR_est,col sep = comma]{⁨Data⁩_hp/TargetBasedAdaptatioHp⁩/expconv/targetbasedadaptation_s_p_plus_1_squarejump_Nadarajah_regularized_alpha_50_rightbndflux_eps_0p01_p2.txt}; \addplot [color = black,mark=square*,ultra thick] table[x=ndof, y=DWR_est,col sep = comma]{⁨Data⁩_hp/TargetBasedAdaptatioHp/expconv/targetbasedadaptation_s_p_plus_1_squarejump_Nadarajah_regularized_alpha_50_rightbndflux_eps_0p01_p3.txt⁩}; \addplot [color = cyan,mark=square*,ultra thick] table[x=ndof, y=DWR_est,col sep = comma]{⁨Data⁩_hp/TargetBasedAdaptatioHp⁩/expconv/targetbasedadaptation_s_p_plus_1_squarejump_nadarajah_alpha50_hp.txt⁩}; \legend{$P =1$,$P =2$,$P =3$,$hp$} \end{semilogyaxis} \end{tikzpicture} \caption{}\label{convergence_flux_dwr_hp} \end{subfigure} \caption{Convergence plots for (a) error in target functional and (b) dual weighted residual (DWR) using scaled V-norm.} \label{convnadarajahflux} \end{figure} \subsection{Test case III: L-shaped domain} Next, we present the classical L-shaped domain Poisson problem: \begin{equation} \begin{aligned} -{\nabla}^2 u &= s\quad && \mathrm{in}\ \Omega = {(-1,1)}^2 \setminus [0,1] \times [-1,0], \\ u &= g_D && \mathrm{on} \ \partial 
\Omega. \end{aligned} \end{equation} The source term $s(\mathbf{x})$ is chosen such that the exact solution is given by \begin{equation} u(\mathbf{x}) = r^{\frac{2}{3}}\sin\left(\frac{2}{3}\theta\right), \quad \text{where} \quad \theta = \tan^{-1}\left(\frac{y}{x}\right) \quad \text{and} \quad r = \sqrt{x^2 + y^2}. \end{equation} The boundary conditions are taken from the exact solution. This test case serves to demonstrate the performance of $hp-$adaptivity in the presence of a singularity, where the additional flexibility of varying the local order of approximation leads to dramatically improved results compared to $h-$only adaptation. For this example, uniform refinement is expected to achieve only $O(h^{\frac{4}{3}})$ convergence of the $L^2$ error in $u$ \cite{crsing}. In \cite{Yano2012}, it has been shown that higher-order convergence, i.e., order $p+1$ for $h-$adaptation, can be achieved using exponentially graded meshes. In \cite{Guo1986} and \cite{DaVeiga2018}, it has been shown that exponential convergence can be achieved with a sequence of geometrically refined $hp-$meshes in the presence of a corner singularity. However, such meshes are typically hand-crafted and require a priori information (polynomial and size distributions), contrary to an automatic mesh adaptation procedure. Generating meshes with optimal gradation in size and polynomial distribution is thus a challenging problem for any automatic mesh adaptation algorithm. In~\cref{convergence_scaledvnorm_Lshaped_plota} and~\cref{convergence_scaledvnorm_Lshaped_plotb}, we present the convergence results. We observe exponential convergence in the $L^2$ norm, the energy norm, the $L^{\infty}$ norm, and the $H^1$ norm and seminorm. One can observe that $hp-$adaptation completely outperforms $h-$adaptation.
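The singular behavior discussed above can be made concrete; a sketch of the exact solution, with the angle wrapped to $[0, 2\pi)$ so that the re-entrant corner sector $0 \le \theta \le 3\pi/2$ is covered (our convention for evaluating $\tan^{-1}(y/x)$, not necessarily the authors'):

```python
import math

def u_exact(x, y):
    # u = r^(2/3) * sin(2*theta/3): smooth away from the origin, but its
    # gradient blows up like r^(-1/3) at the re-entrant corner, which is
    # what limits uniform h-refinement to O(h^(4/3)) accuracy in L2.
    r = math.hypot(x, y)
    theta = math.atan2(y, x) % (2.0 * math.pi)
    return r ** (2.0 / 3.0) * math.sin(2.0 * theta / 3.0)
```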
\begin{figure}[H] \begin{subfigure}[b]{0.5\textwidth} \begin{tikzpicture}[scale=0.7] \begin{semilogyaxis}[xmin=3,xmax=35, ymin=1e-9,ymax=0.01,xlabel=\large{$\sqrt[3]{ndof}$},ylabel=\large{${ \Vert u-u_h \Vert}_{L^{2}(\Omega)}$},grid=major,legend style={at={(1,1)},anchor=north east,font=\tiny,rounded corners=2pt}] \addplot[color = blue,mark=square*] table[x= ndof, y=err_l2, col sep = comma] {Data_hp/L_shaped⁩/expconv/L2_error_Lshaped_Domain_MNS_0p5_p1_hp.txt}; \addplot [color = red,mark=square*] table[x= ndof, y=err_l2, col sep = comma] {Data_hp/L_shaped⁩/expconv/L2_error_Lshaped_Domain_MNS_0p5_p2_hp.txt}; \addplot [color = black,mark=square*] table[x= ndof, y=err_l2, col sep = comma] {Data_hp/L_shaped⁩/expconv/L2_error_Lshaped_Domain_MNS_0p5_p3_hp.txt}; \addplot [color = cyan,mark=square*] table[x= ndof, y=err_l2, col sep = comma] {Data_hp/L_shaped⁩/expconv/L2_error_Lshaped_MNS_0p5_Hpnew_1to20.txt}; \legend{$p = 1$,$p = 2$,$p = 3$,$hp$} \end{semilogyaxis} \end{tikzpicture} \caption{} \end{subfigure} \begin{subfigure}[b]{0.5\textwidth} \begin{tikzpicture}[scale=0.7] \begin{semilogyaxis}[xmin=3,xmax=35, ymin=1e-6,ymax=0.5,xlabel=\large{$\sqrt[3]{ndof}$},ylabel=\large{${ \Vert U-U_h \Vert}_{E(\Omega)}$},grid=major,legend style={at={(1,1)},anchor=north east,font=\tiny,rounded corners=2pt}] \addplot[color = blue,mark=square*] table[x= ndof, y=EE, col sep = comma] {Data_hp/L_shaped⁩/expconv/EE_error_Lshaped_Domain_MNS_0p5_p1_hp.txt}; \addplot [color = red,mark=square*] table[x= ndof, y=EE, col sep = comma] {Data_hp/L_shaped⁩/expconv/EE_error_Lshaped_Domain_MNS_0p5_p2_hp.txt}; \addplot [color = black,mark=square*] table[x= ndof, y=EE, col sep = comma] {Data_hp/L_shaped/expconv⁩/EE_error_Lshaped_Domain_MNS_0p5_p3_hp.txt}; \addplot [color =cyan,mark=square*] table[x= ndof, y=EE, col sep = comma] {Data_hp/L_shaped/expconv/EE_error_Lshaped_MNS_0p5_Hpnew_1to20.txt}; \legend{$p = 1$,$p = 2$,$p = 3$,$hp$} \end{semilogyaxis} \end{tikzpicture} \caption{} \end{subfigure} 
\caption{Convergence plots of (a) $L^2$ error in $u_h$ and (b) energy error using scaled V-norm.} \label{convergence_scaledvnorm_Lshaped_plota} \end{figure} \begin{figure}[h] \begin{subfigure}[b]{0.33\textwidth} \begin{tikzpicture}[scale=0.6] \begin{semilogyaxis}[xmin=3,xmax=35, ymin=1e-6,ymax=0.1,xlabel=\large{$\sqrt[3]{ndof}$},ylabel=\large{$ {\Vert u-u_h \Vert}_{L^{\infty}(\Omega)}$},grid=major,legend style={at={(1,1)},anchor=north east,font=\normalsize,rounded corners=2pt}] \addplot[color = blue,mark=square*] table[x= ndof, y=Linf,col sep = comma] {Data_hp/L_shaped⁩/expconv/L2_inf_error_Lshapeddomain_p1.txt}; \addplot [color = red,mark=square*] table[x= ndof, y=Linf, col sep = comma] {Data_hp/L_shaped⁩/expconv/L2_inf_error_Lshapeddomain_p2.txt}; \addplot [color = black,mark=square*] table[x= ndof, y=Linf, col sep = comma] {Data_hp/L_shaped⁩/expconv/L2_inf_error_Lshapeddomain_p3.txt}; \addplot [color = cyan,mark=square*] table[x= ndof, y=Linf, col sep = comma] {Data_hp/L_shaped⁩/expconv/all_error_L_shaped_Hp.txt}; \legend{$p = 1$,$p = 2$,$p = 3$,$hp$} \end{semilogyaxis} \end{tikzpicture} \caption{} \end{subfigure} \begin{subfigure}[b]{0.33\textwidth} \begin{tikzpicture}[scale=0.6] \begin{semilogyaxis}[xmin=3,xmax=35, ymin=5e-7,ymax=0.2,xlabel=\large{$\sqrt[3]{ndof}$},ylabel=\large{$ {\vert u-u_h \vert}_{H^{1}(\Omega)}$},grid=major,legend style={at={(1,1)},anchor=north east,font=\normalsize,rounded corners=2pt}] \addplot[color = blue,mark=square*] table[x= ndof, y=H1_Seminorm_error, col sep = comma] {Data_hp/L_shaped⁩/expconv/allerror_Lshaped_hp_p1.txt}; \addplot [color = red,mark=square*] table[x= ndof, y=H1_Seminorm_error, col sep = comma] {Data_hp/L_shaped⁩/expconv/allerror_Lshaped_hp_p2.txt}; \addplot [color = black,mark=square*] table[x= ndof, y=H1_Seminorm_error, col sep = comma] {Data_hp/L_shaped⁩/expconv/allerror_Lshaped_hp_p3.txt}; \addplot [color = cyan,mark=square*] table[x= ndof, y=H1_Seminorm_error, col sep = comma] 
{Data_hp/L_shaped⁩/expconv/all_error_L_shaped_Hp.txt}; \legend{$p = 1$,$p = 2$,$p = 3$,$hp$} \end{semilogyaxis} \end{tikzpicture} \caption{} \end{subfigure} \begin{subfigure}[b]{0.33\textwidth} \begin{tikzpicture}[scale=0.6] \begin{semilogyaxis}[xmin=3,xmax=35, ymin=5e-7,ymax=0.2,xlabel=\large{$\sqrt[3]{ndof}$},ylabel=\large{$ {\Vert u-u_h \Vert}_{H^1(\Omega)}$},grid=major,legend style={at={(1,1)},anchor=north east,font=\normalsize,rounded corners=2pt}] \addplot[color = blue,mark=square*] table[x= ndof, y=H1_error, col sep = comma] {Data_hp/L_shaped⁩/expconv/allerror_Lshaped_hp_p1.txt}; \addplot [color = red,mark=square*] table[x= ndof, y=H1_error, col sep = comma] {Data_hp/L_shaped⁩/expconv/allerror_Lshaped_hp_p2.txt}; \addplot [color = black,mark=square*] table[x= ndof, y=H1_error, col sep = comma] {Data_hp/L_shaped/expconv⁩/allerror_Lshaped_hp_p3.txt}; \addplot [color = cyan,mark=square*] table[x= ndof, y=H1_error, col sep = comma] {Data_hp/L_shaped/expconv⁩/all_error_L_shaped_Hp.txt}; \legend{$p = 1$,$p = 2$,$p = 3$,$hp$} \end{semilogyaxis} \end{tikzpicture} \caption{} \end{subfigure} \caption{Convergence plots of (a) $L^{\infty}$ error, (b) $H^1$ seminorm error and (c) $H^1$ error using scaled V-norm.} \label{convergence_scaledvnorm_Lshaped_plotb} \end{figure} In ~\cref{Lshapeddomain_polydist}, we present the polynomial distribution on an adapted mesh. Near the singularity, we observe the lowest order of approximation. As we move away from the singularity, the algorithm chooses higher polynomial orders due to the smoothness of primal variables. 
\begin{figure}[H] \begin{subfigure}[b]{0.5\textwidth} \includegraphics[scale = 0.24]{Data_hp/L_shaped⁩/zoomedinview/level0-eps-converted-to.pdf} \caption{} \end{subfigure} \begin{subfigure}[b]{0.5\textwidth} \includegraphics[scale = 0.24]{Data_hp/L_shaped⁩/zoomedinview/level1-eps-converted-to.pdf} \caption{} \end{subfigure} \begin{subfigure}[b]{0.5\textwidth} \includegraphics[scale = 0.24]{Data_hp/L_shaped⁩/zoomedinview/level2-eps-converted-to.pdf} \caption{} \end{subfigure} \begin{subfigure}[b]{0.5\textwidth} \includegraphics[scale = 0.24]{Data_hp/L_shaped⁩/zoomedinview/level3-eps-converted-to.pdf} \caption{} \end{subfigure} \caption{Zoomed-in view showing polynomial order with (a) Level $0$, (b) Level $1$, (c) Level $2$ and (d) Level $3$ magnification (with Level 0 denoting the least magnification and Level 3 denoting the most magnification).} \label{Lshapeddomain_polydist} \end{figure} Finally, we perform a numerical experiment where we keep the required DOFs fixed and adapt the mesh. This numerical experiment aims to demonstrate the reduction in error at a fixed cost. In ~\cref{Constant_DOF_Lshaped}, it can be observed that there is a reduction in both the $L^2$ error in $u$ and the energy error by three orders of magnitude. \begin{table}[h] \begin{center} \begin{tabular}{|c|c|c|c|c|c|} \hline Adap no: & 0 & 3 & 5 & 7 & 10 \\ \hline ${\Vert u - u_h \Vert}_{L^2(\Omega)}$ & $0.000157763$ & $9.87e-07$ & $1.34e-07$& $1.13e-07$ & $1.89e-07$ \\ \hline ${\Vert U - U_h \Vert}_{E(\Omega)}$ & $0.0251805$ & $8.05e-04$ & $1.24e-04$& $3.66e-05$ & $1.98e-05$ \\ \hline $p_{avg}$ & $2$ & $2.7$ & $4.01$& $5.57$ & $6.81$ \\ \hline $\text{Ndof}$ & $3072$ & $2848$ & $3185$& $3238$ & $3308$ \\ \hline \end{tabular} \end{center} \caption{Adaptation vs. error for constant complexity using scaled V-norm ($\mathcal{N}_{h,p} = \int_{\Omega} w(\mathbf{x}) d(\mathbf{x}) \, d\mathbf{x} = 3072.0$).} \label{Constant_DOF_Lshaped} \end{table}
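As a sanity check on the degree-of-freedom counts reported in the tables, the field DOFs of a broken (element-wise discontinuous) approximation on a triangular $hp$ mesh simply sum per-element contributions; a minimal sketch (illustrative only, and not the full DPG count, which may also include trace and flux unknowns):

```python
def ndof_scalar(p_orders):
    # Each triangle of order p carries (p+1)(p+2)/2 modes of a scalar
    # field; no modes are shared between elements in a broken space.
    return sum((p + 1) * (p + 2) // 2 for p in p_orders)

# e.g. 512 uniform p = 2 triangles would give 512 * 6 = 3072.
```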
\section{Introduction} \label{introduction} We have developed a model which aims to reproduce self-consistently observational data of many kinds related to cosmic-ray origin and propagation: direct measurements of nuclei, antiprotons, electrons and positrons, $\gamma$-rays, and synchrotron radiation. These data provide many independent constraints on any model and our approach is able to take advantage of this since it must be consistent with all types of observation. A numerical method and corresponding computer code (GALPROP) for the calculation of Galactic cosmic-ray propagation in 3D has been developed. The basic spatial propagation mechanisms are (momentum-dependent) diffusion and convection, while in momentum space energy loss and diffusive reacceleration are treated. Fragmentation and energy losses are computed using realistic distributions for the interstellar gas and radiation fields. The code is sufficiently flexible that it can be extended to include new aspects as required. The basic procedure is first to obtain a set of propagation parameters which reproduce the cosmic ray B/C\ and $^{10}$Be/$\,^9$Be\ ratios; the same propagation conditions are then applied to primary electrons. Gamma-ray and synchrotron emission are then evaluated with the same model. Our approach is not intended to perform detailed source abundance calculations with a large network of reactions, which is still best done with the path-length distribution approach (see e.g. \cite{DuVernois96} and references therein). Instead we use just the principal progenitors and weighted cross sections based on the observed cosmic-ray abundances (see \cite{Webber92}). The B/C\ data is used since it is the most accurately measured ratio covering a wide energy range and having well established cross sections. A re-evaluation of the halo size is desirable since new $^{10}$Be/$\,^9$Be\ data are available from Ulysses with better statistics than previously. 
Preliminary results were presented in Strong \& Moskalenko (1997) (hereafter \cite{SM87}) and full results for protons, Helium, positrons, and electrons in Moskalenko \& Strong (1998a) (hereafter \cite{MS98a}). Evaluation of the B/C\ and $^{10}$Be/$\,^9$Be\ ratios, evaluation of diffusion/convection and reacceleration models, and setting of limits on the halo size, as well as full details of the numerical method and energy losses for nucleons and electrons are summarized in Strong \& Moskalenko (1998) (hereafter \cite{SM98}). Evaluation of antiprotons in connection with diffuse Galactic $\gamma$-rays and the interstellar nucleon spectrum is given in Moskalenko, Strong, \& Reimer (1998) (hereafter \cite{MSR98}). For a recent discussion of diffuse Galactic continuum $\gamma$-rays\ and synchrotron emission in the context of this approach see Strong, Moskalenko, \& Reimer (1998) (hereafter \cite{SMR98}) and Moskalenko \& Strong (1998d). For interested users our model is available in the public domain on the World Wide Web ({\it http://www.gamma.mpe-garching.mpg.de/$\sim$aws/aws.html}).
\section{Motivation} It was pointed out many years ago (see \cite{Ginzburg80}, \cite{Berezinskii90}) that the interpretation of radioactive cosmic-ray nuclei is model-dependent and in particular that halo models lead to a quite different physical picture from homogeneous models. The latter simply yield a rather lower average matter density than the local Galactic hydrogen (e.g., \cite{SimpsonGarcia88,Lukasiak94a}), but do not lead to a meaningful estimate of the size of the confinement region, and the corresponding cosmic-ray `lifetime' is model-dependent. In such treatments the lifetime is combined with the grammage to yield an `average density'. For example Lukasiak et al. (1994a) find an `average density' of 0.28 cm$^{-3}$ compared to the local interstellar value of about 1 cm$^{-3}$, indicating a $z$-extent of less than 1 kpc compared to the several kpc found in diffusive halo models.
Our model includes spatial dimensions as a basic element, and so these issues are automatically addressed. The possible r\^ole of convection was shown by Jokipii (1976), and Jones (1979) pointed out its effect on the energy-dependence of the secondary/primary ratio. Recent papers give estimates for the halo size and limits on convection based on existing calculations (e.g., \cite{Webber92}, \cite{WebberSoutoul98}), and we attempt to improve on these models with a more detailed treatment. Previous approaches to the spatial nucleon propagation problem have been mainly analytical: Jones (1979), Freedman et al. (1980), Berezinskii et al. (1990), Webber, Lee, \& Gupta (1992), Bloemen et al. (1993), and Ptuskin \& Soutoul (1998) treated diffusion/convection models in this way. Bloemen et al. (1993) used the `grammage' formulation rather than the explicit isotope ratios, and their propagation equation implicitly assumes identical distributions of primary and secondary source functions. These papers did not attempt to fit the low-energy ($<1$ GeV/nucleon) B/C\ data (which we will show leads to problems) and also did not consider reacceleration. It is clear that an analytical treatment quickly becomes limited as soon as more realistic models are desired, and this is the main justification for the numerical approach. The case of electrons and positrons is even more intractable analytically, although fairly general cases have been treated (\cite{Lerche82}). Recently Porter \& Protheroe (1997) made use of a Monte-Carlo method for electrons, with propagation in the $z$-direction only. This method would be very time-consuming for 2- or 3-D cases. Our method, using numerical solution of the propagation equation, is a practical alternative.
Reacceleration has previously been handled using leaky-box calculations (\cite{Letaw93,SeoPtuskin94,HeinbachSimon95}); this has the advantage of allowing a full reaction network to be used (far beyond what is possible in the present approach), but suffers from the usual limitations of leaky-box models, especially concerning radioactive nuclei, which were not included in these treatments. Our simplified reaction network is necessary because of the added spatial dimensions, but we believe it is fully sufficient for our purpose, since we are not attempting to derive a comprehensive isotopic composition. A more complex reaction scheme would not change our conclusions. \section{Description of the models} \label{Description} The models are three dimensional with cylindrical symmetry in the Galaxy, and the basic coordinates are $(R,z,p)$, where $R$ is Galactocentric radius, $z$ is the distance from the Galactic plane, and $p$ is the total particle momentum. The distance from the Sun to the Galactic centre is taken as $R_\odot=8.5$ kpc. In the models the propagation region is bounded by $R=R_h$, $z=\pm z_h$ beyond which free escape is assumed. We take $R_h=30$ kpc. The range $z_h=1-20$ kpc is considered. For a given $z_h$ the diffusion coefficient as a function of momentum is determined by B/C\ for the case of no reacceleration; if reacceleration is assumed then the reacceleration strength (related to the Alfv\'en speed) is constrained by the energy-dependence of B/C. The spatial diffusion coefficient for the case of no reacceleration is taken as $D_{xx} = \beta D_0(\rho/\rho_0)^{\delta_1}$ below rigidity $\rho_0$, $\beta D_0(\rho/\rho_0)^{\delta_2}$ above rigidity $\rho_0$, where the factor $\beta$ ($= v/c$) is a natural consequence of a random-walk process. Since the introduction of a sharp break in $D_{xx}$ is a contrived procedure which is adopted just to fit B/C\ at all energies, we also consider the case $\delta_1=\delta_2$, i.e. 
no break, in order to investigate the possibility of reproducing the data in a physically simpler way. The convection velocity (in $z$-direction only) $V(z)$ is assumed to increase linearly with distance from the plane ($V>0$ for $z>0$, $V<0$ for $z<0$, and $dV/dz>0$ for all $z$); this implies a constant adiabatic energy loss. The linear form for $V(z)$ is suggested by cosmic-ray driven MHD wind models (e.g., \cite{Zirakashvili96}). We include diffusive reacceleration since some stochastic reacceleration is inevitable, and it provides a natural mechanism to reproduce the energy dependence of the B/C\ ratio without an {\it ad hoc} form for the diffusion coefficient (\cite{Letaw93,SeoPtuskin94,HeinbachSimon95,SimonHeinbach96}). The spatial diffusion coefficient for the case of reacceleration assumes a Kolmogorov spectrum of weak MHD turbulence so $D_{xx}=\beta D_0(\rho/\rho_0)^\delta$ with $\delta=1/3$ for all rigidities. For this case the momentum-space diffusion coefficient $D_{pp}$ is related to the spatial coefficient using the formula given by Seo \& Ptuskin (1994), and Berezinskii et al.\ (1990)
\begin{equation}
\label{2.1}
D_{pp}D_{xx} = {4 p^2 {v_A}^2\over 3\delta(4-\delta^2)(4-\delta) w}\ ,
\end{equation}
where $w$ characterises the level of turbulence, and is equal to the ratio of MHD wave energy density to magnetic field energy density. The free parameter in this relation is $v_A^2 /w$, where $v_A$ is the Alfv\'en speed; we take $w = 1$ (\cite{SeoPtuskin94}). The adopted distributions of atomic and molecular hydrogen and of ionized hydrogen are described in detail in \cite{SM98}; Fig.~\ref{fig1} shows the radial distribution of density in the Galactic plane. The He/H ratio of the interstellar gas is taken as 0.11 by number (see \cite{SM98} for a discussion). \placefigure{fig1} \begin{figure}[t!]
\centerline{ \psfig{file=fig1.ps,width=65mm,clip=} } \figcaption[fig1.ps]{The adopted radial distribution of atomic (HI), molecular (H$_2$) and ionized (HII) hydrogen at $z = 0$. \label{fig1} } \end{figure} The distribution of cosmic-ray sources is chosen to reproduce (after propagation) the cosmic-ray distribution determined by analysis of EGRET $\gamma$-ray\ data (\cite{StrongMattox96}). The form used is
\begin{equation}
\label{2.2}
q(R,z) = q_0 \left({R\over R_\odot}\right)^\eta e^{-\xi{R-R_\odot\over R_\odot} -{|z|\over 0.2{\rm\ kpc}}}\ ,
\end{equation}
where $q_0$ is a normalization constant, $\eta$ and $\xi$ are parameters; the $R$-dependence has the same parameterization as that used for SNR by Case \& Bhattacharya (1996, 1998). We compute models with their SNR distribution, but also with different parameters to better fit the $\gamma$-ray\ gradient. We apply a cutoff in the source distribution at $R = 20$ kpc since it is unlikely that significant sources are present at such large radii. The $z$-dependence of $q$ is nominal and reflects simply the assumed confinement of sources to the disk. The primary propagation is computed first, giving the primary distribution as a function of ($R, z, p$); then the secondary source function is obtained from the gas density and cross sections, and finally the secondary propagation is computed. The bremsstrahlung and inverse Compton $\gamma$-rays\ are computed self-consistently from the gas and radiation fields used for the propagation. The $\pi^0$-decay $\gamma$-rays\ are calculated explicitly from the proton and Helium spectra using Dermer's (1986) approach. The secondary nucleon and secondary $e^\pm$ source functions are computed from the propagated primary distribution and the gas distribution.
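As a concrete illustration of the diffusion coefficients defined above, the following sketch (illustrative Python, not part of the GALPROP code; the values of $D_0$, $\rho_0$, $\delta_1$, $\delta_2$ and $v_A$ are placeholders of plausible magnitude, not fitted parameters) evaluates the broken power-law $D_{xx}$ and the reacceleration relation of eq.~(\ref{2.1}):

```python
import math

M_P = 0.938  # proton rest mass, GeV

def beta_of(rho_GV):
    """v/c for a proton of rigidity rho in GV (p = rho for Z = 1)."""
    return rho_GV / math.sqrt(rho_GV**2 + M_P**2)

def D_xx(rho_GV, D0=5.8e28, rho0=3.0, delta1=-0.6, delta2=0.6):
    """Broken power-law spatial diffusion coefficient, cm^2 s^-1:
    D_xx = beta * D0 * (rho/rho0)^delta1 below rho0, ^delta2 above."""
    delta = delta1 if rho_GV < rho0 else delta2
    return beta_of(rho_GV) * D0 * (rho_GV / rho0) ** delta

def D_pp(p_GeV, Dxx_cm2s, vA_cms=2.0e6, delta=1.0/3.0, w=1.0):
    """Momentum-space diffusion coefficient from eq. (2.1):
    D_pp = 4 p^2 vA^2 / (3 delta (4 - delta^2)(4 - delta) w D_xx).
    With p in GeV and vA, D_xx in cgs this is GeV^2 s^-1."""
    return 4.0 * (p_GeV**2) * (vA_cms**2) / (
        3.0 * delta * (4.0 - delta**2) * (4.0 - delta) * w * Dxx_cm2s)
```

With $\delta_1 < 0$ the coefficient rises toward low rigidity below the break, which is the {\it ad hoc} behaviour required to fit B/C\ in the absence of reacceleration.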
\section{Propagation equation} \label{propagation_eq} The propagation equation we use is written in the form:
\begin{equation}
\label{A.1}
{\partial \psi \over \partial t} = q(\vec r, p) + \vec\nabla \cdot ( D_{xx}\vec\nabla\psi - \vec V\psi ) + {\partial\over\partial p}\, p^2 D_{pp} {\partial\over\partial p}\, {1\over p^2}\, \psi - {\partial\over\partial p} \left[\dot{p} \psi - {p\over 3} \, (\vec\nabla \cdot \vec V )\psi\right] - {1\over\tau_f}\psi - {1\over\tau_r}\psi\ ,
\end{equation}
where $\psi=\psi (\vec r,p,t)$ is the density per unit of total particle momentum, $\psi(p)dp = 4\pi p^2 f(\vec p)$ in terms of phase-space density $f(\vec p)$, $q(\vec r, p)$ is the source term, $D_{xx}$ is the spatial diffusion coefficient, $\vec V$ is the convection velocity, reacceleration is described as diffusion in momentum space and is determined by the coefficient $D_{pp}$, $\dot{p}\equiv dp/dt$ is the momentum loss rate, $\tau_f$ is the time scale for fragmentation, and $\tau_r$ is the time scale for radioactive decay. The numerical solution of the transport equation is based on a Crank-Nicolson (\cite{Press92}) implicit second-order scheme. The three spatial boundary conditions
\begin{equation}
\label{B.4}
\psi(R_h,z,p) = \psi(R,\pm z_h,p) = 0
\end{equation}
are imposed on each iteration. We use particle momentum as the kinematic variable since it greatly facilitates the inclusion of the diffusive reacceleration terms. The injection spectrum of primary nucleons is assumed to be a power law in momentum for the different species, $dq(p)/dp \propto p^{-\Gamma}$ for the injected {\it density} as expected for diffusive shock acceleration (e.g., \cite{Blandford80}). This corresponds to an injected {\it flux} per kinetic energy interval $dF(E_k)/dE_k \propto p^{-\Gamma}$, a form often used; the value of $\Gamma$ can vary with species.
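The Crank-Nicolson scheme mentioned above can be illustrated in one dimension. The following sketch (illustrative Python, not the GALPROP implementation; grid, diffusion coefficient and time step are arbitrary dimensionless choices) advances $\partial\psi/\partial t = D\,\partial^2\psi/\partial z^2$ with the free-escape boundary condition $\psi(\pm z_h)=0$ of eq.~(\ref{B.4}):

```python
import numpy as np

def crank_nicolson_step(psi, D, dz, dt):
    """One implicit Crank-Nicolson step of the 1-D diffusion equation:
    solve (I - r L) psi_new = (I + r L) psi_old, r = D dt / (2 dz^2),
    with Dirichlet (free-escape) boundaries psi = 0 at both ends."""
    n = len(psi)
    r = D * dt / (2.0 * dz**2)
    A = np.zeros((n, n))
    B = np.zeros((n, n))
    for i in range(1, n - 1):
        A[i, i - 1], A[i, i], A[i, i + 1] = -r, 1 + 2 * r, -r
        B[i, i - 1], B[i, i], B[i, i + 1] = r, 1 - 2 * r, r
    A[0, 0] = A[-1, -1] = 1.0   # boundary rows keep psi = 0
    return np.linalg.solve(A, B @ psi)

# Disk-like initial distribution diffusing toward the halo boundaries.
z = np.linspace(-1.0, 1.0, 101)
psi = np.exp(-(z / 0.2) ** 2)
psi[0] = psi[-1] = 0.0
for _ in range(200):
    psi = crank_nicolson_step(psi, D=1.0, dz=z[1] - z[0], dt=1e-3)
```

The scheme is unconditionally stable, so the time step is not limited by the usual explicit diffusion constraint; particles leak through the $\psi=0$ boundaries, mimicking escape from the halo.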
The injection spectrum for $^{12}$C and $^{16}$O was taken as $dq(p)/dp \propto p^{-2.35}$, for the case of no reacceleration, and $p^{-2.25}$ with reacceleration. These values are consistent with Engelmann et al.\ (1990) who give an injection index $2.23\pm0.05$. The same indices reproduce the observed proton and $^4$He spectra (\cite{MS98a}). For primary electrons, the injection spectrum can be adjusted to reproduce direct measurements or $\gamma$-ray\ and synchrotron data; all details are given in our series of papers (I--V). For secondary nucleons, the source term is $q(\vec r, p) = \beta c\, \psi_p (\vec r, p)[\sigma^{ps}_H (p) n_H (\vec r)+ \sigma^{ps}_{He}(p) n_{He}(\vec r)]$, where $\sigma^{ps}_H (p)$, $\sigma^{ps}_{He} (p)$ are the production cross sections for the secondary from the progenitor on H and He targets, $\psi_p$ is the progenitor density, and $n_H$, $n_{He}$ are the interstellar hydrogen and Helium number densities. To compute B/C\ and $^{10}$Be/$\,^9$Be\ it is sufficient for our purposes to treat only one principal progenitor and compute weighted cross sections based on the observed cosmic-ray abundances, which we took from Lukasiak et al. (1994b). Explicitly, for a principal primary with abundance $I_p$, we use for the production cross section $\overline\sigma^{ps} = \sum_i \sigma^{is} I_i/I_p$, where $\sigma^{is}$, $I_i$ are the cross sections and abundances of all species producing the given secondary. For the case of Boron, the Nitrogen progenitor is secondary but only accounts for $\approx$ 10\% of the total Boron production, so that the approximation of weighted cross sections is sufficient. For the fragmentation cross sections we use the formula given by Letaw, Silberberg, \& Tsao (1983). For the secondary production cross sections we use the Webber, Kish, \& Schrier (1990) parameterizations in the form of code obtained from the Transport Collaboration (\cite{Guzik97}). 
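The weighted-cross-section approximation described above can be made explicit with a short sketch (illustrative Python; the abundances and cross sections below are made-up numbers, not the Webber, Kish, \& Schrier parameterizations used in the paper):

```python
def weighted_cross_section(principal, species):
    """sigma_bar = sum_i sigma_i * I_i / I_principal:
    fold all progenitors into one principal primary,
    weighted by their observed cosmic-ray abundances."""
    I_p = species[principal]["abundance"]
    return sum(s["sigma"] * s["abundance"] for s in species.values()) / I_p

# Illustrative abundances (relative to C = 100) and production
# cross sections in mb for a single secondary species.
species = {
    "C12": {"abundance": 100.0, "sigma": 60.0},
    "O16": {"abundance": 90.0,  "sigma": 35.0},
    "N14": {"abundance": 25.0,  "sigma": 30.0},
}
sigma_bar = weighted_cross_section("C12", species)
```

The principal primary then carries the full secondary production in the propagation run, which is adequate as long as the minor progenitors (such as the $\approx$10\% Nitrogen contribution to Boron) are not themselves strongly propagated-dependent.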
For the important B/C\ ratio, we take the $^{12}$C, $^{16}$O $\to\, ^{10}$B, $^{10}$C, $^{11}$B, $^{11}$C cross sections from the fit to experimental data given by Heinbach \& Simon (1995). For electrons and positrons the same propagation equation is valid when the appropriate energy loss terms (ionization, bremsstrahlung, inverse Compton, synchrotron) are used. The energy loss formulae for these loss mechanisms are given in \cite{SM98}. \section{Evaluation of models} We consider the cases of diffusion+convection and diffusion+reacceleration, since these are the minimum combinations which can reproduce the key observations. In principle all three processes could be significant, and such a general model can be considered if independent astrophysical information or models, for example for a Galactic wind (e.g., \cite{Zirakashvili96,Ptuskin97}), were to be used. In our evaluations we use the B/C\ data summarized by Webber et al.\ (1996), from HEAO--3 and Voyager 1 and 2. The spectra were modulated to 500 MV appropriate to this data using the force-field approximation (\cite{GleesonAxford68}). We also show B/C\ values from Ulysses (\cite{DuVernois96}) for comparison, but since this has large modulation (600--1080 MV) we do not base conclusions on these values. We use the measured $^{10}$Be/$\,^9$Be\ ratio from Ulysses (\cite{Connell98}) and from Voyager--1,2, IMP--7/8, ISEE--3 as summarized by Lukasiak et al. (1994a). The source distribution adopted has $\eta=0.5$, $\xi=1.0$ in eq.~(\ref{2.2}) (apart from the cases with SNR source distribution). This form adequately reproduces the small observed $\gamma$-ray\ based gradient, for all $z_h$; a more detailed discussion is given in Section~\ref{CRgradients}. \subsection{Diffusion/convection models} The main parameters are $z_h$, $D_0$, $\delta_1$, $\delta_2$ and $\rho_0$ and $dV/dz$. We treat $z_h$ as the main unknown quantity, and consider values 1--20 kpc. 
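The force-field modulation (\cite{GleesonAxford68}) applied to the model spectra above amounts to an energy shift plus an intensity factor. A minimal sketch for protons (illustrative Python with a toy power-law interstellar spectrum, not the propagated model spectrum):

```python
import numpy as np

M_P = 0.938  # proton rest mass, GeV

def modulate(J_is, T_GeV, phi_MV, Z=1, A=1):
    """Force-field approximation: the modulated intensity at kinetic
    energy/nucleon T equals the interstellar intensity at T + Phi,
    scaled by the ratio of p^2 factors (Gleeson & Axford 1968)."""
    Phi = abs(Z) * phi_MV * 1e-3 / A          # potential in GeV/nucleon
    T_is = T_GeV + Phi                        # energy in interstellar space
    factor = T_GeV * (T_GeV + 2 * M_P) / (T_is * (T_is + 2 * M_P))
    return J_is(T_is) * factor

J_lis = lambda T: T ** -2.7                   # toy interstellar spectrum
T = np.logspace(-1, 2, 50)                    # 0.1 - 100 GeV
J_mod = modulate(J_lis, T, 500.0)             # 500 MV, as for the B/C data
```

As expected, the suppression is strong below $\sim$1 GeV and negligible well above 10 GeV, which is why the high-energy B/C\ comparison is insensitive to the adopted modulation level.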
For a given $z_h$ we show B/C\ for a series of models with different $dV/dz$. \placefigure{fig2} \begin{figure}[t!] \centerline{ \psfig{file=fig2.ps,width=65mm,clip=} } \figcaption[fig2.ps]{ B/C\ ratio for diffusion/convection models without break in diffusion coefficient, for $z_h$ = 3 kpc, $dV/dz$ = 0 (solid line), 5 (dotted line), and 10 km s$^{-1}$ kpc$^{-1}$ (dashed line). Solid line: interstellar ratio, shaded area: modulated to 300--500 MV. Data: vertical bars: HEAO-3, Voyager (\cite{Webber96}), filled circles: Ulysses (\cite{DuVernois96}: $\Phi$ = 600, 840, 1080 MV). \label{fig2} } \end{figure} Fig.~\ref{fig2} shows the case of no break, $\delta_1 = \delta_2$; for each $dV/dz$, the remaining parameters $D_0$, $\delta_1$ and $\rho_0$ are adjusted to fit the data as well as possible. It is clear that a {\it good} fit is {\it not} possible; the basic effect of convection is to reduce the variation of B/C\ with energy, and although this improves the fit at low energies the characteristic peaked shape of the measured B/C\ cannot be reproduced. Although modulation makes the comparison with the low energy Voyager data somewhat uncertain, Fig.~\ref{fig2} shows that the fit is unsatisfactory; the same is true even if we use a very low modulation parameter (300 MV) in an attempt to improve the fit. This modulation is near the minimum value for the entire Voyager 17 year period (cf. the average value of 500 MV; \cite{Webber96}). The failure to obtain a good fit is an important conclusion since it shows that the simple inclusion of convection cannot solve the problem of the low-energy falloff in B/C. \placefigure{fig3} \begin{figure}[t!]
\centerline{ \psfig{file=fig3a.ps,width=65mm,clip=} \psfig{file=fig3b.ps,width=65mm,clip=} } \figcaption[fig3a.ps,fig3b.ps]{ Predicted $^{10}$Be/$\,^9$Be\ ratio as function of (a) $z_h$ for $dV/dz$ = 0, 5, 10 km s$^{-1}$ kpc$^{-1}$, (b) $dV/dz$ for $z_h = 1 - 20$ kpc at 525 MeV/nucleon corresponding to the mean interstellar value for the Ulysses data (\cite{Connell98}); the Ulysses experimental limits are shown as horizontal dashed lines. The shaded regions show the parameter ranges allowed by the data. \label{fig3} } \end{figure} We can however force a fit to the data by allowing a break in $D_{xx}(p)$, i.e.\ $\delta_1 \ne \delta_2$. In the absence of convection, the falloff in B/C\ at low energies requires that the diffusion coefficient increases rapidly below $\rho_0 = 3$ GV ($\delta_1\sim -0.6$) reversing the trend from higher energies ($\delta_2 \sim +0.6$). Inclusion of the convective term does not reduce the size of the {\it ad hoc} break in the diffusion coefficient, in fact it rather exacerbates the problem by requiring a larger break\footnote{ Note that the dependence of interaction rate on particle velocity itself is not sufficient to cause the full observed low-energy falloff. In leaky-box treatments the low-energy behaviour is modelled by adopting a constant path-length below a few GeV/nucleon, without attempting to justify this physically. A convective term is often invoked, but our treatment shows that this alone is not sufficient. }. Fig.~\ref{fig3} summarizes the limits on $z_h$ and $dV/dz$, using the $^{10}$Be/$\,^9$Be\ ratio at the interstellar energy of 525 MeV/nucleon appropriate to the Ulysses data (\cite{Connell98}). For $z_h <4$ kpc, the predicted ratio is always too high, even for no convection; no convection is allowed for such $z_h$ values since this increases $^{10}$Be/$\,^9$Be\ still further. For $z_h \ge 4$ kpc agreement with $^{10}$Be/$\,^9$Be\ is possible provided $0 < dV/dz < 7$ km s$^{-1}$ kpc$^{-1}$. 
We conclude from Fig.~\ref{fig3}a that in the absence of convection $4{\rm\ kpc}<z_h < 12 {\rm\ kpc}$, and if convection is allowed the lower limit remains but no upper limit can be set. It is interesting that an upper as well as a lower limit on $z_h$ is obtained in the case of no convection, although $^{10}$Be/$\,^9$Be\ approaches asymptotically a constant value for large halo sizes and becomes insensitive to the halo dimension. From Fig.~\ref{fig3}b, $dV/dz < 7$ km s$^{-1}$ kpc$^{-1}$ and this figure places upper limits on the convection parameter for each halo size. These limits are rather strict, and a finite wind velocity is only allowed in any case for $z_h > 4$ kpc. Note that these results are not very sensitive to modulation since the predicted $^{10}$Be/$\,^9$Be\ is fairly constant from 100 to 1000 MeV/nucleon. \subsection{Diffusive reacceleration models \label{diff_reacc_models}} \placefigure{fig4} \begin{figure}[t!] \centerline{ \psfig{file=fig4.ps,width=65mm,clip=} } \figcaption[fig4.ps]{ B/C\ ratio for diffusive reacceleration models with $z_h$ = 5 kpc, $v_A$ = 0 (dotted), 15 (dashed), 20 (thin solid), 30 km s$^{-1}$ (thick solid). In each case the interstellar ratio and the ratio modulated to 500 MV is shown. Data: as Fig.~\ref{fig2}. \label{fig4} } \end{figure} \placefigure{fig5} \begin{figure}[t!] \centerline{ \psfig{file=fig5a.ps,width=65mm,clip=} \psfig{file=fig5b.ps,width=65mm,clip=} } \figcaption[fig5a.ps,fig5b.ps]{ $^{10}$Be/$\,^9$Be\ ratio for diffusive reacceleration models: (a) as function of energy for (from top to bottom) $z_h$ = 1, 2, 3, 4, 5, 10, 15 and 20 kpc; (b) as function of $z_h$ at 525 MeV/nucleon corresponding to the mean interstellar value for the Ulysses data (\cite{Connell98}); the Ulysses experimental limits are shown as horizontal dashed lines. Data points from Lukasiak et al. (1994a) (Voyager-1,2: square, IMP-7/8: open circle, ISEE-3: triangle) and Connell (1998) (Ulysses): filled circle.
\label{fig5} } \end{figure} The main parameters are $z_h$, $D_0$ and $v_A$. Again we treat $z_h$ as the main unknown quantity. The evaluation is simpler than for convection models since the number of free parameters is smaller. Fig.~\ref{fig4} illustrates the effect on B/C\ of varying $v_A$, from $v_A = 0$ (no reacceleration) to $v_A=30$ km s$^{-1}$, for $z_h= 5$ kpc. This shows how the initial form becomes modified to produce the characteristic peaked shape. Reacceleration models thus lead naturally to the observed peaked form of B/C, as pointed out by several previous authors (e.g., \cite{Letaw93,SeoPtuskin94,HeinbachSimon95}). Fig.~\ref{fig5} shows $^{10}$Be/$\,^9$Be\ for the same models, (a) as a function of energy for various $z_h$, (b) as a function of $z_h$ at 525 MeV/nucleon corresponding to the Ulysses measurement. Comparing with the Ulysses data point, we conclude that $4{\rm\ kpc} <z_h < 12$ kpc. Again the result is not very sensitive to modulation since the predicted $^{10}$Be/$\,^9$Be\ is fairly constant from 100 to 1000 MeV/nucleon. Energy losses attenuate the flux of stable nuclei much more than radioactive nuclei, and hence lead to an increase in $^{10}$Be/$\,^9$Be. Clearly if losses are ignored the predicted ratio will be too low and the derived value of $z_h$ will be too small since $z_h$ will have to be reduced to fit the observations. Our results on the halo size can be compared with those of other studies: $z_h \ge7.8$ kpc (\cite{Freedman80}), $ z_h \le3$ kpc (\cite{Bloemen93}), and $z_h \le4$ kpc (\cite{Webber92}). Lukasiak et al. (1994a) found $1.9{\rm\ kpc} < z_h < 3.6$ kpc (for no convection) based on Voyager Be data and using the Webber, Lee, \& Gupta (1992) models. We believe our new limits to be an improvement, first because of the improved Be data from Ulysses, second because of our treatment of energy losses (see Section~\ref{diff_reacc_models}) and generally more realistic astrophysical details in our model. 
Recently, Webber \& Soutoul (1998), Ptuskin \& Soutoul (1998) have obtained $z_h= 2-4$ kpc and $4.9_{-2}^{+4}$ kpc, respectively, in agreement with our results. \section{Cosmic-ray gradients} \label{CRgradients} \placefigure{fig6} \begin{figure}[t!] \centerline{ \psfig{file=fig6a.ps,width=65mm,clip=} \psfig{file=fig6b.ps,width=65mm,clip=} } \figcaption[fig6a.ps,fig6b.ps]{ {\it Left panel}: Radial distribution of 3 GeV protons at $z = 0$, for diffusive reacceleration model with halo sizes $z_h = 1$, 3, 5, 10, 15, and 20 kpc (solid curves). The source distribution is that for SNR given by Case \& Bhattacharya (1996), shown as a dashed line. The cosmic-ray distribution deduced from EGRET $>$100 MeV $\gamma$-rays\ (\cite{StrongMattox96}) is shown as the histogram. {\it Right panel}: Radial distribution of 3 GeV protons at $z = 0$, for diffusive reacceleration model with various halo sizes $z_h = 1$, 3, 5, 10, 15, and 20 kpc (solid curves). The source distribution used is shown as a dashed line. It was adopted to reproduce the cosmic-ray distribution deduced from EGRET $>$100 MeV $\gamma$-rays\ (\cite{StrongMattox96}) which is shown as the histogram. \label{fig6} } \end{figure} An important constraint on any model of cosmic-ray propagation is provided by $\gamma$-ray\ data which give information on the radial distribution of cosmic rays in the Galaxy. For a given source distribution, a large halo will give a smaller cosmic-ray gradient. It is generally believed that supernova remnants (SNR) are the main sources of cosmic rays (see \cite{Webber97} for a recent review), but unfortunately the distribution of SNR is poorly known due to selection effects. Nevertheless it is interesting to compare quantitatively the effects of halo size on the gradient for a plausible SNR source distribution. For illustration we use the SNR distribution from Case \& Bhattacharya (1996), which is peaked at $R = 4 - 5$ kpc and has a steep falloff towards larger $R$.
Fig.~\ref{fig6} (left panel) shows the effect of halo size on the resulting radial distribution of 3 GeV cosmic-ray protons, for the reacceleration model. For comparison we show the cosmic-ray distribution deduced by model-fitting to EGRET $\gamma$-ray\ data ($>100$ MeV) from Strong \& Mattox (1996), which is dominated by the $\pi^0$-decay component generated by GeV nucleons; the analysis by Hunter et al. (1997), based on a different approach, gives a similar result. The predicted cosmic-ray distribution using the SNR source function is too steep even for large halo sizes; in fact the halo size has a relatively small effect on the distribution. Other related distributions such as pulsars (\cite{Taylor93}, \cite{Johnston94}) have an even steeper falloff. Only for $z_h = 20$ kpc does the gradient approach that observed, and in this case the combination of a large halo and a slightly less steep SNR distribution could give a satisfactory fit. For diffusion/convection models the situation is similar, with more convection tending to make the gradient follow more closely the sources. A larger halo ($z_h \gg 20$ kpc), apart from being excluded by the $^{10}$Be analysis presented here, would in fact not improve the situation much since Fig.~\ref{fig6} shows that the gradient approaches an asymptotic shape which hardly changes beyond a certain halo size. This is a consequence of the nature of the diffusive process, which even for an unlimited propagation region still retains the signature of the source distribution. Based on these results we have to conclude, in the context of the present models, that the distribution of sources is not that expected from the (highly uncertain: see \cite{Green91}) distribution of SNR. This conclusion is similar to that previously found by others (\cite{Webber92,Bloemen93}). 
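The empirical source distribution discussed here is just eq.~(\ref{2.2}) with the adopted parameters $\eta=0.5$, $\xi=1.0$; a small sketch makes the mild radial gradient explicit (illustrative Python; the normalization $q_0$ is set to 1, and this is not the GALPROP implementation):

```python
import math

R_SUN = 8.5  # Galactocentric radius of the Sun, kpc

def q(R, z, eta=0.5, xi=1.0, q0=1.0):
    """Cosmic-ray source density of eq. (2.2), with the R = 20 kpc cutoff:
    q = q0 (R/R_sun)^eta exp(-xi (R - R_sun)/R_sun) exp(-|z| / 0.2 kpc)."""
    if R > 20.0:
        return 0.0
    radial = (R / R_SUN) ** eta * math.exp(-xi * (R - R_SUN) / R_SUN)
    return q0 * radial * math.exp(-abs(z) / 0.2)
```

With these parameters the density at $R=4$ kpc exceeds that at $R=12$ kpc by well under a factor of two, a much flatter profile than the Case \& Bhattacharya SNR distribution.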
In view of the difficulty of deriving the SNR distribution this is perhaps not a serious shortcoming; if SNR are indeed cosmic-ray sources then it is possible that the $\gamma$-ray\ analysis gives the best estimate of their Galactic distribution. Therefore in our standard model we have obtained the source distribution empirically by requiring consistency with the high energy $\gamma$-ray\ results. Alternatively it is possible that the diffusion is not isotropic but occurs preferentially in the radial direction, so smoothing the source distribution more effectively. This possibility will be addressed in future work. Fig.~\ref{fig6} (right panel) shows the source distribution adopted in the present work, and the resulting 3 GeV proton distribution, again compared to that deduced from $\gamma$-rays. The gradients are now consistent, especially considering that some systematic effects, due for example to unresolved $\gamma$-ray\ sources, are present in the $\gamma$-ray\ based distribution. \section{Interstellar positrons and antiproton spectra} The positron and antiproton fluxes reflect the proton and Helium spectra throughout the Galaxy and thus provide an essential check on propagation models and also on the interpretation of diffuse $\gamma$-ray\ emission (\cite{MSR98}). Secondary positrons and antiprotons in Galactic cosmic rays are produced in collisions of cosmic-ray particles with interstellar matter\footnote{ Secondary origin of cosmic-ray antiprotons is basically accepted, though some other exotic contributors such as, e.g., neutralino annihilation (\cite{Bottino98}) are also discussed. }. These are an important diagnostic for models of cosmic-ray propagation and provide information complementary to that provided by secondary nuclei. However, unlike secondary nuclei, antiprotons reflect primarily the propagation history of the protons, the main cosmic-ray component.
In our model the proton and Helium spectra are computed as a function of $(R,z,p)$ by the propagation code. The injection spectrum is adjusted to give a good fit to the locally measured spectrum, normalizing at 10 GeV/nucleon. For the injection spectra of protons, we find $\Gamma=2.15$ reproduces the observed spectra in the case of no reacceleration, and $\Gamma=2.25$ with reacceleration. We find it is necessary to use slightly steeper (0.2 in the index) injection spectra for Helium nuclei in order to fit the observed spectra in the 1--100 GeV range of interest for positron production. The spectra fit up to about 100 GeV beyond which the Helium spectrum without reacceleration becomes too steep and the proton spectrum with reacceleration too flat; these deviations are of no consequence for the positron and antiproton calculations. Our calculations of the interstellar antiproton spectra and $\bar{p}/p$ ratio for these spectra are shown in Fig.~\ref{fig7}. The computed antiproton spectrum is divided by the same interstellar proton spectrum, and the ratio is modulated to 750 MV. The corresponding ratios are shown on the right panel. We have performed the same calculations for models with and without reacceleration and the results differ only in details. As seen, our result agrees well with the calculations of Simon et al.\ (1998), showing that our treatment of the production cross-sections is adequate (for the details of the cross sections see \cite{MSR98}). \placefigure{fig7} \begin{figure}[t!] \centerline{ \psfig{file=fig7a.ps,width=65mm,clip=} \psfig{file=fig7b.ps,width=65mm,clip=} } \caption[fig7a.ps,fig7b.ps]{ {\it Left panel:} Interstellar nucleon and antiproton spectra as calculated in nonreacceleration models (thin lines) and models with reacceleration (thick lines). Proton spectra consistent with the local one are shown by the solid lines, hard spectra are shown by the dashed lines. The local spectrum as measured by IMAX (\cite{Menn97}) is shown by dots.
{\it Right panel:} $\bar{p}/p$ ratio for different ambient proton spectra. Lines are coded as on the left. The ratio is modulated with $\Phi=750$ MV. Calculations of Simon et al.\ (1998) are shown by the dotted lines. Data: see references in \cite{MSR98}. \label{fig7}} \end{figure} \placefigure{fig8} \begin{figure}[t!] \centerline{ \psfig{file=fig8.ps,width=65mm,clip=} } \figcaption[fig8.ps]{ Spectra of secondary positrons for `conventional' (thin line) and hard (dashes) nucleon spectra (no reacceleration). Thick line: `conventional' case with reacceleration. Data: Barwick et al.\ (1998). \label{fig8}} \end{figure} Fig.~\ref{fig8} shows the computed secondary positron spectra for the cases without and with reacceleration. Our predictions are compared with recent absolute measurements above a few GeV where solar modulation is small (\cite{Barwick98}), and the agreement is satisfactory in both cases; this comparison has the advantage of being independent of the electron spectrum, unlike the positron/electron ratio which was the main focus of \cite{MS98a}. \section{Probes of the interstellar nucleon spectrum} \label{positrons_pbar} Diffuse Galactic $\gamma$-ray\ observations have been interpreted as requiring a harder average nucleon spectrum in interstellar space than that observed directly (\cite{Hunter97}, \cite{Gralewicz97}, \cite{Mori97}, \cite{MS98b},c, see also Section \ref{gammarays}). A sensitive test of the interstellar nucleon spectra is provided by secondary antiprotons and positrons. Because they are secondary, they reflect the {\it large-scale} nucleon spectrum independent of local irregularities in the primaries. We consider a case which matches the $\gamma$-ray\ data (Fig.~\ref{fig9}) at the cost of a much harder proton spectrum than observed.
The dashed lines in Fig.~\ref{fig7} (right) show the $\bar{p}/p$ ratio for the hard proton spectrum (with and without reacceleration); the ratio is still consistent with the data at low energies but rapidly increases toward higher energies and becomes $\sim$4 times higher at 10 GeV. Up to 3 GeV it does not conflict with the data with their very large error bars. It is however larger than the point at 3.7--19 GeV (\cite{Hof96}) by about $5\sigma$. Clearly we cannot conclude definitively on the basis of this one point\footnote{ We do not consider here the older $\bar{p}$ measurement of Golden et al.\ (1984a) because the flight of the early instrument in 1979 was repeated in 1991 with significantly improved instrument and analysis techniques (see \cite{Hof96} and a discussion therein). }, but it does indicate the sensitivity of this test. In view of the sharply rising ratio in the hard-spectrum scenario it seems unlikely that the data could be fitted in this case even with some re-scaling due to propagation uncertainties. More experiments are planned (see \cite{MSR98} for a summary) and these should allow us to set stricter limits on the nucleon spectra including less extreme cases than considered here, and to constrain better the interpretation of $\gamma$-rays. \placefigure{fig9} \begin{figure}[t!] \centerline{ \psfig{file=fig9a.ps,width=65mm,clip=} \psfig{file=fig9b.ps,width=65mm,clip=} } \caption[fig9a.ps,fig9b.ps]{ Gamma-ray energy spectrum of the inner Galaxy ($300^\circ \le l\le 30^\circ$, $|b|\le 5^\circ$) compared with our model calculations. Data: EGRET (\cite{StrongMattox96}), COMPTEL (\cite{Strongetal98}), OSSE ($l=0, 25^\circ$: \cite{Kinzer97}). {\it Left panel:} Model with `conventional' nucleon and electron spectra. Also shown are the contributions of individual components: bremsstrahlung, inverse Compton, and $\pi^0$-decay. {\it Right panel:} The same compared to the model with the {\it hard nucleon} spectrum (no reacceleration).
\label{fig9}} \end{figure} Positrons also provide a good probe of the nucleon spectrum, but are more affected by energy losses and propagation uncertainties. Fig.~\ref{fig8} shows, in addition to the normal case, the positron flux resulting from a hard nucleon spectrum. The predicted flux is a factor of 4 above the Barwick et al.\ (1998) measurements and hence provides further evidence against the `hard nucleon spectrum' hypothesis. \section{Diffuse Galactic continuum gamma rays} \label{gammarays} \placefigure{fig10} \begin{figure}[t!] \centerline{ \psfig{file=fig10.ps,width=65mm,clip=} } \figcaption[fig10.ps]{ Electron spectra at $R_\odot = 8.5$ kpc in the plane, for `conventional' (solid line), and hard electron spectrum models without (dashes), and with (dots) low-energy upturn. Data (direct measurements): Taira et al.\ (1993) (vertical lines), Golden et al.\ (1984b, 1994) (shaded areas), Ferrando et al.\ (1996) (small diamonds), Barwick et al.\ (1998) (large diamonds). \label{fig10}} \end{figure} We can also use our model to study the diffuse $\gamma$-ray\ emission from the Galaxy. Recent results from both COMPTEL and EGRET indicate that inverse Compton (IC) scattering is a more important contributor to the diffuse emission than previously believed. COMPTEL results (\cite{Strongetal97}) for the 1--30 MeV range show a latitude distribution in the inner Galaxy which is broader than that of HI and H$_2$, so that bremsstrahlung of electrons on the gas does not appear adequate and a more extended component such as IC is required. The broad distribution is the result of the large $z$-extent of the interstellar radiation field\footnote{ We have made a new calculation of the interstellar radiation field (\cite{SMR98}) based on stellar population models and IRAS and COBE data. } which can interact with cosmic-ray electrons up to several kpc from the plane.
At much higher energies, the puzzling excess in the EGRET data above 1 GeV relative to that expected for $\pi^0$-decay has been suggested to originate in IC scattering from a hard interstellar electron spectrum (e.g., \cite{PohlEsposito98}). Fig.~\ref{fig9} (left) shows the $\gamma$-ray\ spectrum of the inner Galaxy for a `conventional' case which matches the directly measured electron and nucleon spectra and is consistent with synchrotron spectral index data (\cite{MS98c}, \cite{SMR98}). Fig.~\ref{fig10} shows electron spectra at $R_\odot = 8.5$ kpc in the disk for this model. It fits the observed $\gamma$-ray\ spectrum only in the range 30 MeV -- 1 GeV. Fig.~\ref{fig9} (right) shows the case of $\pi^0$-decay $\gamma$-rays\ from a hard nucleon spectrum (but still the `conventional' electron spectrum). This can improve the fit above 1 GeV but the high-energy antiproton and positron data probably exclude the hypothesis that the local nucleon spectrum differs significantly from the Galactic average (see Section \ref{positrons_pbar}). \placefigure{fig11} \begin{figure}[t!] \centerline{ \psfig{file=fig11a.ps,width=65mm,clip=} \psfig{file=fig11b.ps,width=65mm,clip=} } \figcaption[fig11a.ps,fig11b.ps]{ Distributions of 1--2 GeV $\gamma$-rays\ computed for a hard electron spectrum (reacceleration model) as compared to EGRET data (Cycles 1--4, point sources removed, see \cite{SMR98}). Contribution of various components is shown as calculated in our model. {\it Left panel:} Latitude distribution ($330^\circ <l <30^\circ$). {\it Right panel:} Longitude distribution for $|b|<5^\circ$. \label{fig11}} \end{figure} We thus consider the `hard electron spectrum' alternative. The electron injection spectral index is taken as $-1.7$ (with reacceleration), which after propagation provides consistency with radio synchrotron data (a crucial constraint).
Following Pohl \& Esposito (1998), for this model we do {\it not} require consistency with the locally measured electron spectrum above 10 GeV since the rapid energy losses cause a clumpy distribution so that this is not necessarily representative of the interstellar average. For this case, the interstellar electron spectrum deviates strongly from that locally measured as illustrated in Fig.~\ref{fig10}. Because of the increased IC contribution at high energies, the predicted $\gamma$-ray\ spectrum can reproduce the overall intensity from 30 MeV -- 10 GeV but the detailed shape above 1 GeV is still problematic. Further refinement of this scenario is presented in \cite{SMR98}. Fig.~\ref{fig11} shows the model latitude and longitude $\gamma$-ray\ distributions for the inner Galaxy for 1--2 GeV, convolved with the EGRET point-spread function, compared to EGRET Phase 1--4 data (with known point sources subtracted). It shows that a model with a large IC component can indeed reproduce the data. The latitude distribution here is not as wide as at low energies owing to the rapid energy losses of the electrons, so that an observational distinction between a gas-related $\pi^0$-component from a hard nucleon spectrum and the IC model does not seem possible on the basis of $\gamma$-rays alone, but requires also other tests such as consistency with antiproton and positron data (see Section~\ref{positrons_pbar}). \placefigure{fig12} \begin{figure}[t!] \centerline{ \psfig{file=fig12a.ps,width=65mm,clip=} \psfig{file=fig12b.ps,width=65mm,clip=} } \caption[fig12a.ps,fig12b.ps]{ $\gamma$-ray\ spectrum of inner Galaxy compared to models with a hard electron spectrum without (left) and with low-energy upturn (right). Data as in Fig.~\ref{fig9}. \label{fig12}} \end{figure} None of these models fits the $\gamma$-ray\ spectrum below $\sim$30 MeV as measured by the Compton Gamma-Ray Observatory (Fig.~\ref{fig12} left).
Fitting the low-energy part as diffuse emission (Fig.~\ref{fig12} right) without violating synchrotron constraints (\cite{SMR98}) requires a rapid upturn in the cosmic-ray electron spectrum below 200 MeV (e.g., as in Fig.~\ref{fig10}). However, in view of the energetics problems (\cite{Skiboetal97}), a population of unresolved sources seems more probable and would be the natural extension of the low-energy plane emission seen by OSSE (\cite{Kinzer97}) and GINGA (\cite{Yamasaki97}). \section{Conclusions} Our propagation model has been used to study several areas of high-energy astrophysics. We believe that combining information from classical cosmic-ray studies with $\gamma$-ray\ and other data leads to tighter constraints on cosmic-ray origin and propagation. We have shown that simple diffusion/convection models have difficulty in accounting for the observed form of the B/C\ ratio without special assumptions chosen to fit the data, and do not obviate the need for an {\it ad hoc} form for the diffusion coefficient. On the other hand we confirm the conclusion of other authors that models with reacceleration account naturally for the energy dependence over the whole observed range. Combining these results points rather strongly in favour of the reacceleration picture. We take advantage of the recent Ulysses Be measurements to obtain estimates of the halo size. Our limits on the halo height are $4{\rm\ kpc} < z_h < 12$ kpc. Our new limits should be an improvement on previous estimates because of the more accurate Be data, our treatment of energy losses, and the inclusion of more realistic astrophysical details (such as, e.g., the gas distribution) in our model. The gradient of protons derived from $\gamma$-rays\ is smaller than expected for SNR sources, and we therefore adopt a flatter source distribution in order to meet the $\gamma$-ray\ constraints. This may just reflect the uncertainty in the SNR distribution.
The calculated positron and antiproton fluxes are consistent with the most recent measurements. The $\bar{p}/p$ data point above 3 GeV and positron flux measurements seem to rule out the hypothesis that the local cosmic-ray nucleon spectrum differs significantly from the Galactic average (by implication adding support to the `hard electron' alternative), but confirmation of this conclusion must await more accurate antiproton data at high energies. Gamma-ray data suggest that the interstellar electron spectrum is harder than that locally measured, but this remains to be confirmed by detailed study of the angular distribution. The low-energy Galactic $\gamma$-ray\ emission is difficult to explain as truly diffuse and a point source population seems more probable.
\section{Introduction} Neutrinos are unique messengers from the high-energy Universe. Produced through interactions of high-energy cosmic rays with ambient matter and photon fields, they provide an unambiguous tracer of the sites of hadronic acceleration (see \citealp{ahlers2018} for a recent review). Following the discovery of a diffuse astrophysical neutrino flux by the IceCube collaboration \citep{Aartsen:2015rwa,2014PhRvL.113j1101A} there is now a major effort to identify their origin. No significant clustering has yet been found within the neutrino data alone, but a search for neutrino clusters from known gamma-ray emitters found evidence for a correlation with the nearby Seyfert galaxy NGC\,1068 at the $2.9\sigma$ level \citep{2020PhRvL.124e1103A}. A complementary approach is to search directly for electromagnetic counterparts to individual high-energy neutrinos that have a high probability to be of astrophysical origin. Since 2016, the IceCube realtime program \citep{icecube17_realtime} has published their detections of such events through public realtime alerts and two candidate electromagnetic counterparts have since been identified at the $\sim 3\sigma$ level. In 2017, the gamma-ray blazar TXS 0506+056 was observed in spatial coincidence with a high-energy neutrino during a period of electromagnetic flaring \citep{IceCube:2018dnn}. A search for neutrino clustering from the same source revealed an additional neutrino flare in 2014-15 \citep{ice2018_txsflare}, during a period without any significant electromagnetic flaring activity \citep{Fermi-LAT:2019hte}. Theoretical models have confirmed that conditions in the source are consistent with the detection of one neutrino after accounting for Eddington bias \citep{Strotjohann:2018ufz}. However, explaining the ``orphan'' neutrino flare in 2014/15 proved to be difficult \citep{Reimer:2018vvw,Rodrigues:2018tku}. 
Statistically, the $\gamma$-ray blazar population contributes less than 27\% to the diffuse neutrino flux \citep{icecube17}. In 2019, the Tidal Disruption Event (TDE) AT2019dsg was associated with a high-energy neutrino \citep{Stein:2020xhk}. Models have proposed various TDE neutrino production zones, including the wind, disk, or corona (see \citealp{Hayasaki2021jem} for a recent review) which are consistent with the detection of one high-energy neutrino. Radio observations of AT2019dsg confirm long-lived non-thermal emission from the source \citep{Stein:2020xhk, Cendes:2021bvp, mohan_21, matsumoto_21}, but generally challenge models relying on the presence of an on-axis relativistic jet \citep{winter21}. A population analysis constrained the contribution of TDEs to less than 39\% of the diffuse neutrino flux \citep{stein_19}. The coincidence of the TDE-like flare AT2019fdr with a high-energy neutrino \citep{reusch2021}, together with the strong dust echoes in AT2019fdr and AT2019dsg, motivated a search for similar events \citep{vanvelzen2021}. A correlation of such flares with high-energy neutrino alerts was found at the $3.7\sigma$ level. Taken together, these results suggest that the astrophysical neutrino flux has contributions from multiple source populations \citep{Bartos:2021tok}. Other possible neutrino source populations include supernovae and gamma-ray bursts. Here we present the optical follow-up of 56 IceCube realtime alerts released between April 2016 and August 2021 with the All-Sky Automated Survey for SuperNovae (ASAS-SN; \citealp{Shappee14, Kochanek17b}). ASAS-SN is a network of optical telescopes located around the globe that observes the visible sky daily. Its large field of view makes it well-suited for fast follow-up of IceCube alerts and enables searches for transient neutrino counterparts. In Section \ref{sec:alerts} we introduce the IceCube alert selection, followed by a description of our optical follow-up.
We present our analysis of possible counterparts in Section \ref{sec:followup}. We derive limits on neutrino source luminosity functions in Section \ref{sec:limits} and discuss our conclusions in Section \ref{sec:conclusions}. \section{IceCube realtime alerts} \label{sec:alerts} \begin{table*} \begin{center} \begin{tabular}{c r r r r r r c} \hline \hline \multicolumn{1}{c}{\textbf{Event}} & \multicolumn{1}{c}{\textbf{R.A. (J2000)}} & \multicolumn{1}{c}{\textbf{Dec (J2000)}} & \multicolumn{1}{c}{\textbf{90\% area}} &\multicolumn{1}{c}{\textbf{1d coverage}} & \multicolumn{1}{c}{\textbf{14d coverage}} & \multicolumn{1}{c}{\textbf{Signalness}}& \multicolumn{1}{c}{\textbf{Refs}}\\ \multicolumn{1}{c}{}& \multicolumn{1}{c}{[deg]}&\multicolumn{1}{c}{[deg]}& \multicolumn{1}{c}{[sq. deg.]}& \multicolumn{1}{c}{[\%]} & \multicolumn{1}{c}{[\%]} & \multicolumn{1}{c}{[\%]} & \multicolumn{1}{c}{}\\ \hline \input{latex/alert_table_observed}\\ \hline \hline \multicolumn{8}{l}{$^{*}$For offline-selected events, no signalness is given. Because they are promising events that were selected by hand, we assume a signalness of 50\%.} \end{tabular} \caption{A summary of the 56 neutrino alerts followed up by ASAS-SN. In the first three columns we list the name of the alert and its position. In columns four to six we give the 90\% rectangular localisation of the neutrino as sent out in the GCN and the fraction of this area covered by ASAS-SN in the first 24 hours and 14 days, respectively, after the neutrino arrival. Finally, we list the signalness of the event and the reference to the original IceCube GCN. For HESE events no signalness was given, and we neglect these events to be conservative.} \label{tab:nu_alerts_observed} \end{center} \end{table*} \begin{table*} \centering \begin{tabular}{c r r r c c} \hline \hline \multicolumn{1}{c}{\textbf{Event}} & \multicolumn{1}{c}{\textbf{R.A.
(J2000)}} & \multicolumn{1}{c}{\textbf{Dec (J2000)}} & \multicolumn{1}{c}{\textbf{90\% area}} & \multicolumn{1}{c}{\textbf{Reason}} & \multicolumn{1}{c}{\textbf{Refs}}\\ \multicolumn{1}{c}{} & \multicolumn{1}{c}{[deg]}&\multicolumn{1}{c}{[deg]}& \multicolumn{1}{c}{[sq. deg.]} &\multicolumn{1}{c}{}& \multicolumn{1}{c}{}\\ \hline \input{latex/alert_table_not_observed}\\ \hline \hline \end{tabular} \caption{A summary of the 15 neutrino alerts that could not be observed by ASAS-SN. We list the event name and position in the first three columns. The area of the 90\% rectangular localisation of the neutrino is listed in column four, the reason the alert could not be observed in column five, and the reference to the IceCube GCN in column six.} \label{tab:nu_alerts_not_observed} \end{table*} The IceCube neutrino observatory, located at the South Pole, is the world's largest neutrino telescope with an instrumented volume of one cubic kilometre \citep{IceCube:2016zyt}. The IceCube realtime program \citep{icecube17_realtime} has released alerts since 2016 for individual high-energy ($>$100 TeV) neutrino events with a high probability to be of astrophysical origin. Initially, there were two alert streams: the \textit{Extremely-High Energy} (EHE) alerts and the \textit{High-Energy-Starting Events} (HESE) alerts. EHE events were reported with an estimate of the probability for the event to have an astrophysical origin, called \textit{signalness}. This quantity was not reported for the HESE alerts. The first alert was issued on 27th April 2016 \citep{ic160427a}. To increase the alert rate and to reduce the retraction rate, these streams were replaced with a unified `Astrotrack' alert stream in 2019 \citep{icecube19_realtime}. All alerts are now assigned a signalness value, with the stream subdivided into Gold alerts (with a mean signalness of 50\%) and Bronze alerts (mean signalness of 30\%). A total of 85 alerts were issued before September 2021.
Twelve were later retracted because they were consistent with atmospheric neutrino background events. For two alerts, IC190504A and IC200227A, IceCube was not able to estimate the uncertainty of the spatial localisation. Since the coverage of these alerts cannot be calculated, we exclude these two alerts from the subsequent analysis. The remaining 71 neutrino alerts were candidates for our follow-up program. A summary of the follow-up status of the alerts is shown in Figure \ref{fig:alerts_stats}. All IceCube neutrino alerts that could be followed up with ASAS-SN are listed in Table \ref{tab:nu_alerts_observed}. The ones that could not be observed are listed in Table \ref{tab:nu_alerts_not_observed}. \section{Optical follow-up with ASAS-SN} \label{sec:followup} \subsection{The All-Sky Automated Survey for Supernovae} \begin{figure} \centering \includegraphics[width=0.45\textwidth]{figures/alert_stats_piechart.pdf} \caption{Statistics of ASAS-SN follow-up observations of the 85 IceCube alerts issued through to August 2021.} \label{fig:alerts_stats} \end{figure} \begin{figure} \centering \includegraphics[width=0.45\textwidth]{figures/time_of_observation.pdf} \caption{ \textit{Top panel}: Number of events triggered by ASAS-SN within about two weeks of the IceCube alert. Of the 56 events triggered by ASAS-SN between 2016 and 2021, 43 were observed in less than one day. \textit{Bottom panel}: The mean probability of ASAS-SN observations for an IceCube alert. The boundary between the two-station and the fully commissioned five-station configurations is mid-2019. } \label{fig:alerts_24h} \end{figure} ASAS-SN is ideal for searching for optical counterparts to external triggers such as IceCube neutrino alerts or gravitational-wave events, because it is the only ground-based survey to map the visible sky daily to a depth of $g = 18.5$ mag \citep{Shappee14, Kochanek17b}. ASAS-SN started in late 2013 with its first unit, Brutus, located on Haleakala in Hawaii (USA).
Shortly after, in 2014, ASAS-SN expanded with a second unit, Cassius, situated at Cerro Tololo International Observatory (CTIO) in Chile. Since late 2017, ASAS-SN has been composed of five stations located in both hemispheres: the two original stations (Cassius and Brutus), Paczynski, also situated at CTIO in Chile, Leavitt at McDonald Observatory in Texas (USA), and finally Payne-Gaposchkin at the South African Astrophysical Observatory (SAAO) in Sutherland, South Africa. The two original units used a $V$-band filter until late 2018. The new units were installed using $g$-band filters and the two old units were switched from $V$ to $g$ after roughly a year of $V$- and $g$-band overlap. Each unit consists of four 14-cm aperture Nikon telephoto lenses, each with a 4.47 by 4.47-degree field of view. They are hosted by Las Cumbres Observatory (\citealt{Brown13}). The ASAS-SN survey has two modes of operation \citep{dejaeger21}: a normal survey operation mode and a Target-of-Opportunity (ToO) mode to get rapid imaging follow-up of multi-messenger alerts. During normal operations, each ASAS-SN field is observed with three dithered 90-second exposures with $\sim$15 seconds of overheads between each image, for a total of $\sim$315 seconds per field. For the ToO mode, we trigger immediately if there is a site that can observe the IceCube neutrino region. Thanks to the four sites, this is often the case. We obtain $\sim$15--20 exposures for the pointing closest to the centre of the search region to go deeper and discover fainter candidates. All the images obtained from the ToO or the normal survey are processed and analysed in realtime by the standard ASAS-SN pipeline. A full description of the ASAS-SN optical counterpart search strategy can be found in \citet{dejaeger21}. Prior to May 2017, only normal operation images were available.
Once the ToO mode was implemented, we triggered on all the IceCube neutrino alerts and obtained images as soon as possible, in some cases within three minutes of the alert arrival time (IC190221A, IC190503A, IC200911A, IC201114A, IC201130A, IC210210A, and IC210811A). For one event (IC161103A), ASAS-SN was observing the respective localisation region as part of normal survey operations at the time of the neutrino arrival, resulting in images taken 105 seconds before and 2.5 seconds after the alert arrival time. Since late 2017, there generally is a normal operations image ($\sim$18.5 mag) taken within a day if there are no weather or technical issues and the search region is not Sun or Moon constrained. The bottom panel of Figure \ref{fig:alerts_24h} shows the cumulative distributions of observed events per year. To estimate the completeness of our observations, we extract lightcurves at random locations across the sky. We inject simulated SN Ia lightcurves and test whether ASAS-SN would have detected the simulated supernova. For each lightcurve this is repeated 100 times. This gives a completeness down to 16.5 mag in the $V$-band and 17.5 mag in the $g$-band. The analysis will be described in \citet[in prep.]{desai22}. Fourteen neutrino alerts had a localisation too close to the Sun to be observed and one alert was missed due to the short observing window (less than 2 hours), leaving 56 that were followed up out of 71 real IceCube alerts. The top panel in Figure \ref{fig:alerts_24h} shows the cumulative number of events observed by ASAS-SN within about two weeks from the neutrino arrival, where the right side shows events observed after the neutrino arrival. Thanks to our strategy, we managed to observe 11 of the 56 triggered alerts in less than 1 hour (20\%), among which nine were observed in less than five minutes, another four in less than two hours (7\%), and 28 in less than one day (50\%).
This illustrates our ability to promptly observe the majority of the IceCube alerts independent of the time or localisation. Finally, another thirteen events were observed between 24 hours and two weeks (23\%; see Figure \ref{fig:alerts_24h}): four within two days, two in less than three days, four within four days, one in less than five days, and two within two weeks. Note that the longest delays in observation (IC200107A and IC201221A) were due to observability constraints or bad weather. Thus, within at most two weeks, we observed all of the neutrino alerts that were not retracted (12 were), that have a well-defined search region, and that satisfy our observational restrictions: (1) the Sun is at least 12 degrees below the horizon, (2) the airmass is at most two, (3) the Hour Angle is at most five hours, and (4) the minimum distance to the Moon is larger than 20$^{\circ}$. The left side of the top panel in Figure \ref{fig:alerts_24h} shows the cumulative number of events that were serendipitously observed during routine observations. For 36 events we obtained images within 24 hours before the alert, which allows us to put better constraints on candidates. The localisation region of one alert (IC200530A) was observed about 30 minutes before the neutrino arrival and another one (IC161103A) was being observed at the time of neutrino arrival. We also show the distributions for the periods before and after mid-2019. This marks the commissioning of the full five stations and the switch of the first stations to the $g$-band (two stations and five stations in Figure \ref{fig:alerts_24h}, respectively). We calculate the probability of any event being observed by dividing the number of followed-up events by the number of neutrino alerts. The results are shown in the bottom panel of Figure \ref{fig:alerts_24h}. For any given neutrino alert, ASAS-SN has a probability of about 60\% of obtaining observations.
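As an illustration of how such cumulative follow-up statistics can be assembled, the sketch below recomputes the delay fractions from the counts quoted above; the per-alert delays are placeholder bucket values for the 56 triggered alerts, not the measured ones.

```python
# Sketch (illustrative, not the actual measured delays): cumulative
# fraction of triggered alerts first observed within a given window.
from bisect import bisect_right

HOUR, DAY = 1.0 / 24.0, 1.0  # delays expressed in days

# 11 alerts < 1 h, 4 more < 2 h, 28 more < 1 d, 13 between 1 d and 14 d
delays = ([0.5 * HOUR] * 11 + [1.5 * HOUR] * 4
          + [12 * HOUR] * 28 + [5 * DAY] * 13)

def fraction_observed_within(delays, window_days):
    """Fraction of triggered alerts first observed within window_days."""
    return bisect_right(sorted(delays), window_days) / len(delays)

print(f"within 1 h:  {fraction_observed_within(delays, HOUR):.0%}")
print(f"within 1 d:  {fraction_observed_within(delays, DAY):.0%}")
print(f"within 14 d: {fraction_observed_within(delays, 14 * DAY):.0%}")
```

With the quoted counts this reproduces the 43/56 fraction observed within one day.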
Most notably, the switch to the five-station configuration significantly increased the probability of obtaining follow-up observations. For example, it became 50\% more likely to obtain observations within one day. Finally, it is worth noting that for 12 out of the 71 alerts (around 17\%) considered in this analysis, ASAS-SN observations are the only optical follow-up observations reported for the respective neutrino alert to the Gamma-ray Coordinates Network (GCN)\footnote{\url{https://gcn.gsfc.nasa.gov/selected.html}}. \subsection{Possible Counterpart Classes to High-Energy Neutrinos} \label{sec:counterpart_classes} The challenge in identifying counterparts to high-energy neutrino events is that there are many possible neutrino source populations, each with different electromagnetic properties. Again, ASAS-SN's large field of view, fast response time, and archival data for the whole sky make it well suited for discovering transient counterparts to the IceCube neutrino events. The list of promising candidate source classes includes: \begin{itemize} \item \textbf{Supernovae with strong circumstellar material (CSM) interactions:} Models predict shock acceleration when the supernova ejecta interacts with the CSM \citep{murase11, murase2018, Zirakashvili16}. For sufficiently high-density CSM, strong interactions produce the narrow emission lines defining a Type IIn supernova \citep{schlegel1990, chugai1994}. The shock can produce high-energy neutrinos for several years but for typical Type IIn conditions the flux is expected to have dropped by an order of magnitude after the first year. \item \textbf{Gamma-ray bursts (GRBs) and supernovae with relativistic jets:} Particle acceleration can occur inside the jet or at the shock where the jet interacts with the star's envelope \citep{meszaros01, senno16, ando2005}. This is true for both `successful' jets which escape the star and `choked' jets.
In the former case the electromagnetic counterpart would be a stripped-envelope supernova with broad spectral features (a broad-lined Type Ic supernova, SN Ic-BL), and possibly a long GRB with an optical afterglow if the jet is aligned with the line of sight \citep{Woosley2006}. In the latter case the object would be a Type Ic or Ib supernova \citep{senno16}. In either case, a Type Ib/c SN with an explosion within a few days of the neutrino arrival is a compelling counterpart candidate, because the neutrino production is expected within tens of seconds of the core collapse \citep{senno16}. \item \textbf{Tidal Disruption Events (TDEs):} TDEs have been proposed as high-energy neutrino sources, where neutrino production can occur in jets, outflows or winds, the accretion disk itself or the disk corona (see \citealp{Hayasaki2021jem} for a recent review). The TDE AT2019dsg was observed in coincidence with a high-energy neutrino alert, where the neutrino arrived 150 days after the optical peak of the TDE \citep{Stein:2020xhk}. Another neutrino was observed about 300 days after the peak of the possible counterpart AT2019fdr \citep{reusch2021}. The timescale for non-thermal emission in TDEs can span several hundred days, so any active TDE coincident with a high-energy neutrino is interesting. This is especially true in light of the recently reported indication of a correlation between high-energy neutrino alerts and TDE-like flares \citep{vanvelzen2021}. \item \textbf{Active Galactic Nuclei (AGN) Flares:} AGN flares may produce high-energy neutrinos by accelerating particles in accretion shocks \citep{stecker91}. This is especially true for blazar flares, where a jet points towards the Earth \citep{petropoulou15}. The blazar TXS 0506+056 was observed in coincidence with a high-energy neutrino alert while it was in a flaring state \citep{IceCube:2018dnn}.
Because these objects are numerous, we examined the ASAS-SN light curves of all Fermi 4FGL $\gamma$-ray detected blazars in the footprints of the neutrino alerts (see below and Figure \ref{fig:lightcurves_blazars}). \end{itemize} \begin{figure} \centering \includegraphics[width=0.5\textwidth]{figures/IC170922A_4FGL_J0509.4+0542_mag_closeup.pdf} \includegraphics[width=0.5\textwidth]{figures/IC190730A_4FGL_J1504.4+1029_mag_closeup.pdf} \caption{The ASAS-SN light curves of two blazars observed in spatial coincidence with high-energy neutrino alerts. We show $5 \sigma$ detections and upper limits. The date of the corresponding neutrino arrival is marked with a vertical line.} \label{fig:lightcurves_blazars} \end{figure} \subsection{Candidate Counterparts} \label{sec:candidate_counterparts} \begin{table*} \centering \begin{tabular}{c c c c c c} \hline \hline \multicolumn{1}{c}{\textbf{Transient}} & \multicolumn{1}{c}{\textbf{ASAS-SN detection}} & \multicolumn{1}{c}{\textbf{IceCube alert}} & \multicolumn{1}{c}{\textbf{Alert epoch}} & \multicolumn{1}{c}{{$\mathbf{\Delta_{t}=t_{ASASSN}-t_{IceCube}}$}} & \multicolumn{1}{c}{\textbf{Transient type}} \\ \multicolumn{1}{c}{} & \multicolumn{1}{c}{JD} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{JD} & \multicolumn{1}{c}{days} & \multicolumn{1}{c}{}\\ \hline ZTF18adicfwn (AT2020rng) &2459089.9 &IC210608A &2459373.7 &-284 &Unknown\\ ATLAS19ljj (AT2019fxr) &2458634.9 &IC200410A &2458950.5 &-316 &Unknown\\ ZTF19aapreis (AT2019dsg) &2458618.9 &IC191001A &2458758.3 &-139 &TDE\\ ZTF19aadypig (SN~2019aah) &2458519.6 &IC191119A &2458806.5 &-287 &SN~II\\ ASASSN-18mx (SN~2018coq) &2458286.1 &IC190619A &2458654.1 &-368 &SN~II\\ ASASSN-17ot (AT2017hzv) &2458070.8 &IC180908A &2458370.3 &-300 &Unknown\\ \hline \hline \end{tabular} \caption{An excerpt of Table \ref{tab:asassn_transients_long} of the transients that occur at most 500 days before the corresponding neutrino was detected, excluding spectroscopically-confirmed type Ia supernovae and 
CVs where neutrino emission is not expected. We give the name of the transient and the Julian Date of its discovery in the first two columns. Columns three and four list the corresponding IceCube alert and the neutrino arrival time. In the last two columns we give the difference between transient discovery and neutrino arrival and the transient type.} \label{tab:asassn_transients} \end{table*} \begin{figure*} \begin{center} \includegraphics[width=0.95\textwidth]{figures/combined.pdf} \caption{The ASAS-SN light curves for the transients found in the footprint of the IceCube neutrinos. We show $5 \sigma$ detections and upper limits as a function of the days after the discovery dates listed in Table \ref{tab:asassn_transients}. For AT2020rng, we used the Zwicky Transient Facility forced-photometry service and show the $5\sigma$ detections in the $r$- and $g$-bands (see Fig.~\ref{fig:at2020rng}). Vertical lines mark the time of the neutrino arrival.} \label{fig:candidate_lcs} \end{center} \end{figure*} Table \ref{tab:asassn_transients} lists all transients identified by ASAS-SN in the 500 days prior to the neutrino arrival time excluding Type Ia SNe and dwarf novae (cataclysmic variables). The list includes the pairing of the TDE AT2019dsg and IC191001A \citep{Stein:2020xhk}. We do not detect the TDE AT2019fdr \citep{ic200530a_ztf} because it was too faint to trigger our transient detection pipeline. The supernova SN~2019aah was spatially coincident with IC191119A. SN~2019aah was detected $\sim$300 days before the neutrino alert \citep{nordin_19aah} and was classified 30 days after the discovery as a Type II supernova \citep{dahiwale2020_sn2019aah}. Its spectrum does not show narrow emission lines, so there is no evidence for a strong CSM interaction to produce neutrino emission. The emission is predicted to be strongest near the optical peak \citep{murase2019, Zirakashvili16}, so we conclude that SN~2019aah is unrelated to the neutrino.
SN~2018coq was spatially coincident with IC190619A. It is also a Type II SN \citep{cartier2018_sn2018coq}, discovered 370 days prior to the neutrino alert \citep{stanek18_sn18coq}. Similar to SN~2019aah, its spectrum taken 13 days after the discovery does not show the prominent narrow lines that would indicate CSM interaction. The supernova peaked even earlier relative to the neutrino than SN~2019aah, so SN~2018coq is unlikely to be related to IC190619A. We find four neutrino-coincident events that could not be classified. All of them were first detected more than 280 days before the corresponding neutrino arrival. AT2017hzv \citep{at2017hzv} and AT2019fxr \citep{at2019fxr} faded on a timescale of a few weeks and were not detectable at the time of the neutrino arrival. The rapid fading suggests a supernova or AGN flare origin, which is inconsistent with the neutrino arrival times and makes it unlikely that they are associated with the corresponding neutrinos. For AT2020rng, we used the publicly available Zwicky Transient Facility forced-photometry service \citep{masci2019}. We find only sporadic detections surrounded by upper limits (see Figure \ref{fig:at2020rng} in the Appendix). This, together with the relatively bright host galaxy with a mean g-band magnitude of 15.3 mag, suggests that AT2020rng is a subtraction artefact rather than a physical transient. We also examined the ASAS-SN light curves of every Fermi 4FGL blazar \citep{fermi2020_4fgl, fermi2020_dr2} within the footprint of a neutrino alert. We do not find any flaring activity coincident with the arrival of the corresponding neutrino, except for the previously-reported ASAS-SN observations of TXS 0506+056 \citep{IceCube:2018dnn}. This light curve is shown in the top panel of Figure \ref{fig:lightcurves_blazars}, with the source exhibiting an optical flare at the time of the neutrino detection.
The neutrino IC190730A was observed in spatial coincidence with the Flat Spectrum Radio Quasar (FSRQ) PKS 1502+106 \citep{ic190730a,2020ApJ...893..162F}, and the ASAS-SN light curve for this object is shown in the lower panel of Figure \ref{fig:lightcurves_blazars}. We confirm that the blazar was in a low optical state at the time of the neutrino arrival, as reported by \cite{ic190730a_ztf} and \cite{ic190730a_goto}. Time-dependent radiation modeling found that the detection of a high-energy neutrino from this source is consistent with its multi-wavelength properties \citep{rodrigues_21}. \section{Limits} \label{sec:limits} \begin{figure} \centering \includegraphics[width=0.45\textwidth]{figures/neutrino_cdfs.pdf} \caption{The relative cumulative neutrino flux at Earth of neutrino source populations with a GRB-like and an SFR-like density evolution.} \label{fig:neutrino_cdfs} \end{figure} \begin{figure*} \centering \includegraphics[width=0.95\textwidth]{figures/limits_90cl_notde_separate_bands.pdf} \caption{Constraints on the fraction $F_{\mathrm{L}}$ of a neutrino source population as a function of the intrinsic source peak absolute magnitude.} \label{fig:limits} \end{figure*} While we do not find any new candidate counterpart transients in our follow-up campaign, we can use the non-detections to derive limits on neutrino source luminosity functions following the method of \citet[in prep.]{stein21b}. Because we recover the two pre-existing source candidates (TXS 0506+056 and AT2019dsg), these non-detection limits do not apply to blazars or TDEs. For an astrophysical neutrino with an electromagnetic counterpart, we can calculate the probability of detecting the counterpart based on the percentage of the neutrino localisation that was observed by ASAS-SN. For each neutrino, this fraction is listed in Table \ref{tab:nu_alerts_observed} for one and fourteen days after the neutrino arrival.
The probability of detecting a counterpart also depends on the probability for the neutrino to be of astrophysical origin. This is given by IceCube as the \textit{signalness} (see Section \ref{sec:alerts} and Table \ref{tab:nu_alerts_observed}). In the following, we assume that we would have detected a transient if it had reached 18.5 mag, and we adopt this as the limiting magnitude of our program. At a 90\% confidence level, we can constrain the fraction of neutrino sources above our limiting magnitude to be no more than 39.3\% and 15.3\% for fast transients that reach their peak within two hours and one day, respectively. For transients that peak within fourteen days, the fraction is 10.3\%. These constraints refer to the visibility of the transients and do not include any physical properties of the source classes. To constrain physical populations of candidate neutrino sources, we have to assume a rate $\dot{\rho}(z)$ at which the transients occur as a function of redshift $z$. We consider a GRB-like \citep{lien14_grbrate} and a star formation rate (SFR)-like \citep{strolger15_sfrrate} source evolution. Because the optical afterglow of a GRB rapidly fades on the timescale of a few days \citep{Kann:2007cc}, we use the 39.3\% constraint derived from our two-hour coverage. Interacting supernovae typically rise on a timescale of at least two weeks \citep{nyholm2020}, so we use the 10.3\% constraint derived from our coverage after 14 days. The cumulative neutrino fluxes at Earth from these populations, as calculated with \texttt{flarestack} \citep{flarestack}, are shown in Figure \ref{fig:neutrino_cdfs}. Assuming an absolute magnitude for the transient, we can compute the luminosity distance out to which the transient would appear brighter than the apparent magnitude to which our follow-up program is complete. As a conservative choice, we use the magnitude limit derived for the Type Ia SNe in ASAS-SN (see Section \ref{sec:followup}).
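The logic above (a per-alert counterpart detection probability given by signalness times covered fraction, combined into a confidence-level bound on the fraction of sources brighter than the limiting magnitude, plus the distance-modulus horizon) can be illustrated with a short sketch. This is our simplified illustration, not the \texttt{flarestack} implementation, and the alert values in the example are invented:

```python
# Illustrative sketch of the limit logic: for each alert, the chance a
# counterpart would have been seen is p_i = signalness_i * coverage_i,
# assuming the source is brighter than the limiting magnitude. If a
# fraction F of sources is that bright, P(no detection) = prod_i (1 - F*p_i).

def fraction_upper_limit(alerts, cl=0.9):
    """Upper limit on F at confidence level cl, given zero detections.
    `alerts` is a list of (signalness, coverage) pairs."""
    def p_none(f):
        p = 1.0
        for s, c in alerts:
            p *= 1.0 - f * s * c
        return p
    if p_none(1.0) > 1.0 - cl:
        return None  # even F = 1 cannot be excluded
    lo, hi = 0.0, 1.0  # bisect: p_none is monotonically decreasing in f
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if p_none(mid) > 1.0 - cl else (lo, mid)
    return 0.5 * (lo + hi)

def horizon_distance_mpc(abs_mag, limiting_mag=18.5):
    """Luminosity distance (Mpc) out to which a source of absolute
    magnitude abs_mag appears brighter than limiting_mag, from the
    distance modulus m - M = 5 log10(d_L / 10 pc)."""
    return 10.0 ** ((limiting_mag - abs_mag + 5.0) / 5.0) / 1.0e6
```

For example, ten alerts each with signalness 0.5 and 60\% coverage would exclude fractions above $\sim$0.69 at 90\% confidence, and a transient with $M = -19$ remains brighter than 18.5 mag out to $\sim$316 Mpc.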
Using the source evolutions from Figure \ref{fig:neutrino_cdfs}, we derive the neutrino flux that would arise in this volume from the corresponding neutrino source population. Given our limits on the fraction of the population above our limiting magnitude, we can convert this into constraints on the fraction of sources $F_{\mathrm{L}}$ brighter than a given source absolute magnitude, as shown in Figure \ref{fig:limits}. These results are not yet constraining for typical supernovae with absolute magnitudes up to around $-21.5$. However, for a neutrino source population with a GRB-like source evolution, we can constrain the luminosity function: one day after the neutrino arrival, no more than about 54\% (V-band) and 40\% (g-band) of its counterparts can be brighter than absolute magnitude $-27$. This is the first such constraint on this timescale, made possible by the high observing cadence and rapid follow-up of ASAS-SN. \section{Conclusions} \label{sec:conclusions} We presented the ASAS-SN optical follow-up program for IceCube high-energy, astrophysical neutrino candidates. We observed the 90\% localisation region of 56 alerts over the period from April 2016 until August 2021. Eleven of these alerts were covered within one hour after their detection. After one day, we had observed 43 events, and after two weeks we had observed the localisation regions for all 56 alerts to a limiting magnitude of $\sim 18.5$. For 12 events (around 17\%), this is the only optical follow-up. We did not detect any new coincident transients in our analysis, but we did recover the associations with the blazar TXS 0506+056 and the TDE AT2019dsg. We find additional transients that we disfavour as counterparts of the corresponding neutrinos. Given the non-detection of any transient counterpart in our search, we derive upper limits on the luminosity function of different possible transient neutrino source populations.
Assuming the IceCube alert stream does not change, we can expect about 20 neutrino alerts per year. If our average coverage (18\% after two hours and 94\% after 14 days) does not change, we can set limits twice as strict on GRBs within 3.5 years and on CCSNe within 3 years. The planned extension of IceCube, called IceCube-Gen2, is expected to increase the event rate significantly and improve the spatial resolution of through-going tracks \citep{2021icecube_gen2}. This will allow us to follow up more neutrino alerts and cover a higher percentage of the smaller neutrino localisation areas, leading to an improved sensitivity to detect optical counterparts. \section*{Acknowledgements} J.N. was supported by the Helmholtz Weizmann Research School on Multimessenger Astronomy, funded through the Initiative and Networking Fund of the Helmholtz Association, DESY, the Weizmann Institute, the Humboldt University of Berlin, and the University of Potsdam. Support for T.d.J. has been provided by NSF grants AST-1908952 and AST-1911074. R.S. and A.F. were supported by the Initiative and Networking Fund of the Helmholtz Association, Deutsches Elektronen Synchrotron (DESY). B.J.S. is supported by NSF grants AST-1907570, AST-1908952, AST-1920392, and AST-1911074. CSK and KZS are supported by NSF grants AST-1814440 and AST-1908570. J.F.B. is supported by National Science Foundation grant No.\ PHY-2012955. Support for TW-SH was provided by NASA through the NASA Hubble Fellowship grant HST-HF2-51458.001-A awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA, under contract NAS5-265. ASAS-SN is funded in part by the Gordon and Betty Moore Foundation through grants GBMF5490 and GBMF10501 to the Ohio State University, NSF grant AST-1908570, the Mt.
Cuba Astronomical Foundation, the Center for Cosmology and AstroParticle Physics (CCAPP) at OSU, the Chinese Academy of Sciences South America Center for Astronomy (CAS-SACA), and the Villum Fonden (Denmark). Development of ASAS-SN has been supported by NSF grant AST-0908816, the Center for Cosmology and AstroParticle Physics at the Ohio State University, the Mt. Cuba Astronomical Foundation, and by George Skestos. Some of the results in this paper have been derived using the healpy and HEALPix packages. The ZTF forced-photometry service was funded under the Heising-Simons Foundation grant \#12540303 (PI: Graham). \section*{Data Availability} All information about the IceCube neutrino alerts that we used is publicly available and can be accessed via the GCN archives for the Gold and Bronze (\url{https://gcn.gsfc.nasa.gov/amon_icecube_gold_bronze_events.html}), HESE (\url{https://gcn.gsfc.nasa.gov/amon_hese_events.html}), and EHE (\url{https://gcn.gsfc.nasa.gov/amon_ehe_events.html}) events. The ZTF forced-photometry service is publicly available (see \url{http://web.ipac.caltech.edu/staff/fmasci/ztf/forcedphot.pdf} for a description of the access). \bibliographystyle{mnras} \section{Introduction} Neutrinos are unique messengers from the high-energy Universe. Produced through interactions of high-energy cosmic rays with ambient matter and photon fields, they provide an unambiguous tracer of the sites of hadronic acceleration (see \citealp{ahlers2018} for a recent review). Following the discovery of a diffuse astrophysical neutrino flux by the IceCube collaboration \citep{Aartsen:2015rwa,2014PhRvL.113j1101A}, there is now a major effort to identify its origin. No significant clustering has yet been found within the neutrino data alone, but a search for neutrino clusters from known gamma-ray emitters found evidence for a correlation with the nearby Seyfert galaxy NGC\,1068 at the $2.9\sigma$ level \citep{2020PhRvL.124e1103A}.
A complementary approach is to search directly for electromagnetic counterparts to individual high-energy neutrinos that have a high probability to be of astrophysical origin. Since 2016, the IceCube realtime program \citep{icecube17_realtime} has published its detections of such events through public realtime alerts, and two candidate electromagnetic counterparts have since been identified at the $\sim 3\sigma$ level. In 2017, the gamma-ray blazar TXS 0506+056 was observed in spatial coincidence with a high-energy neutrino during a period of electromagnetic flaring \citep{IceCube:2018dnn}. A search for neutrino clustering from the same source revealed an additional neutrino flare in 2014/15 \citep{ice2018_txsflare}, during a period without any significant electromagnetic flaring activity \citep{Fermi-LAT:2019hte}. Theoretical models have shown that conditions in the source are consistent with the detection of one neutrino after accounting for Eddington bias \citep{Strotjohann:2018ufz}. However, explaining the ``orphan'' neutrino flare in 2014/15 proved to be difficult \citep{Reimer:2018vvw,Rodrigues:2018tku}. Statistically, the $\gamma$-ray blazar population contributes less than 27\% to the diffuse neutrino flux \citep{icecube17}. In 2019, the Tidal Disruption Event (TDE) AT2019dsg was associated with a high-energy neutrino \citep{Stein:2020xhk}. Models have proposed various TDE neutrino production zones, including the wind, disk, or corona (see \citealp{Hayasaki2021jem} for a recent review), which are consistent with the detection of one high-energy neutrino. Radio observations of AT2019dsg confirm long-lived non-thermal emission from the source \citep{Stein:2020xhk, Cendes:2021bvp, mohan_21, matsumoto_21}, but generally challenge models relying on the presence of an on-axis relativistic jet \citep{winter21}. A population analysis constrained the contribution of TDEs to less than 39\% of the diffuse neutrino flux \citep{stein_19}.
The coincidence of the TDE-like flare AT2019fdr with a high-energy neutrino \citep{reusch2021} and the strong dust echoes in AT2019fdr and AT2019dsg motivated a search for similar events \citep{vanvelzen2021}. A correlation of such flares with high-energy neutrino alerts was found at the $3.7\sigma$ level. Taken together, these results suggest that the astrophysical neutrino flux has contributions from multiple source populations \citep{Bartos:2021tok}. Other possible neutrino source populations include supernovae and gamma-ray bursts. Here we present the optical follow-up of 56 IceCube realtime alerts released between April 2016 and August 2021 with the All-Sky Automated Survey for SuperNovae (ASAS-SN; \citealp{Shappee14, Kochanek17b}). ASAS-SN is a network of optical telescopes located around the globe that observes the visible sky daily. Its large field of view makes it well-suited for fast follow-up of IceCube alerts and enables searches for transient neutrino counterparts. In Section \ref{sec:alerts} we introduce the IceCube alert selection, followed by the description of our optical follow-up. We present our analysis of possible counterparts in Section \ref{sec:followup}. We derive limits on neutrino source luminosity functions in Section \ref{sec:limits} and discuss our conclusions in Section \ref{sec:conclusions}. \section{IceCube realtime alerts} \label{sec:alerts} \begin{table*} \begin{center} \begin{tabular}{c r r r r r r c} \hline \hline \multicolumn{1}{c}{\textbf{Event}} & \multicolumn{1}{c}{\textbf{R.A. (J2000)}} & \multicolumn{1}{c}{\textbf{Dec (J2000)}} & \multicolumn{1}{c}{\textbf{90\% area}} &\multicolumn{1}{c}{\textbf{1d coverage}} & \multicolumn{1}{c}{\textbf{14d coverage}} & \multicolumn{1}{c}{\textbf{Signalness}}& \multicolumn{1}{c}{\textbf{Refs}}\\ \multicolumn{1}{c}{}& \multicolumn{1}{c}{[deg]}&\multicolumn{1}{c}{[deg]}& \multicolumn{1}{c}{[sq.
deg.]}& \multicolumn{1}{c}{[\%]} & \multicolumn{1}{c}{[\%]} & \multicolumn{1}{c}{[\%]} & \multicolumn{1}{c}{}\\ \hline \input{latex/alert_table_observed}\\ \hline \hline \multicolumn{8}{l}{$^{*}$For offline selected events, no signalness is given. Because they are promising events that were selected by hand, we assume a signalness of 50\%.} \end{tabular} \caption{A summary of the 56 neutrino alerts followed up by ASAS-SN. In the first three columns we list the name of the alert and its position. In columns four to six we give the 90\% rectangular localisation of the neutrino as sent out in the GCN and the fraction of this area covered by ASAS-SN in the first 24 hours and 14 days, respectively, after the neutrino arrival. Finally, we list the signalness of the event and the reference to the original IceCube GCN. For HESE events, no signalness was given, and we conservatively neglect these events.} \label{tab:nu_alerts_observed} \end{center} \end{table*} \begin{table*} \centering \begin{tabular}{c r r r c c c c} \hline \hline \multicolumn{1}{c}{\textbf{Event}} & \multicolumn{1}{c}{\textbf{R.A. (J2000)}} & \multicolumn{1}{c}{\textbf{Dec (J2000)}} & \multicolumn{1}{c}{\textbf{90\% area}} & \multicolumn{1}{c}{\textbf{Reason}} & \multicolumn{1}{c}{\textbf{Refs}}\\ \multicolumn{1}{c}{} & \multicolumn{1}{c}{[deg]}&\multicolumn{1}{c}{[deg]}& \multicolumn{1}{c}{[sq. deg.]} &\multicolumn{1}{c}{}& \multicolumn{1}{c}{}\\ \hline \input{latex/alert_table_not_observed}\\ \hline \hline \end{tabular} \caption{A summary of the 15 neutrino alerts that could not be observed by ASAS-SN. We list the event name and position in the first three columns.
The area of the 90\% rectangular localisation of the neutrino is listed in column four, the reason the alert could not be observed in column five, and the reference to the IceCube GCN in column six.} \label{tab:nu_alerts_not_observed} \end{table*} The IceCube neutrino observatory, located at the South Pole, is the world's largest neutrino telescope with an instrumented volume of one cubic kilometre \citep{IceCube:2016zyt}. The IceCube realtime program \citep{icecube17_realtime} has released alerts since 2016 for individual high-energy ($>$100 TeV) neutrino events with a high probability to be of astrophysical origin. Initially, there were two alert streams: the \textit{Extremely-High Energy} (EHE) alerts and the \textit{High-Energy-Starting Events} (HESE) alerts. EHE events were reported with an estimate of the probability for the event to have an astrophysical origin, called \textit{signalness}. This quantity was not reported for the HESE alerts. The first alert was issued on 27 April 2016 \citep{ic160427a}. To increase the alert rate and to reduce the retraction rate, these streams were replaced with a unified `Astrotrack' alert stream in 2019 \citep{icecube19_realtime}. All alerts are now assigned a signalness value, with the stream subdivided into Gold alerts (with a mean signalness of 50\%) and Bronze alerts (mean signalness of 30\%). A total of 85 alerts were issued before September 2021. Twelve were later retracted because they were consistent with atmospheric neutrino background events. For two alerts, IC190504A and IC200227A, IceCube was not able to estimate the uncertainty of the spatial localisation. Since the coverage of these alerts cannot be calculated, we exclude these two alerts from the subsequent analysis. The remaining 71 neutrino alerts were candidates for our follow-up program. A summary of the follow-up status of the alerts is shown in Figure \ref{fig:alerts_stats}. All IceCube neutrino alerts that could be followed up with ASAS-SN are listed in Table \ref{tab:nu_alerts_observed}.
The ones that could not be observed are listed in Table \ref{tab:nu_alerts_not_observed}. \section{Optical follow-up with ASAS-SN} \label{sec:followup} \subsection{The All-Sky Automated Survey for Supernovae} \begin{figure} \centering \includegraphics[width=0.45\textwidth]{figures/alert_stats_piechart.pdf} \caption{Statistics of ASAS-SN follow-up observations of the 85 IceCube alerts issued through to August 2021.} \label{fig:alerts_stats} \end{figure} \begin{figure} \centering \includegraphics[width=0.45\textwidth]{figures/time_of_observation.pdf} \caption{ \textit{Top panel}: Cumulative number of alerts observed by ASAS-SN within about two weeks of the IceCube alert. Of the 56 alerts triggered between 2016 and 2021, 43 were observed in less than one day. \textit{Bottom panel}: The mean probability of ASAS-SN observations for an IceCube alert. The boundary between the two-station and the fully commissioned five-station configuration is mid-2019. } \label{fig:alerts_24h} \end{figure} ASAS-SN is ideal for searching for optical counterparts to external triggers such as IceCube neutrino alerts or gravitational-wave events, because it is the only ground-based survey to map the visible sky daily to a depth of $g = 18.5$ mag \citep{Shappee14, Kochanek17b}. ASAS-SN started in late 2013 with its first unit, Brutus, located on Haleakala in Hawaii (USA). Shortly after, in 2014, ASAS-SN expanded with a second unit, Cassius, situated at Cerro Tololo Inter-American Observatory (CTIO) in Chile. Since late 2017, ASAS-SN has been composed of five stations located in both hemispheres: the two original stations (Cassius and Brutus), Paczynski, also situated at CTIO in Chile, Leavitt at McDonald Observatory in Texas (USA), and finally Payne-Gaposchkin at the South African Astrophysical Observatory (SAAO) in Sutherland, South Africa. The two original units used a $V$-band filter until late 2018.
The new units were installed with $g$-band filters, and the two old units were switched from $V$ to $g$ after roughly a year of $V$- and $g$-band overlap. Each unit consists of four 14-cm aperture Nikon telephoto lenses, each with a 4.47 by 4.47-degree field of view. They are hosted by Las Cumbres Observatory (\citealt{Brown13}). The ASAS-SN survey has two modes of operation \citep{dejaeger21}: a normal survey operation mode and a Target-of-Opportunity (ToO) mode to obtain rapid imaging follow-up of multi-messenger alerts. During normal operations, each ASAS-SN field is observed with three dithered 90-second exposures with $\sim$15 seconds of overheads between each image, for a total of $\sim$315 seconds per field. For the ToO mode, we trigger immediately if there is a site that can observe the IceCube neutrino region. Thanks to the four sites, this is often the case. We obtain $\sim 15-20$ exposures for the pointing closest to the centre of the search region to go deeper and discover fainter candidates. All the images obtained from the ToO or the normal survey are processed and analysed in realtime by the standard ASAS-SN pipeline. A full description of the ASAS-SN optical counterpart search strategy can be found in \citet{dejaeger21}. Prior to May 2017, only normal operation images were available. Once the ToO mode was implemented, we triggered on all the IceCube neutrino alerts and obtained images as soon as possible, in some cases within three minutes of the alert arrival time (IC190221A, IC190503A, IC200911A, IC201114A, IC201130A, IC210210A, and IC210811A). For one event (IC161103A), ASAS-SN was observing the respective localisation region as part of normal survey operations at the time of the neutrino arrival, resulting in images taken 105 seconds before and 2.5 seconds after the alert arrival time.
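The depth gain from stacking the $\sim$15--20 ToO exposures instead of the three dithered survey exposures can be estimated with the standard sky-noise-limited scaling, in which the signal-to-noise ratio of a co-add of $n$ identical exposures grows as $\sqrt{n}$. This is a back-of-the-envelope sketch of ours, not an ASAS-SN pipeline calculation, and it ignores systematics:

```python
import math

def too_depth_gain(n_too=18, n_survey=3):
    """Magnitude gain of an n_too-exposure ToO stack over the standard
    n_survey-exposure epoch, assuming sky-noise-limited co-adds where
    S/N grows as sqrt(n): delta_m = 2.5*log10(sqrt(n_too/n_survey))
                                  = 1.25*log10(n_too/n_survey)."""
    return 1.25 * math.log10(n_too / n_survey)
```

Under these idealised assumptions, a typical 18-exposure ToO pointing goes roughly one magnitude deeper ($1.25 \log_{10} 6 \approx 0.97$ mag) than a normal three-exposure survey epoch.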
Since late 2017, there generally is a normal operations image ($\sim$18.5 mag) taken within a day if there are no weather or technical issues and the search region is not Sun or Moon constrained. The bottom panel of Figure \ref{fig:alerts_24h} shows the cumulative distributions of observed events per year. To estimate the completeness of our observations, we extract light curves at random locations across the sky. We inject simulated SN Ia light curves and test whether ASAS-SN would have detected the simulated supernova. For each light curve, this is repeated 100 times. This gives a completeness down to 16.5 mag in the V-band and 17.5 mag in the g-band. The analysis will be described in \citet[in prep.]{desai22}. Fourteen neutrino alerts had a localisation too close to the Sun to be observed and one alert was missed due to the short observing window (less than 2 hours), leaving 56 of the 71 real IceCube alerts that were followed up. The top panel in Figure \ref{fig:alerts_24h} shows the cumulative number of events observed by ASAS-SN within about two weeks from the neutrino arrival, where the right side shows events observed after the neutrino arrival. Thanks to our strategy, we managed to observe 11 of the 56 triggered alerts in less than one hour (20\%), among which nine were observed in less than five minutes, another four in less than two hours (7\%), and 28 in less than one day (50\%). This illustrates our ability to promptly observe the majority of the IceCube alerts independent of the time or localisation. Finally, another thirteen events were observed between 24 hours and two weeks (23\%; see Figure \ref{fig:alerts_24h}): four within two days, two in less than three days, four within four days, one in less than five days, and two within two weeks. Note that the longest delays in observation (IC200107A and IC201221A) were due to observability constraints or bad weather.
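The injection--recovery completeness estimate described above can be sketched as a toy Monte Carlo. Everything in this sketch (the triangular light-curve template, the weather-loss fraction, the two-detection recovery criterion) is an illustrative assumption of ours, not the actual analysis of \citet[in prep.]{desai22}:

```python
import random

random.seed(42)

def simulate_recovery(peak_mag, n_trials=100, cadence_days=1.0,
                      weather_loss=0.3, limiting_mag=18.5,
                      rise_days=15.0, fade_days=30.0):
    """Toy injection-recovery test: inject a triangular SN-like light
    curve at a random phase into a daily cadence with random weather
    losses, and call it 'recovered' if at least two epochs are brighter
    than the limiting magnitude. Returns the recovered fraction."""
    recovered = 0
    for _ in range(n_trials):
        t = random.uniform(0.0, cadence_days)  # random explosion phase
        detections = 0
        while t < rise_days + fade_days:
            if random.random() > weather_loss:  # epoch not lost to weather
                # triangular light curve: brightens to peak, then fades
                if t < rise_days:
                    mag = peak_mag + 3.0 * (1.0 - t / rise_days)
                else:
                    mag = peak_mag + 3.0 * (t - rise_days) / fade_days
                if mag < limiting_mag:  # brighter than the survey limit
                    detections += 1
            t += cadence_days
        if detections >= 2:
            recovered += 1
    return recovered / n_trials
```

A bright injected transient (peak well above the survey limit) is recovered in essentially every trial, while one that never exceeds the limit is recovered in none, which is the qualitative behaviour the completeness analysis quantifies.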
Within at most two weeks, we thus observed all of the neutrino alerts that were not retracted (12 were), have a well-defined search region, and satisfy our observational restrictions: (1) the Sun is at least 12 degrees below the horizon, (2) the airmass is at most two, (3) the hour angle is at most five hours, and (4) the minimum distance to the Moon is larger than 20$^{\circ}$. The left side of the top panel in Figure \ref{fig:alerts_24h} shows the cumulative number of events that were serendipitously observed during routine observations. For 36 events, we obtained images within the 24 hours before the alert, which allows us to place better constraints on candidates. The localisation region of one alert (IC200530A) was observed about 30 minutes before the neutrino arrival, and another (IC161103A) was being observed at the time of the neutrino arrival. We also show the distributions for the periods before and after mid-2019. This marks the commissioning of the full five stations and the switch of the first stations to g-band (two stations and five stations in Figure \ref{fig:alerts_24h}, respectively). We calculate the probability of any event being observed by dividing the number of followed-up events by the number of neutrino alerts. The results are shown in the bottom panel of Figure \ref{fig:alerts_24h}. For any given neutrino alert, ASAS-SN has a probability of about 60\% of obtaining observations. Most notably, the switch to the five-station configuration significantly increased the probability of obtaining follow-up observations. For example, it became 50\% more likely to obtain observations within one day. Finally, it is worth noting that for 12 out of the 71 alerts (around 17\%) considered in this analysis, ASAS-SN observations are the only optical follow-up observations for the respective neutrino alert reported to the Gamma-ray Coordinates Network (GCN)\footnote{\url{https://gcn.gsfc.nasa.gov/selected.html}}.
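The four observational restrictions above can be written as a single predicate. The inputs (Sun altitude, airmass, hour angle, Moon separation) are assumed to be precomputed for a given station and time, e.g. with a tool such as astropy; only the numerical cuts themselves come from the text, and this helper is an illustration rather than the ASAS-SN scheduler:

```python
import math

def plane_parallel_airmass(alt_deg):
    """Simple sec(z) airmass for a target at altitude alt_deg; the cut
    'airmass at most two' corresponds to an altitude of at least 30 deg."""
    return 1.0 / math.sin(math.radians(alt_deg))

def is_observable(sun_alt_deg, airmass, hour_angle_hours, moon_sep_deg):
    """The four pointing restrictions quoted in the text:
    (1) Sun at least 12 deg below the horizon, (2) airmass at most 2,
    (3) |hour angle| at most 5 hours, (4) Moon separation above 20 deg."""
    return (sun_alt_deg <= -12.0
            and airmass <= 2.0
            and abs(hour_angle_hours) <= 5.0
            and moon_sep_deg > 20.0)
```

For example, a target at airmass 1.5 with the Sun at $-20^{\circ}$, an hour angle of two hours, and a Moon separation of $45^{\circ}$ passes all four cuts, while violating any single cut makes the field unobservable.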
\subsection{Possible Counterpart Classes to High-Energy Neutrinos} \label{sec:counterpart_classes} The challenge in identifying counterparts to high-energy neutrino events is that there are many possible neutrino source populations, each with different electromagnetic properties. Again, ASAS-SN's large field of view, fast response time, and archival data for the whole sky make it well-suited for discovering transient counterparts to the IceCube neutrino events. The list of promising candidate source classes includes: \begin{itemize} \item \textbf{Supernovae with strong circumstellar material (CSM) interactions:} Models predict shock acceleration when the supernova ejecta interacts with the CSM \citep{murase11, murase2018, Zirakashvili16}. For sufficiently high-density CSM, strong interactions produce the narrow emission lines defining a Type IIn supernova \citep{schlegel1990, chugai1994}. The shock can produce high-energy neutrinos for several years, but for typical Type IIn conditions the flux is expected to have dropped by an order of magnitude after the first year. \item \textbf{Gamma-ray bursts (GRBs) and supernovae with relativistic jets:} Particle acceleration can occur inside the jet or at the shock where the jet interacts with the star's envelope \citep{meszaros01, senno16, ando2005}. This is true both for `successful' jets, which escape the star, and for `choked' jets, which do not. In the former case, the electromagnetic counterpart would be a stripped-envelope supernova with broad spectral features (a broad-lined Type Ic supernova, SN Ic-BL), and possibly a long GRB with an optical afterglow if the jet is aligned with the line of sight \citep{Woosley2006}. In the latter case, the object would be a Type Ib or Ic supernova \citep{senno16}. In either case, a Type Ib/c SN with an explosion within a few days of the neutrino arrival is a compelling counterpart candidate, because the neutrino production is expected within tens of seconds of the core collapse \citep{senno16}.
\item \textbf{Tidal Disruption Events (TDEs):} TDEs have been proposed as high-energy neutrino sources, where neutrino production can occur in jets, outflows or winds, the accretion disk itself or the disk corona (see \citealp{Hayasaki2021jem} for a recent review). The TDE AT2019dsg was observed in coincidence with a high-energy neutrino alert, where the neutrino arrived 150 days after the optical peak of the TDE \citep{Stein:2020xhk}. Another neutrino was observed about 300 days after the peak of the possible counterpart AT2019fdr \citep{reusch2021}. The timescale for non-thermal emission in TDEs can span several hundred days, so any active TDE coincident with a high-energy neutrino is interesting. This is especially true in the light of recently found indication of correlation of high-energy neutrino alerts with TDE-like flares \citep{vanvelzen2021}. \item \textbf{Active Galactic Nuclei (AGN) Flares:} AGN flares may produce high-energy neutrinos by accelerating particles in accretion shocks \citep{stecker91}. This is especially true for blazar flares, where a jet points towards the Earth \citep{petropoulou15}. The blazar TXS0506+056 was observed in coincidence with a high-energy neutrino alert while it was in a flaring state \citep{IceCube:2018dnn}. Because these objects are numerous, we examined the ASAS-SN light curves of all Fermi 4FGL $\gamma$-ray detected blazars in the footprints of the neutrino alerts (see below and Figure \ref{fig:lightcurves_blazars}). \end{itemize} \begin{figure} \centering \includegraphics[width=0.5\textwidth]{figures/IC170922A_4FGL_J0509.4+0542_mag_closeup.pdf} \includegraphics[width=0.5\textwidth]{figures/IC190730A_4FGL_J1504.4+1029_mag_closeup.pdf} \caption{The ASAS-SN light curves of two blazars observed in spatial coincidence with high-energy neutrino alerts. We show $5 \sigma$ detections and upper limits. 
The date of the corresponding neutrino arrival is marked with a vertical line.} \label{fig:lightcurves_blazars} \end{figure} \subsection{Candidate Counterparts} \label{sec:candidate_counterparts} \begin{table*} \centering \begin{tabular}{c c c c c c} \hline \hline \multicolumn{1}{c}{\textbf{Transient}} & \multicolumn{1}{c}{\textbf{ASAS-SN detection}} & \multicolumn{1}{c}{\textbf{IceCube alert}} & \multicolumn{1}{c}{\textbf{Alert epoch}} & \multicolumn{1}{c}{{$\mathbf{\Delta_{t}=t_{ASASSN}-t_{IceCube}}$}} & \multicolumn{1}{c}{\textbf{Transient type}} \\ \multicolumn{1}{c}{} & \multicolumn{1}{c}{JD} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{JD} & \multicolumn{1}{c}{days} & \multicolumn{1}{c}{}\\ \hline ZTF18adicfwn (AT2020rng) &2459089.9 &IC210608A &2459373.7 &-284 &Unknown\\ ATLAS19ljj (AT2019fxr) &2458634.9 &IC200410A &2458950.5 &-316 &Unknown\\ ZTF19aapreis (AT2019dsg) &2458618.9 &IC191001A &2458758.3 &-139 &TDE\\ ZTF19aadypig (SN~2019aah) &2458519.6 &IC191119A &2458806.5 &-287 &SN~II\\ ASASSN-18mx (SN~2018coq) &2458286.1 &IC190619A &2458654.1 &-368 &SN~II\\ ASASSN-17ot (AT2017hzv) &2458070.8 &IC180908A &2458370.3 &-300 &Unknown\\ \hline \hline \end{tabular} \caption{An excerpt of Table \ref{tab:asassn_transients_long} of the transients that occur at most 500 days before the corresponding neutrino was detected, excluding spectroscopically-confirmed type Ia supernovae and CVs where neutrino emission is not expected. We give the name of the Transient and the Julian Date of its discovery in the first two columns. Columns three and four list the corresponding IceCube alert and the neutrino arrival time. In the last two column we give the difference between transient discovery and neutrino arrival and the transient type.} \label{tab:asassn_transients} \end{table*} \begin{figure*} \begin{center} \includegraphics[width=0.95\textwidth]{figures/combined.pdf} \caption{The ASAS-SN light curves for the transients found in the footprint of the IceCube neutrinos. 
We show $5 \sigma$ detections and upper limits as a function of the days after the discovery dates listed in Table \ref{tab:asassn_transients}. For AT2020rng, we used the Zwicky Transient Facility forced-photometry service and show the $5\sigma$ detections in the r- and g-band (see Fig.~\ref{fig:at2020rng}). Vertical lines mark the time of the neutrino arrival.} \label{fig:candidate_lcs} \end{center} \end{figure*} Table \ref{tab:asassn_transients} lists all transients identified by ASAS-SN in the 500 days prior to the neutrino arrival time excluding Type Ia SNe and dwarf novae (cataclysmic variables). The list includes the pairing of the TDE AT2019dsg and IC191001 \citep{Stein:2020xhk}. We do not detect the TDE AT2019fdr \citep{ic200530a_ztf} because it was too faint to trigger our transient detection pipeline. The supernova SN~2019aah was spatially coincident with IC191119A. SN~2019aah was detected $\sim$300 days before the neutrino alert \citep{nordin_19aah} and was classified 30 days after the discovery as a Type II supernova \citep{dahiwale2020_sn2019aah}. Its spectrum does not show narrow emission lines, so there is no evidence for a strong CSM interaction to produce neutrino emission. The emission is predicted to be strongest near the optical peak \citep{murase2019, Zirakashvili16}, so we conclude that SN~2019aah is unrelated to the neutrino. SN~2018coq was spatially coincident with IC190619A. It is also a Type II SN \citep{cartier2018_sn2018coq}, discovered 370 days prior to the neutrino alert \citep{stanek18_sn18coq}. Similar to SN~2019aah, its spectrum 13 days after the discovery does not show prominent narrow lines as a sign of CSM interaction. The supernova peaked even earlier relative to the neutrino than SN~2019aah, so SN~2018coq is unlikely to be related to IC190619A. We find four neutrino-coincident events that could not be classified. All of them were first detected more than 280 days before the corresponding neutrino arrival.
AT2017hzv \citep{at2017hzv} and AT2019fxr \citep{at2019fxr} faded on the time scale of a few weeks and are not detectable at the time of the neutrino arrival. The rapid fading suggests a supernova or AGN flare origin inconsistent with the neutrino arrival time, making it unlikely that they are associated with the corresponding neutrinos. For AT2020rng, we used the publicly available Zwicky Transient Facility forced-photometry service \citep{masci2019}. We only find sporadic detections surrounded by upper limits (see Figure \ref{fig:at2020rng} in the Appendix). This, together with the relatively bright host galaxy with a mean g-band magnitude of 15.3 mag, suggests that AT2020rng is a subtraction artefact rather than a physical transient. We also examined the ASAS-SN light curves of every Fermi 4FGL blazar \citep{fermi2020_4fgl, fermi2020_dr2} within the footprint of a neutrino alert. We do not find any flaring activity coincident with the arrival of the corresponding neutrino, except for the previously-reported ASAS-SN observations of TXS 0506+056 \citep{IceCube:2018dnn}. This light curve is shown in the top panel of Figure \ref{fig:lightcurves_blazars}, with the source exhibiting an optical flare at the time of the neutrino detection. The neutrino IC190730A was observed in spatial coincidence with the Flat Spectrum Radio Quasar (FSRQ) PKS 1502+106 \citep{ic190730a,2020ApJ...893..162F} and the ASAS-SN light curve for this object is shown in the lower panel of Figure \ref{fig:lightcurves_blazars}. We confirm that the blazar was in a low optical state at the time of the neutrino arrival, as reported by \cite{ic190730a_ztf} and \cite{ic190730a_goto}. Time dependent radiation modeling found that the detection of a high-energy neutrino from this source is consistent with its multi-wavelength properties \citep{rodrigues_21}.
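The $\Delta_{t}$ values in Table \ref{tab:asassn_transients} follow directly from the Julian dates listed there; a minimal check (values transcribed from the table):

```python
# Quick check of the Delta_t column of the candidate-counterpart table:
# Delta_t = t_ASASSN - t_IceCube, with both epochs given as Julian dates.
candidates = {
    "AT2020rng": (2459089.9, 2459373.7),   # (ASAS-SN detection, IceCube alert)
    "AT2019dsg": (2458618.9, 2458758.3),
    "SN 2018coq": (2458286.1, 2458654.1),
}

delta_t = {name: t_asassn - t_icecube
           for name, (t_asassn, t_icecube) in candidates.items()}
# e.g. AT2019dsg: 2458618.9 - 2458758.3 = -139.4 days, i.e. the TDE was
# discovered ~139 days before the neutrino arrived
```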
\section{Limits} \label{sec:limits} \begin{figure} \centering \includegraphics[width=0.45\textwidth]{figures/neutrino_cdfs.pdf} \caption{The relative cumulative neutrino flux at Earth of neutrino source populations with a GRB-like and an SFR-like density evolution.} \label{fig:neutrino_cdfs} \end{figure} \begin{figure*} \centering \includegraphics[width=0.95\textwidth]{figures/limits_90cl_notde_separate_bands.pdf} \caption{Constraints on the fraction $F_{\mathrm{L}}$ of a neutrino source population as a function of the intrinsic source Peak Absolute Magnitude. } \label{fig:limits} \end{figure*} While we do not find any new candidate counterpart transients in our follow-up campaign, we can use the non-detections to derive limits on neutrino source luminosity functions following the method of \citet[in prep.]{stein21b}. Because we recover the two pre-existing source candidates (TXS 0506+056 and AT2019dsg), these non-detection limits do not apply to blazars or TDEs. For an astrophysical neutrino with an electromagnetic counterpart, we can calculate the probability of detecting the counterpart based on the percentage of the neutrino localisation that was observed by ASAS-SN. For each neutrino this fraction is listed in Table \ref{tab:nu_alerts_observed} for one and fourteen days after the neutrino arrival. The probability of detecting a counterpart also depends on the probability for the neutrino to be of astrophysical origin. This is given by IceCube as the \textit{signalness} (see Section \ref{sec:alerts} and Table \ref{tab:nu_alerts_observed}). We will assume in the following that we would have observed a transient if it had reached 18.5 mag and adopt it as the limiting magnitude of our program. At a 90\% confidence level we can constrain the fraction of neutrino sources above our limiting magnitude to be no more than 39.3\% and 15.3\% for fast transients which reach their peak within two hours and one day, respectively.
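The logic behind these confidence-level constraints can be sketched as follows: with zero detected counterparts, the probability of a detection for a given alert is the product of its signalness, its coverage fraction and the fraction $F$ of sources above the limiting magnitude, and the 90\% CL bound on $F$ follows from requiring the joint zero-detection probability to reach 10\%. The snippet below illustrates this with placeholder per-alert values, not the values of Table \ref{tab:nu_alerts_observed}:

```python
# Sketch of the 90% CL limit on the fraction F of neutrino sources whose
# optical counterparts exceed the limiting magnitude, given zero detections.
# Per-alert signalness and coverage values are illustrative placeholders.

def zero_detection_prob(F, signalness, coverage):
    """Probability that no counterpart is detected in any alert footprint."""
    p = 1.0
    for s, c in zip(signalness, coverage):
        p *= 1.0 - F * s * c  # per-alert detection probability: F * s * c
    return p

def fraction_limit(signalness, coverage, cl=0.9, tol=1e-9):
    """Largest F compatible with zero detections at confidence level cl."""
    target = 1.0 - cl
    if zero_detection_prob(1.0, signalness, coverage) > target:
        return 1.0  # even F = 1 cannot be excluded
    lo, hi = 0.0, 1.0
    while hi - lo > tol:  # bisection; zero_detection_prob is decreasing in F
        mid = 0.5 * (lo + hi)
        if zero_detection_prob(mid, signalness, coverage) > target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# 56 alerts with mean signalness ~0.5 and mean coverage ~0.5 (placeholders)
signalness = [0.5] * 56
coverage = [0.5] * 56
F_lim = fraction_limit(signalness, coverage)  # ~0.16 for these placeholders
```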
For transients that peak within fourteen days, the fraction is 10.3\%. These constraints refer to the visibility of the transients and do not include any physical properties of the source classes. To constrain physical populations of candidate neutrino sources we have to assume a rate $\dot{\rho}(z)$ at which the transients occur as a function of redshift $z$. We consider a GRB-like \citep{lien14_grbrate} and a star formation rate (SFR) like \citep{strolger15_sfrrate} source evolution. Because the optical afterglow of a GRB rapidly fades on the timescale of a few days \citep{Kann:2007cc}, we use the two-hour constraint of 39.3\%. Interacting supernovae typically rise on a timescale of at least two weeks \citep{nyholm2020}, so we use the 10.3\% constraint for our coverage after 14 days. The cumulative neutrino fluxes at Earth from these populations as calculated with \texttt{flarestack} \citep{flarestack} are shown in Figure \ref{fig:neutrino_cdfs}. Assuming an absolute magnitude for the transient, we can compute the luminosity distance at which the transient would be at the apparent magnitude to which our follow-up program is complete. As a conservative choice we use the magnitude limit derived for the Type Ia SNe in ASAS-SN (see Section \ref{sec:followup}). Using the source evolutions from Figure \ref{fig:neutrino_cdfs}, we derive the neutrino flux that would arise in this volume from the corresponding neutrino source population. Given our limits on the fraction of the population above our limiting magnitude we can convert this into constraints on the fraction of sources $F_{\mathrm{L}}$ above the source absolute magnitude, as shown in Figure \ref{fig:limits}. These results are not yet constraining for typical supernovae with absolute magnitudes up to around $-21.5$.
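The conversion from an assumed peak absolute magnitude to the horizon distance of the program is the standard distance modulus; a minimal sketch (ignoring K-corrections and extinction):

```python
def horizon_distance_mpc(abs_mag, limiting_mag=18.5):
    """Luminosity distance (in Mpc) at which a source of absolute magnitude
    abs_mag appears exactly at the limiting apparent magnitude, from the
    distance modulus m - M = 5 log10(d / 10 pc). K-corrections and
    extinction are ignored in this sketch."""
    d_pc = 10.0 ** ((limiting_mag - abs_mag + 5.0) / 5.0)
    return d_pc / 1.0e6

# a supernova at M = -21.5 is visible out to ~1 Gpc at m_lim = 18.5,
# while an M = -27 transient would be visible out to ~12.6 Gpc
```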
However, we can constrain the luminosity function of a neutrino source population with a GRB-like source evolution to produce counterparts below $-27$ mag in the V-band in about 54\% of cases and in the g-band in about 40\% of cases, one day after the neutrino arrival. This is the first such constraint on this timescale, made possible by the high observation cadence and rapid follow-up of ASAS-SN. \\ \section{Conclusions} \label{sec:conclusions} We presented the ASAS-SN optical follow-up program for IceCube high-energy, astrophysical neutrino candidates. We observed the 90\% localisation region of 56 alerts over the period from April 2016 until August 2021. Eleven of these alerts were covered within one hour after their detection. After one day we had observed 43 events and after two weeks we had observed the localisation regions for all 56 alerts to a limiting magnitude of $\sim 18.5$. For 12 events (around 17\%), this is the only optical follow-up. We did not detect any new coincident transients in our analysis, but we did recover the associations with the blazar TXS 0506+056 and the TDE AT2019dsg. We find additional transients that we disfavour as counterparts of the corresponding neutrinos. Given the non-detection of any transient counterpart in our search, we derive upper limits on the luminosity function of different possible transient neutrino source populations. Assuming the IceCube alert stream does not change, we can expect about 20 neutrino alerts per year. If our average coverage (18\% after two hours and 94\% after 14 days) does not change, we can set limits that are twice as strict on GRBs in 3.5 years and on CCSNe in 3 years, respectively. The planned extension of IceCube, called IceCube-Gen2, is expected to increase the event rate significantly and improve the spatial resolution of through-going tracks \citep{2021icecube_gen2}.
This will allow us to follow up more neutrino alerts and cover a higher percentage of the smaller neutrino localisation areas, leading to an improved sensitivity to detect optical counterparts. \section*{Acknowledgements} J.N. was supported by the Helmholtz Weizmann Research School on Multimessenger Astronomy, funded through the Initiative and Networking Fund of the Helmholtz Association, DESY, the Weizmann Institute, the Humboldt University of Berlin, and the University of Potsdam. Support for T.d.J. has been provided by NSF grants AST-1908952 and AST-1911074. R.S. and A.F. were supported by the Initiative and Networking Fund of the Helmholtz Association, Deutsches Elektronen Synchrotron (DESY). B.J.S. is supported by NSF grants AST-1907570, AST-1908952, AST-1920392, and AST-1911074. CSK and KZS are supported by NSF grants AST-1814440 and AST-1908570. J.F.B. is supported by National Science Foundation grant No.\ PHY-2012955. Support for TW-SH was provided by NASA through the NASA Hubble Fellowship grant HST-HF2-51458.001-A awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA, under contract NAS5-26555. ASAS-SN is funded in part by the Gordon and Betty Moore Foundation through grants GBMF5490 and GBMF10501 to the Ohio State University, NSF grant AST-1908570, the Mt. Cuba Astronomical Foundation, the Center for Cosmology and AstroParticle Physics (CCAPP) at OSU, the Chinese Academy of Sciences South America Center for Astronomy (CAS-SACA), and the Villum Fonden (Denmark). Development of ASAS-SN has been supported by NSF grant AST-0908816, the Center for Cosmology and AstroParticle Physics at the Ohio State University, the Mt. Cuba Astronomical Foundation, and by George Skestos. Some of the results in this paper have been derived using the healpy and HEALPix packages. The ZTF forced-photometry service was funded under the Heising-Simons Foundation grant \#12540303 (PI: Graham).
\section*{Data Availability} All information about the IceCube neutrino alerts that we used is publicly available and can be accessed via the GCN archive for the GOLD and BRONZE (\url{https://gcn.gsfc.nasa.gov/amon_icecube_gold_bronze_events.html}), HESE (\url{https://gcn.gsfc.nasa.gov/amon_hese_events.html}) and EHE (\url{https://gcn.gsfc.nasa.gov/amon_ehe_events.html}) events. The ZTF forced-photometry service is publicly available (see \url{http://web.ipac.caltech.edu/staff/fmasci/ztf/forcedphot.pdf} for a description of the access). \bibliographystyle{mnras}
\section{Introduction} \label{sec:into} The existence of the tiny neutrino mass can be naturally explained by the seesaw mechanism \cite{Minkowski:1977sc, Yanagida:1980xy, Schechter:1980gr, Sawada:1979dis, GellMann:1980vs,Glashow:1979nm, Mohapatra:1979ia}, which extends the Standard Model (SM) through Majorana-type Right Handed Neutrinos (RHNs). As a result the SM light neutrinos become Majorana particles. Alternatively, there is a simple model, the neutrino Two Higgs Doublet Model ($\nu$THDM) \cite{Davidson:2009ha, Wang:2006jy}, which can generate the Dirac mass term for the light neutrinos as well as for the other fermions in the SM. In this model we have two Higgs doublets; one is the same as the SM-like Higgs doublet and the other has a small VEV of $\mathcal{O}(1)$ eV to correctly explain the tiny neutrino mass. Due to this fact, the neutrino Dirac Yukawa coupling can be of order one. It has been discussed in \cite{Davidson:2009ha} that a global softly broken $U(1)_X$ symmetry can forbid the Majorana mass terms of the RHNs; a hidden $U(1)$ gauge symmetry can also be applied to realize the $\nu$THDM as in ref.~\cite{Nomura:2017wxf}. In this model all the SM fermions obtain Dirac mass terms via Yukawa interactions with the SM-like Higgs doublet $(\Phi_2)$ whereas only the neutrinos get Dirac masses through the Yukawa coupling with the other Higgs doublet $(\Phi_1)$. Another scenario for the generation of the Dirac neutrino mass through a dimension-five operator has been studied in \cite{CentellesChulia:2018gwr}. The corresponding Yukawa interactions of the Lagrangian can be written as \begin{eqnarray} \mathcal{L}_{Y}=-\overline{Q}_L Y^u \widetilde{\Phi}_2 u_R -\overline{Q}_L Y^d \Phi_2 d_R -\overline{L}_L Y^e \Phi_2 e_R -\overline{L}_L Y^\nu \widetilde{\Phi}_1 \nu_R +\rm{H.
c.} \label{Yuk1} \end{eqnarray} where $\widetilde{\Phi}_i = i \sigma_2 \Phi_i^* (i=1,2)$, $Q_L$ is the SM quark doublet, $L_L$ is the SM lepton doublet, $e_R$ is the right handed charged lepton, $u_R$ is the right handed up-quark, $d_R$ is the right handed down-quark and $\nu_R$ are the RHNs. The $\Phi_1$ and $\nu_R$ are assigned the global charge $3$ under the $U(1)_X$ group. The global symmetry forbids the Majorana mass term between the RHNs. In the original model~\cite{Davidson:2009ha}, the global symmetry is softly broken by the mixed mass term between $\Phi_1$ and $\Phi_2$ $(m_{12}^2 \Phi_1^\dagger \Phi_2)$ such that a small VEV is obtained by the seesaw-like formula \begin{eqnarray} v_1 =\frac{m_{12}^2 v_2}{M_A^2}, \end{eqnarray} where $M_A$ is the pseudo-scalar mass in \cite{Davidson:2009ha}. If $M_A \sim 100$ GeV and $m_{12} \sim {\cal O}(100)$ keV then $v_1$ can be obtained as $\mathcal{O}(1)$ eV. In the paper~\cite{Baek:2016wml}, the model is extended to include a singlet scalar $S$ which breaks the $U(1)_X$ symmetry. The soft term $m_{12}^2$ is identified with $\mu \langle S \rangle$ where $\mu$ is the coupling of the Higgs mixing term, $\mu \Phi_1^\dagger \Phi_2 S + h.c.$. It has been shown in~\cite{Baek:2016wml} that an SM singlet fermion charged under $U(1)_X$ could be a potential DM candidate. In this paper we extend the model with a natural scalar Dark Matter (DM) candidate $(X)$. In this model the global $U(1)_X$ symmetry is spontaneously broken down to a $Z_3$ symmetry by the VEV of a new singlet scalar $S$. The remnant $Z_3$ symmetry makes the DM candidate stable. The $Z_3$ symmetry would be broken by quantum gravity effects and DM would decay via effective interactions \cite{Mambrini:2015sia}. This can be avoided if the $U(1)_X$ is a remnant of a local symmetry at a high energy scale, and we assume the $Z_3$ symmetry is not broken.
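As a quick numerical check of the seesaw-like relation $v_1 = m_{12}^2 v_2 / M_A^2$ quoted above, one can plug in the benchmark values from the text:

```python
# Numerical check of v1 = m12^2 * v2 / M_A^2 for the benchmark quoted in the
# text: M_A ~ 100 GeV and m12 ~ 100 keV should give v1 of O(1) eV.
GEV_TO_EV = 1.0e9

M_A = 100.0                 # pseudo-scalar mass in GeV
m12 = 100.0e3 / GEV_TO_EV   # 100 keV expressed in GeV
v2 = 246.0                  # electroweak VEV in GeV

v1_ev = (m12**2 * v2 / M_A**2) * GEV_TO_EV
# v1 ~ 0.25 eV, i.e. the eV scale needed for the neutrino Dirac mass
```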
A CP-odd component of $S$ becomes a Goldstone boson, and hence we study the DM annihilation in this model and compare it with the current experimental sensitivity. The paper is organized as follows. In Sec.~\ref{Model} we describe the model. In Sec.~\ref{DMP} we discuss the DM phenomenology and finally in Sec.~\ref{Conc} we conclude. \section{The Model} \label{Model} We discuss the extended version of the model in \cite{Davidson:2009ha} with a scalar field $(X)$. We list the scalar and the RHN sectors of the particle content in Tab.~\ref{tab1} \begin{table}[h] \centering \begin{tabular}{|c||c|c|c|c||c|}\hline\hline &\multicolumn{4}{c||}{Scalar Fields} & \multicolumn{1}{c|}{New Fermion} \\\hline & ~$\Phi_1$~ & ~$\Phi_2$~ & ~$S$ ~ & ~$X$~ & ~$\nu_R$ \\\hline $SU(2)_L$ & $\bf{2}$ & $\bf{2}$ & $\bf{1}$ & $\bf{1}$ & $\bf{1}$ \\\hline $U(1)_Y$ & $\frac12$ & $\frac12$ & $0$ & $0$ & $0$ \\\hline $U(1)_X$ & $3$ & $0$ & $3$ & $1$ & $3$ \\\hline \end{tabular} \caption{Scalar fields and new fermion in our model.} \label{tab1} \end{table} The Yukawa interactions between the lepton doublet $(L_L)$, the doublet scalars $(\Phi_1, \Phi_2)$ and the gauge-singlet RHNs $(\nu_R)$ can be written as \begin{eqnarray} \mathcal{ L} &\supset & - Y_{ij}^e \bar L_{L_{i}} \Phi_2 e_{Rj} - Y^\nu_{ij} \bar L_{L_{i}} \tilde \Phi_1 \nu_{Rj} + \rm{H.c}. \label{eq:Yukawa} \end{eqnarray} We assume that the Yukawa coupling constants $Y_{ij}^e$ and $Y_{ij}^\nu$ are real. The scalar potential can be written as \begin{eqnarray} V(\Phi_1, \Phi_2, S, X) &= & - m_{11}^2 \Phi^\dagger_1 \Phi_1 - m_{22}^2 \Phi_2^\dagger \Phi_2 - m_{S}^2 S^\dagger S + M_X^2 X^\dagger X - (\mu \Phi^\dagger_1 \Phi_2 S + h.c.)
\nonumber \\ && + \lambda_1 (\Phi_1^\dagger \Phi_1)^2 + \lambda_2 (\Phi_2^\dagger \Phi_2)^2 + \lambda_3 (\Phi_1^\dagger \Phi_1)( \Phi_2^\dagger \Phi_2) + \lambda_4 (\Phi_1^\dagger \Phi_2)( \Phi_2^\dagger \Phi_1) \nonumber \\ &&+ \lambda_S (S^\dagger S)^2+ \lambda_{1S} \Phi_1^\dagger \Phi_1 S^\dagger S + \lambda_{2S} \Phi_2^\dagger \Phi_2 S^\dagger S +\lambda_X (X^\dagger X)^2 + \lambda_{1X} \Phi_1^\dagger \Phi_1 X^\dagger X \nonumber \\ && + \lambda_{2X} \Phi_2^\dagger \Phi_2 X^\dagger X+ \lambda_{SX} S^\dagger S X^\dagger X - (\lambda_{3X} S^\dagger X X X + \rm{H.c.}) . \label{eq:potential} \end{eqnarray} The Dirac mass terms of the neutrinos are generated by the small VEV of $\Phi_1$. According to \cite{Davidson:2009ha, Wang:2006jy} we assume that the VEV of $\Phi_1$ is much smaller than the electroweak scale. The vacuum stability analysis of a general scalar potential has been studied in \cite{Kannike:2016fmd}. Additionally, a remnant $Z_3$ symmetry remains when $U(1)_X$ is broken by the non-zero VEV of $S$. Here $X$ is the only stable $Z_3$-charged (scalar) particle, and as a result $X$ can be considered a potential Dark Matter (DM) candidate. The mass term $M_X^2$ of $X$ in Eq.~\ref{eq:potential} is positive, which prevents $X$ from developing a VEV, and as a result the $Z_3$ symmetry guarantees the stability of $X$ as a DM candidate. It has already been discussed in \cite{Baek:2016wml} that a CP-odd component in $S$ becomes a massless Goldstone boson. We then write the scalar fields as follows \begin{eqnarray} \Phi'_1 &=& \begin{pmatrix} \phi^+_1 \\ \frac{1}{\sqrt{2}} (v_1 + h_1 + i a_1) \end{pmatrix}, \quad \Phi_2 = \begin{pmatrix} \phi^+_2 \\ \frac{1}{\sqrt{2}} (v_2 + h_2 + i a_2) \end{pmatrix}, \\ \quad X &=& X' e^{ i\frac{a_S}{2v_S}}, \quad \Phi_1 = \Phi'_1 e^{ i\frac{a_S}{v_S}}, ~~S = \frac{1}{\sqrt{2}} r_S e^{ i\frac{a_S}{v_S}}, \end{eqnarray} where $r_S = \rho + v_S$.
We assume $X$ does not develop a VEV while the VEVs of $\Phi_1$, $\Phi_2$ and $S$ are obtained by requiring the stationary conditions $\partial V(v_1,v_2,v_S)/\partial v_i =0$, namely \begin{eqnarray} -2 m_{11}^2 v_1 + 2 \lambda_1 v_1^3 + v_1 (\lambda_{1S} v_S^2 + \lambda_3 v_2^2 + \lambda_4 v_2^2) - \sqrt{2} \mu v_2 v_S &=&0, \nonumber \\ -2 m_{22}^2 v_2 + 2 \lambda_2 v_2^3 + v_2 (\lambda_{2S} v_S^2 + \lambda_3 v_1^2 + \lambda_4 v_1^2) - \sqrt{2} \mu v_1 v_S &=& 0, \nonumber \\ -2 m_{S}^2 v_S + 2 \lambda_S v_S^3 + v_S (\lambda_{1S} v_1^2 + \lambda_{2S} v_2^2 ) - \sqrt{2} \mu v_1 v_2 &=& 0. \label{eqstn} \end{eqnarray} We then find that these conditions can be satisfied with $v_1 \simeq \mu \ll \{ v_2, v_S \}$ and the SM Higgs VEV is given as $v \simeq v_2 \simeq 246$ GeV. From the first of Eqs.~\ref{eqstn} we find that $v_1$ is proportional to, and of the same order as, $\mu$: \begin{eqnarray} v_1 \simeq \frac{\sqrt{2} \mu v_2 v_S}{\lambda_{1S} v_S^2 +(\lambda_3+\lambda_4) v_2^2 -2 m_{11}^2}. \end{eqnarray} The smallness of $v_1 (\sim \mu)$ is required to keep $v_2$ and $v_S$ at the electroweak scale. Considering the neutrino mass scale $m_\nu \sim 0.1$ eV, the value of $\mu/v_2$ should be as small as $\mu/v_2 \sim {\mathcal O}(10^{-12})$ to ensure $Y^\nu$ of ${\mathcal O}(1)$; for comparison, $m_e/v_2 \sim {\mathcal O}(10^{-6})$. Hence $v_1$ is considered to be smaller than the other VEVs. It is also interesting to notice that $\mu=0$ enhances the symmetry of the Lagrangian, in the sense that we can assign an arbitrary $U(1)_X$ charge to $\Phi_1$, which ensures that the radiative generation of the $\mu$-term is proportional to $\mu$ itself. Hence a technically natural small value of $\mu$ is acceptable \cite{tHooft:1979rat,Baek:2014sda}. Now we identify the mass spectra in the scalar sector.
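As a sanity check, the approximate expression for $v_1$ indeed solves the first stationarity condition in Eq.~\ref{eqstn} up to the negligible $2\lambda_1 v_1^3$ term; the benchmark values below are arbitrary illustrations, not fits:

```python
import math

# Check of the approximate v1 solution against the first stationarity
# condition. All parameter values are arbitrary benchmarks chosen so that
# v1 << v2, v_S as assumed in the text.
mu = 1.0e-10                      # GeV, tiny soft scale
v2, vS = 246.0, 1000.0            # GeV
lam1, lam1S, lam3, lam4 = 0.1, 0.1, 0.1, 0.1
m11sq = 1.0e4                     # GeV^2

v1 = math.sqrt(2) * mu * v2 * vS / (
    lam1S * vS**2 + (lam3 + lam4) * v2**2 - 2 * m11sq)

# first stationarity condition; should vanish up to the neglected cubic term
residual = (-2 * m11sq * v1 + 2 * lam1 * v1**3
            + v1 * (lam1S * vS**2 + (lam3 + lam4) * v2**2)
            - math.sqrt(2) * mu * v2 * vS)
```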
{\tt Charged scalar:} In this case we calculate the mass matrix in the basis $(\phi_{1}^{\pm}, \phi_{2}^{\pm})$, where $\phi_1^\pm$ is approximately the physical charged scalar while $\phi_2^\pm$ is approximately the NG boson absorbed by the $W^\pm$ boson. In the following we write the physical charged scalar field as $H^\pm \simeq \phi^\pm_1$. The charged scalar mass matrix can be written as \begin{equation} M^2_{H^\pm} = \begin{pmatrix} \frac{v_2 (\sqrt{2} \mu v_S - \lambda_4 v_1 v_2 )}{2v_1} & - \frac{1}{2} (\sqrt{2}\mu v_S - \lambda_4 v_1 v_2) \\ - \frac{1}{2} (\sqrt{2}\mu v_S - \lambda_4 v_1 v_2) & \frac{v_1 (\sqrt{2} \mu v_S - \lambda_4 v_1 v_2)}{2 v_2} \end{pmatrix} \simeq \begin{pmatrix} \frac{v_2 (\sqrt{2} \mu v_S - \lambda_4 v_1 v_2 )}{2v_1}& 0 & \\ 0 & 0 & \end{pmatrix}. \end{equation} The charged Higgs mass can be written as \begin{eqnarray} m_{H^{\pm}}^2 \simeq \frac{v_2 (\sqrt{2} \mu v_S-\lambda_4 v_1 v_2)}{2 v_1}. \end{eqnarray} {\tt CP-even neutral scalar:} In the case of the CP-even scalars, all three components are physical. Hence the mass matrix can be written in the basis of $(h_1,h_2, \rho)$ as \begin{align} M^2_H &= \begin{pmatrix} 2 \lambda_1 v_1^2 + \frac{\mu v_2 v_S}{\sqrt{2} v_1} & (\lambda_3 + \lambda_4) v_1 v_2 - \frac{\mu v_S}{\sqrt{2}} & \lambda_{1S} v_1 v_S - \frac{\mu v_2}{\sqrt{2}} \\ (\lambda_3 + \lambda_4) v_1 v_2 - \frac{\mu v_S}{\sqrt{2}} & 2 \lambda_2 v_2^2 + \frac{\mu v_1 v_S}{\sqrt{2} v_2} & \lambda_{2S} v_2 v_S - \frac{\mu v_1}{\sqrt{2}} \\ \lambda_{1S} v_1 v_S - \frac{\mu v_2}{\sqrt{2}} & \lambda_{2S} v_2 v_S - \frac{\mu v_1}{\sqrt{2}} & 2 \lambda_S v_S^2 + \frac{\mu v_1 v_2}{\sqrt{2} v_S} \end{pmatrix} \nonumber \\ & \simeq \begin{pmatrix} \frac{ \mu v_2 v_S }{\sqrt{2} v_1} & 0 & 0 \\ 0 & 2 \lambda_2 v_2^2 & \lambda_{2S} v_2 v_S \\ 0 & \lambda_{2S} v_2 v_S & 2 \lambda_{S} v_S^2 \end{pmatrix}.
\end{align} We find that all the masses of the mass eigenstates, $H_i (i=1,2,3)$, are at the electroweak scale and the mixings between $h_1$ and the other components are negligibly small, while $h_2$ and $\rho$ can have a sizable mixing. The mass eigenvalues and the mixing angle for the $h_2$ and $\rho$ system are given by \begin{align} & m_{H_2,H_3}^2 = \frac{1}{2} \left[ m_{22}^2 + m_{33}^2 \mp \sqrt{(m_{22}^2-m_{33}^2)^2 + 4 m_{23}^4} \right], \\ & \tan 2 \theta = \frac{-2 m_{23}^2}{m_{22}^2 - m_{33}^2}, \\ & m_{22}^2 = 2 \lambda_2 v_2^2, \quad m_{33}^2 = 2 \lambda_{S} v_S^2, \quad m_{23}^2 = \lambda_{2S} v_2 v_S. \end{align} Hence the mass eigenstates are obtained as \begin{equation} \begin{pmatrix} H_1 \\ H_2 \\ H_3 \end{pmatrix} \simeq \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos \theta & - \sin \theta \\ 0 & \sin \theta & \cos \theta \end{pmatrix} \begin{pmatrix} h_1 \\ h_2 \\ \rho \end{pmatrix}. \label{eq:eigenstates} \end{equation} Here $H_2$ is the SM-like Higgs, $h$, and $m_{H_{2}} \simeq m_h$, where the mixing angle $\theta$ between $H_2$ and $H_3$ is constrained to be $\sin\theta\leq0.2$ by the LHC Higgs data \cite{Chpoi:2013wga, Cheung:2015dta,Cheung:2015cug}, using the numerical analyses of the Higgs decays following \cite{Djouadi:1997yw, Djouadi:2006bz}. {\tt CP-odd neutral scalar:} Calculating the mass matrix of the pseudo-scalars in the basis $(a_1, a_2, a_S)$, we get \begin{equation} M^2_A = \frac{\mu}{\sqrt{2}} \begin{pmatrix} \frac{v_2 v_S}{v_1} & - v_S & - v_2 \\ -v_S & \frac{v_1 v_S}{v_2} & v_1 \\ -v_2 & v_1 & \frac{v_1 v_2}{v_S} \end{pmatrix} \simeq \begin{pmatrix} \frac{\mu v_2 v_S}{\sqrt{2} v_1} & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}, \end{equation} using $S\simeq \frac{v_S+\rho+ i a_S}{\sqrt{2}}$. In the last step we used the approximation, $v_1 (\sim \mu) \ll v_2, v_S$.
We find three mass eigenstates, \begin{align} A &= a_1 -\frac{v_1}{v_2} a_2 - \frac{v_1}{v_S} a_S, \nonumber \\ G^0 &= \frac{v_1}{v_2} a_1 +a_2, \nonumber \\ a &= \frac{v_1}{v_S} a_1 -\frac{v_1^2}{v_2 v_S} a_2 +\left(1+\frac{v_1^2}{v_2^2} \right) a_S, \end{align} up to normalization. They correspond to the massive pseudo-scalar, the massless Nambu-Goldstone (NG) mode which is absorbed by the $Z$ boson, and a massless physical Goldstone boson associated with the $U(1)_X$ breaking, respectively. Hence the mass of $A$ is given by \begin{equation} m_A^2 =\frac{\mu(v_1^2 v_2^2 + v_1^2 v_S^2 + v_2^2 v_S^2)}{\sqrt{2} v_1 v_2 v_S} \simeq \frac{\mu v_2 v_S}{\sqrt{2} v_1}, \end{equation} which is at the electroweak scale. It can be shown~\cite{Baek:2016wml} that the Goldstone boson, $a$, is safe from the phenomenological constraints such as $Z \to H_i a (i=1,2,3)$ decay, stellar cooling from the interaction $a \overline{e} \gamma_5 e$, {\it etc.}, because it interacts with the SM particles only via the highly-suppressed ($\sim v_1/v_{2,S}$) mixing with the SM Higgs. Note that, in our analysis below, we approximate the pseudo-scalars as $A \simeq a_1$, $G^0 \simeq a_2$ and $a \simeq a_S$, since we assume $v_1 \ll v_2, v_S$ in realizing the small neutrino mass. Here we also discuss the decoupling of the physical Goldstone boson from the thermal bath, where we assume it is thermalized via the Higgs portal interaction. The interactions $\rho \partial_\mu a_S \partial^\mu a_S/v_S$, $\lambda_{2S} v_S v_2 \rho h_2$ and the SM Yukawa interactions generate the effective interaction between the Goldstone boson $a$ and the SM fermions \begin{equation} - \frac{\lambda_{2S} m_f}{2 m_{H_3}^2 m_{H_2}^2} \partial_\mu a \partial^\mu a \bar f f, \end{equation} where $m_f$ is the mass of the SM fermion $f$, and we used $a_S \simeq a$.
The temperature, $T_a$, at which $a$ decouples from the thermal bath is roughly estimated by~\cite{Weinberg:2013kea} \begin{equation} \frac{\text{collision rate}}{\text{expansion rate}} \simeq \frac{\lambda_{2S}^2 m_f^2 T_a^5 m_{PL}}{m_{H_2}^4 m_{H_3}^4} \sim 1, \label{eq:decoup_a} \end{equation} where $m_{PL}$ denotes the Planck mass and $m_f$ should be smaller than $T_a$ so that $f$ is in the thermal bath. The decoupling temperature is then calculated by \begin{equation} T_a \sim 2 \, {\rm GeV} \left( \frac{m_{H_3}}{100 \, {\rm GeV}} \right)^{\frac{4}{5}} \left( \frac{{\rm GeV}}{m_f} \right)^{\frac{2}{5}} \left( \frac{0.01}{\lambda_{2S}} \right)^{\frac{2}{5}}. \end{equation} Thus the Goldstone boson $a$ can decouple from the thermal bath sufficiently earlier than muon decoupling and does not contribute to the effective number of active neutrinos\footnote{If $m_{H_3} \approx 500$ MeV and $\lambda_{2S} \approx 0.005$, then $a$ can make a sizable contribution: $\Delta N_{\rm eff}=4/7$~\cite{Weinberg:2013kea}.}~\cite{Brust:2013xpv}. Note that the Goldstone boson should be in the thermal bath at a temperature below that of the freeze-out of DM when we consider that the relic density of DM, $X$, is explained by the process $X \bar{X} \to a a$ in our analysis below. Taking the minimum DM mass as $\sim 100$ GeV, the freeze-out temperature $T_f$ is larger than $\sim 100/x_f$ GeV $\sim 4$ GeV, where $x_f = m_{\rm DM}/T_f \sim 25$. Therefore we can get $T_f > T_a$ even with small $\lambda_{2S} (=0.01)$ as long as $m_{H_3}$ is not much heavier than the electroweak scale. As the phenomenology of the Higgs sector has been discussed in \cite{Davidson:2009ha,Machado:2015sha, Bertuzzo:2015ada, Baek:2016wml}, we concentrate on the DM phenomenology in the following analysis. \section{DM phenomenology} \label{DMP} In this section, we discuss the DM physics of our model, such as the relic density and direct and indirect detection, which are compared with experimental constraints.
Since the Higgs portal interaction is strongly constrained by DM direct detection~\cite{Baek:2011aa,Baek:2014kna,Baek:2012se,Cline:2013gha}, we consider the case of small mixing so that $h_1 \simeq H_1$, $h_2 \simeq H_2$ and $\rho \simeq H_3$; here $H_2$ is the SM-like Higgs in our DM analysis. \begin{figure}[t] \begin{center} \includegraphics[bb=0 0 581 250, scale=0.23]{diagram1.pdf} \qquad \qquad \includegraphics[bb=0 0 581 250, scale=0.23]{diagram2.pdf} \includegraphics[bb=0 0 650 450, scale=0.23]{diagram3.pdf} \qquad \qquad \qquad \includegraphics[bb=0 0 450 450, scale=0.23]{diagram4.pdf} \end{center} \caption{Diagrams (I), (II), (III) and (IV) correspond to the DM annihilation processes in scenarios I, II, III and IV. \label{fig:diagram1} } \end{figure} \subsection*{Dark matter interaction} First, the mass of the dark matter candidate $X$ is given by~\cite{Baek:2014kna} \begin{align} m_{X}^2 = M_X^2 + \frac{\lambda_{1X}}{2} v_1^2 + \frac{\lambda_{2X}}{2} v_2^2 + \frac{\lambda_{SX}}{2} v_S^2 \end{align} where the real and imaginary parts of $X$ have the same mass and $X$ is taken as a complex scalar field; this is due to the remnant $Z_3$ symmetry. The interactions relevant to DM physics are given by \begin{align} {\cal L} \ \supset \ & \frac{1}{v_S} \partial_\mu a (X \partial^\mu X^* - X^* \partial^\mu X) + \frac{1}{4 v_S^2} \partial_\mu a \partial^\mu a X^* X \nonumber \\ & + \frac{\lambda_{1X}}{2} \left(H^+ H^- + \frac{1}{2} H_1^2 + A^2 \right)X^* X + \frac{\lambda_{2X}}{4} (2 v_2 H_2 + H_2^2)X^* X \nonumber \\ & + \frac{\lambda_{SX}}{4} (2v_S H_3 + H_3^2)X^* X + \frac{\lambda_{3X}}{2} (v_S + H_3) (X X X + c.c.)
\nonumber \\ & - \mu_{SS} H_3^3 + \frac{1}{v_S} H_3 \partial_\mu a \partial^\mu a - \mu_{1S} H_3 \left(H^+ H^- + \frac{1}{2} (H_1^2 + A^2) \right) - \frac{\mu_{2S}}{2} H_3 H_2^2, \label{eq:intDM} \end{align} where we ignored terms proportional to $v_1$ since this VEV is tiny, $\mu_{SS} \equiv m_{H_3}^2/(2v_S)$, $\mu_{1S} \equiv \lambda_{1S} v_S$, $\mu_{2S} \equiv \lambda_{2S} v_S$, and omitted the scalar mixing factors $\sin \theta$ ($\cos \theta$) assuming $\cos \theta \simeq 1$ and $\sin \theta \ll 1$. Thus the relevant free parameters to describe DM physics are summarized as \begin{equation} \{ m_{X}, m_{H_1}, m_{H_3}, m_A, m_{H^\pm}, v_S, \lambda_{1X}, \lambda_{2X}, \lambda_{SX}, \lambda_{3X}, \mu_{1S}, \mu_{2S} \}, \end{equation} where we choose $\mu_{1S,2S}$ as free parameters instead of $\lambda_{1S,2S}$ and we use $\mu_{SS} = m_{H_3}^2/(2v_S)$. In our analysis, we focus on several specific scenarios for DM physics by making assumptions on the model parameters to illustrate particular DM annihilation processes. These scenarios are given as follows: \begin{itemize} \item Scenario-I: 100 GeV $< v_S < 2000$ GeV, $\{ \lambda_{1X}, \lambda_{2X}, \lambda_{SX}, \lambda_{3X}, \mu_{1S}/v \} \ll 1$. \item Scenario-II: $v_S \gg v$, $\{ \lambda_{SX}, \mu_{1S}/v \} \gg \{ \lambda_{1X}, \lambda_{2X}, \lambda_{3X} \} $. \item Scenario-III: $v_S \gg v$, $\lambda_{1X} \gg \{ \lambda_{2X}, \lambda_{SX}, \lambda_{3X}, \mu_{1S}/v \} $. \item Scenario-IV: $v_S \gg v$, $\lambda_{3X} \gg \{\lambda_{1X}, \lambda_{2X}, \lambda_{SX}, \mu_{1S}/v \} $. \end{itemize} Here we set $v\equiv v_2 \simeq 246\, {\rm GeV}$ since $v_1 \ll v_2$. In scenario-I DM mainly annihilates into the $a_S a_S$ and $a_S H_3$ final states as shown in Fig.~\ref{fig:diagram1}-(I). In scenario-II DM annihilates via the $H_3$ portal interaction as in Fig.~\ref{fig:diagram1}-(II).
In scenario-III DM annihilates into components of $\Phi_1$ through the contact interaction with coupling $\lambda_{1X}$ as shown in Fig.~\ref{fig:diagram1}-(III). Finally, scenario-IV represents the semi-annihilation process $X X \to X H_3$ as shown in Fig.~\ref{fig:diagram1}-(IV). In our analysis, we assume $\lambda_{2S} \ll {\cal O}(1)$ so that we can neglect DM annihilation via the SM Higgs portal interaction, since that case is well known and the constraints from direct detection experiments are strong. \begin{figure}[t] \begin{center} \includegraphics[bb=-100 0 581 230, scale=1]{Vs_vs_MXR.pdf} \end{center} \caption{ Scatter plot for parameters on the $m_{X}$-$v_S$ plane under the DM relic abundance bound in Scenario-I.} \label{fig:DM1} \end{figure} \begin{figure}[t] \begin{center} \includegraphics[bb=0 20 581 250, scale=0.94]{LambdaSX_vs_MXR.pdf} \includegraphics[bb=-300 0 581 0, scale=0.81]{LambdaSX_vs_M1S.pdf} \end{center} \caption{ Scatter plots for parameters on the $m_{X}$-$\lambda_{SX}$ and $\mu_{1S}$-$\lambda_{SX}$ planes in the left and right panels under the DM relic abundance bound in Scenario-II.} \label{fig:DM2} \end{figure} \begin{figure}[t] \begin{center} \includegraphics[bb=0 20 581 250, scale=0.94]{Lambda1X_vs_MXR.pdf} \includegraphics[bb=-230 0 581 0, scale=0.94]{Lambda12X_vs_MXR.pdf} \end{center} \caption{ Left: Scatter plot for parameters on the $m_{X}$-$\lambda_{1X}$ plane under the DM relic abundance bound in Scenario-III. Right: that on the $m_{X}$-$\lambda_{3X}$ plane in Scenario-IV.} \label{fig:DM3} \end{figure} \subsection*{Relic density} Here we estimate the thermal relic density of DM for each scenario given above. The relic density is calculated numerically with {\tt micrOMEGAs 4.3.5}~\cite{Belanger:2014vza}, which solves the Boltzmann equation with the relevant interactions implemented. In the numerical calculations we use randomly generated parameter sets in the following parameter ranges.
For all scenarios we apply the parameter settings \begin{align} &m_{X} \in [50,500] \ {\rm GeV}, \quad \mu_{2S} = 1 \ {\rm GeV}, \quad M_{H_1} = M_{A} = M_{H^\pm} \in [100, 1000] \ {\rm GeV}, \nonumber \\ & \lambda_{2X} \ll 1, \end{align} where the setting for $\lambda_{2X}$ is to suppress the SM Higgs portal interactions and the small value of $\mu_{2S}$ is to suppress the scalar mixing. Then we set the parameter regions for each scenario as follows: \begin{align} {\rm Scenario-I}: \ \ & \ v_S \in [100, 2000] \ {\rm GeV}, \quad \lambda_{SX, 1X, 3X} \in [10^{-8}, 10^{-4}], \nonumber \\ & \ \mu_{1S} \in [0.001, 0.1] \ {\rm GeV}, \quad M_{H_3} \in [10, 30] \ {\rm GeV}, \\ {\rm Scenario-II}: \ & \ v_S \in [3000, 10000] \ {\rm GeV}, \quad \lambda_{SX} \in [10^{-3}, 1], \quad \lambda_{1X, 3X} \in [10^{-8}, 10^{-4}], \nonumber \\ & \ \mu_{1S} \in [100, 1000] \ {\rm GeV}, \quad M_{H_3} \in [150, 2000] \ {\rm GeV}, \\ {\rm Scenario-III}: & \ v_S \in [3000, 10000] \ {\rm GeV}, \quad \lambda_{1X} \in [10^{-3}, 1], \quad \lambda_{SX, 3X} \in [10^{-8}, 10^{-4}], \nonumber \\ & \ \mu_{1S} \in [0.001, 0.1] \ {\rm GeV}, \quad M_{H_3} \in [150, 2000] \ {\rm GeV}, \\ {\rm Scenario-IV}: & \ v_S \in [3000, 10000] \ {\rm GeV}, \quad \lambda_{3X} \in [10^{-3}, 1], \quad \lambda_{SX, 1X} \in [10^{-8}, 10^{-4}], \nonumber \\ & \ \mu_{1S} \in [0.001, 0.1] \ {\rm GeV}, \quad M_{H_3} \in [50, m_{X}] \ {\rm GeV}. \end{align} Then we search for the parameter sets which can accommodate the observed relic density. Here we apply the approximate range~\cite{Ade:2015xua} \begin{equation} 0.11 \lesssim \Omega h^2 \lesssim 0.13. \end{equation} In Fig.~\ref{fig:DM1}, we show parameter points on the $m_{X}$-$v_S$ plane which can explain the observed relic density of DM in Scenario-I. In this scenario, the relic density is mostly determined by the cross section of the $X X \to a_S a_S$ process, which depends on $m_{X}/v_S$ via the second term of the Lagrangian in Eq.~(\ref{eq:intDM}).
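This scaling can be illustrated with a rough back-of-the-envelope estimate (the actual analysis uses {\tt micrOMEGAs}). We assume a toy cross section $\langle \sigma v \rangle \approx m_X^2/(32\pi v_S^4)$ for $XX \to a_S a_S$, whose $O(1)$ normalization is a guess made only for illustration, together with the standard WIMP relation $\Omega h^2 \approx 3\times 10^{-27}\,{\rm cm^3\,s^{-1}}/\langle \sigma v \rangle$:

```python
import math

# Illustrative estimate for Scenario-I (the paper's numbers come from micrOMEGAs).
# Assumed toy cross section <sigma v> ~ m_X^2 / (32 pi v_S^4) in GeV^-2 for
# X X -> a_S a_S; the O(1) normalization is a guess, only the scaling matters.
GEV2_TO_CM3S = 3.894e-28 * 2.998e10   # GeV^-2 -> cm^2, times c in cm/s

def sigma_v(m_x, v_s):
    """Toy thermally averaged annihilation cross section in cm^3/s."""
    return m_x**2 / (32.0 * math.pi * v_s**4) * GEV2_TO_CM3S

def relic_density(m_x, v_s):
    """Standard WIMP estimate: Omega h^2 ~ 3e-27 cm^3 s^-1 / <sigma v>."""
    return 3.0e-27 / sigma_v(m_x, v_s)

def v_s_for_relic(m_x, target=0.12):
    """v_S reproducing Omega h^2 = target; scales as sqrt(m_X)."""
    sv_natural = (3.0e-27 / target) / GEV2_TO_CM3S   # needed <sigma v> in GeV^-2
    return (m_x**2 / (32.0 * math.pi * sv_natural)) ** 0.25

for m_x in (100, 200, 400):
    print(m_x, round(v_s_for_relic(m_x)))
```

This toy estimate gives $v_S \propto \sqrt{m_X}$, rising from a few hundred GeV at $m_X = 100$ GeV toward a TeV at $m_X = 400$ GeV, consistent with the rising band in Fig.~\ref{fig:DM1}.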
Thus the preferred value of $v_S$ becomes larger as the DM mass increases, as seen in Fig.~\ref{fig:DM1}. In the left and right panels of Fig.~\ref{fig:DM2}, we respectively show parameter points on the $m_{X}$-$\lambda_{SX}$ and $\mu_{1S}$-$\lambda_{SX}$ planes satisfying the correct relic density in Scenario-II. In this scenario, the region $m_{X} \lesssim 100$ GeV requires a relatively large $\lambda_{SX}$ coupling since the scalar boson modes $\{ H_3 H_3, H_1H_1, AA, H^\pm H^\mp \}$ are forbidden by our assumption on the scalar boson masses. On the other hand, the region $m_{X} > 100$ GeV allows a wider range of $\lambda_{SX}$, roughly $0.01 \lesssim \lambda_{SX} \lesssim 1.0$, since DM can annihilate into the other scalar bosons if kinematically allowed. In the left (right) panel of Fig.~\ref{fig:DM3}, we show the parameter region on the $m_{X}$-$\lambda_{1X}$($\lambda_{3X}$) plane satisfying the relic density in Scenario-III(IV). In scenario-III, the DM mass should be larger than $\sim 100$ GeV to annihilate into the scalar bosons from $\Phi_1$, and the required value of the coupling is $0.2 \lesssim \lambda_{1X} \lesssim 1.0$ for $m_{X} \leq 500$ GeV. In scenario-IV, the required value of the coupling $\lambda_{3X}$ shows similar behavior to $\lambda_{1X}$ in scenario-III for $m_X > 100$ GeV but is slightly larger. This is because the semi-annihilation process requires a larger cross section than the standard annihilation process. \subsection*{Direct detection} Here we briefly discuss constraints from direct detection experiments by estimating the DM-nucleon scattering cross section in our model. We focus on scenario-II, since there DM can have a sizable interaction with nucleons via $H_2$ and $H_3$ exchange, and investigate the upper limit on the mixing $\sin \theta$.
The relevant interaction Lagrangian including the mixing effect is given by \begin{equation} \mathcal{L} \supset \frac{\lambda_{SX} v_S}{2} X^* X (c_\theta H_3 - s_\theta H_2) + \sum_{q} \frac{m_q}{v} \bar q q (s_\theta H_3 + c_\theta H_2), \end{equation} where $q$ denotes the SM quarks with mass $m_q$, and we assumed $\mu_X \ll \lambda_{SX} v_S$ as in the relic density calculation. We thus obtain the following effective Lagrangian for the DM-quark interaction by integrating out $H_2$ and $H_3$: \begin{equation} \mathcal{L}_{\rm eff} = \sum_q \frac{\lambda_{SX} v_S m_q s_\theta c_\theta}{2v} \left( \frac{1}{m_h^2} - \frac{1}{m_{H_3}^2} \right) X^*X \bar q q, \end{equation} where $m_{H_2} \simeq m_h = 125$ GeV is used. The effective interaction can be rewritten in terms of the nucleon $N$ instead of quarks such that \begin{equation} \mathcal{L}_{\rm eff} = \frac{f_N \lambda_{SX} v_S m_N s_\theta c_\theta}{v} \left( \frac{1}{m_h^2} - \frac{1}{m_{H_3}^2} \right) X^*X \bar N N, \end{equation} where $m_N$ is the nucleon mass and $f_N$ is the effective coupling constant given by \begin{equation} f_N = \sum_q f_q^N = \sum_q \frac{m_q}{m_N} \langle N | \bar q q | N \rangle. \end{equation} The heavy quark contribution is replaced by the gluon contribution such that \begin{align} \sum_{q=c,b,t} f_q^N = {1 \over m_N} \sum_{q=c,b,t} \langle N | \left(-{ \alpha_s\over 12 \pi} G^a_{\mu\nu} G^{a\mu\nu}\right) | N \rangle, \label{eq:f_Q} \end{align} which is obtained by calculating the triangle diagram with heavy quarks inside the loop. Then we write the trace of the stress-energy tensor as follows, taking into account the scale anomaly: \begin{align} \theta^\mu_\mu =m_N \bar{N} N = \sum_q m_q \bar{q} q - {7 \alpha_s \over 8 \pi} G^a_{\mu\nu} G^{a\mu\nu}.
\label{eq:stressE} \end{align} Combining Eqs.~(\ref{eq:f_Q}) and (\ref{eq:stressE}), we get \begin{align} \sum_{q=c,b,t} f_q^N = \frac{2}{9} \left( 1 - \sum_{q = u,d,s} f_q^N \right), \end{align} which leads to \begin{align} f_N = \frac29+\frac{7}{9}\sum_{q=u,d,s}f_{q}^N. \end{align} Finally we obtain the spin-independent $X$-$N$ scattering cross section as follows: \begin{equation} \sigma_{\rm SI}(X N \to X N) = \frac{1}{8 \pi} \frac{\mu_{NX}^2 f_N^2 m_N^2 \lambda_{SX}^2 v_S^2 s_\theta^2 c_\theta^2}{v^2 m_{X}^2} \left( \frac{1}{m_h^2} - \frac{1}{m_{H_3}^2} \right)^2, \end{equation} where $\mu_{NX} = m_N m_{X}/(m_N + m_{X})$ is the reduced mass of the nucleon and DM. Here we consider the DM-neutron scattering cross section for simplicity; the DM-proton case gives an almost identical result. In this case, we adopt the effective coupling $f_n \simeq 0.287$ (with $f_u^n = 0.0110$, $f_d^n = 0.0273$, $f_s^n = 0.0447$) in estimating the cross section. In Fig.~\ref{fig:DD}, we show the DM-nucleon scattering cross section as a function of $\sin \theta$, where we take $m_{X} = 300$ GeV, $m_{H_3}= 300$ GeV, $v_S = 5000$ GeV, and $\lambda_{SX}= 0.5(0.01)$ for the red(blue) line as reference values. We find that part of the parameter region is constrained by direct detection when $\lambda_{SX}$ is relatively large and $\sin \theta > 0.01$. A wider parameter region will be tested in future direct detection experiments. \begin{figure}[t] \begin{center} \includegraphics[bb=-100 0 581 230, scale=0.95]{DD.pdf} \end{center} \caption{ DM-nucleon scattering cross section as a function of $\sin \theta$ in Scenario-II, where we take $m_{X} = 300$ GeV, $m_{H_3}= 300$ GeV, $v_S = 5000$ GeV and $\lambda_{SX}=0.5(0.01)$ for the red(blue) line as reference values. The current bounds from XENON1T~\cite{Aprile:2017iyp} and PandaX-II~\cite{Cui:2017nnn} are also shown.} \label{fig:DD} \end{figure} The Higgs portal interaction can also be tested by collider experiments.
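Before turning to colliders, the expressions above admit a quick numerical sanity check: the snippet below evaluates $f_N$ from the quoted light-quark fractions and $\sigma_{\rm SI}$ at the reference point of Fig.~\ref{fig:DD}, using the standard conversion $1\,{\rm GeV}^{-2} \simeq 3.894\times 10^{-28}\,{\rm cm}^2$.

```python
import math

# Sanity check of f_N and sigma_SI at the reference point of the text.
HBARC2 = 3.894e-28          # conversion GeV^-2 -> cm^2
M_N, M_H = 0.939, 125.0     # neutron and SM-like Higgs masses [GeV]
V_EW = 246.0                # electroweak VEV v [GeV]

def f_nucleon(fu=0.0110, fd=0.0273, fs=0.0447):
    """f_N = 2/9 + 7/9 * (f_u + f_d + f_s) from the heavy-quark relation above."""
    return 2.0 / 9.0 + 7.0 / 9.0 * (fu + fd + fs)

def sigma_si(m_x, m_h3, v_s, lam_sx, sin_theta):
    """Spin-independent X-n scattering cross section in cm^2."""
    cos_theta = math.sqrt(1.0 - sin_theta**2)
    mu = M_N * m_x / (M_N + m_x)          # nucleon-DM reduced mass
    pref = (mu**2 * f_nucleon()**2 * M_N**2 * lam_sx**2 * v_s**2
            * sin_theta**2 * cos_theta**2 / (8.0 * math.pi * V_EW**2 * m_x**2))
    return pref * (1.0 / M_H**2 - 1.0 / m_h3**2) ** 2 * HBARC2

print(f_nucleon())                                  # ~0.287, as quoted
print(sigma_si(300.0, 300.0, 5000.0, 0.5, 0.01))    # O(1e-46) cm^2
```

For $\sin\theta = 0.01$ and $\lambda_{SX}=0.5$ the result is of order $10^{-46}\,{\rm cm}^2$, in the region probed by XENON1T and PandaX-II, in line with the statement above.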
The interaction can be tested via searches for the invisible decay of the SM Higgs for $2 m_X < m_h$, while the collider constraint is less significant than the direct detection constraints for $2m_X > m_h$~\cite{Khachatryan:2016whc, Hoferichter:2017olk,Aad:2015pla}. Furthermore, DM can be produced via the heavier Higgs boson $H_3$ if $2 m_X < m_{H_3}$, and the possible signature will be a mono-jet with missing transverse momentum, $pp \to H_3 j \to XX j$. However, the production cross section will be small when the mixing $\sin \theta$ is small, as we assumed in our analysis. Such a process could be tested at the future LHC with sufficiently large integrated luminosity, although a detailed analysis is beyond the scope of this paper. \subsection*{Indirect detection} Here we discuss the possibility of indirect detection in our model by estimating the thermally averaged cross section in the current Universe with {\tt micrOMEGAs 4.3.5}, using the parameter sets allowed by the relic density calculations. Since the $a_Sa_S$ final state is dominant in scenario-I, we focus on the other scenarios in the following. Fig.~\ref{fig:ID} shows the DM annihilation cross section in the current Universe as a function of $m_{X}$, where the left and right panels correspond to Scenario-II and Scenario-III/IV. In Scenario-II, the cross section is mostly $\sim O(10^{-26})$ cm$^{3}/$s, while some points give smaller(larger) values corresponding to the region with $2 m_{X} \gtrsim(\lesssim) M_{H_3}$ as a consequence of the resonance effect. The annihilation processes in this scenario provide SM final states via the decays of $H_3$ and $\{H_1, H^\pm, A\}$, where the $H_3$ decay gives mainly $b \bar b$ via mixing with the SM Higgs while the scalar bosons from the second doublet give leptons. This cross section would be tested via $\gamma$-ray observations like Fermi-LAT~\cite{Ackermann:2013yva} as well as high energy neutrino searches such as IceCube~\cite{Aartsen:2015zva, Aartsen:2017mau}, especially when the cross section is enhanced.
In Scenario-III, the cross section is mostly $\sim O(10^{-26})$ cm$^{3}/$s and the final states from DM annihilation include the components of $\Phi_1$, that is $\{H_1, H^\pm, A\}$. Thus DM mainly annihilates into neutrinos via the decays of these scalar bosons, while a small number of charged leptons appear from $H^\pm$. Therefore the constraints from indirect detection are weaker in this scenario. In Scenario-IV, the value of the cross section is relatively larger due to the nature of the semi-annihilation scenario. In this case the final states from DM annihilation give mostly $b \bar b$ via the decay of the $H_3$ in the final state. Then it would be tested by $\gamma$-ray searches and neutrino observations as in scenario-II. \begin{figure}[t] \begin{center} \includegraphics[bb=0 20 581 280, scale=0.8]{ID1.pdf} \includegraphics[bb=-280 0 581 20, scale=0.8]{ID2.pdf} \end{center} \caption{ Left: the current DM annihilation cross section in Scenario-II as a function of $m_{X}$. Right: that for Scenarios III and IV, represented by red and blue points, respectively.} \label{fig:ID} \end{figure} \section{Conclusion} \label{Conc} We consider a neutrino Two Higgs Doublet Model ($\nu$THDM) in which small Dirac neutrino masses are explained by the small VEV, $v_1 \sim {\cal O}(1)$ eV, of the Higgs doublet $\Phi_1$ associated with the neutrino Yukawa interaction. A global $U(1)_X$ symmetry is introduced to forbid the seesaw mechanism. The smallness of $v_1$, proportional to the soft $U(1)_X$-breaking parameter $m_{12}^2$, is technically natural. We extend the model to introduce a scalar dark matter candidate $X$ and a scalar $S$ breaking the $U(1)_X$ symmetry down to a discrete $Z_2$ symmetry. Both are charged under $U(1)_X$. The lighter state of $X$ is stable since it is the lightest particle with odd $Z_2$ parity. The soft parameter $m_{12}^2$ is replaced by $ \mu \langle S \rangle$.
The physical Goldstone boson, whose dominant component is the pseudoscalar part of $S$, is shown to be phenomenologically viable due to the small ratio ($\sim {\cal O}(10^{-9})$) of $v_1$ to the electroweak-scale VEVs of the SM Higgs and $S$. We study four scenarios depending on the dark matter annihilation channels in the early Universe to simplify the analysis of dark matter phenomenology. In Scenario I, the Goldstone modes are important. Scenario II is the $H_3$ portal. In Scenario III, the dark matter makes use of the portal interaction with $\Phi_1$, which generates the Dirac neutrino masses. In Scenario IV the dominant interaction is $\lambda_{3X} S^\dagger X X X + h.c.$, which induces the semi-annihilation process of our dark matter candidate. In Scenario II, the dark matter scattering cross section with nucleons can be sizable and detected at next-generation direct detection experiments. We calculated the indirect detection cross sections in Scenarios II, III, and IV, which can be tested by observing cosmic $\gamma$-rays and/or neutrinos. \section*{Acknowledgments} \vspace{0.5cm} This work is supported in part by National Research Foundation of Korea (NRF) Research Grant NRF-2015R1A2A1A05001869 (SB). \providecommand{\href}[2]{#2} \addcontentsline{toc}{section}{References} \bibliographystyle{JHEP}
\section{Introduction} \label{sec:intro} \subsection{Motivation and Background} \label{ssec:motivate} Auction-based mechanisms are extremely relevant in modern day electronic procurement systems \cite{CHANDRA07,NARAHARI09a} since they enable a promising way of automating negotiations with suppliers and achieving the ideal goals of procurement efficiency and cost minimization. In many cases it may be beneficial to allow the suppliers to bid on combinations of items rather than on single items. Such auctions are called {\it combinatorial auctions}. Simply defined, a combinatorial auction is a mechanism where bidders can submit bids on combinations of items. The winner determination problem is to select a winning set of bids such that each item to be bought is included in at least one of the selected bids, and the total cost of procurement is minimized. In this paper, our interest is in multi-unit combinatorial procurement auctions, where a buyer is interested in procuring multiple units of multiple items. In the mechanism design literature, an optimal auction refers to an auction which optimizes a performance metric (for example, maximizes revenue to a seller or minimizes cost to a buyer) subject to two critical game theoretic properties: (1) {\em incentive compatibility\/} and (2) {\em individual rationality\/}. Incentive compatibility comes in two forms: dominant strategy incentive compatibility (DSIC) and Bayesian incentive compatibility (BIC). The DSIC property guarantees that reporting true valuations (or costs, as the case may be) is a best response for each bidder, irrespective of the valuations (or costs) reported by the other bidders. BIC is a much weaker property which ensures that truth revelation is a best response for each bidder whenever the other bidders are also truthful. Individual rationality (IR) is a property which assures non-negative utility to each participant in the mechanism, thus ensuring their voluntary participation.
The IR property may be (1) ex-ante IR (if the bidders decide on participation even before knowing their exact types (valuations or costs)), (2) interim IR (if the bidders decide on participation just after observing their types), or (3) ex-post IR (if the bidders can withdraw even after the game is over). For more details on these concepts, the reader is referred to \cite{Garg08a,Garg08b,Mascollel,MYERSON81}. \subsection{Contributions and Outline} \label{ssec:review} In his seminal work, Myerson \cite{MYERSON81} characterized an optimal auction for selling a single unit of a single item. Extending his work has been attempted by several researchers and there have been some generalizations of his work to multi-unit single-item auctions \cite{Malakhov05,Iyengar08,Raghav07}. Armstrong \cite{Armstrong00} characterized an optimal auction for two objects where the type sets are binary. Malakhov and Vohra \cite{Malakhov05} studied optimal auctions for single-item multi-unit procurement using a network interpretation. An implicit assumption in the above papers is that the sellers have limited capacity for the item. They also assume that the valuation sets are discrete. Kumar and Iyengar \cite{Iyengar08} and Gautam, Hemachandra, Narahari, and Prakash \cite{Raghav07} have proposed optimal auctions for multi-unit, single-item procurement. Recently, Ledyard \cite{Ledyard07a} has looked at single-unit combinatorial auctions in the presence of single minded bidders. A \emph{single minded bidder} is one who only bids on a particular subset of the items. Ledyard's auction, however, does not take into account multiple units of multiple items, and this motivates our current work, which extends Ledyard's auction to the case of procuring multiple units of multiple items. The following are our specific contributions.
\begin{enumerate} \item We characterize Bayesian incentive compatibility and interim individual rationality for procuring multiple units of multiple items when the bidders are single minded, by deriving a necessary and sufficient condition. \item We design an optimal auction that minimizes the cost of procurement while satisfying Bayesian incentive compatibility and interim individual rationality. \item We show, under appropriate regularity conditions, that the proposed optimal auction also satisfies dominant strategy incentive compatibility. \end{enumerate} Some of the results presented here appeared in our paper \cite{GUJAR09a}. The rest of the paper is organized as follows. First, we explain our model in Section \ref{sec:model} and describe the notation that we use. We also outline certain essential technical details of optimal auctions from the literature. In Section 3, we present the three contributions listed above. Section 4 concludes the paper. \section{The Model} \label{sec:model} We consider a scenario in which there is a buyer and multiple sellers. The buyer is interested in procuring a set of distinct objects, $I$. She is interested in procuring multiple units of each object. She specifies her demand for each object. The sellers are \emph{single minded}. That is, each seller is interested in selling a specific bundle of the objects. We illustrate this through an example below. \begin{example} Consider a buyer interested in buying $100$ units of $A$, $150$ units of $B$, and $200$ units of $C$. Assume that there are three sellers. Seller 1 might be interested in providing $70$ units of the bundle $\{A,B\}$, that is, $70$ units of $A$ and $70$ units of $B$ as a bundle. Because he is single minded, he does not bid for any other bundles. We also assume that he would supply equal numbers of $A$ and $B$. Similarly, seller 2 may provide a bid for $100$ units of the bundle $\{B,C\}$. The bid from seller 3 may be $125$ units of the bundle $\{A,C\}$.
\end{example} The sellers are capacitated, i.e., there is a maximum quantity of their bundle of interest that they can supply. The bid therefore specifies a unit cost of the bundle and the maximum quantity that can be supplied. After receiving these bids, the buyer will determine the allocation and payment as per the auction rules. We summarize below the important assumptions in the model. \label{ssec:assumptions} \begin{itemize} \item The sellers are single minded. \item The sellers can collectively fulfill the demands specified by the buyer. \item The sellers are capacitated, i.e., they cannot supply beyond the capacity specified in their bids. \item A seller will never inflate his capacity, as this can be detected. If he fails to supply a quantity exceeding his capacity, he incurs a penalty which is a deterrent to inflating his capacity. This is an important assumption. \item Whenever the buyer buys anything from a seller, she will procure the same number of units of each of the items in the seller's bundle of interest. \item All the participants are rational and intelligent. \end{itemize} Table \ref{tab:notation} shows the notation that will be used in the rest of the paper. \begin{table*} \centering \caption{Notation} \label{tab:notation} \begin{normalsize} \begin{tabular}{|l|l|} \hline $I$ & Set of items the buyer is interested in buying, $\{1,2,\ldots,m\}$\tabularnewline \hline $D_{j}$ & Demand for item $j$, $j=1,\ldots, m$\tabularnewline \hline $N$ & Set of sellers, $\{1,2,\dots,n\}$\tabularnewline \hline $c_i$ & True cost of production of one unit of the bundle of interest to seller $i$,\tabularnewline & $c_i\in [ \underline{c_i},\bar{c_i} ]$\tabularnewline \hline $q_i$ & True capacity of the bundle which seller $i$ can supply, $q_i\in[\underline{q_i},\bar{q_i}]$\tabularnewline \hline $\hat{c_i}$ & Reported cost of seller $i$\tabularnewline \hline $\hat{q_i}$ & Reported capacity of seller $i$\tabularnewline \hline $\theta_{i}$ & True type, i.e.
cost and capacity of the seller $i$, $\theta_{i}=(c_i,q_i)$\tabularnewline \hline $b_i$ & Bid of the seller $i$. $b_i=(\hat{c_i},\hat{q_i})$\tabularnewline \hline $b$ & Bid vector, $(b_{1},b_{2},\ldots,b_{n})$\tabularnewline \hline $b_{-i}$ & Bid vector without the seller $i$, i.e. $(b_{1},b_{2},\ldots,\, b_{i-1},b_{i+1},\ldots,\, b_{n})$\tabularnewline \hline $t_{i}(b)$ & Payment to the seller $i$ when submitted bid vector is $b$\tabularnewline \hline $T_i(b_i)$ & Expected payment to the seller $i$ when he submits bid $b_i$. \tabularnewline & Expectation is taken over all possible values of $b_{-i}$\tabularnewline \hline $x_i=x_{i}(b)$ & Quantity of the bundle to be procured from the seller $i$\tabularnewline & when the bid vector is $b$\tabularnewline \hline $X_i(b_i)$ & Expected quantity of the bundle to be procured from the seller $i$ \tabularnewline & when he submits bid $b_i$.\tabularnewline & Expectation is taken over all possible values of $b_{-i}$\tabularnewline \hline $f_{i}(c_i,q_i)$ & Joint probability density function of $(c_i,q_i)$\tabularnewline \hline $F_{i}(c_i,q_i)$ & Cumulative distribution function of $f_{i}(c_i,q_i)$\tabularnewline \hline $f_{i}(c_i|q_i)$ & Conditional probability density function of production cost\tabularnewline & when it is given that the capacity of the seller $i$ is $q_i$\tabularnewline \hline $F_{i}(c_i|q_i)$ & Cumulative distribution function of $f_{i}(c_i|q_i)$\tabularnewline \hline $H_{i}(c_i,q_i)$ & Virtual cost function for seller $i$, \tabularnewline & $H_{i}(c_i,q_i)=c_i+\frac{F_{i}(c_i|q_i)}{f_{i}(c_i|q_i)}$\tabularnewline \hline $\rho_{i}(b_i)$ & Expected offered surplus to seller $i$, when his bid is $b_i$\tabularnewline \hline $u_{i}(b,\theta_{i})$ & Utility to seller $i$, when bid vector is $b$ and his type is $\theta_{i}$ \tabularnewline \hline $U_i(b_i,\theta_{i})$ & Expected utility to the seller $i$, when he submits bid $b_i$ and his \tabularnewline & type is $\theta_{i}$. 
Expectation is taken over all possible values of $b_{-i}$\tabularnewline \hline \end{tabular} \end{normalsize} \end{table*} \subsection{Some Preliminaries} \label{sec:opt} The problem of designing an optimal mechanism was first studied by Myerson \cite{MYERSON81} and Riley and Samuelson \cite{Riley81}. Myerson's work is more general and considers the setting of a seller trying to sell a single unit of a single object to one of several possible buyers. Note here that, unlike in the rest of the paper, the auctioneer is the seller and his objective is to maximize revenue. (In the rest of the paper, the auctioneer will be a buyer and her objective will be to minimize the cost of procurement.) So in this particular setting, as per the notation defined in Table \ref{tab:notation}, $m=1, D_1=1$. (So $q_i$ will be 1 for all the agents and is no longer private information.) $F_i$ and $H_i$ defined in Table \ref{tab:notation} will be functions of a single variable. The buyer's private information is the maximum amount he is willing to pay, which we denote by $\theta_i$, with $\theta_i \in \Theta_i = [\underline{\theta_i},\overline{\theta_i}]$. Myerson \cite{MYERSON81} characterizes all auction mechanisms that are Bayesian incentive compatible and interim individually rational in this setting.
From this, he derives the allocation rule and the payment function for the optimal auction mechanism, using an interesting notion called the virtual cost function, defined as follows: $$ H_i(\theta_i)= \theta_i - \frac{1-F_i(\theta_i)}{f_i(\theta_i)}$$ He has shown that an optimal auction is one with the allocation rule \begin{eqnarray} x_i(\theta) &=& 1 \textrm{ if }H_i(\theta_i) > \max\Big\lbrace 0, \max_{j\neq i}H_j(\theta_j)\Big\rbrace \nonumber \\ &=& 0 \textrm{ otherwise} \label{eq:opt_myerson1} \end{eqnarray} and the expected payment \begin{eqnarray} T_i(\theta_i) &=& E_{b_{-i}}(u_i(\theta) - \theta_i x_i(\theta)) \nonumber\\ &=& U_i(\theta_i) - \theta_iX_i(\theta_i) \nonumber \\ &=& \int_{\underline{\theta_i}}^{\theta_i}X_i(s)ds - \theta_iX_i(\theta_i) \label{eq:opt_myerson1_pay} \end{eqnarray} One such payment rule is given by $$t_i(\theta_i,\theta_{-i}) = \Big( \int_{\underline{\theta_i}}^{\theta_i}x_i(s,\theta_{-i})ds \Big) - \Big( \theta_ix_i(\theta)\Big) \; \forall \theta$$ Any auction for a single unit of a single item which satisfies Equation (\ref{eq:opt_myerson1}) and Equation (\ref{eq:opt_myerson1_pay}) is optimal, i.e., maximizes the seller's revenue and is BIC and IIR. \textit{Regularity Assumption}: If $H_i(\theta_i)$ is increasing with respect to $\theta_i$, then we say the virtual cost function is regular, or that the regularity condition holds. Under this assumption one such optimal auction is: \begin{enumerate} \item Collect bids from the buyers \item Sort them according to their virtual costs \item If the highest virtual cost is positive, allocate the object to the corresponding bidder \item The winner, say $i$, will pay $t_i(\theta_{-i})$\\ $= \inf \{ \theta_i | H_i(\theta_i) > 0 \textrm{ and } H_i(\theta_i) > H_j(\theta_j) \forall j \neq i\}$ \end{enumerate} From the payment rule, it is a dominant strategy for each bidder to bid truthfully under the regularity assumption. When bidders are symmetric, i.e.
$F_i$ is the same $\forall i$, then the above optimal auction is Vickrey's second price auction \cite{VICKREY61}. Myerson's work can be easily extended to the case of multi-unit auctions with unit demand. But problems arise when the unit-demand assumption is relaxed. We move into a setting of multi-dimensional type information, which makes truth elicitation non-trivial. Several attempts have addressed this problem, albeit under some restrictive assumptions \cite{Malakhov05,Iyengar08,Raghav07}. It is assumed, for example, that even though the seller is selling multiple units (or even objects), the type information of the entities is still one dimensional \cite{Chen04,Dasgupta89,Zhang05}. Researchers have also worked on extending Myerson's work to optimal auctions for multiple objects. The private information in this setting may not be single dimensional. Armstrong \cite{Armstrong00} has solved this problem for the two-object case when the type sets are binary, by enumerating all incentive compatibility conditions. Recently, Ledyard \cite{Ledyard07a} has characterized an optimal multi-object single-unit auction when bidders are single minded. \section{Optimal Multi-Unit Combinatorial Procurement Auction} \label{sec:opt_auc} We start this section with an example to illustrate that in a multi-unit, multi-item procurement auction, the suppliers may have an incentive to misreport their costs. \begin{example}{} Suppose the buyer has a requirement for 1000 units. Also, suppose that there are four suppliers with $(c_i, q_i)$ values of $S_1 \: : \: (10, 500), \: S_2\: : \:(8, 500), \: S_3\: :\: (12, 800)$ and $S_4\: :\: (6, 500)$. Suppose the buyer conducts the classic $k^{th}$ price auction, where the per-unit payment to each winning supplier equals the unit cost of the first losing supplier. In this case, the sellers will be able to do better by misreporting their types. To see this, consider that all suppliers truthfully bid both their costs and their quantities.
The allocation then would be $S_1\: : \:0,\: S_2\: : \:500, \:S_3 \: : \:0, \:S_4\: :\: 500$ and this minimizes the total payment. Under this allocation the payment to $S_4$ would be $10\times 500 = 5000$ currency units. However, if he bids his quantity to be $490$, then the allocation changes to $S_1\: : \:10,\: S_2\: :\: 500,\: S_3\: :\: 0,\: S_4\: :\: 490$, giving him a payment of $12 \times 490 = 5880$ currency units, and thus incentive compatibility does not hold. Thus it is evident that such uniform-price mechanisms are not applicable to the case where both the unit cost and the maximum quantity are private information. The intuitive explanation is that by under-reporting their capacities, the suppliers create an artificial scarcity of resources in the system. Such fictitious shortages force the buyer to overpay for the artificially scarce resources. We also make another observation here. Suppose seller $4$ bids $(6,600)$. Then the buyer will order 600 units from him at the cost of 10 per unit. Since his capacity is only 500, he would not be able to supply the remaining 100 units. If he bids $(6,1000)$, then he will be paid only 8 per unit and the buyer will order 1000 units from him. This indicates that our assumption that a seller will not inflate his capacity is quite natural. \end{example} We are interested in designing an optimal mechanism for the buyer that satisfies Bayesian incentive compatibility (BIC) and individual rationality (IR). BIC means that the best response of each seller is to bid truthfully if all the other sellers are bidding truthfully. IR implies that the players obtain non-negative payoffs by participating in the mechanism.
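The manipulation in the example above can be reproduced with a short script. The greedy lowest-cost allocation and the rule that the uniform price equals the unit cost of the cheapest supplier receiving no allocation are modeling assumptions chosen to match the numbers in the example:

```python
def kth_price_auction(bids, demand):
    """bids: list of (unit_cost, capacity); returns list of (quantity, payment).

    Greedy lowest-cost allocation; every winner is paid a uniform unit price
    equal to the cost of the cheapest supplier receiving no allocation
    (the 'first losing supplier').  Assumes such a supplier exists.
    """
    order = sorted(range(len(bids)), key=lambda i: bids[i][0])
    alloc = [0] * len(bids)
    remaining = demand
    for i in order:
        take = min(bids[i][1], remaining)
        alloc[i] = take
        remaining -= take
    losing = [bids[i][0] for i in order if alloc[i] == 0]
    assert losing, "demand must leave at least one supplier unallocated"
    price = losing[0]
    return [(alloc[i], alloc[i] * price) for i in range(len(bids))]

bids = [(10, 500), (8, 500), (12, 800), (6, 500)]   # S1..S4, (c_i, q_i)
print(kth_price_auction(bids, 1000)[3])   # S4 truthful -> (500, 5000)
bids[3] = (6, 490)                        # S4 under-reports capacity
print(kth_price_auction(bids, 1000)[3])   # -> (490, 5880): S4 gains
```

Under-reporting the capacity from 500 to 490 raises the first losing cost from 10 to 12 and increases $S_4$'s payment from 5000 to 5880, exactly as computed in the example.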
More formally, these can be stated as (see Table \ref{tab:notation} for notation), \\ $\forall i \in N$ and $\forall \;\theta_{i} \in [\underline{c_i},\bar{c_i}]\times[\underline{q_i},\bar{q_i}] $ \begin{eqnarray} U_i(\theta_{i},\theta_{i}) &\geq& U_i(b_i,\theta_{i})\;\forall \;\;b_i,\;\;\mbox{(BIC)}\label{eq:bic}\\ U_i(\theta_{i},\theta_{i}) &\geq& 0 \qquad\qquad\qquad\; \mbox{(IR)}\label{eq:ir} \end{eqnarray} The IR condition above corresponds to interim individual rationality. \subsection{Necessary and Sufficient Conditions for BIC and IR} \label{ssec:ana} To make the sellers report their types truthfully, the buyer has to offer them incentives. We propose the following incentive, motivated by paying a seller more than what he claims to be the total cost of producing the ordered quantity: $\forall i \in N,$ $$ \rho_i(b_i)=T_i(b_i)-\hat{c_i}X_i(b_i), \mbox{ where } b_i=(\hat{c_i},\hat{q_i})$$ $\Rightarrow $ \begin{eqnarray} U_i(b_i,\theta_{i}) &=& T_i(b_i) - c_i X_i(b_i) \nonumber\\ &=& \rho_{i}(b_i) -(c_i-\hat{c_i})X_i(b_i) \label{eq:rho_utility} \end{eqnarray} With the above offered incentive, we now state and prove the following theorem. \begin{theorem} \label{thm:bic_ir} Any mechanism in the presence of single minded, capacitated sellers is BIC and IR iff \begin{enumerate} \item $\rho_{i}(b_i) = \rho_{i}(\bar{c_i},\hat{q_i}) + \int_{\hat{c_i}}^{\bar{c_i}}X_i(t,\hat{q_i})dt$ \item $\rho_{i}(b_i)$ is non-negative and non-decreasing in $\hat{q_i}\;\;\forall\;\hat{c_i}\;\in\;[\underline{c_i},\bar{c_i}]$ \item The quantity which seller $i$ is asked to supply, $X_i(c_i,q_i)$, is non-increasing in $c_i\;\;\forall q_i\;\in\;[\underline{q_i},\bar{q_i}]$. \end{enumerate} \end{theorem} \vspace*{5mm} \noindent\begin{proof}: A similar theorem is presented by Kumar and Iyengar \cite{Iyengar08} for the case of multi-unit single-item procurement auctions. Using the notion of a single minded bidder \cite{Ledyard07a}, we state and prove a result for a wider setting.
To prove the necessity part of the theorem, we first observe that, $$U_i(b_i,\theta_{i}) = U_i(\hat{c_i},\hat{q_i},c_i,q_i) = T_i(b_i)-c_iX_i(b_i) $$ $$\mbox{and BIC }\Rightarrow U_i(\hat{c_i},\hat{q_i},c_i,q_i) \leq U_i(c_i,q_i,c_i,q_i), $$ $$\forall(\hat{c_i},\hat{q_i}) \mbox{ and }(c_i,q_i)\in \Theta_i$$ In particular, $$U_i(\hat{c_i},q_i,c_i,q_i)\leq U_i(c_i,q_i,c_i,q_i)$$ Without loss of generality, we assume $\hat{c_i}>c_i.$ Rearrangement of these terms yields, $$U_i(\hat{c_i},q_i,c_i,q_i) = U_i(\hat{c_i},q_i,\hat{c_i},q_i) $$ $$\quad\qquad+ (\hat{c_i}-c_i)X_i(\hat{c_i},q_i)$$ $\Rightarrow$ $$\frac{U_i(\hat{c_i},q_i,\hat{c_i},q_i)-U_i(c_i,q_i,c_i,q_i)}{\hat{c_i}-c_i} \leq -X_i(\hat{c_i},q_i)$$ Similarly, using $$U_i(c_i,q_i,\hat{c_i},q_i) \leq U_i(\hat{c_i},q_i,\hat{c_i},q_i)$$ we obtain \begin{equation} -X_i(c_i,q_i)\leq\frac{U_i(\hat{c_i},q_i,\hat{c_i},q_i)-U_i(c_i,q_i,c_i,q_i)}{\hat{c_i}-c_i}\nonumber \end{equation} \begin{equation} \leq-X_i(\hat{c_i},q_i).\label{eq:mono} \end{equation} Taking the limit $\hat{c_i}\rightarrow c_i,$ we get, \begin{equation} \frac{\partial U_i(c_i,q_i,c_i,q_i)}{\partial{c_i}} = -X_i(c_i,q_i). \label{eq:pde} \end{equation} Equation (\ref{eq:mono}) implies that $X_i(c_i,q_i)$ is non-increasing in $c_i$. This proves statement 3 of the theorem in the forward direction. When the seller bids truthfully, from Equation (\ref{eq:rho_utility}), \begin{equation} \rho_{i}(c_i,q_i)=U_i(c_i,q_i,c_i,q_i).\label{eq:rho1} \end{equation} For BIC, Equation (\ref{eq:pde}) should be true. So, \begin{equation} \rho_{i}(c_i,q_i)=\rho_{i}(\bar{c_i},q_i)+\int_{c_i}^{\bar{c_i}}X_i(t,q_i)dt\label{eq:rho2} \end{equation} This proves statement 1 of the theorem. BIC also requires, $$q_i \in \arg\max_{\hat{q_i}\in[\underline{q_i},q_i]} U_i(c_i,\hat{q_i},c_i,q_i) \;\forall\; c_i\;\in\;[\underline{c_i},\bar{c_i}]$$ (Note that $\hat{q_i}\in [\underline{q_i},q_i] \mbox{ and not } \in[\underline{q_i},\bar{q_i}]$, as it is assumed that a bidder will not over report his capacity.)
This implies that, $\forall\; c_i$, $\rho_{i}(c_i,q_i)$ should be non-decreasing in $q_i$. The IR conditions (Equations (\ref{eq:ir}) and (\ref{eq:rho1})) imply $$\rho_{i}(c_i,q_i)\geq 0.$$ This proves statement 2 of the theorem. Thus, these three conditions are necessary for the BIC and IR properties. We now prove that they are sufficient conditions for BIC and IR. Assume that all three conditions are true, $$\Rightarrow U_i(\theta_{i},\theta_{i})=\rho_i(c_i,q_i) \geq 0.$$ So the IR property is satisfied. \begin{eqnarray} U_i(b_i,\theta_{i}) &=& \rho_{i}(\hat{c_i},\hat{q_i})+(\hat{c_i}-c_i)X_i(\hat{c_i},\hat{q_i})\nonumber \\ &=& \rho_{i}(\bar{c_i},\hat{q_i}) + \int_{\hat{c_i}}^{\bar{c_i}}X_i(t,\hat{q_i})dt\nonumber \\ & & \quad + (\hat{c_i}-c_i)X_i(\hat{c_i},\hat{q_i}) \nonumber \\ &=& \rho_{i}(\bar{c_i},\hat{q_i}) + \int_{c_i}^{\bar{c_i}}X_i(t,\hat{q_i})dt\nonumber \\ & & \quad - \int_{c_i}^{\hat{c_i}}X_i(t,\hat{q_i})dt \nonumber \\ & & \quad+ (\hat{c_i}-c_i)X_i(\hat{c_i},\hat{q_i})\nonumber \\ &\leq& \rho_{i}(c_i,\hat{q_i})\nonumber \\ & & \mbox{ as }X_i\mbox{ is non-increasing in }c_i \nonumber \\ &\leq& \rho_{i}(c_i,q_i) \nonumber \\ &=& U_i(\theta_{i},\theta_{i}) \:\;\nonumber \\ & & \mbox{ as }\rho_{i}\mbox{ is non-decreasing in }q_i \nonumber \end{eqnarray} This proves the sufficiency of the three conditions.\\ \end{proof} \subsection{Allocation and Payment Rules of the Optimal Auction} \label{ssec:oa} The buyer's problem is to solve, \begin{center} $\min\:\mathbf{E}_{b} \sum_{i=1}^{n} t_{i}(b) \quad \mbox{s.t.}$ \end{center} \begin{enumerate} \item $t_{i}(b) = \rho_{i}(b) + \hat{c_i}x_{i}(b)$ \item All three conditions in Theorem \ref{thm:bic_ir} hold true. \item She procures at least $D_{j}$ units of each item $j$. \end{enumerate} Expectation being a linear operator, the buyer's problem is to minimize $\sum_{i=1}^{n} {\mathbf{E}}_{b_i} T_i(\hat{c_i},\hat{q_i})$.
Condition 1 of the theorem has to hold true, which will imply the $i^{th}$ term in the summation is given by, \begin{center} $\int_{\underline{q_i}}^{\bar{q_i}} \int_{\underline{c_i}}^{\bar{c_i}} \Big(c_iX_i(c_i,q_i) + \rho_{i}(\bar{c_i},q_i)\qquad $\\$\quad\quad+ \int_{c_i}^{\bar{c_i}}X_i(t,q_i)dt\Big)f_{i}(c_i,q_i)dc_idq_i$ \end{center} However, \begin{center} $\int_{\underline{c_i}}^{\bar{c_i}} \left(\int_{c_i}^{\bar{c_i}} X_i(t,q_i)dt\right)f_{i}(c_i,q_i)dc_i =\qquad$\\$\qquad \int_{\underline{c_i}}^{\bar{c_i}} X_i(c_i,q_i)F_{i}(c_i|q_i)f_{i}(q_i)dc_i$\end{center} Condition 2 of Theorem \ref{thm:bic_ir} requires $\rho_{i}(\bar{c_i},q_i) \geq 0 $ and the buyer wants to minimize the total payment to be made. So, she has to assign $\rho_{i}(\bar{c_i},q_i) = 0 \; \forall\; q_i ,\forall i$. So her problem is to solve, \begin{center} $\min\,\sum_{i=1}^{n} \int_{\underline{q_i}}^{\bar{q_i}} \int_{\underline{c_i}}^{\bar{c_i}} \left(c_i+\frac{F_{i}(c_i|q_i)}{f_{i}(c_i|q_i)}\right) \qquad\qquad$ \\ $\qquad\qquad\qquad X_i(c_i,q_i)f_{i}(c_i,q_i)dc_idq_i $ \end{center} That is, \begin{center} $\min\,\sum_{i=1}^{n} \int_{\underline{q_i}}^{\bar{q_i}} \int_{\underline{c_i}}^{\bar{c_i}} H_{i}(c_i,q_i)X_i(c_i,q_i)\qquad$\\ $\qquad\qquad f_{i}(c_i,q_i)dc_idq_i$ \end{center} where, $H_{i}(c_i,q_i)$ is the virtual cost function, defined in Table \ref{tab:notation}. Define, \begin{center} $\bar{c} = (\bar{c_{1}},\bar{c_{2}},\ldots,\bar{c_{n}})$ \\ $c = (c_{1},c_{2},\ldots,c_{n})$ \\ $\underline{c} = (\underline{c_{1}},\underline{c_{2}},\ldots,\underline{c_{n}})$. 
\end{center} Similarly, define $\bar{q}$, $q$ and $\underline{q}.$ Let, \begin{center} $dc = dc_{1}dc_{2}\ldots dc_{n}$\\ $dq = dq_{1}dq_{2}\ldots dq_{n}$ \\ $f(c,q) = \prod_{i=1}^{n}f_{i}(c_i,q_i) $ \end{center} Her problem now reduces to, \begin{center} $\min \int_{\underline{q}}^{\bar{q}} \int_{\underline{c}}^{\bar{c}} \left(\sum_{i=1}^{n}H_{i}(c_i,q_i)x_{i}(c_i,q_i)\right)$\\ $\qquad\qquad f(c,q)dcdq\quad\mbox{s.t.}$ \end{center} 1. $\forall\: i,\; X_i(c_i,q_i)$ is non-increasing in $c_i, \forall\: q_i. $\\ 2. The buyer's minimum requirement of each item is satisfied. \\ This is \emph{an optimal auction} for the buyer in the presence of the single minded sellers. In the next subsection, we will see an optimal auction under regularity conditions. \subsection{Optimal Auction under Regularity Assumption} \label{ssec:regular} First, we make the assumption that $$H_{i}(c_i,q_i) = c_i + \frac{F_{i}(c_i|q_i)}{f_{i}(c_i|q_i)}$$ is non-increasing in $q_i$ and non-decreasing in $c_i$. This is the same as the regularity assumption made by Kumar and Iyengar \cite{Iyengar08}. With this assumption, the buyer's optimal auction when bidder $i$ submits the bid $(c_i,q_i)$ is, $$\min\sum_{i=1}^{n} x_{i}H_{i}(c_i,q_i)\quad\quad\mbox{subject to}$$ \begin{enumerate} \item $0 \leq x_{i} \leq q_i$, where $x_{i}$ denotes the quantity that seller $i$ has to supply of bundle $\bar{x_i}$. \item The buyer's demands are satisfied. \end{enumerate} Under the regularity assumption, the resulting allocation satisfies the condition that $X_i(c_i,q_i)$ is non-increasing in $c_i$, $\forall \:q_i$ and $\forall \:i$. After this problem has been solved, the buyer pays each seller $i$ the amount \begin{equation} t_{i} = c_ix^*_{i} + \int_{c_i}^{\bar{c_i}}x_{i}(t,q_i)dt \label{eq:payment} \end{equation} where $x^*_i$ is what agent $i$ has to supply after solving the above problem. We illustrate the optimal mechanism with an example. \begin{example} Suppose the buyer is interested in buying 100 units of $\{ A,C,D\}$ and $250$ units of $\{B\}$.
Seller 1 ($S_1$) is interested in providing $q_1=100$ units of bundle $\{A,B\}$, seller 2 ($S_2$): $q_2=100$ units of $\{B\}$, seller 3 ($S_3$): $q_3=150$ units of $\{B,C,D\}$, and seller 4 ($S_4$) is interested in providing up to $q_4=120$ units of $\{A,B,C,D\}$. The unit costs of the respective bundles are $c_1=100$, $c_2=50$, $c_3=70$ and $c_4=110$. Each seller will submit his bid as $(c_i,q_i)$. After receiving the bids, the buyer will solve, $$\min x_1H_1(100,100) + x_2H_2(50,100)\qquad$$ $$\qquad+x_3H_3(70,150)+x_4H_4(110,120)$$ s.t. \begin{eqnarray} x_i &\geq& 0 \;\; i=1,2,3,4. \nonumber \\ x_1 &\leq& 100 \nonumber \\ x_2 &\leq& 100 \nonumber \\ x_3 &\leq& 150 \nonumber \\ x_4 &\leq& 120 \nonumber \\ x_1 + x_4 &\geq& 100 \label{eq:1} \\ x_1+x_2+x_3+x_4 &\geq& 250 \label{eq:2} \\ x_3+x_4 &\geq& 100 \label{eq:3} \end{eqnarray} Equation (\ref{eq:1}) is required to be satisfied as at least 100 units of $A$ have to be procured, and only the bundles of $S_1$ and $S_4$ contain $A$. Equation (\ref{eq:2}) is for procuring at least 250 units of $B$, and Equation (\ref{eq:3}) is for procuring at least 100 units of $C$ and $D$. After solving this optimization problem, she will determine the payment according to Equation (\ref{eq:payment}). \end{example} It can be seen that for seller $i$, the best response is to bid truthfully irrespective of what the others are bidding. Thus, this mechanism enjoys the stronger property of dominant strategy incentive compatibility (DSIC). Note that this property is much stronger than BIC. The above property is a direct consequence of the result proved by Mookherjee and Reichelstein \cite{Mookherjee92}, who gave the monotonicity conditions for DSIC implementation of a BIC mechanism. Under these regularity assumptions, $x_i$ satisfies these conditions, so we have a DSIC mechanism. In the next section, we consider X-OR bidding with the unit demand case.
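The payment rule of Equation (\ref{eq:payment}) is easy to evaluate numerically once the allocation rule is fixed. A minimal sketch, assuming a hypothetical threshold allocation (the seller wins $x^*$ units whenever his reported cost lies below a critical cost; the numbers are purely illustrative and unrelated to the example above):

```python
# t_i = c_i * x_i(c_i) + integral_{c_i}^{c_bar} x_i(t) dt,
# with x_i a non-increasing step function (hypothetical allocation rule;
# the constants below are illustrative assumptions).

C_TH, X_STAR = 70.0, 100.0  # assumed critical cost and winning quantity

def allocation(c):
    # Quantity ordered from the seller as a function of his reported cost
    return X_STAR if c <= C_TH else 0.0

def payment(c_i, c_bar, n=20000):
    # Midpoint-rule evaluation of the information-rent integral
    h = (c_bar - c_i) / n
    integral = sum(allocation(c_i + (k + 0.5) * h) for k in range(n)) * h
    return c_i * allocation(c_i) + integral

t = payment(50.0, c_bar=120.0)  # ~ 7000 = C_TH * X_STAR
```

For such a step allocation the formula collapses to the classic critical-cost payment $c_{\rm th}\, x^*$: the winning seller is paid, per unit, the highest cost at which he would still win.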
\section{An Optimal Auction when Bidders are XOR Minded} \label{sec:xor} Consider the situation where a supplier can manufacture some of the items required by the buyer, say $A,B,C,D$. However, with the machinery he has, at a time he can manufacture either $A,D$ or $B,C$, but no other combination simultaneously. Thus he can supply either $A,D$ as a bundle or $B,C$ as a bundle, but not both. That is, he is interested in X-OR bidding. \begin{definition}[XOR Minded Bidder] We say a bidder is {\em XOR minded} if he is interested in supplying either of two disjoint subsets of the items auctioned for, but not both. \end{definition} To simplify the analysis, in this section we restrict ourselves to the unit demand case. That is, the buyer is interested in buying a single unit of each of the items in $I$, and hence there are no capacity constraints. We formally state the assumptions. \begin{itemize} \item We assume that the bidders are XOR minded. \item For each bidder, his costs of the two bundles of his interest are independent. \item The two bundles for which each seller is going to submit an X-OR bid are known. \item The sellers can collectively supply the items required by the buyer. \item The buyer and the sellers are strategic. \item Free disposal. That is, if the buyer procures more than one unit of an item, she can freely dispose of it. \end{itemize} With the above assumptions, we now discuss an extension of the current art of designing optimal auctions to combinatorial auctions in the presence of XOR minded bidders. Though we assume the bidders are XOR minded, the BIC characterization and the auction designed here work even when the bidders are either single minded or XOR minded. \subsection{Notation} As $q_i=1$ for each bidder, we drop the capacity from the types and bids of all the agents. Each agent reports the cost of each bundle of his interest; that is, he will be bidding two real numbers. We also need to calculate virtual costs on both the bundles.
Thus, we need appropriate modifications in some of the notation used in the paper. We summarize the new notation for this section in Table \ref{tab:notation2}. Each agent submits two different bids on two different bundles. We will use $j$ to refer to the bundle. \begin{table*}[!hbt] \centering \begin{normalsize} \caption{Notation: XOR Minded Bidders} \label{tab:notation2} \begin{tabular}{|l|l|} \hline $j$ & $j=1\mbox{ or }2$. Bundle index.\\ $B_{i_j}$ & The $j^{th}$ bundle of items for which the agent $i$ is bidding. $j=1,2$\\ \hline $c_{i_j}$ & True cost of production of $B_{i_j}$ to the seller $i$. $c_{i_j}\in [ \underline{c_i},\bar{c_i} ]$\\ $c_i$ & $=(c_{i_1},c_{i_2})$ \\ \hline $\theta_{i}$ & True type, i.e., costs for $i$, $\theta_{i}=(c_{i_1},c_{i_2})$\\ \hline $b_i$ & Bid of the seller $i$. $b_i=(\hat{c_{i_1}},\hat{c_{i_2}})$\\ \hline $x_{i_j}=x_{i_j}(b)$ & Indicator variable to indicate whether $B_{i_j}$ is to be procured from \\ & the seller $i$ when the bid vector is $b$\tabularnewline \hline $X_{i_j}(b_{i})$ & Probability that $B_{i_j}$ is procured from the seller $i$ when he \\ & submits bid $b_i$. Expectation is taken over all possible values of $b_{-i}$\\ \hline $f_{i_j}(c_{i_j})$ & Probability density function of $c_{i_j}$\tabularnewline \hline $F_{i_j}(c_{i_j})$ & Cumulative distribution function of $c_{i_j}$\\ \hline $H_{i_j}(c_{i_j})$ & Virtual cost function for seller $i$, for bundle $B_{i_j}$ \\ & $H_{i_j}(c_{i_j})=c_{i_j}+\frac{F_{i_j}(c_{i_j})}{f_{i_j}(c_{i_j})}$ \\ \hline \end{tabular} \end{normalsize} \end{table*} \subsection{Optimal Auctions When Bidders Are XOR Minded} First, we characterize the BIC and IIR mechanisms for the settings under consideration in the next subsection. We then design an optimal auction in subsection \ref{sssec:ocax}.
\subsubsection{BIC and IIR: Necessary and Sufficient Conditions} The utility of agent $i$ is $$U_i(b_i,\theta_i) = -c_{i_1}X_{i_1} - c_{i_2}X_{i_2} + T_i(b_i)$$ Using similar arguments as in the proof of Theorem \ref{thm:bic_ir}, for any mechanism in the presence of XOR minded bidders, the necessary condition for BIC is, \begin{eqnarray} \frac{\partial U(.)}{\partial c_{i_1}} = -X_{i_1}(c_{i_1},c_{i_2}) \nonumber\\ \frac{\partial U(.)}{\partial c_{i_2}} = -X_{i_2}(c_{i_1},c_{i_2}) \label{eqn:pde} \end{eqnarray} \noindent and $X_{i_j}(c_{i_1},c_{i_2})$ should be non-increasing in $c_{i_j},\;j=1,2$. \noindent We make the assumption that, \begin{equation} \frac{\partial X_{i_1}}{\partial c_{i_2}} = \frac{\partial X_{i_2}}{\partial c_{i_1}} \label{eqn:assumption} \end{equation} In general, the above assumption is not necessary for the mechanism to be truthful. However, if we assume that Equation (\ref{eqn:assumption}) is true, we can solve the PDE (\ref{eqn:pde}) analytically. Now we can state the following theorem. \begin{theorem} With assumption (\ref{eqn:assumption}), a necessary and sufficient condition for a mechanism to be BIC and IIR in the presence of XOR minded bidders is, \begin{enumerate} \item $T_i(.) = c_{i_1}X_{i_1} + c_{i_2}X_{i_2} -\int_{(c_{i_1},c_{i_2})}^{(\overline{c_i},\overline{c_i})} \triangledown U_i(.)\cdot d\theta_i$ \item $U_i(\overline{c_i},\overline{c_i}) \geq 0$. \end{enumerate} \end{theorem} \subsubsection{Optimal Auction with Regularity Assumption} \label{sssec:ocax} Suppose we assume that $H_{i_j}$ is non-decreasing in $c_{i_j}$ for each $i,j$. This is the same regularity assumption as Myerson's \cite{MYERSON81}. Now, following a similar treatment of the buyer's problem as in Section \ref{ssec:regular} reduces it to: \begin{equation} \begin{array}{|l|} \hline \min\sum_{i=1}^{n}\sum_{j=1}^{2} x_{i_j}H_{i_j}(c_{i_j})\\ \mbox{subject to}\\ \mbox{1. } x_{i_j} \in \{0,1\}, \mbox{ where } x_{i_j} \mbox{ indicates whether supplier } i \mbox{ is supplying his} \\ j^{th} \mbox{ bundle or not.} \\ \mbox{2. } x_{i_1} + x_{i_2} \leq 1 \mbox{ (XOR minded bidder).}\\ \mbox{3. All the items are procured.}\\ \hline \end{array} \label{eqn:regular_ocax} \end{equation} Now, we show that at the optimal allocation, the assumption (\ref{eqn:assumption}) holds true. For an agent $i$, fix $\theta_{-i}$ and consider the square of his types $[\underline{c_i},\overline{c_i}]\times[\underline{c_i},\overline{c_i}]$. When he bids $b_i=(\overline{c_i},\overline{c_i})$, he does not win any item. However, if he decreases his bid $c_{i_j}$, he wins the bundle $B_{i_j}$ at some sufficiently low bid, and he continues to win at any lower bid for $B_{i_j}$. Also, being XOR minded, he cannot win both the bundles. Thus, the square of the type set can be partitioned into three regions, $R_1,R_2$ and $R_3$, as shown in Figure \ref{fig:xor}. When his type is in region $R_j$, he is asked to supply $B_{i_j}, \;j=1,2$, and when it is in $R_3$ he is not in the list of winning agents. Now, except on the boundary between $R_1$ and $R_2$, the assumption (\ref{eqn:assumption}) holds true. Hence, though we are not using (\ref{eqn:assumption}) as a necessary condition, it is satisfied by the optimization problem (\ref{eqn:regular_ocax}). Thus OCAX is an optimal combinatorial auction for the buyer in the presence of XOR minded bidders. \begin{figure}[!htb] \centering \includegraphics[width=13cm]{XOR_bids.eps} \caption{X-OR Bidding} \label{fig:xor} \end{figure} \subsubsection{The Case when the Regularity Assumption is not Satisfied} Though we do not solve the buyer's problem of optimal mechanism design without the regularity assumption, we highlight some thoughts on it. If we could assume (\ref{eqn:assumption}), then we could design an optimal auction, very similar to the OCAS, in the presence of XOR minded bidders.
The challenge is that we can neither use (\ref{eqn:assumption}) as a necessary condition nor assume it. However, it may happen that in an optimal auction the condition (\ref{eqn:assumption}) will hold true. We are still working on this. \section{Conclusion} \label{sec:conclusion} In this paper, \begin{itemize} \item we have stated and proved a necessary and sufficient condition for incentive compatible and individually rational multi-unit multi-item auctions in the presence of single minded, capacitated sellers. \item We have given a blueprint of an optimal mechanism for a buyer seeking to procure multiple units of multiple items in the presence of single minded and capacitated sellers. \item We have also shown that the mechanism minimizes the cost subject to DSIC and IIR if the virtual cost functions satisfy the regularity assumptions. \item When the bidders are XOR minded, under certain regularity conditions, we have designed an optimal auction for the buyer which we call OCAX. \end{itemize} There are many natural extensions to this work. First, we can study optimal auctions in which the sellers are willing to give volume discounts. We also plan to study the case where the sellers are interested in supplying multiple bundles. \section*{Acknowledgment} The first author would like to acknowledge Infosys Technologies Pvt Ltd for awarding the Infosys fellowship to pursue his PhD. \bibliographystyle{acm}
\section{Introduction} Astrophysical research in the past two decades has accumulated strong evidence for the existence of supermassive black holes at the center of most galaxies, including the Milky Way \cite{gillessen17} and the nearby M87 \cite{broderick15}. One year ago, the international collaboration Event Horizon Telescope (EHT) presented the first reconstructed image \cite{eht19} of the close environment of the supermassive black hole located at the center of the giant elliptical galaxy M87, obtained by a global very long baseline interferometry (VLBI) array in millimeter radio waves (230 GHz). The image shows the presence of the so-called shadow, surrounded by the light coming from the accretion disk around the black hole M87*, with the bright emission ring having a diameter of $42 \pm 3$ $\mu $as. The behavior of the photons in the neighborhood of a black hole determines the shadow, or apparent shape, as seen by a faraway observer. The shadows of non-rotating black holes are circles, but rotating ones show a deformation produced by the spin \cite{bardeen,chandra}. Many researchers studied this topic in the years prior to the EHT announcement, both in Einstein theory \cite{luminet,falcke00,devries,takahashi,hioki09,bambi09,shadowplas,tsupko17,other18,ghosh20} and in modified gravity \cite{hioki08,amarilla,braneworld,other14,tsukamoto14,perlick14,herdeiro,other16,tsukamoto18,shadowbwcc,ovgun18}. In the case of modified gravity, the size and the shape of the shadow, which always depend on the mass, the angular momentum, and the inclination angle of the black hole, are also related to other parameters specific to the particular model adopted. The EHT discovery has led to an outburst in the number of works published in the field; see, for example, \cite{new19rg,new19at,new20rg,new20at}.
It is expected that more detailed direct observations of M87* and also of other black holes will be possible in the coming years \cite{zhu19,millimetron,ehi,observ}, so that the analysis of the shadows will be a useful tool for improving our knowledge of astrophysical black holes and also for comparing alternative theories with General Relativity. Other interesting topics concerning the physical nature of the black hole photon ring and the shadow have been recently discussed in the literature \cite{topics}. The presence of fields or fluids modifies the geometry with respect to the Kerr vacuum spacetime, leaving its imprint on the apparent shape of a black hole as seen by a distant observer. Isotropic fluids have been investigated extensively in General Relativity, but anisotropic ones had not drawn much attention until recently. Many works related to anisotropic matter can be found in the literature in different contexts: compact stars, relativistic stellar objects, self-gravitating systems, stellar objects consistent with quark stars, and black holes, among others. The covariant Tolman-Oppenheimer-Volkoff equations for an anisotropic fluid have also been obtained recently. A brief review of this topic, with many references, is given in \cite{cho18}, where spherically symmetric black holes with a simple anisotropic fluid are introduced. In this article, we investigate the shadow cast by charged rotating black holes under the influence of an anisotropic matter field, assuming a recently obtained solution \cite{kim20}. In Sec. \ref{sec:metric}, the metric of the black hole is reviewed, and the null geodesics are analyzed. The shape of the shadow is obtained in Sec. \ref{sec:shadow} for different values of the parameters, and the corresponding observables are defined and calculated. Finally, the differences with the Kerr-Newman geometry and the future observational prospects are discussed in Sec. \ref{sec:dis}.
We work in units where $G = c = 1$, with $G$ the gravitational constant and $c$ the speed of light. \section{The black hole metric}\label{sec:metric} We consider the recently introduced geometry \cite{kim20}, corresponding to a rotating solution of the field equations that result from the action \begin{equation} I= \int d^4 x \sqrt{-g} \left[\frac{1}{16\pi}(R-F_{\mu\nu}F^{\mu\nu}) +{\cal L}_m \right] , \label{action} \end{equation} where $R$ is the Ricci scalar, $F_{\mu\nu}$ is the Maxwell electromagnetic field tensor, and ${\cal L}_m$ describes effective anisotropic matter fields, which can be the result of an extra $U(1)$ field as well as diverse dark matter models. The locally anisotropic fluid corresponds to an effective description of matter having a radial pressure not equal to the angular pressure, within the context of General Relativity. The metric, obtained by applying the Newman-Janis algorithm to a static spherically symmetric solution \cite{kiselev03,cho18}, in Boyer-Lindquist coordinates reads \cite{kim20} \begin{equation} ds^2= - \frac{\rho^2 \Delta}{\Sigma} dt^2 + \frac{\Sigma \sin^2 \theta}{\rho^2} (d\phi - \Omega\, dt)^2 + \frac{\rho^2}{\Delta} dr^2 + \rho^2 d\theta^2, \end{equation} where \begin{gather} \rho^2 = r^2 + a^2 \cos^2 \theta, \\ \Delta = \rho^2 F(r,\theta) + a^2\sin^2\theta, \label{eq:delta} \\ \Sigma = (r^2 + a^2)^2 - a^2 \Delta \sin^2\theta, \\ \Omega = \frac{[1-F(r,\theta)]\rho^2 a}{\Sigma}, \end{gather} with \begin{equation} F(r, \theta) = 1 - \frac{2Mr - Q^2 + K r^{2(1-w)}}{\rho^2}. \end{equation} Here $M$ is the mass, $a = J/M$ the angular momentum per unit mass, and $Q$ the electric charge of the black hole, and the parameters $K$ and $w$, respectively, control the density and anisotropy of the fluid surrounding the black hole \cite{kiselev03,cho18,kim20}. 
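As a quick numerical sanity check of the metric functions above (a sketch in units $G=c=1$ with illustrative parameter values), one can verify that $\Delta = \rho^2 F + a^2\sin^2\theta$ is actually independent of $\theta$, and that setting $K=0$ recovers the Kerr-Newman form $r^2 - 2Mr + a^2 + Q^2$:

```python
import math

def Delta(r, theta, M, a, Q, K, w):
    """Delta = rho^2 F(r, theta) + a^2 sin^2(theta), in units G = c = 1."""
    rho2 = r**2 + a**2 * math.cos(theta)**2
    F = 1.0 - (2.0 * M * r - Q**2 + K * r**(2.0 * (1.0 - w))) / rho2
    return rho2 * F + a**2 * math.sin(theta)**2

M, a, Q, K, w, r = 1.0, 0.9, 0.3, 0.1, 0.75, 4.0  # illustrative values

# Delta does not depend on theta: the a^2 cos^2(theta) terms cancel
assert math.isclose(Delta(r, 0.3, M, a, Q, K, w), Delta(r, 1.2, M, a, Q, K, w))

# K = 0 recovers the Kerr-Newman horizon function
assert math.isclose(Delta(r, 0.7, M, a, Q, 0.0, w),
                    r**2 - 2.0 * M * r + a**2 + Q**2)
```

The cancellation of the $\theta$-dependent terms is what makes $\Delta$ a function of $r$ alone, a fact used below when locating the event horizon.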
A related rotating black hole spacetime \cite{toshmatov17} was obtained by applying the Newman-Janis algorithm to the solution \cite{kiselev03}, with quintessential energy but without the electromagnetic field (no charge). When $K = 0$ in the equations above, we recover the Kerr-Newman metric. If the charge term comes from the presence of an electromagnetic field, then $Q^2$ is clearly positive. However, it is easy to also allow for negative values of $Q^2$; that is, replace $Q^2$ by a parameter $q$ which may be either positive or negative. Such a metric which in the $K=0$ case may be dubbed ``Kerr-Newman-like'', can arise from alternative theories of gravity or from different kinds of fields surrounding the black hole, with the ``charge'' term not coming from the electromagnetic Lagrangian; see, for example, \cite{braneworld} and the references therein. We will therefore rewrite the function $F(r, \theta)$ in the form \begin{equation} F(r, \theta) = 1 - \frac{2Mr - q + K r^{2(1-w)}}{\rho^2} \label{eq:F}, \end{equation} where $q$ may in principle take any real value, subject only to the condition that there exists an event horizon, in order that the geometry corresponds to a black hole. In an orthonormal frame $(e_{\hat{t}}, e_{\hat{r}}, e_{\hat{\theta}}, e_{\hat{\phi}})$, in which the stress-energy tensor of the anisotropic matter field is diagonal $T_{\hat{\mu}\hat{\nu}}= \mathrm{diag}(\varepsilon,p_{\hat{r}},p_{\hat{\theta}},p_{\hat{\phi}})$, the expressions for the energy density $\varepsilon$, the radial pressure $p_{\hat{r}}$, and the angular pressures $p_{\hat{\theta}}$ and $p_{\hat{\phi}}$ are \cite{kim20} \begin{equation} \varepsilon=\frac{q + (1-2w)K r^{2(1-w)}}{8\pi \rho^{4}}, \end{equation} \begin{equation} p_{\hat{r}}=-\varepsilon , \end{equation} \begin{equation} p_{\hat{\theta}}=p_{\hat{\phi}} = \left( \rho^{2}w-a^2\cos^2\theta\right) \frac{\varepsilon}{r^2} + (1-w)\frac{q}{8\pi \rho^2 r^2}. 
\end{equation} There are some restrictions one can impose on the parameters $w$ and $K$. The metric is clearly only asymptotically flat for $w>0$, and we consequently restrict ourselves to that case. In addition, the total energy is finite only when $w > 1/2$, because otherwise the energy density is not localized sufficiently, and the condition $q + (1-2w)K r^{2(1-w)} \geq 0$ must hold to have a non-negative energy density at a radius $r$ in the rest frame of the matter surrounding the black hole \cite{kim20}. This, however, does not affect the calculation of the black hole shadow, and hence we will only demand that $w$ is positive, allowing $K$ to take either sign. \subsection{Event horizon}\label{sec:eh} The event horizon of this spacetime is located at the largest radius for which $\Delta(r) = 0$. After substituting the expression \eqref{eq:F} for $F(r,\theta)$ in the definition \eqref{eq:delta} of $\Delta$, we find \begin{equation} \begin{split} \Delta(r) &= r^2 + a^2 + q - 2Mr - K r^{2(1-w)} \\ &\equiv \Delta_{KN} - K r^{2(1-w)}, \end{split} \end{equation} where $\Delta_{KN} = r^2 + a^2 + q -2Mr$ is the functional form of $\Delta$ in a Kerr-Newmann-like spacetime. We want to find the regions in parameter space for which an event horizon exists. The disappearance of the event horizon corresponds to a double root of $\Delta$, i.e., a simultaneous solution of the system of equations \begin{equation} \begin{cases} r^2 + a^2 + q - 2Mr - K r^{2(1-w)} &= 0 \\ 2(r-M) - 2(1-w)K r^{1-2w} &= 0. \end{cases} \end{equation} Except for special values of $w$ this system cannot be solved for $r$ in closed form, but we can easily find parametric expressions for $K$ and $w$ as functions of $r$, assuming that $M$, $a$, and $q$ have been fixed beforehand. In fact, the solutions depend on $a$ and $q$ only through the combination $a^2+q$, which reduces by one the number of independent parameters. 
To start, we simply solve for $K$ in the first equation to obtain $K = \Delta_{KN} r^{2(w-1)}$; then, by plugging this expression into the second equation, we find \begin{equation}\label{eq:wcrit} w = \frac{a^2 + q - Mr}{\Delta_{KN}}, \end{equation} from which it follows immediately that \begin{equation}\label{eq:kcrit} K = \frac{\Delta_{KN}}{r^{2r(r-M)/\Delta_{KN}}}. \end{equation} These relations are plotted in Fig. \ref{fig:horizontes}. The curve shown separates the spacetimes with a naked singularity corresponding to the shaded region, from those having the presence of the event horizon, represented by the region above this curve. In fact, the full graph of Eqs. \eqref{eq:wcrit} and \eqref{eq:kcrit} is more complex than shown in the picture, because there are additional interior horizons that can appear or disappear. Since we are not interested in the black hole interior, we have simply shown the separation curve between the naked singularities and the spacetimes with an event horizon. Note that it is possible to have $|a| > 1$, and in fact any value of $a$ at all, if $K$ and $w$ are chosen appropriately. Some care is required when graphing because the dimensions of $K$ depend on $w$, and therefore values of $K$ for different values of $w$ cannot be directly compared. More concretely, from the definition of $F(r,\theta)$ we see that $K r^{-2w}$ must be dimensionless, so that $K$ has dimensions of length raised to the $2w$ power. In our plots, we take $M=1$ throughout, making all quantities dimensionless; this is equivalent to replacing $K$ by a new dimensionless parameter $\tilde{K} = K / M^{2w}$. \begin{figure}[t!] \centering \includegraphics[width=\textwidth]{f1-wk} \caption{The allowed values of $w$ and $K$. The solid curves show the parameter values for which the event horizon disappears: in the shaded regions below them, the spacetime contains a naked singularity. 
As mentioned in the text, we take $M=1$, so that $K$ is dimensionless.} \label{fig:horizontes} \end{figure} We can see that Eqs. \eqref{eq:wcrit} and \eqref{eq:kcrit} together with Fig. \ref{fig:horizontes} show how to find the values of $w$ and $K$ at which the event horizon disappears, for given values of $M$, $a$, and $q$. In this work, however, we will be plotting various quantities as functions of $q$, and so we are mostly concerned with the opposite problem: solving for the value of $q$ (or, as mentioned before, of $a^2+q$) at which the event horizon disappears, for given values of $K$ and $w$. This amounts to finding the values of $r$ and $q$ at which both $\Delta(r,q)$ and $\partial_r \Delta(r,q)$ equal zero, and should be done numerically. \subsection{Null geodesics}\label{sec:ngeo} To find the geodesics in this spacetime we follow the standard method, first introduced in \cite{carter} for the Kerr geometry (see, for example, \cite{chandra}), of separating variables in the Hamilton-Jacobi equation \begin{equation} \frac{\partial S}{\partial \lambda}=-\frac{1}{2}g^{\mu\nu}\frac{\partial S}{\partial x^\mu}\frac{\partial S}{\partial x^\nu}, \end{equation} where $\lambda$ is an affine parameter along the geodesics and $S$ is the Jacobi action. For the geometry considered here, this action is separable in the simple form \begin{equation} S=\frac{1}{2}\mu^2 \lambda -E t + L\phi + S_r(r) + S_{\theta}(\theta), \end{equation} with $\mu$ the mass of a test particle, $E$ the energy, and $L$ the azimuthal angular momentum. The quantities $E$ and $L$ are constants of motion along the geodesic, related to the symmetries of the spacetime, with cyclic coordinates $t$ and $\phi$, and the associated Killing vectors. The functions $S_r(r)$ and $S_{\theta}(\theta)$ only depend on $r$ and $\theta$, respectively. 
Considering null geodesics, i.e., $\mu =0$, the Hamilton-Jacobi equation can be rewritten in the form \begin{equation} -\Delta \left(\frac{d S_r}{dr}\right)^2 + \frac{\left[(r^2+a^2)E-aL \right]^2}{\Delta} = \left(\frac{d S_\theta}{d\theta}\right)^2 + \frac{\left(L-aE\sin^2\theta \right)^2}{\sin^2\theta }. \end{equation} In this equation, one side is a function only of $r$ and the other side depends only on $\theta$; therefore, both sides should be equal to the same positive constant, denoted by $\mathcal{K}$. Using that $dx^\mu /d\lambda =p^\mu = g^{\mu \nu } p_\nu$ and $p_\mu = \partial S / \partial x^\mu $, the separation procedure leads in our case to the first-order geodesic equations \begin{align} \rho^2 \dot{t} &= \frac{r^2+a^2}{\Delta} P(r) - a(a \sin^2\theta E - L), \\ \rho^2 \dot{\phi} &= \frac{a P(r)}{\Delta} - aE + \frac{L}{\sin^2 \theta}, \\ \rho^2 \dot{r} &= \pm \sqrt{\mathcal{R}(r)}, \\ \rho^2 \dot{\theta} &= \pm \sqrt{\Theta(\theta)}, \end{align} where \begin{align} P(r) &= E(r^2+a^2) - aL, \\ \mathcal{R}(r) &= P(r)^2 - \Delta [(L-aE)^2 + \mathcal{Q}], \\ \Theta(\theta) &= \mathcal{Q} + \cos^2\theta \left(a^2 E^2 - \frac{L^2}{\sin^2\theta}\right); \end{align} in them, the dot represents the derivative with respect to $\lambda$ and $\mathcal{Q}=\mathcal{K}-(L-aE)^2$ is the Carter constant \cite{carter}. Since null geodesics are unaffected by a rescaling of the affine parameter, we rescale the conserved quantities to obtain the impact parameters \begin{equation} \xi= \frac{L}{E}, \qquad \eta = \frac{\mathcal{Q}}{E^2}, \end{equation} which are independent of the chosen affine parametrization. \section{Black hole shadow}\label{sec:shadow} We now proceed with the study of the apparent shape of the black hole in the sky of a far away observer. \subsection{Shape of the shadow}\label{sec:shape} The shadow of a black hole as seen by a distant observer is the set of directions in the sky from which no light rays can arrive from infinity.
Its contour is then the border between those trajectories that when propagated backward in time fall into the black hole and those that reach infinity. It corresponds to the trajectories that asymptotically approach the spherical photon orbits of constant $r$ and which therefore have the same impact parameters as them. We review here the procedure, described in detail in \cite{tsukamoto18}, for finding these trajectories for a certain class of metrics. To begin, we note that we can write the function $\Delta$ appearing in the metric as \begin{equation} \Delta = r^2 - 2 m(r) r + a^2, \end{equation} with \begin{equation} m(r) = M - \frac{q}{2r} + \frac{K}{2r^{2w-1}}. \end{equation} The metric is otherwise identical to the Kerr metric and reduces to it if we set $m(r) \equiv M$. To find the orbits of constant radius, we look for double roots of the radial potential $\mathcal{R}$, which together with its derivative can be expressed as \begin{align} \frac{\mathcal{R}(r)}{E^2} &= r^4 + (a^2 - \xi^2 - \eta)r^2 + 2 [(\xi-a)^2 + \eta] m(r) r - a^2 \eta, \\ \frac{\mathcal{R}'(r)}{E^2} &= 4 r^3 + 2 (a^2 - \xi^2 - \eta) r + 2 [(\xi-a)^2 + \eta] (m(r) + m'(r)r). \end{align} Setting $\mathcal{R}(r) = 0 = \mathcal{R}'(r)$ gives a pair of equations which is quadratic in the impact parameters $\xi$ and $\eta$, and thus can be solved for them as parametric functions of the radius $r$ of the spherical photon orbit. This yields two solutions, one of which is not relevant for the black hole shadow (see \cite{chandra,tsukamoto18} for details); the other solution is \begin{gather} \xi = \frac{4 m(r) r^2 - (r + m(r) + m'(r)r)(r^2 + a^2)}{a(r - m(r) - m'(r)r)}, \label{eq:xiC} \\ \eta = r^3 \frac{4a^2 (m(r)-m'(r)r) - r(r - 3m(r) + m'(r)r)^2}{a^2 (r - m(r) - m'(r)r)^2}. \label{eq:etaC} \end{gather} \begin{figure}[t!] 
\centering \includegraphics[width=\textwidth]{f2-sombras} \caption{Contour of the black hole shadow for $a = 0.9$ and $\theta_\text{o} = \pi/2$; all quantities have been made dimensionless by setting $M=1$.} \label{fig:sombras} \end{figure} Assuming an observer at infinity, we adopt the celestial coordinates \cite{bardeen,chandra} \begin{gather} \alpha = - r_0 \frac{p^{\hat{\phi}}}{p^{\hat{t}}} \bigg|_{r_0 \to \infty}, \\ \beta = - r_0 \frac{p^{\hat{\theta}}}{p^{\hat{t}}} \bigg|_{r_0 \to \infty}, \end{gather} where $p^{\hat\mu}$ are the components of the momentum in the orthonormal tetrad of the observer; the orientation is such that the $\beta$ axis is aligned with the spin of the black hole. Writing this momentum in terms of the impact parameters, the coordinates of an incoming photon can then be shown to be \cite{bardeen,chandra} \begin{gather} \alpha = - \frac{\xi}{\sin\theta_\text{o}}, \\ \beta = \pm \sqrt{\eta + \cos^2\theta_\text{o} \left(a^2 - \frac{\xi^2}{\sin^2\theta_\text{o}}\right)}, \end{gather} where $\theta_\text{o}$ is the inclination angle of the observer from the spin axis. Using the expressions \eqref{eq:xiC} and \eqref{eq:etaC} for the critical impact parameters, these equations give a curve in the $(\alpha, \beta)$ plane parametrized by the radius $r$ of the spherical photon orbit corresponding to each point. The domain is bounded by the radii $r_\pm$ for which $\beta(r_\pm) = 0$, which should be found numerically. Some example contours are shown in Fig. \ref{fig:sombras}. The usual effects of the rotation of the black hole are clearly seen: for nonzero $a$, the shadow is asymmetrical and displaced from the origin of coordinates. To show these effects as clearly as possible, in this and in subsequent figures we have chosen to place the observer at the equatorial plane, with $\theta_\text{o} = \pi/2$. 
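As a consistency check of Eqs. \eqref{eq:xiC} and \eqref{eq:etaC}, one can verify numerically that the critical impact parameters make both $\mathcal{R}$ and $\mathcal{R}'$ vanish, and then read off a contour point for an equatorial observer. A minimal sketch (the parameter values and the chosen orbit radius $r = 2.5$ are illustrative assumptions, not those of the figures):

```python
import math

M, a, q, K, w = 1.0, 0.9, 0.2, 0.1, 0.75   # illustrative parameters

def m(r):    # mass function: Delta = r^2 - 2 m(r) r + a^2
    return M - q/(2.0*r) + K/(2.0*r**(2.0*w - 1.0))

def mp(r):   # its derivative m'(r)
    return q/(2.0*r**2) - (2.0*w - 1.0)*K/(2.0*r**(2.0*w))

def impact_parameters(r):
    # critical impact parameters for the spherical photon orbit of radius r
    mt = m(r) + mp(r)*r
    xi = (4.0*m(r)*r**2 - (r + mt)*(r**2 + a**2)) / (a*(r - mt))
    eta = r**3*(4.0*a**2*(m(r) - mp(r)*r) - r*(r - 3.0*m(r) + mp(r)*r)**2) \
        / (a**2*(r - mt)**2)
    return xi, eta

r = 2.5                                # assumed spherical photon-orbit radius
xi, eta = impact_parameters(r)

# radial potential and its derivative, divided by E^2, from the text
b = (xi - a)**2 + eta
R = r**4 + (a**2 - xi**2 - eta)*r**2 + 2.0*b*m(r)*r - a**2*eta
Rp = 4.0*r**3 + 2.0*(a**2 - xi**2 - eta)*r + 2.0*b*(m(r) + mp(r)*r)

# contour point for an equatorial observer (theta_o = pi/2)
alpha, beta = -xi, math.sqrt(eta)
```

Both residuals vanish to machine precision, confirming that the pair $(\xi, \eta)$ is indeed a double root of the radial potential.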
We recall at this point that even though the shadow can be found for any values of the parameters $w$ and $K$, physical considerations impose some restrictions: as mentioned in Sec. \ref{sec:metric}, the total energy in the rest frame of the surrounding matter will only be finite for $w > 1/2$, and the energy density will only be non-negative at radii for which $q + (1-2w)K r^{2(1-w)} \geq 0$. The dependence of the shape of the shadow on the parameters is discussed in the next section. \subsection{Observables}\label{sec:obs} \begin{figure}[t!] \centering \includegraphics[width=\textwidth]{f3-areas} \caption{The area of the black hole shadow for some representative values of the parameters. All quantities have been made dimensionless by setting $M=1$.} \label{fig:areas} \end{figure} \begin{figure}[t!] \centering \includegraphics[width=\textwidth]{f4-elipt} \caption{The oblateness of the black hole shadow for some representative values of the parameters.} \label{fig:elipt} \end{figure} \begin{figure}[t!] \centering \includegraphics[width=\textwidth]{f5-centroide} \caption{The horizontal displacement of the center of the shadow.} \label{fig:despl} \end{figure} Various observables have been proposed in the literature \cite{hioki09,tsukamoto14,tsupko17,ghosh20} as a way to describe the shape of the black hole shadow and its dependence on the parameters. We start with the area of the shadow, defined as \begin{equation} A = 2 \int_{\alpha_-}^{\alpha_+} \beta\, d\alpha = 2 \int_{r_+}^{r_-} \beta(r) |\alpha'(r)|\, dr, \end{equation} with a factor of two since the curve $(\alpha(r), \beta(r))$, choosing the positive sign for $\beta$, only describes half the shadow. Here $\alpha_-$ and $\alpha_+$ are the coordinates of the left and right ends of the shadow, respectively, and $r_\pm$ the corresponding values of $r$. 
This integral can be evaluated numerically with standard software, but it turned out to be faster to obtain the area by solving the differential equation $A'(r) = 2 \beta(r) |\alpha'(r)|$. The dependence of the area on the parameters $a$, $K$, $w$, and $q$ is shown in Fig. \ref{fig:areas}; as mentioned before, we consider negative and positive values of $q$, up to a maximum value for which the event horizons disappear, and we set $M=1$. It can be seen that the area is in all cases a decreasing function of $q$, generalizing the behavior of the Kerr-Newman black hole; it is also an increasing function of $K$, which is reasonable since $K$ and $q$ appear with opposite signs in the metric. The middle curve, with $K=0$, corresponds to the Kerr-Newman black hole and is independent of $w$. It might be tempting to say that the three curves get closer together as $w$ increases; however, one should remember that $K$ has dimensions of mass raised to the power $2w$, so that values of $K$ for different values of $w$ are not directly comparable. To quantify the deformation of the shadow, we can also define the oblateness \begin{equation} D = \frac{\Delta \alpha}{\Delta \beta}, \end{equation} where $\Delta \alpha$ and $\Delta \beta$ are the extents of the shadow in the horizontal and vertical directions, respectively; the circular shadow of a non-rotating black hole has $D=1$. In our case, it is plotted as a function of $q$ in Fig. \ref{fig:elipt} for some representative values of the parameters. The dependence on the charge is in this case rather weak and more pronounced for larger spins, with the curves being barely distinguishable for $a = 0.2$, where the shadow is nearly circular. We see that the shadow becomes less circular as $K$ becomes more negative, where the black hole is closer to being extremal, but the difference in oblateness between the lowest and highest values of $K$ stays below $\sim 15\%$. 
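As an illustrative cross-check of these observables in the Kerr limit ($K=0$, $q=0$, so $m(r)\equiv M$, with $a=0.9$ and an equatorial observer), the sketch below samples the contour between the equatorial photon-orbit radii (taken from the standard Kerr closed-form expression, an ingredient not spelled out in the text) and evaluates $A$ and $D$ by direct quadrature rather than the differential-equation route used here:

```python
import math

M, a = 1.0, 0.9                        # Kerr limit: m(r) = M, m'(r) = 0

def xi(r):                             # Eq. (xiC) with m(r) = M
    return (4.0*M*r**2 - (r + M)*(r**2 + a**2)) / (a*(r - M))

def eta(r):                            # Eq. (etaC) with m(r) = M
    return r**3*(4.0*a**2*M - r*(r - 3.0*M)**2) / (a**2*(r - M)**2)

# equatorial photon-orbit radii bounding the contour (standard Kerr formula)
r_pro = 2.0*M*(1.0 + math.cos((2.0/3.0)*math.acos(-a/M)))
r_ret = 2.0*M*(1.0 + math.cos((2.0/3.0)*math.acos(a/M)))

N = 40000
rs = [r_pro + (r_ret - r_pro)*k/N for k in range(N + 1)]
alphas = [-xi(r) for r in rs]                       # theta_o = pi/2
betas = [math.sqrt(max(eta(r), 0.0)) for r in rs]   # upper half of contour

# A = 2 * integral of beta d(alpha) (trapezoid rule); D = Delta alpha / Delta beta
A = 2.0*sum(0.5*(betas[k] + betas[k + 1])*abs(alphas[k + 1] - alphas[k])
            for k in range(N))
D = (max(alphas) - min(alphas)) / (2.0*max(betas))
```

For these values the oblateness comes out slightly below unity, as expected for a rapidly rotating Kerr black hole seen edge-on.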
The exception is when $K$ is positive and $w$ is greater than one, which allows $q$ to become arbitrarily large. In this case, we have $D > 1$, and the shadow becomes horizontally stretched instead of compressed. Lastly, we consider the displacement between the optical axis and the centroid of the shadow, which lies on the $\alpha$ axis at a position $\alpha_c$ defined by the integral \begin{equation} \alpha_c = \frac{1}{A} \int_{\alpha_-}^{\alpha_+} 2 \alpha \beta\, d\alpha = \frac{1}{A} \int_{r_+}^{r_-} 2 \alpha(r) \beta(r) |\alpha'(r)|\, dr. \end{equation} From an observational perspective, this is the most difficult quantity to measure, since it requires independent knowledge of the true position of the black hole in the sky. Its dependence on the parameters is shown in Fig. \ref{fig:despl}. Here we see for the first time non-monotonic behavior: for $w = 0.25$ the shadow is displaced further to the left as $K$ increases, while the opposite occurs for $w \geq 0.75$. In fact, the reversal in behavior is gradual, in the sense that it does not happen at a single value of $w$. Rather, as $w$ increases toward $1/2$, the curves begin to cross each other; when $w = 1/2$, all three curves cross at $q=0$: in other words, the displacement for $q=0$ is independent of $K$. As $w$ increases past $1/2$, the crossing points move toward negative values of $q$, as can be seen in the right column of Fig. \ref{fig:despl}. As in the case of the oblateness, we also see drastically different behavior when $q$ becomes large enough: in this case, the shadow eventually moves to the right of the optical axis. \section{Discussion}\label{sec:dis} In this work, we have analyzed a spacetime that describes a black hole with electric charge $Q$ surrounded by a perfect fluid with anisotropic pressure. However, due to its generality, it can have a wide variety of physical origins \cite{kim20}, all describing asymptotically flat black holes as long as the conditions in Sec. 
\ref{sec:eh} are satisfied. In particular, we have found that for $w>0$, which is the asymptotically flat case, we can have $a > M$ as long as the fluid parameters $K$ and $w$ take values in the allowed region for the presence of the event horizon. This suggests that an observation of a black hole with a spin greater than the Kerr bound could be evidence toward the presence of an additional term in the metric. Our basic assumption is that the matter interacts with the photons only gravitationally. If the matter absorbs or scatters the photons, our study applies to a frequency band where these effects can be neglected, which depends on the particular details of the matter model. The shadow of a black hole geometry with the same form as the one considered here has been described before in the absence of electric charge \cite{kumar20}, albeit with a different physical motivation for the $K$-dependent term. We have extended the analysis to include the possibility of an electrically charged black hole, or, as mentioned in Sec. \ref{sec:metric}, other Kerr-Newman-like spacetimes, characterized by a parameter $q$ having any sign, instead of $Q^2$. We are interested in the possibility of finding non-Kerr features in a hypothetical observation of a black hole shadow, and to this end we have calculated the values of three observables, assuming an equatorial observer to provide some numbers: the area of the shadow, its oblateness or departure from circularity, and the displacement of the center of the shadow from the optical axis. The latter is probably the most difficult to determine observationally, since it requires an independent measurement of the true position of the black hole in the sky. Still, all three observables can be determined from an observation of a black hole shadow, and they can be used to contrast alternatives to the Kerr metric. 
We have found that the properties of the shadow mostly mirror the Kerr-Newman case: as the charge increases, the shadow becomes smaller, the oblateness becomes less than unity, and the shadow moves to the left of the optical axis. This is observed independently of the parameters $w$ and $K$. Increasing $K$ gives the opposite effect to increasing the charge, which is reasonable since the charge term and the $K$ term in the metric have similar behavior, and they appear with opposite signs. As noted above, however, this is not the case for the displacement of the center of the shadow in the $w < 1/2$ region: increasing $K$ moves the center to the left, as does increasing $q$. There is another exception in the $K>0$, $w>1$ region, where $q$ does not have a maximum value: in this case, the oblateness and the horizontal displacement reverse their behavior and start increasing for large values of $q$. Geometrically, the shadow moves to the right of the optical axis, and its horizontal diameter becomes larger than its vertical diameter. This regime is particularly interesting because it is easily distinguishable from the Kerr shadow and is thus a prime candidate for comparison with future observations by the EHT or other forthcoming instruments. In the examples presented throughout this work, we have only considered the case of an equatorial observer, but it is straightforward to extend our results to observers with other inclination angles, for which similar characteristic features of the shadow are obtained. Also, for theoretical purposes and completeness, we have taken values of the parameters $q$, $K$, and $w$ in a range much larger than physically expected in nature. However, the reasonably small deviations from the Kerr case expected in our work would not be observable with the current or upgraded EHT instruments and will surely demand more advanced facilities. 
Future observations by the EHT and subsequent analysis will make it possible to study the stability, shape, and depth of the shadow with greater precision. One of its key characteristics is that it must remain constant over time, since the mass of M87* is not expected to change on human time scales. Polarimetric image analysis will provide information on the accretion rate and magnetic flux. The black hole Sgr A* has a mass three orders of magnitude smaller than that of M87* and dynamic time scales of minutes instead of days. The observation of the shadow of Sgr A* will require taking into account this variability and mitigating the dispersion effects caused by the interstellar medium \cite{zhu19}. Higher-resolution images can be achieved by going to a shorter wavelength, for example, to $0.8$ mm ($345$ GHz), by adding telescopes and, farther in the future, with space-based interferometry. The baselines between telescopes in space are larger, and such instruments can operate at higher radio frequencies --filtered out by the atmosphere for instruments on Earth-- resulting in better image resolution. Millimetron \cite{millimetron} is a planned space-based mission that will operate from far infrared to millimeter bands (in principle, up to 900 GHz), with an expected resolution as a ground-space system of $0.1$ $\mu$as or better. Another recently proposed space VLBI mission \cite{ehi} is the Event Horizon Imager (EHI), which will work at high frequencies (up to $690$ GHz), allowing for extremely high-resolution and high-fidelity imaging of radio sources, and may be suitable for Sgr A*. Proposed x-ray instruments would also have an improved resolution that will allow a detailed exploration of galactic centers in the more distant future. Other interesting observational aspects are discussed in \cite{observ}. The comparison between the observed shadows of black holes and theoretical models will be a valuable tool in forthcoming astrophysics. 
\section*{Acknowledgments} This work has been supported by CONICET and Universidad de Buenos Aires.
\section{Introduction} Quantum computing is emerging as a powerful computational paradigm~\cite{cao2019quantum,kandala2017hardware,biamonte2017quantum,farhi2014quantum,harrow2009quantum,rebentrost2014quantum}, showing impressive efficiency in tackling traditionally intractable problems, including cryptography~\cite{shor1999polynomial} and database search~\cite{grover1996fast}. With trainable weights in quantum circuits, quantum neural networks (QNNs), such as quantum convolution~\cite{henderson2020quanvolutional} and the quantum Boltzmann machine~\cite{amin2018quantum}, have achieved speed-ups over classical algorithms in machine learning tasks, including metric learning~\cite{lloyd2020quantum} and principal component analysis~\cite{lloyd2014quantum}. Despite the successful outcomes in processing structured data (e.g., images~\cite{wang2021roqnn}), QNNs are rarely explored for graph analysis. Graphs are ubiquitous in real-world systems, such as biochemical molecules~\cite{dai2016discriminative,duvenaud2015convolutional,zhou2021multi} and social networks~\cite{hamilton2017inductive,velivckovic2017graph,zhou2021dirichlet,chen2022bag}, where graph convolutional networks (GCNs) have become the de facto standard analysis tool~\cite{kipf2017semi}. By passing large-volume, high-dimensional messages along the edges of the underlying graph, a GCN learns effective node representations to predict graph properties. Given the cost of graph computation, QNNs could provide potential acceleration via the superposition and entanglement of quantum circuits. However, the existing quantum GCN algorithms cannot be directly applied to real-world graph analysis. They are either developed for image recognition or quantum physics~\cite{zheng2021quantum,verdon2019quantum}, or are only counterpart simulations on classical machines~\cite{dernbach2018quantum,beer2021quantum}. Even worse, most of them do not provide open-source implementations. 
To tackle these challenges, as shown in Figure~\ref{fig:system}, we propose quantum graph convolutional networks (QuanGCN) for graph classification tasks in real-world applications. Specifically, we leverage a differentiable pooling layer to cluster the input graph, where each node is encoded by a quantum bit (qubit). Crossing-qubit gate operations are used to define the local message passing between nodes. QuanGCN delivers promising classification accuracy on the real quantum system IBMQ-Quito. Existing quantum devices suffer from non-negligible error rates in the quantum gates, which may lead to poor generalization of QuanGCN. To mitigate the noisy impact, we propose to apply a sparse constraint and skip connections. While the sparse constraint sparsifies the pooled graph and reduces the number of crossing-qubit gate operations, the skip connection augments the quantum outputs with the classical node features. In summary, we make the following three contributions: (1) the first QuanGCN to address real-world graph property classification tasks; (2) two noise mitigation techniques used to improve the model's robustness; (3) extensive experiments validating the effectiveness of QuanGCN compared with classical algorithms. \section{Methodology} We represent an undirected graph as $G = (A, X)$, where $A\in\mathbb{R}^{n\times n}$ denotes the adjacency matrix, $X\in\mathbb{R}^{n\times d}$ denotes the feature matrix, $n$ is the number of nodes, and the $i$-th row $x_i$ of matrix $X$ is the feature vector of node $v_i$. The goal of the graph classification task is to predict the label of each graph (e.g., a biochemical molecule property). Specifically, given a set of graphs $\{(G_1, y_1), (G_2, y_2), \cdots\}$ where $y_g$ is the corresponding label of graph $G_g$, we learn the representation vector $h_g$ to classify the entire graph: $y_g = f(h_g)$. 
\begin{figure} \centering \includegraphics[width=\textwidth]{Figs/methods.pdf} \vspace{-10pt} \caption{The classical-quantum hybrid framework of QuanGCN for the graph classification problem: the encoder pools the input molecule and encodes the clustered node features into qubits, the graph convolutional layer inserts quantum gates between pairs of qubits to approximate classical message passing, and the measurement circuits read out the graph representation to estimate labels.} \label{fig:system} \end{figure} \subsection{Preliminary of Graph Convolutional Networks} \label{sec: GCN} The node embedding $x^{(l)}_i\in\mathbb{R}^{d}$ at the $l$-th layer of a graph neural network is generally learned according to~\cite{hamilton2017inductive,zhou2019auto,sun2022gppt}: \begin{equation} \label{equ:GCN} x^{(l)}_i = \mathrm{Aggregate}(\{a_{ij}x^{(l-1)}_jW^{(l)}: j\in\mathcal{N}(i)\cup v_i \}). \end{equation} $\mathcal{N}(i)$ denotes the set of neighbors adjacent to node $v_i$; $a_{ij}$ denotes the weight of the edge connecting nodes $v_i$ and $v_j$, which is given by the $(i,j)$-th element of matrix $A$; $\mathrm{Aggregate}$ denotes the permutation-invariant function that aggregates the neighborhood embeddings and combines them with that of the node itself, i.e., $v_i$. Widely used aggregation functions include the sum and the mean. $W^{(l)}\in\mathbb{R}^{d\times d}$ is a trainable projection matrix. Considering an $L$-layer GCN, a READOUT function (e.g., sum or mean) collects all the node embeddings from the final iteration to obtain the graph representation $h_g = \mathrm{READOUT}(x^{(L)}_i | i=1, \cdots, n)$, which is used for the graph classification task. \subsection{Quantum Graph Convolutional Networks} We propose QuanGCN to realize graph representation learning on a classical-quantum hybrid machine (Figure~\ref{fig:system}), which consists of three key components. 
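Before turning to the quantum components, the classical aggregation of Eq.~\eqref{equ:GCN} can be sketched as follows (a minimal mean-aggregation instance with NumPy; the toy graph and identity weights are illustrative assumptions):

```python
import numpy as np

def gcn_layer(A, X, W):
    # mean-aggregation instance of the Aggregate update: average over each
    # node's neighbourhood (self-loop included), then project with W
    A_hat = A + np.eye(A.shape[0])
    A_norm = A_hat / A_hat.sum(axis=1, keepdims=True)
    return A_norm @ X @ W

def readout(X):
    # mean READOUT over nodes -> graph-level representation h_g
    return X.mean(axis=0)

# toy 4-node path graph with d = 2 features and identity projection weights
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.random.default_rng(0).normal(size=(4, 2))
h_g = readout(gcn_layer(A, X, np.eye(2)))
```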
\paragraph{\textbf{Pooling Quantum State Encoder.}} This stage is responsible for encoding the classical node features into the quantum device, where each node is represented by a qubit. Since existing quantum machines have a limited number $q$ of qubits, it is intractable to encode graphs with thousands of nodes. In this work, we leverage a differentiable pooling module to cluster each graph into a fixed $q$-node coarsened graph. Specifically, let $S= \mathrm{Pool}(A, X) \in \mathbb{R}^{n\times q}$ denote the clustering matrix, where $\mathrm{Pool}$ is a trainable module such as an MLP or another advanced pooling network~\cite{gao2019graph,ying2018hierarchical,zhou2020towards}. Each row of matrix $S$ gives the probabilities of a node being pooled into the $q$ clusters. We obtain the adjacency matrix and node features of the coarsened graph as follows: \begin{equation} \label{eq:pool} A_p = S^T A S \in \mathbb{R}^{q\times q}; \quad X_p = S^T X \in \mathbb{R}^{q\times d}. \end{equation} We encode the node features of the pooled graph with rotation gates. To simplify the analysis and without loss of generality, we assume the node feature dimension to be $d=1$. Higher-dimensional features could be encoded by repeating the process or using more complicated quantum gates. To be specific, let $\ket{\phi}=\ket{0,...,0}$ denote the ground statevector of the $q$-qubit quantum system. The computation on a quantum system is implemented by a sequence of parameterized quantum gates acting on the statevector $\ket{\phi}$. Parameterized by the node features, we use a sequence of $\mathsf{R_y}$ gates to encode the pooled graph as $\ket{\phi} = \mathsf{R_y}(x_p, [q])\cdots \mathsf{R_y}(x_1, [1])\cdot\ket{\phi}$, where $\mathsf{R_y}(x_i, [i])$ denotes the single-qubit quantum gate rotating the $i$-th qubit along the $y$-axis, with the rotation angle given by the node feature $x_i$ (i.e., the $i$-th row of $X_p$). 
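The pooling of Eq.~\eqref{eq:pool} and the subsequent $\mathsf{R_y}$ angle encoding can be sketched classically as follows (the random soft assignment $S$ stands in for the trainable $\mathrm{Pool}$ module and is an assumption of this sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, q = 10, 1, 4                     # pool an n-node graph down to q qubits
A = np.triu((rng.random((n, n)) < 0.3).astype(float), 1)
A = A + A.T                            # random undirected adjacency matrix
X = rng.normal(size=(n, d))

# soft cluster assignments: one softmax row per node (a random stand-in for
# the trainable Pool module)
logits = rng.normal(size=(n, q))
S = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

A_p = S.T @ A @ S                      # pooled adjacency, q x q
X_p = S.T @ X                          # pooled features, q x d

# angle encoding: product state Ry(x_1)|0> (x) ... (x) Ry(x_q)|0>,
# using Ry(t)|0> = (cos(t/2), sin(t/2))
def ry0(t):
    return np.array([np.cos(t/2.0), np.sin(t/2.0)])

state = np.array([1.0])
for x in X_p[:, 0]:
    state = np.kron(state, ry0(x))     # statevector with 2^q amplitudes
```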
In other words, the node features are memorized in the quantum system by the rotation angles of the quantum states. \paragraph{\textbf{Quantum Graph Convolution.}} As defined in Eq.~\eqref{equ:GCN}, the representation of node $v_i$ is computed by incorporating the self-loop information and aggregating the neighborhood embeddings. In the quantum counterpart, we use the quantum gates $\mathsf{U1}$ and $\mathsf{CU1}$ to model the self-loop and node-pairwise message passing, respectively: \begin{equation} \label{eq: quangcn} \ket{\phi} = \bigcap_{j=1 \& j\neq i}^{q}\mathsf{CU1}(\hat{a}_{ij}, [j, i]) \cdot \mathsf{U1}(\hat{a}_{ii}, [i])\cdot \ket{\phi}. \end{equation} $\mathsf{CU1}(\hat{a}_{i,j}, [j, i])$ is a two-qubit quantum gate, where the ordered pair $[j, i]$ means that the $j$-th qubit is the control and the $i$-th one is the target. The unitary operation acting on qubit $i$ is parameterized by the edge weight $\hat{a}_{ij}$, i.e., the $(i,j)$-th element of matrix $A_p$. The symbol $\bigcap$ denotes sequential gate operations. $\mathsf{U1}(\hat{a}_{ii}, [i])$ is a single-qubit quantum gate acting on qubit $i$, parameterized by the self-loop weight $\hat{a}_{ii}$. By applying Eq.~\eqref{eq: quangcn} to all the qubits, we model the quantum message passing between node pairs. A trainable quantum layer then follows, as shown in Figure~\ref{fig:system}. \paragraph{\textbf{Measurement.}} After $L$ layers of quantum graph convolutions, we measure the expectation values with the $\mathsf{Pauli}$-$\mathsf{Z}$ gate and obtain a classical float value from each qubit. The measurements are concatenated to predict the graph labels as described in Section~\ref{sec: GCN}. \subsection{Noise Mitigation Techniques} In real quantum systems, noise often appears due to undesired gate operations. To mitigate noise in our QuanGCN, we propose to apply the following two techniques. 
\paragraph{\textbf{Pooling sparse constraint.}} The operation error generally increases with the number of quantum gates involved. One intuitive way to relieve noise is to sparsify the adjacency matrix of the pooled graph, so that most of the edge weights are enforced to be zero. In this way, the corresponding quantum gate $\mathsf{CU1}$ or $\mathsf{U1}$ can be treated as an identity operation, which rotates the target qubit by an angle of zero. Specifically, we adopt an entropy constraint to learn the sparse adjacency matrix: $\mathcal{L}_{\mathrm{sparse}}= -\sum_{i}\sum_{j}\hat{a}_{ij}\log \hat{a}_{ij}$, which is co-optimized with the graph classification loss. \paragraph{\textbf{Skip connection.}} We mitigate the quantum noise from the architectural perspective by introducing a skip connection. As shown in Figure~\ref{fig:system}, we concatenate the quantum measurements with the input classical features, which are not sensitive to the quantum noise. \section{Experimental Setting and Results} \paragraph{\textbf{Dataset.}} We adopt four graph datasets: the two bioinformatic datasets ENZYMES and PROTEINS~\cite{borgwardt2005protein,feragen2013scalable}, and the two datasets MUTAG and IMDB-BINARY~\cite{dobson2003distinguishing}. They contain 600, 1113, 188, and 1000 graphs, respectively. \paragraph{\textbf{Implementations.}} We adopt the classical baselines of MLP, simplified graph convolutions (SGC)~\cite{wu2019simplifying}, GCN, and the graph pooling method Diffpool~\cite{ying2018hierarchical}. SGC uses an MLP to learn the node representations based on the preprocessed node features. For QNN algorithms, besides QuanGCN, we include quantum MLP (QuanMLP) and quantum SGC (QuanSGC), where the MLP layers are replaced by the quantum layer $\mathsf{U3CU3}$. The numbers of qubits and graph convolutional layers are set to $4$ and $2$, respectively. 
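For concreteness, the layer of Eq.~\eqref{eq: quangcn} can be emulated on a classical statevector: $\mathsf{U1}$ and $\mathsf{CU1}$ are diagonal phase gates, so the whole layer multiplies each computational-basis amplitude by a single accumulated phase. A minimal sketch with random illustrative weights (the trainable $\mathsf{U3CU3}$ layer and measurement shot noise are omitted):

```python
import numpy as np

q = 4
rng = np.random.default_rng(1)
A_p = rng.random((q, q))               # illustrative pooled edge weights a_ij

# start from an Ry-encoded product state (angles stand in for node features)
theta = rng.normal(size=q)
state = np.array([1.0])
for t in theta:
    state = np.kron(state, np.array([np.cos(t/2.0), np.sin(t/2.0)]))

# U1(a_ii) on qubit i and CU1(a_ij) on every ordered pair [j, i].  All of
# these gates are diagonal, hence they commute, and the layer reduces to one
# phase per basis state b: sum_i a_ii b_i + sum_{i != j} a_ij b_i b_j.
phases = np.zeros(2**q)
for b in range(2**q):
    bits = [(b >> (q - 1 - i)) & 1 for i in range(q)]  # qubit 0 = leftmost
    phases[b] = sum(A_p[i, i]*bits[i] for i in range(q)) \
        + sum(A_p[i, j]*bits[i]*bits[j]
              for i in range(q) for j in range(q) if i != j)
state = state * np.exp(1j*phases)

def z_expectation(psi, i):
    # Pauli-Z expectation of qubit i from the statevector amplitudes
    signs = np.array([1 - 2*((b >> (q - 1 - i)) & 1) for b in range(2**q)])
    return float(np.sum(signs * np.abs(psi)**2))

z = [z_expectation(state, i) for i in range(q)]
```

Since the layer is diagonal, it leaves the $\mathsf{Pauli}$-$\mathsf{Z}$ expectations unchanged; the encoded phases only become observable after a subsequent non-diagonal layer, consistent with QuanGCN placing a trainable quantum layer before measurement.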
\paragraph{\textbf{Comparison of classical and quantum neural networks.}} We compare the classification accuracies in Table~\ref{tab:classical_quantum}, where the mean results and standard deviations are reported over $10$ random runs. It can be observed that our QuanGCN obtains comparable or even superior results to the classical algorithms, while generally outperforming QuanMLP and QuanSGC on the benchmark datasets. These results validate the effectiveness of quantum graph convolution in dealing with graph data. By modeling the time-expensive message passing on an efficient quantum device, QuanGCN provides potential speed-up over the classical algorithms. Similar to other QNNs, QuanGCN is accompanied by higher variance due to the indeterminate quantum operations. \begin{table}[h] \centering \begin{tabular}{c|c|cccc} \toprule Frameworks & Methods & ENZYMES & MUTAG & IMDB-BINARY & PROTEINS \\ \hline \multirow{4}*{Classical} & MLP &32.17±1.77 &78.95±0.00 & 70.10±1.10 & 70.37±0.76 \\ & SGC &49.00±4.66 &84.21±0.00 &69.70±3.13 & 72.66±1.72 \\ & GCN &52.33±3.44 &82.63±2.54 & 70.40±1.90 & 71.65±1.26 \\ & DiffPool &50.00±3.60 & 78.95±6.08 & 71.90±1.91 & 69.63±1.64 \\ \hline \multirow{3}*{Quantum} & QuanMLP-w/o noise &31.67±1.57 &78.95±0.00 & 72.10±0.99 & 67.68±2.34\\ & QuanSGC-w/o noise &37.83±4.52 &80.53±4.33 & 69.90±2.92 & 67.86±0.94 \\ & QuanGCN-w/o noise&50.00±6.57 &83.16±5.44 & 71.10±2.77 & 70.00±2.77 \\ \bottomrule \end{tabular} \caption{Graph classification accuracies in percent; QNNs are trained and evaluated on GPUs without inserting quantum noise.} \label{tab:classical_quantum} \end{table} \vspace{-30pt} \paragraph{\textbf{Testing in quantum simulator and real machine.}} In Table~\ref{tab:inference}, we deploy the above well-trained QNNs on the Qiskit simulator and the quantum computer IBMQ-Quito to evaluate their inference performance. 
Since inference on a real quantum computer incurs substantial queuing time, we test the QNNs only once on the real device. Compared with the inference performance on GPUs (i.e., Table~\ref{tab:classical_quantum}), the QNNs generally have lower accuracies due to the high error rates inherent in quantum devices. Notably, QuanGCN instead obtains better performance. One possible reason is the graph pooling, which greatly reduces crossing-qubit gate usage and the resulting noise. The quantum graph convolution over the pooled graph provides a more informative encoding of the underlying graph structure. \begin{table}[h] \centering \begin{tabular}{c|c|ccccc} \toprule Frameworks & Methods & ENZYMES & MUTAG & IMDB-BINARY & PROTEINS \\ \hline \multirow{3}*{Simulator} & QuanMLP-noise &20.50±5.21 &62.11±9.54 & 70.90±5.02 & 48.07±9.95\\ & QuanSGC-noise &22.83±4.91 &61.05±13.18 & 73.50±4.93 & 50.55±10.86 \\ & QuanGCN-noise &78.67±5.76 &88.95±5.23 & 76.30±3.74 & 74.77±2.49\\ \hline \multirow{3}*{Real QC} & QuanMLP-noise &18.33 &63.16 & 54.00 & 40.37\\ & QuanSGC-noise &21.67 &63.16 & 65.00 & 59.63\\ & QuanGCN-noise &83.33 &84.21 & 78.00 & 78.90 \\ \bottomrule \end{tabular} \caption{Inference results of graph classification accuracies in the environments of the Qiskit simulator and the real quantum computer.} \label{tab:inference} \end{table} \paragraph{\textbf{Noise mitigation results.}} To address the inherent noise, we apply the skip connection to all the QNNs, and use the sparse constraint to regularize the graph pooling in QuanGCN. We compare them with one popular noise cancellation baseline~\cite{wang2021roqnn}, which randomly inserts quantum gates during model training to improve robustness. The comparison results in Table~\ref{tab:denoise} show that the skip connection technique is consistently effective in mitigating noise in all models. 
In QuanGCN, the combination of skip connection and sparse constraint obtains the best noise mitigation performance. \begin{table}[h] \centering \begin{tabular}{c|c|ccccc} \toprule Frameworks & Methods & ENZYMES & MUTAG & IMDB-BINARY & PROTEINS \\ \hline \multirow{2}*{QuanMLP-noise} & Random injection &22.33±6.15 &60.53±14.09 & 56.70±5.21 & 50.37±6.26\\ & Skip connection &27.83±2.09 &63.68±11.22 & 72.20±1.03 & 64.22±6.27\\ \hline \multirow{2}*{QuanSGC-noise} & Random injection &20.50±6.19 &64.74±6.59 & 61.80±8.46 & 51.19±6.78\\ & Skip connection &29.00±5.45 &67.37±14.42 & 71.30±1.77 & 68.07±4.98 \\ \hline \multirow{4}*{QuanGCN-noise} & Random injection &35.17±18.53 &59.47±30.09 & 57.50±14.18 & 63.12±8.02 \\ & Skip connection &49.33±9.27 &86.84±3.72 & \bf{71.90±2.42} & 72.02±2.42 \\ & Sparse &41.67±16.52 &63.16±20.61 & 64.30±9.9 & 60.46±7.90\\ & Skip + Sparse &\bf{49.83±8.22} &\bf{86.84±6.68} & 70.40±2.01 & \bf{73.30±1.91}\\ \bottomrule \end{tabular} \caption{Quantum noise mitigation results with skip connection and sparse constraint.} \label{tab:denoise} \end{table} \section{Conclusion} In this work, we propose and implement QuanGCN to address graph property classification tasks in real-world applications. To mitigate the noisy impact of real quantum machines, we propose the skip connection and sparse constraint techniques to improve the model's robustness. Extensive experiments on benchmark graph datasets demonstrate the potential advantage and applicability of quantum neural networks for graph analysis, which is a new problem introduced to the quantum domain. \clearpage \bibliographystyle{unsrt}
\section{Introduction} \label{sec:intro} \noindent This paper investigates the complexity and size constraints of languages over the unary alphabet -- this is assumed throughout the paper -- when these languages are given by a nondeterministic finite automaton, with special emphasis on the case where this nfa is unambiguous; unambiguous nondeterministic finite automata (ufas) have many good algorithmic properties, even under regular operations on the languages, as long as no concatenation is involved. Here a bound of type $2^{\Theta(n)}$ is called exponential and a bound of type $2^{n^{\Theta(1)}}$ is called exponential-type. If $p$ is a non-constant polynomial in logarithms and iterated logarithms, then the class of bounds of the form $n^{\Theta(p(\log n, \log \log n, \log \log \log n, \ldots))}$ for any such $p$ is called quasipolynomial. In an expression of the form $2^{\alpha(n)}$, the function $\alpha(n)$ is called the exponent of the function. Note that, under the Exponential Time Hypothesis, all NP-complete problems have an exponential-type complexity and solving $k$SAT for $k \geq 3$ has an exponential complexity. For several important cases, the exponent of an exponential-type function is determined up to a logarithmic or sublogarithmic expression. Unambiguous finite automata have found much attention in recent research, with a quasipolynomial lower bound for the size blow-up at complementation by Raskin at ICALP 2018 and a better lower bound for complementation of the form $n^{\Omega(\log n)}$ for the case of the binary alphabet (results on larger alphabets are only mentioned for comparison purposes) by G\"o\"os, Kiefer and Yuan \cite{GKY22}.
Fernau and Krebs \cite{FK17} proved that, under the assumption of the Exponential Time Hypothesis (ETH), one needs at least $2^{\Omega(n^{1/3})}$ time to check whether an nfa with $n$ states over the unary alphabet accepts all strings; they coded three-colourability of graphs for this problem. Tan \cite{Ta22} provides an alternative proof using a coding of three-occur 3SAT; the ETH implies that one cannot solve this problem in $2^{o(m)}$ time where $m$ is the number of variables. This is a special case of comparing languages, and Fernau and Krebs \cite{FK17} pointed to the $n^{O(n^{1/2})}$ algorithm which converts unary nfas into dfas in order to compare two unary nfas of up to $n$ states. The present work provides a faster algorithm for this task whose exponent of the computation time matches the exponent of the lower bound of Fernau and Krebs up to a factor of $O((\log n)^{1/3})$. Recall that for an unambiguous nondeterministic finite automaton (ufa), every word outside the language has zero accepting runs and every word inside the language has exactly one accepting run. Prior research had established that the intersection of two $n$-state ufas can be represented by an $O(n^2)$-state ufa and that, over the binary alphabet, the Kleene star of the language of an $n$-state ufa can be recognised by an $O(n^2)$-state ufa \cite{Co15,HK11,JO18,Pi00}. But the size increase of the other regular operations (complement, union, concatenation) remained open; it was, however, known that disjoint union has linear complexity. The present work shows that complementation has at most the quasipolynomial size blow-up $n^{\log n+O(1)}$ and thus the same holds for union; furthermore, concatenation is much worse and requires an exponential-type size increase where the exponent is at least $\Omega(n^{1/6})$. Raskin \cite{Ra18} showed a lower bound of $n^{(\log \log \log n)^{\Omega(1)}}$ for unary complementation and thus the quasipolynomial upper bound cannot be improved to polynomial.
Furthermore, it is not efficient to compare ufas with respect to inclusion of the generated languages by constructing the complement of the second automaton and then taking the intersection and checking for emptiness. This is because a further result shows that one can directly compare $n$-state ufas with respect to equality or inclusion in polynomial time. If the ufas are in Chrobak Normal Form, the comparison can even be carried out in LOGSPACE -- the transformation into this normal form is a polynomial time algorithm which does not increase the number of states for ufas. Finally, algorithmic properties with respect to the $\omega$-word $L(0)L(1)L(2)\ldots$ of a language $L$ are investigated, where $L(k)$ is $1$ if the word of length $k$ is in the language $L$ and $L(k)$ is $0$ if this word is not in the language $L$. If $L$ is given by a dfa, membership tests for a fixed regular $\omega$-language can be decided in polynomial time; for ufas and nfas there is a known trivial upper bound obtained by converting these automata into dfas; this upper bound has, up to logarithmic factors, the exponent $n^{1/2}$ for nfas and $n^{1/3}$ for ufas. The result of the present work is that these upper bounds can be matched by conditional lower bounds up to a logarithmic factor in the exponent, assuming the Exponential Time Hypothesis. The following table summarises the results for ufas and compares them to previously known results. Here $c(n) = n^{\log n + O(1)}$ and results with a theorem number are proven in the present work. Lower bounds on size imply lower bounds on computation time; for concatenation a better lower bound is found (assuming ETH). In the first part of the table the bounds on the size are given and in the second part the bounds on the computation time for finding ufas with the desired property or determining the truth of a formula. The third part of the table summarises the results of the other parts of the paper.
Here ufa-word and nfa-word are the $\omega$-words defined when viewing the characteristic function of the recognised language as an $\omega$-word. \begin{center} \begin{tabular}{|c|c|c|c|c|} \hline Operation & Lower Bound & Source & Upper Bound & Source \\ \hline Intersection & $n^2-n$ & Holzer and & $n^2$ & Holzer and \\ & & Kutrib \cite{HK02} & & Kutrib \cite{HK02} \\ Complement & $n^{(\log \log \log n)^{\Omega(1)}}$ & Raskin \cite{Ra18} & $c(n)$ & Theorem~\ref{th:complement} \\ Disjoint union & $2n-4$ & -- & $2n$ & Jir\'askov\'a and \\ & & & & Okhotin \cite{JO18} \\ Union & -- & -- & $n + n \cdot c(n)$ & Corollary~\ref{co:regularoperationbounds} \\ Symmetric difference & -- & -- & $2n \cdot c(n)$ & Theorem~\ref{th:complement} \\ Kleene Star & $(n-1)^2+1$ & \v Cevorov\'a \cite{Ce13} & $(n-1)^2+1$ & \v Cevorov\'a \cite{Ce13} \\ Concatenation & $2^{\Omega(n^{1/6})}$ & Theorem~\ref{th:concat} & $2^{O((n \log^2 n)^{1/3})}$ & Okhotin \cite{Ok12}\\ Finite Formula & ETH $\Rightarrow$ $2^{\Omega(n^{1/3})}$ & Theorem~\ref{thm:ethformula} & $2^{O((n \log^2 n)^{1/3})}$ & Okhotin \cite{Ok12}\\ \hline Space for Universality & -- & -- & $O(\log n)$ & Theorem~\ref{th:logspace} \\ Time for Universality & -- & -- & $O(n \log^2 n)$ & Theorem~\ref{th:logspace} \\ (both for Chrobak NF) & & & & \\ Time for Comparison & -- & -- & Polynomial Time & Corollary~\ref{co:logspace} \\ Time for Concatenation & ETH $\Rightarrow$ $2^{\Omega(n^{1/4})}$ & Theorem~\ref{th:compbound} & $2^{O((n \log^2 n)^{1/3})}$ & Okhotin \cite{Ok12}\\ Time for Complement & $n^{(\log \log \log n)^{\Omega(1)}}$ & Raskin \cite{Ra18} & $n^{O(\log n)}$ & Theorem~\ref{th:complement} \\ Time for Formulas & ETH $\Rightarrow$ $2^{\Omega(n^{1/3})}$ & Theorem~\ref{thm:ethformula} & $2^{O((n \log^2 n)^{1/3})}$ & Okhotin \cite{Ok12} \\ Time for Formulas & quasipolynomial & Raskin \cite{Ra18} & quasipolynomial & Remark~\ref{re:quasipolynomial} \\ Without Concatenation &&&& \\ \hline Time for nfa Comparison &
$2^{\Omega(n^{1/3})}$ & Krebs and & $2^{O((n \log n)^{1/3})}$ & Theorem~\ref{thm:algo} \\ & & Fernau \cite{FK17} & & \\ Nfa-word in $\omega$-language & $2^{\Omega((n \log\log n/\log n)^{1/2})}$ & Theorem~\ref{th:nfaomega} & $2^{O((n \log n)^{1/2})}$ & dfa-conversion \\ Ufa-word in $\omega$-language & $2^{\Omega((n \log n)^{1/3})}$ & Remark~\ref{re:ufaomega} & $2^{O((n \log^2 n)^{1/3})}$ & dfa-conversion \\ \hline \end{tabular} \end{center} \section{The Nondeterministic Finite Automata Comparison Algorithm} \noindent The upper bound of the next result matches the conditional lower bound $2^{\Omega(n^{1/3})}$ on the time for the universality problem of unary nfas given by Fernau and Krebs \cite{FK17} up to a factor of $O((\log n)^{1/3})$ when comparing the exponents of the corresponding run time bounds. Here the universality problem is the problem of checking whether the finite automaton generates all words over the given alphabet (which is unary in this paper). \begin{theorem} \label{thm:algo} Given two nondeterministic finite automata over the unary alphabet and letting $n$ denote the maximum number of their states, one can decide whether the language of the first nfa is a subset of the language of the second in deterministic time $O(c^{(n \log n)^{1/3}})$ for a suitable constant $c > 1$. This time bound also applies directly to the comparison algorithm for equality and to the algorithm for checking universality (all strings in the language). \end{theorem} \begin{proof} Let $n$ denote the maximum number of states of the two automata. One can transform the nondeterministic finite automata into Chrobak Normal Form \cite{Ch86}, where each automaton consists of a ``stem'' of up to $n^2$ states followed by parallel cycles which, together, use up to $n$ states.
Note that one can assume that the two stems have the same length, as each stem can be made one state longer by moving the entry point into each cycle by one state and adding one state at the end of the stem which is accepting iff one of the nodes at the prior entry points was accepting; this is done repeatedly, at most $O(n^2)$ times, until the stems have the same length. The comparison of the behaviour on the stems of equal length before entering the cycles can just be done by comparing the acceptance mode of the states at the same distance from the start. When comparing two nfas in such a normal form, the cycle part is therefore the difficult part. In the following, one ignores the stems and considers the special case where the nfa just consists of disjoint cycles, each of them having one start state, and the only nondeterminism is the choice of the start state, that is, of the cycle to be used. So assume that such cycles are given. Now let $P$ be the set of all primes $p$ below $n$ such that either (a) $p < (n \log n)^{1/3}$ or (b) condition (a) fails and there is a further prime $q \geq (n \log n)^{1/3}$ for which some cycle in one of the two nfas has a length divisible by $p \cdot q$. By the prime number theorem, the primes entering $P$ by condition (a) number at most $O((n/\log^2 n)^{1/3})$ and those entering by condition (b) obey the same bound, as there can be at most $2(n/\log^2 n)^{1/3}$ cycles of length at least $(n \log n)^{2/3}$ in the two automata --- this bound follows from the fact that the product of two primes not obeying condition (a) is at least $(n \log n)^{2/3}$, that all cycles in each automaton are disjoint, and that the number of cycles of such a length in each automaton can therefore not be larger than $n / (n \log n)^{2/3}$. Let $Q$ be the set of all primes up to $n$ which are not in $P$. Let $r$ be the product of all primes $p \in P$ raised to the largest power $k$ such that $p^k \leq n$.
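As a concrete illustration of this step, the following Python sketch (hypothetical helper names; a simple trial-division primality test stands in for anything faster, and condition (b) is read as allowing any prime $q$ at or above the threshold) computes the prime set $P$ from the cycle lengths of the two automata and the resulting modulus $r$:

```python
def is_prime(m):
    # Trial division; sufficient for the small bound used here.
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

def prime_set_P(n, cycle_lengths):
    """Primes p <= n entering P by condition (a) or (b) of the proof;
    n.bit_length()-1 serves as an integer stand-in for log n."""
    threshold = (n * max(n.bit_length() - 1, 1)) ** (1 / 3)  # ~ (n log n)^(1/3)
    primes = [p for p in range(2, n + 1) if is_prime(p)]
    P = {p for p in primes if p < threshold}                 # condition (a)
    for p in primes:
        if p in P:
            continue                                         # (a) failed for p
        for q in primes:
            # condition (b): a further prime q >= threshold with
            # some cycle length divisible by p*q
            if q >= threshold and any(L % (p * q) == 0 for L in cycle_lengths):
                P.add(p)
                break
    return P

def modulus_r(n, P):
    """Product over p in P of the largest prime power p^k <= n."""
    r = 1
    for p in P:
        pk = p
        while pk * p <= n:
            pk *= p
        r *= pk
    return r
```

For instance, with $n = 20$ and cycle lengths $6, 10, 15$, only the primes $2$ and $3$ fall below the threshold, no large prime pair divides a cycle length, and $r = 2^4 \cdot 3^2 = 144$.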
The number $r$ is in $O(n^{c' \cdot (n / \log^2 n)^{1/3}})$ for some $c'$ and thus in $2^{c' \cdot (n \log n)^{1/3}}$ when replacing $n$ by $2^{\log n}$ and applying a simple mathematical rearrangement. This expression is equal to $c^{(n \log n)^{1/3}}$ for some $c > 1$. Now one constructs for the two given nfas the comparison normal forms of size $r \cdot n$ as follows: For each $q \in Q$ one constructs a cycle of length $r \cdot q^2$ such that the state $s$ in this cycle is accepting iff there is a cycle of length $p$ in the original automaton where $p$ divides $r \cdot q^2$ and the state reached when going $s$ steps (or just $s$ mod $p$ steps) in that cycle is accepting. The comparison normal form can be constructed in time $r \cdot Poly(n)$ by constructing each cycle separately in both automata and comparing it with all cycles of length $p$ where $p$ divides $r \cdot q^2$. Now the first automaton recognises a subset of the set recognised by the second automaton iff for all $s < r$ one of the following two options holds: (A) There is a $q \in Q$ such that in the second normalised automaton, all states of the form $s + t \cdot r$ in the cycle of length $r \cdot q^2$ are accepting; (B) For every $q \in Q$ and for all states of the form $s + t \cdot r$, if this state in the cycle of length $r \cdot q^2$ is accepting in the normalised form of the first automaton then it is also accepting in the corresponding cycle of the normalised form of the second automaton. This condition can be checked in time $r \cdot Poly(n)$: There are $r$ possible values of $s$ and for each such $s$, one has to check only $O(n^3)$ states, namely for each $q \in Q$ all states of the form $s+t \cdot r$ where $t \in \{0,1,\ldots,q^2-1\}$; note that $q^2 \leq n^2$. For correctness, one first shows that (A) and (B) are sufficient conditions. Let $s$ be given.
If (A) is satisfied, then the second automaton accepts all strings of length $s+t \cdot r$ whatever $t$ is, as for the given $q$, all these strings are accepted by the cycle of length $r \cdot q^2$ when looking at the states reached from the origin in $s+t \cdot r$ steps. If (B) is satisfied and the first nfa accepts after $s+t \cdot r$ steps, then there is a $q \in Q$ such that the cycle of length $r \cdot q^2$ has an accepting state at position $s+t \cdot r$ (modulo $r \cdot q^2$). From the condition it follows that the second automaton has an accepting state at the same position in the corresponding cycle and therefore also accepts the corresponding string. One still needs to see the converse. So assume that the following condition (C) holds: For every $q \in Q$ there exists a $t_q$ such that the state at position $s + t_q \cdot r$ is rejecting in the cycle of length $r \cdot q^2$ in the second automaton and furthermore, for one $q$, the state at position $s + t_q \cdot r$ is accepting in the cycle of length $r \cdot q^2$ in the first automaton. So (C) is true iff both (A) and (B) fail. Now there is a step number $s'$ such that, after going $s'$ steps in each cycle, the cycle of length $r \cdot q^2$ is at position $s+t_q \cdot r$ for each $q \in Q$. It follows that the first automaton accepts a string of length $s'$ while the second automaton rejects it. \end{proof} \noindent Holzer and Kutrib \cite[Theorem 15]{HK11} write that for every $n$-state nfa there is an equivalent $O((n \log n)^{1/2})$-state afa (alternating finite automaton) recognising the same language; in the case that this translation is efficient, the conditional lower bound of Fernau and Krebs \cite{FK17} of $2^{\Omega(n^{1/3})}$ for deciding universality or equivalence of $n$-state nfas over the unary alphabet would translate into a $2^{\Omega((n / \log n)^{2/3})}$ lower bound for the same task for afas over the unary alphabet.
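For intuition, the fast algorithm above can be contrasted with a brute-force baseline: the language of a cycle-only unary nfa is periodic with period dividing the least common multiple of its cycle lengths, so two such automata can be compared on all lengths below the joint lcm. This is a sketch, not the algorithm of the theorem (the lcm can be exponential in $n$); the encoding of an automaton as a list of cycles, each a list of booleans marking accepting positions, is a hypothetical choice for illustration.

```python
from math import lcm

def accepts(cycles, length):
    """A cycle-only unary nfa accepts a word of the given length iff
    some cycle has an accepting state at position length mod cycle length."""
    return any(cyc[length % len(cyc)] for cyc in cycles)

def subset(cycles1, cycles2):
    """Brute force: both accepted sets are periodic with period dividing
    the lcm of all cycle lengths, so checking lengths 0..lcm-1 suffices."""
    period = lcm(*[len(c) for c in cycles1 + cycles2])
    return all(not accepts(cycles1, m) or accepts(cycles2, m)
               for m in range(period))
```

For example, an automaton with one cycle `[True, False]` (accepting all even lengths) is a subset of one with cycles `[True, False, False, False]` and `[False, False, True, False]` (lengths $\equiv 0$ and $\equiv 2$ modulo $4$), and vice versa.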
\section{Unambiguous Finite Automata and their Algorithmic Properties} \noindent An unambiguous automaton (ufa) satisfies that for every input word, there is either exactly one accepting run or none. On one hand, these are more complicated to handle than nondeterministic finite automata, so that the union of $n$ $n$-state automata cannot be done with $n^2$ states; on the other hand, they still, at least for unary alphabets, have good algorithmic properties with respect to regular operations (union, intersection, complementation, formation of the Kleene star) and comparison (subset and equality). It is well known that, for the unary alphabet, the intersection of two ufas can be carried out by the usual product construction for nfas, as that construction preserves the property of being a ufa in the case that both input automata are ufas; thus the size increase is $O(n^2)$ for forming the intersection of two $n$-state ufas. G\"o\"os, Kiefer and Yuan \cite{GKY22} showed that there is a family of languages $L_n$ over the binary alphabet recognised by an $n$-state ufa such that the size of the smallest nfas recognising the complement grows at least with $n^{\Omega((\log n)/\mathrm{polylog}(\log n))}$. This lower bound also trivially applies to ufas. For complementation of ufas over the unary alphabet, a smaller bound might be possible: So far, Raskin \cite{Ra18} showed that the complementation of a unary ufa increases the number of states from $n$ to some superpolynomial function in the worst case; this result is based on a lower bound of the form $n^{(\log \log \log n)^q}$ where $q$ is some positive rational constant. Thus it is impossible to carry out the complementation with a polynomial size increase; however, the next result shows that one can do this with a quasipolynomial size increase. This confirms a weak version of a conjecture of Colcombet \cite{Co15} that complementation of ufas can be done with a polynomial blow-up.
Though Raskin's result refutes the conjecture as stated, when replacing ``polynomial'' by ``quasipolynomial'' the conjecture does hold in the unary case. For larger alphabets, the known lower bounds are below the upper bounds given here, but the construction does not seem to generalise to larger alphabets, as it utilises several facts which only hold for unary ufas. The theorem is first stated for the case of a periodic ufa, as there the proof is clearer; afterwards it is indicated how to obtain it for the general case. \begin{theorem} \label{th:complement} A ufa with up to $n$ states has a complement with $n^{\log(n)+O(1)}$ states which can be computed in quasipolynomial time from the original automaton. \end{theorem} \noindent The proof will use the following straightforward lemma. \begin{lemma}\label{lem-trans} Consider a ufa in Chrobak Normal Form (consisting only of cycles, without the stem part). (a) Suppose a cycle with states $s_0, s_1, \ldots, s_{p-1}$ (with the transition from $s_i$ to $s_{i+1}$ on the unary input, where $i+1$ is taken mod $p$) is converted to a cycle with states $s'_0, s'_1, \ldots, s'_{k \cdot p -1}$ (with the transition from $s'_i$ to $s'_{i+1}$ on the unary input, where $i+1$ is taken mod $p\cdot k$), with $s'_{r+p\cdot c}$, for $c<k$, being an accepting state iff $s_r$ is an accepting state in the original cycle. Then the new automaton is still a ufa accepting the same set of strings. (b) Consider a cycle with states $s_0, s_1, \ldots, s_{p-1}$ (with the transition from $s_i$ to $s_{i+1}$ on the unary input, where $i+1$ is taken mod $p$). Suppose one converts the cycle into $q$ cycles, with the $j$-th cycle $(j<q)$ having states $s_i^j$, $i<p$, where, if $s_i$ is not accepting, then none of the $s_i^j$ is accepting, and if $s_i$ is accepting then exactly one of the $s_i^j$, $j< q$, is accepting. Then the new automaton is still a ufa and accepts the same set of strings as the original automaton.
(c) Converse of (b) above: that is, two cycles with the same number of states can be combined into one cycle. (d) If a cycle with no accepting states is removed, this does not change the language accepted by the ufa. \end{lemma} \begin{proof} Jiang, McDowell and Ravikumar \cite{JMR91} -- see also \cite{Ok12} -- proved that one can transform, in polynomial time, a ufa over the unary alphabet into a ufa in Chrobak Normal Form without a size increase, so the new ufa still has $n$ states. Note that for the complement of a ufa in Chrobak Normal Form, one swaps acceptance / rejection for the states on the stem and then, for the parallel cycles, they form a periodic ufa concatenated to a single-string language. Thus complementing the periodic ufa is the main work and it is assumed, without loss of generality, that the stem does not exist, so every cycle has some start state and the only nondeterminism is the choice of the start state. For each input word, there is exactly one cycle which goes into an accept or reject state while all other cycles go into a state which does not signal any decision; swapping accept and reject states then allows one to complement the ufa so obtained. For ease of notation, in a cycle with $p$ states $s_0, s_1, \ldots, s_{p-1}$ (with the transition from $s_i$ to $s_{i+1}$ on the unary input, where $i+1$ is taken mod $p$), one calls the state $s_i$ the $i$-th state of the cycle (with the $0$-th state being the starting state of the cycle). Intuitively, the original ufa will be transformed into a new ufa using Lemma~\ref{lem-trans}. The new ufa will have some blue states (denoting the accepting states for the language accepted by the original ufa) and some green states (denoting the accepting states for the complement of the language accepted by the original ufa). The blue states will only be formed by considering the accepting states of the original automaton, or due to the transformations done as in Lemma~\ref{lem-trans}.
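The cycle unrolling of Lemma~\ref{lem-trans}(a) can be checked mechanically; in this hypothetical encoding a cycle is a list of booleans over its positions, and a word of length $m$ reaches position $m$ modulo the cycle length:

```python
def unroll(cycle, k):
    """Lemma (a): replace a cycle of length p by one of length k*p,
    marking position r + p*c (for c < k) accepting iff position r was.
    Since (r + p*c) mod p == r, this is cycle[i % p] at each new index i."""
    p = len(cycle)
    return [cycle[i % p] for i in range(k * p)]

def accepts(cycle, length):
    # A single cycle accepts a unary word iff the reached position is accepting.
    return cycle[length % len(cycle)]
```

Unrolling `[True, False, True]` with $k=2$ gives `[True, False, True, True, False, True]`, and both cycles accept exactly the same word lengths, as the lemma states.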
The green states will be introduced during this construction when it is safe to do so (that is, whenever the automaton reaches such a state after processing a unary string, the string will be in the complement of the language). Thus, it will always be the case (after a modification step) that taking the blue/green states as accepting states keeps the automaton a ufa, with the blue nodes accepting exactly the original language of the ufa and the green nodes accepting a subset of the complement. The final ufa will have the green states accepting exactly the complement. It is convenient to view the progress of the construction as a tree. At any step, the leaves of the tree represent cycles of the automaton, and the current automaton is the union of the cycles at the leaves. A modification may be done during the construction by introducing children to some leaves based on Lemma~\ref{lem-trans} and converting some states into green ones. Each leaf is associated with some cycles of the ufa as mentioned above, along with parameters $(p,r,A)$, where $p$ is a number, $r<p$, and $A$ is a set of numbers $\{q_1,q_2,\ldots, q_m\}$ such that the $m$ cycles associated with the leaf have lengths $p \cdot q_i$, respectively. The following invariants will be satisfied: \begin{description} \item{(I1)} For a leaf $L$ with parameters $(p,r,A)$: only the $(r+p \cdot c)$-th states, for some $c$, in the cycles associated with leaf $L$ can be of colour blue (accepting) or green (accepting for the complement). \item{(I2)} For two distinct leaves with parameters $(p,r,A)$ and $(p',r',A')$, there can be no natural number $\ell$ with the property that $\ell \mod p =r$ and $\ell \mod p'=r'$. Thus, in short, the different leaves are ``independent'' automata: if at the end of a string a blue/green state is reached in a cycle of one leaf, then a blue/green state cannot be reached at the end of the same string in a cycle of a different leaf.
\item{(I3)} Furthermore, for every number $\ell$ there will be a leaf with parameters $(p,r,A)$ such that $\ell \mod p = r$. \end{description} \noindent At the start of the construction, the tree has only one node, with parameters $(1,0,A)$, where $A$ contains the lengths of all the cycles in the original ufa, and the cycles associated with the node are all the cycles in the original ufa. At any stage, one modifies a leaf $L$ with parameters $(p,r,A)$ in the tree using the following steps: \begin{enumerate} \item Merge any cycles of the same length at the leaf $L$ (along with updating the parameter $A$ correspondingly). This is fine by Lemma~\ref{lem-trans}(c) and preserves the invariants. \item If there are no blue or green nodes in a cycle associated with the leaf $L$, then delete that cycle (if this leaves no cycles associated with the leaf $L$, then introduce a cycle of size $p$ with no blue/green states), along with updating the parameter $A$ correspondingly. \item If leaf $L$ now has only one associated cycle, then for the parameters $(p,r,\cdot)$ associated with the leaf $L$, if the $(r+p\cdot c)$-th node of the cycle (for any $c$) is not coloured, then it is coloured green. Note that this is safe based on invariant (I2). This node will not have any children in the future. \item If there is still more than one cycle associated with the leaf $L$, then suppose $A=\{q_1,q_2,\ldots,q_m\}$. Note that, by the automaton being a ufa and each of the cycles having at least one blue state, it follows using (I1) that each pair of numbers in $A$ has gcd greater than $1$ (thus, in particular, each $q_i>1$). \begin{enumerate} \item Now convert each cycle of length $p\cdot q_i$, $i>1$, associated with the leaf to a cycle of length $p\cdot q_1 \cdot q_i/ \gcd(q_1,q_i)$ based on Lemma~\ref{lem-trans}(a), and update $A$ correspondingly. \item Make $q_1$ children of the node, with parameters $(p\cdot q_1,r+j\cdot p,A)$, with $j<q_1$.
Cycles associated with the new children are defined as follows: Each cycle $C$ of length $p\cdot q_1\cdot s$ associated with $L$ is converted into $q_1$ cycles of length $p\cdot q_1\cdot s$ where the $j$-th cycle (with $j<q_1$) has the $i$-th state blue/green iff the $i$-th state of $C$ is blue/green and $i \mod (p\cdot q_1)=r+j\cdot p$. Otherwise the state is uncoloured. This $j$-th cycle is associated with the child with parameters $(p\cdot q_1,r+j\cdot p,A)$. The parameter $A$ is appropriately computed for each child based on the cycles associated with it. \end{enumerate} \end{enumerate} \noindent Based on Lemma~\ref{lem-trans}(b), the above transformation preserves the properties/invariants. Note that all the above steps do not change the invariants and keep the new automaton an nfa accepting (using blue states) the same set of strings as the earlier automaton. If a green state is introduced, then it is consistent due to invariant (I2). Now, note that after steps 4(a) and 4(b) are executed, the values in $A$ (before step 4(a)) are each divided by at least $2$ (by $\gcd(q_1,q_i)$ for the number corresponding to the old $q_i$). Thus, the number of levels in the tree can be at most $\log n$, where $n$ is the length of the longest cycle at the beginning. It follows that the parameter $p$ associated with any leaf can be at most $n \cdot \frac{n}{2} \cdot \frac{n}{4} \cdots$, which is bounded by $n^{0.5\log n+O(1)}$. The computation time at each node is polynomial in the size of the cycles and $n$. Thus, the whole computation takes time $n^{O(\log n)}$. There are at most $n^{0.5 \log n + O(1)}$ cycles of length up to $n^{0.5 \log n + O(1)}$, as for each length the cycles can be combined. Thus the overall number of states is at most $n^{\log n + O(1)}$; this upper bound is just the square of the length of the longest cycle.
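The cycle splitting used in step 4(b) rests on Lemma~\ref{lem-trans}(b): each accepting position is assigned to exactly one copy. A minimal sketch (the assignment by residue modulo $q$ is an arbitrary but valid choice for illustration; the construction above instead assigns by residue modulo $p \cdot q_1$):

```python
def split(cycle, q):
    """Lemma (b): replace one cycle of length p by q cycles of length p,
    each accepting position of the original coloured in exactly one copy
    (here: copy j gets position i iff i % q == j, an arbitrary choice)."""
    p = len(cycle)
    return [[cycle[i] and (i % q == j) for i in range(p)] for j in range(q)]
```

One can verify that, for every word length, the number of copies accepting it is exactly one if the original cycle accepts and zero otherwise, so unambiguity and the language are both preserved.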
As each step of the algorithm preserves the invariants and the changes to the cycles are done only based on Lemma~\ref{lem-trans}, it follows that the final automaton is a ufa with respect to either the blue or the green states being accepting. The blue accepting states preserve the language accepted by the original automaton. Green accepting states are introduced only in step 3, in which case the green states are clearly consistent with accepting strings of the complement of the language. As the eventual leaves can have only one cycle, it follows that each leaf with parameters $(p,r,\cdot)$ decides for all strings of length equivalent to $r \mod p$. Furthermore, as the root covers all lengths and each splitting into children covers the lengths covered by the parent, all lengths are covered by some leaf. It follows that the final ufa decides the language. \end{proof} \iffalse \noindent The proof of the quasipolynomial decidability above is not included, as the following result is better. \begin{theorem} One can decide the inclusion problem for two $n$-state ufas in polynomial time. \end{theorem} \begin{proof} A union $L_1 \cup L_2 \cup \ldots \cup L_k$ is a subset of a further language $H$ iff every $L_\ell$ is a subset of $H$. Thus, for the subset problem, one assumes that both ufas are in Chrobak Normal Form and for the first language $L$ one splits this into the finite language of all words which are so short that one has not reached the cycle stage in both automata and the further languages are all of the form $\{v\} \cdot \{w\}^*$ where $v$ is the way to an accepting state in some cycle and $w$ the shortest way from that state to itself; the lengths of $v$ and $w$ are both bounded by $n$. So there is one language of up to $n$ words of length up to $n$ and, furthermore, up to $n$ languages of the form $\{v\} \cdot \{w\}^*$ with $|v|,|w| \leq n$. The disjoint union of these languages is $L$.
Membership of single strings can, in the case of the unary alphabet, be checked in time polynomial in the size $n$ of the automaton and the logarithm of the length of the string (the binary length is given to the algorithm). This is because one can first subtract the length of the stem from the number of steps to be run, so that one starts in each cycle at the start state, and then, for each cycle of length $\ell$, take $($length of the word minus length of the stem$)$ modulo $\ell$ to get the position in the cycle for which one just tests whether it is accepting. There are at most $n$ cycles -- actually at most $O(\sqrt{n})$ under the assumption that any two cycles have different lengths. Furthermore, each cycle has length at most $n$. Similarly, for testing whether $\{v\} \cdot \{w\}^*$ is a subset of $H$, one constructs a new ufa $H'$ as follows. First suppose, without loss of generality, that $H$ has no stem, as one can subtract the stem length from $v$. For each cycle $C$ in $H$ with length $r$ and states $s_0,s_1,\ldots,s_{r-1}$ ($s_0$ being the starting state, and transitions on unary input being from $s_i$ to $s_{i+1}$, where $i+1$ is taken mod $r$) form a cycle $C'$ in $H'$ with states $s'_0,s'_1,\ldots,s'_{r-1}$ ($s'_0$ being the starting state, and transitions on unary input being from $s'_i$ to $s'_{i+1}$, where $i+1$ is taken mod $r$) where $s'_i$ is an accepting state iff $s_{|v|+i\cdot |w| \mod r}$ was an accepting state in $C$. This new ufa $H'$ also has at most $n$ states, the new length of each cycle is still at most the old one, and the cycles do not increase in number. Thus one just has to check whether the cyclic part of the modified $H'$ satisfies that every word is accepted. For this, one multiplies together the lengths of all cycles to get a number $p$ and then one adds up, over all cycles, the number $p \cdot h/\ell$ where $\ell$ is the number of states in the cycle and $h$ the number of accepting states.
The property of being a ufa avoids double counting of accepted words in one big cycle of length $p$. It follows that $H'$ contains all words iff the number so computed is equal to $p$. Note that $p \leq n^{O(\sqrt{n})}$ and therefore the logarithm of this number is at most $O(\sqrt{n} \log n)$; thus all operations needed on these numbers can be done in polynomial time using binary or decimal numbers, as one needs only $O(\sqrt{n} \log n)$ digits for each number. \end{proof} The above proof can be modified to do the comparison in LOGSPACE. \fi \begin{theorem} \label{th:logspace} (a) One can decide in LOGSPACE whether a ufa in Chrobak Normal Form accepts all words. Without the LOGSPACE constraint, the running time can be estimated as quasilinear, that is, of the form $O(n \log^2 n)$. (b) Furthermore, one can decide in LOGSPACE whether an nfa $U_1$ in Chrobak Normal Form accepts a subset of the language accepted by another ufa $U_2$ in Chrobak Normal Form. \end{theorem} \begin{proof} (a) First check whether the states of the stem are all accepting, which can clearly be done in LOGSPACE by an automaton walkthrough -- the pointers need $O(\log n)$ memory to track positions in the ufa. Then one walks through each cycle and counts the number $i_k$ of accepting states and the length $j_k$ of the cycle. Now the ufa accepts all words, that is, is universal, iff $\sum_k i_k/j_k = 1$. We initially show how to do this without being careful about space, and later show how the computation can be modified to run in LOGSPACE. As the computation with rational numbers might be prone to rounding, one first normalises to one common denominator, namely $p = \prod_k j_k$, and furthermore computes $s = \sum_k i_k \cdot \prod_{h \neq k} j_k$. Now the above equality $\sum_k i_k/j_k = 1$ holds iff $s = p$.
The values of $s$ and $p$ can be computed iteratively by the following algorithm; note that there are at most $n$ cycles and each time a cycle is processed, the corresponding values $i_k$ and $j_k$ can be established by an automata walk-through. So the loop is as follows: \begin{enumerate} \item Initialise $s=0$ and $p=1$; \item For each $k$ do begin find $i_k$ and $j_k$; update $s = (s \cdot j_k)+i_k \cdot p$; $p = p \cdot j_k$ endfor; \item if $s = p$ then accept else reject. \end{enumerate} In this algorithm, only the variables $p$ and $s$ need more space than $O(\log n)$; the other variables are all pointers or numbers between $0$ and $n$ which can be stored in $O(\log n)$ space. The values of $s$ and $p$ are at most $n^{2(n^{1/2})}$, as there are, when one assumes that the lengths of the cycles in Chrobak Normal Form are different, at most $2 n^{1/2}$ cycles, as the sum of their lengths is at most $n$ and half of them are larger than $n^{1/2}$. Thus, instead of computing the above algorithm once with exact numbers, one computes in time $O(n \log n)$ the first $5 \cdot n^{1/2}+2$ primes, out of which 80\% are above $n^{1/2}$, so that their product is above the maximum values $s$ and $p$ can take --- note that this part even has the better bound $O(n^{1/2} \log^8(n))$, if one uses the algorithm of Agrawal, Kayal and Saxena \cite{AKS04,LP19} for the primality test; the bound includes that the primes to be found are all bounded by a constant times $n^{1/2} \log n$, using the prime number theorem. As their product is larger than the upper bound $n^{1+2(n^{1/2})}$ of $s$ and $p$, one then accepts iff all computations modulo each such prime $q$ result in $s = p$ modulo $q$; this condition is, by the Chinese remainder theorem, equivalent to $s = p$ without taking any remainders. So the modified algorithm would be this.
\begin{enumerate} \item Let $q=2$; $\ell = 1$; \item Initialise $s=0$ and $p=1$ (both are kept modulo $q$) \item For each $k$ do begin find $i_k$ and $j_k$ by traversal of the corresponding cycle; update (both computations modulo $q$) $s = (s \cdot j_k)+(i_k \cdot p)$; $p = p \cdot j_k$ endfor; \item if $s \neq p$ (modulo $q$) then reject; \item Let $\ell = \ell+1$ and replace $q$ by the next prime using a fast primality test and exhaustive search; \item If $\ell < 5 n^{1/2}+2$ (or an easier-to-compute upper bound linear in this expression like $2^{0.5\log n+4}$, assuming that the binary system is used and $\log n$ is the number of binary digits of $n$) then goto step 2. \end{enumerate} The automata traversal of each of these $O(n^{1/2})$ loops needs at most time $O(n \log n)$, as one traverses each of the cycles of the ufa to determine the corresponding $i_k$ and $j_k$; the cycles are disjoint and have together at most $n$ states. So the overall running time is $O(n^{3/2} \log n)$. If one uses more space, about $O(n^{1/2}\log n)$, then one can avoid the repeated computation of $i_k,j_k$ by automata walk-throughs and gets the upper bound of $O(n \log^2(n))$ on the computation time. For the space usage, the computations modulo $q$ need only $O(\log q)$ space, which is $O(\log n)$ space, as the first $5 n^{1/2}+2$ primes $q$ and the variables stored modulo $q$ all fit into $O(\log n)$ bits. Primality tests can be done in $O(\log n)$ space for the usual way of doing it -- checking all divisors up to the root -- or with slightly more space when using more advanced algorithms. (b) First note that checking whether a ufa accepts all unary strings in sets $v w^*$, for some $v,w$ of length up to $2n$, can basically be done with the same idea as in (a), with a slight modification of the ufa walk-throughs.
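As a concrete illustration of part (a), the exact-arithmetic version of the universality test can be sketched as follows; representing the ufa by lists of accepting flags for the stem and for each cycle is an assumption of this sketch, and arbitrary-precision integers stand in for the modular computations of the LOGSPACE variant.

```python
def ufa_universal(stem, cycles):
    """Check whether a unary ufa in Chrobak Normal Form accepts all words.

    stem   -- list of booleans: accepting flags along the stem
    cycles -- list of lists of booleans: accepting flags around each cycle
    The ufa is universal iff every stem state accepts and the acceptance
    densities of the cycles sum to exactly 1; unambiguity rules out double
    counting of accepted lengths within one common period.
    """
    if not all(stem):
        return False
    # accumulate s and p exactly, mirroring the loop in the proof:
    # s = sum_k i_k * prod_{h != k} j_h  and  p = prod_k j_k
    s, p = 0, 1
    for cycle in cycles:
        i_k = sum(1 for a in cycle if a)   # accepting states in this cycle
        j_k = len(cycle)                   # length of this cycle
        s = s * j_k + i_k * p
        p = p * j_k
    return s == p
```

The LOGSPACE refinement performs exactly this loop, but keeps $s$ and $p$ only modulo each of the small primes $q$.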
For this one partitions the words accepted by $U_1$ into two groups: (i) A finite set $X_1$ of strings of length at most $n$ (where $n$ is the size of the ufa) and (ii) A set $X_2$ consisting of subsets of the form $vw^*$, where $v,w$ are unary strings with $n < |v| \leq 2n$ and $|w| \leq n$. The strings in group (ii) above are from the cycles in $U_1$, by considering each accepting state in a cycle separately, and taking $|v|$ as the length of the smallest string leading to the state and $|w|$ as the length of the corresponding cycle. Strings in group (i) can easily be checked using a walkthrough of $U_2$, where if there is a branching into the cycles, one can do a depth-first search. For strings in group (ii), each set of the form $vw^*$, where $n < |v| \leq 2n$ and $|w| \leq n$, is checked separately. As $|v|$ is greater than the length of the stem part of $U_2$, one can first modify the cycle part of $U_2$ to always start in a state which is reached after $|v|$ steps, and ignore the stem part. This would basically mean that we need to check if $w^*$ is accepted in the modified $U_2$ (denote this modified $U_2$ as $U_2'$). For space constraints, note that we do not need to write down $U_2'$, but just need to know the length by which the starting state of each cycle is shifted (which is the difference between $|v|$ and the length of the stem part of $U_2$). Now, for checking whether $w^*$ is accepted by $U_2'$, consider a further modified $U_2''$ formed as follows: for each cycle $C$ in $U_2'$ with length $r$ and states $s_0,s_1,\ldots,s_{r-1}$ ($s_0$ being the starting state, and transitions on unary input being from $s_i$ to $s_{i+1}$, where $i+1$ is taken mod $r$) consider a cycle $C'$ in $U_2''$ with states $s'_0,s'_1,\ldots,s'_{r-1}$ ($s'_0$ being the starting state, and transitions on unary input being from $s'_i$ to $s'_{i+1}$, where $i+1$ is taken mod $r$) where $s'_i$ is an accepting state iff $s_{|v|+i\cdot |w| \mod r}$ was an accepting state in $C$.
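The relabelling step just described can be sketched in a few lines; representing each cycle by its list of accepting flags is an assumption of this sketch.

```python
def relabel_cycle(accepting, v_len, w_len):
    """Relabel one cycle, as in the construction of U2''.

    accepting -- accepting flags s_0, ..., s_{r-1} around a cycle of length r
    State s'_i of the new cycle accepts iff state s_{(v_len + i*w_len) mod r}
    of the old cycle accepts; checking whether v w^* lies in the language then
    reduces to checking whether the relabelled automaton accepts all words.
    """
    r = len(accepting)
    return [accepting[(v_len + i * w_len) % r] for i in range(r)]
```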
This new ufa $U_2''$ also has at most $n$ states, the new length of each cycle is still the same as the length of the old cycle and the number of cycles does not increase. Now, similar to part (a), one just needs to check if $U_2''$ accepts all unary strings. Here again note that one does not need to write down $U_2''$ fully, but just needs to check, for each cycle, its length and the number of accepting states, which can be done in LOGSPACE. \iffalse Here for an accepting state on the stem, $v$ is the only way to do it, while for accepting states in the cycle-part, $v$ is the way to the first occurrence of the state in some run and $w$ is the length one has to go until reaching the state for the second time. Note that by doing the replacements $X_1 = X_1 \cup v$, $v = vw$ finitely often, one can obtain that $X_1$ contains all visits of the accepting state studied until the length $n$ is reached and $X_2$ contains the subsequent visits which are guaranteed to be in the cycle part. As the ufa $U_1$ can have several cycles, one can, more generally, view it as a finite union of sets $X_1$ and $X_2$, each determined from inspecting one accepting state in $U_1$ in Chrobak Normal Form. For each string $u$ in the finite set $X_1$, one can check acceptance by $U_2$ by walking $|u|$ steps and then checking whether it leads to an accepting state (in case of branching, one can do a depth-first search). Automata walkthroughs for each accepting state lead to sets $U_1$ and $U_2$ and for those in $U_2$, one just counts the number of states visited when going from an accepting state to itself in steps of $|w|$ and furthermore, one lets $i_k$ be the number of accepting states of the ufa visited and $j_k$ the number of steps of length $|w|$ done in the cycle of the second ufa until one visits again that state which was visited after reaching $|v|$.
The remaining algorithm then checks that all states in $U_2$ are also accepted by the ufa and that is the same algorithm as in (a), only with big steps consisting of $|w|$ small steps used to move around in the cycles of the second ufa. Here note that one can find all the finitely many sets of the form $vw^*$ which are determined by the accepting states of the first ufa (or nfa) and for each of these cycles, one checks whether the corresponding sets $U_1$ and $U_2$ are all subsets of the sets of words accepted by the second ufa. This finishes the explanations for this proof. \fi \end{proof} \noindent As converting an ufa into Chrobak Normal Form goes in polynomial time without increasing the size and as logarithmic space computations are in polynomial time, one directly gets the following corollary. \begin{corollary} \label{co:logspace} One can decide the universality problem and the inclusion problem for two $n$-state ufas in polynomial time. \end{corollary} \iffalse \begin{remark} The test can even be carried out in LOGSPACE using the following trick: Instead of computing the numbers directly, one does, for each prime $q$ in $O(n \log n)$ which is writeable with $O(\log n)$ bits the corresponding computations and when, for all these $q$, the corresponding computations modulo $q$ indicate that $p$ and the share of accepting states in a $p$-cycle are the same modulo $q$. \end{remark} \fi \begin{remark} \label{re:quasipolynomial} For an ufa (of size $n$) for a language $L$ over the unary alphabet, the language $L^*$ can be recognised even by a dfa of quadratic size in $n$ \cite{Ce13}. Thus if one allows in regular languages Kleene star, Kleene plus and the Boolean set-theoretic operations (but no concatenation), then the output of constant-size expressions, with parameters being given by languages accepted by $n$-state ufas, can be recognised by ufas of quasipolynomial size. 
Furthermore, constant-sized quantifier-free formulas with the same type of parameters, comparing such subexpressions by $=$, $\subseteq$ and $\neq$, can be evaluated in quasipolynomial time. \end{remark} \begin{theorem} \label{th:concat} There is an exponential-type blow-up for ufa sizes when recognising the concatenation of unary languages; the concatenation of two languages given by $n$-state ufas requires, in the worst case, an ufa with $2^{\Omega(n^{1/6})}$ states. \end{theorem} \begin{proof} Let $m$ be a numeric parameter and let $p_0,p_1,\ldots,p_{k-1}$ be the first $k=m/\log m$ primes of size at least $m$; note that $m > k+4$ for all sufficiently large $m$. These primes are within $\Theta(m)$ by the prime number theorem (which says that the $h$-th prime number is of size $\Theta(h \log h)$). Now the ufa $U$ to be constructed contains $k$ cycles $C_{\ell}$ of length $p_\ell \cdot (k+3)$ for $\ell = 0,1,\ldots,k-1$ and the cycle $C_\ell$ has, at the positions $\ell+2+h \cdot (k+3)$ for $h=0,1,\ldots,p_\ell-2$, an accepting state. There is one further cycle $C'$ of length $k+3$ which has accepting states at the positions $0,1,k+2$. Let $L$ denote the language recognised by this ufa. The lengths of $k$ consecutive unary strings not being accepted by the above ufa are exactly the lengths $r \cdot (k+3) +2, \ldots, r \cdot (k+3)+k-1$ where $r$ is $p_\ell -1$ modulo $p_\ell$ for $\ell = 0,1,\ldots,k-1$, and this does not happen at any other lengths. Let $H$ be the finite language which contains the words of length $0,1,\ldots,k-2$ and no other words. Now $L \cdot H$ contains all words whose length does, for at least one $\ell$, not have the remainder $k-1+(k+3) \cdot (p_\ell-1)$ modulo $(k+3) \cdot p_\ell$. The complement of $L \cdot H$ is a periodic language which contains exactly those words which have, for all $\ell$, the remainder $k-1+(k+3) \cdot (p_\ell-1)$ modulo $(k+3) \cdot p_\ell$.
So every nfa or ufa recognising this language needs cycles of length at least $(k+3) \cdot p_0 \cdot p_1 \cdot \ldots \cdot p_{k-1}$. The length of this cycle is approximately $\Theta(m^k) \cdot (k+3)$ and any nfa or ufa recognising it needs at least the same number of states. Furthermore, the ufa $U$ has $n$ states with $n \in \Theta(m) \cdot \Theta(k^2) = \Theta(m^3/\log^2 m)$. It follows that $n \cdot \Theta(\log^2(m)) = \Theta(m^3)$ and, using $\Theta(\log n)= \Theta(\log m)$ as $n,m$ are polynomially related, that $\Theta(n \cdot \log^2(n)) = \Theta(m^3)$. So one can estimate $m \in \Theta ((n \cdot \log^2 n)^{1/3})$. Now the concatenation $L \cdot H$ has an ufa of size $o$ to be determined and its complement, by the above, needs an ufa of size at least $O(m)^{m/\log m}$. Using that the complement of an ufa of size $h$ can be done in size $h^{\log h+O(1)}$, one has that the logarithm of the corresponding sizes satisfies $\log^2 o \geq (\log m+O(1)) \cdot m / \log m \geq m$ and $\log o \geq m^{1/2}$. Now the input ufa is of size $n$ with $m \in \Theta((n \log^2 n)^{1/3})$ and thus one has that $\log o \in \Omega(n^{1/6})$. So the concatenation of languages given by $n$-state ufas can result in an ufa of size at least $2^{\Omega(n^{1/6})}$. \end{proof} \begin{remark} This result stands in contrast to the situation of nfas. It is well-known that the concatenation of two $n$-state nfas needs only $2n$ states. However, the above construction shows that forming the complement in the unary nfas then blows up from $2n$ states to $2^{\Omega(n^{1/6})}$ states, giving an exponential-type lower bound for this operation; a direct construction leading to the bound $2^{\Omega((n\log n)^{1/2})}$ is known \cite{Ok12}.
Furthermore, \v Cevorov\'a \cite{Ce13} showed that the concatenation of two unary dfas of size $n$ can be realised by a dfa of size $O(n^2)$; actually, she gives an explicit formula for the size of the stem and the cycle which depends only on the size of the stems and the cycles in the two input automata. Pighizzini's result allows for an implementation of the following concatenation algorithm \cite{Pi00}: Convert the two ufas into dfas and then apply the algorithm to make the concatenation of dfas; this gives the upper bound of $2^{O((n \log n)^{1/2})}$ for the size of the concatenation ufa, thus the exponent of the size is up to a constant the same as the exponent of the blow-up at the transformation of an $n$-state ufa to a dfa. \end{remark} \noindent The following corollary summarises the costs of operations. It takes the following information into account. Holzer and Kutrib \cite{HK02} showed that the intersection of two ufas is just given by the $n^2$ state sized product automaton of the two $n$-state ufas which preserves the property of being ufas. The bounds for the disjoint union are listed by Jir\'askov\'a and Okhotin \cite{JO18} and are obtained by the simple union of the two ufas (this might give multiple start states, which is allowed for ufas); note that as the languages are disjoint, on each word in the union, one can reach an accepting state only in one of the ufas and there, by assumption, the way into the accepting state is unique. For the general union of two languages $L,H$ given by $n$-state ufas, consider the formula $L \cup (H-L)$ where $H-L$ is equal to the intersection of $H$ and the complement of $L$. The symmetric difference of two languages $L,H$ is the disjoint union of $L-H$ and $H-L$, thus the corresponding formula applies.
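As an illustration of the product construction behind the intersection bound, restricted to a single cycle from each automaton and ignoring the stems, one can sketch:

```python
from math import gcd

def intersect_cycles(c1, c2):
    """Product of two cycles of unary automata (cyclic parts only).

    c1, c2 -- accepting flags around cycles of lengths r1 and r2.
    The product cycle has length lcm(r1, r2) <= r1 * r2 and accepts a
    length iff both input cycles accept it; for single (deterministic)
    cycles, unambiguity is trivially preserved.
    """
    r1, r2 = len(c1), len(c2)
    r = r1 * r2 // gcd(r1, r2)  # lcm(r1, r2)
    return [c1[i % r1] and c2[i % r2] for i in range(r)]
```

Summing lcm over all pairs of cycles of the two automata stays within the $n^2$ bound of the product automaton.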
For concatenation, the best known bound is that of Okhotin's conversion of an ufa into a dfa \cite{Ok12}; then the corresponding operations are done with the resulting dfa or dfas, which are often polynomial, that is, they increase the exponent only by a constant factor. The lower bound for the disjoint union is just the length of two even-length cycles differing by length $2$; one cycle has the accepting states at the even positions and the other one at the odd positions. ``Finite formula'' refers to a formula with several input automata, combining the input $n$-state ufas with regular operations into a new ufa. \begin{corollary} \label{co:regularoperationbounds} Given ufas with up to $n$ states recognising the languages $L,H$, let $c(n)$, satisfying $c(n) \in n^{\log n + O(1)}$, be the bound on the size of an ufa from Theorem~\ref{th:complement} recognising the complement $c(L)$ of $L$. For the standard regular operations on languages over the unary alphabet, one obtains the following bounds.
\begin{center} \begin{tabular}{|c|c|c|c|c|} \hline Operation & Lower Bound & Source & Upper Bound & Source \\ \hline Intersection & $n^2-n$ & Holzer and & $n^2$ & Holzer and \\ & & Kutrib \cite{HK02} & & Kutrib \cite{HK02} \\ Complement & $n^{(\log \log \log n)^{\Omega(1)}}$ & Raskin \cite{Ra18} & $c(n)$ & Theorem~\ref{th:complement} \\ Disjoint union & $2n-4$ & see above & $2n$ & Jir\'askov\'a and \\ & & & & Okhotin \cite{JO18} \\ Union $L \cup H$ & -- & -- & $n + n \cdot c(n)$ & $L \cup (c(L) \cap H)$ \\ Symmetric difference & -- & -- & $2n \cdot c(n)$ & Theorem~\ref{th:complement} \\ Kleene Star & $(n-1)^2+1$ & \v Cevorov\'a \cite{Ce13} & $(n-1)^2+1$ & \v Cevorov\'a \cite{Ce13} \\ Concatenation & $2^{\Omega(n^{1/6})}$ & Theorem~\ref{th:concat} & $2^{O((n \log^2n)^{1/3})}$ & Okhotin \cite{Ok12} \\ Finite Formula & ETH $\Rightarrow$ $2^{\Omega(n^{1/3})}$ & Theorem~\ref{thm:ethformula} & $2^{O((n \log^2 n)^{1/3})}$ & Okhotin \cite{Ok12}\\ \hline \end{tabular} \end{center} \noindent In summary: All standard regular operations except concatenation have polynomial or quasipolynomial size-increase, but concatenation has an exponential-type size-increase. \end{corollary} \noindent One might ask whether one can evaluate constant-sized formulas using as inputs unary languages given by $n$-state ufas in a way which avoids the exponential-type blow-up in the evaluation, which the usage of concatenation brings with it. The answer is ``no'', as the following theorem shows. In the following, $H,I,J$ are sets of words given by $n$-state nfas and $K$ is a finite language. \begin{remark} \label{rem:varoccur} For the next result, one needs the following fact about the Exponential Time Hypothesis.
Impagliazzo, Paturi and Zane \cite{IPZ01} list only a constant $\ell$ for how often each variable occurs; however, one can replace each variable by $\ell$ copies and then each copy occurs once; now one has to add $\ell$ 2SAT-clauses per variable which code a circular implication between the truth-values of the $\ell$ copies replacing the original variable. Thus one has $\ell n$ variables and a total of (up to) $\ell n$ clauses with three literals each and (up to) $\ell n$ clauses with two literals each. If the original instances required, in the worst case, time $c^n$, the new instances require, in the worst case, time $(c^{1/\ell})^{\ell n}$ where $\ell n$ is the new number of variables. Note that one can obtain the same three-occur result also for some other NP-complete variants of 3SAT, for example, for X3SAT where a clause is true iff exactly one literal is true. To obtain such a formula, one takes the three-occur 3SAT instance obeying the Exponential Time Hypothesis with $n$ variables for some $n$ and requiring time $c^n$ for some $c>1$ to be solved and, for each clause with three literals $x_1,x_2,x_3$, one replaces this clause by four X3SAT clauses with $6$ new variables $y_1,y_2,y_3,z_1,z_2,z_3$ which satisfy $y_1+y_2+y_3=1$ and $y_k+\neg x_k+z_k=1$ representing the implication $y_k \rightarrow x_k$ for $k=1,2,3$; the $z_1,z_2,z_3$ all occur exactly once. So if the new X3SAT-instance is satisfied then the original 3SAT instance is satisfied due to one of the $y_k$ being true and thus implying that the literal $x_k$ is true; on the other hand, one can easily compute from a satisfying assignment of the original 3SAT instance one for the new X3SAT instance.
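The clause gadget above can be verified exhaustively; the following sketch (the helper name is ours) brute-forces the fresh variables $y_1,y_2,y_3$ and checks that suitable values for $z_1,z_2,z_3$ exist.

```python
from itertools import product

def gadget_satisfiable(x):
    """Check satisfiability of the X3SAT gadget for one 3SAT clause.

    x -- truth values (x1, x2, x3) of the three literals of the clause.
    The gadget consists of the X3SAT clauses y1+y2+y3 = 1 and, for each k,
    yk + (not xk) + zk = 1, where 'exactly one literal true' is required
    per clause; it should be satisfiable iff the clause x1 v x2 v x3 is.
    """
    for ys in product([False, True], repeat=3):
        if sum(ys) != 1:          # enforce y1 + y2 + y3 = 1
            continue
        ok = True
        for k in range(3):
            base = int(ys[k]) + int(not x[k])
            # base == 0 -> choose zk = True; base == 1 -> choose zk = False;
            # base == 2 -> two literals already true, clause k violated
            if base > 1:
                ok = False
                break
        if ok:
            return True
    return False
```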
This gives a further dilution and the constant $c$ from the runtime bound of the original three-occur 3SAT instance for $n$ variables (where without loss of generality all clauses have at least two literals) would be transformed to $(c^{1/10})^{10n}$ due to there being at most $3n/2$ clauses (as each clause has at least two literals, and each variable appears at most thrice) and $6$ new variables being added per clause, resulting in a total of $(1+6 \cdot 3/2) \cdot n$ variables in the translated instance. Now each $y_k$ occurs in two clauses, each $z_k$ in one clause and each original variable in up to three clauses which now code that a $y$-variable implies the corresponding literal for the three occurrences of that variable -- the $y$-variables are then one for each such occurrence. However, these variants are not needed in the present work. \end{remark} \begin{theorem} \label{thm:ethformula} Assuming ETH, it requires time $2^{\Omega(n^{1/3})}$ to evaluate the truth of the formula $$ (L - (H_1 \cup H_2 \cup H_3 \cup H_4 \cup H_5)) \cdot K = L $$ where $H_1,H_2,H_3,H_4,H_5,K$ are given by $n$-state ufas and $L$ is the set of all words. \end{theorem} \begin{proof} As indicated in Remark~\ref{rem:varoccur}, assume that each variable of a 3SAT instance occurs at most in one 3-literal clause and at most in two 2-literal clauses.
Let $m$ denote the number of variables of this modified problem; without loss of generality, $m / \log m$ and $m/3$ are natural numbers; $\log m$ is rounded up for this. The total number of clauses is bounded by $4m/3$ as each variable is in one 3-literal clause (shared with two other variables) and two 2-literal clauses. For ease of presentation, assume without loss of generality that there are exactly $4m/3$ clauses. Partition the $m$ variables into five sets such that no two variables of any clause appear in the same set. That is, colour each variable with a colour $c=1,2,3,4,5$ such that no two variables occurring in the same clause have the same colour. Select $5m/\log m$ primes greater than $2m$, but still bounded by some constant times $m$ (such primes exist by the prime number theorem). Suppose these primes are $p_{c,h}$, with $h=0,1,\ldots,m/(\log m) -1$ and $c=1,2,3,4,5$. Divide the $m$ variables into groups of $\log m$, and assign the variables of colour $c$ in the $h$-th group to the prime $p_{c,h}$, for $h=0,1,\ldots,m/(\log m) -1$ and $c=1,2,3,4,5$. For each colour $c=1,2,3,4,5$ construct a ufa for $H_c$ as follows. For $h=0,1,\ldots,m/(\log m) -1$, create a cycle $C_{c,h}$ of length $(4m/3+1) \cdot p_{c,h}$. There are up to $p_{c,h}$ Boolean combinations of the truth-values of the variables assigned to $p_{c,h}$; for the $\ell$-th such combination, for $k=0,1,\ldots,4m/3$, one makes the $[(4m/3+1) \cdot \ell+k+1]$-th state in $C_{c,h}$ accepting iff the $\ell$-th combination of truth-values assigned to the variables assigned to $p_{c,h}$ makes the $k$-th clause true. As there is at most one variable in each clause which is coloured $c$, it follows that the above automaton for $H_c$ has at most one accepting path for any unary word. If there is a satisfying truth assignment to the variables, then let such a truth assignment be the $\ell_{c,h}$-th for the variables assigned to $p_{c,h}$.
Let $r$ be such that $r \mod p_{c,h}=\ell_{c,h}$, for $h=0,1,\ldots,m/(\log m) -1$ and $c=1,2,3,4,5$. Now, for each $k<4m/3$, for some $c$ and $h$, the $[(4m/3+1) \cdot \ell_{c,h} + k+1]$-th state in the cycle $C_{c,h}$ would be accepting. Thus, one has a sequence of $4m/3$ consecutive words in the language $H_1 \cup H_2 \cup H_3 \cup H_4 \cup H_5$ (starting with length $(4m/3+1) \cdot r +1$ and going up to length $(4m/3+1) \cdot r +4m/3$). However, there is no sequence of $4m/3+1$ words in $H_1 \cup H_2 \cup H_3 \cup H_4 \cup H_5$ due to all cycles being in a rejecting state whenever the length of the word is a multiple of $4m/3+1$. On the other hand, if there is a sequence of $4m/3$ consecutive words in the language $H_1 \cup H_2 \cup H_3 \cup H_4 \cup H_5$, it must be starting with some length $(4m/3+1) \cdot r+1$ and ending at length $(4m/3+1) \cdot r + 4m/3$, for some $r$. But then, considering $\ell_{c,h}=r \mod p_{c,h}$, it must be the case that for each $k<4m/3$, for some $c$ and $h$, the $[(4m/3+1) \cdot \ell_{c,h} + k+1]$-th state in the cycle $C_{c,h}$ is accepting, and thus the 3SAT formula is satisfiable. Now the complement of $H_1 \cup H_2 \cup H_3 \cup H_4 \cup H_5$ has a hole of $4m/3$ consecutive words iff the 3SAT formula is satisfiable. Let $K$ contain all words of length strictly below $4m/3$ and no other words; an ufa with $4m/3+1$ states recognises $K$. Now $(L-(H_1 \cup H_2 \cup H_3 \cup H_4 \cup H_5)) \cdot K$ equals $L$ iff there is no satisfying assignment of the 3SAT formula. The size $n$ of the five ufas for $H_1,H_2,H_3,H_4,H_5$ is $O(m^3 /\log m)$ and $n \log n = \Theta(m^3)$, thus $m = \Theta((n \log n)^{1/3})$. It follows that checking the correctness of the given formula on the given $n$-state ufas requires at least time $2^{\Omega((n \log n)^{1/3})}$ under the assumption of the Exponential Time Hypothesis.
\end{proof} \noindent Note that Okhotin \cite{Ok12} provides an upper bound by converting the ufas into dfas and then carrying out the operations with dfas. These operations run in time polynomial in the size of the dfas constructed. While the size lower bound for the concatenation of two $n$-state ufas is just $2^{\Omega(n^{1/6})}$, the following conditional bound on the computational complexity of finding an ufa for the concatenation is even higher: $2^{\Omega(n^{1/4})}$ when using the Exponential Time Hypothesis. \begin{theorem} \label{th:compbound} Under the assumption of the Exponential Time Hypothesis, one needs at least $2^{\Omega(n^{1/4})}$ time to compute an ufa for the language of the concatenation of the languages of two given $n$-state ufas in the worst case. \end{theorem} \begin{proof} Consider an $m$-variable 3SAT instance with at most $3m$ clauses, where each variable occurs at most three times. Assume without loss of generality that $m \geq 4$ and $\log m$ is a whole number. Let the clauses be $u_1, u_2,\ldots,u_{3m}$, and the variables be $x_1,x_2,\ldots,x_m$. Let $r=\lfloor (\log m +3)/3 \rfloor$. Intuitively, we will assign $r$ clauses (and the variables in these clauses) to each cycle in the ufa. It follows from the prime number theorem that there is a constant $c'$ independent of $m$ such that the set $P_m$ of primes between $8m$ and $c'm$ has cardinality at least $s=3m/r$. Consider the primes $p_1,p_2,\ldots,p_s$ in $P_m$. Now one assigns to each prime number $p_i$ the clauses $u_{(i-1)*r+1}, u_{(i-1)*r+2},\ldots, u_{i*r}$ and the variables appearing in them. As there are at most $3r=\log m +3$ variables assigned to these clauses, the number of possible truth assignments to such variables is at most $8m< p_i$. Also, each variable might be in clauses assigned to at most three such primes.
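The prime selection used in this and the earlier constructions can be sketched by plain trial division; the helper is hypothetical, and any method of enumerating primes above a threshold works.

```python
def first_primes_at_least(bound, count):
    """Return the first `count` primes that are >= `bound`.

    Trial division suffices for a sketch; by the prime number theorem,
    for count in O(bound / log(bound)) all these primes stay within a
    constant factor of the threshold.
    """
    primes, n = [], max(bound, 2)
    while len(primes) < count:
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            primes.append(n)
        n += 1
    return primes
```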
The cycles in the ufa have lengths of the following form, where the start position of each cycle is at position $0$ and the accepting states of each cycle are listed. Each cycle uses only positions which have some fixed value modulo $5m+1$ (with no intersection on the positions), which maintains the ufa nature. \begin{enumerate} \item One cycle is of length $5m+1$, where exactly the state at the position $0$ is accepting. \item For each prime $p_i$, which has the clauses $u_{(i-1)*r+1}, u_{(i-1)*r+2},\ldots, u_{i*r}$ assigned to it, make a cycle of length $(5m+1) \cdot p_i$. The cycle has an accepting state at position $h+(5m+1)k$, for $h \in \{(i-1)*r+1, (i-1)*r+2,\ldots, {i*r}\}$, if the clause $u_h$ is not satisfied by the $k$-th possible truth-assignment to the variables (up to $3r=\log m +3$ of them) in the clauses assigned to $p_i$, or the number $k$ is bigger than the number of possible truth assignments to the variables but less than $p_i$. \item For each variable $x_j$ consider the primes $p_i,p_{i'},p_{i''}$ with $p_i < p_{i'} < p_{i''}$ with which it is associated. Note that only $p_i$ is guaranteed to exist; the others might be absent in the case that all clauses involving $x_j$ are coded with the same prime $p_i$. If $p_i$ and $p_{i'}$ exist, one has a cycle of length $p_i \cdot p_{i'} \cdot (5m+1)$ and the position $k \cdot (5m+1)+j+3m$ is accepting provided that the ($k$ modulo $p_i$)-th possible truth assignment to the variables associated with $p_i$ and the ($k$ modulo $p_{i'}$)-th possible truth assignment to the variables associated with $p_{i'}$ give different truth assignments to $x_j$.
Similarly, if $p_i,p_{i'},p_{i''}$ all three exist, one has a cycle of length $p_{i'} \cdot p_{i''} \cdot (5m+1)$ and the position $k \cdot (5m+1)+j+3m$ is accepting provided that the ($k$ modulo $p_{i'}$)-th possible truth assignment to the variables associated with $p_{i'}$ and the ($k$ modulo $p_{i''}$)-th possible truth assignment to the variables associated with $p_{i''}$ give different truth assignments to $x_j$. \end{enumerate} Now all unary strings whose length is a multiple of $5m+1$ are accepted at the accepting state of the first cycle. Furthermore, given some $\ell$, the unary strings of lengths of the form $(5m+1)\ell+1,(5m+1)\ell+2, \ldots, (5m+1)\ell+5m$ are all rejected iff the ($\ell$ modulo $p_i$)-th truth assignment for the variables associated with $p_i$ (for each $i$) defines a partial satisfying assignment, as checked by the cycles of the second type, and, furthermore, for every variable $x_j$ being assigned to two or three primes $p_i,p_{i'},p_{i''}$, the ($\ell$ modulo $p_i$)-th, ($\ell$ modulo $p_{i'}$)-th and ($\ell$ modulo $p_{i''}$)-th truth assignments to the variables associated with $p_i, p_{i'}, p_{i''}$ respectively coincide in their assignment to $x_j$, so that there is no accepting state of a cycle of the third type activated at the positions $(5m+1)\ell+j+3m$ and $(5m+1)\ell+j +4m$ in the two corresponding cycles. Thus an interval of length $5m$ without any word in the language occurs iff the coded 3SAT formula has a satisfying assignment. This allows one to conclude that the concatenation of the so defined language $L$ and the language of all words strictly shorter than $5m$ is universal, that is, contains all unary words, iff the coded 3SAT instance does not have a solution. Now let $2^{F(n)}$ be the time to compute an ufa for the concatenation of the languages of two given $n$-state ufas; as the algorithm writes in each step at most one symbol of the output, the number of states in the ufa computed is also bounded by $2^{F(n)}$.
Now by Theorem~\ref{th:logspace} and its corollary, one can decide in time $2^{O(F(n))}$ whether the so constructed ufa is universal, that is, contains all unary words. The above constructed ufa coding the solvability of an $m$-variable 3SAT instance has up to $n = (5m+1) \cdot (1 + ((9m)/(\log m + 3)) c' m + 2m(c'm)^2) \leq (9m)^2 (c'm)^2 \in O(m^4)$ states; this bound is obtained by counting the number of cycles created above and multiplying them with the cycle lengths and then taking upper bounds on the resulting expressions. As said, concatenating the language $L$ of this ufa with the language of all words up to length $5m-1$ results in a language which contains all unary words iff the corresponding 3SAT formula is not satisfiable. Thus one can decide in time $2^{O(F(n))}$ whether the coded three-occur 3SAT instance is satisfiable. By the assumption of the Exponential Time Hypothesis, $2^{\Omega(m)} \subseteq 2^{O(F(n))}$. Furthermore, $m \in \Omega(n^{1/4})$. Thus $2^{\Omega(n^{1/4})} \subseteq 2^{O(F(n))}$ and it follows that also $2^{F(n)}$ is a function in $2^{\Omega(n^{1/4})}$. This proves the given time bound. \end{proof} \begin{remark} This result also proves that, for deciding whether the concatenation of two languages given by $n$-state ufas is universal, the Exponential Time Hypothesis implies a $2^{\Omega(n^{1/4})}$ lower bound. The upper bound for this problem is slightly better than the dfa-conversion bound $2^{O((n \log^2 n)^{1/3})}$ of Okhotin \cite{Ok12}: Theorem~\ref{thm:algo} proves that the universality of an nfa can be checked in time $2^{O((n \log n)^{1/3})}$ and as the concatenation of two $n$-state ufas can be written as a $2n$-state nfa, this upper bound also applies to checking the universality of the so obtained nfa. \end{remark} \section{Membership in Regular Languages of Infinite Words} \noindent For a finite alphabet, a finite word is a finite string over this alphabet; in the present work, the alphabet has exactly one member.
The characteristic function of a language (set) $L$ of finite words can be viewed as the infinite word $L(0)L(1)\ldots$; if the $i$-th word is in $L$ then $L(i)=1$, else $L(i)=0$. Thus the characteristic function of the language $L$ can be viewed as an $\omega$-word and $L(0)L(1)\ldots$ is the $\omega$-word generated by the language $L$. An $\omega$-language is a set of $\omega$-words and it is regular iff a nondeterministic B\"uchi automaton recognises the language. B\"uchi \cite{Bu60} showed in particular that an $\omega$-language is regular iff there exist finitely many regular languages $A_1,A_2,\ldots,A_n,B_1,B_2,\ldots,B_n$ such that $B_1,B_2,\ldots,B_n$ do not contain the empty word and the given $\omega$-language equals $\bigcup_k A_k \cdot B_k^\infty$ where $B_k^\infty$ is the set of infinite concatenations of members in $B_k$; all elements of an $\omega$-language are $\omega$-words. Now the question investigated is the following: Given a fixed $\omega$-regular language, what is the complexity of checking whether the $\omega$-word generated by a unary $n$-state nfa is in this given $\omega$-language? The lower bound of Fernau and Krebs \cite{FK17} directly shows that this task requires, under the Exponential Time Hypothesis, at least $2^{\Omega(n^{1/3})}$ deterministic time, where $n$ is the number of states of the input nfa. An upper bound is obtained by converting the nfa to a dfa and then computing words $v,w$ such that the $\omega$-word generated by the language equals $v w^\omega$. Then, one checks if there is a $k$ such that $v w^\omega \in A_k \cdot B_k^\infty$; this can be done by constructing a deterministic Muller automaton for the given $\omega$-language. For this, one first feeds the Muller automaton $v$ and records the state after processing $v$. Then one repeatedly feeds the Muller automaton with $w$ and records the entry and exit states and which states were visited.
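The computation of $v$ and $w$ from the dfa can be sketched as follows: one simulates the unary dfa until a state repeats, the prefix before the first repeated state yields $v$ and the cycle yields $w$. This is an illustrative Python sketch (the transition table \texttt{delta} and all names are ours, not part of the formal construction).

```python
def omega_word_decomposition(delta, start, accepting):
    """Return (v, w) with characteristic omega-word = v w^omega for a unary dfa.

    delta:     dict mapping state -> successor state (single unary letter)
    accepting: set of accepting states
    """
    seen = {}            # state -> position of its first visit
    bits = []
    s = start
    while s not in seen:
        seen[s] = len(bits)
        bits.append('1' if s in accepting else '0')
        s = delta[s]
    pre = seen[s]        # length of the preperiod
    return ''.join(bits[:pre]), ''.join(bits[pre:])
```

After this decomposition, $v$ and then copies of $w$ are fed into the Muller automaton as described above.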
After some time, when processing finitely many copies of $w$, the Muller automaton goes through a loop and the states visited in this loop are the infinitely often visited states; from these one can compute the membership of $v w^\omega$ in $A_k \cdot B_k^\infty$. The complexity of this algorithm is $Poly(n) \cdot 2^{\Theta((n \log n)^{1/2})}$ -- mainly by using the length bound on the period mentioned in the survey of Holzer and Kutrib \cite{HK11} and other sources, as well as by stepwise simulating the nfa to determine, for each position, whether the next bit in the $\omega$-word generated by the language accepted by the nfa is $0$ or $1$, in order to feed this bit into the Muller automaton until one has seen enough periods of $w$ to determine the infinitely often visited states of the Muller automaton. \begin{theorem} \label{th:nfaomega} Assuming the Exponential Time Hypothesis, checking whether an $n$-state nfa defines an $\omega$-word in $\cal L$ takes at least time $2^{\Omega((n \log\log n / \log n)^{1/2})}$. \end{theorem} \begin{proof} Impagliazzo, Paturi and Zane \cite{IPZ01} showed that whenever the Exponential Time Hypothesis holds, this is witnessed by a sequence of 3SAT formulas which have linearly many clauses when measured in the number of variables. Such clauses will be coded up in an nfa as follows. One codes an $m$-variable, $k$-clause 3SAT instance with $k\in O(m)$ using an nfa as follows. Suppose $x_1,\ldots,x_m$ are the variables used in the 3SAT formula and $C_1,\ldots, C_k$ are the $k$ clauses. Let $r'=\ceil{\log \log m}$ and $r=2^{r'}$. Without loss of generality assume $m$ is divisible by $r'$: otherwise we can increase $m$ (up to doubling) to make this hold and add one-literal clauses to the 3SAT formula using the new variables, without changing the big-$O$ complexity. The nfa has disjoint cycles of different prime lengths $p_1,p_2,\ldots,p_t$, where $t \in \Theta(m/\log \log m)$ and each of these prime numbers is at least $4(rk+20)$ and in $\Theta(m \log m)$.
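As a concrete illustration (not part of the formal argument, which only needs the prime number theorem), such a list of distinct primes above the required threshold can be produced by simple trial division; the following Python sketch uses names of our own choosing and is meant for small demonstration parameters only.

```python
def is_prime(n):
    # trial division; adequate for small demonstration values
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def pick_primes(t, lower):
    """Return the first t primes that are at least `lower`,
    e.g. lower = 4*(r*k+20) in the construction above."""
    primes, n = [], max(lower, 2)
    while len(primes) < t:
        if is_prime(n):
            primes.append(n)
        n += 1
    return primes
```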
Note that by the prime number theorem, as $k \in O(m)$, there are enough such different primes, where the constant in the $\Theta$ also depends on the constant of the linear bound of $k$ in $m$. So the overall size of the nfa is $\Theta(m^2 \log m/ \log \log m)$. Intuitively, we will use the cycle of length $p_i$ to handle the variables $x_{(i-1)*r'+1}$ $x_{(i-1)*r'+2}$ $\ldots$ $x_{(i-1)*r'+r'}$. For each prime length $p_i$, where $i=1,2,\ldots,t$ as above, the cycle of length $p_i$ codes $(1100000(10a_{i,\ell,h}0)_{h=1}^{k}1000011)_{\ell=1}^{r}1^{p_i-r(4k+14)}$, where $1$ denotes an accepting state in the cycle and $0$ denotes a rejecting state in the cycle. Here $a_{i,\ell,h}$ is $1$ if the truth assignment to $x_{(i-1)*r'+1}$ $x_{(i-1)*r'+2}$ $\ldots$ $x_{(i-1)*r'+r'}$ being the $\ell$-th binary string in $\{0,1\}^{r'}$ makes the $h$-th clause $C_{h}$ true, and $0$ otherwise. Each item $(1100000(10a_{i,\ell,h}0)_{h=1}^{k}1000011)$ is called a block and each block codes a possible truth assignment to the variables encoded by prime $p_i$: the block corresponding to a value $\ell$ corresponds to the $\ell$-th truth assignment to the variables $x_{(i-1)*r'+1}$ $x_{(i-1)*r'+2}$ $\ldots$ $x_{(i-1)*r'+r'}$, and the part $10a_{i,\ell,h}0$ corresponds to checking whether the $h$-th clause is satisfied. Note that $r = 2^{r'}$ is the number of possible truth assignments to these $r'$ variables. Note that five consecutive zeros only occur at the beginning of a block and four consecutive zeros only occur at the end of a block. Each cycle has a different prime length; thus, by the Chinese remainder theorem, for each possible truth assignment to the variables, there is a number $s$ such that $s \mod p_i$ is the starting position of the block where the corresponding variable values are used for evaluating which clauses are satisfied.
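To make the block encoding concrete, the following Python sketch builds one block and one padded cycle pattern; each block has length $4k+14$ and the cycle is padded with accepting states ($1$s) up to the prime length. Function names are ours and purely illustrative.

```python
def block(sat_bits):
    """One block 1100000 (10 a_h 0)_{h=1..k} 1000011; sat_bits[h-1] is 1
    iff the h-th clause is satisfied by the assignment this block encodes."""
    return "1100000" + "".join("10%d0" % a for a in sat_bits) + "1000011"

def cycle_pattern(assignment_sat, p):
    """Concatenate the r blocks (one per possible truth assignment to the
    r' variables of this cycle) and pad with 1s up to prime cycle length p."""
    body = "".join(block(bits) for bits in assignment_sat)
    assert len(body) <= p, "prime too small for r blocks of length 4k+14"
    return body + "1" * (p - len(body))
```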
Thus, if a truth assignment makes the 3SAT formula true, then for the language $L$ recognised by the nfa, $L(s)L(s+1)\ldots L(s+4k+13)$ would be $1100000(1010)^k1000011$, which is in $1100000(1010)^+1000011$. On the other hand, if $1100000(1010)^+1000011$ is a substring of $L(0)L(1)\ldots$, then let $L(s)$ be the starting point of the substring $1100000(1010)^+1000011$ in $L(0)L(1)\ldots$. Then, as $11000001$ can occur only at the start of a block of any cycle of the nfa and $1000011$ only at the end of a block of any cycle of the nfa, the part $(1010)^+$ must be of the form $(1010)^k$. Thus there are values $\ell_i$ for each $i$ such that, for each value of $h$ in $\{1,2,\ldots, k\}$, $a_{i,\ell_i,h}=1$ for some $i$. Thus the 3SAT formula is satisfiable. This proves the reduction of a 3SAT formula to the question whether the $\omega$-word generated by the corresponding nfa is in the $\omega$-language $$\{0,1\}^*1100000(1010)^+1000011\{0,1\}^{\infty}.$$ Now $n \in \Theta(m^2 \log m / \log \log m)$ and therefore $\Theta(\log n) = \Theta(\log m)$; thus $m \in \Omega((n \log\log n/\log n)^{1/2})$ and, by the Exponential Time Hypothesis, determining the membership of the $\omega$-word defined by a unary $n$-state nfa requires at least time $c^{(n \log\log n/ \log n)^{1/2}}$ for some constant $c$. It follows that the logarithms of the computation times of the upper and lower bounds match up to a factor in $O(\log n)$. \end{proof} \begin{remark} One can put a marker of the form $1100011000110001100011$ at the beginning of the bits coded in each cycle -- the primes have to be slightly larger for that -- and this marker only occurs as a subword of the $\omega$-word iff all cycles are in the initial position; otherwise some overlay with a consecutive run of three $1$s or the subword $10a010b01$ $(a,b \in \{0,1\})$ inside the area of the above marker would occur, which introduces an additional $1$ into the marker.
Therefore (using an $\omega$-automaton) one can recognise the beginnings of intervals which code, for each combination of variable assignments, one block which is of the form $1100000\{1010,1000\}^+1000011$ and which has the subword $10001$ iff the variable assignment does not satisfy the given instance. Thus one can -- modulo some fixed constant -- count the number of solutions of the instance represented by the $\omega$-word. Therefore, membership in the corresponding $\omega$-language, which can be chosen as fixed, can be used to check whether the correctly coded $\omega$-word representing an $m$-variable $k$-clause 3SAT instance has, modulo a fixed natural number, a nonzero number of solutions. Such counting checks are not known to be in the polynomial hierarchy; thus the membership test for fixed $\omega$-languages corresponds to a complexity class possibly larger than {\bf NP} or {\bf PH}. \end{remark} \begin{remark} \label{re:ufaomega} The construction in Theorem~\ref{thm:ethformula} can be adjusted as follows. One combines all five ufas for $H_1,H_2,H_3,H_4,H_5$ such that the cycle length of $(4m/3+1) \cdot p_{c,h}$ is adjusted to $(20+8m) \cdot p_{c,h}$. Furthermore, one codes for each clause not one bit in five ufas, but six bits in one ufa, with the first bit being always $0$ and the next five bits being $1$ based on whether the corresponding coloured variable satisfies the clause (that is, if in Theorem~\ref{thm:ethformula}, $H_i$ would have set the clause true), for the colours $1,2,3,4,5$. The whole sequence of blocks is not preceded by one single $0$ but by $1111110$. The sequence ends with $0111111000000$. These are $20$ constant bits to be coded instead of one previously. Now let $R = \{0\} \cdot (\{0,1\}^5-\{00000\})$ and consider the $\omega$-language $\{0,1\}^* \{1111110\} \cdot R^* \cdot \{0111111000000\} \cdot \{0,1\}^\omega$.
This $\omega$-language contains the $\omega$-word of the translated instance iff the 3SAT formula has a satisfying assignment. In this $\omega$-language, the defining formula just says that somewhere there is a pattern where the members of $R$ indicate that there is at least one literal per coded clause which makes the corresponding clause true and that there are no codes of unsatisfied clauses (which would be $000000$) between the codes for the start and the end of the instance; the blocks at the start and the end are chosen such that the end block has so many trailing zeroes that one cannot swap the start and end blocks in order to get something which is falsely interpreted as the code of a satisfied instance. Thus one can decide an $m$-variable 3SAT instance by checking whether the $\omega$-word of the corresponding ufa is in the $\omega$-language. The size of the ufa is $\Theta(m^3 / \log m)$ and thus the value $m$ is $\Theta((n \log n)^{1/3})$. It follows that, by the Exponential Time Hypothesis, one needs at least time $2^{\Omega((n \log n)^{1/3})}$ to check the membership of an $\omega$-word given by an $n$-state ufa in a fixed $\omega$-language in the worst case. The exponent in this conditional lower bound differs by a factor of $(\log n)^{1/3}$ from the upper bound obtained by converting the ufa into a dfa and then running the corresponding test. \end{remark}
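On any finite prefix of the $\omega$-word, the witness pattern $1111110 \cdot R^* \cdot 0111111000000$ of this $\omega$-language can be checked with an ordinary regular expression; the following Python sketch (with illustrative prefix strings of our own) uses a negative lookahead to encode $R = \{0\} \cdot (\{0,1\}^5-\{00000\})$.

```python
import re

# 1111110, then words of R = {0}.({0,1}^5 - {00000}), then 0111111000000
WITNESS = re.compile(r"1111110(?:0(?!00000)[01]{5})*0111111000000")

def has_witness(prefix):
    """The coded instance is satisfiable iff the witness pattern occurs in a
    sufficiently long prefix of the omega-word (sketch)."""
    return WITNESS.search(prefix) is not None
```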
\section{Introduction} Collaborative Filtering (CF) is widely applied in various scenarios of recommender systems, providing personalized item recommendations based on past user behaviors, such as purchasing a product~\cite{koren2009matrix,salakhutdinov2008bayesian,rendlebayesian}. Recently, graph based recommendations have shown huge success in solving the data sparsity problem~\cite{berg2017graph,wang2019neural,wu2019neural}. Since the user-item interactions naturally form a graph, graph based recommendations obtain better user and item representations by aggregating higher-order neighbor information in a data sparsity setting. However, the cold start problem is still a challenge in CF based recommendation. Since new users or items have no historical interaction records, a conventional way to solve the cold start problem is to introduce additional data such as reviews, social networks, attributes, etc. Among them, user and item attributes are easily acquired on most online platforms (e.g., Facebook, Amazon) and describe specific features. In this paper, we focus on attribute information in the cold start setting. For most attribute enhanced recommendation methods, we summarize them into three categories according to the difference of input data: CF-based, content-based, and hybrid methods. Given the historical interaction data and attributes, some researchers leverage collaborative information of the existing entities and the attribute similarity for new user (item) recommendations~\cite{goldberg2001eigentaste,zhou2011functional,sedhain2014social}. However, they do not map attribute information into a feature space. Deep neural networks have achieved better performance in feature modeling. Content-based methods make full use of auxiliary information of users and items to enhance the modeling of preference embeddings~\cite{gantner2010learning,van2013deep,lian2018xdeepfm,wang2015collaborative,cheng2021long}.
For example, DeepMusic~\cite{van2013deep} and CDL~\cite{wang2015collaborative} were proposed to incorporate content data into deep neural networks and learn a general transformation function for content representations. A simple assumption is that the attribute information can be mapped into the embedding space by a general transformation function, which ignores collaborative signals on the new user or item side. In order to overcome this shortcoming and further improve the model performance based on the content information, hybrid methods are proposed. Hybrid models fuse the CF and content embeddings, and model the relations between the CF and content spaces~\cite{volkovs2017dropoutnet,zhu2020recommendation}. For example, DropoutNet~\cite{volkovs2017dropoutnet} was proposed to make full use of content and pretrained CF embeddings for recommendation. However, most of these methods still have some weaknesses in dealing with new users (items) that have no interactions with existing items (users). Graph based recommendations are limited by user-item links. To obtain unseen node embeddings in a graph, inductive representation learning combines node features and graph structures for node embedding~\cite{hamilton2017inductive,ying2018graph,wu2020learning}. For example, PinSage is a content-based Graph Convolutional Network (GCN) model for recommending items, which gathers both graph structure and node features for embedding learning~\cite{ying2018graph}. These methods still have weaknesses in tackling the new user (item) problem mentioned above. In other words, how to make recommendations for new users (items), who have no links during test, is still challenging. Since user-item links are available during training but not during testing, interaction data is capable of providing privileged information. This problem can also be treated as how to leverage attribute information to distill privileged information for better recommendations for new users (items).
To this end, in this paper, we take advantage of graph learning and knowledge distillation in privileged information modeling and propose a novel \emph{privileged graph distillation model~(PGD)}~for the cold start problem, in which new users (items) have no links during test. Specifically, we introduce attributes of users (items) as nodes into the user-item graph and construct a heterogeneous graph, so that attribute representations can capture higher-order information during embedding propagation. Since privileged information is only available offline and effective for prediction, we employ a knowledge distillation method to tackle the cold start problem. More specifically, the teacher model can access all the information and make full use of attributes for privileged information learning and user preference modeling. The student model is constructed on an entity-attribute graph without CF links, and can obtain privileged information based on attributes under the guidance of the teacher model. Then, the student model can fuse CF signals of user or item embeddings for final recommendations. Thus, \emph{PGD}~can not only make full use of attribute information for better recommendation, but also alleviate the cold start problem when recommending for new users or items. Finally, we divide the cold start problem in recommendation into three sub-tasks and evaluate the model performance on three datasets. Extensive experimental results demonstrate the superiority of our proposed \emph{PGD}. \section{Related Work} \subsection{Cold Start Recommendation} CF-based algorithms personally recommend products by collecting explicit rating records and implicit feedback, and are widely applied in various recommendation systems~\cite{koren2009matrix,salakhutdinov2008bayesian,rendlebayesian}. These methods leverage matrix factorization to obtain low-dimensional representations of users and items.
For example, Rendle et al.~\cite{rendlebayesian} proposed Bayesian Personalized Ranking (BPR), which learns user and item latent vectors based on implicit feedback. Moreover, with the development of GCNs, plenty of GCN-based CF methods have been proposed to learn better collaborative filtering representations and alleviate the data sparsity problem~\cite{chen2020revisiting,wu2019neural,he2020lightgcn}. For example, Chen et al.~\cite{chen2020revisiting} designed the LR-GCCF model to simplify the embedding propagation process with linear graph convolutions, which achieved excellent performance. However, most CF-based methods require links between users and items, which limits their applications. In order to solve the cold start problem, CF-based methods conventionally leverage social data and basic matrix factorization to capture new users' preferences~\cite{goldberg2001eigentaste,zhou2011functional,ren2017social,sedhain2014social}. Social data based methods first keep the pretrained CF representations on implicit feedback data, and then generate the new user's embedding from the connections between new users and old users~\cite{sedhain2014social}. Despite the achievements they have made, most of these models still have some drawbacks. These methods cannot be widely used in the case of both new users and new items, and underestimate the potential of users' and items' attribute information. In order to remedy the shortcomings of CF-based methods, researchers proposed to utilize additional content information and designed content-based methods. Content-based methods take the profile as input and train a general transformation function for content information, from which new user or item representations can be generated.
These methods usually learn a mapping function to transform the content representation into the collaborative space~\cite{gantner2010learning,wang2015collaborative,van2013deep}, and leverage deep cross-network structures to capture higher-order relationships between features~\cite{wang2017deep,lian2018xdeepfm}. For example, xDeepFM was proposed to model cross interactions explicitly at the vector-wise level~\cite{lian2018xdeepfm}. In order to solve the cold start problem in graph based recommendations, PinSage was proposed to leverage both attributes and the user-item graph structure to generate better embeddings~\cite{ying2018graph}. However, most of these methods do not consider the complicated connection between the CF embedding space and the content space for each user (item), so new user (item) representations cannot reflect the association with CF information. To make full use of both CF-based methods and content-based methods, hybrid models are proposed to make better recommendations~\cite{volkovs2017dropoutnet,zhu2020recommendation,wu2020learning}. Most of these methods learn CF embeddings and transformation functions to minimize prediction errors. A typical example is Heater~\cite{zhu2020recommendation}, which dropped CF signals randomly to imitate new-user or new-item situations. In particular, the CF representation is pretrained as a constraint for content embedding learning. The final prediction is conducted with a random choice of the CF representation or the content representation. Since the construction of the user-item bipartite graph relies on interaction records, learning new user (item) representations is still a problem in graph based recommendations. Thus, inductive graph learning methods are proposed to tackle the representation problem of unseen nodes~\cite{hamilton2017inductive,chami2019hyperbolic,zeng2019graphsaint,zhang2019inductive}.
Among these methods, TransGRec was proposed to feed the item's CF information and content information into the node initialization layer of the graph~\cite{wu2020learning}. In particular, TransGRec was designed to learn the graph's structure information with a transfer network, which is used to solve the new user (item) problem. \subsection{Knowledge Distillation and Applications in Recommendations} Knowledge distillation was first proposed to address the lack of data and devices with limited resources. It aims to learn a better student model from a large teacher model and abandon the teacher model at the testing stage. In recent years, knowledge distillation has been presented in three ways: logits output~\cite{hinton2015distilling,mirzadeh2020improved,zhou2018rocket}, intermediate layers~\cite{romero2014fitnets,zagoruyko2016paying}, and relation-based distillation~\cite{park2019relational,chen2020learning,peng2019correlation,liu2019knowledge}. Most of these methods assume that the teacher model and the student model receive the same regular data in the distillation process, which means the available information at test time is the same as at training time. In the real world, some information is helpful for prediction tasks but not always available; this is called privileged information (e.g., medical reports in pathology analysis). Therefore, privileged distillation is proposed to tackle the problem of missing data at online test time, in which privileged information is only fed into the teacher model. Lopez et al.~\cite{lopez2015unifying} proposed an approach that guided the student model with fewer data and distilled the teacher model's privileged information. Since knowledge distillation is capable of solving the data-missing and time-consumption problems, it has attracted attention in the recommendation area.
There are some works that obtain lightweight models with better performance by model distillation~\cite{tang2018ranking,zhang2020distilling,wang2020next,kang2020rrd}, which solve the problem of limited equipment resources and reduce the running time. For example, Zhang et al.~\cite{zhang2020distilling} constructed an embedding based model to distill the user's meta-path structure and improve accuracy and interpretability. Meanwhile, to solve the problem that privileged information is unavailable in online recommendations, researchers proposed to introduce privileged distillation into recommendations~\cite{chen2018adversarial,xu2019privileged}. The Selective Distillation Network~\cite{chen2018adversarial} was proposed to use a review process framework as the teacher model, so that the student model can distill effective review information. Xu et al.~\cite{xu2019privileged} proposed Privileged Features Distillation (PFD) to distill privileged features and achieved better performance in click-through rate and conversion rate prediction. However, most methods have not addressed the new user or item problem. In this paper, we treat interaction data as privileged information and design the student network to imitate the situation of new users or items. Our goal is to improve model performance on cold start problems by distilling the teacher's graph structure information and privileged information. \section{Problem Definition} \label{s:problem} In a collaborative filtering based recommendation system, there are two sets of entities: a user set~{\small$U$~($|U|\!=\!M$)} and an item set~{\small$V$~($|V|\!=\!N$)}. Since implicit feedback is available in most scenarios, we use a rating matrix {\small$\mathbf{R}\in\mathbb{R}^{M\times N}$} to denote the interaction information, where $r_{ij} = 1$ indicates an observed interaction between user $i$ and item $j$, and $r_{ij} = 0$ otherwise.
Traditionally, the user-item interaction behavior can be naturally formulated as a user-item bipartite graph: $\mathcal{G}_R=<U \cup V, \mathbf{A}^R>$, where the graph adjacency matrix is constructed from the interaction matrix {\small$\mathbf{R}$}: \begin{small} \begin{flalign}\label{eq:adj_matrix1} \mathbf{A}^R=\left[\begin{array}{cc} \mathbf{0}^{M\times M} & \mathbf{R}\\ \mathbf{R}^T & \mathbf{0}^{N\times N} \end{array}\right]. \end{flalign} \end{small} Most of the attributes are sparse and categorical, and we generally discretize continuous attributes. Meanwhile, the entity attribute matrix {\small$\mathbf{X}\in\mathbb{R}^{(M+N)\times D}$} is usually treated as supplementary information for the user-item bipartite graph, where {$D$} is the dimension of user and item attributes. Besides, we employ $\mathbf{x}_i\in\mathbb{R}^D$ and $\mathbf{x}_{M+j}\in\mathbb{R}^D$ to denote the $i^{th}$ user's one-hot attributes and the $j^{th}$ item's one-hot attributes~{($0 \leq i \textless \small{M}$~, $0 \leq j \textless \small{N}$)}. For $\mathbf{x}_i$, the attribute indices are between $0$ and $(D_u-1)$. For $\mathbf{x}_{M+j}$, the attribute indices are between $D_u$ and $(D-1)$, where $D_u$ is the dimension of user attributes. The goal of graph based recommendations is to measure the user preference and predict the preference score matrix {\small$\hat{\mathbf{R}}\in\mathbb{R}^ {M\times N}$}. In order to evaluate the model performance, we also split the recommendation task into three sub-tasks to analyze real-world scenarios in detail. \begin{itemize} \item[\textit{Task 1}:] When a new user with attributes appears, we recommend existing~(old) products to the new user; \item[\textit{Task 2}:] When a new product with attributes appears, we recommend the new product to existing~(old) users; \item[\textit{Task 3}:] When new users and new products appear at the same time, we recommend new products to new users.
\end{itemize} To this end, we propose a novel \emph{privileged graph distillation model~(PGD)}\ to tackle the above challenges. Next, we will introduce the technical details of \emph{PGD}. \section{The Proposed Model} \label{s:model} Figure~\ref{fig:framework} illustrates the overall architecture of our proposed \emph{PGD}, which consists of three main components: 1) \textit{Teacher model}: leveraging existing user-item interactions to learn user preference representations and item representations; 2) \textit{User Student model}: focusing on new user preference modeling; 3) \textit{Item Student model}: concentrating on new item modeling. Before introducing the technical details, we first introduce the necessary notations for the sake of convenience. We use {\small$\mathbf{U}\in\mathbb{R}^{M\times d}$} and {\small$\mathbf{V}\in\mathbb{R}^{N\times d}$} to denote the free embedding matrices of users and items respectively, where $M$ and $N$ represent the numbers of users and items, and $d$ is the dimension of the free embeddings. Moreover, we leverage {\small$\mathbf{Y}\in\mathbb{R}^{D\times d}$} to represent the user attribute and item attribute node embedding matrix. Besides, we employ $\mathbf{y}_k$ and $\mathbf{y}_l$ to denote the $k^{th}$ user attribute and the $l^{th}$ item attribute~{($0 \leq k \textless D_u$~, $D_u \leq l \textless D$)}. Next, we will introduce the technical details of our proposed \emph{PGD}.
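The bipartite adjacency matrix $\mathbf{A}^R$ defined above can be sketched in a few lines of NumPy; this is an illustrative helper of our own, not part of the released implementation.

```python
import numpy as np

def bipartite_adjacency(R):
    """A^R = [[0, R], [R^T, 0]] for an M x N interaction matrix R."""
    M, N = R.shape
    return np.block([[np.zeros((M, M)), R],
                     [R.T, np.zeros((N, N))]])
```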
\begin{small} \begin{figure*} [htb] \begin{center} \includegraphics[width=0.9\textwidth]{overall_graph.pdf} \end{center} \vspace{-0.3cm} \caption{The overall framework of our proposed model.}\label{fig:framework} \vspace{-0.3cm} \end{figure*} \end{small} \begin{small} \begin{figure} [htb] \begin{center} \includegraphics[width=0.4\textwidth, height=60mm]{student_graph.pdf} \end{center} \vspace{-0.5cm} \caption{The user student framework of \emph{PGD}.} \vspace{-0.3cm} \label{fig:student} \end{figure} \end{small} \subsection{Teacher Model} \label{s:teacher_model} As mentioned before, we intend to leverage attribute information to build connections for new users and new items. To this end, we construct a novel graph with the attributes as nodes, and design a novel GCN, which we name the \textit{Teacher model}, to generate comprehensive user and item embeddings, as well as predict the ratings of users for items. The teacher model's structure can be formulated as a user-item-attribute graph: $\mathcal{G}=<U \cup V \cup \mathbf{X}, \mathbf{A}>$, where the graph matrix is constructed from the rating adjacency matrix $\small\mathbf{A}^R$ and the attribute matrix $\small\mathbf{X}$: \begin{small} \begin{flalign}\label{eq:adj_matrix2} \mathbf{A}=\left[\begin{array}{cc} \mathbf{A}^R & \mathbf{X}\\ \mathbf{X}^T & \mathbf{0}^{D\times D} \end{array}\right]. \end{flalign} \end{small} Next, we first introduce the graph construction and model initialization. Then, we give a detailed description of the embedding propagation and model prediction. \textbf{Model Initialization Layer. } In this layer, we leverage the free embedding matrices {\small$\mathbf{U}\in\mathbb{R}^{M\times d}$} and {\small$\mathbf{V}\in\mathbb{R}^{N\times d}$} to denote users and items. The attribute embeddings of users and items are represented by {\small$\mathbf{Y}$}. They are treated as input and initialized with a Gaussian distribution, then updated during the propagation of the GCN.
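The teacher graph matrix $\mathbf{A}$ above, which augments the rating adjacency matrix with attribute links, can be sketched analogously (again an illustrative helper of our own):

```python
import numpy as np

def teacher_adjacency(A_R, X):
    """A = [[A^R, X], [X^T, 0]] with X the (M+N) x D entity-attribute matrix."""
    D = X.shape[1]
    return np.block([[A_R, X],
                     [X.T, np.zeros((D, D))]])
```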
We have to note that the free embedding matrices {\small$\mathbf{U}$} and {\small$\mathbf{V}$} will be shared with the \textit{Student model}, which will be introduced in the following parts. \textbf{Embedding Propagation Layer.} In this part, we employ a GCN to propagate the embeddings of users, items, user attributes, and item attributes to capture higher-order information and obtain the proximity between the four different node types for better node representations. Specifically, let $\mathbf{u}_i^t$ and $\mathbf{v}_j^t$ denote user $i$'s embedding and item $j$'s embedding at the $t^{th}$ layer. Moreover, $\mathbf{y}_k^{t}$ denotes the attribute embedding for users and $\mathbf{y}_l^{t}$ denotes the attribute embedding for items. We leverage the output of the Model Initialization Layer as the initial input of this layer, which means $\mathbf{u}_i^0 = \mathbf{u}_i$, $\mathbf{v}_j^0 = \mathbf{v}_j$, $\mathbf{y}_{k}^{0} = \mathbf{y}_{k}$, $\mathbf{y}_{l}^{0} = \mathbf{y}_{l}$. In order to compute the node embedding at the $(t+1)^{th}$ layer with consideration of its neighbors' embeddings and its own embedding at the $t^{th}$ layer, we utilize graph propagation and pooling operations to update the embedding of each node. Taking user $i$ as an example, we leverage $A_i = \{j|r_{ij} = 1\}\cup \{k|x_{ik} = 1\}$ to denote the set of items he has clicked together with his corresponding attribute set. The updating process can be formulated as follows: \begin{flalign} \label{eq:embedding propagation1} \mathbf{u}_i^{t+1} &= (\mathbf{u}_i^{t}+ \sum_{j\in {A}_i} \frac{\mathbf{v}_j^{t}}{|{A}_i|}+ \sum_{k\in {A}_i} \frac{\mathbf{y}_k^{t}}{|{A}_i|}). \end{flalign} By employing this layer, \emph{PGD}~not only utilizes item neighbor information to describe the user's implicit preference, but also makes full use of attributes for the user's explicit features. Similarly, \emph{PGD}~is capable of updating the item embedding based on the users who have clicked it and the corresponding item attributes.
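The user-side update rule above can be sketched in plain Python, summing the clicked items' embeddings and the user's attribute embeddings, normalized by the neighborhood size $|A_i|$; function and variable names here are ours for illustration.

```python
def update_user(u_i, item_embs, attr_embs):
    """u_i^{t+1} = u_i^t + sum_j v_j^t / |A_i| + sum_k y_k^t / |A_i|,
    where |A_i| counts clicked items plus the user's attributes."""
    neighbors = item_embs + attr_embs
    deg = len(neighbors)
    return [u + sum(n[d] for n in neighbors) / deg
            for d, u in enumerate(u_i)]
```

The item-side and attribute-side updates follow the same pattern with the roles of the node types exchanged.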
Therefore, we leverage $A_{M+j} = \{i|r_{ij} = 1\} \cup \{l|x_{(j+M)l} = 1\}$ to denote the set of users who have clicked item $j$ together with the corresponding attribute set of item $j$. Then, the updating operation for item $j$ in the $(t+1)^{th}$ layer can be described as follows: \begin{flalign}\label{eq:embedding propagation2} \mathbf{v}_j^{t+1} &= (\mathbf{v}_j^{t}+ \sum_{i\in {A}_{M+j}} \frac{\mathbf{u}_i^{t}}{|{A}_{M+j}|}+ \sum_{l\in {A}_{M+j}} \frac{\mathbf{y}_l^{t}}{|{A}_{M+j}|}). \end{flalign} Besides, we add attribute nodes into the GCN to enhance user preference modeling. Thus, a user attribute embedding can be updated based on all users who have the same attribute. Meanwhile, the item attribute embeddings can be updated in a similar way. The updating process at the $(t+1)^{th}$ layer can be formulated as follows: \begin{align} \begin{split} \mathbf{y}_k^{t+1} &= \mathbf{y}_k^{t}+ \sum_{i\in {A}_{k+M+N}} \frac{\mathbf{u}_i^{t}}{|{A}_{k+M+N}|}, 0 \leq k \textless D_u, \\ \mathbf{y}_l^{t+1} &= \mathbf{y}_l^{t} + \sum_{j\in {A}_{l+M+N}} \frac{\mathbf{v}_j^{t}}{|{A}_{l+M+N}|}, D_u \leq l \textless D, \end{split} \end{align} where $A_{k+M+N}=\{i|x_{ik}=1\}$ denotes the set of users who have the attribute $y_k$ and $A_{l+M+N}=\{j|x_{(j+M)l} = 1\}$ denotes the set of items that have the attribute $y_l$. In order to illustrate the embedding propagation process more clearly, we formulate the embedding propagation in matrix form.
Let matrices {\small$\mathbf{U}^t$}, {\small$\mathbf{V}^t$}, {\small$\mathbf{Y}^t$} denote the embedding matrices of users, items, and attributes after the $t^{th}$ propagation; then the embedding matrices after the $(t+1)^{th}$ propagation are updated as: \begin{small} \begin{flalign}\label{eq:new GCN} \left[\begin{array}{c} \mathbf{U}^{t+1} \\ \mathbf{V}^{t+1} \\ \mathbf{Y}^{t+1} \end{array}\right] =(\left[\begin{array}{c} \mathbf{U}^{t} \\ \mathbf{V}^{t} \\ \mathbf{Y}^{t}\end{array}\right]+ \mathbf{D}^{-1}\mathbf{A}\times \left[\begin{array}{c} \mathbf{U}^{t} \\ \mathbf{V}^{t}\\ \mathbf{Y}^{t}\end{array}\right]), \end{flalign} \end{small} where {\small$\mathbf{D}$} is the degree matrix of {\small$\mathbf{A}$}, which enables efficient propagation of neighbors' embeddings and updating of the fusion matrices. \textbf{Model Prediction Layer.} In this layer, we treat the output of the Embedding Propagation Layer as the final user embedding $\hat{\mathbf{u}}_i$ and item embedding $\hat{\mathbf{v}}_j$ (i.e., $\mathbf{u}_i^L, \mathbf{v}_j^L$), where $L$ is the number of GCN layers in the teacher model. Then, we predict user $i$'s rating for item $j$ by calculating the dot product of their embeddings, which can be formulated as follows: \begin{align} \label{eq:teacher_predict} \hat{r}_{ij} = \hat{\mathbf{u}}_i(\hat{\mathbf{v}}_j)^T = \mathbf{u}_i^L(\mathbf{v}_j^L)^T. \end{align} \subsection{Student Model} As mentioned before, we introduce the attribute information of users and items to alleviate the cold start problem in GCN based recommendation. However, attribute information alone is still limited in capturing the collaborative filtering signals of users, which are crucial for user preference modeling. To this end, we intend to leverage distillation techniques to train a student model, which can utilize the attribute information to access the collaborative signals in the teacher model.
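Before turning to the student model, the teacher's matrix-form propagation in Eq.~(\ref{eq:new GCN}) can be illustrated with a minimal NumPy sketch (the toy graph, sizes, and names are hypothetical):

```python
import numpy as np

def propagate(A, E):
    """One teacher propagation step: E^{t+1} = E^t + D^{-1} A E^t,
    where A stacks user/item/attribute nodes and D is its degree matrix."""
    deg = A.sum(axis=1)
    deg[deg == 0] = 1.0              # guard isolated nodes against division by zero
    return E + (A / deg[:, None]) @ E

# toy graph: two nodes connected by one edge
A = np.array([[0.0, 1.0],
              [1.0, 0.0]])
E = np.eye(2)                        # initial free embeddings
out = propagate(A, E)                # each node absorbs its single neighbor
print(out)                           # [[1. 1.] [1. 1.]]
```

Stacking $L$ such steps yields the final embeddings $\mathbf{U}^L$, $\mathbf{V}^L$ used in the prediction layer.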
Along this line, the student model can make full use of attribute information to model user preference comprehensively. More concretely, the student model is divided into two sub-models according to the attribute source~(i.e., user attributes or item attributes): 1) the \textit{User Student model}, and 2) the \textit{Item Student model}. Specifically, the attribute embeddings of users in the \textit{User Student model} are represented by {\small$\mathbf{E}=\{\mathbf{e}_0, \mathbf{e}_1,..., \mathbf{e}_{(D_u-1)}\}$} and the attribute embeddings of items in the \textit{Item Student model} are represented by {\small$\mathbf{F}=\{\mathbf{f}_{D_u}, \mathbf{f}_{D_u+1},..., \mathbf{f}_{(D-1)}\}$}. The former focuses on the new user problem and takes user attributes and items as input. The latter focuses on the new item problem and takes item attributes and users as input. The framework is illustrated in Figure~\ref{fig:student}. Since these two sub-models work in a similar way, for the sake of simplicity we take the \textit{User Student model} as an example to introduce the technical details in the following parts. \textbf{Graph Construction.} Since the direct connections between new users and items are unavailable in the student model, we first need to construct the graph between new users and items based on the attribute information. As illustrated in Figure~\ref{fig:student}, if user $i$ has clicked item $j$, we can observe the direct link between user $i$ and item $j$ in the teacher graph. However, this direct link is unavailable in the student graph. To this end, we employ indirect links between user attributes and items to replace the direct links between users and items. Specifically, if user $i$ has clicked item $j$, which will not be provided to the student model, we link the attributes of user $i$ to item $j$ to construct the user-\textit{attribute}-item graph for the \textit{User Student model}.
Moreover, if multiple users with attribute $k$ have clicked item $j$, we assign a higher weight to the indirect link between attribute $k$ and item $j$. We employ {\small$\mathbf{S}_u\in\mathbb{R}^{N\times D_u}$} to denote the item-user attribute matrix and {\small$\mathbf{S}_v\in\mathbb{R}^{M\times D_v}$} to denote the user-item attribute matrix, where {\small$\mathbf{S}_u$} and {\small$\mathbf{S}_v$} are constructed from the user-item adjacency matrix $\mathbf{A}^R$ and the entity attribute matrix {\small$\mathbf{X}$}: \begin{small} \begin{flalign}\label{eq:adj_matrix3} \mathbf{A}^R\mathbf{X}=\left[\begin{array}{cc} \mathbf{0}^{M\times D_u} & \mathbf{R}\mathbf{X}^V\\ \mathbf{R}^T\mathbf{X}^U & \mathbf{0}^{N\times D_v} \end{array}\right] =\left[\begin{array}{cc} \mathbf{0}^{M\times D_u} & \mathbf{S}_v\\ \mathbf{S}_u & \mathbf{0}^{N\times D_v} \end{array}\right], \end{flalign} \end{small} where {\small$\mathbf{X}^U$} represents the user attribute part of {\small$\mathbf{X}$} and {\small$\mathbf{X}^V$} represents the item attribute part of {\small$\mathbf{X}$}. {\small$\mathbf{S}_u$} is a second-order link matrix, in which $s_{jk} \geq 1$ counts the indirect links between item $j$ and user attribute $k$, and $s_{jk} = 0$ denotes that there is no indirect link between them. The user student model's graph structure can be formulated as an item-user attribute graph: $\mathcal{G}_{S_u} = <V \cup \mathbf{X}^U, \mathbf{A}^{S_u}>$, where the graph adjacency matrix {\small$\mathbf{A}^{S_u}$} is constructed from the item-user attribute matrix {\small$\mathbf{S}_u$}: \begin{small} \begin{flalign}\label{eq:adj_matrix4} \mathbf{A}^{S_u}=\left[\begin{array}{cc} \mathbf{0}^{N\times N} & \mathbf{S}_u\\ \mathbf{S}_u^T & \mathbf{0}^{D_u\times D_u} \end{array}\right]. \end{flalign} \end{small} Since this student graph $\mathcal{G}_{S_u}$ is constructed from second-order connections, it is somewhat denser than the traditional user-item graph.
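Since Eq.~(\ref{eq:adj_matrix3}) gives $\mathbf{S}_u = \mathbf{R}^T\mathbf{X}^U$, the student graph can be assembled with plain matrix products. A toy NumPy sketch (all sizes and data are hypothetical):

```python
import numpy as np

# hypothetical toy sizes: M=2 users, N=3 items, D_u=2 user attributes
R  = np.array([[1, 0, 1],
               [1, 1, 0]])       # user-item interaction matrix R
Xu = np.array([[1, 0],
               [1, 1]])          # user attribute matrix X^U

# S_u = R^T X^U: s_{jk} counts users with attribute k who clicked item j
S_u = R.T @ Xu                   # shape N x D_u

# adjacency of the item-user-attribute student graph A^{S_u}
A_Su = np.block([[np.zeros((3, 3)), S_u],
                 [S_u.T,            np.zeros((2, 2))]])
print(S_u)                       # [[2 1] [1 1] [1 0]]
```

Here item 0 reaches attribute 0 through two users, so that indirect link carries weight 2, matching the weighting scheme described above.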
After graph construction, we employ the item embedding from the teacher model as the initial embedding of the item in the student model. For the user attribute embedding $\mathbf{e}_k \in \mathbb{R}^{d}$, since user attributes only have indirect connections with items, we do not take the user attribute embedding from the teacher model; instead, we initialize it from a Gaussian distribution. \textbf{Embedding Propagation Layer.} Since there only exist indirect links between items and user attributes, we leverage the item free embedding to update the attribute embedding $\mathbf{e}_k$. Taking the update in the $(t+1)^{th}$ layer as an example, we aggregate the item neighbors of user attribute $k$ to update its embedding. Let ${A_{k+N}^{S_u}} = \{j|s_{jk} \geq 1 \}$ denote the set of items that have indirect connections with user attribute $k$; the $(t+1)^{th}$ updating operation can be formulated as follows: \begin{flalign}\label{eq:embedding propagation3} \mathbf{e}_k^{t+1} &= (\mathbf{e}_k^{t}+\sum_{j\in {A_{k+N}^{S_u}}} \frac{\mathbf{v}_j^{t}}{|A_{k+N}^{S_u}|}). \end{flalign} Meanwhile, the item embedding can be updated with the corresponding user attribute neighbors in a similar way. Let ${A_j^{S_u}} = \{k|s_{jk} \geq 1 \}$ denote the set of user attributes that have indirect connections with item $j$. The $(t+1)^{th}$ updating operation can be described as follows: \begin{flalign}\label{eq:embedding propagation4} \mathbf{v}_j^{t+1} &= (\mathbf{v}_j^{t}+\sum_{k\in {A_j^{S_u}}} \frac{\mathbf{e}_k^{t}}{|A_j^{S_u}|}).
\end{flalign} Similar to the teacher model, let matrices {\small$\mathbf{E}^t$}, {\small$\mathbf{V}^t$} denote the embedding matrices of user attributes in the user student model and of items after the $t^{th}$ propagation; then the embedding matrices after the $(t+1)^{th}$ propagation are updated as: \begin{small} \begin{flalign}\label{eq:new GCN1} \left[\begin{array}{c} \mathbf{V}^{t+1} \\ \mathbf{E}^{t+1}\end{array}\right] =(\left[\begin{array}{c} \mathbf{V}^{t} \\ \mathbf{E}^{t}\end{array}\right]+ {\mathbf{D}^{S_u}}^{-1}\mathbf{A}^{S_u}\times \left[\begin{array}{c} \mathbf{V}^{t} \\ \mathbf{E}^{t}\end{array}\right]). \end{flalign} \end{small} Finally, we obtain the user attribute embeddings and the updated item free embeddings. Taking new user $i$ and item $j$ as an example, the attribute set of new user $i$ can be represented by $X_{u_i} = \{k|\bar{x}_{ik} = 1\}$. Their embeddings can be represented as follows: \begin{equation} \begin{split} \mathbf{u}_i^U = \sum_{k \in X_{u_i}}\mathbf{e}_k^{L_{S_u}}, \quad \mathbf{v}_j^U = \mathbf{v}_j^{L_{S_u}}, \end{split} \end{equation} where $L_{S_u}$ is the number of GCN layers in the user student model. Meanwhile, we can obtain the user embedding $\mathbf{u}_i^I$ and item embedding $\mathbf{v}_j^I$ from the item student model in a similar way. \textbf{Prediction Layer.} In this layer, we utilize the learned user embedding and item embedding to calculate the corresponding rating. Taking user $i$ and item $j$ as an example, the predicted rating can be calculated with the following function: \begin{equation} \label{eq:overall_prediction} \begin{split} \hat{r}_{ij} = \hat{\mathbf{u}}_i(\hat{\mathbf{v}}_j)^T. \end{split} \end{equation} If the user and the item are available simultaneously, the predicted rating can be obtained with $\hat{\mathbf{u}}_i=\mathbf{u}_i^L, \hat{\mathbf{v}}_j=\mathbf{v}_j^L$, as illustrated in Eq.~\ref{eq:teacher_predict}.
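To make the composition above concrete: a cold-start user's embedding is the sum of his attribute embeddings learned by the student model, and the rating is a dot product. A minimal NumPy sketch (all names and data are hypothetical):

```python
import numpy as np

def cold_start_user_embedding(E_attr, attr_ids):
    """u_i^U: sum of the user's attribute embeddings e_k over X_{u_i}."""
    return E_attr[attr_ids].sum(axis=0)

def predict(u, v):
    """\hat{r}_{ij} = u v^T (dot product of the two embeddings)."""
    return float(u @ v)

# hypothetical toy data: 3 attribute embeddings of dimension 2
E_attr = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
u_new = cold_start_user_embedding(E_attr, [0, 2])   # attrs 0 and 2 -> [2., 1.]
v = np.array([0.5, 2.0])
print(predict(u_new, v))                            # 2*0.5 + 1*2 = 3.0
```

The same composition applies on the item side with the item attribute embeddings $\mathbf{f}_l$.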
When dealing with the cold start problem, we employ different components in \emph{PGD}~to generate different implementations of the user embedding $\hat{\mathbf{u}}_i$ and item embedding $\hat{\mathbf{v}}_j$ in Eq.~\ref{eq:overall_prediction}, which allows us to tackle the different cold start situations in a unified way. \textit{1) Task 1.} In this task, we select the user student model. The user embedding can be represented by the sum of the corresponding attribute embeddings $\mathbf{e}_k~(k\in{X_{u_i}})$ in the user student model. The item embedding can be represented by the free embedding $\mathbf{v}_j^L$ generated in the teacher model. Finally, Eq.~\ref{eq:overall_prediction} can be modified as follows: \begin{equation} \begin{split} \hat r_{ij} = \hat{\mathbf{u}}_i(\hat{\mathbf{v}}_j)^T = \mathbf{u}_i^U(\mathbf{v}_j^L)^T = (\sum_{k \in X_{u_i}}\mathbf{e}_k^{L_{S_u}})(\mathbf{v}_j^L)^T. \end{split} \end{equation} \textit{2) Task 2.} In this task, we select the item student model. For the user embedding, we select the user free embedding $\mathbf{u}_i^L$ from the teacher model as the representation. For the item representation, we make full use of its attribute embeddings $\mathbf{f}_l~(l\in{X_{v_j}})$ as the needed embedding. Therefore, Eq.~\ref{eq:overall_prediction} is modified as follows: \begin{equation} \begin{split} \hat r_{ij} = \hat{\mathbf{u}}_i(\hat{\mathbf{v}}_j)^T = \mathbf{u}_i^L(\mathbf{v}_j^I)^T = (\mathbf{u}_i^L)(\sum_{l \in X_{v_j}}\mathbf{f}_l^{L_{S_v}})^T. \end{split} \end{equation} \textit{3) Task 3.} In this task, the user and item free embeddings are not available at the same time. Therefore, we employ both the user student model and the item student model to generate the user and item embeddings from their attribute information.
Specifically, we select the user embedding $\mathbf{u}_i^U$ and item embedding $\mathbf{v}_j^I$, which are derived from their own attributes, and modify Eq.~\ref{eq:overall_prediction} as follows: \begin{equation} \begin{split} \hat r_{ij} = \hat{\mathbf{u}}_i(\hat{\mathbf{v}}_j)^T = \mathbf{u}_i^U(\mathbf{v}_j^I)^T = (\sum_{k \in X_{u_i}}\mathbf{e}_k^{L_{S_u}})(\sum_{l \in X_{v_j}}\mathbf{f}_l^{L_{S_v}})^T. \end{split} \end{equation} \subsection{Model Optimization} Since \emph{PGD}~contains two main components, the optimization also consists of two parts: the \textit{Rating Prediction Loss} for the teacher model, and the \textit{Graph Distillation Loss} for \emph{PGD}. \textbf{Rating Prediction Loss.} For recommender systems based on implicit feedback, BPR, which is based on pair-wise ranking, is the most popular optimization objective. Thus, the objective function can be formulated as follows: \begin{align} L_r = \sum_{u\in U}\sum_{(i,j)\in B_u}-\ln\sigma(\hat{r}_{ui} - \hat{r}_{uj}) + \gamma ||\theta||^2, \end{align} where $\sigma(\cdot)$ is the sigmoid activation function. $B_u=\{(i,j)|r_{ui}=1\!\wedge\!r_{uj}\neq 1\}$ denotes the pairwise training data for user $u$. $\hat{r}_{ui}$ and $\hat{r}_{uj}$ are computed from the free embeddings of the teacher model. $\theta$ represents the user and item free embedding matrices. $\gamma$ is a regularization parameter that restrains the user and item free latent embedding matrices. \textbf{Graph Distillation Loss.} Distillation techniques are employed in \emph{PGD}~to help the student model learn better user and item embeddings, as well as make accurate predictions based on the attribute information, under the guidance of the teacher model. Thus, the learned user embedding $\mathbf{u}_i^L$~(item embedding $\mathbf{v}_j^L$) from the teacher model and $\mathbf{u}_i^U$~($\mathbf{v}_j^I$) from the student model should be similar.
This optimization target can be formulated as follows: \begin{equation} \begin{split} L_u = \sum_{i=0}^{M-1} ||\mathbf{u}_i^L-\mathbf{u}_i^U||^2, \quad L_v = \sum_{j=0}^{N-1} ||\mathbf{v}_j^L-\mathbf{v}_j^I||^2. \end{split} \end{equation} Meanwhile, we intend the student model to predict user preferences correctly. Let {\small$\mathbf{U}$} and {\small$\mathbf{V}$} represent the embedding matrices of users and items in the teacher model, and {\small$\mathbf{U}^U$} and {\small$\mathbf{V}^I$} represent the embedding matrices of users and items in the student model. Then, the prediction results of the student model should be similar to those of the teacher model, which can be formulated as follows: \begin{small} \begin{equation} \begin{split} L_s = ||\mathbf{U}\mathbf{V}^T-\mathbf{U}^U(\mathbf{V}^I)^T||^2. \end{split} \end{equation} \end{small} The graph distillation loss is then formulated as follows: \begin{small} \begin{equation} \label{eq:distillation_eq} \begin{split} L_d = \lambda L_u + \mu L_v + \eta L_s, \end{split} \end{equation} \end{small} where $\lambda, \mu, \eta$ are the weights of the different distillation losses. We can adjust their values to focus our proposed \emph{PGD}~on tackling different sub-tasks of the cold start problem in recommendation. After obtaining the two objective functions, the final optimization objective of our model can be formulated as follows: \begin{align} Loss = L_{r} + L_{d} \end{align} \section{Experiments} \label{s:experiment} In this section, we conduct extensive experiments on three datasets to verify the effectiveness of our proposed \emph{PGD}~for cold start recommendation. We aim to answer the following questions: \begin{itemize} \item Will the attribute information and the utilization method in \emph{PGD}~be useful for solving the cold start problem~(e.g., new users or new items) in recommendation? \item Is the distillation technique helpful for the student model to learn useful knowledge from the teacher model for user or item embeddings?
\item What is the influence of each component in our proposed \emph{PGD}~on the overall performance? \end{itemize} \subsection{Datasets} In this paper, we select three suitable and publicly available datasets to evaluate all the models, i.e., Yelp, XING~\cite{abel2017recsys}, and Amazon-Video Games~\cite{he2016vbpr}. Table~\ref{tab:statistics} reports the statistics of the three datasets. \begin{table}[] \caption{The statistics of the three datasets.}\label{tab:statistics} \vspace{-0.2cm} \scalebox{0.85}{ \begin{tabular}{|l|l|c|c|c|} \hline \multicolumn{2}{|c|}{Dataset} & Yelp & XING & \begin{tabular}[c]{@{}c@{}}Amazon-\\ Video Games\end{tabular} \\ \hline \multicolumn{1}{|c|}{\multirow{4}{*}{Train}} & Old Users & 29,777 & 20,640 & 29,129 \\ \cline{2-5} \multicolumn{1}{|c|}{} & Old Items & 27,737 & 17,793 & 22,547 \\ \cline{2-5} \multicolumn{1}{|c|}{} & Ratings & 159,857 & 133,139 & 172,089 \\ \cline{2-5} \multicolumn{1}{|c|}{} & Density & 0.019\% & 0.036\% & 0.026\% \\ \hline \multicolumn{1}{|c|}{\multirow{3}{*}{Val}} & Old Users & 2,109 & 17,058 & 26,506 \\ \cline{2-5} \multicolumn{1}{|c|}{} & Old Items & 1,812 & 10,357 & 10,189 \\ \cline{2-5} \multicolumn{1}{|c|}{} & Ratings & 2,109 & 20,258 & 29,870 \\ \hline \multirow{3}{*}{Test new user} & New Users & 12,749 & 7,105 & / \\ \cline{2-5} & Old Items & 17,121 & 7,665 & / \\ \cline{2-5} & Ratings & 65,127 & 12,858 & / \\ \hline \multirow{3}{*}{Test new item} & Old Users & 27,067 & 11,013 & 22,027 \\ \cline{2-5} & New Items & 11,975 & 7,598 & 10,170 \\ \cline{2-5} & Ratings & 69,524 & 33,079 & 98,044 \\ \hline \multirow{3}{*}{\begin{tabular}[c]{@{}l@{}}Test new user\\ and new item\end{tabular}} & New Users & 11,662 & 4,618 & / \\ \cline{2-5} & New Items & 8,734 & 4,276 & / \\ \cline{2-5} & Ratings & 30,288 & 7,318 & / \\ \hline \multicolumn{2}{|l|}{User Attributes} & 80 & 108 & / \\ \hline \multicolumn{2}{|l|}{Item Attributes} & 183 & 81 & 76 \\ \hline \end{tabular} } \vspace{-4mm} \end{table} In order to evaluate the
model performance on each of the three sub-tasks of the cold start problem, we manually construct the new users and new items in the test sets~\cite{zhu2020recommendation}. Specifically, we randomly select $30\%$ of the users in the test set. Then, we keep the corresponding items and remove their connections to construct the new user test set for Task 1. Meanwhile, we apply the same operation to generate the new item test set for Task 2. As for Task 3, we collect the interaction records involving both new users and new items as the test set. Then, we split a 10\% validation set from the remaining old users and old items. The details are reported in Table~\ref{tab:statistics}. \subsection{Experimental Setup} \textbf{Evaluation Metrics.} Since the cold start problem can still be treated as a top-K recommendation task, we select two popular ranking metrics to evaluate our model: HR@K and NDCG@K ($K=\{10,20,50\}$). \begin{table*}[] \setlength{\belowcaptionskip}{2pt} \textbf {\caption{HR@K and NDCG@K comparisons for Yelp and Amazon-Video Games. '-' represents unavailable result.
} \label{t:yelp_amazon_result} } \scalebox{0.85}{ \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{Model} & \multicolumn{1}{l|}{\multirow{2}{*}{Metrics}} & \multicolumn{3}{c|}{Yelp(Task1)} & \multicolumn{3}{c|}{Yelp(Task2)} & \multicolumn{3}{c|}{Yelp(Task3)} & \multicolumn{3}{c|}{\begin{tabular}[c]{@{}c@{}}Amazon-Video Games\\ (Task2)\end{tabular}} \\ \cline{3-14} & \multicolumn{1}{l|}{} & @10 & @20 & @50 & @10 & @20 & @50 & @10 & @20 & @50 & @10 & @20 & @50 \\ \hline \multirow{2}{*}{KNN} & HR & 0.01810 & 0.03104 & 0.05655 & 0.01590 & 0.02775 & 0.06126 & - & - & - & 0.001270 & 0.001898 & 0.008407 \\ \cline{2-14} & NDCG & 0.01528 & 0.02067 & 0.02917 & 0.009864 & 0.01370 & 0.02219 & - & - & - & 0.0007551 & 0.0009667 & 0.002447 \\ \hline \multirow{2}{*}{LinMap} & HR & 0.02030 & 0.03220 & 0.05784 & 0.02011 & 0.03436 & 0.06640 & 0.01286 & 0.02480 & 0.05108 & 0.01833 & 0.02491 & 0.03911 \\ \cline{2-14} & NDCG & 0.01724 & 0.02231 & 0.03076 & 0.01277 & 0.01743 & 0.02561 & 0.007353 & 0.01124 & 0.01792 & 0.008335 & 0.009481 & 0.01333 \\ \hline \multirow{2}{*}{xDeepFM} & HR & 0.01984 & 0.03234 & 0.05973 & 0.02024 & 0.03491 & 0.06678 & 0.01310 & 0.02438 & 0.04898 & 0.01847 & 0.02498 & 0.03900 \\ \cline{2-14} & NDCG & 0.01613 & 0.02147 & 0.03054 & 0.01280 & 0.01752 & 0.02564 & 0.007516 & 0.01120 & 0.01772 & 0.008253 & 0.009465 & 0.01295 \\ \hline \multirow{2}{*}{CDL} & HR & 0.01930 & 0.03257 & 0.06041 & 0.01959 & 0.03410 & 0.06536 & 0.01268 & 0.02001 & 0.04211 & 0.02023 & \underline{0.02775} & \underline{0.04192} \\ \cline{2-14} & NDCG & 0.01603 & 0.02161 & 0.03082 & 0.01209 & 0.01673 & 0.02472 & \underline{0.008057} & 0.01049 & 0.01613 & 0.009470 & \underline{0.01112} & \underline{0.01439} \\ \hline \multirow{2}{*}{DropoutNet} & HR & 0.02006 & 0.03278 & 0.06029 & 0.01731 & 0.02821 & 0.05594 & 0.01143 & 0.02141 & 0.04297 & 0.01143 & 0.01612 & 0.02876 \\ \cline{2-14} & NDCG & 0.01675 & 0.02208 & 0.03121 & 0.01052 & 0.01411 & 0.02049 & 0.006913 & 0.009972 & 
0.01547 & 0.005350 & 0.006693 & 0.009857 \\ \hline \multirow{2}{*}{Heater} & HR & \underline{0.02055} & \underline{0.03365} & 0.05880 & \underline{0.02443} & \underline{0.04179} & \underline{0.07974} & 0.01226 & 0.02440 & 0.04915 & 0.02032 & 0.02659 & 0.04101 \\ \cline{2-14} & NDCG & \underline{0.01726} & \underline{0.02271} & 0.03110 & \underline{0.01495} & \underline{0.02059} & \underline{0.03027} & 0.007329 & 0.01131 & \underline{0.01780} & 0.009280 & 0.01031 & 0.01334 \\ \hline \multirow{2}{*}{PinSage} & HR & 0.01985 & 0.03302 & \underline{0.06250} & 0.02080 & 0.03704 & 0.07089 & 0.01173 & 0.02110 & 0.04097 & 0.02030 & 0.02491 & 0.03498 \\ \cline{2-14} & NDCG & 0.01709 & 0.02267 & \underline{0.03254} & 0.01331 & 0.01856 & 0.02722 & 0.007142 & 0.01013 & 0.01523 & 0.008590 & 0.009135 & 0.01157 \\ \hline \multirow{2}{*}{PFD} & HR & 0.02015 & 0.03318 & 0.05837 & 0.02240 & 0.04008 & 0.07955 & 0.01152 & 0.02427 & 0.04766 & \underline{0.02187} & 0.02745 & 0.04065 \\ \cline{2-14} & NDCG & 0.01716 & 0.02247 & 0.03086 & 0.01376 & 0.01948 & 0.02952 & 0.007248 & \underline{0.01143} & 0.01758 & \underline{0.009953} & 0.01086 & 0.01388 \\ \hline \multirow{2}{*}{Student} & HR & 0.01886 & 0.03133 & 0.05944 & 0.02290 & 0.03984 & 0.07625 & \underline{0.01317} & \underline{0.02540} & \underline{0.05109} & 0.01812 & 0.02328 & 0.03457 \\ \cline{2-14} & NDCG & 0.01612 & 0.02140 & 0.03074 & 0.01419 & 0.01968 & 0.02897 & 0.007294 & 0.01122 & 0.017952 & 0.008275 & 0.009040 & 0.01205 \\ \hline \multirow{2}{*}{\textbf{\emph{PGD}}} & \textbf{HR↑} & \textbf{0.02077} & \textbf{0.03404} & \textbf{0.06426} & \textbf{0.02717} & \textbf{0.04712} & \textbf{0.08856} & \textbf{0.01443} & \textbf{0.02589} & \textbf{0.05117} & \textbf{0.02240} & \textbf{0.02953} & \textbf{0.04507} \\ \cline{2-14} & \textbf{NDCG↑} & \textbf{0.01767} & \textbf{0.02323} & \textbf{0.03324} & \textbf{0.01659} & \textbf{0.02306} & \textbf{0.03366} & \textbf{0.008653} & \textbf{0.01240} & \textbf{0.01890} & 
\textbf{0.01008} & \textbf{0.01164} & \textbf{0.01601} \\ \hline \end{tabular}} \end{table*} \begin{table*}[] \setlength{\belowcaptionskip}{2pt} \textbf {\caption{HR@K and NDCG@K comparisons for XING. '-' represents unavailable result. KNN cannot work for task3.} \label{t:xing_result} } \scalebox{0.95}{ \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{Model} & \multicolumn{1}{l|}{\multirow{2}{*}{Metrics}} & \multicolumn{3}{c|}{XING(Task1)} & \multicolumn{3}{c|}{XING(Task2)} & \multicolumn{3}{c|}{XING(Task3)} \\ \cline{3-11} & \multicolumn{1}{l|}{} & @10 & @20 & @50 & @10 & @20 & @50 & @10 & @20 & @50 \\ \hline \multirow{2}{*}{KNN} & HR & 0.002977 & 0.005945 & 0.01249 & 0.001345 & 0.002246 & 0.005768 & - & - & - \\ \cline{2-11} & NDCG & 0.001586 & 0.002436 & 0.003920 & 0.0006946 & 0.0009711 & 0.001913 & - & - & - \\ \hline \multirow{2}{*}{LinMap} & HR & 0.007926 & 0.01483 & 0.02628 & 0.002039 & 0.003692 & 0.007225 & 0.001552 & 0.003338 & 0.007983 \\ \cline{2-11} & NDCG & 0.004242 & 0.006225 & 0.008781 & 0.001047 & 0.001559 & 0.002492 & 0.0007650 & 0.001291 & 0.002255 \\ \hline \multirow{2}{*}{xDeepFM} & HR & 0.007733 & 0.01530 & 0.02752 & 0.001991 & \underline{0.003892} & 0.007474 & 0.002526 & 0.005242 & \underline{0.009932} \\ \cline{2-11} & NDCG & 0.004240 & 0.006289 & 0.009048 & 0.0009840 & 0.001526 & 0.002450 & 0.0009600 & 0.001765 & \underline{0.002794} \\ \hline \multirow{2}{*}{CDL} & HR & 0.007546 & 0.01469 & 0.02815 & 0.001521 & 0.003213 & 0.006708 & \underline{0.002992} & 0.004580 & 0.007668 \\ \cline{2-11} & NDCG & 0.004250 & 0.006255 & 0.009263 & 0.0008030 & 0.001357 & 0.002334 & \underline{0.001479} & 0.001854 & 0.002444 \\ \hline \multirow{2}{*}{DropoutNet} & HR & 0.006997 & 0.01278 & 0.02345 & 0.001404 & 0.003784 & 0.007138 & 0.001805 & 0.003901 & 0.007610 \\ \cline{2-11} & NDCG & 0.003311 & 0.004959 & 0.007376 & 0.0007770 & 0.001531 & 0.002458 & 0.0008680 & 0.001332 & 0.002222 \\ \hline \multirow{2}{*}{Heater} & HR & 0.006934 & 0.01524 
& 0.02717 & 0.001766 & 0.003633 & 0.007661 & 0.002635 & 0.004788 & 0.007963 \\ \cline{2-11} & NDCG & 0.003354 & 0.005713 & 0.008451 & 0.001061 & \underline{0.001667} & \underline{0.002722} & 0.001429 & 0.001704 & 0.002415 \\ \hline \multirow{2}{*}{PinSage} & HR & 0.004862 & 0.01119 & 0.02193 & 0.001646 & 0.003693 & \underline{0.007953} & 0.001002 & 0.002315 & 0.003741 \\ \cline{2-11} & NDCG & 0.002680 & 0.004436 & 0.006818 & 0.0009460 & 0.001610 & 0.002705 & 0.0004690 & 0.001046 & 0.001358 \\ \hline \multirow{2}{*}{PFD} & HR & \underline{0.009043} & 0.01552 & 0.02855 & \underline{0.002331} & 0.003877 & 0.007373 & 0.002833 & \underline{0.005251} & 0.008742 \\ \cline{2-11} & NDCG & \underline{0.005273} & \underline{0.007073} & 0.01005 & \underline{0.001151} & 0.001666 & 0.002578 & 0.001300 & \underline{0.001942} & 0.002695 \\ \hline \multirow{2}{*}{Student} & HR & 0.008985 & \underline{0.01725} & \underline{0.03114} & 0.001998 & 0.003734 & 0.007789 & 0.001777 & 0.004040 & 0.006881 \\ \cline{2-11} & NDCG & 0.004734 & 0.007033 & \underline{0.01015} & 0.0009520 & 0.001460 & 0.002506 & 0.0008830 & 0.001526 & 0.002144 \\ \hline \multirow{2}{*}{\textbf{\emph{PGD}}} & \textbf{HR↑} & \textbf{0.01149} & \textbf{0.02204} & \textbf{0.04060} & \textbf{0.002539} & \textbf{0.004216} & \textbf{0.008276} & \textbf{0.003999} & \textbf{0.006727} & \textbf{0.01018} \\ \cline{2-11} & \textbf{NDCG↑} & \textbf{0.006522} & \textbf{0.009160} & \textbf{0.01330} & \textbf{0.001322} & \textbf{0.001758} & \textbf{0.002780} & \textbf{0.001694} & \textbf{0.002222} & \textbf{0.002886} \\ \hline \end{tabular}} \end{table*} \begin{table*}[] \setlength{\belowcaptionskip}{2pt} \caption{HR@20 and NDCG@20 results of our model with different propagation depth $L$ on Yelp and Amazon-Video Games (we fix the same number of GCN layers $L$ for the student and teacher models).} \label{t:yelp_amazon_gcn_layer} \scalebox{0.90}{ \begin{tabular}{|c|r|r|r|r|r|r|r|r|} \hline \multirow{2}{*}{Num.
of GCN Layers} & \multicolumn{2}{c|}{Yelp(Task1)} & \multicolumn{2}{c|}{Yelp(Task2)} & \multicolumn{2}{c|}{Yelp(Task3)} & \multicolumn{2}{c|}{Amazon Video Games} \\ \cline{2-9} & \multicolumn{1}{c|}{HR@20} & \multicolumn{1}{c|}{NDCG@20} & \multicolumn{1}{c|}{HR@20} & \multicolumn{1}{c|}{NDCG@20} & \multicolumn{1}{c|}{HR@20} & \multicolumn{1}{c|}{NDCG@20} & \multicolumn{1}{c|}{HR@20} & \multicolumn{1}{c|}{NDCG@20} \\ \hline $L=1$ & 0.03365 & 0.02294 & 0.04541 & 0.02239 & 0.02467 & 0.01142 & 0.02946 & 0.01125 \\ \hline $L=2$ & \textbf{0.03404} & \textbf{0.02323} & 0.04606 & 0.02255 & \textbf{0.02589} & \textbf{0.01240} & \textbf{0.02953} & \textbf{0.01164} \\ \hline $L=3$ & 0.03355 & 0.02186 & \textbf{0.04712} & \textbf{0.02306} & 0.02577 & 0.01198 & 0.02801 & 0.01124 \\ \hline $L=4$ & 0.03225 & 0.02102 & 0.04693 & 0.02298 & 0.02533 & 0.01192 & 0.02707 & 0.01104 \\ \hline \end{tabular} } \end{table*} \begin{table*}[] \setlength{\belowcaptionskip}{2pt} \caption{HR@20 and NDCG@20 results of our model with different propagation depth $L$ on XING (we fix the same number of GCN layers $L$ for the student and teacher models). \label{t:xing_gcn_layer}} \scalebox{1}{ \begin{tabular}{|c|r|r|r|r|r|r|} \hline & \multicolumn{2}{c|}{XING(Task1)} & \multicolumn{2}{c|}{XING(Task2)} & \multicolumn{2}{c|}{XING(Task3)} \\ \cline{2-7} \multirow{-2}{*}{Num.
of GCN Layers} & \multicolumn{1}{c|}{HR@20} & \multicolumn{1}{c|}{NDCG@20} & \multicolumn{1}{c|}{HR@20} & \multicolumn{1}{c|}{NDCG@20} & \multicolumn{1}{c|}{HR@20} & \multicolumn{1}{c|}{NDCG@20} \\ \hline $L=1$ & 0.02071 & 0.008274 & 0.004003 & 0.001754 & 0.006107 & 0.001962 \\ \hline $L=2$ & 0.02107 & 0.009037 & \textbf{0.004216} & \textbf{0.001758} & \textbf{0.006727} & \textbf{0.002222} \\ \hline $L=3$ & \textbf{0.02204} & \textbf{0.009160} & 0.003992 & 0.001752 & 0.006439 & 0.002100 \\ \hline $L=4$ & 0.02176 & 0.008947 & 0.003907 & 0.001672 & 0.006359 & 0.002054 \\ \hline \end{tabular}} \end{table*} \begin{figure*} \vspace{-0.2cm} \vspace{-0.2cm} \subfigure[Varying \(\lambda\) in Yelp (Task 1)]{\includegraphics[width=52mm]{task1.pdf}} \subfigure[Varying \(\mu\) in Yelp (Task 2)]{\includegraphics[width=52mm]{task2.pdf}} \subfigure[Varying \(\eta\) in Yelp (Task 3)]{\includegraphics[width=52mm]{task3.pdf}} \vspace{-0.2cm} \caption{NDCG@20 results of our model with different hyper-parameters.} \label{fig:hyper_parameters_results} \vspace{-0.2cm} \end{figure*} \begin{table}[] \setlength{\belowcaptionskip}{2pt} \caption{HR@20 and NDCG@20 results of output distillation and multi-layer distillation. 
\label{t:output_multi_layer}} \scalebox{0.95}{ \begin{tabular}{|c|c|c|c|} \hline \multicolumn{1}{|l|}{} & Metrics & 2-Layer Output & 2-Layer Multi-Layer \\ \hline \multirow{2}{*}{Yelp(Task1)} & HR@20 & 0.03404 & \textbf{0.03415} \\ \cline{2-4} & NDCG@20 & 0.02323 & 0.02152 \\ \hline \multirow{2}{*}{Yelp(Task2)} & HR@20 & 0.04606 & \textbf{0.04627} \\ \cline{2-4} & NDCG@20 & 0.02255 & \textbf{0.02302} \\ \hline \multirow{2}{*}{Yelp(Task3)} & HR@20 & 0.02589 & \textbf{0.02638} \\ \cline{2-4} & NDCG@20 & 0.01240 & 0.01205 \\ \hline \multirow{2}{*}{Amazon} & HR@20 & 0.02953 & 0.02812 \\ \cline{2-4} & NDCG@20 & 0.01164 & 0.01113 \\ \hline \end{tabular} } \end{table*} \textbf{Parameter Settings.} First of all, the dimensions of the collaborative filtering embeddings and the attribute representations are all set to $64$. The batch size is set to $2,048$. The depth $L$ of the GCN is selected from $\{1,2,3,4\}$, and we also conduct experiments to verify the influence of different depths. During training, Adam is employed as the optimizer with a learning rate of $0.001$. A Gaussian distribution with a mean of 0 and a variance of 0.01 is employed to initialize the embedding matrices. At each iteration of the training process, we randomly sample one negative candidate to compose a training triple. In the testing phase, to avoid the unfairness caused by random negative sampling, we evaluate all models against the full set of negative samples. As shown in Eq.~\ref{eq:distillation_eq}, there are three hyper-parameters $\lambda, \mu$ and $\eta$. We tune the three hyper-parameters on the three different tasks respectively. The combination for Yelp is $\{\lambda=100, \mu=1, \eta=0.01\}$, for Amazon-Video Games it is $\{\mu=10\}$, and for XING it is $\{\lambda=1, \mu=100, \eta=0.001\}$. \subsection{Overall Results} Tables~\ref{t:yelp_amazon_result} and \ref{t:xing_result} report the overall results on the three datasets.
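As a reference for how these metrics are computed: with a single positive item per test case, as in our all-negative-sample evaluation, HR@K and NDCG@K reduce to simple functions of the positive item's rank. A minimal NumPy sketch (scores and names are hypothetical):

```python
import numpy as np

def rank_of_positive(scores, pos_idx):
    """1-based rank of the positive item among all candidates."""
    order = np.argsort(-scores)                  # indices sorted by descending score
    return int(np.where(order == pos_idx)[0][0]) + 1

def hr_at_k(rank, k):
    return 1.0 if rank <= k else 0.0

def ndcg_at_k(rank, k):
    # single relevant item: IDCG = 1, so NDCG = 1 / log2(rank + 1) when hit
    return 1.0 / np.log2(rank + 1) if rank <= k else 0.0

scores = np.array([0.2, 0.9, 0.5, 0.1])          # positive item at index 2 ranks 2nd
r = rank_of_positive(scores, 2)
print(hr_at_k(r, 10), ndcg_at_k(r, 10))          # 1.0 and 1/log2(3) ≈ 0.6309
```

Reported values average these per-case scores over all test interactions.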
We can observe that \emph{PGD}~outperforms all baselines across all the datasets on all evaluation metrics. Specifically, \emph{PGD}~achieves average {$2.03\%$, $11.67\%$, $6.01\%$} improvements across the three sub-tasks on Yelp, average {$27.83\%$, $7.28\%$, $16.06\%$} improvements on XING, and an average {$5.6\%$} improvement on Amazon-Video Games, respectively. This phenomenon demonstrates the effectiveness of introducing attribute information into the graph as nodes and learning attribute embeddings and entity embeddings simultaneously under the graph constraint. Moreover, \emph{PGD}~makes full use of distillation techniques to narrow the gap between attribute embeddings and CF-based embeddings and to help the student model learn entity embeddings from the teacher model with the attribute information as input. Meanwhile, \emph{PGD}~tackles all three sub-tasks in a unified framework. To this end, we also design a student baseline that addresses the new user or new item problem independently. Specifically, for Task 1, we only select the user student model to learn the user attribute embeddings and item CF-based embeddings. For Task 2, we perform similar operations. As for Task 3, we select the user attribute embeddings and item attribute embeddings from the two student models. The corresponding results are illustrated in Tables~\ref{t:yelp_amazon_result} and \ref{t:xing_result}. We can observe that \emph{PGD}~still outperforms the student baselines, indicating the superiority and necessity of distilling and modeling user preference in a unified way. \subsection{The Impact of Different Propagation Depths $L$ and Detailed Model Analysis} As introduced in Section~\ref{s:model}, the number of GCN layers has a significant impact on the model performance. Therefore, we conduct additional experiments to verify its impact. The corresponding results are illustrated in Tables~\ref{t:yelp_amazon_gcn_layer} and \ref{t:xing_gcn_layer}.
From the results, we can observe that as the number of GCN layers in the teacher model increases, the overall performance first rises and then falls. When the number of GCN layers is 2 or 3, \emph{PGD}~achieves the best performance. The possible reason is that with more GCN layers, each node can aggregate information from more neighbors, which not only alleviates the data sparsity problem but also gathers more useful information for node embedding learning. On the other hand, too many GCN layers in the teacher model make the student model hard to follow and cause the node-feature over-smoothing problem. Therefore, we select $2$ or $3$ as the number of GCN layers in the teacher model, depending on the task and dataset. The above analysis shows that \emph{PGD}~can distill knowledge at the output layer. Intuitively, applying distillation operations to each layer seems likely to yield better performance. We conduct experiments to compare the effects of the two distillation methods in Table~\ref{t:output_multi_layer}. With 2 layers, multi-layer distillation slightly improves performance on Task 2 of the Yelp dataset. However, it brings no general improvement on the other tasks, though it remains competitive against the baselines. We speculate the reason is that there is still a gap between intermediate-layer embedding distillation and final-output embedding distillation. Our model is not a direct node-to-node distillation between the teacher graph and the student graph, and the final entity embedding of the student model fuses the attribute node information. Multi-layer distillation relies only on a weighted-sum operation, which does not capture well the positive impact of first-layer distillation on the final output distillation. \subsection{Ablation Study} In the previous parts, we have illustrated the superiority of our proposed \emph{PGD}.
However, the student model distills knowledge from the teacher model under three constraints~(i.e., the user embedding constraint, the item embedding constraint, and the prediction constraint), and it is still unclear which component plays a more important role in user preference modeling. To this end, we conduct an ablation study on the parameters $\{\lambda,\mu,\eta\}$ to verify the impact of each component with NDCG@20. When verifying the effectiveness of one constraint, we fix the other two parameters and modify the corresponding weight to obtain the results. Figure~\ref{fig:hyper_parameters_results} reports the corresponding results, from which we can make the following observations. As the weight of each component increases, model performance first increases and then decreases. The distillation loss constraint has a negative impact on the teacher model when the distillation loss is weighted too heavily. Moreover, when \emph{PGD}\ achieves the best performance, $\lambda$ and $\mu$ have similar values. Thus, we can conclude that the user embedding constraint and the item embedding constraint have similar impacts on model performance. Furthermore, we can observe that the best value for $\eta$ is very small. Since this is a top-K recommendation task, the prediction constraint may have a big impact on the final performance. We also observe that the boosting effect of these parameters differs across tasks and datasets. For instance, the metrics of Task 1 on Yelp improve by $2.03\%$, but those of Task 2 on Yelp improve by $11.67\%$. We speculate that the possible reason is that there are fewer types of user attributes than item attributes. Thus, user attributes cannot provide as much information as item attributes do, and user embedding distillation may not be as effective as item embedding distillation. As a result, the item embedding constraint has a bigger impact on model performance.
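The three weighted constraints studied in this ablation can be sketched as a single training objective in plain Python. Note that Eq.~\ref{eq:distillation_eq} is not reproduced in this excerpt, so the exact form of each term, the squared-error distance, and all function names here are illustrative assumptions:

```python
def mse(a, b):
    # mean squared error between two equal-length embedding vectors
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def distillation_loss(rec_loss, u_teacher, u_student, i_teacher, i_student,
                      pred_teacher, pred_student, lam=1.0, mu=1.0, eta=0.01):
    """Combine a base recommendation loss with the three constraints
    discussed above: user embedding, item embedding, and prediction."""
    return (rec_loss
            + lam * mse(u_teacher, u_student)          # user constraint
            + mu * mse(i_teacher, i_student)           # item constraint
            + eta * mse(pred_teacher, pred_student))   # prediction constraint

# toy 2-d embeddings, with the Yelp weights from the text
loss = distillation_loss(0.5, [1.0, 0.0], [0.9, 0.1],
                         [0.2, 0.8], [0.2, 0.7],
                         [0.6], [0.5], lam=100, mu=1, eta=0.01)
```

Fixing two of $\{\lambda,\mu,\eta\}$ and sweeping the third, as done in the ablation, amounts to re-running this objective with one weight varied.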
\section{CONCLUSION} In this paper, we argued that attribute information is not fully explored in cold start recommendations. Thus, we proposed a novel \emph{privileged graph distillation model~(PGD)}~to constrain the attribute embedding and CF-based embedding learning in a graph manner and to leverage the distillation technique to tackle cold start recommendation. Specifically, we first introduce attributes as nodes into the user-item graph and learn attribute embeddings and CF-based embeddings simultaneously. Then, we employ the distillation technique to guide \emph{PGD}~to learn the transformation between the CF-based embedding and the attribute embedding. Thus, the student model can learn effective user (item) embeddings based on attribute information from the teacher model. Extensive experiments on three public datasets show the performance improvement of \emph{PGD}~over state-of-the-art baselines. In the future, we plan to explore different distillation architectures to better learn attribute node embeddings. \vspace{-0.1cm} \section*{Acknowledgements} This work was supported in part by grants from the National Natural Science Foundation of China (Grant No. U1936219, U19A2079, 62006066, 61932009), the Young Elite Scientists Sponsorship Program by CAST and ISZS, CCF-Tencent RAGR20200121, and the Open Project Program of the National Laboratory of Pattern Recognition (NLPR). \bibliographystyle{ACM-Reference-Format}
\section{Introduction} Recent progress in computer hardware, together with the democratization of the means to perform intensive calculations, has enabled researchers to work with models that have millions of free parameters. Convolutional neural networks (CNN) have already demonstrated their success in image classification, object detection, scene understanding, etc. For almost any computer vision problem, CNN-based approaches outperform other techniques and, in many cases, even human experts in the corresponding field. Now almost all computer vision applications try to incorporate deep learning techniques to improve traditional approaches. They influence our everyday lives, and the potential uses of these technologies look truly impressive. Reliable image segmentation is one of the important tasks in computer vision. This problem is especially important in medical imaging, where it can potentially improve our diagnostic abilities, and in scene understanding for making safe self-driving vehicles. Dense image segmentation essentially involves dividing images into meaningful regions, which can be viewed as a pixel-level classification task. The most straightforward (and slow) approach to such a problem is manual segmentation of the images. However, this is a time-consuming process that is prone to mistakes and inconsistencies, which are unavoidable when human data curators are involved. Automating the process provides a systematic way of segmenting an image on the fly as soon as the image is acquired. This process requires providing the accuracy necessary to be useful in a production environment. In the last years, different methods have been proposed to tackle the problem of creating CNNs that can produce a segmentation map for an entire input image in a single forward pass. One of the most successful state-of-the-art deep learning methods is based on Fully Convolutional Networks (FCN) \cite{fcn_2015}.
The main idea of this approach is to use a CNN as a powerful feature extractor by replacing the fully connected layers with convolutional ones to output spatial feature maps instead of classification scores. Those maps are further upsampled to produce dense pixel-wise output. This method allows training a CNN in an end-to-end manner for segmentation with input images of arbitrary sizes. Moreover, this approach achieved an improvement in segmentation accuracy over common methods on standard datasets like PASCAL VOC \cite{pascal_voc_2015}. This method has been further improved and is now known as the U-Net neural network \cite{ronneberger_2015}. The U-Net architecture uses skip connections to combine low-level feature maps with higher-level ones, which enables precise pixel-level localization. A large number of feature channels in the upsampling part allows propagating context information to higher-resolution layers. This type of network architecture has proven itself in binary image segmentation competitions, such as satellite image analysis \cite{iglovikov_2017}, medical image analysis \cite{rakhlin_2017, kalinin_2017}, and others \cite{carvana_2017}. In this paper, we show how the performance of U-Net can be easily improved by using pre-trained weights. As an example, we show the application of this approach to the Aerial Image Labeling Dataset \cite{aerialimagelabeling_2017}, which contains high-resolution aerospace images of several cities. Each pixel of the images is labeled as belonging to either the "building" or "not-building" class. Another example of the successful application of such an architecture and initialization scheme is the Kaggle Carvana image segmentation competition \cite{carvana_2017}, where one of the authors used it as a part of the winning (1st out of 735 teams) solution.
\begin{figure*} \includegraphics[width=\textwidth,height=12cm]{unet111.pdf} \caption{Encoder-decoder neural network architecture, also known as U-Net, where a VGG11 neural network without fully connected layers serves as its encoder. Each blue rectangular block represents a multi-channel feature map passing through a series of transformations. The height of each block shows the relative map size (in pixels), while the width is proportional to the number of channels (the number is explicitly written next to the corresponding block). The number of channels increases stage by stage in the left encoding part and decreases stage by stage in the right decoding part. The arrows on top show the transfer of information from each encoding layer and its concatenation to the corresponding decoding layer.} \label{fig::unetvgg11} \end{figure*} \section{Network Architecture} In general, a U-Net architecture consists of a contracting path to capture context and of a symmetrically expanding path that enables precise localization (see, for example, Fig. \ref{fig::unetvgg11}). The contracting path follows the typical architecture of a convolutional network with alternating convolution and pooling operations and progressively downsamples feature maps, increasing the number of feature maps per layer at the same time. Every step in the expansive path consists of an upsampling of the feature map followed by a convolution. Hence, the expansive branch increases the resolution of the output. In order to localize upsampled features, the expansive path combines them with high-resolution features from the contracting path via skip connections \cite{ronneberger_2015}. The output of the model is a pixel-by-pixel mask that shows the class of each pixel. This architecture has proved itself very useful for segmentation problems with limited amounts of data, e.g. see \cite{iglovikov_2017}. U-Net is capable of learning from a relatively small training set.
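The resolution bookkeeping along the contracting path can be sketched in a few lines of Python (illustrative; the function name is ours). With five 2x poolings, each halving the spatial side, an input side must be divisible by $2^5 = 32$ for the decoder to restore the original resolution exactly:

```python
def encoder_sizes(side, n_pool=5):
    """Spatial side of the feature map after each 2x max-pooling
    in a contracting path with n_pool pooling operations."""
    sizes = [side]
    for _ in range(n_pool):
        side //= 2              # each max-pooling halves the feature map
        sizes.append(side)
    return sizes

sizes = encoder_sizes(768)      # 768 is divisible by 2**5 = 32
```

The symmetric expanding path simply reverses this sequence, doubling the side at every upsampling step.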
In most cases, data sets for image segmentation consist of at most thousands of images, since manual preparation of the masks is a very costly procedure. Typically, U-Net is trained from scratch starting with randomly initialized weights. It is well known that, to train a network without over-fitting, the data set should be relatively large, on the order of millions of images. Networks that are trained on the ImageNet \cite{russakovsky_2014} data set are widely used as a source of initialization for network weights in other tasks. In this way, the learning procedure can be performed only for the several non-pre-trained layers of the network (sometimes only for the last layer) to account for the features of the new data set. As an encoder in our U-Net network, we use a relatively simple CNN of the VGG family \cite{vgg_2014} that consists of 11 sequential layers and is known as VGG11, see Fig. \ref{fig::vgg11}. VGG11 contains seven convolutional layers, each followed by a ReLU activation function, and five max-pooling operations, each reducing the feature map by a factor of $2$. All convolutional layers have $3\times3$ kernels, and the number of channels is given in Fig. \ref{fig::vgg11}. The first convolutional layer produces 64 channels, and then, as the network deepens, the number of channels doubles after each max-pooling operation until it reaches 512. In the following layers, the number of channels does not change. \begin{figure}[ht] \centering \includegraphics[width=4cm]{vgg11.pdf} \caption{VGG11 network architecture. In this picture, each convolutional layer is followed by a ReLU activation function. The number in each box represents the number of channels in the corresponding feature map.} \label{fig::vgg11} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=9cm]{jaccard.png} \caption{Jaccard index as a function of the training epoch for three U-Net models with different weight initialization.
The blue line shows a model with randomly initialized weights, the orange line shows a model whose encoder was initialized with a VGG11 network pre-trained on ImageNet, and the green line shows a model in which the entire network was pre-trained on the Carvana data set.} \label{fig::jaccard} \end{figure} To construct an encoder, we remove the fully connected layers and replace them with a single convolutional layer of 512 channels that serves as a bottleneck central part of the network, separating the encoder from the decoder. To construct the decoder, we use transposed convolution layers that double the size of a feature map while reducing the number of channels by half. The output of a transposed convolution is then concatenated with the output of the corresponding part of the encoder. The resulting feature map is processed by a convolution operation to keep the number of channels the same as in the symmetric encoder term. This upsampling procedure is repeated 5 times to pair up with the 5 max-poolings, as shown in Fig. \ref{fig::unetvgg11}. Technically, a fully convolutional network can take an input of any size, but because we have 5 max-pooling layers, each downsampling an image two times, only images with a side divisible by 32 ($2^5$) can be used as an input to the current network implementation. \begin{figure*} \includegraphics[width=\textwidth,height=12cm]{chicago.pdf} \caption{Binary masks in which green pixels indicate class membership (buildings). Image A) shows an original image with the superimposed ground truth mask; images B) to D) show predictions of models initialized with different schemes and trained for 100 epochs. The network in image B) had randomly initialized weights. The model in image C) used randomly initialized decoder weights and encoder weights initialized with VGG11, pre-trained on ImageNet.
The model in image D) used weights pre-trained on the Carvana data set.} \label{fig::chicago} \end{figure*} \section{Results} We applied our model to the Inria Aerial Image Labeling Dataset \cite{aerialimagelabeling_2017}. This dataset consists of 180 aerial images of urban settlements in Europe and the United States, labeled into building and not-building classes. Every image in the data set is RGB and has a $5000 \times 5000$ pixel resolution, where each pixel corresponds to a $30 \times 30$ cm$^2$ patch of the Earth's surface. We used 30 images (5 from each of the 6 cities in the train set) for validation, as suggested in \cite{building_footprints_2017} (valid. IoU $\simeq$ 0.647) and \cite{inria_label_2017} (best valid. IoU $\simeq$ 0.73), and trained the network on the remaining 150 images for 100 epochs. Random crops of $768 \times 768$ were used for training and central crops of $1440 \times 1440$ for validation. We used Adam with a learning rate of $0.001$ as the optimization algorithm \cite{adam_2014}. We chose the Jaccard index (Intersection over Union) as the evaluation metric. It can be interpreted as a similarity measure between a finite number of sets. For two sets $A$ and $B$, the intersection over union is defined as follows: \begin{equation} \label{jaccard_iou} J(A, B) = \frac{|A\cap B|}{|A\cup B|} = \frac{|A\cap B|}{|A|+|B|-|A\cap B|} \end{equation} where the normalization condition takes place: $$ 0 \le J(A, B) \le 1 $$ Every image consists of pixels. To adapt the last expression for discrete objects, we can write it in the following way \begin{equation} J=\frac{1}{n}\sum\limits_{i=1}^n\left(\frac{y_i\hat{y}_i}{y_{i}+\hat{y}_i-y_i\hat{y}_i}\right) \end{equation} where $y_i$ is the binary value (label) of the corresponding pixel $i$ and $\hat{y}_i$ is the predicted probability for the pixel.
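The discrete Jaccard index above translates directly into a few lines of plain Python (a sketch; the function name is ours):

```python
def jaccard(y_true, y_pred):
    # J = (1/n) * sum_i  y_i * p_i / (y_i + p_i - y_i * p_i)
    # y_true: binary labels, y_pred: predicted probabilities in [0, 1]
    terms = []
    for t, p in zip(y_true, y_pred):
        denom = t + p - t * p
        terms.append(t * p / denom if denom > 0 else 0.0)
    return sum(terms) / len(terms)

score = jaccard([1, 0, 1, 1], [0.9, 0.1, 0.8, 0.4])
```

Note that the per-pixel term is differentiable in $\hat{y}_i$, which is what allows $J$ to be folded into the training loss below.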
Since we can consider the image segmentation task as a pixel classification problem, we also use the common loss function for binary classification tasks, binary cross entropy, defined as: \begin{equation} H=-\frac{1}{n}\sum\limits_{i=1}^n(y_i\log \hat{y}_i+(1-y_i)\log (1-\hat{y}_i)) \end{equation} Joining these expressions, we obtain a generalized loss function, namely, \begin{equation} \label{free_en} L=H-\log J \end{equation} Therefore, by minimizing this loss function, we simultaneously maximize the predicted probabilities for correctly labeled pixels and maximize the intersection $J$ between masks and corresponding predictions. For more details, see \cite{iglovikov_2017}. At the output of a given neural network, we obtain an image where each pixel value corresponds to the probability of belonging to the area of interest. The size of the output image coincides with that of the input image. In order to obtain binary pixel values, we choose a threshold of 0.3. This number can be found using the validation data set, and it is fairly universal for our generalized loss function across many different image data sets. For a different loss function, this number is different and should be found independently. All pixel values below the specified threshold are set to zero, while all values above the threshold are set to one. Then, multiplying every pixel of an output image by 255, we can obtain a black-and-white predicted mask. In our experiment, we test 3 U-Nets with the same architecture as shown in Fig. \ref{fig::unetvgg11}, differing only in the way of weight initialization. For the basic model, we use a network with weights initialized by the LeCun uniform initializer. In this initializer, samples are drawn from a uniform distribution within $[-L, L]$, where $L=\sqrt{1/f_{in}}$ and $f_{in}$ is the number of input units in the weight tensor. This method is implemented in PyTorch \cite{pytorch} as the default method of weight initialization in convolutional layers.
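The LeCun uniform initializer just described can be sketched in a few lines (a simplified version that draws a flat list of weights; the function name is ours):

```python
import math
import random

random.seed(0)

def lecun_uniform(fan_in, n):
    # sample n weights from U[-L, L] with L = sqrt(1 / fan_in),
    # where fan_in is the number of input units in the weight tensor
    limit = math.sqrt(1.0 / fan_in)
    return [random.uniform(-limit, limit) for _ in range(n)]

w = lecun_uniform(fan_in=64, n=5)   # e.g. a layer with 64 input units
```

For a convolutional layer, `fan_in` would be the number of input channels times the kernel area.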
Next, we utilize the same architecture with a VGG11 encoder pre-trained on ImageNet, while all layers in the decoder are initialized by the LeCun uniform initializer. Then, as a final example, we use a network with weights pre-trained on the Carvana dataset \cite{carvana_2017} (both encoder and decoder). Therefore, after 100 epochs, we obtain the following results for the validation subset: 1) LeCun uniform initializer: IoU = 0.593; 2) encoder pre-trained on ImageNet: IoU = 0.686; 3) fully pre-trained U-Net on Carvana: IoU = 0.687. Validation learning curves in Fig. \ref{fig::jaccard} show the benefits of our approach. First of all, pre-trained models converge much faster to their steady value in comparison to the non-pre-trained network. Moreover, the steady-state value seems higher for the pre-trained models. Ground truth, as well as the three masks predicted by these three models, are superimposed on an original image in Fig. \ref{fig::chicago}. One can easily notice the difference in prediction quality after 100 epochs. Our results for the Inria Aerial Image Labeling Dataset can be further improved using hyper-parameter optimization techniques or standard computer vision methods applied during pre- and post-processing. \section{Conclusion} In this paper, we show how the performance of U-Net can be improved using the technique known as fine-tuning to initialize the weights of the encoder of the network. This kind of neural network is widely used for image segmentation tasks and shows state-of-the-art results in many binary image segmentation competitions. Fine-tuning is already widely used for image classification tasks, but, to our knowledge, has not been applied to U-Net-type architectures. For image segmentation problems, fine-tuning should be considered even more natural because it is problematic to collect a large volume of training data (in particular for medical images) and to label it accurately.
Furthermore, pre-trained networks substantially reduce training time, which also helps to prevent over-fitting. Our approach can be further improved by considering more advanced pre-trained encoders, such as VGG16 \cite{vgg_2014} or any pre-trained network from the ResNet family \cite{resnet_2015}. With these improved encoders, the decoders can be kept as simple as the ones we use. Our code is available as an open source project under the MIT license and can be found at \url{https://github.com/ternaus/TernausNet}. \section*{Acknowledgment} The authors would like to thank the Open Data Science community \cite{ods} for many valuable discussions and educational help in the growing field of machine/deep learning. The authors also express their sincere gratitude to Alexander Buslaev, who originally suggested using a pre-trained VGG network as an encoder in a U-Net network. \ifCLASSOPTIONcaptionsoff \newpage \fi
\section{INTRODUCTION} \label{sec:introduction} The problem of tracking and monitoring targets using position-fixed sensors is relevant to a variety of applications, including monitoring of moving targets using cameras \cite{}, tracking anomalies in manufacturing plants \cite{CullerOverviewSensorNetworks04}, and tracking of endangered species \cite{LuRandomizedHybridSystemRRT10,JuanWildlifeTrackingZebraNet02,LuSPIE11}. The position-fixed sensor is deployed to measure targets based on limited information that only becomes available when a target enters the sensor's field-of-view (FOV) or visibility region \cite{LuInformationPotential14}. The sensor's FOV is defined as a compact subset of the region of interest in which the sensor can obtain measurements from the targets. In many such sensor applications, filtering techniques are often required to estimate unknown variables of interest, for example, the target number and target state. When the noise in the measurement model is additive Gaussian, the target state can be estimated from a frequent observation sequence using a Kalman filter \cite{WelchIntroductionKalmanFilter}. This approach is well suited to long-range high-accuracy sensors, such as radars, and to moving targets with a known dynamical model and initial conditions. However, most of these underlying assumptions are violated in modern applications, because the targets' motion models are unknown and possibly random and nonlinear. Also, due to the use of low-cost passive sensors, measurement errors and noise may be non-additive and non-Gaussian. An extended Kalman filter (EKF) can be used when the system dynamics are nonlinear but can be linearized about nominal operating conditions \cite{JulierExtensionKalmanFilterNonlinearSystems97}.
An unscented Kalman filter (UKF), based on the unscented transformation (UT), can be applied to compute the mean and covariance of a function up to the second order of its Taylor expansion \cite{WanunscentedKalmanfilterNonlinearEstimation00}. However, the efficiency of these filters decreases significantly when the system dynamics are highly nonlinear or unknown, and when the measurement noise is non-Gaussian. Recently, a non-parametric method based on condensation and Monte Carlo simulation, known as the particle filter, has been proposed for tracking multiple targets exhibiting nonlinear dynamics and non-Gaussian random effects \cite{ZiaMCMCBasedparticleFilterTrackingMultipleInteractingTargets04}. Particle filters are well suited to modern surveillance systems because they can be applied to Bayesian models in which the hidden variables are connected by a Markov chain in discrete time. In the classical particle filter method, a weighted set of particles, or point masses, is used to represent the probability density function (PDF) of the target state by means of a superposition of weighted Dirac delta functions. At each iteration of the particle filter, particles representing possible target state values are sampled from a proposal distribution \cite{ArulampalamTutorialParticleFilter02}. The weight associated with each particle is then obtained from the target-state likelihood function and from the prior estimate of the target state PDF. When the effective sample size is smaller than a predefined threshold, a re-sampling technique can be implemented \cite{CarpenterImprovedparticleFilterNonlinearProblems99}. One disadvantage of classical particle-filtering techniques is that the target-state transition function is used as the importance density function to sample particles, without taking new observations into account \cite{RuiBetterProposalDistributionsTrackingParticleFilter01}.
Recently, a particle filter with a Gaussian mixture representation was proposed by the authors for monitoring maneuvering targets \cite{luParticle14}, where the particles are sampled based on the supporting intervals of the target-state likelihood function and the prior estimate of the target state. In this case, the supporting interval of a distribution is defined as the $90\%$ confidence interval \cite{GormanTestSignificanceConfidenceInterval04}. The weight of each particle is obtained by considering the likelihood function and the transition function simultaneously. Then, the weighted expectation maximization (EM) algorithm is implemented to use the sampled weighted particles to generate a normal mixture model of the distribution. Kreucher proposed the joint multitarget probability density (JMPD) \cite{KreucherMultitargetTrackingJMPD05} to estimate the number of targets in the workspace and their states, where the targets are moving. By using the JMPD, the data association problem is avoided; however, the JMPD results in a joint state space whose dimension is the dimension of a single target state times the number of total targets. Since the number of total targets is unknown, the size of the joint space remains unknown. To overcome this problem, it is assumed that the number of total targets has a maximum value. Therefore, when the maximum number of targets is large, the joint state space becomes intractable. Inspired by \cite{WonKPF10}, this paper presents a novel filtering technique that combines the Kalman filter and the particle filter for estimating the number and states of the targets based on measurements obtained online. The estimate is represented by a set of weighted particles; differently from the classical particle filter, each particle is a Gaussian instead of a point mass. The weight of each particle represents the probability that a target exists, while its Gaussian indicates the state distribution of this target.
More importantly, the update of the particles differs from the classical particle filter. For each particle, the Gaussian parameters are updated using a Kalman filter given a measurement. To overcome the data association problem, in this paper, when one particle is updated, the other particles are considered as part of the measurement condition, which will be explained in Section {\ref{sec:method}}. The novel Gaussian particle filter technique requires fewer particles than classical particle filters, and it can solve the multiple-target estimation problem without increasing the dimension of the state space. The paper is organized as follows. Section \ref{sec:problemformulation} describes the multiple-target estimation problem formulation and assumptions. The background on the particle filter and Kalman filter is reviewed in Section \ref{sec:background}. Section \ref{sec:method} presents the Gaussian particle filter technique. The method is demonstrated through numerical simulations, with results presented in Section \ref{sec:results}. Conclusions and future work are described in Section \ref{sec:conclusion}. \section{Problem Formulation} \label{sec:problemformulation} $N$ targets are moving in a two-dimensional workspace denoted as $\mathcal{W}$, where $N$ denotes the unknown number of total targets. For simplicity, there are no obstacles in the workspace. The goal of the sensor is to obtain, at time step $k$, the state estimate for all the targets, denoted as $\mathbf{X}^k$, and the estimate of the target number, denoted as $T^k$. The true states of all targets at $k$ are denoted as $\mathbf{X}_k=[\mathbf{x}_k^1,\mathbf{x}_k^2,\cdots,\mathbf{x}_k^{N}]$, which contains $N$ state vectors. The estimate of the total target state at $k$ is denoted as $\mathbf{X}^k=[\mathbf{x}_k^1,\mathbf{x}_k^2,\cdots,\mathbf{x}_k^{T^k}]$, where $T^k$ denotes the estimate of the total target number at time step $k$.
The $i$th target is modeled as \begin{equation} \label{eqn:targetModel} \mathbf{x}_k^i=\mathbf{F}_k\mathbf{x}_{k-1}^i+\bm{\nu}_k, \end{equation} where \begin{equation} \bm{\nu}_k\sim N(0,\mathbf{Q}_k). \end{equation} Furthermore, $\mathbf{F}_k$ and $\mathbf{Q}_k$ are assumed known. In standard estimation theory, a sensor that obtains a vector of measurements $\mathbf{z}^k \in \mathbb{R}^r$ in order to estimate an unknown state vector set $\mathbf{X}^k \in \mathbb{R}^n$ at time $k$ is modeled as, \begin{equation} \label{eqn_SensorModelEstimation} \mathbf{z}^k = \mathbf{h}(\mathbf{X}^k,\bm{\lambda}^k), \end{equation} where $\mathbf{h}: \mathbb{R}^{n+\wp} \rightarrow \mathbb{R}^r$ is a deterministic vector function that is possibly nonlinear, and the random vector $\bm{\lambda}^k \in \mathbb{R}^{\wp}$ represents the sensor characteristics, such as sensor action, mode \cite{LuADPcdc13}, environmental conditions, and sensor noise or measurement errors. In this paper, the sensor is modeled as \begin{equation} \label{eqn:SensorModel} \mathbf{z}_k=\frac{1}{N}\sum_{i=1}^{N}\mathbf{x}_k^i+\bm{\omega}_k. \end{equation} It is further assumed that the whole workspace is visible to a position-fixed sensor (not shown). \section{Background} \label{sec:background} \subsection{Particle Filter Methods} The particle filter is a recursive model estimation method based on sequential Monte Carlo simulations. Because of its recursive nature, the particle filter is easily applicable to online data processing and variable inference. More importantly, it is applicable to nonlinear system dynamics with non-Gaussian noise. The PDFs are represented with properly weighted and relocated point masses, known as particles. These particles are sampled from an importance density that is crucial to the particle filter algorithm and is also referred to as a proposal distribution.
Let $\{\mathbf{x}^{\kappa}_{j,p},w^{\kappa}_{j,p}\}^{N}_{p=1}$ denote the weighted particles that are used to approximate the posterior PDF $f(\mathbf{x}^{\kappa}_j~|~Z^{\kappa}_j)$ for the $j$th target at $t_{\kappa}$, where $Z^{\kappa}_j = \{\mathbf{z}^{0}_j, \ldots, \mathbf{z}^{\kappa}_j\}$ denotes the set of all measurements obtained by the sensor from target $j$ up to $t_{\kappa}$. Then, the posterior probability density function of the target state, given the measurements up to $t_{\kappa}$, can be modeled as, \begin{equation} f(\mathbf{x}^{\kappa}_j~|~Z^{\kappa}_j)=\sum^N_{p=1} w^{\kappa}_{j,p}\delta(\mathbf{x}^{\kappa}_{j,p}), ~~\sum^N_{p=1}w^{\kappa}_{j,p}=1 \end{equation} where $w^{\kappa}_{j,p}$ is non-negative and $\delta$ is the Dirac delta function. The technique consists of the recursive propagation of the particles and the particle weights. In each iteration, the particles $\mathbf{x}^{\kappa}_{j,p}$ are sampled from the importance density $q(\mathbf{x})$. Then, the weight $w^{\kappa}_{j,p}$ is updated for each particle by \begin{equation} w^{\kappa}_{j,p}\propto \frac{p(\mathbf{x}^{\kappa}_{j,p})}{q(\mathbf{x}^{\kappa}_{j,p})} \end{equation} where $p(\mathbf{x}^{\kappa}_{j,p})\propto f(\mathbf{x}^{\kappa}_{j,p}~|~Z^{\kappa}_j)$. Additionally, the weights are normalized at the end of each iteration. One common drawback of particle filters is the degeneracy phenomenon \cite{RuiBetterProposalDistributionsTrackingParticleFilter01}, i.e., the variance of the particle weights accumulates over iterations. A common way to evaluate the degeneracy phenomenon is the effective sample size $N_{e}$ \cite{CarpenterImprovedparticleFilterNonlinearProblems99}, obtained by, \begin{equation} N_e=\frac{1}{\sum^N_{p=1}(w^{\kappa}_{j,p})^2} \end{equation} where $w^{\kappa}_{j,p}, ~p=1,2,\dots,N$ are the normalized weights. In general, a re-sampling procedure is performed when $N_e <N_s$, where $N_s$ is a predefined threshold, usually set to $\frac{N}{2}$.
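The effective-sample-size check and the triggered re-sampling can be sketched in plain Python (a sketch using multinomial re-sampling; the function names are ours):

```python
import random

random.seed(0)

def effective_sample_size(weights):
    # N_e = 1 / sum_p (w_p)^2 for normalized weights
    return 1.0 / sum(w * w for w in weights)

def resample(particles, weights):
    # draw N particles with probability proportional to their weights,
    # then reset all weights to 1/N (multinomial re-sampling)
    n = len(particles)
    new_particles = random.choices(particles, weights=weights, k=n)
    return new_particles, [1.0 / n] * n

w = [0.7, 0.1, 0.1, 0.1]
ne = effective_sample_size(w)        # ~1.92, below N_s = N/2 = 2
if ne < len(w) / 2:
    particles, w = resample([0, 1, 2, 3], w)
```

A degenerate weight vector like the one above concentrates mass on one particle; re-sampling replaces the low-weight particles with copies of the dominant ones.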
Let $\{\mathbf{x}^{\kappa}_{j,p},w^{\kappa}_{j,p}\}^{N}_{p=1}$ denote the particle set that needs to be re-sampled, and let $\{\mathbf{x}^{{\kappa}*}_{j,p},w^{{\kappa}*}_{j,p}\}^{N}_{p=1}$ denote the particle set after re-sampling. The main idea of this re-sampling procedure is to eliminate the particles having low weights by re-sampling $\{\mathbf{x}^{{\kappa}*}_{j,p},w^{{\kappa}*}_{j,p}\}^{N}_{p=1}$ from $\{\mathbf{x}^{\kappa}_{j,p},w^{\kappa}_{j,p}\}^{N}_{p=1}$ with the probability $p(\mathbf{x}^{{\kappa}*}_{j,p}=\mathbf{x}^{\kappa}_{j,s})=w^{\kappa}_{j,s}$. At the end of the re-sampling procedure, $w^{{\kappa}*}_{j,p}, p=1,2,\dots,N$ are set to $1/N$. \subsection{Kalman Filter Methods} The well-known Kalman filter is also a recursive method to estimate the system/target state based on a measurement sequence while minimizing the estimation uncertainty. The sensor provides measurements of the system state corrupted by additive Gaussian noise. In each iteration, the Kalman filter consists of two processes: i) it predicts the system state and its uncertainty; ii) it updates the system state and uncertainty with the newly available measurement. The system dynamics is given as \begin{equation} \mathbf{x}_k=\mathbf{F}_k\mathbf{x}_{k-1}+\mathbf{B}_k\mathbf{u}_k+\bm{\nu}_k \end{equation} where the subscripts $k$ and $k-1$ denote the current and previous time index, $\mathbf{F}_k$ is the discrete system transition matrix, and $\mathbf{B}_k$ and $\mathbf{u}_k$ are the control matrix and control input. $\bm{\nu}_k$ is white noise, defined as \begin{equation} \bm{\nu}_k\sim N(0,\mathbf{Q}_k) \end{equation} where $\mathbf{Q}_k$ is the covariance.
At the $k$th time step, a measurement of the true system state $\mathbf{x}_k$ is made by a sensor and is given by \begin{equation} \label{eqn:KalmanZ} \mathbf{z}_k=\mathbf{H}_k\mathbf{x}_k+\bm{\omega}_k \end{equation} where $\mathbf{H}_k$ is a mapping from the system state space to the measurement space, and the white noise $\bm{\omega}_k$ is defined as \begin{equation} \bm{\omega}_k\sim N(0,\mathbf{R}_k) \end{equation} It is assumed that the noises $\bm{\omega}_k$ and $\bm{\nu}_k$ at each time step are independent. Let $\tilde{\mathbf{x}}_k$ denote the predicted state estimate given $\hat{\mathbf{x}}_{k-1}$, where $\hat{\mathbf{x}}_{k-1}$ is the updated estimate of the system state at time step $k-1$. Furthermore, let $\tilde{\bm{\Sigma}}_k$ denote the predicted covariance given $\hat{\bm{\Sigma}}_{k-1}$, where $\hat{\bm{\Sigma}}_{k-1}$ is the updated estimation covariance. Then, in the prediction step, \begin{align} \tilde{\mathbf{x}}_k=\mathbf{F}_k\hat{\mathbf{x}}_{k-1}+\mathbf{B}_k\mathbf{u}_k\\ \tilde{\bm{\Sigma}}_k=\mathbf{F}_k\hat{\bm{\Sigma}}_{k-1}\mathbf{F}^T_k+\mathbf{Q}_k \end{align} In the update step, the measurement $\mathbf{z}_k$ is used, together with the above predicted state and covariance, to update the state and covariance.
The residual $\mathbf{y}_k$ between the measurement and the predicted state is given by \begin{equation} \mathbf{y}_k=\mathbf{z}_k-\mathbf{H}_k\tilde{\mathbf{x}}_k \end{equation} The innovation covariance $\mathbf{S}_k$ is given by \begin{equation} \mathbf{S}_k=\mathbf{H}_k\tilde{\bm{\Sigma}}_k\mathbf{H}^T_k+\mathbf{R}_k \end{equation} Then, the optimal Kalman gain is calculated as \begin{equation} \mathbf{K}_k=\tilde{\bm{\Sigma}}_k\mathbf{H}^T_k\mathbf{S}^{-1}_k \end{equation} Then, the state and covariance can be updated by \begin{align} \hat{\mathbf{x}}_{k}=\tilde{\mathbf{x}}_k+\mathbf{K}_k\mathbf{y}_k\\ \hat{\bm{\Sigma}}_{k}=(\mathbf{I}-\mathbf{K}_k\mathbf{H}_k)\tilde{\bm{\Sigma}}_k \end{align} \section{Methodology} \label{sec:method} In this paper, a novel Gaussian particle filter technique is proposed that follows the main idea of the particle filter for estimating the number and states of the targets based on measurements obtained online. Different from the classical particle filter, each particle here is a Gaussian instead of a point mass. The estimate of the total target number and the target states is represented by a set of weighted particles. The $i$th particle at time $k$ is denoted as \begin{equation} P^i_k=\{w^i_k,\mathcal{N}(\mathbf{x}^i_k|\bm{\mu}^i_k,\bm{\Sigma}^i_k)\} \end{equation} where $w^i_k$ is the probability that a target exists with state distribution $\mathcal{N}(\mathbf{x}^i_k|\bm{\mu}^i_k,\bm{\Sigma}^i_k)$. By this particle definition, the dimension of the system state remains the same as the dimension of each individual target. When these particles are available, the estimated number of total targets is given by $T=\sum_{i=1}^{N_p}w^i_k$, where $N_p$ is the number of particles. Notice that this particle representation is different from the classical particle filter, where each particle represents a possible value of the system state. The updating of each particle and its weight is also different from the classical particle filter.
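For reference, the standard prediction and update steps reviewed above can be sketched as follows (a minimal NumPy sketch; the function names are ours):

```python
import numpy as np

def kalman_predict(x_hat, P_hat, F, B, u, Q):
    # Prediction step: propagate the state estimate and its covariance.
    x_tld = F @ x_hat + B @ u
    P_tld = F @ P_hat @ F.T + Q
    return x_tld, P_tld

def kalman_update(x_tld, P_tld, z, H, R):
    # Update step: residual, innovation covariance, gain, corrected estimate.
    y = z - H @ x_tld
    S = H @ P_tld @ H.T + R
    K = P_tld @ H.T @ np.linalg.inv(S)
    x_hat = x_tld + K @ y
    P_hat = (np.eye(len(x_tld)) - K @ H) @ P_tld
    return x_hat, P_hat
```

With a near-noiseless measurement ($\mathbf{R}\to 0$, $\mathbf{H}=\mathbf{I}$) the updated estimate moves essentially onto the measurement, and the covariance shrinks accordingly.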
A Kalman filter is used to update each particle, i.e., its weight and its Gaussian distribution. Notice that the measurement at each time step is conditioned on all the targets in the field of view (FOV), whereas in the classical Kalman filter one measurement is associated with one target; as a consequence, the data association problem is avoided here. Therefore, the Kalman filter is modified to update the particles that are coupled by one measurement, and some further approximations and assumptions are needed. Similar to Kalman filters and particle filters, the algorithm proposed in this paper is recursive. Assume that at time step $k$ the measurement $\bm{z}_k$ is available, and the estimate of the system at time step $k-1$ is represented by a particle set, denoted as $\mathcal{P}_{k-1}=\{P^1_{k-1},P^2_{k-1},\dots,P^{N_p}_{k-1}\}$, where $N_p$ is the number of particles. By using the target dynamics (\ref{eqn:targetModel}), $\mathcal{P}_{k-1}$ can be propagated to $\tilde{\mathcal{P}}_{k}$ without using $\bm{z}_k$. Due to the limited FOV, only a few particles may contribute to the measurement. Let $\mathcal{P}_S$ denote the set of particles that lie in the FOV, and let $\bar{\mathcal{P}}=\tilde{\mathcal{P}}_{k}\setminus\mathcal{P}_S$ denote its complement. Only the particles in $\mathcal{P}_S$ are updated. Note that the size of $\mathcal{P}_S$ is small. Without loss of generality, assume that $\mathcal{P}_S=\{\tilde{P}^1_k,\tilde{P}^2_k,\dots,\tilde{P}^s_k\}$, where $s$ is the number of particles in the FOV. The update of each particle in $\mathcal{P}_S$ is calculated separately. Without loss of generality, we focus on updating $\tilde{P}^j_k$. Let $E=[e_1,e_2,\dots,e_s]$ be a Boolean vector, where $e_i\in\{0,1\}$. Each $E$ with $e_j=1$ such that $\Pi (w_i)^{e_i}(1-w_i)^{1-e_i}>\epsilon$, where $\epsilon$ is a predefined threshold, defines a feasible combination of existing targets. Then, the modified Kalman filter is used to give the updated Gaussian parameters of all particles with $e_i=1$.
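Enumerating the feasible Boolean combinations $E$ with $e_j=1$ whose prior probability exceeds $\epsilon$ can be sketched as follows (brute-force enumeration is assumed, which is viable because the size $s$ of $\mathcal{P}_S$ is small; the function name is ours):

```python
from itertools import product

def feasible_combinations(weights, j, eps):
    # Enumerate Boolean vectors E with e_j = 1 whose prior probability
    # prod_i w_i^{e_i} (1 - w_i)^{1 - e_i} exceeds the threshold eps.
    s = len(weights)
    out = []
    for E in product((0, 1), repeat=s):
        if E[j] != 1:
            continue
        prob = 1.0
        for w, e in zip(weights, E):
            prob *= w if e else (1.0 - w)
        if prob > eps:
            out.append((E, prob))
    return out
```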
According to the sensor model (\ref{eqn:SensorModel}), the measurement is given by \begin{equation} \bm{z}_k=\frac{\sum_{i=1}^{s}\bm{\mu}^i_k e_i}{\sum_{i=1}^{s}{e_i}}+\frac{1}{(\sum_{i=1}^{s}{e_i})^2}\sum_{i=1,i\ne j}^{s}{e_i}\bm{\Sigma}_i+\bm{\omega}_k \end{equation} Comparing the above equation to (\ref{eqn:KalmanZ}), we have the following settings \begin{eqnarray} \bm{H}_k=I\times\frac{1}{(\sum_{i=1}^{s}{e_i})}\\ \bm{z}_k=\bm{z}_k-\frac{\sum_{i=1,i\ne j}^{s}\bm{\mu}^i_k e_i}{\sum_{i=1}^{s}{e_i}}\\ \bm{R}_k=\frac{1}{(\sum_{i=1}^{s}{e_i})^2}\sum_{i=1,i\ne j}^{s}{e_i}\bm{\Sigma}_i+\bm{\omega}_k \end{eqnarray} Then, by applying the Kalman procedure \begin{eqnarray} \mathbf{y}_k=\mathbf{z}_k-\mathbf{H}_k\bm{\mu}_k\\ \mathbf{S}_k=\mathbf{H}_k\bm{\Sigma}_k\mathbf{H}^T_k+\mathbf{R}_k\\ \mathbf{K}_k=\bm{\Sigma}_k\mathbf{H}^T_k\mathbf{S}^{-1}_k\\ \bm{\mu}_{k}=\bm{\mu}_k+\mathbf{K}_k\mathbf{y}_k\\ \bm{\Sigma}_{k}=(\mathbf{I}-\mathbf{K}_k\mathbf{H}_k)\bm{\Sigma}_k \end{eqnarray} Its proof can be found in the Appendix. Once each particle appearing in combination $E$ has been updated, the weight $w_c$ for the particle combination $E$ can be obtained by \begin{eqnarray} w_c&=&\Pi (w_i)^{e_i}(1-w_i)^{1-e_i} \times\frac{1}{(2\pi)^2\|\bm{\Sigma}^{-1}_c\|}\nonumber\\&&\times\exp\{-(\mathbf{z}_k-\bm{\mu}_c)^T\bm{\Sigma}^{-1}_c(\mathbf{z}_k-\bm{\mu}_c)\} \end{eqnarray} where $c\in I_E$, $I_E$ is the combination index, and $\bm{\mu}_c$ and $\bm{\Sigma}_c$ are given by \begin{eqnarray} \bm{\mu}_c=\mathbf{H}_k\sum\bm{\mu}^i_{k}\\ \bm{\Sigma}_c=\mathbf{H}_k\sum\bm{\Sigma}^i_{k}\mathbf{H}^T_k+\mathbf{R}_k \end{eqnarray} Then, the particles $\{\mathcal{N}^i(\bm{\mu}_{k},\bm{\Sigma}_{k})\}$ are inserted into a set $G_c$ for combination $c\in I_E$, and the set $G_c$ carries the weight $w_c$. After all sets $G_c$ for the combinations $E$ with $\Pi (w_i)^{e_i}(1-w_i)^{1-e_i}>\epsilon$ are obtained, the weights are normalized by \begin{equation} w_c=\frac{w_c}{\sum w_c} \end{equation}
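A minimal sketch of the modified update above, with $\mathbf{H}_k=\mathbf{I}/\sum_i e_i$ and the effective measurement and noise set as in the equations (the sensor-noise covariance is passed explicitly, and the function and argument names are ours):

```python
import numpy as np

def modified_kalman_update(mu, Sigma, z, mus_other, Sigmas_other, R_sensor, n_active):
    # n_active = sum_i e_i: number of particles in the active combination E.
    # H_k = I / n_active (each active particle contributes equally to z_k).
    d = len(mu)
    H = np.eye(d) / n_active
    # Effective measurement: subtract the predicted contribution of the
    # other active particles (i != j).
    z_eff = z - sum(mus_other) / n_active
    # Effective noise: sensor-noise covariance plus the spread of the
    # other active particles.
    R_eff = R_sensor + sum(Sigmas_other) / n_active ** 2
    y = z_eff - H @ mu
    S = H @ Sigma @ H.T + R_eff
    K = Sigma @ H.T @ np.linalg.inv(S)
    mu_new = mu + K @ y
    Sigma_new = (np.eye(d) - K @ H) @ Sigma
    return mu_new, Sigma_new
```

When only particle $j$ is active ($\sum_i e_i=1$, no other particles), this reduces to the standard Kalman update with $\mathbf{H}=\mathbf{I}$.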
Then, in each group $G_c$, the weight of the $i$th particle is updated as \begin{equation} w^i_c=\frac{w^i_k}{\Pi w^i_k}\times w_c \end{equation} The particles in the different sets $G_c$ are updated from the same particles in the previous set. If two particles are close enough, they are combined into one particle, whose weight is set as the sum of both weights. The distance between $\bm{\mu}_i$ and $\bm{\mu}_j$ is defined as the Mahalanobis distance \begin{equation} (\bm{\mu}_i-\bm{\mu}_j)^T\bm{M}(\bm{\mu}_i-\bm{\mu}_j) \end{equation} and the covariance of the merged particle is updated as \begin{equation} \bm{\Sigma}=\sum\bm{\Sigma}^i_{k} \end{equation} \section{Simulation and Results} \label{sec:results} As shown in Fig. \ref{fig:workspace_cell}, $N$ targets, represented by blue dots, are moving in the two-dimensional workspace. The whole workspace is visible to a position-fixed sensor (not shown), and the workspace is discretized into $12\times12$ cells. Each cell represents a $1\times1$ rectangular area. The $i$th cell, denoted as $C_i, i\in\mathcal{C}$, is defined by $[x^{ul}_i, y^{ul}_i, x^{dr}_i, y^{dr}_i ]$, where $\mathcal{C}$ is the cell index set and $(x^{ul}_i, y^{ul}_i)$ and $(x^{dr}_i, y^{dr}_i)$ are the upper-left and lower-right corner coordinates of the $i$th rectangular area, respectively. Only $M$ cells can be measured at each time step $k$, and they do not have to be adjacent. The goal of the sensor is to estimate the target states and the target number at time $k$. In this paper, an information value function based on the $\alpha$-divergence is used to select the best $M$ cells to measure at each step \cite{LuInfoMove12}. The estimate of the target states and the target number at time $k$ is represented by the joint multitarget probability density (JMPD), and it is updated after obtaining new measurements.
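The particle-merging rule above can be sketched as follows (a minimal sketch; the text does not specify the merged mean, so a weight-averaged mean is assumed here, while the merged covariance is the sum, as in the text; function names are ours):

```python
import numpy as np

def mahalanobis_sq(mu_i, mu_j, M):
    # Squared Mahalanobis distance (mu_i - mu_j)^T M (mu_i - mu_j).
    d = mu_i - mu_j
    return float(d @ M @ d)

def merge_if_close(w_i, mu_i, S_i, w_j, mu_j, S_j, M, threshold):
    # Combine two Gaussian particles when their Mahalanobis distance is
    # below the threshold; the merged weight is the sum of both weights.
    if mahalanobis_sq(mu_i, mu_j, M) < threshold:
        w = w_i + w_j
        mu = (w_i * mu_i + w_j * mu_j) / w  # weighted mean (our assumption)
        S = S_i + S_j                       # summed covariance, as in the text
        return [(w, mu, S)]
    return [(w_i, mu_i, S_i), (w_j, mu_j, S_j)]
```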
\begin{figure}[h] \centering \includegraphics[width=3in]{workspace_cell.pdf} \begin{minipage}{0.5\textwidth} \caption{The workspace contains three point targets.} \label{fig:workspace_cell} \end{minipage} \end{figure} The discrete-time target state transition function can be written as \begin{equation} \label{eqn:TargetDiscreteTransition} \mathbf{x}^{k+1}_i=\mathbf{F}\mathbf{x}^{k}_i+\mathbf{w}^k_i \end{equation} where \begin{equation} \mathbf{F}=\left[ \begin{matrix} 1 & \tau& 0 &0\\ 0 & 1&0 &0 \\ 0& 0& 1&\tau \\ 0&0&0&1 \end{matrix}\right] \end{equation} and $\mathbf{w}^k_i$ is zero-mean Gaussian noise with covariance $\mathbf{Q}=$diag$(20,0.2,20,0.2)$, $\tau$ is the time step length, and $i\in \{1,2,\cdots, T^k\}$\cite{KreucherMultitargetTrackingJMPD05}. It is further assumed that i) the sensor can measure any cell at time $k$; ii) the sensor can only measure up to $M$ cells at time $k$. The sensor condition $\bm{\lambda}^k_c$ represents the signal-to-noise ratio $\mbox{SNR}$; currently, it has only one possible value, which is fixed and known. The measurement $z^k_i$ is a discrete variable, so the joint PMF can be written as \begin{equation} \label{eqn:factorizationP} f(\mathbf{z}^k,\mathbf{X}^k,T^k, \bm{\lambda}^k) \!=\! f(\mathbf{z}^k|\mathbf{X}^k, T^k,\bm{\lambda}^k) f(\mathbf{X}^k, T^k) f(\bm{\lambda}^k) \end{equation} When measuring a cell, the imaging sensor gives a Rayleigh return, either a $0$ (no detection) or a $1$ (detection), governed by the detection probability, denoted as $p_d$, and the false alarm probability, denoted as $p_f$. According to the standard model for threshold detection of Rayleigh returns, $p_f=p_d^{(1+\mbox{\footnotesize SNR})}$.
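The Rayleigh threshold-detection relation above, together with its multi-target generalization used in the next paragraph, can be sketched as (function names are ours):

```python
def false_alarm_prob(p_d, snr):
    # Standard threshold detection of Rayleigh returns: p_f = p_d^(1 + SNR).
    return p_d ** (1.0 + snr)

def detection_prob(p_d, snr, n_targets):
    # p_d(T) = p_d^((1 + SNR) / (1 + T * SNR)); reduces to p_d for T = 1
    # and grows toward 1 as more targets occupy the measured cell.
    return p_d ** ((1.0 + snr) / (1.0 + n_targets * snr))
```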
When $T$ targets are in the same cell, the detection probability becomes $p_d(T)=p_d^{(1+\mbox{\footnotesize SNR})/(1+T\times \mbox{\footnotesize SNR})}$ and the $i$th sensor measurement at time $k$ can be evaluated by \begin{eqnarray} \nonumber &&p(z^k_i|\mathbf{X}^k,T^k, \lambda^k_{a, i},\bm{\lambda}^k_c)=\begin{cases} p_d(T) & z^k_i=1 \\1-p_d(T) & z^k_i=0 \end{cases} \\&& \nonumber T=\sum^N_{j=1} (x^k_j\geq x^{ul}_c)\cap(x^k_j<x^{dr}_c)\nonumber\\ &&\quad\quad\cap(y^k_j\geq y^{ul}_c)\cap(y^k_j<y^{dr}_c), ~c=\lambda^k_{a, i} \\&& p_d(T)=p_d^{(1+\bm{\lambda}^k_c)/(1+T\bm{\lambda}^k_c)} \end{eqnarray} where $x^k_j, y^k_j$ are the two position components of $\mathbf{x}^k_j\in\mathbf{X}^k$ and $T^k$ is the target number. Additionally, the operators "$\geq$" and "$<$" return $1$ if true and $0$ if false, while "$\cap$" is the Boolean operator "and". For example, as shown in Fig. \ref{fig:workspace_cell}, when $c=k$, $T=2$; similarly, when $c=j(i)$, $T=1(0)$. A snapshot of the simulations is shown in Fig. \ref{fig:figsnap}, where magenta squares represent positive measurement returns and blue dots represent the true target positions. The simulation results are summarized in Fig. \ref{fig:result}, where the black curve represents the target state estimation error and the red curve represents the target number estimation error. As shown in Fig. \ref{fig:result}, both errors decrease as more measurements become available. \begin{figure}[h] \centering \includegraphics[width=3in]{result.pdf} \caption{Simulation result} \label{fig:result} \end{figure} \begin{figure*}[h] \centering \includegraphics[width=5in]{figsnap.pdf} \caption{Snapshot of simulations} \label{fig:figsnap} \end{figure*} \section{Conclusion and Future Work} \label{sec:conclusion} A Gaussian particle filter that combines the Kalman filter and the particle filter is presented in this paper for estimating the number and states of targets based on measurements obtained online.
The estimate is represented by a set of weighted particles; different from the classical particle filter, each particle is a Gaussian instead of a point mass. The weight of each particle represents the probability that a target exists, while its Gaussian gives the state distribution of that target. This approach is efficient for the problem of estimating the number of targets and their states. \section{Appendix} Without loss of generality, let $\mathcal{P}_S=\{\tilde{P}^1_k,\tilde{P}^2_k,\dots,\tilde{P}^s_k\}$ and $E=[e_1,e_2,\dots,e_s]$; for any particle such that $e_j=1$, we derive its $\bm{\mu}^j_k$ and $\bm{\Sigma}^j_k$ given $\bm{z}_k$ and $\mathcal{P}_S$. \begin{equation} y_k=\bm{z}_k-\frac{\sum_{i=1}^{s}\tilde{\bm{x}}^i_k e_i}{\sum_{i=1}^{s}{e_i}} \end{equation} \begin{eqnarray} \bm{\Sigma^j_k}&=&\mbox{COV}(\bm{x}^j_k-\hat{\bm{x}}^j_k)\nonumber\\ &=&\mbox{COV}(\bm{x}^j_k-(\tilde{\bm{x}}^j_k+\bm{K}^j_k y_k))\nonumber\\ &=&\mbox{COV}\Big(\bm{x}^j_k-(\tilde{\bm{x}}^j_k+\bm{K}^j_k (\frac{\sum_{i=1}^{s}\bm{x}^i_k e_i}{\sum_{i=1}^{s}{e_i}}\nonumber\\ &&+\bm{\nu}_k-\frac{\sum_{i=1}^{s}\tilde{\bm{x}}^i_k e_i}{\sum_{i=1}^{s}{e_i}}))\Big)\nonumber\\ &=&\mbox{COV}\Big((\bm{I}-\frac{1}{\sum_{i=1}^{s}{e_i}}\bm{I}\bm{K}^j_k)(\bm{x}^j_k-\tilde{\bm{x}}^j_k)\nonumber\\ &&-\bm{K}^j_k\bm{\nu}_k-\sum_{i=1,i\ne j}^{s}\frac{1}{\sum_{i=1}^{s}{e_i}}\bm{K}^j_k(\bm{x}^i_k-\tilde{\bm{x}}^i_k)\Big)\\ &=&(\bm{I}-\frac{1}{\sum_{i=1}^{s}{e_i}}\bm{I}\bm{K}^j_k)\tilde{\Sigma}^j_k(\bm{I}-\frac{1}{\sum_{i=1}^{s}{e_i}}\bm{I}\bm{K}^j_k)^T \nonumber\\ &&+\frac{1}{\sum_{i=1}^{s}{e_i}}\bm{I}\bm{K}^j_k\sum_{i=1,i\ne j}^{s}\bm{\Sigma}^i_k(\frac{1}{\sum_{i=1}^{s}{e_i}}\bm{I}\bm{K}^j_k)^T\nonumber\\ &&+\bm{K}^j_k\bm{R}_k(\bm{K}^j_k)^T.
\end{eqnarray} By setting $\partial \bm{\Sigma}^j_k/\partial \bm{K}^j_k=0$, we obtain \begin{eqnarray} \bm{K}^j_k&=&\bm{\Sigma}^j_k(\frac{1}{\sum_{i=1}^{s}{e_i}}\bm{I})^T\nonumber\\ &&\times(\bm{R}_k+\frac{1}{\sum_{i=1}^{s}{e_i}}\bm{I}\sum_{i=1}^{s}\bm{\Sigma}^i_k(\frac{1}{\sum_{i=1}^{s}{e_i}}\bm{I})^T)^{-1}\nonumber\\ &=&\frac{1}{\sum_{i=1}^{s}{e_i}}\!\bm{\Sigma}^j_k(\bm{R}_k\!+\!\frac{1}{(\sum_{i=1}^{s}{e_i})^2}\sum_{i=1}^{s}\bm{\Sigma}^i_k)^{-1}. \end{eqnarray} Then, \begin{equation} \bm{\mu}^j_k=\tilde{\bm{\mu}}^j_k+\bm{K}^j_ky_k \end{equation} \begin{eqnarray} \bm{\Sigma}^j_k&=&\tilde{\bm{\Sigma}}^j_k-\frac{1}{\sum_{i=1}^{s}{e_i}}\tilde{\bm{\Sigma}}^j_k\nonumber\\ &&\times(\bm{R}_k\!+\!\frac{1}{\sum_{i=1}^{s}{e_i}}\bm{I}\sum_{i=1}^{s}\bm{\Sigma}^i_k(\frac{1}{\sum_{i=1}^{s}{e_i}}\bm{I})^T)^{-1}\nonumber\\ &&\times\tilde{\bm{\Sigma}}^j_k \end{eqnarray}
\section{Introduction} Quantum Annealing (QA) is a metaheuristic for solving combinatorial optimization problems using quantum fluctuations~\cite{kadowaki1998quantum, brooke1999quantum, santoro2002theory, santoro2006optimization, das2008colloquium, morita2008mathematical,hauke2019perspectives}, and is closely related with adiabatic quantum computation~\cite{farhi2001quantum,albash2018adiabatic}. The goal of QA is to obtain the ground state of a classical Ising model, to which a combinatorial optimization problem can be reduced \cite{lucas2014ising}. It is usually the case that the resulting Ising Hamiltonian has long-range interactions including all-to-all interactions, but the current annealing device, the D-Wave quantum annealer, does not directly implement long-range interactions. One therefore has to employ the procedure of embedding \cite{Choi2008,Choi2011} to represent long-range interactions in terms of a combination of short-range interactions. This leads to an overhead in the qubit count and also tends to cause errors induced by imperfect realization of embedding in the real device. Lechner, Hauke, and Zoller (LHZ) \cite{lechner2015quantum} proposed an ingenious scheme to partly mitigate the above problems by mapping all-to-all interactions to single-qubit terms in the Hamiltonian supplemented by local four-body interactions introduced to guarantee the equivalence of two formulations. Although the issue of overhead in the qubit count still exists, the reduction of all-to-all interactions to a local representation is certainly advantageous in the device implementation as well as from the viewpoint of mitigation of errors caused by imperfections in embedding. Several proposals have been made to realize the LHZ scheme \cite{leib2016transmon,chancellor2017circuit,puri2017engineering,puri2017quantum,goto2019quantum}. There have also been attempts to analyze the LHZ scheme theoretically. 
Leib, Zoller, and Lechner \cite{leib2016transmon} proposed a method to reduce four-body interactions in the LHZ Hamiltonian to two-body interactions by introducing auxiliary qubits. They also showed that a proper control of the coefficient of the constraint terms is likely to improve the performance. Hartmann and Lechner \cite{hartmann2019quantum} used non-stoquastic counter-diabatic drivers for better performance, and the same authors recently introduced a mean-field-like method to analyze the effect of inhomogeneity in the transverse field for increased success probabilities \cite{hartmann2019rapid}. We have been inspired by these developments and have analyzed the LHZ scheme within the framework of mean-field theory. The result shows that a non-linearity in the coefficient of the constraint term as a function of time leads to avoidance of first-order phase transitions that exist in the original linear time dependence of the coefficient. This implies an exponential speedup from the viewpoint of adiabatic quantum computation because a first-order phase transition is usually accompanied by an exponentially closing energy gap as a function of the system size, meaning an exponential computation time according to the adiabatic theorem of quantum mechanics \cite{jansen2007,lidar2009}. This paper is organized as follows. After an introduction to the LHZ model and its mean-field version in Sec. \ref{section:LHZ_model}, we analyze the problem analytically in Sec. \ref{section:MF_numerical}. The conclusion is given in Sec. \ref{section:conclusion}.
\section{The LHZ Model and the Mean-field Model} \label{section:LHZ_model} The conventional QA has the Hamiltonian \begin{align} \label{eq:qa_hamiltonian} \hat{H}(s) =s \hat{H}_{P} + \left(1-s\right) \hat{V}, \end{align} where $\hat{H}_P$ is the Ising Hamiltonian representing a combinatorial optimization problem, \begin{align} \hat{H}_P=-\sum_{i\ne j} J_{ij} \hat{\sigma}_i^z\hat{\sigma}_j^z-\sum_{i} h_i \hat{\sigma}_i^z \label{eq:Ising_model0} \end{align} and $\hat{V}$ is the transverse field to induce quantum fluctuations, \begin{align} \hat{V}=-\sum_i \hat{\sigma}_i^x \end{align} with $\hat{\sigma}_i^{z(x)}$ denoting the $z$($x$) component of the Pauli operator at site (qubit) $i$. The parameter $s=t/T$ is the normalized time running from 0 to $1$ as the time $t$ proceeds from 0 to $T$, and thus $T$ is the total computation time (annealing time). One starts at $s=0$ from the trivial ground state of $\hat{V}$ and increases $s$ with the expectation that the ground state of the Ising Hamiltonian $\hat{H}_P$ is reached at $s=1~(t=T)$. According to the adiabatic theorem of quantum mechanics, the computation time $T$ necessary for the system to stay close enough to the instantaneous ground state is proportional to an inverse polynomial of the minimum energy gap $\Delta =\min_{s}\{E_1(s)-E_0(s)\}$, where $E_0(s)$ and $E_1(s)$ are the instantaneous ground and first-excited state energies, respectively. If this energy gap closes exponentially as a function of the system size $N_l$ (the number of logical qubits), as is usually the case at a first-order quantum phase transition, the computation time $T$ grows exponentially as $e^{aN_l}~(a>0)$, which means that the problem is hard to solve by QA. It is therefore highly desirable to avoid or remove first-order phase transitions. The LHZ scheme \cite{lechner2015quantum} reduces the all-to-all interactions implied in the Ising Hamiltonian eq.
(\ref{eq:Ising_model0}) to single-body terms supplemented by four-body constraint terms to enforce equivalence to the original problem, \begin{align} \label{eq:lhz} \hat{H}_{P_1} &= - \sum_{k=1}^{N} J_k \hat{\sigma}_k^z -\sum_{l=1}^{N_c} \hat{\sigma}_{(l,n)}^z \hat{\sigma}_{(l,w)}^z \hat{\sigma}_{(l,s)}^z \hat{\sigma}_{(l,e)}^z \end{align} through the correspondence \begin{align} J_{ij} \hat{\sigma}_i^z\hat{\sigma}_j^z \longrightarrow J_k\hat{\sigma}_k^z. \end{align} The number of physical qubits $N$ in the LHZ Hamiltonian eq. (\ref{eq:lhz}) is the number of all-to-all interactions in the original model, \begin{align} N=\frac{1}{2}N_l (N_l-1), \end{align} and the number of constraints in the second term on the right-hand side of eq. (\ref{eq:lhz}) is \begin{align} N_c=\frac{1}{2}(N_l-1)(N_l-2). \end{align} The four-body term in eq. (\ref{eq:lhz}) consists of four neighboring qubits (three at the bottom boundary) as depicted in Fig. \ref{fig1}, and is therefore local and possibly amenable to direct experimental implementation. \begin{figure}[tb] \centering \includegraphics[width=0.3\textwidth]{Fig1.pdf} \caption{(Color online) Qubit configurations in the LHZ model for $N_l=5$ logical qubits. Large green circles denote physical qubits, and the four qubits around each small red circle (plaquette) constitute a four-body interaction. The number in a green or red circle is the index of a physical qubit $k$ or a plaquette $l$, respectively. The state of the auxiliary physical qubit at the bottom row (yellow) is fixed to $\ket{\uparrow}$.} \label{fig1} \end{figure} A recent contribution by Hartmann and Lechner \cite{hartmann2019rapid} showed that an infinite-range (mean-field) version of the four-body term serves as a good approximation to the original nearest-neighbor (short-range) interactions, which greatly facilitates analytical studies.
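As a quick check of the qubit bookkeeping above, the physical-qubit and constraint counts can be computed as (a small sketch; the function name is ours):

```python
def lhz_counts(n_logical):
    # Physical qubits: one per pairwise coupling, N = N_l (N_l - 1) / 2.
    # Constraints: N_c = (N_l - 1)(N_l - 2) / 2 four-body plaquettes.
    n_phys = n_logical * (n_logical - 1) // 2
    n_constraints = (n_logical - 1) * (n_logical - 2) // 2
    return n_phys, n_constraints
```

For the $N_l=5$ example of Fig. \ref{fig1} this gives $N=10$ physical qubits and $N_c=6$ plaquettes.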
We therefore follow their idea and introduce the following Hamiltonian, \begin{align} \label{eq:p-spin} \hat{H}_{P_2} = -\sum_{i=1}^{N} J_i \hat{\sigma}_i^z - N\left(\frac{1}{N} \sum_{i=1}^{N} \hat{\sigma}_i^z\right)^4. \end{align} This is the problem Hamiltonian we study in the present paper. \section{Mean-field Analysis} \label{section:MF_numerical} It turns out to be convenient to introduce an additional parameter $\tau$ to control the time dependence of the constraint term. The mean-field Hamiltonian is then written as \begin{align} \label{eq:p-spin_prime} \hat{H}_{P_2^{\prime}}(s,\tau) &= -s \sum_{i=1}^{N} J_i \hat{\sigma}_i^z -\tau N\left(\frac{1}{N} \sum_{i=1}^{N} \hat{\sigma}_i^z\right)^4. \end{align} The total Hamiltonian is \begin{align} \label{eq:hamiltonian} \hat{H}(s,\tau) =\hat{H}_{P_2^{\prime}}(s,\tau) + (1-s) \hat{V}. \end{align} The parameters $s(t)$ and $\tau(t)$ are no longer linear in general as functions of $t$ and change from $s=\tau=0$ at $t=0$ to $s=\tau=1$ at $t=T$. It is straightforward to apply the standard procedure to derive the free energy per qubit as a function of the ferromagnetic order parameter $m$ \cite{jorg2010energy,seki2012quantum,seoane2012,seki2015,hartmann2019rapid,susa2018quantum,ohkuwa2018reverse}. We therefore just write the result for the free energy and its minimization condition, i.e. the self-consistent equation, \begin{subequations} \begin{align} \label{eq:free_energy_finite_temp} f(m) =& 3 \tau m^4 -\frac{1}{\beta} \left[ \ln 2\cosh \beta \sqrt{(4 \tau m^3+sJ_i)^2+(1-s)^2}\right]_i, \\ \label{eq:magnetization_finite_temp} m =&\Biggl[ \frac{4 \tau m^3+sJ_i}{\sqrt{(4 \tau m^3+sJ_i)^2+(1-s)^2}} \notag \\ &\times \tanh \beta \sqrt{( 4\tau m^3+sJ_i)^2+(1-s)^2}\Biggl]_i, \end{align} \end{subequations} where $\beta$ is the inverse temperature and the brackets $[\cdots ]_i$ stand for the average over the values of $J_i$, $\frac{1}{N}\sum_{i}\cdots$. 
Let us first focus on the simplest case of zero temperature $\beta\to\infty$ and uniform interactions $J_i=J$. Then eqs.\ (\ref{eq:free_energy_finite_temp}) and (\ref{eq:magnetization_finite_temp}) reduce to \begin{subequations} \begin{align} f(m) &= 3 \tau m^4- \sqrt{(4 \tau m^3+sJ)^2+(1-s)^2}, \\ \label{eq:magnetization} m &=\frac{ 4\tau m^3+sJ}{\sqrt{(4 \tau m^3+sJ)^2+(1-s)^2}}. \end{align} \end{subequations} Numerical solutions to these equations reveal the phase diagram on the $s$-$\tau$ plane for $J=0.5$ as shown in Fig. \ref{fig2a}, where the thick blue line denotes a line of first-order phase transitions terminating at a critical point marked in orange. The precise location of this critical point can be derived following the standard prescription that the derivatives up to third order should vanish at a critical point \cite{nishimori2010elements}, \begin{subequations} \begin{align} \label{eq:s_c} s_c&=\frac{2^{5/2}}{3^{5/2} J +2^{5/2}}, \\ \label{eq:tau_c} \tau_c &= \frac{J}{\sqrt{2}}\left(\frac{3^{5/2}-2^{5/2}}{3^{5/2} J +2^{5/2}}\right). \end{align} \end{subequations} \begin{figure}[tb] \centering \subfigure[]{ \includegraphics[height = 5cm]{Fig2a.pdf} \label{fig2a} } \subfigure[]{ \includegraphics[height = 5cm]{Fig2b.pdf} \label{fig2b} } \caption{(Color online) (a) Phase diagram of the Hamiltonian of eq. (\ref{eq:hamiltonian}) with eq. (\ref{eq:p-spin_prime}). The solid blue line denotes a line of first-order phase transitions (PT) and the orange dot represents the critical point (CP) of eqs. (\ref{eq:s_c}) and (\ref{eq:tau_c}). Each curve corresponds to the annealing $\tau=s^r$ with four values of $r$. (b) The minimum energy gap as a function of $N$ in a log-log scale. Full curves are fits to exponential or polynomial dependence. } \end{figure} The conventional protocol of quantum annealing corresponds to the straight line $\tau=s$ in the phase diagram, which crosses the line of first-order phase transitions.
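The zero-temperature self-consistent equation (\ref{eq:magnetization}) and the critical-point formulas (\ref{eq:s_c}) and (\ref{eq:tau_c}) can be evaluated numerically as follows (a minimal sketch using naive fixed-point iteration, which converges in the parameter ranges we checked; function names are ours):

```python
import numpy as np

def critical_point(J):
    # Eqs. (s_c) and (tau_c): endpoint of the line of first-order transitions.
    a = 3 ** 2.5 * J + 2 ** 2.5
    s_c = 2 ** 2.5 / a
    tau_c = (J / np.sqrt(2.0)) * (3 ** 2.5 - 2 ** 2.5) / a
    return s_c, tau_c

def solve_magnetization(s, tau, J, m0=1.0, n_iter=2000):
    # Fixed-point iteration of m = (4 tau m^3 + s J) / sqrt((...)^2 + (1-s)^2).
    m = m0
    for _ in range(n_iter):
        num = 4.0 * tau * m ** 3 + s * J
        m = num / np.sqrt(num ** 2 + (1.0 - s) ** 2)
    return m
```

At $s=\tau=1$ the iteration gives the fully polarized solution $m=1$, while at $s=\tau=0$ it gives $m=0$, consistent with the two endpoints of the annealing schedule.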
If we instead choose a trajectory $\tau =s^r$ with $r>1.56$, the annealing process does not encounter a phase transition. The critical point is touched when $r=1.56$. Correspondingly, the minimum energy gap between the ground state and the first excited state closes exponentially as a function of the system size for $\tau=s$, whereas it closes only polynomially for $r=1.56$, as depicted in Fig. \ref{fig2b}. When $r>1.56$, the gap is expected to reach a constant in the large-$N$ limit because there is no phase transition, but the numerical data show a slow decay. This is probably due to the proximity of the curves $\tau=s^2$ and $\tau=s^3$ to the critical point, so that the asymptotic region for $N\gg 1$ is not yet reached. It is anyway the case that an exponential speedup of the computation time can be achieved by the choice of $r\ge 1.56$ in comparison with the conventional annealing with $r=1$ because an exponential gap closing is avoided. Similar results are obtained for different parameter values. Examples are shown in Fig. \ref{fig3a} for $J=1, 0.5$, and 0.1, and Fig. \ref{fig3b} for $\beta=5, 2, 1.5$ and 1 with $J=0.5$. \begin{figure}[tb] \centering \subfigure[]{ \includegraphics[height = 5cm]{Fig3a.pdf} \label{fig3a} } \subfigure[]{ \includegraphics[height = 5cm]{Fig3b.pdf} \label{fig3b} } \caption{ (Color online) (a) Phase diagram for $J=1,\ 0.5$, and $0.1$. (b) Phase diagram at finite temperatures. All lines are for first-order phase transitions and the orange dots indicate the critical point (CP). } \end{figure} A more complex case of random interactions with the distribution function \begin{align} P(J_i)=\epsilon \delta(J_i-J)+ (1-\epsilon) \delta(J_i+J)\ \ \ (0\leq \epsilon \leq 1) \end{align} is interesting because this represents random and frustrated all-to-all interactions in the original problem, corresponding to the Sherrington-Kirkpatrick model of spin glasses \cite{Sherrington1975}. The ground-state free energy for this problem can be derived from eq.
(\ref{eq:free_energy_finite_temp}) with the result \begin{align} f =& 3 \tau m^4- \epsilon \sqrt{\left( 4\tau m^3+s J\right)^2+\left(1-s\right)^2} \notag \\ &-(1-\epsilon)\sqrt{\left( 4\tau m^3-s J\right)^2+\left(1-s\right)^2}. \end{align} The phase diagram is drawn in Fig. \ref{fig4a} for a set of values of $\epsilon$. \begin{figure}[tb] \centering \subfigure[]{ \includegraphics[height = 5cm]{Fig4a.pdf} \label{fig4a} } \subfigure[]{ \includegraphics[height = 5cm]{Fig4b.pdf} \label{fig4b} } \caption{(Color online) (a) Phase diagram for the mean-field model with randomness in the original interactions for the parameter $J=0.5$. Each curve indicates the line of first-order phase transitions. (b) Jump in magnetization $m$ along the first-order transition line. The same color code is used for $\epsilon$ as in (a).} \end{figure} It is observed that the lines of first-order phase transitions have breaks in the intermediate ranges of $s$ if $\epsilon$ is not close to 0.5. The latter is reasonable because $\epsilon=0.5$ represents a completely random spin-glass model, which is known to be a very difficult problem to solve \cite{nishimoriSGbook}. Nevertheless, even when the line of first-order transitions traverses the phase diagram, as in the case of $\epsilon=0.8$, the jump in magnetization across a first-order transition can be made much smaller than in the naive case of $\tau=s$ by an ingenious choice of the trajectory in the phase diagram, as seen in Fig. \ref{fig4b}, which shows the magnetization jump along the first-order transition line. This implies that the quantum tunneling probability, which strongly depends on the width of the energy barrier represented by the magnetization jump, can be made larger by an appropriate choice of a trajectory connecting $s=\tau=0$ and $s=\tau=1$. The extreme case of $\epsilon=0.5$ has no such properties.
\section{Conclusion} \label{section:conclusion} We have studied the scheme of Lechner, Hauke, and Zoller \cite{lechner2015quantum} to express long-range (all-to-all), two-body interactions by short-range, many-body interactions. A mean-field method was used to show that a non-linear driving of the four-body constraint term as a function of time is advantageous for improved performance. In particular, increasing the amplitude of the constraint term more slowly than that of the intrinsic problem term can lead to an exponential speedup according to the mean-field prediction for the phase diagram, although it would be difficult in practice to observe such a drastic effect because of non-ideal environmental effects as well as the limited applicability of mean-field theory. Since $\tau=s^r$ with $r>1.56$ means that the coefficient $\tau$ of the four-body constraint term increases more slowly than the coefficient $s$ of the main problem term, we may learn a generic lesson that constraints are better introduced later in the process of quantum annealing than the main problem Hamiltonian. In other words, one may first search for good solutions without constraints and then gradually select among the candidate solutions those that satisfy the constraints. It is an interesting future topic to test this idea for various problems. \section*{Acknowledgments} This paper is based on results obtained from a project commissioned by the New Energy and Industrial Technology Development Organization (NEDO). We have used the QuTiP python library~\cite{johansson2012qutip,johansson2013qutip} in some of the calculations.
\section{Introduction} In quantum chaos a lot of work is devoted to the investigation of the statistics of eigenvalues and properties of eigenfunctions of quantum systems whose classical counterpart is chaotic. For ergodic systems the behavior of almost all eigenfunctions in the semiclassical limit is described by the quantum ergodicity theorem, which was proven in \cite{Shn74,Shn93,Zel87,CdV85,HelMarRob87}, see also \cite{Sar95,KnaSin97} for general introductions. Roughly speaking, it states that for almost all eigenfunctions the expectation values of a certain class of quantum observables tend to the mean value of the corresponding classical observable in the semiclassical limit. Another commonly used description of a quantum mechanical state is the Wigner function \cite{Wig32}, which is a phase space representation of the wave function. According to the ``semiclassical eigenfunction hypothesis'' the Wigner function concentrates in the semiclassical limit on regions in phase space, which a generic orbit explores in the long time limit $t\to\infty$ \cite{Vor76,Vor77,Ber77b,Ber83}. For integrable systems the Wigner function $W(p,q)$ is expected to localize on the invariant tori, whereas for ergodic systems the Wigner function should semiclassically condense on the energy surface, i.e.\ $W(p,q) \sim \frac{1}{\operatorname{vol} (\Sigma_E )}\,\delta(H(p,q)-E)$, where $H(p,q)$ is the Hamilton function and $\operatorname{vol} (\Sigma_E )$ is the volume of the energy shell defined by $H(p,q)=E$. As we will show below the quantum ergodicity theorem is equivalent to the validity of the semiclassical eigenfunction hypothesis for almost all eigenfunctions if the classical system is ergodic. Thus a weak form of the semiclassical eigenfunction hypothesis is proven for ergodic systems. 
For practical purposes it is important to know not only the semiclassical limit of expectation values or Wigner functions, but also how fast this limit is achieved, because in applications one usually has to deal with finite values of $\hbar$, or finite energies, respectively. Thus the so-called rate of quantum ergodicity determines the practical applicability of the quantum ergodicity theorem. A number of articles have been devoted to this subject, see e.g.~\cite{Zel94a,Zel94b,LuoSar95, Sar95,EckFisKeaAgaMaiMue95} and references therein. The principal aim of this paper is to investigate the rate of quantum ergodicity numerically for different Euclidean billiards, and to compare the results with the existing analytical results and conjectures. A detailed numerical analysis of the rate of quantum ergodicity for hyperbolic surfaces and billiards can be found in \cite{AurTag97:p}. Two problems arise when one wants to study the rate of quantum ergodicity numerically. First the fluctuations of the expectation values around their mean can be so large that it is hard or even impossible to infer a decay rate. This problem can be overcome by studying the cumulative fluctuations, \begin{equation} S_1(E,A) = \frac{1}{N(E)} \sum_{E_n\le E} \left |\langle \psi_n , A\psi_n \rangle - \overline{\sigma (A)} \right | \;\;, \end{equation} where $\langle\psi_n , A\psi_n \rangle$ is the expectation value of the quantum observable $A$, $\overline{\sigma (A)}$ is the mean value of the corresponding classical observable $\sigma (A)$ and $N(E)$ is the spectral staircase function, see section \ref{sec:quantum-ergodicity} for detailed definitions. So $S_1(E,A)$ contains all information about the rate by which the quantum expectation values tend to the mean value, but is a much smoother quantity than the sequence of differences itself. 
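Given a list of eigenvalues and expectation values, $S_1(E,A)$ is straightforward to evaluate. A minimal sketch with synthetic data standing in for true billiard eigenfunctions (all names and the assumed fluctuation scale are ours):

```python
import numpy as np

def S1(E, energies, expvals, sigma_bar):
    """Cumulative fluctuations S_1(E,A) = (1/N(E)) sum_{E_n <= E} |<psi_n, A psi_n> - mean|."""
    mask = energies <= E
    N = np.count_nonzero(mask)
    return np.abs(expvals[mask] - sigma_bar).sum() / N if N else 0.0

# Synthetic stand-in data: expectation values fluctuating around the
# classical mean 0.5 with a scale that shrinks with energy (illustrative only).
rng = np.random.default_rng(0)
energies = np.sort(rng.uniform(1.0, 1000.0, 5000))
expvals = 0.5 + rng.normal(0.0, 1.0, energies.size) * energies**-0.25
```

Because $S_1$ averages over all levels below $E$, it decays smoothly even though the individual differences fluctuate strongly.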
Secondly, since the quantum ergodicity theorem makes only a statement about almost all eigenfunctions (i.e.\ a subsequence of density one, see below), there is the possibility of not quantum-ergodic subsequences of eigenfunctions. Such eigenfunctions can be, for example, so-called scarred eigenfunctions \cite{Hel84,McDKau88}, which are localized around unstable periodic orbits, or, in billiards with two parallel walls, so-called bouncing ball modes, which are localized on the family of bouncing ball orbits. Although such subsequences of exceptional eigenfunctions are of density zero, they may have a considerable influence on the behavior of $S_1(E,A)$. This is what we find in our numerical computations for the cosine, stadium and cardioid billiard, which are based on 2000 eigenfunctions for the cosine billiard and up to 6000 eigenfunctions for the stadium and cardioid billiard. In order to obtain a quantitative understanding of the influence of not quantum-ergodic subsequences on the rate, we develop a simple model for $S_1(E,A)$ which is tested successfully for the corresponding billiards. The application of this model in the case of the stadium billiard reveals, in addition to the bouncing ball modes, a subsequence of eigenfunctions which appear to be not quantum-ergodic in the considered energy range. A further interesting question is whether the boundary conditions have any influence on the rate of quantum ergodicity. This is indeed the case: for observables located near the boundary a strong influence on the behavior of $S_1(E,A)$ is observed. But for $E\to\infty$ this influence vanishes, so the asymptotic rate is independent of the boundary conditions. After having gained some knowledge of the rate by which the expectation values $\langle \psi_n,A\psi_n\rangle$ tend to their quantum-ergodic limit $\overline{\sigma (A)}$, one is interested in how the suitably normalized fluctuations $\langle \psi_n,A\psi_n\rangle -\overline{\sigma (A)}$ are distributed.
It is conjectured that they obey a Gaussian distribution, which we can confirm from our numerical data. The outline of the paper is as follows. In section \ref{sec:quantum-ergodicity} we first give a short introduction to the quantum ergodicity theorem and its implications. Then we discuss conjectures and theoretical arguments for the rate of quantum ergodicity given in the literature. In particular we study the influence of not quantum-ergodic eigenfunctions. In section \ref{sec:numerical} we give a detailed numerical study on the rate of quantum ergodicity for three Euclidean billiard systems for different types of observables, both in position and in momentum space. This includes a study of the influence of the boundary and a study of the fluctuations of the normalized expectation values around their mean. We conclude with a summary. Some of the more technical considerations using pseudodifferential operators are given in the appendix. \section{Quantum ergodicity} \label{sec:quantum-ergodicity} The classical systems under consideration are given by the free motion of a point particle inside a compact two--dimensional Euclidean domain $\Omega\subset \mathbb{R}^2$ with piecewise smooth boundary, where the particle is elastically reflected. The phase space is given by $\mathbb{R}^2\times \Omega $, and the Hamilton function is (in units $2m=1$) \begin{equation} H(p,q)=p^2 \,\, . \end{equation} The trajectories of the flow generated by $H(p,q)$ lie on surfaces of constant energy $E$, \begin{equation} \Sigma_E := \left\{ (p,q) \in \mathbb{R}^2\times \Omega \;\; | \;\; p^2=E \right\} \;\; , \end{equation} which obey the scaling property $\Sigma_E=E^{\frac{1}{2}}\Sigma_1 :=\{ (E^{\frac{1}{2}}p,q) \;\; | \;\; (p,q)\in \Sigma_1\}$ since the Hamilton function is quadratic in $p$. Note that $\Sigma_1$ is just $S^1\times \Omega$. 
The classical observables are functions on phase space $\mathbb{R}^2\times \Omega$, and the mean value of an observable $a(p,q)$ at energy $E$ is given by \begin{equation}\label{mval} \overline{a}^E=\frac{1}{\operatorname{vol}(\Sigma_E)} \int\limits_{\Sigma_E} a(p,q) \;\text{d}\mu = \frac{1}{\operatorname{vol}(\Sigma_E)} \iint\limits_{\mathbb{R}^2\times \Omega}a(p,q) \, \delta(p^2-E) \;\text{d} p\,\text{d} q \;\;, \end{equation} where $\text{d}\mu=\tfrac{1}{2} \,\text{d}\varphi\, \text{d} q $ is the Liouville measure on $\Sigma_E$ and $\operatorname{vol}(\Sigma_E)=\int_{\Sigma_E} \;\text{d}\mu$. The unusual factor $1/2$ in the Liouville measure is due to the fact that we have chosen $p^2$ and not $p^2/2$ as Hamilton function. For the mean value at energy $E=1$ we will for simplicity write $\overline{a}$. The corresponding quantum system which we will study is given by the Schr\"odinger equation (in units $\hbar=2m=1$) \begin{equation} - \Delta \psi_n(q) = E_n \psi_n(q) \;\;, \quad q\in \Omega \;\;, \end{equation} with Dirichlet boundary conditions: $\psi_n(q)=0$ for $q\in\partial\Omega$. Here $\Delta=\frac{\partial^2}{\partial q_1^2}+\frac{\partial^2}{\partial q_2^2}$ denotes the usual Laplacian, and we will assume that the eigenvalues are ordered as $E_1\leq E_2\leq E_3\ldots$ and that the eigenfunctions are normalized, $\int_\Omega |\psi_n(q)|^2 \;\text{d} q = 1$. The quantum ergodicity theorem describes the behavior of expectation values $\langle \psi_n , A \psi_n \rangle$ in the high energy (semiclassical) limit $E_n\to\infty$, and relates it to the classical mean value (\ref{mval}). The observable $A$ is assumed to be a pseudodifferential operator, so before we state the theorem we have to introduce the concept of pseudodifferential operators, see e.g.\ \cite{Hoe85a,Fol89,Tay81,Sch96:Diploma}. 
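Since $\text{d}\mu=\tfrac{1}{2}\,\text{d}\varphi\,\text{d} q$, the mean value (\ref{mval}) at $E=1$ is just a uniform average over the momentum direction $\varphi$ and the position $q\in\Omega$, which is easy to estimate by Monte Carlo. A sketch (domain, observable and names chosen purely for illustration):

```python
import numpy as np

def classical_mean(a, sample_q, n=200_000, seed=1):
    """Monte Carlo estimate of the Liouville average of a(phi, q) on Sigma_1.

    With dmu = (1/2) dphi dq this is a uniform average over the momentum
    direction phi in [0, 2*pi) and the position q in Omega.
    """
    rng = np.random.default_rng(seed)
    phi = rng.uniform(0.0, 2.0 * np.pi, n)
    q = sample_q(rng, n)
    return np.mean(a(phi, q))

# Illustration: Omega = unit square, observable chi_D(q) for D = {q_1 < 1/2};
# the exact mean value is vol(D)/vol(Omega) = 1/2.
unit_square = lambda rng, n: rng.uniform(0.0, 1.0, (n, 2))
est = classical_mean(lambda phi, q: (q[:, 0] < 0.5).astype(float), unit_square)
```

This is exactly the classical mean value to which, for ergodic systems, the quantum expectation values will be compared below.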
\subsection{Weyl quantization and pseudodifferential operators} It is well known that every continuous operator $A:C^{\infty}_0(\Omega )\to \mathcal{D}'(\Omega )$ is characterized by its Schwarz kernel $K_A\in \mathcal{D}' (\Omega \times \Omega )$ such that $A\psi (q)=\int_{\Omega}K_A(q,q')\psi (q') \, \text{d} q' $, where $\mathcal{D}'(\Omega )$ is the space of distributions dual to $C^{\infty}_0(\Omega )$, see e.g.\ \cite[chapter 5.2]{Hoe83a}. In Dirac notation one has $K_A(q,q')=\langle q|A|q'\rangle$. With such an operator $A$ one can associate its Weyl symbol, defined as \begin{equation}\label{Weylfunction} \operatorname{W} [A] (p,q):=\int\limits_{\mathbb{R}^2}\text{e}^{\text{i} q'p}\, K_A\left( q-\frac{q'}{2}, q+\frac{q'}{2}\right) \, \text{d} q' \;\;, \end{equation} which in general is a distribution \cite{Fol89}. An operator $A$ is called a pseudodifferential operator, if its Weyl symbol belongs to a certain class of functions. One of the simplest classes of symbols is $S^m(\mathbb{R}^2\times \Omega)$ which is defined as follows: $a(p,q)\in S^m(\mathbb{R}^2\times \Omega)$, if it is in $C^{\infty}(\mathbb{R}^2\times \Omega )$ and for all multiindices $\alpha$, $\beta$ the estimate \begin{equation}\label{defsm} \left|\frac{\partial^{|\alpha |}}{\partial p^{\alpha}} \frac{\partial^{|\beta |}}{\partial q^{\beta}}a(p,q) \right|\leq C_{\alpha ,\beta}\left(1+|p|^2\right)^{\frac{m-|\alpha |}{2}} \end{equation} holds. Here $m$ is called the order of the symbol. The main point in this definition is that differentiation with respect to $p$ lowers the order of the symbol. For instance polynomials of degree $m$ in $p$, $\sum_{|\alpha ' |\leq m}c_{\alpha '}(q)p^{\alpha '}$, whose coefficients satisfy $\left|\frac{\partial^{|\beta |}}{\partial q^{\beta}} c_{\alpha '}(q)\right| \leq C_{\alpha ' ,\beta}$ are in $S^m(\mathbb{R}^2\times \Omega)$. 
An operator $A$ is called a pseudodifferential operator of order $m$, $A\in S^{m}(\Omega)$, if its Weyl symbol belongs to the symbol class $S^m(\mathbb{R}^2\times \Omega)$, \begin{equation} A\in S^{m}(\Omega) \,\, :\,\Longleftrightarrow \quad \operatorname{W} [A] (p,q)\in S^m(\mathbb{R}^2\times \Omega)\,\, . \end{equation} For example, if the Weyl symbol is a polynomial in $p$, then the operator is in fact a differential operator, and so pseudodifferential operators are generalizations of differential operators. Further examples include complex powers of the Laplacian, $(-\Delta)^{z/2}\in S^{\Re z}(\Omega )$, see \cite{See67,See69,Tay81}. On the other hand, to any function $a\in S^m(\mathbb{R}^2\times \Omega)$ one can associate an operator $\Op a \in S^m(\Omega)$, \begin{equation} \Op a f (q):= \frac{1}{(2\pi)^2}\iint\limits_{\Omega\times\mathbb{R}^2} \operatorname{e}^{\text{i} (q-q')p} a\left(p, \frac{q+q'}{2}\right) f(q')\, \text{d} q' \text{d} p \,\, , \end{equation} such that its Weyl symbol is $a$, i.e.\ $\operatorname{W} [\Op a]=a$. This association of the operator $\Op a$ with the symbol $a$ is called the Weyl quantization of $a$. In practice one often encounters symbols with a special structure, namely those which have an asymptotic expansion in homogeneous functions in $p$, \begin{equation} a(p,q)\sim \sum_{k=0}^{\infty} a_{m-k}(p,q), \quad \text{with} \quad a_{m-k}(\lambda p,q)=\lambda^{m-k} a_{m-k}(p,q) \quad\text{for}\,\, \lambda >0 \;\;. \end{equation} Note that $m$ is not required to be an integer; all $m\in\mathbb{R}$ are allowed. Since the degree of homogeneity tends to $-\infty$ this can be seen as an expansion for $|p|\to\infty$; see \cite{Hoe85a,Fol89} for the exact definition of this asymptotic series. These symbols are often called classical or polyhomogeneous, and we will consider only operators with Weyl symbols of this type. The space of these operators will be denoted by $S^m_{\text{cl}}(\Omega)$.
If $A\in S^m_{\text{cl}}(\Omega)$ and $\operatorname{W} (A)\sim \sum_{k=0}^{\infty}a_{m-k}$, then the leading term $a_m(p,q)$ is called the principal symbol of $A$ and is denoted by $\sigma (A)(p,q):= a_m(p,q)$. It plays a distinguished role in the theory of pseudodifferential operators. One reason for this is that operations like multiplication or taking the commutator are rather complicated in terms of the symbol, but simple for the principal symbol. For instance one has \cite{Hoe85a,Fol89} \begin{equation}\label{prod} \sigma (AB)=\sigma (A)\sigma (B)\;\;, \qquad \sigma ([A,B])=\text{i}\{\sigma (A) ,\sigma (B)\}\;\;, \end{equation} where $\{\cdot ,\cdot \}$ is the Poisson bracket. It furthermore turns out that the principal symbol is a function on phase space, i.e.\ has the right transformation properties under coordinate transformations, whereas the full Weyl symbol does not have this property. So every operator $A$ with principal symbol $\sigma(A)$ can be seen as a quantization of the classical observable $\sigma (A)$. The existence of different operators with the same principal symbol just reflects the fact that the quantization process is not unique. Furthermore, one can show that the leading asymptotic behavior of expectation values of such operators for high energies only depends on the principal symbol, as it should be according to the correspondence principle. This is a special case of the Szeg\"o limit theorem, see \cite[chapter 29.1]{Hoe85b}. One advantage of the Weyl quantization over other quantization procedures is that the Wigner function of a state $|\psi\rangle $ appears naturally as the Weyl symbol of the corresponding projection operator $|\psi\rangle\langle \psi |$ \begin{equation}\label{def-Wigner} \operatorname{W}\left[ |\psi\rangle\langle \psi |\right](p,q)= \int\limits_{\mathbb{R}^2} \text{e}^{ \text{i} q' p}\, \, \overline{\psi} \left( q-\frac{q'}{2} \right) \psi\left(q+\frac{q'}{2}\right)\, \text{d} q' \;\;. 
\end{equation} In the following we will use for a Wigner function of an eigenstate $\psi_n$ the simpler notation $\operatorname{W}_n (p,q):= \operatorname{W}\left[ |\psi_n\rangle\langle \psi_n |\right](p,q)$. For the expectation value $\langle \psi ,A\psi \rangle $ one has the well-known expression in terms of the Weyl symbol $\operatorname{W} [A]$ and the Wigner function $\operatorname{W} \left[ |\psi\rangle\langle \psi |\right]$, \begin{equation}\label{WignerWeylexpvalue} \langle \psi , A\psi \rangle = \frac{1}{(2\pi)^2} \iint\limits_{\Omega\times\mathbb{R}^2} \operatorname{W} [A](p,q) \operatorname{W} \left[ |\psi\rangle\langle \psi |\right](p,q)\, \text{d} p\; \text{d} q \;\;. \end{equation} Pseudodifferential operators of order zero have a bounded Weyl symbol, and therefore a bounded principal symbol $\sigma (A)$; this boundedness of the classical observable carries over to the operator level: the operators in $S^0(\Omega )$ are bounded in the $L^2$--norm. The definition of pseudodifferential operators can be generalized to manifolds of arbitrary dimension; the preceding formulas are then valid in local coordinates. The symbols of these operators only live in local charts, but the principal symbols can be glued together to a function on the cotangent bundle $T^*\Omega$, which is the classical phase space\footnote{If one wants to realize the semiclassical limit not as the high energy limit, but as the limit of $\hbar\to 0$, one has to incorporate $\hbar$ explicitly in the quantization procedure. In the framework of pseudodifferential operators this has been done by Voros in \cite{Vor76,Vor77}, see also \cite{Robe87,KnaSin97}.}. \subsection{Quantum limits and the quantum ergodicity theorem} In quantum mechanics the states are elements of a Hilbert space, or more generally linear functionals on the algebra of observables. In classical mechanics the pure states are points in phase space, and the observables are functions on phase space.
More generally the states are measures on phase space, which are linear functionals on the algebra of observables. The pure states are then represented as delta functions. The eigenstates of a Hamilton operator are those which are invariant under the time evolution defined by $H$. In the semiclassical limit they should somehow converge to measures on phase space which are invariant under the classical Hamiltonian flow. The measures which can be obtained as semiclassical limits of quantum eigenstates are called quantum limits. More concretely, the quantum limits can be described as limits of sequences of Wigner functions. Let $\{\psi_n\}_{n\in\mathbb{N}}$ be an orthonormal basis of eigenfunctions of the Dirichlet Laplacian $-\Delta$, and $\{ \operatorname{W}_n \}_{n\in\mathbb{N}}$ the corresponding set of Wigner functions, see equation (\ref{def-Wigner}). We first consider expectation values for operators of order zero, and then extend the results to operators of arbitrary order. Because pseudodifferential operators of order zero are bounded, the sequence of expectation values $\{\langle \psi_{n}, A\psi_{n}\rangle\}_{n\in\mathbb{N}}$ is bounded too. Every function $a\in C^{\infty}(\Sigma_1 )$ can be extended to a function in $C^{\infty}( (\mathbb{R}^2\backslash \{ 0\})\times \Omega)$ by requiring it to be homogeneous of degree zero in $p$. Via the quantization $\Op a$\footnote{Strictly speaking, $a$ is not an allowed symbol because it is not smooth at $p=0$. Let $\chi (p)\in C^{\infty}(\mathbb{R}^2)$ satisfy $\chi (p)=0$ for $|p|\leq 1/4$ and $\chi (p)=1$ for $|p|\geq 1/2$. By multiplying $a$ with this excision function $\chi (p)$ we get a symbol $\chi a \in S^0(\mathbb{R}^2\times \Omega )$, whose Weyl quantization $\Op{\chi a}$ is in $S^0( \Omega )$.
But the semiclassical properties of $\Op{\chi a}$ are independent of the special choice of $\chi (p)$, as can be seen, e.g., from \eqref{eq:distribution}, since $W_n$ is concentrated on the energy shell $\Sigma_{E_n}$ for $n\to\infty$. Therefore we will proceed for simplicity with $a$ instead of $\chi a$. } of $a$ and equation (\ref{WignerWeylexpvalue}) one can view the Wigner function $\operatorname{W}_n (p,q)$ as a distribution on $ C^{\infty}(\Sigma_1 )$, \begin{equation}\label{eq:distribution} a\mapsto \langle \psi_n,\Op { a}\psi_n\rangle =\frac{1}{(2\pi)^2}\iint\limits_{\Omega\times \mathbb{R}^2}a(p,q)\operatorname{W}_n (p,q)\; \text{d} p\, \text{d} q \,\, . \end{equation} The sequence of these distributions is bounded because the operators $\Op{a}$ are bounded. The accumulation points of $\{ \operatorname{W}_n (p,q)\}_{n\in\mathbb{N}}$ are called quantum limits $\mu _k(p,q)$, and we label them by $k\in I $, where $I$ is some index set. Corresponding to the accumulation points $\mu _k(p,q)$, the sequence $\{ \operatorname{W}_n (p,q)\}_{n\in\mathbb{N}}$ can be split into disjoint convergent subsequences $\bigcup_{k\in I}\{ \operatorname{W}_{n_j^k} (p,q)\}_{j\in\mathbb{N}}=\{ \operatorname{W}_n (p,q)\}_{n\in\mathbb{N}}$. That is, for every $k$ we have \begin{equation} \lim_{j\to\infty} \iint\limits_{\Omega\times\mathbb{R}^2} a(p,q)\operatorname{W}_{n_j^k} (p,q)\, \text{d} p\; \text{d} q = \iint\limits_{\Omega\times\mathbb{R}^2} a(p,q)\mu _k(p,q)\, \text{d} p\; \text{d} q \,\, , \end{equation} for all $a\in C^{\infty}(\Sigma_1 )$ viewed as homogeneous functions of degree zero on phase space. This splitting is unique up to a finite number of terms, in the sense that for two different splittings the subsequences belonging to the same accumulation point differ only by a finite number of terms. As has been shown in \cite{Zel90}, the quantum limits $\mu _k$ are measures on $C^{\infty}(\Sigma_1 )$ which are invariant under the classical flow generated by $H(p,q)$.
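The Wigner transform (\ref{def-Wigner}) entering these distributions can be evaluated directly by quadrature. As a one-dimensional analogue (a sketch in the convention of (\ref{def-Wigner}) with $\hbar=1$; the Gaussian test state and all names are ours), one may check it against the closed-form Wigner function $2\operatorname{e}^{-(q^2+p^2)}$ of the Gaussian ground state:

```python
import numpy as np

def wigner(psi, q, p, L=12.0, n=2001):
    """W(p,q) = int e^{i q' p} conj(psi)(q - q'/2) psi(q + q'/2) dq'
    for a 1D wavefunction given as a callable (trapezoidal quadrature)."""
    qp = np.linspace(-L, L, n)
    f = np.exp(1j * qp * p) * np.conj(psi(q - qp / 2)) * psi(q + qp / 2)
    dq = qp[1] - qp[0]
    return ((f.sum() - 0.5 * (f[0] + f[-1])) * dq).real

# Gaussian ground state: in this (hbar = 1) convention its Wigner
# function is W(p,q) = 2 exp(-(q^2 + p^2)).
gauss = lambda x: np.pi**-0.25 * np.exp(-x**2 / 2)
w = wigner(gauss, 0.5, -0.3)
```

The truncation length $L$ and grid size $n$ are chosen so that the Gaussian tails are negligible; for oscillatory eigenfunctions both would have to grow with the energy.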
One of the main questions in the field of quantum chaos is which classical invariant measures on $C^{\infty}(\Sigma_1 )$ can actually occur as quantum limits of Wigner functions. E.g., if the orbital measure along an unstable periodic orbit occurs as quantum limit $\mu _k$, then the corresponding subsequence of eigenfunctions has to show an enhanced probability, i.e.\ scarring, along that orbit. Given any quantum limit $\mu_k$ one is furthermore interested in the counting function $N_k(E):=\# \{ E_{n_j^k}\leq E \}$ for the corresponding subsequence $\{\operatorname{W}_{n_j^k}\}_{j \in\mathbb{N}}$ of Wigner functions. Since the subsequence $\{\operatorname{W}_{n_j^k}\}_{j \in\mathbb{N}}$ is unique up to a finite number of elements, the corresponding counting function $N_k(E)$ is unique up to a constant. One should keep in mind that we have defined the quantum limits and their counting functions here with respect to one chosen orthonormal basis of eigenfunctions $\{\psi_n(q)\}_{n\in\mathbb{N}}$. If one takes a different orthonormal basis of eigenfunctions $\{\tilde{\psi}_n(q)\}_{n\in\mathbb{N}}$, the counting functions corresponding to the quantum limits, or even the quantum limits themselves, may change. So when studying the set of all quantum limits one has to take all bases of eigenfunctions into account. The lift of any quantum limit from $\Sigma_1$ to the whole phase space $\mathbb{R}^2\times \Omega$ follows straightforwardly from some well-known methods in pseudodifferential operator theory, as shown in appendix \ref{app:generalizations-of-the-qet}. For a pseudodifferential operator of order $m$, $A\in S^m_{\text{cl}}(\Omega )$, one gets for the expectation values \begin{equation} \lim_{j\to\infty}E_{n_j^k}^{-m/2} \langle \psi_{n_j^k}, A\psi_{n_j^k}\rangle = \mu _k (\sigma (A)|_{\Sigma_1}) = \int\limits_{\Sigma_1}\sigma (A)(p,q)\mu _k(p,q)\, \text{d}\mu \,\, .
\end{equation} In terms of the Wigner functions this expression can be written as (see appendix \ref{app:connection-to-sc-eigenfunction-hypothesis}) \begin{equation} \lim_{j\to\infty}E_{n_j^k}^{\frac{n}{2}}\operatorname{W}_{n_j^k}(E_{n_j^k}^{\frac{1}{2}}p,q) = \mu _k(p,q)\, \frac{\delta (H(p,q)-1)}{\operatorname{vol} (\Sigma_1 )} \,\, . \end{equation} Without the scaling of $p$ with $\sqrt{E}$ we have \begin{equation}\label{eq:Wigner-qlim} \operatorname{W}_{n_j^k}(p,q)\sim \mu _k(p,q)\frac{\delta (H(p,q)-E_{n_j^k})}{\operatorname{vol} (\Sigma_{E_{n_j^k}} )}\,\, , \end{equation} for $E_{n_j^k}\to\infty$, and $\mu _k(p,q)$ is extended from $\Sigma_1$ to the whole phase space by requiring it to be homogeneous of degree zero in $p$. For ergodic systems the only invariant measure whose support has nonzero Liouville measure is the Liouville measure itself. For these systems the quantum ergodicity theorem states that almost all eigenfunctions have the Liouville density as quantum limit. {\bf Quantum ergodicity theorem }\cite{ZelZwo96}: {\it Let $\Omega\subset\mathbb{R}^2$ be a compact 2-dimensional domain with piecewise smooth boundary, and let $\{\psi_n \}$ be an orthonormal set of eigenfunctions of the Dirichlet Laplacian $-\Delta$ on $\Omega$. If the classical billiard flow on the energy shell $\Sigma_1=S^1\times \Omega$ is ergodic, then there is a subsequence $\{n_j\}\subset\mathbb{N}$ of density one such that \begin{equation} \label{eq:qet} \lim_{j\to\infty} \; \langle \psi_{n_j}, A\psi_{n_j} \rangle = \overline{\sigma(A)} \;\; , \end{equation} for every polyhomogeneous pseudodifferential operator $A \in S^0(\Omega )$ of order zero, whose Schwarz kernel $K_A(q,q')=\langle q | A |q'\rangle$ has support in the interior of $\Omega\times\Omega$.
Here $\sigma (A)$ is the principal symbol of $A$ and $\overline{\sigma(A)}$ is its classical expectation value, see eq.~(\ref{mval}).} A subsequence $\{n_j\} \subset \mathbb{N}$ has density one if \begin{equation} \lim_{E\to\infty} \frac{\#\{ n_j \;|\; E_{n_j} < E \}} {N(E)} = 1 \;\;, \end{equation} where $ N(E) := \#\{ n \;|\; E_n < E \}$ is the spectral staircase function, counting the number of energy levels below a given energy $E$. So almost all expectation values of a quantum observable tend to the mean value of the corresponding classical observable. The special situation that there is only one quantum limit, i.e.\ the Liouville measure, is called unique quantum ergodicity. This behavior is conjectured to be true for the eigenfunctions of the Laplacian on a compact manifold of negative curvature \cite{Sar95,LuoSar95}. We have stated here for simplicity the quantum ergodicity theorem only for two-dimensional Euclidean domains, but it is true in far more general situations. For compact Riemannian manifolds without boundary the quantum ergodicity theorem was given by Shnirelman \cite{Shn74}, Zelditch \cite{Zel87} and Colin de Verdi\`ere \cite{CdV85}. For a certain class of manifolds with boundary it was proven in \cite{GerLei93}, without the restriction on the support of the Schwarz kernel of the operator $A$. The techniques of \cite{GerLei93} can possibly be used to remove these restrictions here as well, see the remarks in \cite{ZelZwo96}. One can also allow more general Hamilton operators; on manifolds without boundary every elliptic selfadjoint operator in $S_{\text{cl}}^2(\Omega)$ is allowed, and on manifolds with boundary at least every second order elliptic selfadjoint differential operator with smooth coefficients is allowed. This includes for instance a free particle in a smooth potential or in a magnetic field.
In the semiclassical setting, where the Hamilton operator and the observables depend explicitly on $\hbar$, a similar theorem for the limit $\hbar\to 0$ has been proven in \cite{HelMarRob87}, see also \cite{KnaSin97} for an introduction. In light of the correspondence principle the quantum ergodicity theorem appears very natural: Classical ergodicity means that for a particle moving along a generic trajectory with energy $E$, the probability of finding it in a certain region $U\subset\Sigma_E$ of phase space is proportional to the volume $\operatorname{vol} (U)$ of that region, but does not depend on the shape or location of $U$. The corresponding quantum observable is the quantization of the characteristic function $\chi_U$ of $U$, and by the correspondence principle one expects that the expectation value of this observable in the state $\psi_n$ tends to the classical expectation value for $E_n\to\infty$. And this is the content of the quantum ergodicity theorem. In terms of the Wigner functions $\operatorname{W}_n$ the theorem gives, see eq.~\eqref{eq:Wigner-qlim}, \begin{equation} \operatorname{W}_{n_j}(p,q )\sim \frac{\delta (H(p,q)-E_{n_j})}{\operatorname{vol} (\Sigma_{E_{n_j}}) } \,\, , \end{equation} for $j\to\infty$, for a subsequence $\{n_j\}\subset\mathbb{N}$ of density one. So almost all Wigner functions become equidistributed on the energy shells $\Sigma_{E_{n_j}}$. That is, for ergodic systems the validity of the semiclassical eigenfunction hypothesis for a subsequence of density one is equivalent to the quantum ergodicity theorem. \subsection{Some examples \label{sec:Some-examples}} As an illustration of the quantum ergodicity theorem, and for later use, we now consider some special observables whose symbol only depends on the position $q$ or on the momentum $p$. 
If the symbol only depends on the position $q$, i.e.\ $a(p,q)=a(q)$, the operator is just the multiplication operator with the function $a(q)$, and one has \begin{equation} \label{num1} \langle \psi ,\text{A} \psi \rangle = \langle \psi ,a \psi \rangle = \int\limits_\Omega a (q) \, |\psi (q)|^2 \; \text{d} q \;\; . \end{equation} In the special case that one wants to measure the probability of the particle to be in a given domain $D\subset \Omega$, the symbol is the characteristic function of $D$, i.e.\ $a(p,q) = \chi_D(q)$. Then $\Op{\chi_D}$ is not a pseudodifferential operator, but nevertheless the quantum ergodicity theorem remains valid for this observable \cite{CdV85}. Since the principal symbol is then $\sigma (\text{A}) = \chi_D $ we obtain for its mean value \begin{equation} \label{eq:qet-rhs} \overline{\sigma (\text{A} )}=\frac{1}{\operatorname{vol} (\Sigma_1)} \int\limits_{S^1\times\Omega} \chi_D (q) \, \text{d}\mu = \frac{\operatorname{vol} (D) }{\operatorname{vol} (\Omega) } \;\; . \end{equation} Thus the quantum ergodicity theorem gives for this case \begin{equation} \label{eq:qet-position} \lim_{j\to\infty} \int\limits_D |\psi_{n_j}(q)|^2 \;\text{d} q = \frac{\operatorname{vol}{(D)}}{\operatorname{vol}{(\Omega )}} \end{equation} for a subsequence $\{n_j\}\subset\mathbb{N}$ of density one. As discussed at the end of the previous section this is what one should expect from the correspondence principle. If instead the symbol depends only on the momentum $p$, i.e.\ $a(p,q)=a(p)$, one obtains from \eqref{WignerWeylexpvalue} for the expectation value \begin{equation} \langle \psi , \text{A}\psi \rangle = \int\limits_{\mathbb{R}^2} a(p) \, |\widehat{\psi} (p)|^2 \; \text{d} p \;\;.
\end{equation} In the same way as in \cite{CdV85} for a characteristic function in position space, it follows that the quantum ergodicity theorem remains valid for the case where $a(p)=\chi_{C(\theta,\Delta\theta)}(p)$ is the characteristic function of a circular sector in momentum space with opening angle $\Delta\theta$ centered at the direction $\theta$. In polar coordinates this is given by the set \begin{equation}\label{circ-sector} C(\theta,\Delta\theta) := \left\{ (r,\varphi) \;\; | \;\; r \in \mathbb{R}^+, \; \varphi \in[\theta-\Delta\theta/2,\theta +\Delta\theta/2] \right\} \;\;. \end{equation} The mean value of the principal symbol then reduces to \begin{equation} \overline{\sigma (\text{A})} = \frac{1}{\operatorname{vol} (\Sigma_1 )} \int\limits_{S^1\times\Omega} \chi_{C(\theta,\Delta\theta)} (p) \, \text{d}\mu = \frac{\Delta\theta }{2\pi } \;\; , \end{equation} which does not depend on $\theta$. Thus the quantum ergodicity theorem reads in the case of a characteristic function in momentum space \begin{equation} \label{eqn:qet-ft-version} \lim_{j \to \infty} \int\limits_{C(\theta,\Delta\theta)} | \widehat{\psi}_{n_j}(p) |^2 \; \text{d} p = \frac{\Delta\theta}{2\pi} \end{equation} for a subsequence $ \{n_j\}\subset\mathbb{N}$ of density one. This means that quantum ergodicity implies an asymptotic equidistribution of the momentum directions of the particle. It is instructive to compute the observables discussed above for certain integrable systems. First consider a two-dimensional torus. The eigenfunctions, labeled by the two quantum numbers $n,m\in \mathbb{Z}$, read $\psi_{n,m}(x,y)=\exp (2\pi\text{i} n x) \exp(2\pi\text{i} m y)$. Obviously, these are ``quantum-ergodic'' in position space, since $|\psi_{n,m}(x,y)|^2=1$, but they are not quantum-ergodic in momentum space. Even in position space the situation changes if one takes a different orthogonal basis of eigenfunctions (note that the multiplicities tend to infinity), see \cite{Jak97} for a complete discussion of the quantum limits on tori.
A similar example is provided by the Dirichlet or Neumann eigenfunctions of a rectangular billiard. The circle billiard shows a converse behavior. Let the radius be one, then the eigenfunctions are given in polar coordinates by \begin{equation} \psi_{kl}(r,\phi) = N_{kl} J_l(j_{k,l} r) \; \text{e}^{\text{i} l \phi} \;\; . \end{equation} Here $j_{k,l}$ is the $k$--th zero of the Bessel function $J_l(x)$, $x>0$, and $N_{kl}$ is a normalization constant. These eigenfunctions do not exhibit quantum ergodicity in position space. But for their Fourier transforms one can show that \begin{align} \int\limits_{C(\theta,\Delta\theta)} \left| \widehat{\psi}_{kl}(p)\right|^2 \; \text{d} p = \frac{\Delta\theta}{2\pi} \;\;, \end{align} and so we have ``quantum ergodicity'' in momentum space. A remarkable example was discussed by Zelditch \cite{Zel92}. He considered the Laplacian on the sphere $S^2$. Since the multiplicity of the eigenvalue $l(l+1)$ is $2l+1$, which tends to infinity as $l\to\infty$, one can choose infinitely many orthonormal bases of eigenfunctions. Zelditch showed that almost all of these bases exhibit quantum ergodicity in the whole phase space. Although this is clearly an exceptional case due to the high multiplicities, it shows that one has to be careful with the notion of quantum ergodicity. In a recent work Jakobson and Zelditch \cite{JakZel97:p} have furthermore shown that for the sphere all invariant measures on phase space do occur as quantum limits. One might conjecture that for an integrable system all classical measures which are invariant under the flow and all symmetries of the Hamilton function do occur as quantum limits. The general question whether quantum ergodicity for all orthonormal bases of eigenfunctions in the whole phase space implies ergodicity of the classical system is still open. 
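The circle-billiard eigenfunctions above are easy to generate numerically, e.g.\ with SciPy's Bessel routines; the following sketch (helper names are ours) checks the Dirichlet condition at $r=1$ and estimates the normalization constant $N_{kl}$ by quadrature over the disk:

```python
import numpy as np
from scipy.special import jv, jn_zeros

def normalization(k, l, n=20_000):
    """N_{kl} from 2*pi * int_0^1 J_l(j_{k,l} r)^2 r dr = 1/N_{kl}^2 (midpoint rule)."""
    j_kl = jn_zeros(l, k)[-1]            # k-th positive zero of J_l
    r = (np.arange(n) + 0.5) / n
    norm2 = 2.0 * np.pi * np.mean(jv(l, j_kl * r)**2 * r)
    return 1.0 / np.sqrt(norm2)

j_31 = jn_zeros(1, 3)[-1]                # third zero of J_1
boundary_value = jv(1, j_31)             # Dirichlet condition psi = 0 at r = 1
```

The quadrature can be checked against the closed form $N_{kl}=1/(\sqrt{\pi}\,|J_{l+1}(j_{k,l})|)$, which follows from the standard Bessel integral $\int_0^1 J_l(j r)^2\, r\,\text{d} r=\tfrac12 J_{l+1}(j)^2$ at a zero $j$ of $J_l$.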
\subsection{The rate of quantum ergodicity \label{sec:rate-of-quantum-ergodicity}} We now come to the central question of the approach to the quantum-ergodic limit. First we note that an equivalent formulation of the quantum ergodicity theorem, which avoids choosing subsequences, is given by \begin{equation} \label{eqn:qet-sum-version} \lim_{E \to\infty} \frac{1}{N(E)} \sum_{E_n \le E} \left| \langle \psi_n, A\psi_n \rangle -\overline{\sigma(A)} \right|= 0 \;\;. \end{equation} This equivalence follows from a standard lemma concerning the influence of subsequences of density zero on the average of a sequence, see e.g.\ \cite[Theorem 1.20]{Wal82}. In order to characterize the rate of approach to the ergodic limit the quantities \begin{equation} S_m(E,A) = \frac{1}{N(E)} \sum_{E_n\le E} \left |\langle \psi_n , A\psi_n \rangle - \overline{\sigma (A)} \right |^m \end{equation} have been proposed and studied in \cite{Zel94a,Zel94b}. Quantum ergodicity is equivalent to $S_m(E,\text{A}) \to 0$ for $E\to\infty$ and $m\geq 1$. Let us first summarize some of the known results for the rate of quantum ergodicity. Zelditch proved in \cite{Zel94a} by relating the rate of quantum ergodicity to the rate of convergence of classical expectation values, and using a central limit theorem for the classical flow, that for compact manifolds of negative curvature $S_m(E,\text{A})=O((\log E)^{-m/2})$. However this bound is believed to be far from being sharp. Moreover in \cite{Zel94b} lower bounds for $S_m(E,\text{A})$ have been derived. In \cite{LuoSar95,Jak94,Jak97b} it is proven for a Hecke basis of eigenfunctions on the modular surface that $S_2(E,A) <C(\varepsilon) E^{-\frac{1}{2}+\varepsilon}$ for every $\varepsilon >0$. 
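In the numerical sections below the quantities $S_m(E,A)$ are evaluated directly from lists of eigenvalues and diagonal matrix elements. A minimal sketch of such an evaluation (the variable names and the synthetic data are ours, for illustration only):

```python
import numpy as np

def S_m(energies, expvals, mean_sigma, E, m=1):
    """S_m(E,A) = (1/N(E)) sum_{E_n <= E} |<psi_n, A psi_n> - mean(sigma(A))|^m."""
    energies = np.asarray(energies)
    expvals = np.asarray(expvals)
    mask = energies <= E
    return np.mean(np.abs(expvals[mask] - mean_sigma) ** m)

# synthetic spectrum whose fluctuations shrink like E^{-1/4} around the mean 0.5
rng = np.random.default_rng(1)
E_n = np.sort(rng.uniform(1.0, 1.0e4, 4000))
a_n = 0.5 + E_n ** -0.25 * rng.normal(size=E_n.size)
print(S_m(E_n, a_n, 0.5, 1.0e3), S_m(E_n, a_n, 0.5, 1.0e4))   # decreasing with E
```

For real spectra the inputs are simply the computed eigenvalues $E_n$ and expectation values $\langle\psi_n, A\psi_n\rangle$.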
It is furthermore conjectured \cite{Sar95,LuoSar95} that this estimate is also valid for the eigenfunctions of the Laplacian on compact manifolds of negative curvature, and moreover that it is satisfied for each eigenstate individually: $|\langle \psi_n, A\psi_n\rangle -\overline{\sigma (A)}|< C(\varepsilon)E^{-\frac{1}{4}+\varepsilon}$ for every $\varepsilon >0$.
In \cite{EckFisKeaAgaMaiMue95} a study of $S_2(E,A)$ based on the Gutzwiller trace formula has been performed. For completely desymmetrized systems having only isolated and unstable periodic orbits, the so-called diagonal approximation for a double sum over periodic orbits and further assumptions lead to \begin{equation}\label{eq:rate-hypsyst} S_2(E,A)\sim g\frac{2}{\operatorname{vol} ( \Omega) }\, \rho(A) \,E^{-\frac{1}{2}} \;\; . \end{equation} Here $g=2$ if the system is invariant under time reversal, and otherwise $g=1$, and $\rho(A)$ is the variance of the fluctuations of $ A_{\gamma}=\tfrac{1}{T_{\gamma}}\int_0^{T_{\gamma}} \sigma (A)(\gamma (t))\, \text{d} t$ around their mean $\overline{\sigma (A)}$, computed using all periodic orbits $\gamma$ of the system. More precisely, it is assumed that $|A_{\gamma}-\overline{\sigma (A)}|^2\sim \rho(A) /T_{\gamma}$, where $T_{\gamma}$ denotes the primitive length of $\gamma$. In the general case where not all periodic orbits are isolated and unstable it is argued that the rate of quantum ergodicity is related to the decay rate of the classical autocorrelation function $C(\tau)$ \cite{EckFisKeaAgaMaiMue95}.
If $C(\tau) \sim \tau^{-\eta}$ then the result is \begin{equation}\label{eq:gen-rate} S_2(E,A) \sim E^{-1/2}\int\limits_0^{T_H}C(\tau )\, \text{d} \tau \sim \begin{cases} E^{-1/2} & \text{ for } \eta>1 \\ \ln \Big(\frac{\operatorname{vol} (\Omega)}{2} E^{1/2}\Big) E^{-1/2} & \text{ for } \eta=1 \\ E^{-\eta/2} & \text{ for } \eta<1 \end{cases} \;\; , \end{equation} where $T_H=\frac{\operatorname{vol} (\Omega)}{2} E^{1/2}$ is the so-called Heisenberg time.
For the stadium billiard \cite{Bun74} and the Sinai billiard \cite{Sin70} it is believed that the correlations decay as $\sim 1/\tau$, see \cite{Bun85} and \cite{DahArt96} for numerical results for the Sinai billiard. Thus for both the stadium and the Sinai billiard a logarithmic contribution to the decay of $S_2(E,A)$ is expected. Also a Gaussian random behavior of the eigenfunctions \cite{Ber77b} implies in position space a rate $S_2(E,A)=O(E^{-\frac{1}{2}})$, which follows from \cite[chapter IV]{Sre94}, see also \cite{EckFisKeaAgaMaiMue95,SreSti96}. Random matrix theory (see \cite[section VII]{BroFloFreMelPanWon81}) predicts for suitable observables the same rate $S_2(E,A)=O(E^{-\frac{1}{2}})$, and furthermore Gaussian fluctuations of $(\langle \psi_n, A\psi_n\rangle -\overline{\sigma(A)})/\sqrt{S_2(E_n, A)}$ around zero, which we study numerically in section~\ref{sec:fluct-of-exp-values}.
Since the systems under investigation possess subsequences of eigenfunctions which are not quantum-ergodic, we now discuss in general the influence of such subsequences on the behavior of $S_1(E,A)$. To this end we split the sequence of eigenfunctions into two subsequences. The first, denoted by $\{\psi_{n'}\}$, contains all quantum-ergodic eigenfunctions, i.e.\ the corresponding quantum limit of the associated sequence of Wigner functions is the Liouville measure. The counting function of this subsequence will be denoted by $N'(E)$. The other sequence $\{\psi_{n''}\}$ contains all not quantum-ergodic eigenfunctions.
This subsequence may have different quantum limits $\mu _k$ which are all different from the Liouville measure. Their counting function will be denoted by $N''(E)$. Examples would be a subsequence of bouncing ball modes or eigenfunctions scarred by an unstable periodic orbit. Similarly we split $S_1(E,A)$ into two parts corresponding to the two classes of eigenfunctions. Due to the separation $N(E)=N'(E)+N''(E)$ we obtain \begin{equation} \label{eq:split-of-S1} \begin{split} S_1(E,A)=\frac{1}{N(E)}\sum_{E_n\leq E} \left|\langle \psi_n ,A\psi_n \rangle -\overline{\sigma (A)}\right| &= \frac{N'(E)}{N(E)}S_1'(E,A)+\frac{N''(E)}{N(E)}S_1''(E,A)\\ &=\left(1-\frac{N''(E)}{N(E)}\right)S_1'(E,A)+\frac{N''(E)}{N(E)}S_1''(E,A)\,\, . \end{split} \end{equation} Here we defined \begin{align} S_1'(E,A)&:=\frac{1}{N'(E)} \sum_{E_{n'}\leq E}\left|\langle \psi_{n'} ,A\psi_{n'} \rangle -\overline{\sigma (A)}\right|\,\, ,\\ S_1''(E,A)&:=\frac{1}{N''(E)}\sum_{E_{n''}\leq E} \left|\langle \psi_{n''} ,A\psi_{n''} \rangle -\overline{\sigma (A)}\right|\,\, . \end{align} So the behavior of $S_1(E,A)$ is given in terms of the three quantities $S_1'(E,A)$, $S_1''(E,A)$ and $N''(E)$, which describe the behavior of the quantum-ergodic and the not quantum-ergodic subsequences, respectively. The behavior of $S_1 ''(E,A)$ can be described in terms of the non ergodic quantum limits and their counting functions. We split the not quantum-ergodic subsequence into convergent subsequences corresponding to the quantum limits $\mu _k\neq \mu $, $\{\psi_{n''}\}=\bigcup_k\{\psi_{n^k_j}\}_{j\in\mathbb{N}}$, with $N''(E)=\sum_k N_k(E)$, and $\langle \psi_{n^k_j} ,A\psi_{n^k_j} \rangle -\overline{\sigma (A)}\sim \mu _k \Bigl(\sigma (A)-\overline{\sigma (A)}\Bigr)$. 
Then $S_1''(E,A)$ is asymptotically given by \begin{equation} S_1''(E,A)\sim \frac{1}{\sum_k N_k(E)}\sum_k N_k(E) \left|\mu _k \Bigl(\sigma (A)-\overline{\sigma (A)}\Bigr)\right|\,\, , \end{equation} and the limit \begin{equation}\label{eq:nu-pp-def} \nu ''(A):=\lim_{E\to\infty}S_1''(E,A) \end{equation} only depends on $\sigma (A)$ and defines an invariant measure on $\Sigma_1$.
Let us assume for the quantum-ergodic part of $S_1(E,A)$ a certain rate of decay, \begin{equation}\label{quant-erg-rate} S_1'(E,A)=\nu '(A)E^{-\alpha }+o(E^{-\alpha })\,\, , \end{equation} and for the counting function of the not quantum-ergodic states \begin{equation} \label{eq:counting-fct-non-qerg-states} N''(E) =c E^{\beta }+o(E^{\beta }) \,\, , \end{equation} where by quantum ergodicity $\alpha >0$ and $\beta<1$. With Weyl's law $N(E)=\frac{\operatorname{vol} (\Omega )}{4\pi}\, E +O(E^{\frac{1}{2}})$ we then obtain in eq.~\eqref{eq:split-of-S1} for $S_1(E,A)$ \begin{equation}\label{eq:reduced-rate-due-to-nerg-states} S_1(E,A)=\nu '(A)E^{-\alpha }+\frac{4\pi c}{\operatorname{vol} (\Omega )}\nu ''(A)E^{\beta -1}+o(E^{-\alpha })+ o(E^{\beta -1})\,\, . \end{equation} One sees that if $-\alpha >\beta -1$, the asymptotic behavior of $S_1(E,A)$ is governed by the quantum-ergodic sequences of eigenfunctions, whereas in the opposite case, $-\alpha \le \beta -1$, the not quantum-ergodic sequences dominate the behavior asymptotically. Especially if $\beta -1> - 1/4$, i.e.\ $\beta>3/4$, the rate of quantum ergodicity cannot be $O(E^{-\frac{1}{4}})$.
To obtain a simple model for the rate of quantum ergodicity, let us now assume that the conjectured optimal rate is valid for the subsequence of quantum-ergodic eigenfunctions, that is $\alpha =1/4$ can be chosen in eq.~(\ref{quant-erg-rate}).
To be more precise it should be $S_1'(E,A)=O(E^{-1/4+\varepsilon})$ for every $\varepsilon >0$, but for comparison with numerical data on a finite energy range we will assume that $\varepsilon =0$. For the not quantum-ergodic eigenfunctions the knowledge of their counting function $N''(E)$ is very poor; in general it is unknown. Thus if we neglect the higher order terms in eqs.~(\ref{quant-erg-rate}) and (\ref{eq:counting-fct-non-qerg-states}) we obtain from \eqref{eq:split-of-S1} and \eqref{eq:nu-pp-def} a simple model for the behavior of $S_1(E,A)$, \begin{equation}\label{mod-rate} S_1^{\text{model}}(E,A)=\left(1-\frac{4\pi c}{\operatorname{vol}(\Omega)}E^{\beta -1}\right)\,\nu '(A)E^{-\frac{1}{4}} +\frac{4\pi c}{\operatorname{vol} (\Omega )}\nu ''(A)E^{\beta -1}\,\, . \end{equation} The first factor in parentheses will only be important if $\beta$ is close to 1.
We will now discuss the influence of a special type of not quantum-ergodic subsequences in more detail. In billiards with two parallel walls one has a subsequence of so-called bouncing ball modes \cite{McDKau79}, which are localized on the bouncing ball orbits, see fig.~\ref{fig:stad-waves}b) for an example of such an eigenfunction. In our previous work \cite{BaeSchSti97a} we showed that for every $\beta <1$ there exists an ergodic billiard which possesses a not quantum-ergodic subsequence, given by bouncing ball modes, whose counting function is asymptotically of order $E^{\beta}$. But for $\beta =1-\delta$, with some small $\delta >0$, equation (\ref{eq:reduced-rate-due-to-nerg-states}) shows that $S_1(E,A)=O(E^{-\delta})$ at least for some $A$. So the best possible estimate on the rate of quantum ergodicity which is valid without further assumptions on the system other than ergodicity is \begin{equation} S_1(E,A)=o(1) \,\,, \qquad \text{i.e. } \lim_{E\to\infty} S_1(E,A) =0 \,\, .
\end{equation} Especially for the Sinai billiard the result for the exponent is $\beta=9/10$ and therefore $S_1(E,A)\sim cE^{-1/10}$, which contradicts the result \eqref{eq:gen-rate} from \cite{EckFisKeaAgaMaiMue95}.
If the bouncing ball modes are the only not quantum-ergodic eigenfunctions, or at least constitute the dominant contribution to them, then $N''(E)\sim N_{\text{bb}}(E)\sim cE^{\beta}$. The exponent $\beta$ and $\nu ''(A)$ are explicitly known, and the constant $c$ is known from a numerical fit in \cite{BaeSchSti97a} for the billiards we will consider in the next section. Thus in this case the only free parameter in the model \eqref{mod-rate} is $\nu '(A)$. The asymptotic behavior of (\ref{mod-rate}) is governed by the term with the larger exponent, but this can be hidden at low energies if one of the constants is much larger than the other. Assume for instance that $\beta -1> -1/4$, i.e.\ the not quantum-ergodic eigenfunctions dominate the rate asymptotically. If \begin{equation}\label{frac-const} \frac{4\pi c\nu ''(A)}{\operatorname{vol} (\Omega )\nu '(A)}\ll 1\,\, , \end{equation} for an observable $A$, then up to a certain energy $S_1(E,A)$ will be approximately proportional to $E^{-\frac{1}{4}}$. In numerical studies where only a finite energy range is accessible such a behavior can hide the true rate of quantum ergodicity. This will be seen most drastically for the cosine billiard, see section~\ref{sec:qerg-rate-cos-billiard}.
The main ingredient of the model (\ref{mod-rate}) is the conjectured behavior of the rate for the quantum-ergodic eigenfunctions. By comparing (\ref{mod-rate}) with numerical data for different observables one can test this conjecture. If this conjecture is true then it means that the only deviations from the optimal rate of quantum ergodicity are due to subsequences of not quantum-ergodic eigenfunctions. Clearly similar models based on a splitting like \eqref{eq:split-of-S1} can be developed for other situations as well.
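Before turning to such variants it is convenient to have the model \eqref{mod-rate} in executable form; a Python sketch (the parameter values below are illustrative placeholders, not fitted values):

```python
import numpy as np

def S1_model(E, nu_p, nu_pp, c, beta, vol_omega):
    """Model (mod-rate): E^{-1/4} decay for the quantum-ergodic part plus a
    not quantum-ergodic contribution weighted by N''(E)/N(E) from Weyl's law."""
    E = np.asarray(E, dtype=float)
    weight = 4.0 * np.pi * c / vol_omega * E ** (beta - 1.0)   # N''(E)/N(E)
    return (1.0 - weight) * nu_p * E ** -0.25 + weight * nu_pp

# if 4*pi*c*nu''/(vol(Omega)*nu') << 1 the E^{-1/4} term dominates at first,
# although for beta - 1 > -1/4 the E^{beta-1} term always wins asymptotically
E = np.array([1.0e3, 1.0e6, 1.0e9, 1.0e12])
print(S1_model(E, nu_p=0.3, nu_pp=0.25, c=0.04, beta=0.87, vol_omega=2.5))
```

Evaluating the two terms separately over an energy grid shows directly at which energy the not quantum-ergodic contribution takes over.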
E.g., if the eigenfunctions split into a quantum-ergodic subsequence of density one with rate proportional to $E^{-1/4}$ and a quantum-ergodic subsequence of density zero with a slower, and maybe spatially inhomogeneous, rate, one would expect a similar behavior of $S_1(E,A)$ as in the case considered above. So without some a priori information on the not quantum-ergodic eigenfunctions it will be hard to distinguish between these two scenarios.
\section{Numerical results} \label{sec:numerical} In order to study the rate of quantum ergodicity numerically we have chosen three different Euclidean billiard systems, given by the free motion of a point particle inside a compact domain with elastic reflections at the boundaries. See \figref{fig:billiard-domains} for the chosen billiard shapes. The first is the stadium billiard, which is proven to be ergodic, mixing and a $K$-system \cite{Bun74,Bun79}. The height of the desymmetrized billiard is chosen to be 1, and $a$ denotes the length of the upper horizontal line. For this system our analysis is based on computations of the first 6000 eigenfunctions for odd-odd parity, i.e.\ everywhere Dirichlet boundary conditions in the desymmetrized system with parameter $a=1.8$. We also studied stadium billiards with parameters $a=0.5$ and $a=4.0$ using the first 2000 eigenfunctions in each case to investigate the dependence on $a$, see below. The stadium billiard is one of the most intensively studied systems in quantum chaos; for investigations of the eigenfunctions see e.g.\ \cite{McDKau79,Hel84,McDKau88,Li97,SimVerSar97:p} and references therein.
The second system is the cosine billiard, which is constructed by replacing one side of a rectangular box by a cosine curve. The cosine billiard has been introduced and studied in detail in \cite{Sti93:Diploma,Sti96:PhD}. The ergodic properties are unknown, but numerical studies do not reveal any stability islands.
If there are any, they are so small that one expects them to have no influence in the energy range under consideration. The height of the cosine billiard is 1 and the upper horizontal line has length 2 in our numerical computations. The cosine curve is parameterized by $B(y)=2+\frac{1}{2}(1+\cos(\pi y))$, see \figref{fig:billiard-domains}b). For our analysis of this system we used the first 2000 eigenfunctions with Dirichlet boundary conditions everywhere.
\newcommand{\einstadbild}[2]{ \vspace*{1.5ex} \PSImagx{stad_wav_#1.ps}{8.0cm} \vspace*{-2.8cm}\hspace*{7.5cm}{#2}\vspace*{2.8cm} \vspace*{0.5ex} } \newcommand{\eincardibild}[2]{ \vspace*{1.5ex} \PSImagx{cardi_wav_#1.ps}{6.0cm} \vspace*{-3.5cm}\hspace*{5.5cm}{#2}\vspace*{3.5cm} \vspace*{0.5ex} } \BILD{tbh} { \hspace*{0.4cm} \begin{minipage}{8.5cm} \einstadbild{1992}{a)} \einstadbild{1660}{b)} \einstadbild{1771}{c)} \end{minipage} \hspace*{1cm} \begin{minipage}{8.5cm} \eincardibild{1816}{d)} \vspace*{0.5cm} \eincardibild{1817}{e)} \end{minipage} } {Left: Density plots $|\psi_n(q)|^2$ for three different odd-odd eigenfunctions of the $a=1.8$ stadium billiard: a) $n=1992$, ``generic'' b) $n=1660$, bouncing ball mode c) $n=1771$, localized eigenfunction. Right: Density plots for two eigenfunctions of the cardioid billiard with odd symmetry: d) $n=1816$, ``generic'' e) $n=1817$, localized along the $\overline{AB}$ orbit. Notice that according to the quantum ergodicity theorem the non-localized eigenfunctions of type a) and d) are the overwhelming majority. } {fig:stad-waves}
The third system is the cardioid billiard, which is the limiting case of a family of billiards introduced in \cite{Rob83}. The cardioid billiard is proven to be ergodic, mixing, a $K$-system and a Bernoulli system \cite{Woj86,Sza92,Mar93,LivWoj95,CheHas96}. Both the classical system \cite{Rob83,BruWhe96,BaeDul97,BaeChe97:p} and the quantum mechanical system have been studied in detail \cite{Rob84,BaeSteSti95,BruWhe96,AurBaeSte97}.
The eigenvalues of the cardioid billiard have been provided by Prosen and Robnik \cite{PrivComProRob} and were calculated by means of the conformal mapping technique, see e.g.\ \cite{Rob84,BerRob86,ProRob93a}. Using these eigenvalues, our study is based on computations for the first 6000 eigenfunctions of odd symmetry, which were obtained by means of the boundary integral method \cite{Rid79,BerWil84} using the singular value decomposition method \cite{AurSte93}. The boundary integral method was also used for the computations of the eigenvalues and eigenfunctions of the stadium and the cosine billiard.
Let us first illustrate the structure of wave functions by showing density plots of $|\psi_n(q)|^2$ for three different types of wave functions of the stadium billiard and two different types of the cardioid billiard. Fig.~\ref{fig:stad-waves}a) shows a ``generic'' wave function, whose density looks irregular. Example b) belongs to the class of bouncing ball modes, and its Wigner function is localized in phase space on the bouncing ball orbits, see the discussion in section \ref{sec:Some-examples}. Fig.~\ref{fig:stad-waves}c) is another example of an eigenfunction showing some kind of localization. Fig.\ \ref{fig:stad-waves}d) shows a ``generic'' wave function for the cardioid billiard and \ref{fig:stad-waves}e) is an example of an eigenfunction which shows a strong localization in the vicinity of the shortest periodic orbit (with code $\overline{AB}$, see \cite{BruWhe96,BaeDul97}). We should emphasize that according to the quantum ergodicity theorem the overwhelming majority of states in the semiclassical limit are of the type a) and d), which we also observe for the eigenfunctions of the studied systems.
\newcommand{\einstadionplot}[2]{% \PSImagxy{#1.ps}{7.455cm}{3.3cm} \vspace*{-2.95cm}\hspace*{6.8cm}#2 \vspace*{2.95cm} } \newcommand{\eincosplot}[2]{% \PSImagxy{#1.ps}{7.9cm}{3cm}% \vspace*{-2.75cm}\hspace*{6.8cm}#2 \vspace*{2.75cm} } \newcommand{\eincardioidplot}[2]{% \PSImagxy{#1.ps}{6.7cm}{4.4cm}% \vspace*{-3.5cm}\hspace*{6.0cm}#2 \vspace*{3.5cm} } \BILD{b} { \hspace*{0.5cm}\begin{minipage}{8.5cm} \einstadionplot{stad_gebiete}{a)} \vspace*{2ex} \eincosplot{cosine_gebiete}{b)} \end{minipage} \begin{minipage}{7cm} \eincardioidplot{cardi_gebiete}{c)} \end{minipage} \vspace*{1ex} } {Shapes of the billiards studied numerically in this work: a) desymmetrized stadium billiard, b) desymmetrized cosine billiard and c) desymmetrized cardioid billiard. The rectangles in the interior of the billiards mark the domains $D_i$ of integration for studying the rate of quantum ergodicity in configuration space. } {fig:billiard-domains} \subsection{Quantum ergodicity in coordinate space} \BILD{!ht} { \vspace*{-1.0cm} \begin{center} \PSImagxy{stad_area_4.ps}{16cm}{10.cm}\hspace*{3cm} \vspace*{0.25cm} \PSImagxy{odd_n__area_5.ps}{16cm}{10.0cm}\hspace*{3cm} \vspace*{-0.25cm} \end{center} } {Plot of $d_i(n) = \int_{D_i} |\psi_{n}(q)|^2 \;\text{d} q- \tfrac{\operatorname{vol}{(D_i)}}{\operatorname{vol}{(\Omega)}}$ for domain $4$ in the stadium billiard and for domain $5$ in the cardioid billiard. Since $|\psi_{n}(q)|^2\geq 0$ one has $d_i(n)\geq- \tfrac{\operatorname{vol}{(D_i)}}{\operatorname{vol}{(\Omega)}} $. For domain $D_4$ in the stadium this lower bound is attained by the bouncing ball modes whose probability density $|\psi_{n}(q)|^2$ nearly vanishes in $D_4$; they are responsible for the sharp edge seen in the plot of $d_4(n)$. 
} {fig:stad-qerg-area} The quantum ergodicity theorem applied to the observable with symbol $a(q)=\chi_D(q)$, discussed in section \ref{sec:Some-examples}, states that the difference \begin{equation} \label{eq:an-diff} d_i(n) = \int\limits_{D_i} |\psi_{n}(q)|^2 \;\text{d} q- \frac{\operatorname{vol}{(D_i)}}{\operatorname{vol}{(\Omega)}} \end{equation} vanishes for a subsequence of density one. The first set of domains $D_i$ for which we investigate the approach to the ergodic limit is shown in \figref{fig:billiard-domains}. Plots of $d_i(n)$ for domain $D_4$ of the stadium billiard and $D_5$ of the cardioid billiard in \figref{fig:stad-qerg-area} show quite large fluctuations around zero. In particular for the stadium billiard there are many states for which $d_1(n)$ is quite large and $d_4(n)$ is quite small. As one would expect, a large number of them are bouncing ball modes. The fluctuations of $d_i(n)$ for the cosine billiard behave similarly to the stadium billiard. When trying to study the rate of the approach to the quantum-ergodic limit numerically one therefore is faced with two problems. On the one hand $d_i(n)$ is strongly fluctuating, which makes an estimate of the approach to the mean very difficult, if not impossible for the available numerical data. On the other hand one does not know a priori which subsequences should be excluded in \eqref{eq:an-diff}. Therefore the investigation of the asymptotic behavior of the ``cumulative'' version \eqref{eqn:qet-sum-version} of the quantum ergodicity theorem is much more appropriate. For the observable $\chi_D (q)$ we have \begin{equation} S_1(E, \chi_{D})=\frac{1}{N(E)} \sum_{E_n \le E} \left|\langle \psi_n ,\chi_{D}\psi_n\rangle -\frac{\operatorname{vol}{(D)}}{\operatorname{vol}{(\Omega)}} \right|\,\, . 
\end{equation} In figs.\ \ref{fig:cos-qerg-S1}, \ref{fig:stad-qerg-S1} and \ref{fig:cardi-qerg-S1} we display $S_1(E,{\chi_{D_i}})$ for the different domains $D_i$, shown in fig.\ \ref{fig:billiard-domains}, in the desymmetrized cosine, stadium and cardioid billiard, respectively. One nicely sees that the numerically determined curves for $S_1(E,{\chi_{D_i}})$ decrease with increasing energy. This is of course expected from the quantum ergodicity theorem; however, since this is an asymptotic statement, it is not a priori clear whether such a behavior can also be observed at low energies. It should be emphasized that \figref{fig:cos-qerg-S1} is based on the expectation values $\langle \psi_n ,{\chi_{D_i}} \psi_n \rangle$ for 2000 eigenfunctions and figs.~\ref{fig:stad-qerg-S1} and \ref{fig:cardi-qerg-S1} are based on 6000 eigenfunctions in each case.
In order to study the rate of quantum ergodicity quantitatively a fit of the function \begin{equation} \label{eq:S1-fit-fct} S_1^{\text{fit}} (E) = a E^{-1/4+\varepsilon} \end{equation} to the numerical data for $S_1(E,{\chi_{D_i}})$ is performed. As discussed in section \ref{sec:rate-of-quantum-ergodicity}, for certain systems a behavior $S_1(E,\text{A})=O(E^{-1/4+\varepsilon})$ for all $\varepsilon>0$ is expected, so that the fit parameter $\varepsilon$ characterizes the rate of quantum ergodicity. A positive value of $\varepsilon$ thus means a slower decrease of $S_1(E,\text{A})$ than the expected $E^{-1/4}$. The results for $\varepsilon$ are shown in tables \ref{tab:qerg-S1-cos}--\ref{tab:qerg-S1-cardi}, and the insets in figs.\ \ref{fig:cos-qerg-S1}--\ref{fig:cardi-qerg-S1} show the same curves $S_1(E,\chi_{D_i})$ in a double--logarithmic plot together with these fit curves. The agreement of the fits with the computed functions $S_1(E,{\chi_{D_i}})$ is very good.
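The fit of eq.~\eqref{eq:S1-fit-fct} is linear least squares in log-log variables, $\ln S_1=\ln a+(-1/4+\varepsilon)\ln E$. A short sketch of such a fit on synthetic data with a known exponent (for illustration; the actual fits were of course performed on the computed $S_1(E,\chi_{D_i})$):

```python
import numpy as np

def fit_rate(E, S1):
    """Least-squares fit of S1 ~ a * E^(-1/4 + eps) in log-log variables;
    returns (a, eps)."""
    slope, intercept = np.polyfit(np.log(E), np.log(S1), 1)
    return np.exp(intercept), slope + 0.25

# synthetic data following an exact power law with a = 0.05 and eps = 0.02
E = np.geomspace(1.0e2, 1.0e5, 200)
S1 = 0.05 * E ** (-0.25 + 0.02)
a, eps = fit_rate(E, S1)
print(a, eps)        # recovers a = 0.05 and eps = 0.02
```

On exact power-law data the parameters are recovered to machine precision; on real $S_1$ curves the fit averages over the fluctuations.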
However, $\varepsilon$ is not small for all domains $D_i$ of the considered systems; rather, we find several significant exceptions, which will be explained in the following discussion.
\subsubsection{Cosine billiard} \label{sec:qerg-rate-cos-billiard} For the cosine billiard one would expect a strong influence of the bouncing ball modes on the rate, since their number increases according to \cite{BaeSchSti97a} as $N_{\text{bb}}(E)\sim c\; E^{9/10}$. But the prefactor $c$ turns out to be very small and therefore the influence of the bouncing ball modes is suppressed at low energies. The model for $S_1(E,A)$, equation (\ref{mod-rate}), gives for the cosine billiard \begin{equation}\label{s1-cosinus} S_1^{\text{model}} (E,\chi_{D_i})=(1-0.201\, E^{-0.13})\, \nu '(\chi_{D_i})E^{-\frac{1}{4}} +0.201\, \nu_{\text{bb}} ''(\chi_{D_i})E^{-0.13}\,\, , \end{equation} where we have inserted the values $c=0.04$ and $\beta =0.87$, obtained in \cite{BaeSchSti97a} from a fit to $N_{\text{bb}}(E)$ which was performed over the same energy range which we consider here. For the sake of completeness we have included the first factor, $(1-0.201\, E^{-0.13})$, but the numerical fits we perform below only change marginally if one sets this factor equal to 1.
\BILD{tbh} { \PSImagxy{cosine_area_kumulativ1.ps}{16cm}{11cm}\hspace*{1cm} } {Plot of $S_1(E,{\chi_{D_i}})$ for different domains $D_i$ for the cosine billiard using the first 2000 eigenfunctions, see fig.~\ref{fig:billiard-domains}b) for the location of the domains $D_i$. The inset shows the same curves in double--logarithmic representation together with a fit of $S_1^{\text{fit}}(E) = a E^{-1/4+\varepsilon}$ to the numerical data.} {fig:cos-qerg-S1} \begin{table}[!t] \vspace*{3.5ex} \begin{center} \renewcommand{\arraystretch}{1.25} \begin{tabular}{|c|c||c||c|c|c|}\hline domain & rel.
area & $ \varepsilon$ & $a$ & $\nu '(\chi_{D_i})$ & $\nu_{\text{bb}} ''(\chi_{D_i})$ \\ \hline 1 & 0.018 & $ -0.002$ & 0.052 & 0.0525 & 0.0045 \\ 2 & 0.018 & $ +0.012$ & 0.026 & 0.0468 & 0.0067 \\ 3 & 0.008 & $ +0.013$ & 0.043 & 0.0297 & 0.0020 \\ 4 & 0.008 & $ +0.022$ & 0.023 & 0.0273 & 0.0030 \\ 5 & 0.015 & $ +0.020$ & 0.050 & 0.0543 & 0.0150 \\ \hline 6 & 0.336 & $ +0.009$ & 0.258 & 0.2471 & 0.0840 \\ 7 & 0.512 & $ +0.023$ & 0.352 & 0.2920 & 0.1280 \\ 8 & 0.648 & $ +0.009$ & 0.381 & 0.3410 & 0.1620 \\ 9 & 0.800 & $ +0.054$ & 0.279 & 0.3264 & 0.2500 \\ \hline \end{tabular} \end{center} \Caption{ Rate of quantum ergodicity for the cosine billiard with domains $D_i$ as shown in fig.~\ref{fig:billiard-domains} and fig.~\ref{fig:cos-qerg-S1} and in the inset of fig.~\ref{fig:cos-qerg-S1-zusatz1}. Shown are the results for $\varepsilon$ and $a$ of the fit of $S_1^{\text{fit}}(E) = a E^{-1/4+\varepsilon}$ to the numerical data. Also tabulated are the values for the relative area of the corresponding domains, the quantities $\nu_{\text{bb}} ''(\chi_{D_i})$ computed according to \eqref{eq:qlim-bb-cosine} and the result $\nu '(\chi_{D_i})$ of the fit of the model \eqref{s1-cosinus} to $S_1(E,{\chi_{D_i}})$. }{tab:qerg-S1-cos} \end{table} The asymptotic behavior of the probability density $|\psi_{n''}(q)|^2$ of the bouncing ball modes is (in the weak sense) \begin{equation} |\psi_{n''}(q)|^2\sim \begin{cases} 1/\operatorname{vol} (R) & \text{for}\,\, q\in R \\ 0 & \text{for}\,\, q\in \Omega\backslash R \end{cases}\,\, ,\qquad \text{as}\,\, n'' \to\infty \,\, , \end{equation} where $R$ denotes the rectangular part of the billiard. 
So the expectation values are asymptotically $\langle\psi_{n''}, \chi_D \psi_{n''}\rangle \sim \operatorname{vol} (D\cap R)/\operatorname{vol} (R)$, and since $\nu_{\text{bb}} ''(\chi_{D})=\lim_{E\to\infty}S''(E,\chi_{D})$ is the mean value of $|\langle\psi_{n''}, \chi_D \psi_{n''}\rangle -\operatorname{vol} (D)/\operatorname{vol}(\Omega) |$ over all bouncing ball modes one has \begin{equation} \label{eq:qlim-bb-cosine} \nu_{\text{bb}} ''(\chi_{D})= \left| \frac{\operatorname{vol} (D\cap R)}{\operatorname{vol} (R)}- \frac{\operatorname{vol} (D)}{\operatorname{vol} (\Omega)} \right|\,\, . \end{equation} For fixed volume $\operatorname{vol} (D)$ the quantity $\nu_{\text{bb}} ''(\chi_{D})$ is maximal for domains $D$ lying entirely outside of the rectangular region, $\nu_{\text{bb}} ''(\chi_{D}) =\tfrac{\operatorname{vol} (D)}{\operatorname{vol} (\Omega)} $. For domains lying entirely inside the rectangular part of the billiard, we have the minimal value $\nu_{\text{bb}} ''(\chi_{D}) =\tfrac{1}{4}\tfrac{\operatorname{vol} (D)}{\operatorname{vol} (\Omega)}$. Therefore the strongest contribution of the bouncing ball modes to $S_1(E,\chi_D)$ in eq.~(\ref{s1-cosinus}) is expected for the domains outside the rectangular region. The values for $\nu_{\text{bb}} ''(\chi_{D_i})$ are given in \tabref{tab:qerg-S1-cos}. The largest values for the small domains are obtained for the domains outside the rectangular part of the billiard for which also the rate of quantum ergodicity is the slowest. Furthermore we see from \tabref{tab:qerg-S1-cos} that the factor $0.201\, \nu_{\text{bb}} ''(\chi_{D_i})$ in front of $E^{-0.13}$ in equation (\ref{s1-cosinus}) is for all domains much smaller than the prefactor $a$ from the fit to (\ref{eq:S1-fit-fct}). This already indicates that the contribution of the bouncing ball modes is suppressed, explaining why the rate for the cosine billiard is in such a good agreement with $\varepsilon=0$. 
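With the geometry given in section~\ref{sec:numerical} ($\operatorname{vol}(R)=2$, $\operatorname{vol}(\Omega)=2+\int_0^1\tfrac{1}{2}(1+\cos (\pi y))\,\text{d} y=5/2$) the two extremal cases of eq.~\eqref{eq:qlim-bb-cosine} can be checked directly; a short sketch with an arbitrary illustrative domain volume:

```python
import numpy as np
from scipy.integrate import quad

# Geometry of the cosine billiard: rectangle R of width 2 and height 1, plus
# the cosine bump B(y) - 2 = (1 + cos(pi*y)) / 2.
vol_R = 2.0
bump, _ = quad(lambda y: 0.5 * (1.0 + np.cos(np.pi * y)), 0.0, 1.0)
vol_omega = vol_R + bump                          # = 5/2

def nu_bb(vol_D, vol_D_in_R):
    """nu''_bb(chi_D) = |vol(D cap R)/vol(R) - vol(D)/vol(Omega)|."""
    return abs(vol_D_in_R / vol_R - vol_D / vol_omega)

vol_D = 0.1
print(nu_bb(vol_D, 0.0))      # D outside R:  vol(D)/vol(Omega)        = 0.04
print(nu_bb(vol_D, vol_D))    # D inside R:   (1/4) vol(D)/vol(Omega)  = 0.01
```

The factor $1/4$ for domains inside $R$ follows from $\operatorname{vol}(\Omega)/\operatorname{vol}(R)-1=1/4$ for this geometry.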
\BILD{tbh} { \PSImagxy{cosine_area_kumulativ1_zusatz1.ps}{16cm}{10.0cm}\hspace*{1cm} } {Plot of $S_1(E,{\chi_{D_i}})$ for two further domains $D_8$ and $D_9$ (dashed curve) in the cosine billiard using the first 2000 eigenfunctions. Also shown is the fit $S_1^{\text{model}}(E,\chi_{D_i})$, eq.~\eqref{s1-cosinus}. } {fig:cos-qerg-S1-zusatz1}
In order to test this quantitatively we have performed a fit of the model (\ref{s1-cosinus}) to the numerical data, where the only free parameter is $\nu '(\chi_{D_i})$. The accuracy of the fits is very good and the results for $\nu '(\chi_{D_i})$ are shown in \tabref{tab:qerg-S1-cos}; they are much larger than the corresponding prefactors $0.201\, \nu_{\text{bb}} ''(\chi_{D_i})$ of the bouncing ball part of $S_1(E,\chi_{D_i})$. Therefore the influence of the bouncing ball modes on the rate is negligibly small on the present energy interval, despite the fact that asymptotically they should dominate the rate. The domains $D_3\subset D_1$ and $D_4 \subset D_2$ show a slightly slower rate than $D_1$ and $D_2$, respectively. This is due to the fact that choosing a smaller domain $D$ implies larger fluctuations of $\langle \psi_n , {\chi_{D}} \psi_n \rangle$ for the same set of eigenfunctions.
As an additional test we have computed $S_1(E,\chi_{D_i})$ numerically for four further domains (shown in the inset of fig.~\ref{fig:cos-qerg-S1-zusatz1}), having a much larger area than the previous ones. For these domains $\nu_{\text{bb}} ''(\chi_{D_i})$ is larger, and one therefore expects a stronger influence of the bouncing ball modes and correspondingly a slower rate of quantum ergodicity. The results are shown in table \ref{tab:qerg-S1-cos} and fig.~\ref{fig:cos-qerg-S1-zusatz1} and our findings are completely consistent with the previous ones as well as with the model (\ref{s1-cosinus}).
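With the fitted values for the rectangle $D_9=R$ from table~\ref{tab:qerg-S1-cos} ($\nu '=0.3264$, $\nu_{\text{bb}} ''=0.25$) one can locate numerically the energy at which the two terms of eq.~\eqref{s1-cosinus} become equal; a short sketch:

```python
import numpy as np
from scipy.optimize import brentq

# Energy at which the bouncing-ball term of eq. (s1-cosinus) for D_9 = R
# catches up with the E^{-1/4} term, using the fitted values from the table.
def term_difference(E):
    erg = (1.0 - 0.201 * E ** -0.13) * 0.3264 * E ** -0.25
    bb = 0.201 * 0.25 * E ** -0.13
    return erg - bb

E_cross = brentq(term_difference, 1.0e2, 1.0e12)   # sign change is bracketed
print(E_cross)    # a few times 10^6; beyond this the bouncing ball term dominates
```

The resulting crossover lies in the $10^6$ range, far above the energies covered by the first 2000 eigenfunctions.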
We also observe in \figref{fig:cos-qerg-S1-zusatz1} that for the large domains, except for the whole rectangular part $D_9=R$, the rate is faster at low energies than at high energies. This is due to the influence of the boundary and will be discussed in section \ref{sec:boundary-effects}.
Summarizing the results for the cosine billiard, we found that the rate of quantum ergodicity is in excellent agreement with a rate proportional to $E^{-1/4}$ for the subsequence of quantum-ergodic eigenfunctions. The phenomenological model $S_1^{\text{model}}(E,\chi_{D})$, eq.\ (\ref{s1-cosinus}), is in very good agreement with the numerical data, especially in view of the fact that it contains only one free parameter. Furthermore, the cosine billiard provides an impressive example of a system for which the asymptotic regime for $S_1(E,A)$ is reached very late. Up to the 2000th eigenfunction the asymptotic behavior $S_1(E,A)\sim C E^{-1/10}$ is almost completely hidden. A continuation of $S_1^{\text{model}}(E,\chi_{R})$ for the domain $R=D_9$ with the strongest influence of the bouncing ball modes shows that at $E\approx 10^6$ the two contributions have the same magnitude, and one has to go up as high as $E\approx 10^{20}$ to see the asymptotic behavior $S_1(E,\chi_R )\sim C E^{-1/10}$. Therefore there is no contradiction between the observed fast rate of quantum ergodicity in the present energy range and the increase of the number of bouncing ball modes $N_{\text{bb}}(E)\sim c\; E^{9/10}$ found in \cite{BaeSchSti97a}.
\subsubsection{Stadium billiard} For the stadium billiard the number of bouncing ball modes grows as $N_{\text{bb}}(E)\sim c\, E^{3/4}$ \cite{Tan97,BaeSchSti97a}. Therefore the bouncing ball mode contribution to $S_1(E,A)$ is, according to equation (\ref{mod-rate}), proportional to $E^{-1/4}$, and thus of the same order as the expected rate of quantum ergodicity for the quantum-ergodic eigenfunctions.
One therefore expects for all domains in position space a rate of $E^{-1/4}$. We have investigated the rate of quantum ergodicity for the stadium billiard using the small domains shown in fig.~\ref{fig:billiard-domains}a) and for larger domains shown in fig.~\ref{fig:stadium-domains-extra}. The results of the fits of $S_1^{\text{fit}} (E) = a E^{-1/4+\varepsilon}$ to the numerical data for $S_1(E,\chi_{D_i})$ are given in table \ref{tab:qerg-S1-stad}. Let us first discuss the rate for the small domains shown in fig.~\ref{fig:billiard-domains}a). For the domains $D_1$ and $D_2$ which lie inside the rectangular part of the billiard the rate is in very good agreement with $E^{-1/4}$. But both for the domain $D_3$ which lies on the border between the rectangular part and the quarter circle, and in particular for domain $D_4$ which lies inside the quarter circle, one finds a slower rate than expected. This is a behavior which one would expect for a billiard with a much faster increasing number of bouncing ball modes. \BILD{tbh} { \vspace*{-3.5ex} \PSImagxy{stad_area_kumulativ1.ps}{16cm}{11.0cm}\hspace*{1cm} } {Plot of $S_1(E,{\chi_{D_i}})$ for different domains $D_i$ for the stadium billiard using the first 6000 eigenfunctions, see fig.~\ref{fig:billiard-domains}a) for the location of the domains $D_i$. The inset shows the same curves in double--logarithmic representation together with a fit of eq.~\eqref{eq:S1-fit-fct}.} {fig:stad-qerg-S1} \begin{table}[b] \begin{center} \renewcommand{\arraystretch}{1.25} \begin{tabular}{|c|c||c||c|c|c|}\hline domain & rel. 
area & $ \varepsilon$ & $a$ & $\nu '(\chi_{D_i})$ & $b(A)$ \\ \hline 1 & 0.015 & $ + 0.009$ & 0.041 & 0.0539 & 0.0000 \\ 2 & 0.015 & $ + 0.012$ & 0.041 & 0.0564 & 0.0000 \\ 3 & 0.015 & $ + 0.033$ & 0.035 & 0.0533 & 0.0008 \\ 4 & 0.015 & $ + 0.095$ & 0.029 & 0.0492 & 0.0047 \\\hline 5 & 0.015 & $ + 0.020$ & 0.039 & 0.0551 & 0.0004 \\ 6 & 0.278 & $ + 0.070$ & 0.137 & 0.1401 & 0.0233 \\ 7 & 0.433 & $ + 0.111$ & 0.118 & 0.1071 & 0.0395 \\ 8 & 0.557 & $ + 0.168$ & 0.089 & 0.0292 & 0.0634 \\ 9 & 0.696 & $ + 0.188$ & 0.098 & 0.0384 & 0.0827 \\ 10 & 0.681 & $ + 0.084$ & 0.176 & 0.2474 & 0.0295 \\ \hline \end{tabular} \end{center} \Caption{Rate of quantum ergodicity for the stadium billiard with domains $D_i$ as shown in figs.~\ref{fig:billiard-domains} and \ref{fig:stadium-domains-extra}. Shown are the results for $\varepsilon$ and $a$ of the fit $S_1^{\text{fit}}(E) = a E^{-1/4+\varepsilon}$ to $S_1(E,{\chi_{D_i}})$. Also tabulated are the values for the relative area of the corresponding domains, and the results $\nu '(\chi_{D_i})$ and $b(A)$ of the fit of the model \eqref{s1-stadium2} to $S_1(E,{\chi_{D_i}})$. }{tab:qerg-S1-stad} \end{table} We see three possible explanations for this behavior of the rate for the stadium billiard. First, the counting function $N_{\text{bb}}(E)$ for the bouncing ball modes might increase with a larger exponent than $3/4$, $N_{\text{bb}}(E)\sim c\, E^{\beta}$, $\beta >3/4$. This would contradict the results in \cite{Tan97,BaeSchSti97a}, derived by independent methods. Moreover, the exponent $\beta$ was tested numerically in \cite{BaeSchSti97a} up to energy $E\approx 10 000$ and we found very good agreement with $\beta=3/4$. Even if we relaxed the criteria for the selection of the bouncing ball modes drastically, the exponent did not change significantly, only the prefactor $c$ increased. Therefore we think that this first possibility is clearly ruled out. 
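The power-law fits $S_1^{\text{fit}}(E) = a E^{-1/4+\varepsilon}$ used throughout this section can be realized most simply as linear regression in double-logarithmic coordinates. A minimal sketch (NumPy assumed; the data are synthetic, generated from the values reported for domain $D_1$):

```python
import numpy as np

def fit_power_law(E, S1, expected=-0.25):
    """Fit S1(E) = a * E**(expected + eps) by linear regression in
    log-log coordinates and return (a, eps)."""
    slope, intercept = np.polyfit(np.log(E), np.log(S1), 1)
    return np.exp(intercept), slope - expected

# Synthetic data with a = 0.041, eps = +0.009 (the values reported for D_1):
E = np.linspace(100.0, 30000.0, 500)
S1 = 0.041 * E ** (-0.25 + 0.009)
a_fit, eps_fit = fit_power_law(E, S1)
```

For real data one would additionally bin or weight the regression; the sketch only shows how $\varepsilon$ is extracted as the deviation of the fitted slope from $-1/4$.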
Secondly, the rate for the quantum-ergodic eigenfunctions might not be proportional to $E^{-1/4}$, but might have a slower decay rate. Then we have to assume a position dependence of the rate, in order to explain the different behavior for the different domains: in the rectangular part of the billiard the rate has to be proportional to $E^{-1/4}$ to explain the value of $\varepsilon$ obtained for the domains $D_1$ and $D_2$, whereas inside the quarter circle the rate of decay has to decrease as $S_1'(E,\chi_{D_4})\sim \nu '(A) E^{-0.15}$, in order to explain the value of $\varepsilon$ obtained for $D_3$ and $D_4$. A priori such a dependence of the rate of the quantum-ergodic eigenfunctions on the location of the domain in the billiard is not impossible. If this is the case then one should observe no dependence of the rate on the volume of the domain $D$, as long as one stays in the same region of the billiard. E.g.\ the rate for a domain like $D_6$, which contains $D_1$ and $D_2$ and is far enough away from the quarter circle, should be the same as the one for $D_1$ and $D_2$. The third possible explanation for the observed behavior of the rate is that there exist more not quantum-ergodic eigenfunctions which have a larger probability density in the rectangular part than in the quarter circle, and which are not bouncing ball modes. Alternatively the reason could be a subsequence of density zero of quantum-ergodic eigenfunctions, which has a sufficiently increasing counting function and a slow rate, see the remark at the end of section \ref{sec:rate-of-quantum-ergodicity}. In both cases the model for $S_1(E,A)$ discussed in section \ref{sec:rate-of-quantum-ergodicity}, which we already used in the case of the cosine billiard, would be applicable. In contrast to the second possibility, in this scenario one expects a dependence of the rate of $S_1(E,\chi_D)$ on the volume of the domain $D$, as in the case of the cosine billiard.
\BILD {!t} { \begin{center} \einstadionplot{stad_gebiete_extra}{} \vspace*{-4ex} \end{center} } {Domains in the $a=1.8$ stadium billiard used to decide between the different explanations for the slow rates in the stadium billiard.} {fig:stadium-domains-extra} To decide which explanation is the correct one we studied the rate for a number of large domains shown in fig.~\ref{fig:stadium-domains-extra}. With these domains one necessarily comes closer to the boundary of the billiard. To rule out the possibility that the observed behavior of the rate is due to the influence of the boundary, and not due to the dependence on the volume and location of the domains, we computed in addition $S_1(E,\chi_D)$ for the small domain $D_5$ which is close to the boundary. The results are also given in table \ref{tab:qerg-S1-stad} and some examples of $S_1(E,\chi_{D_i})$ for these large domains are shown in fig.~\ref{fig:stad-qerg-S1-large-domain}. As for the cosine billiard, we also found that for large domains at small energies the rate may be much faster than at higher energies which is nicely seen in fig.~\ref{fig:stad-qerg-S1-large-domain} for the domains $D_7$ and $D_8$. This effect is due to the influence of the boundary, as we will discuss in section \ref{sec:boundary-effects}; here we only note that the boundary influence vanishes for large energies. The observed rate of quantum ergodicity displays a strong dependence on the volume of the domain $D$, whereas the location, as long as one stays inside the rectangular part, has no influence. E.g.\ for the domain $D_6$, which contains $D_1$ and $D_2$, one gets a much slower rate than for $D_1$ and $D_2$. In contrast to $D_6$ the rate for the small domain $D_5$ near the boundary is rather close to the one for $D_1$ and $D_2$. The slightly slower rate for $D_5$ is due to the smaller energy range for which we have computed $S_1(E,\chi_{D_5})$. 
A fit of $S_1^{\text{fit}} (E) = a E^{-1/4+\varepsilon}$ to $S_1(E,\chi_{D_1})$ and $S_1(E,\chi_{D_2})$ using the first 2000 eigenfunctions gives an $\varepsilon$ of $0.022$ for $D_1$ and $0.011$ for $D_2$, which is of the same magnitude as the result for $D_5$. Moreover the rate decreases monotonically with increasing area of the domains $D_i$, as long as they are inside the rectangular part $R$ of the billiard. The domain $D_{10}$ is interesting because it extends over both parts of the billiard. The enhanced probability density of the exceptional eigenfunctions in the rectangular part is partially compensated by the lower probability density in the quarter circle. Therefore one expects a rate similar to that of a domain in the rectangular part with relative area $(\operatorname{vol} (D_{10})-2\operatorname{vol} (D_{10}\cap (\Omega\backslash R)))/\operatorname{vol} (\Omega ) =0.371\ldots $. This relative area lies between the values for $D_6$ and $D_7$, and indeed the rate for $D_{10}$ lies between the rate for $D_6$ and $D_7$ too. These results strongly support the third explanation, i.e.\ the existence of a large density zero subsequence which is responsible for the deviations of the rate from $E^{-1/4}$. ``Large'' means that the counting function increases sufficiently strongly to cause the rate to deviate from the expected behavior. To test this conjecture quantitatively one has to compare the numerical data with the conjectured behavior \begin{equation}\label{s1-stadium} S_1^{\text{model}} (E,A)=\left(1-c E^{-\beta}\right) \nu '(A)E^{-1/4}+b(A) E^{-\beta}\,\, . \end{equation} Since this model contains the four free parameters $c$, $\beta$, $\nu '(A)$ and $b(A)$, the numerical fit is not very stable. Therefore it is desirable to get some additional information from a different source.
\BILD {t} { \PSImagxy{stad_areagrosses_recht_typ7_kumulativ.ps}{16cm}{10.5cm}\hspace*{1cm} } {Plot of $S_1(E,{\chi_{D}})$ for large domains (see fig.~\ref{fig:stadium-domains-extra}) for the $a=1.8$ stadium billiard using the first 2000 eigenfunctions. The inset shows the same curves in double--logarithmic representation together with a fit of eq.~\eqref{eq:S1-fit-fct}. For the domains $D_7$ and in particular for domain $D_8$ a sharp transition from a fast to a slower decay of the rate is visible. This effect is due to the boundary and will be explained in sec.~\ref{sec:boundary-effects}. } {fig:stad-qerg-S1-large-domain} To this end we plotted $d_4(n)=\langle \psi_n ,\chi_{D_4}\psi_n\rangle -\operatorname{vol} (D_4)/\operatorname{vol}(\Omega) $ for domain $D_4$ which shows a slow rate, see \figref{fig:stad-qerg-area}, and divided the spectrum into two parts by inserting a curve $-c_{\text{d}} E^{-1/4}$, and a curve $c_{\text{u}} E^{-1/4}$. The part of the spectrum between the two curves corresponds to the quantum-ergodic eigenfunctions with the optimal rate $\sim E^{-1/4}$, and the part above and below the curves corresponds to the not quantum-ergodic eigenfunctions or to quantum-ergodic eigenfunctions with a slower rate than $E^{-1/4}$. By computing the counting functions for these two subsequences we get a further criterion for distinguishing between the two possible scenarios for the behavior of the eigenfunctions discussed above. If the rate of quantum ergodicity for all quantum-ergodic eigenfunctions is slower than $E^{-1/4}$ inside the quarter circle, then the fraction of eigenfunctions which lie below or above the two curves should grow proportional to $E$. If the deviation of the rate is due to a not quantum-ergodic subsequence of density zero, or a quantum-ergodic subsequence of density zero with exceptionally slow rate, then the number of states which lie below or above the two curves should grow like $N''(E)$, i.e.\ slower than $E$. 
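The splitting of the spectrum by the curves $\pm c\, E^{-1/4}$ and the resulting counting function can be sketched as follows (NumPy assumed; function names and the small data set are ours, for illustration only):

```python
import numpy as np

def count_exceptional(E_n, d_n, c_d, c_u):
    """Cumulative counting function N''(E_n) of states whose deviation
    d_n = <psi_n, chi_D psi_n> - vol(D)/vol(Omega) lies below the curve
    -c_d * E**(-1/4) or above the curve +c_u * E**(-1/4)."""
    outside = (d_n < -c_d * E_n ** -0.25) | (d_n > c_u * E_n ** -0.25)
    return np.cumsum(outside)

# Illustrative data: states 1 and 3 fall outside the two curves.
E_n = np.array([100.0, 200.0, 300.0, 400.0])
d_n = np.array([-0.05, 0.0, 0.04, -0.01])
N2 = count_exceptional(E_n, d_n, 0.1, 0.1)
```

A fit of $N''(E) = c\, E^{\beta}$ to this counting function then distinguishes the two scenarios: $\beta$ close to $1$ would indicate a positive-density subsequence, while $\beta < 1$ indicates density zero.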
Proceeding in the described way, we find that the majority of the exceptional states have values of $d_4(n)$ which lie below the lower curve $-c_{\text{d}} E^{-1/4}$. A numerical fit for their counting function gives $N'' (E)=0.06 \,E^{0.93}$. The exponent is very stable under slight variations of the constant $c_{\text{d}}$ which determines the curve. Up to $E\approx 10 000$ corresponding to the 2000th state, the counting function even has an almost linear behavior. The nature of these states will be discussed below. The numerical result $N'' (E)=0.06 \,E^{0.93}$ allows us to determine the parameters $c=0.06\frac{ 4\pi}{\operatorname{vol} (\Omega )}= 0.29\ldots$ and $\beta = 0.07$ in the model (\ref{s1-stadium}), giving \begin{equation}\label{s1-stadium2} S_1^{\text{model}}(E,A)= \left(1-0.29\, E^{-0.07}\right) \nu ' (A) E^{-1/4}+b(A) E^{-0.07}\,\, . \end{equation} We have now eliminated two of the four free parameters, and can therefore test this formula effectively with the numerical data. The results for $\nu ' (\chi_{D_i})$ and $b (\chi_{D_i})$ are also shown in table \ref{tab:qerg-S1-stad}, and for three large domains the plot of $S_1(E,\chi_D)$ and the corresponding fit $S_1^{\text{model}}(E,\chi_D)$ is shown in \figref{fig:stad-qerg-S1-large-domain}. The agreement of the fits with the numerical data is very good. Moreover the values for $\nu '(D_i)$ and $b(D_i)$ are very reasonable: For $\nu '(D_i)$ one expects that it depends on the volume of $D_i$ only, and not on the location. This is very well confirmed for the domains $D_1$--$D_5$, which have the same volume, where $\nu '(D_i)$ stays almost constant. According to the results in \cite{AurTag97:p} one expects that $\nu '(D_i)$ increases with increasing volume of $D_i$, for small $\operatorname{vol} (D_i)$, then reaches a plateau, and then finally decreases for very large domains. This behavior is not observed; the values for $\nu '(D_i)$ rather oscillate.
The most striking difference occurs between $\nu ' (D_9)$ and $\nu ' (D_{10})$, because they have approximately the same volume. We furthermore find that the behavior of $\nu '(D_i)$ is completely analogous to that of $a_i$. The behavior of $b(D_i)$ is in perfect accordance with what one expects for a sum of quantum limits which are concentrated on the rectangular part of the billiard. The values increase when moving $D_i$ into the quarter circle, and they increase with increasing volume of $D_i$, as long as $D_i$ lies entirely inside the rectangular part. For $D_{10}$ the parameter $b(D_{10})$ takes an intermediate value between $b(D_{6})$ and $b(D_{7})$, as one expects. The inclusion of the factor $(1-0.29\, E^{-0.07})$ in eq.~(\ref{s1-stadium2}) turned out to be necessary to get satisfactory results. The contribution of $E^{-0.07}$ cannot be neglected in the present energy range because of the small exponent. Without this factor we obtained for some of the domains negative values for $\nu ' (D_i)$, which is impossible because $S_1'(E,A)$ is by definition positive. \BILD{!b} { \PSImagxy{raten_vergleich.ps}{16cm}{11cm}\hspace*{1cm} } {Plot of $S_2(E,\chi_{D_i})$ for the domains $D_1$ and $D_2$ in the stadium billiard. The dashed lines show the fit of the conjectured behavior $ c\, E^{-1/2} \ln \bigl(\tfrac{\operatorname{vol}(\Omega)}{2} E^{1/2}\bigr)$ to $S_2(E,\chi_{D_i})$. The result of the fit shows that the numerical data for the first 6000 expectation values cannot be described with this rate.} {fig:vergleich} This also sheds some light on the limitations of a simple model like (\ref{s1-stadium2}). For the exponent $\beta=0.07$ only the order of magnitude is known for sure, the constant $c$ from $N'' (E)$ might still vary, and nothing is known about the behavior of the higher order contributions to $S_1'$ and $S_1''$. In view of this it is surprising how well this model fits the numerical data.
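The fit of the two remaining free parameters $\nu'(A)$ and $b(A)$ in eq.~(\ref{s1-stadium2}) can be sketched with SciPy's curve_fit; since the model is linear in both parameters, this least-squares fit is stable. The data below are synthetic, generated from the values reported for domain $D_7$:

```python
import numpy as np
from scipy.optimize import curve_fit

def s1_model(E, nu_p, b):
    # eq. (s1-stadium2): c = 0.29 and beta = 0.07 are fixed in advance
    # from the counting function N''(E) = 0.06 * E**0.93
    return (1.0 - 0.29 * E ** -0.07) * nu_p * E ** -0.25 + b * E ** -0.07

# Synthetic data from the parameters reported for domain D_7:
E = np.linspace(100.0, 30000.0, 400)
data = s1_model(E, 0.1071, 0.0395)
(nu_fit, b_fit), _ = curve_fit(s1_model, E, data, p0=[0.1, 0.01])
```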
We believe that this gives very strong support for the underlying conjectures, namely that a density one subsequence of quantum-ergodic eigenfunctions has a rate $S_1'(E,A)\sim cE^{-1/4}$, and the deviations in the rate of $S_1(E,A)$ from this behavior are due to a subsequence of density zero. As mentioned in section \ref{sec:rate-of-quantum-ergodicity}, a behavior $S_2(E,A)\sim cE^{-1/2} \ln \bigl(\tfrac{\operatorname{vol}(\Omega)}{2} E^{1/2}\bigr)$ for the stadium billiard is claimed in \cite{EckFisKeaAgaMaiMue95}. We have tested this both for the small domains $D_1$ and $D_2$, which are not influenced by the bouncing ball modes, and also for some larger domains. However, the resulting fits clearly show that this result does not apply to our numerical data, see fig.~\ref{fig:vergleich}. We also tested if this result applies to the quantum-ergodic subsequence, i.e.\ $S_1'(E,A)\sim cE^{-1/4} \sqrt{\ln \bigl(\tfrac{\operatorname{vol}(\Omega)}{2} E^{1/2}\bigr)}$, by replacing the term $E^{-1/4}$ in equation (\ref{s1-stadium2}) by $E^{-1/4}\sqrt{ \ln \bigl(\tfrac{\operatorname{vol}(\Omega)}{2} E^{1/2}\bigr)}$. Again we find from our numerical data that this possibility is excluded, at least for the energy range under consideration. For the stadium billiard it is known that the asymptotic behavior of the classical autocorrelation $C(\tau )\sim 1/\tau$, which leads to $S_2(E,A)\sim cE^{-1/2} \ln \bigl(\tfrac{\operatorname{vol}(\Omega)}{2} E^{1/2}\bigr)$ according to \cite{EckFisKeaAgaMaiMue95}, sets in rather late. So it would be very interesting to compare the results with those obtained by inserting the numerically computed autocorrelation function in the integral in \eqref{eq:gen-rate}.
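A sketch of how the classical autocorrelation function entering \eqref{eq:gen-rate} might be estimated from equidistant samples of $\chi_D(q(t))$ along a single long trajectory (the estimator and names are ours, not the authors' implementation):

```python
import numpy as np

def autocorrelation(chi_t, dt, tau_max):
    """Estimate C(tau) = <chi(t) chi(t+tau)> - <chi>**2 from equidistant
    samples chi_t of the observable along one long trajectory."""
    mean = chi_t.mean()
    n_lags = int(tau_max / dt)
    C = np.empty(n_lags)
    for k in range(n_lags):
        C[k] = np.mean(chi_t[: len(chi_t) - k] * chi_t[k:]) - mean ** 2
    return dt * np.arange(n_lags), C

# Consistency check with a strictly alternating 0/1 signal:
chi = np.tile([1.0, 0.0], 500)
tau, C = autocorrelation(chi, 1.0, 2.0)
```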
\BILD{tb} { \begin{minipage}{8.5cm} \einstadbild{1643}{a)} \einstadbild{1797}{c)} \end{minipage} \hspace*{0.5cm} \begin{minipage}{8.5cm} \einstadbild{1652}{b)} \einstadbild{1834}{d)} \end{minipage} } {Four examples of the exceptional eigenfunctions showing localization in the rectangular part of the stadium billiard, which are not bouncing ball modes, a) $n=1643$, b) $n=1652$, c) $n=1797$ and d) $n=1834$. } {fig:exceptional-non-bb-modes} We now return to the question of what type these additional subsequences of eigenfunctions are. As additional information for the model, the counting function for the number of states for which $\langle \psi_n , \chi_{D_4} \psi_n\rangle -\tfrac{\operatorname{vol}(D_4)}{\operatorname{vol}(\Omega)}$ is smaller than $-c_{\text{d}} E^{-1/4}$ has been used. For comparison we have carried out the same procedure for the observable $1-\chi_{D_9}$ which corresponds to the complete quarter circle. As expected the bouncing ball modes appeared in both subsequences, but additionally a considerable number of other types of eigenfunctions showed up. In \figref{fig:exceptional-non-bb-modes} we show some examples of such eigenfunctions. They all show a reduced probability density inside the quarter circle, but their structure is essentially different from that of the bouncing ball modes. Their semiclassical origin may be periodic orbits which bounce up and down between the two perpendicular walls for a long time but then leave the neighborhood of the bouncing ball orbits in phase space. At least it seems difficult to associate short unstable periodic orbits to the patterns in the shown states, because the lines of enhanced probability do not always obey the laws of reflection, or they look too irregular.
\begin{table}[b] \vspace*{3.5ex} \begin{center} \renewcommand{\arraystretch}{1.25} \begin{tabular}{|l|c|c|c|}\hline system & domain A & domain B & domain C \\\hline stadium $(a=0.5)$ & $ + 0.111$ & $ + 0.062$ & $ + 0.056$ \\ \hline stadium $(a=1.8)$ & $ + 0.009$ & $ + 0.033$ & $ + 0.095$ \\ \hline stadium $(a=4.0)$ & $ - 0.008$ & $ + 0.031$ & $ + 0.095$ \\ \hline \end{tabular} \end{center} \Caption{Results for $\varepsilon$ of the fit of $S_1^{\text{fit}}(E) = a E^{-1/4+\varepsilon}$ to the numerically obtained $S_1(E,{\chi_{D_i}})$, for stadium billiards with different parameter $a$ for three different domains $A$, $B$ and $C$. Domain $A$ lies within the rectangular part of the billiard, domain $B$ is centered at $x=a$ and domain $C$ is located in the quarter circle.} {tab:fit-different-stadiums} \end{table} A further test of the hypothesis that a density zero subsequence is responsible for the slow rate is provided by varying the length $a$ of the billiard. Here we used the first 2000 eigenfunctions for both the $a=0.5$ and the $a=4.0$ stadium billiard in addition to the results for the $a=1.8$ stadium based on 6000 eigenfunctions. We have chosen three different domains for these three systems: domain $A$ lies within the rectangular part of the billiard, domain $B$ is centered at $x=a$ and domain $C$ is located in the quarter circle. The results for the rate of quantum ergodicity are shown in \tabref{tab:fit-different-stadiums}. For different parameters the quantities $b (D_i)$ change, and therefore also the weights of the different contributions to $S_1(E,A)$ in equation (\ref{s1-stadium2}). For smaller $a$ the relative fraction of the volume of the rectangular part, $\operatorname{vol} (R)/\operatorname{vol} (\Omega)$, becomes smaller. Therefore one expects that for smaller $a$ the influence of the not quantum-ergodic subsequences on $S_1(E,\chi_D)$ becomes stronger in the rectangular part, and weaker in the quarter circle.
This is nicely seen in the numerically found behavior of the rate for the domains $A$ and $C$ shown in table \ref{tab:fit-different-stadiums}, which confirms our hypothesis. To summarize our results for the stadium billiard, we have shown the existence of a large, but density zero, subsequence of eigenfunctions which have an enhanced probability distribution on the rectangular part of the billiard but have a different structure than the bouncing ball modes. We demonstrated that the observed effects are due to the influence of this subsequence of density zero. This subsequence shows a different behavior than the majority of quantum-ergodic eigenfunctions, for which our results imply a uniform rate of $E^{-1/4}$. Clearly we cannot decide if this exceptional subsequence will ultimately be not quantum-ergodic, or if it is a quantum-ergodic subsequence with an exceptional behavior of the rate. We can only say that on the presently studied energy range up to $E\approx 30 000 $, i.e.\ up to the $6000$th eigenfunction, they do not behave quantum-ergodically. \subsubsection{Cardioid billiard} The cardioid billiard is probably the most ``generic'' one of our three billiards, in the sense that it possesses no two-dimensional family of periodic orbits like the bouncing ball orbits. One might therefore expect a priori a better rate of quantum ergodicity than for the other billiards. We have computed $S_1(E,\chi_{D_i})$ for five small domains, see \figref{fig:billiard-domains}c), by using the first 6000 eigenfunctions up to energy $E\approx 32 000$, and for three larger domains, see \figref{fig:cardi-qerg-S1-zusatz1}, by using the first 2000 eigenfunctions. The results are displayed in figs.~\ref{fig:cardi-qerg-S1} and \ref{fig:cardi-qerg-S1-zusatz1}. To determine the rate a fit of $S_1^{\text{fit}}(E) = a E^{-1/4+\varepsilon}$ has been performed, and the resulting values for $a$ and $\varepsilon$ are listed in table \ref{tab:qerg-S1-cardi}.
We find that domain $D_3$ gives the lowest rate of quantum ergodicity for the small domains $D_1$--$D_5$. This is caused by a considerable number of eigenfunctions showing an enhanced probability as in \figref{fig:stad-waves}e) along the vertical orbit $\overline{AB}$. For domains $D_1$, $D_2$ we also find a slower rate than for the other regions $D_4$, $D_5$; in this case the slower rate seemingly cannot be attributed to one type of localized eigenfunctions. \BILD{tbh} { \vspace*{1ex} \PSImagxy{cardi_area_kumulativ1.ps}{16cm}{11cm}\hspace*{1cm} } {Plot of $S_1(E,{\chi_{D_i}})$ for different domains $D_i$ for the cardioid billiard using the first 6000 eigenfunctions, see fig.~\ref{fig:billiard-domains}c) for the location of the domains $D_i$. The inset shows the same curves in double--logarithmic representation together with a fit of $S_1^{\text{fit}}(E) = a E^{-1/4+\varepsilon}$, eq.~\eqref{eq:S1-fit-fct}.} {fig:cardi-qerg-S1} The larger domains show a slower rate than the small domains, but the rate is not monotonically decreasing with the area of the domain. The rate for the largest domain $D_8$ is even of the same order of magnitude as the one for $D_3$, especially if one takes the smaller energy range for $D_8$ into account. This slower rate is probably caused by the existence of different not quantum-ergodic subsequences with quantum limits $\mu _k$ in different regions of the billiard. For each of the domains the influence of these subsequences is different and therefore one observes different rates. \begin{table}[b] \vspace*{1cm} \begin{center} \renewcommand{\arraystretch}{1.25} \begin{tabular}{|c|c||c||c|}\hline domain & rel.
area & $\varepsilon$ & $a$ \\ \hline 1 & 0.01722 & $ +0.047$ & 0.028 \\ 2 & 0.01722 & $ +0.039$ & 0.037 \\ 3 & 0.01722 & $ +0.064$ & 0.046 \\ 4 & 0.01722 & $ +0.007$ & 0.048 \\ 5 & 0.01722 & $ +0.009$ & 0.042 \\\hline 6 & 0.18674 & $ +0.098$ & 0.125 \\ 7 & 0.33104 & $ +0.115$ & 0.140 \\ 8 & 0.50930 & $ +0.071$ & 0.213 \\ \hline \end{tabular} \end{center} \Caption{Rate of quantum ergodicity obtained from a fit of $S_1^{\text{fit}}(E) = a E^{-1/4+\varepsilon}$ to $S_1(E,{\chi_{D_i}})$ for the cardioid billiard with domains $D_i$ as shown in figs.~\ref{fig:billiard-domains}c) and \ref{fig:cardi-qerg-S1-zusatz1}. }{tab:qerg-S1-cardi} \end{table} \BILD{tbh} { \vspace*{0.75cm} \PSImagxy{cardi_area_kumulativ1_zusatz1.ps}{16cm}{10.0cm}\hspace*{1cm} } {Plot of $S_1(E,{\chi_{D_i}})$ for larger domains for the cardioid billiard using the first 2000 eigenfunctions. Also shown are fits to eq.~\eqref{eq:S1-fit-fct} for the corresponding energy regions. } {fig:cardi-qerg-S1-zusatz1} A quantitative test in a similar way as for the other billiards using a model for $S_1(E,A)$ is very difficult, because the deviations from the conjectured optimal rate are not due to one subsequence only. But the results for $D_4$ and $D_5$ clearly show that here as well one has a density one subsequence of quantum-ergodic eigenfunctions with rate $S_1'(E,\chi_D)\sim \nu '(D)E^{-1/4}$. We hope to return to the problem of determining the not quantum-ergodic subsequences and their quantum limits in the future. The cardioid billiard is the only system we have studied to which the result \eqref{eq:rate-hypsyst} should be applicable. But for most of the domains the rate is much slower than the predicted one. Only the domains 4 and 5 show the expected rate. Therefore we have computed for these domains the factor $\rho(A)$ in eq.~\eqref{eq:rate-hypsyst}.
For the computation of $\rho(A)$ the variance of $\langle \chi_{D_i} \rangle_l -\frac{\operatorname{vol}(D_i)}{\operatorname{vol}(\Omega)}$ as a function of $l$ has been computed using trajectory segments of length $l$ of a generic trajectory $\{q(t)\}$. The quantity $ \langle \chi_{D_i} \rangle_l= \tfrac{1}{l}\int_0^l \chi_{D_i}(q(t)) \, \text{d} t$ is the relative length of the trajectory segment lying in the domain $D_i$. By ergodicity we have $ \lim_{l\to\infty} \langle \chi_{D_i} \rangle_l = \frac{\operatorname{vol}(D_i)}{\operatorname{vol}(\Omega)}$. The variance of $\langle \chi_{D_i} \rangle_l -\frac{\operatorname{vol}(D_i)}{\operatorname{vol}(\Omega)}$ decreases like $\rho(A) l^{-1}$. Using the corresponding results in equation \eqref{eq:rate-hypsyst} we obtain $S_2(E,\chi_{D_4})=0.0062\, E^{-1/2}$ and $S_2(E,\chi_{D_5})=0.0074\, E^{-1/2}$. These numbers have to be compared with the result of a fit $S_2^{\text{fit}}(E,A)$ to $S_2(E,\chi_{D_i})$. We obtain $S_2^{\text{fit}}(E,\chi_{D_4})=0.0036\, E^{-0.47}$ and $S_2^{\text{fit}}(E,\chi_{D_5})=0.0031\, E^{-0.48}$. One sees that the theoretical prediction is too large by a factor of approximately 2. This deviation might be related to the factor $g$ in \eqref{eq:rate-hypsyst}, which counts the mean multiplicities in the classical length spectrum. In the cardioid billiard the asymptotic value $g=2$ is reached very late; for the shorter periods one rather has $g\approx 1$, which would lead to a better agreement of eq.~\eqref{eq:rate-hypsyst} with the data for $D_4$ and $D_5$. For a better understanding it seems necessary to check in detail whether any of the assumptions leading to eq.~\eqref{eq:rate-hypsyst} is not fulfilled for the domains of the cardioid billiard. It would also be very interesting to investigate if the slower rates can be described using the expression in terms of the classical correlation function. We will leave these questions for a separate study.
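The variance computation for $\rho(A)$ described above can be sketched as follows; in place of a billiard trajectory we feed in i.i.d.\ 0/1 samples, for which $l\cdot\operatorname{Var}[\langle\chi\rangle_l]$ equals the single-sample variance $p(1-p)$, as a consistency check (function and variable names are ours):

```python
import numpy as np

def l_times_variance(chi_t, dt, segment_lengths):
    """For each segment length l, cut the sampled observable into disjoint
    segments, form the segment averages <chi>_l, and return l * Var[<chi>_l],
    which should approach the constant rho(A) for an ergodic trajectory."""
    out = []
    for l in segment_lengths:
        n = int(l / dt)                     # samples per segment
        m = len(chi_t) // n                 # number of disjoint segments
        seg = chi_t[: m * n].reshape(m, n).mean(axis=1)
        out.append(l * seg.var())
    return np.array(out)

# Consistency check with i.i.d. 0/1 samples (p = 0.5, so p*(1-p) = 0.25):
rng = np.random.default_rng(0)
chi = rng.integers(0, 2, 200000).astype(float)
est = l_times_variance(chi, 1.0, [10.0, 20.0])
```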
\subsubsection{The influence of the boundary} \label{sec:boundary-effects} \BILD {tbh} { \begin{center} \begin{minipage}{7cm} \hspace*{-2.0cm}\PSImagx{stad18dd_sum250.ps}{9cm} \vspace*{-7cm} a) \vspace*{7cm} \end{minipage} \hspace*{-1cm} \begin{minipage}{6cm} \PSImagxy{stad18dd_sum250_schnitt.ps}{8cm}{7cm} \PSImagxy{stad18dd_sum1000_schnitt.ps}{8cm}{7cm} \end{minipage} \end{center} } {In a) we show a three-dimensional plot of the sum $\Psi_E(x,y)=\frac{1}{N(E)} \sum_{E_n\le E} |\psi_n(x,y)|^2$ involving the first $250$ eigenfunctions of the $a=1.8$ stadium with odd--odd symmetry. The pictures on the right show a cross section $\Psi_E(1,y)$ using the first b) 250 and c) 1000 eigenfunctions. The dashed curves in b) and c) display the evaluation using the first two terms in formula \eqref{eq:Hoermander}. These results are used to explain the fast rate in the low energy range for the stadium billiard for large domains. } {fig:wfk-sum-stadion} In all three billiards we observe the phenomenon that for large domains $S_1(E,\chi_D)$ decays faster at low energies than at high energies. This can be seen in \figref{fig:cos-qerg-S1-zusatz1} for domain $D_8$ in the cosine billiard, in \figref{fig:stad-qerg-S1-large-domain} for domains $D_7$ and $D_8$ in the stadium billiard and in \figref{fig:cardi-qerg-S1-zusatz1} for domains $D_6$, $D_7$ and $D_8$ in the cardioid billiard. The other large domains we studied showed the same behavior. The only exceptions are the domains $D_9$ in the cosine billiard and in the stadium billiard, which consist of the whole rectangular part. For these domains no faster rate at low energies is visible. Qualitatively this behavior can be understood by the vanishing of the probability density $|\psi_n(q)|^2$ of the eigenstates at the boundary due to the Dirichlet boundary conditions.
Because of the normalization of $\psi_n(q)$ the reduced probability density at the boundary has to be compensated by an enhancement of the probability density inside the billiard, which leads to larger oscillations of the probability density near the boundary. Let us assume that this compensation of the probability density takes place in a strip along the boundary a few de Broglie wavelengths wide. Then the integral of the probability density $|\psi_n(q)|^2$ over a domain $D$ feels the influence of the boundary only up to a certain energy, proportional to the inverse square of the distance between $D$ and the boundary $\partial\Omega $. Furthermore the boundary influence will be proportional to the overlap of $D$ and the strip at the boundary. This overlap decreases like $1/\sqrt{E_n}$, and therefore $S_1(E,\chi_D)$ should decrease with such a rate at low energies. So the assumption that the compensation takes place in a small strip along the boundary leads exactly to the behavior we observe. Moreover a domain like $D_9$ which extends to the boundary $\partial\Omega $ will not feel any influence, because the boundary effect is compensated entirely inside this domain. To justify our assumption on the range of the boundary influence we refer to the following result on the asymptotic behavior of the summed probability densities on a two-dimensional Riemannian manifold with $C^{\infty}$-boundary \cite[Theorem 17.5.10]{Hoe85a}: \begin{equation} \label{eq:Hoermander} \sum_{E_n\le E} \left| \psi_n(q) \right|^2 = \frac{1}{4\pi} E - \frac{1}{4\pi} \frac{J_1 \left(2d(q) \sqrt{E}\right)}{d(q)} \sqrt{E} + R(q,E) \;\;, \end{equation} where $d(q)$ is the shortest distance of the point $q\in\Omega$ to the boundary. The remainder $R(q,E )$ satisfies the estimate $|R(q,E )|\leq C\sqrt{E}$.
The second term in \eqref{eq:Hoermander} describes the influence of the boundary: for $d(q)\to 0$ the term tends to $-E/(4\pi )$ and cancels the contribution from the first term, such that the boundary conditions are fulfilled. In \figref{fig:wfk-sum-stadion}a) the normalized sum \begin{equation}\label{eq:meanwave} \Psi_E(x,y) = \frac{1}{N(E)} \sum_{E_n\leq E}|\psi_n(x,y)|^2\,\, , \end{equation} is displayed for the stadium billiard, using the first $250$ eigenfunctions. One nicely sees how the probability density is forced to vanish at the boundary, and how the compensation leads to large oscillations near the boundary. In \figref{fig:wfk-sum-stadion} b) and c) we show two cross sections through the function (\ref{eq:meanwave}) at two different energies, and compare them to the result one gets from the first two terms on the right hand side of (\ref{eq:Hoermander}). The agreement is quite impressive, especially near the boundary ($y=0$ and $y=1$). So although the stadium billiard does not have a $C^{\infty}$-boundary, the result \eqref{eq:Hoermander} seems to remain valid. One furthermore observes that with higher energies the $y$-range on which the agreement is excellent increases. The averaged probability density (\ref{eq:meanwave}) shows exactly the behavior we assumed for the individual wavefunctions, in order to explain the fast rate of quantum ergodicity at low energies for domains near the boundary. The influence of the Dirichlet boundary condition is concentrated near the boundary, and it decays on a length scale proportional to the de Broglie wavelength. So with the help of eq.~\eqref{eq:Hoermander} one gets a good qualitative understanding of the boundary influence on the rate of quantum ergodicity.
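The two-term approximation in \eqref{eq:Hoermander}, which produces the dashed curves in \figref{fig:wfk-sum-stadion}, is straightforward to evaluate numerically (SciPy's Bessel function $J_1$ assumed; the normalization by Weyl's law for the plotted cross sections is omitted here):

```python
import numpy as np
from scipy.special import j1

def psi_sum_approx(d, E):
    """First two terms of eq. (eq:Hoermander) for sum_{E_n <= E} |psi_n(q)|^2,
    where d is the distance of the point q to the boundary:
        E/(4 pi) - sqrt(E) * J_1(2 d sqrt(E)) / (4 pi d)."""
    k = np.sqrt(E)
    return E / (4 * np.pi) - k * j1(2 * d * k) / (4 * np.pi * d)
```

At the boundary ($d\to 0$) the two terms cancel, enforcing the Dirichlet condition; far from the boundary the Weyl term $E/(4\pi)$ dominates up to an oscillatory correction that decays with both distance and energy.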
In order to try to get a quantitative understanding we used eq.~\eqref{eq:Hoermander} to derive, as in \cite{AurTag97:p}, a mean eigenfunction which incorporates the boundary influence \begin{equation}\label{eq:mod-wave} |\psi_{n}(q)|^2\approx \frac{1}{\operatorname{vol}(\Omega) - \frac{\operatorname{vol}(\partial\Omega)}{2\sqrt{E_n}}}\, \left(1-J_0(2d(q)\sqrt{E_n})\right) \,\, . \end{equation} Integrating this expression over a domain $D$ should give the mean expectation value of $\chi_D$ plus the corrections due to the boundary. By incorporating this into $S_1(E,\chi_D)$ one obtains an expression which we compared with our numerical data. Although \eqref{eq:mod-wave} implies a faster decay rate at low energies, it is not as strong as the numerically observed one. This deviation must be caused by considerable fluctuations of the boundary influence on the individual states $\psi_n$ around the mean influence described by \eqref{eq:Hoermander} and \eqref{eq:mod-wave}. \subsection{Quantum ergodicity in momentum space} Up to here we have investigated the behavior of the wavefunctions in position space only. Now we turn our attention to the rate of quantum ergodicity in momentum space, which is studied here numerically for the first time. Quantum ergodicity predicts that the angular distribution of the momentum probability distribution $|\widehat{\psi}_n(p)|^2$ tends to $1/(2\pi)$ in the weak sense, see eq.~\eqref{eqn:qet-ft-version}. Therefore we study an observable with symbol $\chi_{C(\theta,\Delta\theta)}(p)$ whose expectation value gives the probability of finding the particle with momentum direction in the interval $]\theta -\Delta\theta /2,\theta +\Delta\theta /2[$.
Recall that $\chi_{C(\theta,\Delta\theta)}(p)$ denotes the characteristic function of the circular sector $C(\theta,\Delta\theta)=\{p\in\mathbb{R}^2\, |\, \arctan (p_y/p_x)\in ]\theta -\Delta\theta /2,\theta +\Delta\theta /2[\}$, and the classical mean value of $\chi_{C(\theta,\Delta\theta)}(p)$ is $\Delta\theta/(2\pi )$. Only eigenfunctions of odd parity of the not desymmetrized systems are considered here, due to our method of computing the Fourier transformation directly from the normal derivative $u_n(\omega)$ of the eigenfunction $\psi_n(q)$. From Green's theorem one easily finds the formula \begin{equation} \widehat{\psi}_n(p)=\frac{1}{p^2-E_n}\,\frac{1}{2\pi} \int\limits_{\partial\Omega}\text{e}^{-\text{i} q(\omega )p}\,u_n(\omega)\, \text{d}\omega \,\, , \end{equation} where $q(\omega )$ denotes a point on the boundary $\partial\Omega$. The advantage of this formula is that it allows one to compute the Fourier transform directly from $u_n(\omega)$, which can be obtained using the boundary integral method. For desymmetrized systems, like the ones considered here, one uses an appropriate Green's function which vanishes at the lines of symmetry, and therefore removes them from the boundary integral, see e.g.~\cite{SieSte90b}. This reduces the computational effort, but one does not get the normal derivatives on these parts of the boundary of the desymmetrized system. The odd-parity states form a subsequence of positive density, and since the rate for all eigenfunctions cannot be faster than the one for a subsequence of positive density, our results provide a lower bound for the rate of the full system; they are therefore sufficient to rule out the possibility of a totally different behavior in momentum space than in position space. The time reversal invariance leads for the Fourier transformed eigenfunctions to the symmetry $\widehat{\psi}_n(-p)=\overline{\widehat{\psi}}_n(p)$.
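The formula is easy to test on a separable toy example. The following sketch (our own check; it uses a Dirichlet mode of the unit square, $\psi = 2\sin(2\pi x)\sin(\pi y)$ with $E = 5\pi^2$, whose normal derivative is known in closed form, rather than an eigenfunction obtained by the boundary integral method) compares the boundary integral with the directly computed Fourier transform at one momentum $p$:

```python
import numpy as np
from scipy.integrate import trapezoid

# Toy mode of the unit square: psi = 2 sin(2 pi x) sin(pi y), E = 5 pi^2.
E = 5.0 * np.pi**2
px, py = 1.0, 2.0                # evaluation point p, with p^2 != E
s = np.linspace(0.0, 1.0, 4001)  # parameter along each edge

def edge(qx, qy, u):
    """One edge's contribution to the boundary integral of exp(-i q.p) u."""
    return trapezoid(np.exp(-1j * (qx * px + qy * py)) * u, s)

# Outward normal derivatives u = dpsi/dn on the four edges.
contour = (edge(s, 0.0 * s,       -2*np.pi*np.sin(2*np.pi*s))   # y = 0
         + edge(s, 0.0 * s + 1.0, -2*np.pi*np.sin(2*np.pi*s))   # y = 1
         + edge(0.0 * s, s,       -4*np.pi*np.sin(np.pi*s))     # x = 0
         + edge(0.0 * s + 1.0, s,  4*np.pi*np.sin(np.pi*s)))    # x = 1
psi_hat_boundary = contour / (px**2 + py**2 - E) / (2.0 * np.pi)

# Direct Fourier transform, using the separability of psi.
fx = trapezoid(np.exp(-1j * px * s) * 2.0 * np.sin(2*np.pi*s), s)
fy = trapezoid(np.exp(-1j * py * s) * np.sin(np.pi*s), s)
psi_hat_direct = fx * fy / (2.0 * np.pi)
```

Both evaluations agree to high accuracy, as they must by Green's theorem.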
Therefore $|\widehat{\psi}_n(-p)|^2=|\widehat{\psi}_n(p)|^2$, and this reduces the angle interval we have to study to $[0,\pi[$. The additional reflection symmetries in the considered billiards further reduce the relevant angle interval to $[0,\pi /2[$. For our numerical computations we have chosen five equidistant intervals, centered at $\theta_i=(i-1/2) \frac{\pi}{10}$ with $i=1,\ldots,5$ of width $\Delta \theta=\frac{\pi}{10}$. As in the case of quantum ergodicity in coordinate space, see eq.~\eqref{eq:an-diff} and fig.~\ref{fig:stad-qerg-area}, one observes large fluctuations of $\langle \psi_n ,\chi_{C(\theta,\Delta\theta)} \psi_n\rangle-\Delta\theta/(2\pi ) $ around 0. Therefore we again consider the cumulative version \eqref{eqn:qet-sum-version} of the quantum ergodicity theorem, which reads in this case \begin{equation} S_1(E,\Op{\chi_{C(\theta,\Delta\theta)}}) = \frac{1}{N(E)} \sum_{E_n\le E} \Bigg| \; \int\limits_{C(\theta,\Delta\theta)} |\widehat{\psi}_{n}(p)|^2 \; \text{d} p - \frac{\Delta\theta}{2\pi}\Bigg| \to 0 \quad \text{ for } E\to \infty\;\; . \end{equation} \begin{figure}[tbh] { } \Caption{Plot of $S_1(E,\Op{\chi_{C(\theta_i,\Delta\theta)}})$ for $\theta_i=(i-1/2) \frac{\pi}{10}$ with $i=1,\ldots,5$ and $\Delta \theta=\frac{\pi}{10}$ for the stadium billiard using the first 2000 eigenfunctions.} {fig:ft_vert_serie-stadium} \vspace*{1ex} { } \Caption{Plot of $S_1(E,\Op{\chi_{C(\theta_i,\Delta\theta)}})$ for $\theta_i=(i-1/2) \frac{\pi}{10}$ with $i=1,\ldots,5$ and $\Delta \theta=\frac{\pi}{10}$ for the cardioid billiard using the first 2000 eigenfunctions.} {fig:ft_vert_serie-cardioid} \end{figure} The results for $S_1(E,\Op{\chi_{C(\theta_i,\Delta\theta)}})$ are shown in \figref{fig:ft_vert_serie-stadium} for the stadium billiard and in \figref{fig:ft_vert_serie-cardioid} for the cardioid billiard. In each case 2000 eigenfunctions have been used. 
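Once the expectation values are available, computing the cumulative quantity $S_1$ is a one-liner. The sketch below (synthetic data, not the billiard eigenfunctions) also illustrates that individual deviations decaying like $E_n^{-1/4}$ (with $E_n\sim n$ by Weyl's law) reproduce the expected $S_1(E)\sim E^{-1/4}$ behavior:

```python
import numpy as np

def cumulative_s1(expectations, classical_mean):
    """Running average of |<psi_n, A psi_n> - classical mean|, assuming the
    expectation values are ordered by increasing energy."""
    dev = np.abs(np.asarray(expectations) - classical_mean)
    return np.cumsum(dev) / np.arange(1, dev.size + 1)

# Synthetic quantum-ergodic fluctuations with d_n ~ E_n^(-1/4), E_n ~ n.
rng = np.random.default_rng(0)
n = np.arange(1, 200_001)
d_n = rng.normal(0.0, 1.0, n.size) * n**-0.25
s1 = cumulative_s1(0.25 + d_n, 0.25)
slope = np.log(s1[-1] / s1[999]) / np.log(n[-1] / n[999])   # ~ -1/4
```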
For the cardioid billiard the inset shows a double logarithmic representation together with the fits of $S_1^{\text{fit}} (E)$, eq.~\eqref{eq:S1-fit-fct}. For the cosine billiard no computations of the rate $S_1(E,\Op{\chi_{C(\theta_i,\Delta\theta)}})$ in momentum space have been performed. \begin{table}[tb] \renewcommand{\arraystretch}{1.25} \begin{center} \begin{tabular}{|c|c|c|}\hline system & domain & $\varepsilon$ \\\hline & 1 & $0.15 $ \\ & 2 & $0.12 $ \\ stadium & 3 & $0.15 $ \\ & 4 & $0.09 $ \\ & 5 & $0.18 $ \\ \hline & 1 & $0.050 $ \\ & 2 & $0.075 $ \\ cardioid & 3 & $0.026 $ \\ & 4 & $0.079 $ \\ & 5 & $0.076 $ \\ \hline \end{tabular} \end{center} \Caption{Rate of quantum ergodicity obtained from a fit of $S_1^{\text{fit}}(E) = a E^{-1/4+\varepsilon}$ to the numerically obtained function $S_1(E,\Op{\chi_{C(\theta_i,\Delta\theta)}})$ for the different systems and angle sectors $C(\theta_i,\Delta\theta)$. }{tab:qerg-ft-S1} \end{table} As in position space one expects that the rate is strongly influenced by not quantum-ergodic subsequences of eigenfunctions. For the bouncing ball modes in the stadium billiard, whose momentum direction is $\pi/2$, one has \begin{equation} \lim_{E_{n''}\to\infty}\int\limits_{C(\theta,\Delta\theta)} |\widehat{\psi}_{n''}(p)|^2 \; \text{d} p = \begin{cases} 0\,\, &\text{for}\quad \frac{\pi}{2}\notin \;]\theta-\Delta\theta,\theta+\Delta\theta[\\ 1\,\, &\text{for}\quad \frac{\pi}{2}\in \;]\theta-\Delta\theta,\theta+\Delta\theta[ \end{cases}\,\, , \end{equation} and so the coefficient $\nu_{\text{bb}} ''(\chi_{C(\theta_i,\Delta\theta)})$ in the model \eqref{mod-rate} for $S_1(E,\chi_{C(\theta_i,\Delta\theta)})$ is given by \begin{equation} \nu_{\text{bb}} ''(\chi_{C(\theta_i,\Delta\theta)})= \begin{cases} \frac{1}{20}\,\, & \text{for}\quad i=1,\ldots,4 \\ \frac{19}{20}\,\, & \text{for}\quad i=5 \end{cases}\,\, . \end{equation} The results for the rate of quantum ergodicity, characterized by $\varepsilon$, are listed in table \ref{tab:qerg-ft-S1}.
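The coefficients $\nu_{\text{bb}}''$ follow from simple sector arithmetic: the bouncing ball orbits run vertically, so their momentum direction $\pi/2$ is picked up (after symmetry reduction) only by the fifth sector, which contributes $|1-\frac{1}{20}|=\frac{19}{20}$, while the other sectors contribute $|0-\frac{1}{20}|=\frac{1}{20}$. A minimal check of this arithmetic (our own illustration):

```python
import numpy as np

# Sectors theta_i = (i - 1/2) pi/10, i = 1..5, of width dtheta = pi/10.
dtheta = np.pi / 10.0
theta_bb = np.pi / 2.0   # momentum direction of the bouncing ball orbits
nu = []
for i in range(1, 6):
    theta_i = (i - 0.5) * np.pi / 10.0
    # Limit of the sector probability for the bouncing ball modes.
    limit = 1.0 if abs(theta_bb - theta_i) < dtheta else 0.0
    nu.append(abs(limit - dtheta / (2.0 * np.pi)))
```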
It turns out that the rate is slower than the rate of quantum ergodicity for the small domains in configuration space. Moreover the agreement of $S_1(E,\Op{\chi_{C(\theta_i,\Delta\theta)}})$ with the fit is not as good as in the case of $S_1(E,\chi_D)$; in particular the fluctuations of $S_1(E,\Op{\chi_{C(\theta_i,\Delta\theta)}})$ are much larger than in position space. In the stadium billiard the interval $5$, which corresponds to the direction of the bouncing ball orbits, shows the slowest rate. But as we already noted in the discussion of the rate in position space, the bouncing ball modes alone cannot cause such a slow rate, because their counting function increases only as $E^{3/4}$. So a considerable number of the additional not quantum-ergodic states which are responsible for the slow rate in position space must also have an enhanced momentum density around $\pi/2$. But the slow rates for the other angular intervals indicate that not all not quantum-ergodic states show this behavior in momentum space. For both billiards one observes that the order of magnitude of $\varepsilon$ in momentum space is the same as for the large domains in position space. Therefore the results are compatible with the results in position space, but the large fluctuations indicate that one has to go to higher energies in momentum space than in position space. \subsection{Fluctuations of expectation values}\label{sec:fluct-of-exp-values} Another aspect of great interest is how the expectation values $\langle \psi_n,\text{A} \psi_n\rangle$ fluctuate around their mean value $\overline{\sigma(\text{A})}$. Since the mean fluctuations decrease for large $n$, one has to consider the distribution of \begin{equation}\label{eq:norm-fluc} \xi_n=\frac{\langle \psi_n,\text{A} \psi_n\rangle-\overline{\sigma(\text{A})}} {\sqrt{\tilde{S}_2(E_n,\text{A})}} \;\;.
\end{equation} Here $\tilde{S}_2(E,\text{A})=\Xi \, S_2(E,\text{A})$ with $\Xi$ being a correction necessary to ensure that the distribution of $\xi_n$ has unit variance; see below for an explanation. So the question is whether a limit distribution $P(\xi)$ of $\xi_n$ exists in the weak sense, i.e.\ \begin{equation}\label{def:limit-distribution} \lim_{N\to\infty} \frac{1}{N} \sum_{n=1}^N g(\xi_n) = \int\limits_{-\infty}^\infty g(\xi) P(\xi) \; \text{d} \xi \;\;, \end{equation} where $g(\xi)$ is a bounded continuous function. It is natural to conjecture that this distribution tends to a Gaussian normal distribution, \begin{equation} \label{eq:normal-distribution} P(\xi) = \frac{1}{\sqrt{2\pi}} \exp(-\xi^2/2) \;\;, \end{equation} as in random matrix theory \cite[section VII]{BroFloFreMelPanWon81}. Note that this is a conjecture for every observable, i.e.\ the asymptotic distribution should be independent of the special observable under investigation. For hyperbolic surfaces a study of $P(\xi)$ for an observable in position space is contained in \cite{AurTag97:p}, where good agreement with a Gaussian normal distribution was observed. In \cite{EckFisKeaAgaMaiMue95} $P(\xi)$ was studied for the Baker's map and the hydrogen atom in a strong magnetic field, and a fair agreement with a Gaussian was found. \BILD{htb} { \begin{center} \vspace*{-1.0cm} \PSImagxy{stad_gauss_4_cum.ps}{14cm}{9.75cm} \vspace*{2.5ex} \PSImagxy{odd_n__gauss_4_cum_special.ps}{14cm}{9.75cm} \end{center} } {Cumulative distribution of $\xi_n=(\langle \psi_n,A\psi_n\rangle - \overline{\sigma (A)})/\sqrt{\tilde{S}_2(E_n,A)}$ for the stadium billiard for domain 4, $A=\chi_{D_4}$, and for the cardioid billiard with observable $A=\chi_{D_4} - \chi_{D_5}$. In both cases we have chosen $n\in[2000,6000]$. The dashed curve corresponds to the cumulative normal distribution.
The insets show the distribution of $\xi_n$ together with the normal distribution with zero mean and unit variance, eq.~\eqref{eq:normal-distribution} (dashed curve).} {fig:qerg-gauss} \BILD{!t} { \vspace*{1ex} \begin{center} \PSImagxy{odd_n__ft_vert_serie_5_gauss_3_cum.ps}{14cm}{10cm} \end{center} } {Cumulative distribution of $\xi_n=(\langle \psi_n,A\psi_n\rangle - \overline{\sigma (A)})/\sqrt{\tilde{S}_2(E_n,A)}$ for the cardioid billiard, for the observable $\chi_{C(\theta,\Delta\theta)}(p)$ in momentum space with $\theta=5\pi/20$ and $\Delta \theta=\pi/10$. The dashed curve corresponds to the cumulative normal distribution. The insets show the distribution of $\xi_n$ together with the normal distribution with zero mean and unit variance, eq.~\eqref{eq:normal-distribution} (dashed curve).} {fig:ft-qerg-gauss} However, already from the plots of $d_i(n)$ shown in \figref{fig:stad-qerg-area} it is clear that the fluctuations are not symmetrically distributed around zero, but have more peaks with large positive values. The reason is that $d_i(n)= \langle \psi_n,\chi_{D_i} \psi_n\rangle- \tfrac{\operatorname{vol}(D_i)}{\operatorname{vol}(\Omega )}$ has to satisfy the inequality \begin{equation} -\frac{\operatorname{vol}(D_i)}{\operatorname{vol}(\Omega )} \le \langle \psi_n,\chi_{D_i} \psi_n\rangle-\frac{\operatorname{vol}(D_i)}{\operatorname{vol}(\Omega )} \le 1 -\frac{\operatorname{vol}(D_i)}{\operatorname{vol}(\Omega )} \;\;. \end{equation} This already indicates that the approach to an asymptotic Gaussian behavior could be rather slow. Therefore we have tested additionally for the cardioid billiard the observable $A=\chi_{D_4}-\chi_{D_5}$ where the expectation values fluctuate symmetrically around zero, and one expects a faster approach to a Gaussian behavior. 
In \figref{fig:qerg-gauss}a) we show the cumulative distribution \begin{equation} I_N(\xi) = \frac{1}{N}\; \#\left\{ n \;\; | \;\; \xi_n < \xi \right\} \end{equation} for domain $D_4$ of the stadium billiard, and in \figref{fig:qerg-gauss}b) $I_N(\xi )$ is shown for the observable $A=\chi_{D_4}-\chi_{D_5}$ in the case of the cardioid billiard. In both cases all values of $\xi_n$ with $n\in [2000,6000]$ have been taken into account, giving $N=4000$. For the rate $S_2(E,\chi_D)$ we used the result of a fit to $S_2^{\text{fit}}(E) = a E^\alpha$. The insets show the corresponding distributions of $\xi_n$ in comparison with the normal distribution, eq.~\eqref{eq:normal-distribution}. Notice that no further fit of the mean or the variance of the Gaussian has been made. Example a) is the case for which we have found the worst agreement with a Gaussian (of all the small domains we have tested). The observable chosen for b) gives very good agreement with the Gaussian distribution. In the case of $\chi_{D_4}$ in the stadium billiard there is a significant peak around $\xi=-2$, which is due to the bouncing ball modes, for which $\langle \psi_{n''}, \chi_{D_4} \psi_{n''} \rangle$ is approximately zero, see \figref{fig:stad-qerg-area}. Therefore one has a larger fraction with negative $\xi_{n''}$. For the distribution in the case of the observable $A=\chi_{D_4}-\chi_{D_5}$ of the cardioid billiard we obtain a significance level of $23\%$ for the Kolmogorov--Smirnov test (see e.g.\ \cite{PreTeuVetFlan92}) with respect to the cumulative normal distribution. We also studied the distribution of $\xi_n$ for the observables $a(p,q)=a(p)=\chi_{C(\theta,\Delta\theta)}(p)$ in momentum space. For the stadium billiard the computed distributions show clear deviations from a Gaussian in the considered energy range, as one already expects from fig.~\ref{fig:ft_vert_serie-stadium}.
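The Kolmogorov--Smirnov comparison is straightforward to reproduce. The sketch below (synthetic Gaussian fluctuations with $\alpha=-1/2$, not the billiard data) builds the normalized fluctuations $\xi_n$, including the variance correction $\Xi=\alpha+1$, and tests them against the standard normal distribution:

```python
import numpy as np
from scipy import stats

# Fluctuations with local variance E_n^(-1/2), E_n ~ n; the fitted global
# rate is then S_2(E) ~ 2 E^(-1/2), and Xi = alpha + 1 = 1/2 restores
# unit variance of the normalized fluctuations xi_n.
rng = np.random.default_rng(1)
n = np.arange(2000, 6001).astype(float)
raw = rng.normal(0.0, 1.0, n.size) * n**-0.25
s2_global = 2.0 * n**-0.5
xi = raw / np.sqrt(0.5 * s2_global)
ks = stats.kstest(xi, "norm")   # distance to the cumulative normal
```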
The best result was obtained for the cardioid billiard for the interval given by $i=3$ (with $\theta_i= (i-1/2) \frac{\pi}{10}$ and $\Delta \theta=\frac{\pi}{10}$) and is shown in \figref{fig:ft-qerg-gauss}. The agreement is quite good; the Kolmogorov--Smirnov test gives a significance level of $29\%$. There is one subtle point concerning the variance of the distribution of $\xi_n$. Since $S_2(E,A)$ does not represent a local variance around $E$, but a global one, it is necessary to take this into account in order to obtain a unit variance for the fluctuations. If the rate behaves as $S_2(E,A)=a E^\alpha$ then the correction $\Xi$ is given by $\Xi=\alpha+1$, e.g.\ for $\alpha=-1/2$ we have $\Xi=1/2$. See \cite{AurBaeSte97} for a more detailed discussion of this point in the case of the distribution of the normalized mode fluctuations. Let us now discuss the influence of not quantum-ergodic sequences on the possible limit distribution. Assume that the rate for the quantum-ergodic states is $S_2 '(E,A)\sim a E^{-\alpha}$ with some power $\alpha$. If we have a subsequence of not quantum-ergodic states such that the total rate is $S_2(E,A)\sim a' E^{-\alpha'}$, we can have two different situations, either $\alpha'=\alpha$, or $\alpha'<\alpha$. In the first case the not quantum-ergodic states have no influence on the limit distribution. In the second case, where the not quantum-ergodic states dominate the rate, the normalization by a rate which is slower than the one of the quantum-ergodic subsequence implies that we have $P(\xi)=\delta(\xi)$.
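The correction $\Xi=\alpha+1$ quoted above follows from a short computation: if the local variance is $v(E)=c\,E^{\alpha}$ and the mean level density is constant ($E_n\sim n$), then $S_2(E)=\frac{1}{N(E)}\sum_{E_n\le E}v(E_n)\sim\frac{c\,E^{\alpha}}{\alpha+1}$, so the local variance is recovered as $(\alpha+1)\,S_2(E)$. A deterministic numerical check (our own sketch):

```python
import numpy as np

# Local variance v(E_n) = E_n^alpha with E_n = n, alpha = -1/2.
alpha = -0.5
n = np.arange(1, 1_000_001).astype(float)
local_var = n**alpha
s2 = np.cumsum(local_var) / n      # global average S_2 up to E = E_N
ratio = local_var[-1] / s2[-1]     # should approach alpha + 1 = 1/2
```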
If one instead normalizes the fluctuations with the rate of the quantum-ergodic subsequence, $S_2'(E,A)$, \begin{equation} \tilde{\xi}_n:=\frac{\langle \psi_n,\text{A} \psi_n\rangle-\overline{\sigma(\text{A})}} {\sqrt{\tilde{S}_2'(E_n,\text{A})}}\,\, , \end{equation} with $\tilde{S}_2'(E_n,\text{A}) =\Xi\, S_2'(E_n,\text{A})$, then the limit distribution is determined only by the quantum-ergodic subsequence, independent of the behavior of the not quantum-ergodic subsequence. To see this we split \eqref{def:limit-distribution} into the different parts \begin{equation} \frac{1}{N(E)}\sum_{E_n\leq E}g(\tilde{\xi}_n)= \frac{N'(E)}{N(E)}\frac{1}{N'(E)}\sum_{E_{n'}\leq E}g(\tilde{\xi}_{n'})+ \frac{N''(E)}{N(E)}\frac{1}{N''(E)}\sum_{E_{n''}\leq E}g(\tilde{\xi}_{n''})\,\, . \end{equation} Since $\lim_{E\to\infty}\frac{N'(E)}{N(E)}=1$, $\lim_{E\to\infty}\frac{N''(E)}{N(E)}=0$ and $\frac{1}{N''(E)}\sum_{E_{n''}\leq E}g(\tilde{\xi}_{n''})\leq \max_{\xi\in\mathbb{R}}g(\xi )$, one gets \begin{equation} \lim_{E\to\infty}\frac{1}{N(E)}\sum_{E_n\leq E}g(\tilde{\xi}_n)= \lim_{E\to\infty}\frac{1}{N'(E)}\sum_{E_{n'}\leq E}g(\tilde{\xi}_{n'})\,\, . \end{equation} We conjecture that the fluctuations of the quantum-ergodic subsequence are Gaussian, and therefore all fluctuations, when normalized with the rate of the quantum-ergodic subsequence, are Gaussian. \section{Summary} The aim of the present paper is to give a detailed study of the rate of quantum ergodicity in Euclidean billiards. We first have given a short introduction to the quantum ergodicity theorems in terms of pseudodifferential operators. We have shown that the quantum ergodicity theorems of Shnirelman, Zelditch, Colin de Verdi\`ere and others are equivalent to a weak form of the semiclassical eigenfunction hypothesis for ergodic systems put forward in \cite{Vor76,Vor77,Ber77b,Ber83}.
That is, the quantum ergodicity theorem is equivalent to the statement that for ergodic systems the Wigner functions $W_n (p,q)$ fulfill \begin{equation} W_{n_j} (p,q )\sim \frac{1}{\operatorname{vol}(\Sigma_{E_{n_j}} )} \, \delta (H(p,q)-E_{n_j}) \,\, , \end{equation} for $E_{n_j}\to\infty$ and $\{n_j\}\subset\mathbb{N}$ a subsequence of density one. Of great importance for the practical applicability of the quantum ergodicity theorem is the question at which rate the quantum mechanical expectation values $\langle \psi_n , A \psi_n \rangle$ tend to their mean value $\overline{\sigma(A)}$. Different arguments were presented previously in favor of an expected rate of quantum ergodicity $S_1(E,\text{A}) = O(E^{-1/4+\varepsilon})$, for all $\varepsilon>0$, in the case of strongly chaotic systems. In section \ref{sec:rate-of-quantum-ergodicity} we discussed the influence of not quantum-ergodic subsequences on the rate. If their counting function increases sufficiently fast, they can dominate the behavior of $S_1(E,\text{A})$ asymptotically. Together with results from \cite{BaeSchSti97a} for the number of bouncing ball modes in certain billiards it follows that one can find for arbitrary $\delta>0$ an ergodic billiard for which $S_1(E,\text{A})=O(E^{-\delta})$. That is, the quantum ergodicity theorem gives a sharp bound, which cannot be improved without additional assumptions on the system. We furthermore developed a simple but powerful model for the behavior of $S_1(E,\text{A})$ in the presence of not quantum-ergodic eigenfunctions, whose main ingredient is that the quantum-ergodic eigenfunctions should obey the optimal rate $E^{-1/4}$. The discussion shows that the total rate of quantum ergodicity can be strongly influenced by those exceptional subsequences. Not only can they cause the rate to be much slower than $E^{-1/4}$; they can also lead to a grossly different behavior of $S_1(E,A)$ at low and intermediate energies compared to high energies.
The numerical investigations are carried out for three types of Euclidean billiards, the stadium billiard (with different parameters), the cosine and the cardioid billiard. The results are based on 2000 eigenfunctions for the cosine billiard, and up to 6000 eigenfunctions for the stadium and the cardioid billiard. As observables we have used characteristic functions of different domains in position space and also a class of observables in momentum space, considered here for the first time. It turns out that the rate of quantum ergodicity in position space is in good agreement with a power law decay, $S_1(E,\text{A}) \sim E^{-1/4+\varepsilon}$. The difference $\varepsilon$ between the exponent and $1/4$ is found to be small for several domains and systems. However we also find a number of significant examples showing a slow rate (i.e.\ $\varepsilon>0$ and not small). These are discussed in detail and can be attributed to subsequences of localized eigenfunctions. For the cosine billiard we find that the rate agrees well with the expected rate, in particular for the small domains. However asymptotically the rate has to obey $S_1(E,A) \sim E^{-1/10}$, because the counting function of the bouncing ball modes increases as $E^{9/10}$. The asymptotic regime for the rate lies far beyond any presently computable number of energy levels. By incorporating the knowledge on the counting function obtained from our previous work, we tested our model \eqref{mod-rate} for the rate successfully for all the considered domains. For the stadium billiard the situation is more complicated: here the counting function of the bouncing ball modes increases as $E^{3/4}$ and therefore, as discussed in sec.~\ref{sec:rate-of-quantum-ergodicity}, cannot influence the rate. However, we find for the stadium billiard that the rate is for several domains in position space much slower than expected. 
After discussing and testing several possibilities, our explanation for this observation is that in the stadium billiard there exists a much larger subsequence of eigenfunctions with an enhanced probability density in the rectangular part of the billiard than just the bouncing ball modes. They nevertheless have density zero, but their counting function increases faster than $E^{3/4}$. Of course, we cannot decide whether this subsequence has a quantum limit different from the Liouville measure, or whether it is a quantum-ergodic subsequence with an exceptionally slow rate. For the cardioid billiard we also have domains for which the rate is proportional to $E^{-1/4}$. But we also find significant exceptions; in particular for domain $D_3$ the rate is much slower, and this can be attributed to a number of eigenstates which show localization along the unstable periodic orbit $\overline{AB}$. For the cardioid billiard we also tested the result from \cite{EckFisKeaAgaMaiMue95}, eq.~\eqref{eq:rate-hypsyst}, for the domains $D_4$ and $D_5$, for which the rate is closest to the optimal rate. However the semiclassical result does not agree with our numerical results for the rate. It would be interesting to study this in more detail. From our numerical results we obtain the following general picture: In the studied systems there is a quantum-ergodic subsequence of density one whose rate is $S'_1(E,\text{A}) = O(E^{-1/4+\varepsilon})$. If one observes a slower rate of $S_1(E,\text{A})$ by using all eigenfunctions, this is caused by a subsequence of density zero, whose counting function increases faster than $E^{3/4}$. These exceptional eigenfunctions show localization effects and probably tend to some non-ergodic quantum limit. However we cannot rule out the possibility that they are quantum-ergodic but with a much slower rate than the majority of eigenfunctions. We have found furthermore an effect due to the boundary conditions.
For domains lying next to the boundary we observed that the rate may be considerably faster at low energies. The qualitative explanation of this phenomenon is that the probability density of the eigenstates shows enhanced fluctuations near the boundary because of the boundary conditions. Using an observable depending only on the momentum, we also studied quantum ergodicity in momentum space, which to our knowledge is done here for the first time. We find that in general the rate of quantum ergodicity is of the same magnitude as for the large domains in position space. Furthermore the oscillations of $S_1(E,A)$ are larger in momentum space, which might indicate that one has to go to higher energies in momentum space than in position space. We also studied the distribution of the suitably normalized fluctuations of $\langle \psi_n,A\psi_n\rangle -\overline{\sigma(A)}$, see eq.~\eqref{eq:norm-fluc}, both for operators in position space and in momentum space. For the observable $A=\chi_{D_4}-\chi_{D_5}$ in the case of the cardioid billiard we find very good agreement with a Gaussian normal distribution, and for observables depending only on the momentum good agreement is found as well. However for the stadium billiard (and also for domain $D_3$ of the cardioid billiard) we clearly find that again subsequences of not quantum-ergodic states may have a considerable influence. If they dominate $S_2(E,A)$, the distribution will tend to a delta function due to the normalization by $\sqrt{S_2(E,A)}$. But when normalizing instead with the rate of the quantum-ergodic states, $\tilde{S}_2' (E,\text{A})$, we expect a universal Gaussian distribution of the fluctuations. As possible investigations for the future it seems very interesting to study whether the expression given in \cite{EckFisKeaAgaMaiMue95} for the rate in terms of the classical correlation function can describe our numerical results.
In particular for the cardioid billiard a more detailed investigation along these lines seems promising, as this system is the most ``generic'' one of the three studied systems and we find both the optimal rate and also clear deviations. The present paper also shows that a detailed understanding of the phenomenon of scarred eigenfunctions is necessary, because these clearly affect the rate of quantum ergodicity. \vspace{2ex} {\bf Acknowledgments} \vspace{1ex} We would like to thank Dr.\ R.\ Aurich, Dr.\ J.\ Bolte, T.\ Hesse, Dr.\ M.\ Sieber and Prof.\ Dr.\ F.\ Steiner for useful discussions and comments. Furthermore we are grateful to Prof.\ Dr.\ M.\ Robnik and Dr.\ T.\ Prosen for the kind provision of the eigenvalues of the cardioid billiard. Fig.~\ref{fig:wfk-sum-stadion}a) has been visualized using {\tt Geomview} from {\it The Geometry Center} of the University of Minnesota and then rendered using {\tt Blue Moon Rendering Tools} written by L.I.~Gritz. A.B.\ acknowledges support by the Deutsche Forschungsgemeinschaft under contract No.\ DFG-Ste 241/7-2. \vspace{1ex} \begin{appendix} \section*{Appendix} \section{Kohn-Nirenberg quantization}\label{app:relation-to-Kohn-Nirenberg} In the mathematical literature one often prefers a different quantization procedure, sometimes called the Kohn-Nirenberg quantization \cite{Hoe85a,Fol89}, and the literature on quantum ergodicity often refers to this quantization procedure. To the symbol $a\in S^m(\mathbb{R}^2\times\Omega)$ one associates the operator \begin{equation} \text{Op}^{\text{KN}} [a]f(q):=\frac{1}{(2\pi)^2} \int\limits_{\mathbb{R}^2} \text{e}^{\text{i} pq}a(p,q)\hat{f}(p)\, \text{d} p\,\, , \end{equation} where $\hat{f}(p):=\int_{\Omega}\text{e}^{-\text{i} qp}f(q)\, \text{d} q$ is the Fourier transform of $f$.
The principal symbol is defined in the same way as before, i.e., if $a\sim\sum_{k=0}^{\infty}a_{m-k}$, then the leading term $a_m$ is called the principal symbol, $\sigma^{\text{KN}}(\text{Op}^{\text{KN}}[a])=a_m$. The usual quantum ergodicity theorem is now the same theorem as we have stated it, but with the Kohn-Nirenberg principal symbol $\sigma^{\text{KN}}$ instead of the principal symbol corresponding to the Weyl symbol which we have used. But it is well known, see \cite{Fol89, Hoe85a}, that if $a\in S^m(\mathbb{R}^n\times\mathbb{R}^n)$, then the Weyl symbol of the Kohn-Nirenberg operator belongs to the same symbol space, $\operatorname{W} [\text{Op}^{\text{KN}} [a]]\in S^m(\mathbb{R}^n\times\mathbb{R}^n)$, and that the principal symbol coincides with the Kohn-Nirenberg principal symbol, \begin{equation} \sigma (\text{Op}^{\text{KN}} [a])= \sigma^{\text{KN}}(\text{Op}^{\text{KN}}[a])\,\, . \end{equation} Therefore the two formulations of the quantum ergodicity theorem are equivalent. \section{Generalizations of the quantum ergodicity theorem} \label{app:generalizations-of-the-qet} Assume we have given a quantum limit $\mu _k$ on $\Sigma_1$, that is we have a subsequence of eigenfunctions $\{\psi_{n_j}\}_{j\in\mathbb{N}}$, such that \begin{equation}\label{def:qlim} \lim_{j\to\infty}\langle \psi_{n_j}, A\psi_{n_j}\rangle =\int\limits_{\Sigma_1} \mu _k (p,q)\sigma (A)(p,q)\, \text{d}\mu \,\, , \end{equation} for all $A\in S^0_{\text{cl}}(\Omega )$. We want to discuss the lift of $\mu _k$ from $\Sigma_1$ to the whole phase space. To this end we express the expectation values for an operator of arbitrary order $m\in\mathbb{R}$ by the expectation values of an operator of order zero. 
This can be achieved by using the fact that for every $m\in\mathbb{R}$, $(-\Delta )^{\frac{m}{2}}$ is a pseudodifferential operator of order $m$ with principal symbol $\sigma \left((-\Delta )^{\frac{m}{2}}\right) = \left(\sigma (-\Delta )\right)^{\frac{m}{2}}=H(p,q)^{\frac{m}{2}}$, see \cite{See67,See69,Tay81}. By multiplying an operator $A\in S^m(\mathbb{R}^n\times\mathbb{R}^n)$ of order $m$ with the operator $(-\Delta )^{-\frac{m}{2}}$, which is of order $-m$, we get an operator $(-\Delta )^{-\frac{m}{2}}A\in S^0(\mathbb{R}^n\times\mathbb{R}^n)$ of order zero. For the expectation values of $A$ we therefore have \begin{equation}\label{expval:ordm} \langle \psi_{n_j}, A\psi_{n_j}\rangle = E_{n_j}^{\frac{m}{2}}\langle \psi_{n_j},(-\Delta )^{-\frac{m}{2}}A \psi_{n_j}\rangle \,\, , \end{equation} and on the right hand side we have an operator of order zero. The principal symbol of $(-\Delta )^{-\frac{m}{2}}A$ is according to eq.~(\ref{prod}) given by $\sigma ((-\Delta )^{-\frac{m}{2}})\sigma (A)= H(p,q)^{-\frac{m}{2}}\sigma (A)$, and since by definition $H(p,q)=1$ on $\Sigma_1$ we obtain from (\ref{def:qlim}) and (\ref{expval:ordm}) \begin{equation}\label{qlim:mord} \lim_{j\to\infty}E_{n_j}^{-\frac{m}{2}} \langle \psi_{n_j}, A\psi_{n_j}\rangle = \int\limits_{\Sigma_1} \mu _k (p,q)\,\sigma (A)(p,q)\; \text{d}\mu \,\, . \end{equation} Thus eq.~(\ref{qlim:mord}) provides the extension of the quantum ergodicity theorem to pseudodifferential operators of arbitrary order $m$. \section{Connection to the semiclassical eigenfunction hypothesis} \label{app:connection-to-sc-eigenfunction-hypothesis} By introducing the definition of the Liouville measure $\mu$, equation (\ref{qlim:mord}) can be written as \begin{equation} \langle \psi_{n_j}, A\psi_{n_j}\rangle \sim E_{n_j}^{\frac{m}{2}} \iint\limits \mu _k (p,q)\sigma (A)(p,q)\frac{\delta (H(p,q)-1)}{\operatorname{vol} (\Sigma_1 )}\; \text{d} p\, \text{d} q \,\, .
\end{equation} If one uses the homogeneity of $\sigma (A)$, i.e.\ $E_{n_j}^{\frac{m}{2}}\sigma (A)(p,q)=\sigma (A)(E_{n_j}^{\frac{1}{2}}p,q)$, and performs a change of the momentum coordinates from $p$ to $E_{n_j}^{-\frac{1}{2}}p$ one obtains \begin{equation} \label{eq:blubb} \begin{split} \langle \psi_{n_j}, A\psi_{n_j}\rangle &\sim \iint\limits \mu _k (E_{n_j}^{-\frac{1}{2}}p,q)\sigma (A)(p,q) \, \frac{\delta (H(E_{n_j}^{-\frac{1}{2}}p,q)-1)}{\operatorname{vol} (\Sigma_1 )}E_{n_j}^{-\frac{n}{2}}\; \text{d} p\, \text{d} q \\ &=\iint\limits \mu _k (E_{n_j}^{-\frac{1}{2}}p,q)\sigma (A)(p,q) \, \frac{\delta (H(p,q)-E_{n_j})}{\operatorname{vol} (\Sigma_1 )E_{n_j}^{\frac{n}{2}-1}} \;\text{d} p\, \text{d} q\,\, , \\ \end{split} \end{equation} where furthermore the homogeneity properties of $H(p,q)$ and of the delta function have been used. In terms of the Wigner functions $W_{n_j}$ corresponding to $\psi_{n_j}$ eq.~(\ref{eq:blubb}) reads \begin{equation} \iint\limits\sigma (A)(p,q) \,W_{n_j}(p,q)\, \text{d} p\, \text{d} q \sim \iint\limits\sigma (A)(p,q)\,\mu _k (E_{n_j}^{-\frac{1}{2}}p,q) \,\frac{\delta (H(p,q)-E_{n_j})}{\operatorname{vol} (\Sigma_1 )E_{n_j}^{\frac{n}{2}-1}} \;\text{d} p\, \text{d} q\,\, , \end{equation} where $\sigma (A)(p,q)$ can be any function homogeneous in $p$ of degree $m$, for some arbitrary $m\in \mathbb{R}$. But since the set of all polynomials in $p$ is already dense in $C^{\infty}(\mathbb{R}^2\times \Omega)$ the set of homogeneous functions in $p$ is dense in $C^{\infty}(\mathbb{R}^2\times \Omega)$, too. Therefore one gets \begin{equation} W_{n_j}(p,q)\sim \mu _k (E_{n_j}^{-\frac{1}{2}}p,q)\, \frac{\delta (H(p,q)-E_{n_j})}{\operatorname{vol} (\Sigma_1 )E_{n_j}^{\frac{n}{2}-1}}\,\, . 
\end{equation} Note that $\operatorname{vol} (\Sigma_1 )E_{n_j}^{\frac{n}{2}-1}=\operatorname{vol} (\Sigma_{E_{n_j}})$, and if we extend $\mu _k(p,q)$ from $\Sigma_1$ to the whole phase space by requiring it to be homogeneous of degree zero, $\mu _k(p,q)=\mu _k(p/\sqrt{H(p,q)},q)$, then we can finally write \begin{equation} W_{n_j}(p,q)\sim \mu _k(p,q) \frac{\delta (H(p,q)-E_{n_j})}{\operatorname{vol} (\Sigma_{E_{n_j}})} \quad \text{for}\,\,j\to\infty \end{equation} for a subsequence $\{n_j\}\subset\mathbb{N}$ of density one. This shows that the quantum ergodicity theorem is equivalent to the semiclassical eigenfunction hypothesis for ergodic systems, for a subsequence of density one. \end{appendix} \renewcommand{\baselinestretch}{0.975} \small\normalsize
\section*{Notes}} \newcommand{\Exer}{ \bigskip\markright{EXERCISES} \section*{Exercises}} \newcommand{D_G}{D_G} \newcommand{{\rm S5}_m}{{\rm S5}_m} \newcommand{{\rm S5C}_m}{{\rm S5C}_m} \newcommand{{\rm S5I}_m}{{\rm S5I}_m} \newcommand{{\rm S5CI}_m}{{\rm S5CI}_m} \newcommand{Mart\'\i n\ }{Mart\'\i n\ } \newcommand{\end{enumerate}\setlength{\itemsep}{-\parsep}}{\end{enumerate}\setlength{\itemsep}{-\parsep}} \newcommand{\setlength{\itemsep}{0pt}\begin{itemize}}{\setlength{\itemsep}{0pt}\begin{itemize}} \newcommand{\end{description}\setlength{\itemsep}{-\parsep}}{\end{description}\setlength{\itemsep}{-\parsep}} \newcommand{\end{itemize}\setlength{\itemsep}{-\parsep}}{\end{itemize}\setlength{\itemsep}{-\parsep}} \newtheorem{fthm}{Theorem} \newtheorem{flem}[fthm]{Lemma} \newtheorem{fcor}[fthm]{Corollary} \newcommand{\slidehead}[1]{ \eject \Huge \begin{center} {\bf #1 } \end{center} \vspace{.5in} \LARGE} \newcommand{_G}{_G} \newcommand{{\bf if}}{{\bf if}} \newcommand{{\tt \ at\_time\ }}{{\tt \ at\_time\ }} \newcommand{\skew6\hat\ell\,}{\skew6\hat\ell\,} \newcommand{{\bf then}}{{\bf then}} \newcommand{{\bf until}}{{\bf until}} \newcommand{{\bf else}}{{\bf else}} \newcommand{{\bf repeat}}{{\bf repeat}} \newcommand{{\cal A}}{{\cal A}} \newcommand{{\cal E}}{{\cal E}} \newcommand{{\cal F}}{{\cal F}} \newcommand{{\cal I}}{{\cal I}} \newcommand{{\cal N}}{{\cal N}} \newcommand{{\cal R}}{{\cal R}} \newcommand{{\cal S}}{{\cal S}} \newcommand{B^{\scriptscriptstyle \cN}}{B^{\scriptscriptstyle {\cal N}}} \newcommand{B^{\scriptscriptstyle \cS}}{B^{\scriptscriptstyle {\cal S}}} \newcommand{{\cal W}}{{\cal W}} \newcommand{E_G}{E_G} \newcommand{C_G}{C_G} \newcommand{C_\cN}{C_{\cal N}} \newcommand{E_\cS}{E_{\cal S}} \newcommand{E_\cN}{E_{\cal N}} \newcommand{C_\cS}{C_{\cal S}} \newcommand{\mbox{{\it attack}}}{\mbox{{\it attack}}} \newcommand{\mbox{{\it attacking}}}{\mbox{{\it attacking}}} \newcommand{\mbox{{\it delivered}}}{\mbox{{\it delivered}}} \newcommand{\mbox{{\it 
exist}}}{\mbox{{\it exist}}} \newcommand{\mbox{{\it decide}}}{\mbox{{\it decide}}} \newcommand{{\it clean}}{{\it clean}} \newcommand{{\it diff}}{{\it diff}} \newcommand{{\it failed}}{{\it failed}} \newcommand\eqdef{=_{\rm def}} \newcommand{\mbox{{\it false}}}{\mbox{{\it false}}} \newcommand{D_{\cN}}{D_{{\cal N}}} \newcommand{D_{\cS}}{D_{{\cal S}}} \newcommand{{\it time}}{{\it time}} \newcommand{f}{f} \newcommand{{\rm K}_n}{{\rm K}_n} \newcommand{{\rm K}_n^C}{{\rm K}_n^C} \newcommand{{\rm K}_n^D}{{\rm K}_n^D} \newcommand{{\rm T}_n}{{\rm T}_n} \newcommand{{\rm T}_n^C}{{\rm T}_n^C} \newcommand{{\rm T}_n^D}{{\rm T}_n^D} \newcommand{{\rm S4}_n}{{\rm S4}_n} \newcommand{{\rm S4}_n^C}{{\rm S4}_n^C} \newcommand{{\rm S4}_n^D}{{\rm S4}_n^D} \newcommand{{\rm S5}_n}{{\rm S5}_n} \newcommand{{\rm S5}_n^C}{{\rm S5}_n^C} \newcommand{{\rm S5}_n^D}{{\rm S5}_n^D} \newcommand{{\rm KD45}_n}{{\rm KD45}_n} \newcommand{{\rm KD45}_n^C}{{\rm KD45}_n^C} \newcommand{{\rm KD45}_n^D}{{\rm KD45}_n^D} \newcommand{{\cal L}_n}{{\cal L}_n} \newcommand{{\cal L}_n^C}{{\cal L}_n^C} \newcommand{{\cal L}_n^D}{{\cal L}_n^D} \newcommand{{\cal L}_n^{CD}}{{\cal L}_n^{CD}} \newcommand{{\cal M}_n}{{\cal M}_n} \newcommand{{\cal M}_n^r}{{\cal M}_n^r} \newcommand{\M_n^{\mbox{\scriptsize{{\it rt}}}}}{{\cal M}_n^{\mbox{\scriptsize{{\it rt}}}}} \newcommand{\M_n^{\mbox{\scriptsize{{\it rst}}}}}{{\cal M}_n^{\mbox{\scriptsize{{\it rst}}}}} \newcommand{\M_n^{\mbox{\scriptsize{{\it elt}}}}}{{\cal M}_n^{\mbox{\scriptsize{{\it elt}}}}} \renewcommand{\mbox{${\cal L}_n$}}{\mbox{${\cal L}_{n} (\Phi)$}} \renewcommand{\mbox{${\cal L}_n^D$}}{\mbox{${\cal L}_{n}^D (\Phi)$}} \newcommand{{\rm S5}_n^{DU}}{{\rm S5}_n^{DU}} \newcommand{{\cal L}_n^D}{{\cal L}_n^D} \newcommand{{\rm S5}_n^U}{{\rm S5}_n^U} \newcommand{{\rm S5}_n^{CU}}{{\rm S5}_n^{CU}} \newcommand{{\cal L}^{U}_n}{{\cal L}^{U}_n} \newcommand{{\cal L}_n^{CU}}{{\cal L}_n^{CU}} \newcommand{{\cal L}_n^{DU}}{{\cal L}_n^{DU}} \newcommand{{\cal L}_n^{CU}}{{\cal L}_n^{CU}} 
\newcommand{{\cal L}_n^{DU}}{{\cal L}_n^{DU}} \newcommand{{\cal L}_n^{\it CDU}}{{\cal L}_n^{\it CDU}} \newcommand{\C_n}{{\cal C}_n} \newcommand{\I_n^{oa}(\Phi')}{{\cal I}_n^{oa}(\Phi')} \newcommand{\C_n^{oa}(\Phi)}{{\cal C}_n^{oa}(\Phi)} \newcommand{\C_n^{oa}}{{\cal C}_n^{oa}} \newcommand{OA$_{n,\Phi}$}{OA$_{n,\Phi}$} \newcommand{OA$_{n,{\Phi}}'$}{OA$_{n,{\Phi}}'$} \newcommand{U}{U} \newcommand{\, U \,}{\, U \,} \newcommand{{\rm a.m.p.}}{{\rm a.m.p.}} \newcommand{\commentout}[1]{} \newcommand{\msgc}[1]{ @ #1 } \newcommand{{\C_n^{\it amp}}}{{{\cal C}_n^{\it amp}}} \newcommand{\begin{itemize}}{\begin{itemize}} \newcommand{\end{itemize}}{\end{itemize}} \newcommand{\begin{enumerate}}{\begin{enumerate}} \newcommand{\end{enumerate}}{\end{enumerate}} \newcommand{\stackrel{r}{\rightarrow}}{\stackrel{r}{\rightarrow}} \newcommand{\mbox{\it ack}}{\mbox{\it ack}} \newcommand{\G_0}{{\cal G}_0} \newcommand{\itemsep 0pt\partopsep 0pt}{\itemsep 0pt\partopsep 0pt} \def\seealso#1#2{({\em see also\/} #1), #2} \newcommand{\cents}{\hbox{\rm \rlap{/}c}} \newcommand{\mathcal{ONL}_n}{\mathcal{ONL}_n} \newcommand{\mathcal{ONL}_n}{\mathcal{ONL}_n} \newcommand{\mathcal{ONL}_n^-}{\mathcal{ONL}_n^-} \newcommand{\mathcal{ONL}}{\mathcal{ONL}} \newcommand{{e'}}{{e'}} \newcommand{{e^*}}{{e^*}} \newcommand{{e^\bullet}}{{e^\bullet}} \newcommand{\mathitbf{N}_j}{\mathitbf{N}_j} \newcommand{\mathbf{A5}_n}{\mathbf{A5}_n} \newcommand{{AX}_n}{{AX}_n} \newcommand{\models^{kj}}{\models^{kj}} \newcommand{{\textsf{K45}}_{n}}{{\textsf{K45}}_{n}} \newcommand{{e_a^k}^w, {e_b^j}^w, w}{{e_a^k}^w, {e_b^j}^w, w} \newcommand{\ebullet_b^{k-1}}% {e_b^{k-1}^\bullet}{{e^\bullet}_b^{k-1} \newcommand{e_b^{j-1}}{e_b^{j-1}} \newcommand{e_b^k}%{e_{(b,k)}}{e_b^k \newcommand{\models^{k+1}}{\models^{k+1}} \newcommand{\estar_a^{k+1}} %{e_{(a,k+1)}^*}{{e^*}_a^{k+1}} \newcommand{e_a^{k-1}} %{e_{(a,k-1)}}{e_a^{k-1}} \newcommand{S_{k''}}{S_{k''}} \newcommand{e_{b}^{k+1}}{e_{b}^{k+1}} \newcommand{S_{k+1}}{S_{k+1}} 
\newcommand{e_a^{k+1}}%{e_{(a,k+1)}}{e_a^{k+1} \newcommand{\estar_b^k}{{e^*}_b^k} \newcommand{S_{k-1}}{S_{k-1}} \newcommand{\( k \)-structure}{\( k \)-structure} \newcommand{S_j}{S_j} \newcommand{e^k}{e^k} \newcommand{\kminstr}{S_{k-1}} \newcommand{\noindent}{\noindent} \newcommand{$\mathcal{ES}$}{$\mathcal{ES}$} \newcommand{$\mathcal{GL}$}{$\mathcal{GL}$} \newcommand{$\mathcal{GL}$}{$\mathcal{GL}$} \newcommand{$(k,j)$-model}{$(k,j)$-model} \newcommand{\langle}{\langle} \newcommand{\rangle}{\rangle} \newcommand{e_a^{k-2}} %{e_{(a,k-2)}}{e_a^{k-2}} \newcommand{M^c}{M^c} \newcommand{\mathcal{K}^c_a}{\mathcal{K}^c_a} \newcommand{\mathcal{K}^c_i}{\mathcal{K}^c_i} \newcommand{\mathcal{K}^c_b}{\mathcal{K}^c_b} \newcommand{e_b^{k-2}}%{e_{(b,k-2)}}{e_b^{k-2} \newcommand{\textit{obj}_a}{\textit{obj}_a} \newcommand{w_{\Sigma}}{w_{\Sigma}} \newcommand{\mathitbf{O}}{\mathitbf{O}} \newcommand{\mathitbf{O}_i}{\mathitbf{O}_i} \newcommand{\aknow}{\mathitbf{L}_{a}} \newcommand{\aoknow}{\mathitbf{O}_{a}} \newcommand{\bknow}{\mathitbf{L}_{b}} \newcommand{\mathitbf{O}_{b}}{\mathitbf{O}_{b}} \newcommand{\mathcal{M}^{1,j}}{\mathcal{M}^{1,j}} \newcommand{\mathitbf{L}_i}{\mathitbf{L}_i} \newcommand{\mathitbf{L}_i}{\mathitbf{L}_i} \newcommand{\mathitbf{L}}{\mathitbf{L}} \newcommand{{\mathitbf{N}}}{{\mathitbf{N}}} \newcommand{\mathitbf{N}_i}{\mathitbf{N}_i} \newcommand{\mathitbf{N}_a}{\mathitbf{N}_a} \newcommand{\mathitbf{L}_a}{\mathitbf{L}_a} \newcommand{\mathitbf{N}_b}{\mathitbf{N}_b} \newcommand{{\mathcal{W}}_p}{{\mathcal{W}}_p} \newcommand{{\mathcal{W}}_{\bar{p}}}{{\mathcal{W}}_{\bar{p}}} \newcommand{e_b^1}% {e_{(b,1)}}{e_b^1 \newcommand{\varphi_{a0}}{\varphi_{a0}} \newcommand{\varphi_{aj}}{\varphi_{aj}} \newcommand{\varphi_{a1}}{\varphi_{a1}} \newcommand{\varphi_{a2}}{\varphi_{a2}} \newcommand{\psi_{a0}}{\psi_{a0}} \newcommand{\psi_{aj}}{\psi_{aj}} \newcommand{\psi_{a1}}{\psi_{a1}} \newcommand{\psi_{a2}}{\psi_{a2}} \newcommand{\oasubj}{\textit{subj}_a} 
\newcommand{\oaObj}{{\bar{O}\textit{bj}}_a} \newcommand{\canaObj}{\textit{Obj}_a} \newcommand{\textit{Sat}}{\textit{Sat}} \newcommand{e_b^{k''}} %{e_{(b,k'')}}{e_b^{k''}} \newcommand{\ebullet_b^{k''}}{{e^\bullet}_b^{k''}} \newcommand{\onl^+}{\mathcal{ONL}_n^+} \newcommand{\axioms^+}{{AX}_n^+} \newcommand{\onl^t}{\mathcal{ONL}_n^t} \newcommand{{\onl^+}}{{\mathcal{ONL}_n^+}} \newcommand{{\onl^+}^t}{{\mathcal{ONL}_n^+}^t} \newcommand{\onl^{k+1}}{\mathcal{ONL}_n^{t+1}} \newcommand{\axioms^t}{{AX}_n^t} \newcommand{\models^{k}}{\models^{k}} \newcommand{{k-1}}{{k-1}} \newcommand{\onl^{k-1}}{\mathcal{ONL}_n^{k-1}} \newcommand{\axioms^{k-1}}{{AX}_n^{k-1}} \newcommand{\axioms^{t+1}}{{AX}_n^{t+1}} \newcommand{\mathcal{W}}{\mathcal{W}} \newcommand{\textit{subj}_i^+}{\textit{subj}_i^+} \newcommand{\textit{obj}_i^+}{\textit{obj}_i^+} \newcommand{\textit{subj}_a^+}{\textit{subj}_a^+} \newcommand{\textit{obj}_a^+}{\textit{obj}_a^+} \newcommand{\textit{subj}_b^+}{\textit{subj}_b^+} \newcommand{\textit{obj}_b^+}{\textit{obj}_b^+} \newcommand{{O}{bj}^+_i}{{O}{bj}^+_i} \newcommand{{O}{bj}^+_a}{{O}{bj}^+_a} \newcommand{{O}{bj}^+_b}{{O}{bj}^+_b} \newcommand{\mathbf{M}_i}{\mathbf{M}_i} \newcommand{e_b^{j'}}{e_b^{j'}} \newcommand{e_a^1}% {e_{(a,1)}}{e_a^1 \newcommand{e^{i}_{(b,k-1)}}{e^{i}_{(b,k-1)}} \newcommand{e^{\bar{i}}_{(b,k-1)}}{e^{\bar{i}}_{(b,k-1)}} \newcommand{\estar_b^{k-1}} %{e^{*}_{(b,k-1)}}{{e^*}_b^{k-1}} \newcommand{e_a^{k'}}%{e_{(a,k')}}{e_a^{k'} \newcommand{M^*}{M^*} \newcommand{M^{\bullet}}{M^{\bullet}} \newcommand{\estar_a^k}{{e^*}_a^k} \newcommand{\ebullet_a^k}{{e^\bullet}_a^k} \newcommand{\estar_b^j}{{e^*}_b^j} \newcommand{\ebullet_b^j}{{e^\bullet}_b^j} \newcommand{w^{*}}{w^{*}} \newcommand{{w^{\bullet}}}{{w^{\bullet}}} \newcommand{e_a^2}{e_a^2} \newcommand{\mathbb{S}_k}{\mathbb{S}_k} \newcommand{[\![}{\llbracket \newcommand{]\!]}{\rrbracket} \newcommand{\mathitbf{L}_j}{\mathitbf{L}_j} \newcommand{e_{b}^{k'-1}}{e_b^{k'-1}} \newcommand{e_b^{K-1}}{e_b^{K-1}} 
\newcommand{e_b^{K'-1}}{e_b^{K'-1}} \newcommand{e_a^{K'}}{e_a^{K'}} \newcommand{e_a^K}{e_a^K} \newcommand{\textit{HL}~}{\textit{HL}~} \newcommand{\bigvee \gamma_a}{\bigvee \gamma_a} \newcommand{\bigvee \gamma_b}{\bigvee \gamma_b} \newcommand{e_a^{j+1}}{e_a^{j+1}} \newcommand{e_b^{2k}}{e_b^{2k}} \newcommand{\ebullet^{2k}_b}{{e^\bullet}^{2k}_b} \newcommand{\textit{Val}}{\textit{Val}} \newcommand{\mathbb{S}_{k-1}}{\mathbb{S}_{k-1}} \newcommand{e_a^k}{e_a^k} \newcommand{e_b^{k-1}}{e_b^{k-1}} \newcommand{\emph{wrt.}~}{\emph{wrt.}~} \newcommand{\mathcal{N}}{\mathcal{N}} \newcommand{{\dl w \dr}}{{[\![ w ]\!]}} \newcommand{{\dl w' \dr}}{{[\![ w' ]\!]}} \newcommand{{e_{\dl w \dr}}}{{e_{[\![ w ]\!]}}} \newcommand{\edash_b^{k-1}}{{e'}_b^{k-1}} \newcommand{{e_{\dl w' \dr}}}{{e_{[\![ w' ]\!]}}} \newcommand{e_{\dl w'' \dr}}{e_{[\![ w'' ]\!]}} \newcommand{{\ew}_a^{k'}}{{{e_{\dl w \dr}}}_a^{k'}} \newcommand{{\ew}_a^k}{{{e_{\dl w \dr}}}_a^k} \newcommand{{\ew}_a^{j+1}}{{{e_{\dl w \dr}}}_a^{j+1}} \newcommand{{\ew}_a^{k+1}}{{{e_{\dl w \dr}}}_a^{k+1}} \newcommand{{\ew}_a^{K}}{{{e_{\dl w \dr}}}_a^{K}} \newcommand{\estar_a^{j+1}}{{e^*}_a^{j+1}} \newcommand{{\ew}_b^j}{{{e_{\dl w \dr}}}_b^j} \newcommand{{\ew}_b^J}{{{e_{\dl w \dr}}}_b^J} \newcommand{{\ewdash}_a^k}{{{e_{\dl w' \dr}}}_a^k} \newcommand{{\ewdash}_b^j}{{{e_{\dl w' \dr}}}_b^j} \newcommand{{\ewdash}_b^{k-1}}{{{e_{\dl w' \dr}}}_b^{k-1}} \newcommand{{\ewdashdash}_a^{k-2}}{{e_{\dl w'' \dr}}_a^{k-2}} \newcommand{e_b^j}{e_b^j} \newcommand{\eak, \ebj, w}{e_a^k, e_b^j, w} \newcommand{\hgap}{\hspace{0.5cm}} \newcommand{\todo}[1]{\marginpar{\tiny\textcolor{orange}{#1}}} \section{Introduction} \label{sec:introduction} Levesque's notion of only-knowing is a single agent monotonic logic that was proposed with the intention of capturing certain types of nonmonotonic reasoning. Levesque (\citeyear{77758}) already showed that there is a close connection to Moore's (\citeyear{2781}) autoepistemic logic (AEL). 
Recently, Lakemeyer and Levesque (\citeyear{lakemeyer2005only}) showed that only-knowing can be adapted to capture default logic as well. The main benefit of using Levesque's logic is that, via simple semantic arguments, nonmonotonic conclusions can be reached without the use of meta-logical notions such as fixpoints \cite{330786,levesque2001logic}. Only-knowing is then naturally of interest in a many agent context, since agents capable of non-trivial nonmonotonic behavior should believe other agents to also be equipped with nonmonotonic mechanisms. For instance, if all that Bob knows is that Tweety is a bird and a default that birds typically fly, then Alice, if she knows all that Bob knows, concludes that Bob believes Tweety can fly.\footnote{We use the terms ``knowledge'' and ``belief'' interchangeably in the paper.} Also, the idea of only-knowing a collection of sentences is useful for modeling the beliefs of a knowledge base (KB), since sentences that are not logically entailed by the KB are taken to be precisely those not believed. If many agents are involved and Alice has some beliefs about Bob's KB, then she could capitalize on Bob's knowledge to collaborate on tasks, or to plan a strategy against him. As a logic, Levesque's construction is unique in the sense that, in addition to a classical epistemic operator for belief, he introduces a modality to denote what is \emph{at most} known. This new modality has a subtle relationship to the belief operator that makes extensions to the many agent case non-trivial. Most extensions so far make use of arbitrary Kripke structures, which already discard the simplicity of Levesque's semantics. They also have some undesirable properties, which perhaps warrants some caution in their usage. For instance, in a canonical model (Lakemeyer 1993), certain types of epistemic states cannot be constructed. In another Kripke approach (Halpern 1993), the modalities do not seem to interact in an intuitive manner.
Although an approach by Halpern and Lakemeyer (\citeyear{1029713}) does indeed successfully model multi-agent only-knowing, it forces us to have the semantic notion of validity directly in the language, and it has proof-theoretic constructs in the semantics via maximally consistent sets. Precisely for this reason, that proposal is not natural, and it is matched with a proof theory that has a set of new axioms to deal with these new notions. It is also not clear how one can extend their semantics to the first-order case. Lastly, an approach by Waaler (\citeyear{DBLP:conf/aiml/Waaler04}) avoids such an axiomatization of validity, but the model theory also has problems \cite{DBLP:conf/tark/WaalerS05}. Technical discussions of their semantics are deferred to later sections. The goal of this paper is to show that there is indeed a natural semantics for multi-agent only-knowing for the quantified language with equality. For the propositional subset, there is also a sound and complete axiomatization that faithfully generalizes Levesque's proof theory.\footnote{The proof theory for a quantified language is well known to be \emph{incomplete} for the single agent case. It is also known that any complete axiomatization cannot be \emph{recursive} \cite{204824,levesque2001logic}.} We also differ from Halpern and Lakemeyer in that we do not enrich the language any more than necessary (modal operators for each agent), and we do not make use of canonical Kripke models. And while canonical models, in general, are only workable semantically and cannot be used in practice, our proposal has a computational appeal. We also show that if we do enrich the language with a modal operator for \emph{validity}, but only to establish a common language with \cite{1029713}, then we agree on the set of valid sentences. Finally, we obtain a first-order multi-agent generalization of AEL, defined solely using notions of classical logical entailment and theoremhood.
The rest of the paper is organized as follows. We review Levesque's notions,\footnote{There are other notions of ``all I know'', which will not be discussed here \cite{101989,ben1989all}. Also see \cite{330786}.} and define a semantics with so-called \( k \)-\emph{structures}. We then compare the framework to earlier attempts. Following that, we introduce a sound and complete axiomatization for the propositional fragment. In the last sections, we sketch the multi-agent (first-order) generalization of AEL, and prove that \( k \)-structures and \cite{1029713} agree on valid sentences, for an enriched language. Finally, we conclude. \section{The \( k \)-structures Approach} \label{sec:the_logic} The non-modal part of Levesque's logic\footnote{We name the logic following \cite{1029713} for ease of comparisons later on. It is referred to as \( \mathcal{OL} \) in \cite{204824,levesque2001logic}.} \( \mathcal{ONL} \) consists of standard first-order logic with \( = \) and a countably infinite set of standard names \( \mathcal{N} \).\footnote{More precisely, we have logical connectives \( \lor \), \( \forall \) and \( \neg \). Other connectives are taken to be the usual syntactic abbreviations.} To keep matters simple, function symbols are not considered in this language. We call a predicate other than \( = \), applied to first-order variables or standard names, an \emph{atomic} formula. We write \( \alpha^x_n \) for the result of substituting the standard name \( n \) for the variable \( x \) in \( \alpha \). If all the variables in a formula \( \alpha \) are substituted by standard names, then we call it a \emph{ground} formula. Here, a world is simply a set of ground atoms, and the semantics is defined over the set of all possible worlds \( \mathcal{W} \). The standard names are thus \emph{rigid designators}, and denote precisely the same entities in all worlds.
\( \mathcal{ONL} \) also has two modal operators: \( \mathitbf{L} \) and \( {\mathitbf{N}} \). While \( \mathitbf{L} \alpha \) is to be read as ``at least \( \alpha \) is known'', \( {\mathitbf{N}} \alpha \) is to be read as ``at most \( \neg \alpha \) is known''. A set of possible worlds is referred to as the agent's \emph{epistemic state} \( e \). Defining a model to be the pair \( (e,w) \) for \( w \in \mathcal{W} \), the components of \( \mathcal{ONL} \)'s meaning of truth are: \begin{enumerate} \item \( e,w\models p \) iff \( p \in w \), for ground atoms \( p \), \item \( e,w \models (m = n) \) iff \( m \) and \( n \) are identical standard names, \item \( e, w\models \neg \alpha \) iff \( e,w\not\models \alpha \), \item \( e, w\models \alpha \lor \beta \) iff \( e,w \models \alpha \) or \( e,w\models \beta \), \item \( e,w\models \forall x.~\alpha \) iff \( e,w\models \alpha^x_n \) for all standard names \( n \), \item \( e,w \models \mathitbf{L} \alpha \) iff for all \( w' \in e \), \( e, w' \models \alpha \), and \item \( e,w\models {\mathitbf{N}} \alpha \) iff for all \( w' \not\in e \), \( e,w' \models \alpha \). \end{enumerate} \noindent The main idea is that \( \alpha \) is (at least) believed iff it is true at all worlds considered possible, while (at most) \( \alpha \) is believed to be false iff it is true at all worlds considered \emph{impossible}. So, an agent is said to only-know \( \alpha \), syntactically expressed as \( \mathitbf{L} \alpha \land {\mathitbf{N}} \neg \alpha \), when the worlds in \( e \) are precisely those where \( \alpha \) is true. Halpern and Lakemeyer (\citeyear{1029713}) underline three features of the semantic framework of \( \mathcal{ONL} \), whose intuitions we wish to preserve in the many agent setting: \begin{enumerate} \item Evaluating \( {\mathitbf{N}}\alpha \) does \emph{not affect} the epistemic possibilities.
Formally, in \( \mathcal{ONL} \), after evaluating formulas of the form \( {\mathitbf{N}}\alpha \), the agent's epistemic state is still given by \( e \). \item The union of the agent's possibilities, which evaluate \( \mathitbf{L} \), and the impossible worlds, which evaluate \( {\mathitbf{N}} \), is \emph{fixed} and \emph{independent} of \( e \), and is the set of all \emph{conceivable} states. Formally, in \( \mathcal{ONL} \), \( \mathitbf{L}\alpha \) is evaluated \emph{wrt.}~ worlds \( w\in e \), and \( {\mathitbf{N}}\alpha \) is evaluated \emph{wrt.}~ worlds \( w\in \mathcal{W} -e \), the union of which is \( \mathcal{W} \). The intuition is that the exact complement of an agent's possibilities is used in evaluating \( {\mathitbf{N}} \). \item Given any set of possibilities, there is always a model where \emph{precisely} this set is the epistemic state. Formally, in \( \mathcal{ONL} \), any subset of \( \mathcal{W} \) can be defined as the epistemic state. \end{enumerate} \noindent Although these notions seem clear enough in the single agent case, generalizing them to the many agent case is non-trivial \cite{1029713}. We shall return to analyze these features shortly. Let us begin by extending the language. Let \( \mathcal{ONL}_n \) be a first-order modal language that enriches the non-modal subset of \( \mathcal{ONL} \) with modal operators \( \mathitbf{L}_i \) and \( \mathitbf{N}_i \) for \( i = a,b \). For ease of exposition, we only have two agents \( a \) (Alice) and \( b \) (Bob). Extensions to more agents are straightforward. We freely use \( \mathitbf{O}_i \), such that \( \mathitbf{O}_i \alpha \) is an abbreviation for \( \mathitbf{L}_i \alpha \land \mathitbf{N}_i \neg \alpha \), and is read as ``all that \( i \) knows is \( \alpha \)''. Objective and subjective formulas are understood as follows.
\begin{definition} The \( i \)-depth of a formula \( \alpha \), denoted \( |\alpha|_i \), is defined inductively as (\( \Box_i \) denotes \( \mathitbf{L}_i \) or \( \mathitbf{N}_i \)): \begin{enumerate} \item \( |\alpha|_i = 1 \) for atoms, \item \( |\neg \alpha|_i = |\alpha|_i \), \item \( |\forall x.~\alpha|_i = |\alpha|_i \), \item \( |\alpha \lor \beta|_i = \textrm{max}(|\alpha|_i, |\beta|_i) \), \item \( |\Box_i \alpha|_i = |\alpha|_i \), \item \( |\Box_j\alpha|_i = |\alpha|_j + 1 \), for \( j \neq i \). \end{enumerate} A formula has depth \( k \) if max(\( a \)-depth, \( b \)-depth) \( = k \). A formula is called \( i \)-objective if all epistemic operators which do not occur within the scope of another epistemic operator are of the form $\Box_j$ for $i\ne j$. A formula is called \( i \)-subjective if every atom is in the scope of an epistemic operator and all epistemic operators which do not occur within the scope of another epistemic operator are of the form $\Box_i.$ \end{definition} For example, a formula of the form \( \aknow \bknow \aknow p \lor \bknow q\) has a depth of \( 4 \), an \( a \)-depth of \( 3 \) and a \( b \)-depth of \( 4 \). \( \bknow q \) is both \( b \)-subjective and \( a \)-objective. A formula is called \emph{objective} if it does not mention any modal operators. A formula is called \emph{basic} if it does not mention any \( \mathitbf{N}_i \) for \( i = a,b \). We now define a notion of epistemic states using \( k \)-\emph{structures}. The main intuition is that we keep the worlds Alice considers possible separate from the worlds she believes Bob to consider possible, up to depth \( k \). \begin{definition} A \( k \)-structure{} (\( k \geq 1 \)), say \( e^k \), for an agent is defined inductively as: \begin{itemize} \item[$-$] \( e^1 \subseteq \mathcal{W} \times\{ \{\}\}, \) \item[$-$] \( e^k \subseteq \mathcal{W} \times \mathbb{E}^{k-1}, \) where \( \mathbb{E}^{m} \) is the set of all \( m\)-structures.
\end{itemize} \end{definition} \noindent An \( e^1 \) for Alice, denoted \( e_a^1 \), is intended to represent a set of worlds \( \{ \langle w, \{\} \rangle, \ldots \} \). An \( e^2 \) is of the form \( \{ \langle w, e_b^1 \rangle, \langle w', {e'}_b^1 \rangle, \ldots \} \), and it is to be read as ``at \( w \), she believes Bob considers the worlds in \( e_b^1 \) possible, but at \( w' \), she believes Bob considers the worlds in \( {e'}_b^1 \) possible''. This conveys the idea that Alice has only partial information about Bob, and so at different worlds, her beliefs about what Bob knows differ. We define an \( e^k \) for Alice, an \( e^j \) for Bob and a world \( w \in \mathcal{W} \) as a \( (k,j) \)-model \( (e_a^k, e_b^j,w) \). Only sentences of a maximal \( a \)-depth of \( k \) and a maximal \( b \)-depth of \( j \) are interpreted \emph{wrt.}~{}a \( (k,j) \)-model. The complete semantic definition is: \begin{enumerate} \item \( \eak, \ebj, w \models p \) iff \( p \in w \), for ground atoms \( p \), \item \( \eak, \ebj, w \models (m = n) \) iff \( m,n \in \mathcal{N} \) and are identical, \item \( \eak, \ebj, w \models \neg \alpha \) iff \( \eak, \ebj, w \not\models \alpha \), \item \( \eak, \ebj, w \models \alpha \vee \beta \) iff \( \eak, \ebj, w \models \alpha \) or \( \eak, \ebj, w \models \beta \), \item \( \eak, \ebj, w \models \forall x.~\alpha \) iff \( \eak, \ebj, w \models \alpha^x_n \) for all \( n \in \mathcal{N} \), \item \( \eak, \ebj, w \models \aknow \alpha \) iff for all \( \langle w', {e_b^{k-1}} \rangle \in {e_a^k} \),\\ \( {e_a^k}, {e_b^{k-1}}, w' \models \alpha \), \item\label{i:nknow} \( \eak, \ebj, w \models \mathitbf{N}_a \alpha \) iff for all \( \langle w', {e_b^{k-1}} \rangle \not\in {e_a^k} \),\\ \( {e_a^k}, {e_b^{k-1}}, w' \models \alpha \) \end{enumerate} \noindent And since \( \aoknow \alpha \) syntactically denotes \( \aknow \alpha \land \mathitbf{N}_a \neg \alpha \), it follows from the semantics that
\begin{enumerate} \item[8.] \( \eak, \ebj, w \models \aoknow \alpha \) iff for all worlds \( w' \), for all \( e^{k-1} \) for Bob, \( \langle w', {e_b^{k-1}} \rangle \in {e_a^k} \) iff \( {e_a^k}, {e_b^{k-1}}, w' \models \alpha \) \end{enumerate} \noindent (The semantics for \( \bknow\alpha \) and \( \mathitbf{N}_b \alpha \) are given analogously.) A formula \( \alpha \) (of \( a \)-depth of \( k \) and of \( b \)-depth of \( j \)) is \emph{satisfiable} iff there is a \( (k,j) \)-model such that \( e_a^k, e_b^j,w \models \alpha \). The formula is \emph{valid} (\( \models \alpha \)) iff \( \alpha \) is true at all \( (k,j) \)-models. Satisfiability is extended to a set of formulas \( \Sigma \) (of maximal \( a,b \)-depth of \( k,j \)) by requiring that there is a \( (k,j) \)-model \( \eak, \ebj, w \) such that \( \eak, \ebj, w \models \alpha' \) for every \( \alpha' \in \Sigma \). We write \( \Sigma \models \alpha \) to mean that for every \( (k,j) \)-model \( \eak, \ebj, w \), if \( \eak, \ebj, w \models \alpha' \) for all \( \alpha' \in \Sigma \), then \( \eak, \ebj, w \models \alpha \). \newcommand{{e_a}{\downarrow}_k^{k'}}{{e_a}{\downarrow}_k^{k'}} \newcommand{{e_b}{\downarrow^{k'-1}_{k-1}}}{{e_b}{\downarrow}_{k-1}^{k'-1}} \newcommand{{e_a}{\downarrow}_0^{k'}}{{e_a}{\downarrow}_0^{k'}} Validity is not affected if models of greater depth than needed are used. That is, if \( \alpha \) is true \emph{wrt.}~all \( (k,j) \)-models, then \( \alpha \) is true \emph{wrt.}~all \( (k',j') \)-models for \( k'\geq k,j'\geq j \). We obtain this result by constructing, for every \( e_a^{k'} \), a \( k \)-structure \( {e_a}{\downarrow}_k^{k'} \), such that they agree on all formulas of maximal \( a \)-depth \( k \). Analogously for \( e_b^{j'} \).
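To make the semantic clauses concrete, the following is a minimal brute-force sketch of the propositional case in Python. The encoding is ours, not the paper's: worlds are frozensets of true atoms, a \( k \)-structure is a frozenset of pairs drawn from \( \mathcal{W}\times\mathbb{E}^{k-1} \), and formulas are nested tuples whose depths must match the model's depth, as in the semantics above.

```python
from itertools import chain, combinations

# A brute-force propositional sketch of the (k,j)-model semantics.
# Worlds are frozensets of true atoms; a k-structure is a frozenset of
# (world, (k-1)-structure) pairs; formulas are tuples like ("La", ("atom", "p")).

ATOMS = ("p",)

def powerset(xs):
    xs = list(xs)
    return [frozenset(s) for s in
            chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))]

WORLDS = powerset(ATOMS)  # all worlds over ATOMS

def pairs(k):
    """The universe W x E^{k-1} from which a k-structure is drawn."""
    prev = [frozenset()] if k == 1 else powerset(pairs(k - 1))
    return [(w, e) for w in WORLDS for e in prev]

def holds(ea, k, eb, j, w, f):
    """Clauses 1-7 of the (k,j)-model semantics (propositional case)."""
    op = f[0]
    if op == "atom":
        return f[1] in w
    if op == "not":
        return not holds(ea, k, eb, j, w, f[1])
    if op == "or":
        return holds(ea, k, eb, j, w, f[1]) or holds(ea, k, eb, j, w, f[2])
    if op in ("La", "Na"):  # clauses 6 and 7: quantify over W x E^{k-1}
        want_in = (op == "La")
        return all(holds(ea, k, eb2, k - 1, w2, f[1])
                   for (w2, eb2) in pairs(k) if ((w2, eb2) in ea) == want_in)
    if op in ("Lb", "Nb"):  # the symmetric clauses for agent b
        want_in = (op == "Lb")
        return all(holds(ea2, j - 1, eb, j, w2, f[1])
                   for (w2, ea2) in pairs(j) if ((w2, ea2) in eb) == want_in)
    raise ValueError(op)

def Oa(f):
    """O_a f, i.e. L_a f and N_a (not f), written with 'not'/'or' only."""
    return ("not", ("or", ("not", ("La", f)),
                    ("not", ("Na", ("not", f)))))

# Alice only-knows p exactly when her 1-structure holds precisely the p-worlds.
p = ("atom", "p")
w0, wp = frozenset(), frozenset({"p"})
e_exact = frozenset({(wp, frozenset())})
e_loose = frozenset({(wp, frozenset()), (w0, frozenset())})
print(holds(e_exact, 1, frozenset(), 1, w0, Oa(p)))  # True
print(holds(e_loose, 1, frozenset(), 1, w0, Oa(p)))  # False: L_a p fails at w0
```

Enumerating `powerset(pairs(1))` confirms the reading of clause 8: `e_exact` is the unique \( 1 \)-structure for \( a \) satisfying `Oa(p)`.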
\newcommand{{e_a}{\downarrow^{k'}_1}}{{e_a}{\downarrow^{k'}_1}} \begin{definition} Given \( e_a^{k'} \), we define \( {e_a}{\downarrow}_k^{k'} \) for $k'\ge k \geq 1$: \begin{enumerate} \item \( {e_a}{\downarrow^1_1} = e_a^1 \), \item \( {e_a}{\downarrow^{k'}_1} = \{ \langle w, \{\} \rangle \mid \langle w, e_{b}^{k'-1} \rangle \in e_a^{k'} \} \), \item \( {e_a}{\downarrow}_k^{k'} = \{ \langle w, {e_b}{\downarrow^{k'-1}_{k-1}} \rangle \mid \langle w, e_{b}^{k'-1} \rangle \in e_a^{k'} \} \). \end{enumerate} \end{definition} \newcommand{{e_b}{\downarrow}^{j'}_j}{{e_b}{\downarrow}^{j'}_j} \newcommand{{e_a}{\downarrow}_{k-1}^{k'}}{{e_a}{\downarrow}_{k-1}^{k'}} \newcommand{{e_b}{\downarrow^{k'-1}_{j-1}}}{{e_b}{\downarrow^{k'-1}_{j-1}}} \newcommand{{e_a}{\downarrow_1^{k'}}}{{e_a}{\downarrow_1^{k'}}} \newcommand{{e_b}{\downarrow_1^{k'-1}}}{{e_b}{\downarrow_1^{k'-1}}} \newcommand{{e_a}{\downarrow_2^{k'}}}{{e_a}{\downarrow_2^{k'}}} \newcommand{{e_b}{\downarrow_2^{j'}}}{{e_b}{\downarrow_2^{j'}}} \begin{lemma}\label{lem:satisfiability_higher_structures} For all formulas \( \alpha \) of maximal \( a,b \)-depth of \( k,j \), \( e_a^{k'}, e_b^{j'}, w \models \alpha \) iff \( {e_a}{\downarrow}_k^{k'}, {e_b}{\downarrow}^{j'}_j, w \models \alpha \), for \( k' \geq k,j'\geq j \). \end{lemma} \begin{proof} By induction on the depth of formulas. The claim holds immediately for atomic formulas, disjunctions and negations, since we have the same world \( w \). Assume that the result holds for formulas of \( a,b \)-depth \( 1 \). Let \( \alpha \) be such a formula, and suppose \( e_a^{k'}, e_b^{j'}, w\models \aknow \alpha \) (where \( \aknow \alpha \) has \( a,b \)-depth of \( 1,2 \)).
Then, for all \( \langle w', e_{b}^{k'-1} \rangle \in e_a^{k'} \), \( e_a^{k'}, e_{b}^{k'-1}, w'\models \alpha \) iff (by the induction hypothesis) \( {e_a}{\downarrow_1^{k'}}, {e_b}{\downarrow_1^{k'-1}}, w'\models \alpha \) iff \( {e_a}{\downarrow_2^{k'}}, \{\}, w\models \aknow \alpha \). By construction, we also have \( {e_a}{\downarrow_1^{k'}}, \{\},w\models \aknow \alpha \). Lastly, since \( \aknow \alpha \) is \( a \)-subjective, \( b \)'s structure is irrelevant, and thus \( {e_a}{\downarrow_1^{k'}}, {e_b}{\downarrow_2^{j'}}, w\models \aknow\alpha \). For the reverse direction, suppose \( {e_a}{\downarrow_1^{k'}}, {e_b}{\downarrow_2^{j'}}, w\models \aknow \alpha \). Then for all \( w' \in {e_a}{\downarrow_1^{k'}} \), \( {e_a}{\downarrow_1^{k'}}, \{\}, w' \models \alpha \) iff (by construction) for all \( \langle w', e_{b}^{k'-1} \rangle \in e_a^{k'} \), \( e_a^{k'}, e_{b}^{k'-1}, w' \models \alpha \) iff \( e_a^{k'}, \{\}, w \models \aknow \alpha \). Since \( b \)'s structure is irrelevant, we have \( e_a^{k'}, e_b^{j'}, w \models \aknow \alpha \). The cases for \( \bknow \alpha \), \( \mathitbf{N}_a \alpha \) and \( \mathitbf{N}_b \alpha \) are completely symmetric. \eprf \end{proof} \begin{theorem} For all formulas \( \alpha \) of \( a,b \)-depth of \( k,j \), if \( \alpha \) is true at all \( (k,j) \)-models, then \( \alpha \) is true at all \( (k',j') \)-models with $k'\ge k$ and $j'\ge j$. \end{theorem} \begin{proof} Suppose \( \alpha \) is true at all \( (k,j) \)-models. Given any \( (k',j') \)-model, by assumption \( {e_a}{\downarrow}_k^{k'}, {e_b}{\downarrow}^{j'}_j, w\models \alpha \) and by Lemma \ref{lem:satisfiability_higher_structures}, \( e_a^{k'}, e_b^{j'}, w\models \alpha \). \eprf \end{proof} \noindent Knowledge with \( k \)-structures satisfies \emph{weak} \( \textsf{S5} \) properties, and the Barcan formula \cite{nla.cat-vn2004742}.
\begin{lemma} If \( \alpha \) is a formula, the following are valid \emph{wrt.}~ models of appropriate depth (\( \Box_i \) denotes \( \mathitbf{L}_i \) or \( \mathitbf{N}_i \)): \begin{enumerate} \item \( \Box_i \alpha \land \Box_i(\alpha \supset \beta) \supset \Box_i \beta \), \item \( \Box_i \alpha \supset \Box_i\Box_i \alpha \), \item \( \neg \Box_i \alpha \supset \Box_i \neg \Box_i \alpha \), \item \( \forall \mathitbf{x}.~\Box_i \alpha \supset \Box_i(\forall \mathitbf{x}.~\alpha) \). \end{enumerate} \end{lemma} \begin{proof} The proofs are similar. For item 3, wlog let \( \Box_i \) be \( \aknow \). Suppose \( e_a^k, e_b^j, w \models \neg \aknow \alpha \). There is some \( \langle w',e_b^{k-1} \rangle \in e_a^k \) such that \( e_a^k, e_b^{k-1}, w' \models \neg \alpha \). Let \( w'' \) be any world such that \( \langle w'', \edash_b^{k-1} \rangle \in e_a^k \). Then, \( e_a^k, \edash_b^{k-1}, w'' \models \neg \aknow \alpha \). Thus, \( e_a^k, e_b^j, w\models \aknow \neg \aknow \alpha \). The case of \( \mathitbf{N}_a \) is analogous. \eprf \end{proof} \newcommand{\textit{true}}{\textit{true}} \noindent Before moving on, let us briefly reflect on the fact that $k$-structures have finite depth. So suppose \( a \) only-knows KB, of depth \( k \). Using \( k \)-structures allows us to reason about what is believed, up to depth \( k \). Also, if we construct epistemic states from \( k' \)-structures where \( k'\geq k \), then the logic correctly captures non-beliefs beyond the depth \( k \). To illustrate, let \( \textit{true} \) (depth \( 1 \)) be all that \( a \) knows. Then, it can easily be shown that both the sentences \( \aoknow(\textit{true}) \supset \neg \aknow\neg \bknow \alpha \) and \( \aoknow(\textit{true}) \supset \neg \aknow \bknow \alpha \) are valid sentences in the logic, by considering any \( e^2 \) (and higher) for \( a \). 
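As a quick sanity check of items 2 and 3 of the lemma above, one can brute-force all \( 1 \)-structures for \( a \) over a single atom. The Python sketch below uses our own encoding and exploits the fact that, for \( a \)-subjective formulas, \( b \)'s structure and the outer world are irrelevant.

```python
from itertools import chain, combinations

# Brute-force check of positive and negative introspection (items 2 and 3
# of the weak S5 lemma) over all 1-structures for agent a, one atom "p".
# Encoding is ours: worlds are frozensets of true atoms; a 1-structure is a
# frozenset of (world, frozenset()) pairs; formulas are Python predicates
# pred(ea, w), so that L_a and N_a can be applied to them uniformly.

WORLDS = [frozenset(), frozenset({"p"})]
PAIRS = [(w, frozenset()) for w in WORLDS]      # the universe W x E^0

def La(pred):
    """L_a pred: pred holds at every pair in the structure (clause 6)."""
    return lambda ea, w: all(pred(ea, w2) for (w2, e2) in PAIRS if (w2, e2) in ea)

def Na(pred):
    """N_a pred: pred holds at every pair outside the structure (clause 7)."""
    return lambda ea, w: all(pred(ea, w2) for (w2, e2) in PAIRS if (w2, e2) not in ea)

def Not(pred):
    return lambda ea, w: not pred(ea, w)

def powerset(xs):
    return [frozenset(s) for s in
            chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))]

p = lambda ea, w: "p" in w

for ea in powerset(PAIRS):                      # every possible epistemic state
    for Box in (La, Na):
        # item 2:  Box_a p  implies  Box_a Box_a p
        assert not Box(p)(ea, None) or Box(Box(p))(ea, None)
        # item 3:  not Box_a p  implies  Box_a (not Box_a p)
        assert Box(p)(ea, None) or Box(Not(Box(p)))(ea, None)
print("introspection holds on all", len(powerset(PAIRS)), "structures")
```

The check succeeds for a structural reason visible in the code: \( \aknow\alpha \) and \( \mathitbf{N}_a\alpha \) are evaluated against the fixed epistemic state `ea` and are therefore independent of the quantified world, which is exactly what the proof of item 3 argues.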
For most purposes, this restriction of having a parameter \( k \) seems harmless, in the sense that agents usually have a finite knowledge base with sentences of some maximal depth $k$ and they should not be able to conclude anything about what is known at depths higher than $k$, with one exception. If we were to include a notion of common knowledge \cite{reasoning:about:knowledge}, then we would get entailments about what is believed at arbitrary depths. With our current model, this cannot be captured, but we are willing to pay that price because in return we get, for the first time, a very simple possible-world style account of only-knowing. Similarly, we have nothing to say about (infinite) knowledge bases with unbounded depth. \section{Multi-Agent Only-Knowing} \label{sec:multi_agent_only_knowing} In this section, we return to the features of only-knowing discussed earlier and verify that the new semantics reasonably extends them to the multi-agent case. We also briefly discuss earlier attempts at capturing these features. Halpern (\citeyear{DBLP:conf/aaai/Halpern93}), Lakemeyer (\citeyear{Lakemeyer1993}), and Halpern and Lakemeyer (\citeyear{1029713}) independently attempted to extend \( \mathcal{ONL} \) to the many agent case.\footnote{For space reasons, we do not review all aspects of these approaches.} There are some subtle differences in their approaches, but the main restriction is that they only allow a propositional language. Henceforth, to make the comparison feasible, we shall also speak of the propositional subset of \( \mathcal{ONL}_n \) with the understanding that the semantical framework is now defined for propositions (from an infinite set \( \Phi \)) rather than ground atoms. The main component in these features is the notion of \emph{possibility}. In the single agent case, each world represents a possibility. Thus, from a logical viewpoint, a possibility is simply the set of objective formulas true at some world.
Further, the set of epistemic possibilities is given by \( \{ \{\textrm{objective formulas true at}~w \} \mid w\in e\} \). Halpern and Lakemeyer (\citeyear{1029713}) correctly argue that the appropriate generalization of the notion of possibility in the many agent case is given by \( i \)-objective formulas. Intuitively, a possible state of affairs according to \( a \) includes the state of the world (objective formulas), as well as what \( b \) is taken to believe. The earlier attempts by Halpern and Lakemeyer use Kripke structures with accessibility relations \( \mathcal{K}_i \) for each agent \( i \). Given a Kripke structure \( M \), the notion of possibility is defined as the set of \( i \)-objective formulas true at some Kripke world, and the set of epistemic possibilities is obtained from the \( i \)-objective formulas true at all \( i \)-accessible worlds. Formally, the set of epistemic possibilities true at \( (M,w) \), where \( w \) is a world in \( M \), is defined as \( \{ \textit{obj}_i^+(M,w') \mid w' \in \mathcal{K}_i(w) \} \), where \( \textit{obj}_i^+(M,w') \) is the set of \( i \)-objective formulas true at \( (M,w') \).\footnote{The superscript \( + \) denotes that the set includes non-basic formulas. Given \( X^+ \), we let \( X = \{ \phi \in X^+ \mid \phi~\textrm{is basic} \} \). } Although intuitive, note that, even for the propositional subset of \( \mathcal{ONL} \), a Kripke world is a completely different entity from what Levesque supposes. Perhaps one consequence is that the semantic proofs in earlier approaches are very involved. In contrast, we define worlds exactly as Levesque supposes. And our notion of possibility is obtained from the set of \( a \)-objective formulas true at each \( \langle w, e_b^{k-1} \rangle \) in \( e_a^k \). \begin{definition} Suppose \(M = (\eak, \ebj, w) \) is a \( (k,j) \)-model.
\begin{enumerate} \item let \( \textit{obj}_i^+(M) = \{ \textrm{\( i \)-objective \( \phi \)} \mid M \models \phi \} \), \item let \( {O}{bj}^+_a({e_a^k}) = \{ \textit{obj}_a^+(\{\}, {e_b^{k-1}}, w) \mid \langle w, {e_b^{k-1}} \rangle \in {e_a^k} \} \), \item let \( {O}{bj}^+_b(e_b^j) = \{ \textit{obj}_b^+(e_a^{j-1}, \{\}, w) \mid \langle w, e_a^{j-1} \rangle \in e_b^j \} \). \end{enumerate} \end{definition} \noindent All the \( a \)-objective formulas true at a model \( M \), essentially the objective formulas true \emph{wrt.}~\( w \) and the \( b \)-subjective formulas true \emph{wrt.}~\( e_b^j \), are given by \( \textit{obj}_a^+(M) \). Note that these formulas do not strictly correspond to \( a \)'s possibilities. Rather, we define \( {O}{bj}^+_a \) on her epistemic state \( e_a^k \), and this gives us all the \( a \)-objective formulas that \( a \) considers possible. We shall now argue that the intuition of all of Levesque's properties is maintained.\footnote{It is interesting to note that such a formulation of Levesque's properties is not straightforward in the first-order case. That is, for the quantified language, it is known that there are epistemic states that cannot be characterized using only objective formulas \cite{levesque2001logic}. Thus, it is left open how one must correctly generalize the features of first-order \( \mathcal{ONL} \). } \\ \noindent \textbf{Property 1.}~In the single agent case, this property ensured that an agent's epistemic possibilities are not affected when evaluating \( {\mathitbf{N}} \). This is immediately the case here. Given a model, say \( (\eak, \ebj, w) \), \( a \)'s epistemic possibilities are determined by \( {O}{bj}^+_a({e_a^k}) \). To evaluate \( \mathitbf{N}_a\alpha \), we consider all models \( ({e_a^k}, {e_b^{k-1}}, w' ) \) such that \( \langle w', {e_b^{k-1}} \rangle \not\in {e_a^k} \). Again, \( a \)'s possibilities are given by \( {O}{bj}^+_a({e_a^k}) \) for all these models, and do not change.
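Before turning to the next property, a small example of our own may make \( {O}{bj}^+_i \) concrete. Let \( \Phi = \{p, q\} \) and \( e_a^1 = \{ \langle w, \{\} \rangle \mid w \models p \} \). Then \( {O}{bj}^+_a(e_a^1) \) contains exactly two possibilities, one for each world satisfying \( p \): the one for the world where only \( p \) holds contains \( p \land \neg q \), and the one for the world where both \( p \) and \( q \) hold contains \( p \land q \). Moreover, since the inner structure for \( b \) is empty in each pair, both possibilities contain every formula of the form \( \bknow \alpha \), vacuously.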
\\[1ex] \noindent \textbf{Property 2.} In the single agent case, this property ensured that evaluating \( \mathitbf{L}\alpha \) and \( {\mathitbf{N}}\alpha \) is always \emph{wrt.}~ the set of all possibilities, and completely independent of \( e \). As discussed, in the many agent case, possibilities mean \( i \)-objective formulas and analogously, if \( \alpha \) is a possibility in \( a \)'s view, say an \( a \)-objective formula of maximal \( b \)-depth of \( k \), then we should interpret \( \aknow \alpha \) and \( \mathitbf{N}_a \alpha \) \emph{wrt.}~all \( a \)-objective possibilities of max.~depth \( k \): the set of \( (k+1) \)-structures. Clearly then, the result is fixed and independent of the corresponding \( e^{k+1} \). The following lemma is a direct consequence of the definition of the semantics. \begin{lemma} Let \( \alpha \) be an \( i \)-objective formula of \( j \)-depth \( k \), for \( j \neq i \). Then, the set of \( k\!+\!1 \)-structures that evaluate \( \mathitbf{L}_i \alpha \) and \( \mathitbf{N}_i \alpha \) is \( \mathbb{E}^{k+1} \). \end{lemma} \noindent \textbf{Property 3.} The third property ensures that one can characterize epistemic states from any set of \( i \)-objective formulas. Intuitively, given such a set, we must have a model where \emph{precisely} this set is the epistemic state. Earlier attempts at clarifying this property involved constructing a \emph{set} of maximally \( {\textsf{K45}}_{n} \)-consistent sets of basic \( i \)-objective formulas, and showing that there exists an epistemic state that precisely corresponds to this set. But, defining possibilities via \( {\textsf{K45}}_{n} \) proof-theoretic machinery inevitably leads to some limitations, as we shall see. We instead proceed semantically, and go beyond basic formulas. Let \( \Omega \) be a satisfiable set of \( i \)-objective formulas, say of maximal \( j \)-depth \( k \), for \( j\neq i \).
Let \( \Omega' \) be a set obtained by adding an \( i \)-objective formula \( \gamma \) of maximal \( j \)-depth \( k \) such that \( \Omega' \) is also satisfiable. By considering all \( i \)-objective formulas of maximal \( j \)-depth \( k \), let us construct \( \Omega' \), \( \Omega'' \), \( \ldots \) by adding formulas iff the resultant set remains satisfiable. When we are done, the resulting \( \Omega^\ast \) is what we shall call a maximally satisfiable \( i \)-objective set.\footnote{A maximally satisfiable set is to be understood as a semantically characterized \emph{complete} description of a possibility, analogous to a proof-theoretically characterized notion of maximally consistent set of formulas.} Naturally, there may be many such sets corresponding to \( \Omega \). We show that given a \emph{set} of maximally satisfiable \( i \)-objective sets, there is a model where precisely this set characterizes the epistemic state. \begin{theorem}\label{thm:iset-theorem} Let \( S_i \) be a set of maximally satisfiable sets of \( i \)-objective formulas, and \( \sigma \) a satisfiable objective formula. Suppose \( S_a \) is of max.~\( b \)-depth \( k-1 \) and \( S_b \) is of max.~\( a \)-depth \( j-1 \). Then there is a model \( M^* = \langle \estar_a^k, \estar_b^j, w^{*} \rangle \) such that \( M^* \models \sigma \), \( S_a = {O}{bj}^+_a(\estar_a^k) \) and \( S_b = {O}{bj}^+_b(\estar_b^j) \). \end{theorem} \begin{proof} Consider \( S_a \). Each \( S' \in S_a \) is a maximally satisfiable \( a \)-objective set, and thus by definition, there is a \( k \)-structure \( \langle w', e_b^{k-1} \rangle \) such that \( \{\}, e_b^{k-1}, w' \models S' \). Define such a set of \( k \)-structures \( \{ \langle w', e_b^{k-1} \rangle \} \), corresponding to each \( S' \in S_a \), and let this be \( \estar_a^k \). It is immediate to verify that \( {O}{bj}^+_a(\estar_a^k) = S_a \). Analogously, for \( \estar_b^j \) using \( S_b \).
Finally, there is clearly some world \( w^{*} \) where \( \sigma \) holds. \eprf \end{proof} \subsection{On Validity} \label{sub:on_validity} How does the semantics compare to earlier approaches? In particular, we are interested in valid formulas. Lakemeyer (\citeyear{Lakemeyer1993}) proposes a semantics using \( {\textsf{K45}}_{n} \)-canonical models, but he shows that the formula \(\neg \aoknow \neg \mathitbf{O}_{b} p \) is valid for any proposition \( p \). Intuitively, \( \aoknow \neg \mathitbf{O}_{b} p \) says that all that Alice knows is that Bob does not only-know \( p \), and as Lakemeyer argues, the validity of its negation is unintuitive. After all, Bob could \emph{honestly} tell Alice that he does not only know \( p \). The negation of this formula, on the other hand, is satisfiable in a Kripke structure approach by Halpern (\citeyear{DBLP:conf/aaai/Halpern93}), called the \( i \)-set approach.\footnote{In his original formulation, Halpern (\citeyear{DBLP:conf/aaai/Halpern93}) constructs \emph{trees}. We build on discussions in \cite{1029713}.} It is also satisfiable in the \( k \)-structure semantics. Interestingly, the \( i \)-set approach and \( k \)-structures agree on one more notion. The formula \( \aknow \bot \supset \neg \mathitbf{N}_a \neg \mathitbf{O}_{b} \neg \aoknow p \)~(\( \zeta \)) is valid in both, while \( \neg \zeta \) is satisfiable \emph{wrt.}~Lakemeyer (\citeyear{Lakemeyer1993}). (It turns out that the validity of \( \zeta \) in our semantical framework is implicitly related to the satisfiability of \( \mathitbf{O}_{b} \neg \aoknow p \), so this property is not unreasonable.) However, we immediately remark that the \( i \)-set approach and \( k \)-structures do not share too many similarities beyond those presented above. In fact, the \( i \)-set approach does not truly satisfy Levesque's second property.
For instance, \( \mathitbf{N}_a \neg \mathitbf{O}_{b} p \land \aknow \neg \mathitbf{O}_{b} p \)~(\( \lambda \)) is satisfiable in Halpern (\citeyear{DBLP:conf/aaai/Halpern93}). Recall that, in this property, the union of models that evaluate \( \mathitbf{N}_i \alpha \) and \( \mathitbf{L}_i \alpha \) must lead to all conceivable states. So, the satisfiability of \( \lambda \) leaves open the question as to why \( \mathitbf{O}_{b} p \) is not considered since \( \neg \mathitbf{O}_{b} p \) is true at all conceivable states. We show that, in contrast, \( \lambda \) is not satisfiable in the \( k \)-structures approach. Lastly, \cite{1029713} involves enriching the language, the intuitions of which are perhaps best explained after reviewing the proof theory, and so we defer discussions to later.\footnote{An approach by \cite{DBLP:conf/aiml/Waaler04,DBLP:conf/tark/WaalerS05} is also motivated by the proof theory. Discussions are deferred.} \begin{theorem}\label{lem:iset_validity} The following are properties of the semantics: \begin{enumerate} \item \( \aoknow \neg \mathitbf{O}_{b} p \), for any \( p \in \Phi \), is satisfiable. \item \( \models \aknow \bot \supset \neg \mathitbf{N}_a \neg \mathitbf{O}_{b} \neg \aoknow p \). \item \( \mathitbf{N}_a\neg\mathitbf{O}_{b} p \land \aknow \neg\mathitbf{O}_{b} p \) is not satisfiable. \end{enumerate} \end{theorem} \begin{proof} \textbf{Item 1.} Let \( {\mathcal{W}}_p = \{ w \mid w \models p \} \) and let \( E \) be all subsets of \( \mathcal{W} \) except the set \( {\mathcal{W}}_p \). It is easy to see that if \( e_b^1 \in E \), then \( \{\}, e_b^1, w \not\models \mathitbf{O}_{b} p \), for any world \( w \). Now, define an \( e^2 \) for \( a \) that has all of \( \mathcal{W} \times E \). Thus, \( e_a^2, \{\}, w \models \aoknow \neg \mathitbf{O}_{b} p\). \textbf{Item 2.} Suppose \( e_a^k, \{\}, w\models \aknow \bot \) for any \( w \in \mathcal{W} \).
Then, for all \( \langle w', e_b^{k-1} \rangle \in e_a^k \), \( e_a^k, e_b^{k-1}, w' \models \bot \), and thus, \( e_a^k = \{\} \). Suppose now \( e_a^k, \{\}, w\models \mathitbf{N}_a \neg \mathitbf{O}_{b} \neg \aoknow p \). Then, \emph{wrt.}~all of \( \langle w', e_b^{k-1} \rangle \not\in e_a^k \), i.e.~all of \( \mathbb{E}^k \), \( \neg \mathitbf{O}_{b} \neg \aoknow p \) must hold. That is, \( \neg \mathitbf{O}_{b} \neg \aoknow p \) must be valid. From above, we know this is not the case. \textbf{Item 3.} Suppose \( e_a^k, \{\}, w \models \aknow \neg \mathitbf{O}_{b} p \), for any \( w \). Then, for all \( \langle w', e_b^{k-1} \rangle \in e_a^k \), \( e_a^k, e_b^{k-1}, w' \models \neg \mathitbf{O}_{b} p \). Since \( \mathitbf{O}_{b} p \) is satisfiable, there is an \( \estar_b^{k-1} \) such that \( \{\}, \estar_b^{k-1}, w^{*} \models \mathitbf{O}_{b} p \), and \( \langle w^{*}, \estar_b^{k-1} \rangle \not\in e_a^k \). Then, \( e_a^k, \{\}, w \models \neg \mathitbf{N}_a \neg \mathitbf{O}_{b} p \). \eprf \end{proof} \noindent Thus, \( k \)-structures seem to satisfy our intuitions on the behavior of only-knowing. To understand why, notice that \( \neg \aoknow \neg \mathitbf{O}_{b} p \) and \( \lambda \) involve the nesting of \( \mathitbf{N}_i \) operators. Lakemeyer (\citeyear{Lakemeyer1993}) makes an unavoidable technical commitment. A (\( i \)-objective) possibility is formally a maximally \( {\textsf{K45}}_{n} \)-consistent set of \emph{basic} \( i \)-objective formulas. The restriction to basic formulas is an artifact of a semantics based on the canonical model. Unfortunately, there is more to agent \( i \)'s possibility than just basic formulas.
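Before continuing, the construction in the proof of Item 1 can also be replayed concretely. The standalone sketch below is our own illustration over a single atom \( p \) (the tuple encoding of formulas and the depth-annotated modal nodes are assumptions of the example): it builds \( E \) as all \( 1 \)-structures for \( b \) except the one that only-knows \( p \), and confirms that the resulting \( e^2 \) satisfies \( \aoknow \neg \mathitbf{O}_{b} p \).

```python
from itertools import combinations

WORLDS = [frozenset(), frozenset({'p'})]

def powerset(xs):
    xs = list(xs)
    return [frozenset(s) for r in range(len(xs) + 1)
            for s in combinations(xs, r)]

def structures(agent, k):
    # all k-structures for `agent`: sets of pairs <world, (k-1)-structure>
    if k == 0:
        return [frozenset()]
    other = 'b' if agent == 'a' else 'a'
    return powerset((w, e) for w in WORLDS for e in structures(other, k - 1))

def holds(ea, eb, w, f):
    # modal nodes ('K'|'N', agent, body, k) carry the structure depth k
    op = f[0]
    if op == 'atom':
        return f[1] in w
    if op == 'not':
        return not holds(ea, eb, w, f[1])
    if op == 'and':
        return holds(ea, eb, w, f[1]) and holds(ea, eb, w, f[2])
    agent, body, k = f[1], f[2], f[3]
    e = ea if agent == 'a' else eb
    other = 'b' if agent == 'a' else 'a'
    pairs = [(w2, e2) for w2 in WORLDS for e2 in structures(other, k - 1)]
    pool = [q for q in pairs if (q in e) == (op == 'K')]
    if agent == 'a':
        return all(holds(ea, e2, w2, body) for (w2, e2) in pool)
    return all(holds(e2, eb, w2, body) for (w2, e2) in pool)

def O(agent, body, k):
    # only-knowing: O_i(body) = K_i(body) & N_i(~body)
    return ('and', ('K', agent, body, k), ('N', agent, ('not', body), k))

p = ('atom', 'p')
Wp = frozenset((w, frozenset()) for w in WORLDS if 'p' in w)  # the 1-structure only-knowing p
E = [e for e in structures('b', 1) if e != Wp]                # all 1-structures except Wp
ea2 = frozenset((w, e) for w in WORLDS for e in E)
print(holds(ea2, frozenset(), frozenset(), O('a', ('not', O('b', p, 1)), 2)))  # True
```

The excluded pairs \( \langle w, W_p \rangle \) are precisely the ones where \( \mathitbf{O}_{b} p \) holds, so both conjuncts of \( \aoknow \neg \mathitbf{O}_{b} p \) go through, as in the proof.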
In the case of Halpern (\citeyear{DBLP:conf/aaai/Halpern93}), the problem seems to be that \( \mathitbf{N}_i \) and \( \mathitbf{L}_i \) do not interact naturally, and that the full complement of epistemic possibilities is not considered in interpreting \( \mathitbf{N}_i \). In contrast, Theorem \ref{thm:iset-theorem} shows that we allow non-basic formulas and, by using a strictly semantic notion, we avoid problems that arise from the proof-theoretic restrictions. And, since the semantics faithfully complies with the second property, \( \lambda \) is not satisfiable. The natural question is whether there are axioms that characterize the semantics. We begin, in the next section, with a proof theory by Lakemeyer (\citeyear{Lakemeyer1993}) that is known to be sound and complete for all attempts so far, but for a restricted language. \section{Proof Theory} \label{sec:proof_theory} In the single agent case, \( \mathcal{ONL} \)'s proof theory consists of axioms of propositional logic, axioms that treat \( \mathitbf{L} \) and \( {\mathitbf{N}} \) as classical belief operators in \( \textsf{K45} \), an axiom that allows us to use \( {\mathitbf{N}} \) and \( \mathitbf{L} \) freely on subjective formulas, modus ponens (\( \mathbf{MP} \)) and necessitation (\( \mathbf{NEC} \)) for both \( \mathitbf{L} \) and \( {\mathitbf{N}} \) as inference rules, and the following axiom:\footnote{Strictly speaking, this is not the proof theory introduced in~\cite{77758}, where an axiom replaces the inference rule \( \mathbf{NEC} \). Here, we consider an equivalent formulation by Halpern and Lakemeyer~(\citeyear{1029713}).} \begin{itemize} \item[] \( \mathbf{A5}. \) \( {\mathitbf{N}} \alpha \supset \neg \mathitbf{L} \alpha \) if \( \neg \alpha \) is a propositionally consistent\\\qquad \mbox{}\qquad \mbox{} objective formula.
\end{itemize} \noindent As we shall see, only the axiom \( \mathbf{A5} \) is controversial, since extending any objective \( \alpha \) to \emph{any} \( i \)-objective \( \alpha \) is problematic. Mainly, the soundness of the axiom in the single agent case relies on propositional logic. But in the multi-agent case, since we go beyond propositional formulas, establishing this consistency is non-trivial, and even circular. To this end, Lakemeyer~(\citeyear{Lakemeyer1993}) proposes to resolve this consistency by relying on the existing logic \( {\textsf{K45}}_{n} \). As a consequence, his proof-theoretic formulation appropriately generalizes all of Levesque's axioms, except for \( \mathbf{A5} \), where its application is restricted to only basic \( i \)-objective consistent formulas. We use \( \vdash \) to denote provability. \begin{definition} \( \mathcal{ONL}_n^- \) consists of all formulas \( \alpha \) in \( \mathcal{ONL}_n \) such that no \( \mathitbf{N}_j \) may occur in the scope of a \( \mathitbf{L}_i \) or a \( \mathitbf{N}_i \), for \( i \neq j \). \end{definition} \noindent The following axioms, along with \( \mathbf{MP} \) and \( \mathbf{NEC} \) (for \( \mathitbf{L}_i \) and \( \mathitbf{N}_i \)), constitute an axiomatization that we refer to as \( {AX}_n \). \( {AX}_n \) is sound and complete for the canonical model and the \( i \)-set approach for formulas in \( \mathcal{ONL}_n^- \). \begin{itemize} \item[] \( \mathbf{A1}_n. \) All instances of propositional logic, \item[] \( \mathbf{A2}_n. \) \( \mathitbf{L}_i(\alpha \supset \beta) \supset (\mathitbf{L}_i \alpha \supset \mathitbf{L}_i \beta) \), \item[] \( \mathbf{A3}_n. \) \( \mathitbf{N}_i(\alpha \supset \beta) \supset (\mathitbf{N}_i \alpha \supset \mathitbf{N}_i \beta) \), \item[] \( \mathbf{A4}_n.
\) \( \sigma \supset \mathitbf{L}_i \sigma \land \mathitbf{N}_i \sigma \) for \( i \)-subjective \( \sigma \), \item[] \( \mathbf{A5}_n.~\mathitbf{N}_i \alpha \supset \neg \mathitbf{L}_i \alpha \) if \( \neg \alpha \) is a \( {\textsf{K45}}_{n} \)-consistent\\\qquad \mbox{}\qquad \mbox{} \( i \)-objective basic formula. \end{itemize} \noindent Observe that, as discussed, the soundness of \( \mathbf{A5}_n \) is built on \( {\textsf{K45}}_{n} \)-consistency. Since our semantics is not based on Kripke structures, proving that every \( {\textsf{K45}}_{n} \)-consistent formula is satisfiable in some \( (k,j) \)-model is not immediate. We propose a construction called the \( (k,j) \)-\emph{correspondence model}. In the following, in order to disambiguate \( \mathcal{W} \) from Kripke worlds, we shall refer to our worlds as propositional valuations. \begin{definition} The \( {\textsf{K45}}_{n} \) canonical model \( M^c = \langle \mathcal{W}^c, \pi^c, \mathcal{K}^c_a, \mathcal{K}^c_b \rangle \) is defined as follows: \begin{enumerate} \item \( \mathcal{W}^c = \{ w \mid w \) is a (basic) maximally consistent set \( \} \) \item for all \( p \in \Phi\) and worlds \( w \), \( \pi^c(w)(p) = \textrm{true} \) iff \( p \in w \) \item \( (w,w') \in \mathcal{K}^c_i \) iff \( w\backslash \mathitbf{L}_i \subseteq w' \), where \( w\backslash \mathitbf{L}_i = \{ \alpha \mid \mathitbf{L}_i\alpha \in w\} \) \end{enumerate} \end{definition} \begin{definition} Given \( M^c \), define a set of propositional valuations \( \mathcal{W} \) such that for each world \( w \in \mathcal{W}^c \), there is a valuation \( [\![ w ]\!] \in \mathcal{W} \), \( [\![ w ]\!] = \{ p \mid p \in w \} \). \end{definition} \begin{definition}\label{defn:correspondence_model} Given \( M^c \) and a world \( w \in \mathcal{W}^c \), construct a $(k,j)$-model \( \langle {\ew}_a^k, {\ew}_b^j, [\![ w ]\!]
\rangle \) from valuations \( \mathcal{W} \) inductively: \begin{enumerate} \item \( {e_{[\![ w ]\!]}}_a^1 = \{ \langle [\![ w' ]\!], \{\} \rangle \mid w' \in \mathcal{K}^c_a(w) \} \), \item \( {\ew}_a^k = \{ \langle [\![ w' ]\!], {\ewdash}_b^{k-1} \rangle \mid w' \in \mathcal{K}^c_a(w) \} \), \item[] where \( {\ewdash}_b^{k-1} = \{ \langle [\![ w'' ]\!], {\ewdashdash}_a^{k-2} \rangle \mid w'' \in \mathcal{K}^c_b(w') \} \). \end{enumerate} \noindent Further, \( {\ew}_b^j \) is constructed analogously. Let us refer to this model as the \( (k,j) \)-correspondence model of \( (M^c, w) \). \end{definition} \noindent Roughly, Defn.~\ref{defn:correspondence_model} is a construction of a \( (k,j) \)-model that appeals to the accessibility relations in the canonical model.\footnote{The construction is somewhat similar to the notion of \emph{generated submodels} of Kripke frames \cite{hughes1984companion}.} Thus, an \( e_a^1 \) for Alice \emph{wrt.}~\( w \) has precisely the valuations of the Kripke worlds \(w' \in \mathcal{K}^c_a(w) \). Quite analogously, an \( e_a^k \) is a set \( \{ \langle [\![ w' ]\!], e^{k-1} \rangle \} \), where \( w' \in \mathcal{K}^c_a(w) \) as before, but \( e^{k-1} \) is an epistemic state for Bob and hence refers to all worlds \( w'' \in \mathcal{K}^c_b(w') \). By induction on the depth of a \emph{basic} formula \( \alpha \), we obtain the theorem that \( \alpha \) of maximal \( a,b \)-depth \( k,j \) is satisfiable at \( (M^c,w) \) iff the \( (k,j) \)-correspondence model satisfies the formula. \begin{theorem}\label{thm:basicformulas_canon_iff_kjmodel} For all basic formulas \( \alpha \) in \( \mathcal{ONL}_n^- \) and of maximal \( a,b \)-depth of \( k,j \), \\ \qquad \mbox{}\qquad \mbox{}\qquad \mbox{}\( M^c, w \models \alpha \) iff \( {\ew}_a^k, {\ew}_b^j, [\![ w ]\!] \models \alpha \). \end{theorem} \begin{proof} By definition, the proof holds for propositional formulas, disjunctions and negations.
So let us say the result holds for formulas of \( a,b \)-depth \( 1 \). Suppose now \( M^c,w \models \aknow\alpha \), where \( \aknow \alpha \) has \( a,b \)-depth of \( 1,2 \). Then for all \( w'\in \mathcal{K}^c_a(w) \), \( M^c, w'\models \alpha \) iff (by induction hypothesis) \( {e_{[\![ w' ]\!]}}_a^1, {e_{[\![ w' ]\!]}}_b^1, [\![ w' ]\!] \models \alpha \) iff \( {e_{[\![ w ]\!]}}_a^2, \{\}, [\![ w ]\!] \models \aknow \alpha \). By construction, we also have \( {e_{[\![ w ]\!]}}_a^1, \{\}, [\![ w ]\!] \models \aknow \alpha \). Since \( b \)'s structure is irrelevant, we get \( {e_{[\![ w ]\!]}}_a^1, {e_{[\![ w ]\!]}}_b^2, [\![ w ]\!] \models \aknow \alpha \), proving the hypothesis. For the other direction, suppose \( {e_{[\![ w ]\!]}}_a^1, {e_{[\![ w ]\!]}}_b^2, [\![ w ]\!] \models \aknow \alpha \). For all \( [\![ w' ]\!] \in {e_{[\![ w ]\!]}}_a^1 \), \( {e_{[\![ w ]\!]}}_a^1, \{\}, [\![ w' ]\!] \models \alpha \) iff (by hyp.) \( M^c, w'\models \alpha \) for all \( w' \in \mathcal{K}^c_a(w) \) iff \( M^c,w\models \aknow \alpha \).\eprf \end{proof} \begin{lemma}\label{lem:every_kffn_consistent_is_sat} Every \( {\textsf{K45}}_{n} \)-consistent basic formula \( \alpha \) is satisfiable \emph{wrt.}~some $(k,j)$-model. \end{lemma} \begin{proof} It is a property of the canonical model that every \( {\textsf{K45}}_{n} \)-consistent basic formula is satisfiable \emph{wrt.}~the canonical model. Supposing that the formula has an \( a,b \)-depth of \( k,j \), then from Thm \ref{thm:basicformulas_canon_iff_kjmodel}, we know there is at least the correspondence $(k,j)$-model that also satisfies the formula. \eprf \end{proof} \begin{theorem}\label{thm:soundness_onlmin} For all \( \alpha \in \mathcal{ONL}_n^- \), if \( {AX}_n \vdash \alpha \) then \( \models \alpha \). \end{theorem} \begin{proof} The soundness is easily shown to hold for \( \mathbf{A1}_n - \mathbf{A4}_n \). The soundness of \( \mathbf{A5}_n \) is shown by induction on the depth.
Suppose \( \alpha \) is a propositional formula, and say \( \neg \alpha \) is a consistent propositional formula (and hence \( {\textsf{K45}}_{n} \)-consistent). Then there is a world \( w^{*} \) such that \( \{\},\{\}, w^{*}\models \neg \alpha \). Given an \( e_a^k \), if \( \langle w^{*}, e_b^{k-1} \rangle \in e_a^k \) for some \( e_b^{k-1} \), then \( e_a^k, \{\}, w \models \neg \aknow \alpha \) for any world \( w \). If not, then \( e_a^k, \{\}, w\models \neg \mathitbf{N}_a \alpha \). Thus, \( e_a^k, \{\}, w\models \mathitbf{N}_a \alpha \supset \neg \aknow \alpha \). Wlog, assume the proof holds for \( a\)-objective formulas of max.~\( b \)-depth \( k-1 \). Suppose now \( \alpha \) is such a formula, and \( \neg \alpha \) is \( {\textsf{K45}}_{n} \)-consistent. By Lemma \ref{lem:every_kffn_consistent_is_sat}, there is \( \langle w^{*}, \estar_b^{k-1} \rangle \), such that \( \{\}, \estar_b^{k-1}, w^{*} \models \neg \alpha \). Again, if \( \langle w^{*}, \estar_b^{k-1} \rangle \in e_a^k \), then \( e_a^k, \{\}, w\models \neg \aknow \alpha \) and if not, then \( e_a^k, \{\}, w\models \neg \mathitbf{N}_a \alpha \). \eprf \end{proof} \noindent We proceed to completeness via the following definition and lemmas. \begin{definition} A formula \( \psi \) is said to be \emph{independent} of the formula \( \phi \) \emph{wrt.}~an axiom system \( AX \), if neither \( AX \vdash \phi \supset \psi \) nor \( AX \vdash \phi \supset \neg \psi \). \end{definition} \begin{lemma}[Halpern and Lakemeyer, 2001]\label{lem:existence_independent} If \( \phi_1, \ldots, \phi_m \) are \( {\textsf{K45}}_{n} \)-consistent basic \( i \)-objective formulas then there exists a basic \( i \)-objective formula \( \psi \) of the form \( \mathitbf{L}_j \psi' \) (\( j \neq i \)) that is independent of \( \phi_1, \ldots, \phi_m \) \emph{wrt.}~\( {\textsf{K45}}_{n} \).
\end{lemma} \begin{lemma}\label{lem:depth_of_independent_formula} In the lemma above, if \( \phi_i \) are \( i \)-objective and of maximal \( j \)-depth \( k \) for \( j\neq i \), then there is a \( \psi \) of \( j \)-depth \( 2k+2 \). \end{lemma} \begin{lemma}[Halpern and Lakemeyer, 2001]\label{lem:phi_or_psi_is_valid_basic} If \( \phi \) and \( \psi \) are \( i \)-objective basic formulas, and if \( \mathitbf{L}_i \phi \land \mathitbf{N}_i \psi \) is \( {AX}_n \)-consistent, then \( \phi \lor \psi \) is valid. \end{lemma} \begin{lemma}[Halpern and Lakemeyer, 2001] Every formula \( \alpha \in \mathcal{ONL}_n \) is provably equivalent to one in the normal form \emph{(}written below for \( n = \{a,b\} \)\emph{):} \\[1ex] \noindent \( \bigvee(\sigma \land \aknow \varphi_{a0} \land \neg \aknow \varphi_{a1} \ldots \land \neg \aknow \varphi_{a{m_1}} \land \bknow \varphi_{b0} \ldots \land \neg \bknow \varphi_{b{m_2}} \land \mathitbf{N}_a \psi_{a0} \ldots \land \neg \mathitbf{N}_a \psi_{a{n_1}} \land \mathitbf{N}_b \psi_{b0} \ldots \land \neg \mathitbf{N}_b \psi_{bn_2} ) \) \\[1ex] \noindent where \( \sigma \) is a propositional formula, and \( \varphi_{im} \) and \( \psi_{in} \) are \( i \)-objective. If \( \alpha \in \mathcal{ONL}_n^- \), \( \varphi_{im} \) and \( \psi_{in} \) are basic. \end{lemma} \begin{theorem}\label{thm:completeness_onlmin} For all formulas \( \alpha \in \mathcal{ONL}_n^- \), if \(\models \alpha \) then \( {AX}_n \vdash \alpha \). \end{theorem} \begin{proof} It is sufficient to prove that every \( {AX}_n \)-consistent formula \( \xi \) is satisfiable \emph{wrt.}~ some $(k,j)$-model. If \( \xi \) is basic, then by Lemma \ref{lem:every_kffn_consistent_is_sat}, the statement holds. 
If \( \xi \) is not basic, then wlog, it can be considered in the normal form: \\[1ex] \noindent \( \bigvee(\sigma \land \aknow \varphi_{a0} \land \neg \aknow \varphi_{a1} \ldots \land \neg \aknow \varphi_{a{m_1}} \land \bknow \varphi_{b0} \ldots \land \neg \bknow \varphi_{b{m_2}} \land \mathitbf{N}_a \psi_{a0} \ldots \land \neg \mathitbf{N}_a \psi_{a{n_1}} \land \mathitbf{N}_b \psi_{b0} \ldots \land \neg \mathitbf{N}_b \psi_{bn_2} ) \) \\[1ex] \noindent where \( \sigma \) is a propositional formula, and \( \varphi_{im} \) and \( \psi_{in} \) are \( i \)-objective and basic. Since \( \sigma \) is propositional and consistent, there is clearly a world \( w^{*} \) such that \( w^{*} \models \sigma \). We construct a \( k' \)-structure such that it satisfies all the \( a \)-subjective formulas in the normal form above. Following that, a \( j' \)-structure for all the \( b \)-subjective formulas is constructed identically. The resulting \( (k',j') \)-model (with \( w^{*} \)) satisfies \( \xi \). Let \( A \) be all \( {\textsf{K45}}_{n} \)-consistent formulas of the form \( \varphi_{a0} \land \psi_{a0} \land \neg \varphi_{aj} \) (for \( j \geq 1 \)) or the form \( \varphi_{a0} \land \psi_{a0} \land \neg \psi_{aj} \). Let \( \gamma \) be independent of all formulas in \( A \), as in Lemmas \ref{lem:existence_independent} and \ref{lem:depth_of_independent_formula}. Note that, while we take \( \xi \) itself to be of maximal \( a,b \)-depth of \( k,j \), the formulas \( \varphi_{a0}, \ldots \), being \( a \)-objective, are of maximal \( b \)-depth \( k-1 \), and hence \( \gamma \) is of \( b \)-depth \( 2k \) (Lemma \ref{lem:depth_of_independent_formula}). Given a consistent set of formulas, the standard Lindenbaum construction can be used to construct a maximally consistent set of formulas, all of a maximal \( b \)-depth \( k-1 \). That is, a formula is considered in the construction only if it has a maximal \( b \)-depth \( k-1 \).
Now, let \( S_a \) be a set of all maximally consistent sets of formulas, constructed by only considering formulas of maximal \( b \)-depth \( k-1 \), and containing \( \varphi_{a0} \land (\neg \psi_{a0} \lor (\psi_{a0} \land \gamma)) \). Since these consistent sets are basic and \( a \)-objective, they are satisfiable by Lemma~\ref{lem:every_kffn_consistent_is_sat}. Thus the sets \( S' \in S_a \) are satisfiable \emph{wrt.}~ \( 2k \)-structures \( \langle w, e_b^{2k} \rangle \). Let \( k' =2k+1 \). By constructing a \( k' \)-structure for Alice, say \( e_a^{k'} \), from each \( \langle w, e_b^{2k} \rangle \) for every \( S' \in S_a \), we have that \( \canaObj(e_a^{k'}) = S_a \). We shall show that all the \( a \)-subjective formulas in the normal form are satisfied \emph{wrt.}~ \( \langle {e_a^{k'}}, \{\}, w^{*} \rangle \). Since for all \( S' \in S_a \), we have \( \varphi_{a0} \in S' \), we get that \( {e_a^{k'}}, \{\}, w^{*} \models \aknow \varphi_{a0} \). Now, since \( \aknow \varphi_{a0} \land \neg \aknow \varphi_{aj} \) is consistent, it must be that \( \varphi_{a0} \land \neg \varphi_{aj} \) is consistent. For suppose not; then \( \neg \varphi_{a0} \lor \varphi_{aj} \) is provable and thus, we have \( \varphi_{a0} \supset \varphi_{aj} \). We then prove \( \aknow \varphi_{a0} \supset \aknow \varphi_{aj} \), and since we have \( \aknow \varphi_{a0} \) we prove \( \aknow \varphi_{aj} \), clearly inconsistent with \( \aknow \varphi_{a0} \land\neg \aknow \varphi_{aj} \). Now that \( \varphi_{a0} \land \neg \varphi_{aj} \) is consistent, we either have that \( \varphi_{a0} \land \neg \varphi_{aj} \land \psi_{a0} \) or \( \varphi_{a0} \land \neg \varphi_{aj} \land \neg \psi_{a0} \) is consistent. With the former, we also have that \( \varphi_{a0} \land \neg \varphi_{aj} \land \psi_{a0} \land \gamma \) is consistent.
There are maximally consistent sets that contain one of them, both of which contain \( \neg \varphi_{aj} \). This means that \( e_a^{k'}, \{\}, w^{*} \models \neg \aknow \varphi_{aj} \). Now, consider some \( k' \)-structure \( \langle {w^{\bullet}}, \ebullet^{2k}_b \rangle \not \in e_a^{k'} \). One of the following \( a \)-objective formulas must hold \emph{wrt.}~ this \( k' \)-structure: (a) \( \varphi_{a0} \land \psi_{a0} \), (b) \( \varphi_{a0} \land \neg \psi_{a0} \), (c) \( \neg \varphi_{a0} \land \psi_{a0} \) or (d) \( \neg \varphi_{a0} \land \neg \psi_{a0} \). It cannot be (d), since \( \aknow \varphi_{a0} \land \mathitbf{N}_a \psi_{a0} \) is consistent, and this implies that \( \varphi_{a0} \lor \psi_{a0} \) is valid (by Lemma \ref{lem:phi_or_psi_is_valid_basic}). It certainly cannot be (b), for it would be in some \( S' \in S_a \). This leaves us with options (c) and (a), both of which contain \( \psi_{a0} \). Since the \( k' \)-structure was arbitrary, we must have for all \( \langle w, e_b^{2k} \rangle \not\in e_a^{k'} \), \( \{\}, {e_b^{2k}}, w \models \psi_{a0} \). Thus, \( e_a^{k'}, \{\}, w^{*} \models \mathitbf{N}_a \psi_{a0} \). Finally, since \( \mathitbf{N}_a \psi_{a0} \land \neg \mathitbf{N}_a \psi_{aj} \) is consistent, it must be that \( \psi_{a0} \land \neg \psi_{aj} \) is consistent. Further, either \( \psi_{a0} \land \neg \psi_{aj} \land \varphi_{a0} \) or \( \psi_{a0} \land \neg \psi_{aj} \land \neg \varphi_{a0} \) is consistent. If the former, then \( \psi_{a0} \land \neg \psi_{aj} \land \varphi_{a0} \land \neg \gamma \) is also consistent. Let \( \beta \) be that which is consistent. Note that \( \neg \beta \land (\varphi_{a0} \land (\neg \psi_{a0} \lor (\psi_{a0} \land \gamma))) \) is consistent, and hence part of all \( S' \in S_a \). This means that \( e_a^{k'}, \{\}, w^{*} \models \aknow(\neg \beta) \).
But since \( \beta \) itself is consistent, there is a \( k' \)-structure such that \( \{\}, e_{(b,2k)}^\bullet, {w^{\bullet}} \models \beta \). And this \( k' \)-structure cannot be in \( e_a^{k'} \). This means that \( e_a^{k'}, \{\}, w^{*} \models \neg \mathitbf{N}_a \psi_{aj} \). Thus, all the \( a \)-subjective formulas in the normal form above are satisfiable \emph{wrt.}~ \( e_a^{k'} \). \eprf \end{proof} \noindent Now, observe that, although \( \aknow \bot \supset \neg \mathitbf{N}_a \neg \mathitbf{O}_{b} \neg \aoknow p \)~(\( \zeta \)) from Theorem \ref{lem:iset_validity} is valid, it is not derivable from \( {AX}_n \). In fact, the soundness result is easily extended to the full language \( \mathcal{ONL}_n \). Then, the proof theory cannot be complete for the full language since there is \( \zeta \in \mathcal{ONL}_n \) such that \( \not\vdash \zeta \) and \( \models \zeta \). Similarly, the validity of the non-provable formulas \( \neg \aoknow \neg \mathitbf{O}_{b} p \) and \( \zeta \) \emph{wrt.}~the canonical model and the \( i \)-set approach respectively shows that although \( {AX}_n \) is also sound for the full language in these approaches, it cannot be complete. Mainly, axiom \( \mathbf{A5}_n \) has to somehow go beyond basic formulas. As Halpern and Lakemeyer (\citeyear{1029713}) discuss, the problem is one of circularity. We would like the axiom to hold for any \( \alpha \) such that it is a consistent \( i \)-objective formula, but to deal with consistency we have to clarify what the axiom system looks like. The approach taken by Halpern and Lakemeyer is to introduce \emph{validity} (and its dual, satisfiability) directly into the language. Formulas in the new language, \( \onl^+ \), are shown to be provably equivalent to formulas in \( \mathcal{ONL}_n \).
Some new axioms involving validity and satisfiability are added to the axiom system, and the resultant proof theory \( \axioms^+ \) is shown to be sound and complete for formulas in \( \onl^+ \), \emph{wrt.}~an \emph{extended} canonical model. (An extended canonical model follows the spirit of the canonical model construction, but considers maximally \( \axioms^+ \)-consistent sets, and treats \( \mathitbf{L}_i \) and \( \mathitbf{N}_i \) as two independent modal operators.) So, one approach is to show that for formulas in the extended language the set of valid formulas coincides in the extended canonical model and \( k \)-structures. But then, as we argued, axiomatizing validity is not natural. Also, the proof theory is difficult to use. And in the end, we would still understand the axioms to characterize a semantics grounded in proof-theoretic elements. Again, what is desired is a generalization of Levesque's axiom \( \mathbf{A5} \), and nothing more. To this end, we propose a new axiom system that is subtly related to the structure of formulas, much as the parameters \( k \) and \( j \) are. The axiom system has additional \( t \)-axioms, and corresponds to a sequence of languages \( \onl^t \).\footnote{The idea was also suggested by a reviewer in \cite{1029713} for an axiomatic characterization of the extended canonical model, although its completeness was left open.} \begin{definition} Let \( \mathcal{ONL}_n^1 = \mathcal{ONL}_n^- \). Let \( \onl^{t+1} \) be all Boolean combinations of formulas of \( \onl^t \) and formulas of the form \( \mathitbf{L}_i \alpha \) and \( \mathitbf{N}_i \alpha \) for \( \alpha \in \onl^t \). \end{definition} \noindent It is not hard to see that \( \onl^{t+1} \supseteq \onl^t \). Note that \( t \) here does not correspond to the depth of formulas. Indeed, a formula of the form \( (\bknow\aknow)^{k+1}p \) is already in \( \mathcal{ONL}_n^- \).
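To illustrate the hierarchy (our own example; \( \mathitbf{O}_{b} \theta \), for \( \theta \in \Phi \), abbreviates \( \bknow \theta \land \mathitbf{N}_b \neg \theta \)):
\[
\neg \mathitbf{O}_{b} \theta \in \mathcal{ONL}_n^1, \qquad \mathitbf{N}_a \neg \mathitbf{O}_{b} \theta \in \mathcal{ONL}_n^2,
\]
the latter because \( \neg \mathitbf{O}_{b} \theta \in \mathcal{ONL}_n^1 \), and any formula of the form \( \mathitbf{N}_i \alpha \) with \( \alpha \in \mathcal{ONL}_n^t \) belongs to \( \mathcal{ONL}_n^{t+1} \) by the definition above.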
Let \( \axioms^{t+1} \) be an axiom system consisting of \( \mathbf{A1}_n-\mathbf{A4}_n \), \( \mathbf{MP} \), \( \mathbf{NEC} \) and \( \mathbf{A5}_n^1-\mathbf{A5}_n^{t+1} \) defined inductively as: \begin{itemize} \item[] \( \mathbf{A5}_n^1. ~\mathitbf{N}_i \alpha \supset \neg \mathitbf{L}_i \alpha \), if \( \neg \alpha \) is a \( {\textsf{K45}}_{n} \)-consistent \\ \qquad \mbox{}\qquad \mbox{}\( i \)-objective basic formula. \item[] \( \mathbf{A5}_n^{t+1}. ~\mathitbf{N}_i \alpha \supset \neg \mathitbf{L}_i \alpha \), if \( \neg \alpha \in \onl^t \), is \( i \)-objective,\\ \qquad \mbox{}\qquad \mbox{} and consistent \emph{wrt.}~\( \mathbf{A1}_n-\mathbf{A4}_n \), \( \mathbf{A5}_n^1-\mathbf{A5}_n^t \). \end{itemize} \begin{theorem}\label{thm:soundness_full_lang} For all \( \alpha \in \onl^t \), if \( \axioms^t \vdash \alpha \) then \( \models \alpha \). \end{theorem} \begin{proof} We proceed by induction on \( t \). The case of \( {AX}_n^1 \) is identical to Theorem \ref{thm:soundness_onlmin}. So, for the induction hypothesis, let us assume that \emph{wrt.}~ \( \axioms^t \), if \( \axioms^t \vdash \beta \) for \( \beta \in \onl^t \) then \( \models\beta \). Now, suppose that \( \neg \alpha \) is consistent \emph{wrt.}~ \( \axioms^t \) and is \( a \)-objective. This implies that \( \not \models \alpha \). Thus, there is some \( k \)-structure \( \langle w^{*}, \estar_b^k \rangle \) such that \( \{\}, \estar_b^k, w^{*} \models \neg \alpha \). If \( \langle w^{*}, \estar_b^k \rangle \in e_a^{k+1} \), then \( e_a^{k+1}, \{\}, w' \models \neg \aknow \alpha \); if not, then \( e_a^{k+1}, \{\}, w' \models \neg \mathitbf{N}_a \alpha \). Thus, \( e_a^{k+1}, \{\}, w' \models \mathitbf{N}_a \alpha \supset \neg \aknow \alpha \), demonstrating the soundness of \( \axioms^{t+1} \).
\eprf \end{proof} \noindent We establish completeness in a manner identical to Theorem \ref{thm:completeness_onlmin}, and thus it is necessary to ensure that Lemmas \ref{lem:existence_independent}, \ref{lem:depth_of_independent_formula} and \ref{lem:phi_or_psi_is_valid_basic} hold for non-basic formulas. \begin{lemma} If \( \phi_1, \ldots, \phi_m \) are \( \axioms^t \)-consistent \( i \)-objective formulas, then there is a basic formula \( \psi \), whose outermost operator is \( \mathitbf{L}_j \) (\( j \neq i \)), that is independent of \( \phi_1, \ldots, \phi_m \) \emph{wrt.}~\( \axioms^t \). \end{lemma} \begin{proof} Suppose that the \( \phi_i \) are \( a \)-objective and of maximal \( b \)-depth \( k \). A formula \( \psi \) of the form \( (\bknow \aknow)^{k+1}p \) (where \( p \in \Phi \) is in the scope of \( k+1 \) occurrences of \( \bknow \aknow \)) is shown to be independent of \( \phi_1, \ldots, \phi_m \). Suppose we can derive a \( \gamma \) of the form \( \bknow \aknow \bknow \aknow \ldots p \) of maximal depth \( k \); we show that neither \( \vdash \gamma \supset \psi \) nor \( \vdash \gamma \supset \neg \psi \). Given any formula, the only axioms in \( \axioms^t \) that can introduce \( \gamma \) in the scope of modal operators are \( \mathbf{A4}_n \) and \( \mathbf{A5}_n^{t} \). Applying \( \mathbf{A4}_n \) gives \(\bknow \gamma \) or \( \mathitbf{N}_b \gamma \), and then using the axiom again we have \( \bknow \bknow \gamma \) or \( \bknow \mathitbf{N}_b \gamma \). It is easy to see that the resulting formulas are independent of \( \psi \). Applying \( \mathbf{A5}_n^t \), on the other hand, allows us to derive \( \vdash \gamma \supset \mathitbf{N}_a \gamma \) or \( \vdash \gamma \supset \neg \aknow \gamma \) (\( \gamma \) is consistent \emph{wrt.~}\( \axioms^t \) and hence also \emph{wrt.~}\( \mathbf{A5}_n^{t-1} \)). Again, we could show \( \vdash \gamma \supset \neg \bknow \neg \aknow \gamma \).
Continuing this way, it might only be possible to derive \( \neg \bknow \neg \aknow \ldots \bknow \aknow \ldots p \) of depth \( 2k+2 \), which is indeed independent of \( \psi \). \eprf \end{proof} \begin{lemma}\label{lem:phi_or_psi_is_valid} If \( \phi \) and \( \psi \) are \( i \)-objective formulas, \( \phi, \psi \in \onl^t \) and \( \mathitbf{L}_i \phi \land \mathitbf{N}_i \psi \) is \( \axioms^{t+1} \)-consistent, then \( \models \phi \lor \psi \). \end{lemma} \begin{proof} Suppose not. Then \( \neg \phi \land \neg \psi \) is \( \axioms^t \)-consistent, and by \( \mathbf{A5}_n^{t+1} \) we prove \( \mathitbf{N}_i (\phi \lor \psi) \supset \neg \mathitbf{L}_i (\phi \lor \psi) \), and thus \( \mathitbf{N}_i \psi \supset \neg \mathitbf{L}_i \phi \), and this is not \( \axioms^{t+1} \)-consistent with \( \mathitbf{L}_i \phi \land \mathitbf{N}_i \psi \). \eprf \end{proof} \begin{theorem}\label{thm:completeness_onlk} For all \( \alpha \in \onl^t \), if \( \models \alpha \) then \( \axioms^t \vdash \alpha \). \end{theorem} \begin{proof} We proceed by induction on \( t \). It is sufficient to show that if a formula \( \alpha \in \onl^{t+1} \) is \( \axioms^{t+1} \)-consistent then it is satisfiable \emph{wrt.}~some model. We already have the proof for \( \mathcal{ONL}_n^1 \) (see Theorem \ref{thm:completeness_onlmin}). Let us assume the proof holds for all formulas in \( \onl^t \). In particular, this means that any formula that is \( \axioms^t \)-consistent is satisfiable \emph{wrt.}~some \( (k',j') \)-model. Let \( \alpha \in \onl^{t+1} \) (say of maximal \( a,b \)-depth \( k+1,j+1 \)), and suppose that \( \alpha \) is consistent \emph{wrt.}~\( \axioms^{t+1} \); we show that \( \alpha \) is satisfiable.
W.l.o.g., we take it in the normal form: \\[1ex] \noindent \( \bigvee(\sigma \land \aknow \varphi_{a0} \land \neg \aknow \varphi_{a1} \ldots \land \neg \aknow \varphi_{a{m_1}} \land \bknow \varphi_{b0} \ldots \land \neg \bknow \varphi_{b{m_2}} \land \mathitbf{N}_a \psi_{a0} \ldots \land \neg \mathitbf{N}_a \psi_{a{n_1}} \land \mathitbf{N}_b \psi_{b0} \ldots \land \neg \mathitbf{N}_b \psi_{bn_2} ). \) \\[1ex] \noindent Note that, by definition, it must be that all of \( \varphi_{im}, \psi_{in} \) are at most in \( \onl^t \) (i.e.~they may also be in \( \mathcal{ONL}_n^{t-1}, \ldots \)), and \( i \)-objective. We proceed as we did for Theorem \ref{thm:completeness_onlmin}, but without restricting to basic formulas. Let \( A \) be all \( \axioms^t \)-consistent formulas of the form \( \varphi_{a0} \land \psi_{a0} \land \neg \varphi_{aj} \) or \( \varphi_{a0} \land \psi_{a0} \land \neg \psi_{aj} \) (they are of maximal \( b \)-depth \( k \)). Let \( \gamma \) be independent of all formulas in \( A \). Let \( S_a \) be the set of all (\( \axioms^t \)-) maximally consistent sets of formulas, constructed from formulas of maximal \( b \)-depth \( k \), and containing \( \varphi_{a0} \land (\neg \psi_{a0} \lor (\psi_{a0} \land \gamma)) \); by the induction hypothesis they are satisfiable in some model. Note that all formulas in \( S_a \) are in \( \onl^t \). The \( b \)-depth is maximally \( 2k+2 \). Letting \( k''=2k+2 \), we have that for all \( S' \in S_a \), there is a \( \langle w, e_b^{k''} \rangle \) such that \( \{\}, e_b^{k''}, w \models S' \). Let \( k'=k''+1 \). Letting \( e_a^{k'} \) be all such \( k' \)-structures \( \langle w, e_b^{k''} \rangle \) for each \( S' \in S_a \) makes \( \mathit{Obj}^+_a(e_a^{k'}) = S_a \) (in contrast, for Theorem \ref{thm:completeness_onlmin} we dealt with \( \canaObj \)).
We claim that this \( k' \)-structure for Alice, a \( j' \)-structure for Bob constructed similarly, and a world where \( \sigma \) holds (there is such a world since \( \sigma \) is propositional and consistent) constitute a model where \( \alpha \) is satisfied. The proof proceeds as in Theorem \ref{thm:completeness_onlmin}. We show the case of \( \neg \aknow \varphi_{aj} \). Since \( \aknow \varphi_{a0} \land \neg \aknow \varphi_{aj} \) is consistent \emph{wrt.}~\( \axioms^{t+1} \), it must be that \( \varphi_{a0} \land \neg \varphi_{aj} \) is consistent \emph{wrt.}~\( \axioms^{t+1} \). Further, since \( \varphi_{a0}, \varphi_{aj} \in \onl^t \), they must be consistent \emph{wrt.}~\( \axioms^t \) (for if not, they cannot by definition be consistent \emph{wrt.}~\( \axioms^{t+1} \)). This means that either \( \varphi_{a0} \land \neg \varphi_{aj} \land \psi_{a0} \) or \( \varphi_{a0} \land \neg \varphi_{aj} \land \neg \psi_{a0} \) is consistent. If the former, then so is \( \varphi_{a0} \land \neg \varphi_{aj} \land \psi_{a0} \land \gamma \). Since \( S_a \) consists of all maximally \( \axioms^t \)-consistent sets containing \( \varphi_{a0} \land (\neg \psi_{a0} \lor (\psi_{a0} \land \gamma)) \), there is clearly an \( S' \in S_a \) such that \( \neg \varphi_{aj} \in S' \). Consequently, it cannot be that \( e_a^{k'}, \{\}, w' \models \aknow \varphi_{aj} \). Thus, \( e_a^{k'}, \{\}, w' \models \neg \aknow \varphi_{aj} \). \eprf \end{proof} \noindent Thus, we have a sound and complete axiomatization for the propositional fragment of \( \mathcal{ONL}_n \). In comparison to Lakemeyer (\citeyear{Lakemeyer1993}), the axiomatization goes beyond a language that restricts the nesting of \( \mathitbf{N}_i \). In contrast to Halpern and Lakemeyer (\citeyear{1029713}), the axiomatization does not necessitate the use of semantic notions in the proof theory.
A third axiomatization, by \cite{DBLP:conf/aiml/Waaler04,DBLP:conf/tark/WaalerS05}, proposes an interesting alternative to deal with the circularity in a generalized \( \mathbf{A5} \). The idea is to first define consistency by formulating a fragment of the axiom system in the sequent calculus. Quite analogous to having \( t \)-axioms, they allow us to apply \( \mathbf{A5}_n \) on \( i \)-objective formulas of a lower depth, thus avoiding circularity without the need to appeal to satisfiability as in \cite{1029713}. Waaler and Solhaug~(\citeyear{DBLP:conf/tark/WaalerS05}) also define a semantics for multi-agent only-knowing which does not appeal to canonical models. Instead, they define a class of Kripke structures which need to satisfy certain constraints. Unfortunately, these constraints are quite involved and, as the authors admit, the nature of these models ``is complex and hard to penetrate.'' To get a feel for the axiomatization, let us consider a well-studied example from \cite{1029713} to see where we differ. Suppose Alice assumes the following default: unless I know that Bob knows my secret, he does not know it. If the default is all that she knows, then she \emph{nonmonotonically} comes to believe that Bob does not know her secret. Let \( \gamma \) be a proposition that denotes Alice's secret, and we want to show that \( \vdash \aoknow(\delta) \supset \aknow \neg \bknow \gamma \), where \( \delta = \neg \aknow \bknow \gamma \supset \neg \bknow \gamma \). We write (Def.)~to mean \( \aoknow \alpha \equiv \aknow \alpha \land \mathitbf{N}_a \neg \alpha \), and we freely reason with propositional logic (PL) or \( {\textsf{K45}}_{n} \).
\begin{enumerate} \item \( \aoknow(\delta) \supset (\aknow \neg \aknow \bknow \gamma \supset \aknow \neg \bknow \gamma) \) \hfill Def.,PL,\( \mathbf{A2}_n \) \item \( \aoknow(\delta) \supset \mathitbf{N}_a \neg \aknow \bknow \gamma \land \mathitbf{N}_a \bknow \gamma \) \hfill Def.,PL,\( {\textsf{K45}}_{n} \) \item \( \mathitbf{N}_a \bknow \gamma \supset \neg \aknow \bknow \gamma \) \hfill \( \mathbf{A5}_n^1 \) \item \( \neg \aknow \bknow \gamma \supset \aknow \neg \aknow \bknow \gamma \) \hfill \( \mathbf{A4}_n \) \item \( \aoknow(\delta) \supset \aknow \neg \aknow \bknow \gamma \) \hfill 2,3,4,PL \item \( \aoknow(\delta) \supset \aknow \neg \bknow \gamma \) \hfill 1,5,PL \end{enumerate} \noindent We use \( \mathbf{A5}_n^1 \), and it is applicable because \( \neg \bknow \gamma \) is \( a \)-objective and \( {\textsf{K45}}_{n} \)-consistent. Now, suppose Alice is cautious. She changes her default to assume that if she does not believe Bob to only-know some set of facts \( \theta \in \Phi \), then \( \theta \) is not all that he knows. We would like to show \[ \vdash \aoknow(\neg \aknow \mathitbf{O}_{b} \theta \supset \neg \mathitbf{O}_{b} \theta) \supset \aknow \neg \mathitbf{O}_{b} \theta. \] Of course, this default is different from \( \delta \) in containing \( \mathitbf{O}_{b} \theta \) rather than \( \bknow \gamma \). The proof is identical, except that we use \( \mathbf{A5}_n^2 \), since \( \neg \mathitbf{O}_{b} \theta \in \mathcal{ONL}_n^1 \) is \( a \)-objective and \( {AX}_n^1 \)-consistent. The latter result requires reasoning with the \emph{satisfiability} modal operator in Halpern and Lakemeyer (\citeyear{1029713}), and is not provable with the axioms of Lakemeyer (\citeyear{Lakemeyer1993}).
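Spelling out this identical derivation (our own rendering of the steps above, with \( \delta' = \neg \aknow \mathitbf{O}_{b} \theta \supset \neg \mathitbf{O}_{b} \theta \)): \begin{enumerate} \item \( \aoknow(\delta') \supset (\aknow \neg \aknow \mathitbf{O}_{b} \theta \supset \aknow \neg \mathitbf{O}_{b} \theta) \) \hfill Def.,PL,\( \mathbf{A2}_n \) \item \( \aoknow(\delta') \supset \mathitbf{N}_a \neg \aknow \mathitbf{O}_{b} \theta \land \mathitbf{N}_a \mathitbf{O}_{b} \theta \) \hfill Def.,PL,\( {\textsf{K45}}_{n} \) \item \( \mathitbf{N}_a \mathitbf{O}_{b} \theta \supset \neg \aknow \mathitbf{O}_{b} \theta \) \hfill \( \mathbf{A5}_n^2 \) \item \( \neg \aknow \mathitbf{O}_{b} \theta \supset \aknow \neg \aknow \mathitbf{O}_{b} \theta \) \hfill \( \mathbf{A4}_n \) \item \( \aoknow(\delta') \supset \aknow \neg \aknow \mathitbf{O}_{b} \theta \) \hfill 2,3,4,PL \item \( \aoknow(\delta') \supset \aknow \neg \mathitbf{O}_{b} \theta \) \hfill 1,5,PL \end{enumerate}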
\section{Autoepistemic Logic} \label{sec:autoepistemic_logic} Having examined the properties of multi-agent only-knowing, in terms of a semantics for both the first-order and propositional case, and an axiomatization for the propositional case, in the current section we discuss how the semantics also captures autoepistemic logic (AEL). AEL, as originally developed by Moore (\citeyear{2781}), is intended to allow agents to draw conclusions by making observations of their own epistemic states. For instance, Alice concludes that she has no brother because if she did have one then she would have known about it, and she does not know about it \cite{2781}. Such beliefs are characterized using fixpoints called \emph{stable expansions}. In the single-agent case, Levesque (\citeyear{77758}) showed that the beliefs of an agent who only-knows \( \alpha \) are \emph{precisely} the stable expansion of \( \alpha \). Of course, the leverage with the former is that it is specified using regular entailments. In Lakemeyer (\citeyear{Lakemeyer1993}), and Halpern and Lakemeyer (\citeyear{1029713}), a many-agent generalization of AEL is considered in the sense of a stable expansion for every agent, and relating this to what the agent only-knows. But their generalizations are only for the propositional fragment, while Levesque's definitions involved first-order entailments. In contrast, we obtain the corresponding quantificational multi-agent generalization of AEL. We state the main theorems below. The proofs are omitted since they follow very closely from the ideas for the single agent case~\cite{levesque2001logic}. \begin{definition} Let \( A \) be a set of formulas. Then \( \Gamma \) is the \( i \)-stable expansion of \( A \) iff it is the set of first-order implications of \( A \cup \{\mathitbf{L}_i \beta \mid \beta \in \Gamma \} \cup \{\neg\mathitbf{L}_i\beta \mid \beta \not\in\Gamma \} \).
\end{definition} \begin{definition}[Maximal structure] If \( e_a^k \) is a \( k \)-structure, let \( \boldsymbol e_a^+ \) be a \( k \)-structure with the addition of all \( \langle w', e_b^{k-1} \rangle \not \in e_a^k \) such that for every \( \alpha \in \mathcal{ONL}_n^- \) of maximal \( a,b \)-depth \( k,k-1 \), if \( e_a^k, \{\}, w \models \aknow \alpha \) for any world \( w \) then \( e_a^k, e_b^{k-1}, w' \models \alpha \). Define \( \Gamma = \{ \beta \mid \beta ~\textrm{is basic and}~ \boldsymbol e_a^+, \{\}, w \models \aknow \beta \} \) as the belief set of \( \boldsymbol e_a^+ \). \end{definition} \begin{theorem}\label{thm:onlyknowing_is_stable} Let \( M = \langle \boldsymbol e_a^+, e_b^j, w \rangle \) be a model, where \( \boldsymbol e_a^+ \) is a maximal structure for \( a \). Let \( \Gamma \) be the belief set of \( \boldsymbol e_a^+ \), and suppose \( \alpha \in \mathcal{ONL}_n^- \) is of maximal \( a,b \)-depth \( k,k-1 \). Then, \( M \models \aoknow\alpha \) iff \( \Gamma \) is the \( a \)-stable expansion of \( \alpha \). \end{theorem} \noindent Theorem \ref{thm:onlyknowing_is_stable} essentially says that the complete set of basic beliefs at a \emph{maximal} epistemic state where \( \alpha \) is all that \( i \) knows precisely coincides with the \( i \)-stable expansion of \( \alpha \). \section{Axiomatizing Validity} \label{sec:axiomatizing_validity} Extending the work in \cite{Lakemeyer1993} and \cite{DBLP:conf/aaai/Halpern93}, which was restricted to formulas in \( \mathcal{ONL}_n^- \), Halpern and Lakemeyer~(\citeyear{1029713}) proposed a multi-agent only-knowing logic that handles the nesting of \( \mathitbf{N}_i \) operators. But as discussed, there are two undesirable features. The first is a semantics based on canonical models, and the second is a proof theory that axiomatizes validity.
Although such a construction is far from natural, we show in this section that it does indeed capture the desired properties of only-knowing. This also shows that our axiomatization avoids such problems in a reasonable manner. Recall that the language of \cite{1029713} is \( {\onl^+} \), which is \( \mathcal{ONL}_n \) plus a modal operator for validity, \( \textit{Val} \). A modal operator \( \textit{Sat} \), for satisfiability, is used freely, such that \( \textit{Val}(\alpha) \) is syntactically equivalent to \( \neg \textit{Sat}(\neg \alpha) \). To enable comparisons, we present a variant of our logic that has all its main features, but has additional notions to handle the extended language. We then show that this logic and \cite{1029713} agree on the set of valid sentences from \( {\onl^+} \) (and also \( \mathcal{ONL}_n \)). The main feature of \cite{1029713} is the proof theory \( {AX}_n' \), and a semantics that is sound and complete for \( {AX}_n' \) via the extended canonical model. \( {AX}_n' \) consists of \( \mathbf{A1}_n-\mathbf{A4}_n \), \( \mathbf{MP} \), \( \mathbf{NEC} \) and the following: \begin{itemize} \item[] \( \mathbf{A5}_n'. \) \( \textit{Sat}(\neg \alpha) \supset (\mathitbf{N}_i \alpha \supset \neg \mathitbf{L}_i \alpha) \), if \( \alpha \) is \( i \)-objective. \item[] \( \mathbf{V1}. \) \( \textit{Val} (\alpha) \land \textit{Val}(\alpha \supset \beta) \supset \textit{Val}(\beta) \). \item[] \( \mathbf{V2}. \) \( \textit{Sat}(p_1 \land \ldots \land p_n) \), if the \( p_i \) are literals and \( p_1 \land \ldots \land p_n \) is \\\qquad \mbox{}\qquad \mbox{} propositionally consistent. \item[] \( \mathbf{V3}.
\) \( \textit{Sat}(\alpha \land \beta_1) \land \ldots \land \textit{Sat}(\alpha \land \beta_k) \land \textit{Sat}(\gamma \land \delta_1) \ldots \land \\\qquad \mbox{}\qquad \mbox{} \textit{Sat}(\gamma \land \delta_m) \land \textit{Val}(\alpha \lor \gamma) \supset \textit{Sat}(\mathitbf{L}_i \alpha \land \neg \mathitbf{L}_i \neg \beta_1 \ldots \land \\\qquad \mbox{}\qquad \mbox{}\mathitbf{N}_i \gamma \land \neg \mathitbf{N}_i \neg \delta_1 \ldots) \), if \( \alpha, \beta_i, \gamma, \delta_i \) are \( i \)-objective. \item[] \( \mathbf{V4}. \) \( \textit{Sat}(\alpha) \land \textit{Sat}(\beta) \supset \textit{Sat}(\alpha \land \beta) \), if \( \alpha \) is \( i \)-objective \\\qquad \mbox{}\qquad \mbox{}and \( \beta \) is \( i \)-subjective. \item[] \( \mathbf{NEC}_\textit{Val}. \) From \( \alpha \) infer \( \textit{Val}(\alpha) \). \end{itemize} \noindent The essence of our new logic, in terms of a notion of depth (with \( |\textit{Val}(\alpha)|_i = |\alpha|_i \)) and a semantical account over possible worlds, is as before. The complete semantic definition for formulas in \( {\onl^+} \) of maximal \( a,b \)-depth \( k,j \) is: \begin{itemize} \item[1.--8.] as before, \item[9.] \( e_a^k, e_b^j, w \models \textit{Val}(\alpha) \) if \( \bar e_a^k, \bar e_b^j, \bar w \models \alpha \) for all \( \bar e_a^k, \bar e_b^j, \bar w \). \end{itemize} \noindent {Satisfiability} and {validity} (\( \models \)) are understood analogously.\footnote{Note that \( \textit{Val} \) corresponds precisely to how validity is defined.} Let \( {\onl^+}^1, \ldots, {\onl^+}^t \) also be defined analogously. Further, let axioms \( \mathbf{A1}_n-\mathbf{A5}_n^{t} \) be defined for \( {\onl^+}^{t} \). For instance, \( \mathbf{A5}_n^{t} \) is defined for any \( i \)-objective \( \neg \alpha \in {\onl^+}^{t-1} \) that is consistent with \( \mathbf{A1}_n-\mathbf{A5}_n^{t-1} \).
Then, the semantics above is characterized by the proof theory \( {{AX}_n^+}^t \) defined (inductively) for \( {\onl^+}^t \), consisting of \( {AX}_n^t \) (\( \mathbf{A1}_n-\mathbf{A5}_n^t \), \( \mathbf{MP} \), \( \mathbf{NEC} \)) with \( \mathbf{NEC}_\textit{Val} \) as an additional inference rule. \begin{lemma}\label{lem:completeness_onlpk} For all \( \alpha \in {\onl^+}^t \), \( {{AX}_n^+}^t \vdash \alpha \) iff \( \models \alpha \). \end{lemma} \noindent The proof of this lemma, and those of the following theorems, are given in the appendix. We proceed to show that \( \textit{Sat}(\alpha) \) is provable from \( {AX}_n' \) iff \( \alpha \) is \( {{AX}_n^+}^t \)-consistent. \begin{theorem}\label{thm:sat_means_consistent} For all \( \alpha \in {\onl^+}^t \), \( {AX}_n' \vdash \textit{Sat}(\alpha) \) iff \( \alpha \) is \( {{AX}_n^+}^t \)-consistent. \end{theorem} \noindent This allows us to show that \( {AX}_n' \) and \( {{AX}_n^+}^t \) agree on provable sentences. \begin{theorem}\label{thm:provable_sentences} For all \( \alpha \in {\onl^+}^t \), \( {AX}_n' \vdash \alpha \) iff \( {{AX}_n^+}^t \vdash \alpha \). \end{theorem} \begin{lemma}\label{cor:final} For all \( \alpha \in {\onl^+}^t \), \( \models \alpha \) iff \( \alpha \) is valid in~\cite{1029713}. \end{lemma} \begin{proof} \( {AX}_n' \) is sound and complete for \cite{1029713}, and \( {{AX}_n^+}^t \) is sound and complete for \( \models \). \eprf \end{proof} \noindent Since it can be shown that every \( \alpha \in {\onl^+} \) is provably equivalent to some \( \alpha' \in \mathcal{ONL}_n \) \cite{1029713}, we also obtain the following corollary. \begin{corollary}For all \( \alpha \in \mathcal{ONL}_n^t \), \( \models \alpha \) iff \( \alpha \) is valid in~\cite{1029713}. \end{corollary} \section{Conclusions} \label{sec:conclusions} This paper presents the following new results. We have a first-order modal logic for multi-agent only-knowing that we show, for the first time, generalizes Levesque's semantics.
Unlike all attempts so far, we make use neither of proof-theoretic notions of maximal consistency nor of Kripke structures \cite{DBLP:conf/tark/WaalerS05}. The benefit is that the semantic proofs are straightforward, and we understand possible worlds precisely as Levesque meant. We then analyzed a propositional subset, and showed first that the axiom system from Lakemeyer (\citeyear{Lakemeyer1993}) is sound and complete for a restricted language. We used this result to devise a new proof theory that does not require us to axiomatize any semantic notions \cite{1029713}. Our axiomatization was shown to be sound and complete for the semantics, and its use is straightforward on formulas involving the nesting of \emph{at most} operators. In the process, we revisited the features of only-knowing and compared the semantical framework to other approaches. Its behavior seems to coincide with our intuitions, and it also captures a multi-agent generalization of Moore's AEL. Finally, although the axiomatization of Halpern and Lakemeyer~(\citeyear{1029713}) is not natural, we showed that it essentially captures the desired properties of multi-agent only-knowing, but at considerable expense. \section{Acknowledgements} \label{sec:acknowledgements} The authors would like to thank the reviewers for helpful suggestions and comments. The first author is supported by a DFG scholarship from the graduate school GK 643. \bibliographystyle{jas99}
\section{Introduction} The magnetic observation of the earth with satellites has now matured to a point where continuous measurements of the field are available from 1999 onwards, thanks to the Oersted, SAC-C, and CHAMP missions \cite[e.g.][and references therein]{orsted2000,pomme}. In conjunction with ground-based measurements, such data have been used to produce a main field model of remarkable accuracy, in particular concerning the geomagnetic secular variation (GSV)~\citep{chaos}. Let us stress that we are concerned in this paper with recent changes in the earth's magnetic field, occurring over time scales on the order of decades to centuries. This time scale is nothing compared to the age of the earth's dynamo ($>3$ Gyr), or the average period at which the dynamo reverses its polarity (a few hundred kyr, see for instance \cite{mef96}), or even the magnetic diffusion time scale in earth's core, on the order of $10$ kyr \citep[e.g.][]{bpc96}. It is, however, over this minuscule time window that the magnetic field and its changes are by far best documented \citep[e.g.][]{1989bgj}. Downward-projecting the surface magnetic field at the core-mantle boundary, and applying the continuity of the normal component of the field across this boundary, one obtains a map of this particular component at the top of the core. The catalog of these maps at different epochs constitutes most of the data we have at hand to estimate the core state. Until now, this data has been exploited within a kinematic framework \citep{rs1965,backus1968}: the normal component of the magnetic field is a passive tracer, the variations of which are used to infer the velocity that transports it \citep[e.g.][]{lemouel1984,bloxham1989}.
For the purpose of modeling the core field and interpreting its temporal variations not only in terms of core kinematics, but more importantly in terms of core dynamics, it is crucial to make the best use of the new wealth of satellite data that will become available to the geomagnetic community, especially with the launch of the SWARM mission around 2010 \citep{swarmsim}. This best use can be achieved in the framework of data assimilation. In this respect, geomagnetists are facing challenges similar to the ones oceanographers were dealing with in the early nineteen-nineties, with the advent of operational satellite observation of the oceans. Inasmuch as oceanographers benefited from the pioneering work of their atmosphericist colleagues (data assimilation is routinely used to improve weather forecasts), geomagnetists must rely on the developments achieved by the oceanic and atmospheric communities to assemble the first bricks of geomagnetic data assimilation. Dynamically speaking, the earth's core is closer to the oceans than to the atmosphere. The similarity is limited, though, since the core is a conducting fluid whose dynamics are affected by the interaction of the velocity field with the magnetic field it sustains. These considerations, and their implications concerning the applicability of sophisticated ocean data assimilation strategies to the earth's core, will have to be addressed in the future. Today, geomagnetic data assimilation is still in its infancy (see below for a review of the efforts pursued in the past couple of years). We thus have to ask ourselves zeroth-order questions, such as: variational or sequential assimilation? In short, one might be naively tempted to say that variational data assimilation (VDA) is more versatile than sequential data assimilation (SDA), at the expense of a more involved implementation (for an enlightening introduction to the topic, see \cite{tal97}).
Through an appropriately defined misfit function, VDA can in principle answer any question of interest, provided that one resorts to the appropriate adjoint model. In this paper, we specifically address the issue of improving initial conditions to better explain a data record, and show how this can be achieved, working with a non-linear, one-dimensional magneto-hydrodynamic (MHD) model. SDA is more practical, specifically geared towards better forecasts of the model state, for example in numerical weather prediction \citep{tal97}. No adjoint model is needed here; the main difficulty lies in the computational burden of propagating the error covariance matrix needed to perform the so-called analysis, the operation by which past information is taken into account in order to better forecast future model states \citep[e.g.][]{brasseur06}. Promising efforts in applying SDA concepts and techniques to geomagnetism have recently been pursued: \cite{ltk07} have performed so-called Observing System Simulation Experiments (OSSEs) using a three-dimensional model of the geodynamo, to study in particular the response (as a function of depth) of the core to surface measurements of the normal component of the magnetic field, for different approximations of the above mentioned error covariance matrix. Also, in the context of a simplified one-dimensional MHD model, which retains part of the ingredients that make the complexity (and the beauty) of the geodynamo, \cite{stk07} have applied an optimal interpolation scheme that uses a Monte-Carlo method to calculate the same matrix, and studied the response of the system to assimilation for different temporal and spatial sampling frequencies. Both studies show a positive response of the system to SDA (i.e. better forecasts). 
In our opinion, though, SDA is strongly penalized by the formal impossibility of using current observations to improve past data records, even if this does not hamper its potential to produce good estimates of future core states. As said above, most of the information we have about the core is less than $500$ yr old \citep{gufm}. This record contains the signatures of the phenomena responsible for the short-term dynamics of the core, possibly hydromagnetic waves with periods of several tens of years \citep{fj2003}. Our goal is to explore the VDA route in order to see to what extent high-resolution satellite measurements of the earth's magnetic field can help improve the historical magnetic database, and identify more precisely physical phenomena responsible for short-term geomagnetic variations. To tackle this problem, we need a dynamical model of the high-frequency dynamics of the core, and an assimilation strategy. The aim of this paper is to present the latter, and illustrate it with a simplified one-dimensional nonlinear MHD model. Such a toy model, similar to the one used by \cite{stk07}, retains part of the physics at a negligible computational cost. It enables intensive testing of the assimilation algorithm. This paper is organized as follows: the methodology we shall pursue in applying variational data assimilation to the geomagnetic secular variation is presented in Sect.~\ref{sec:metho}; its implementation for the one-dimensional, nonlinear MHD toy model is described in detail in Sect.~\ref{sec:toy}. Various synthetic assimilation experiments are presented in Sect.~\ref{sec:sae}, the results of which are summarized and further discussed in Sect.~\ref{sec:dis}. \section{Methodology} \label{sec:metho} In this section, we outline the bases of variational geomagnetic data assimilation, with the mid-term intent of improving the quality of the past geomagnetic record using the high-resolution information recorded by satellites.
We resort to the unified set of notations proposed by \cite{icgl97}. What follows is essentially a transcription of the landmark paper by \cite{tc87} with these conventions, transcription to which we add the possibility of imposing constraints to the core state itself during the assimilation process. \subsection{Forward model} Assume we have a prognostic, nonlinear, numerical model $M$ which describes the dynamical evolution of the core state at any discrete time $t_i, i \in \{0, \dots, n\}$. If $\Delta t$ denotes the time-step size, the width of the time window considered here is $t_n-t_0=n \Delta t$, the initial (final) time being $t_0$ ($t_n$). In formal assimilation parlance, this is written as \begin{equation} \mathbf{x}_{i+1} = M_i [\mathbf{x}_{i}], \label{eq:mod} \end{equation} in which $\mathbf{x}$ is a column vector describing the model state. If $M$ relies for instance on the discretization of the equations governing secular variation with a grid-based approach, this vector contains the values of all the field variables at every grid point. The secular variation equations could involve terms with a known, explicit time dependence, hence the dependence of $M$ on time in Eq.~\eqref{eq:mod}. Within this framework, the modeled secular variation is entirely controlled by the initial state of the core, $\mathbf{x}_0$. \subsection{Observations} Assume now that we have knowledge of the true dynamical state of the core $\mathbf{x}_i^t$ through databases of observations $\mathbf{y}^o$ collected at discrete locations in space and time: \begin{equation} \mathbf{y}^o_i = H_i[\mathbf{x}_i^t] + \mitbf{\epsilon}_i, \end{equation} in which $H_i$ and $\mitbf{\epsilon}_i$ are the discrete observation operator and noise, respectively. For GSV, observations consist of (scalar or vector) measurements of the magnetic field, possibly supplemented by decadal timeseries of the length of day, since these are related to the angular momentum of the core \citep{jaultetal1988,blo98}. 
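The two relations above, $\mathbf{x}_{i+1} = M_i [\mathbf{x}_{i}]$ and $\mathbf{y}^o_i = H_i[\mathbf{x}_i^t] + \mitbf{\epsilon}_i$, can be sketched in a few lines of code; the toy dynamics, the random linear observation operator, and the noise level below are illustrative placeholders, not the MHD model introduced in Sect.~\ref{sec:toy}:

```python
import numpy as np

rng = np.random.default_rng(0)
s, n, p = 6, 4, 3                    # state dimension, time steps, obs per epoch

def M(x, i):
    """Placeholder prognostic model, x_{i+1} = M_i[x_i] (illustrative only)."""
    return x + 0.01 * np.sin(x)

H = rng.standard_normal((p, s))      # linear observation operator, p < s

x = rng.standard_normal(s)           # true initial state x_0^t
traj = [x]
for i in range(n):
    x = M(x, i)
    traj.append(x)

sigma = 0.1                          # observational noise level (assumed)
obs = [H @ xi + sigma * rng.standard_normal(p) for xi in traj]
```

Undersampling shows up here as $p < s$: each observation vector has fewer entries than the state it samples.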
The observation operator is assumed linear and time-dependent: in the context of geomagnetic data assimilation, we can safely anticipate that its dimension will increase dramatically when entering the recent satellite era (1999-present). However, $H$ will always produce vectors whose dimension is much lower than the dimension of the state itself: this fundamental problem of undersampling is at the heart of the development of data assimilation strategies. The observational error is time-dependent as well: it is assumed to have zero mean and we denote its covariance matrix at discrete time $t_i$ by $\mathbf{R}_i$. \subsection{Quadratic misfit functions} Variational assimilation aims here at improving the definition of the initial state of the core $\mathbf{x}_0$ to produce modeled observations as close as possible to the observations of the true state. The distance between observations and predictions is measured using a quadratic misfit function $J_H$ \begin{equation} J_H = \sum_{i=0}^{n} \left[ H_i \mathbf{x}_{i} - \mathbf{y}^o_i \right]^T \mathbf{R}_i^{-1} \left[ H_i \mathbf{x}_{i} - \mathbf{y}^o_i \right], \label{defjh} \end{equation} in which the superscript `$T$' means transpose. In addition to the distance between observations and predictions of the past record, we may also wish to apply further constraints on the core state that we seek, through the addition of an extra cost function $J_C$ \begin{equation} J_C = \sum_{i=0}^n \mathbf{x}_i^T {C} \mathbf{x}_i, \label{defjc} \end{equation} in which $C$ is a matrix describing the constraint that one would like $\mathbf{x}$ to satisfy. This constraint can originate from some a priori ideas about the physics of the true state of the system, and its implication on the state itself, should this physics not be properly accounted for by the model $M$, most likely because of its computational cost.
In the context of geomagnetic data assimilation, this a priori constraint can come for example from the assumption that fluid motions inside the rapidly rotating core are almost invariant along the direction of earth's rotation, according to Taylor--Proudman's theorem \cite[e.g.][]{gre90}. We shall provide the reader with an example for $C$ when applying these theoretical concepts to the 1D MHD model (see Sect.~\ref{sec:const}). Consequently, we write the total misfit function $J$ as \begin{equation} J = \frac{\alpha_H}{2} J_H + \frac{\alpha_C}{2} J_C, \label{defj} \end{equation} where $\alpha_H$ and $\alpha_C$ are the weights of the observational and constraint-based misfits, respectively. These two coefficients should be normalized; we will discuss the normalization in Sect.~\ref{sec:sae}. \subsection{Sensitivity to the initial conditions} To minimize $J$, we express its sensitivity to $\mathbf{x}_0$, namely $\mitbf{\nabla}_{\mathbf{x}_0}J$. With our conventions, $\mitbf{\nabla}_{\mathbf{x}_0}J$ is a row vector, since a change in $\mathbf{x}_0$, $\delta \mathbf{x}_0$, is responsible for a change in $J$, $\delta J$, given by \begin{equation} \delta J = \mitbf{\nabla}_{\mathbf{x}_{0}} J \cdot \delta \mathbf{x}_{0}. \end{equation} To compute this gradient, we first introduce the tangent linear operator which relates a change in $\mathbf{x}_{i+1}$ to a change in the core state at the preceding discrete time, $\mathbf{x}_{i}$: \begin{equation} \delta \mathbf{x}_{i+1} = M'_i \delta \mathbf{x}_{i}. \end{equation} The tangent linear operator $M'_i$ is obtained by linearizing the model $M_i$ about the state $\mathbf{x}_i$. 
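A standard sanity check for any tangent linear implementation is that the linearization remainder $\| M(\mathbf{x}+\epsilon\,\delta\mathbf{x}) - M(\mathbf{x}) - \epsilon M' \delta\mathbf{x} \|$ decreases quadratically with $\epsilon$. A minimal sketch with a hypothetical quadratic one-step model (not the MHD model of Sect.~\ref{sec:toy}) and its hand-derived tangent linear:

```python
import numpy as np

def M(x):
    """Hypothetical one-step model with a quadratic (advection-like) term."""
    return x - 0.1 * x * np.roll(x, 1)

def Mprime(x, dx):
    """Tangent linear of M about x, applied to the perturbation dx."""
    return dx - 0.1 * (dx * np.roll(x, 1) + x * np.roll(dx, 1))

rng = np.random.default_rng(1)
x, dx = rng.standard_normal(8), rng.standard_normal(8)

# linearization-remainder norms for two perturbation amplitudes
errs = [np.linalg.norm(M(x + e * dx) - M(x) - e * Mprime(x, dx))
        for e in (1e-2, 1e-3)]
ratio = errs[0] / errs[1]   # close to 100 for a second-order remainder
```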
Successive applications of the above relationship allow us to relate perturbations of the state vector $\mathbf{x}_{i} $ at a given model time $t_{i}$ to perturbations of the initial state $\mathbf{x}_{0}$: \begin{equation} \delta \mathbf{x}_{i} = \prod_{j=0}^{i-1} M'_j \delta \mathbf{x}_{0},\forall i \in\{1,\dots,n\} \label{chaina} \end{equation} The sensitivity of $J$ to any $\mathbf{x}_i$ expresses itself via \begin{equation} \delta J = \mitbf{\nabla}_{\mathbf{x}_{i}} J \cdot \delta \mathbf{x}_{i}, \label{sensi} \end{equation} that is \begin{equation} \delta J = \mitbf{\nabla}_{\mathbf{x}_{i}} J \cdot \prod_{j=0}^{i-1} M'_j \delta \mathbf{x}_{0},\ i \in\{1,\dots,n\}. \end{equation} Additionally, after differentiating Eq.~\eqref{defj} using Eqs.~\eqref{defjh} and \eqref{defjc}, we obtain $$ \mitbf{\nabla}_{\mathbf{x}_{i}} J = \alpha_H (H_i \mathbf{x}_i - \mathbf{y}^o_i)^T \mathbf{R}^{-1}_i H_i + \alpha_C \mathbf{x}_i^T {C} ,\ i \in\{0,\dots,n\}. $$ Gathering the observational and constraint contributions to $J$ originating from every state vector $\mathbf{x}_i$ finally yields \begin{eqnarray*} \delta J &=& \sum_{i=1}^n \left[ \alpha_H (H_i \mathbf{x}_i - \mathbf{y}^o_i)^T \mathbf{R}^{-1}_i H_i + \alpha_C \mathbf{x}_i^T {C} \right] \cdot \prod_{j=0}^{i-1} M'_j \delta \mathbf{x}_{0} \\ && + \left[ \alpha_H (H_0 \mathbf{x}_0 - \mathbf{y}^o_0)^T \mathbf{R}^{-1}_0 H_0 + \alpha_C \mathbf{x}_0^T {C} \right] \delta \mathbf{x}_{0} \\ &=& \left\{ \sum_{i=1}^n \left[ \alpha_H (H_i \mathbf{x}_i - \mathbf{y}^o_i)^T \mathbf{R}_i^{-1} H_i + \alpha_C \mathbf{x}_i^T {C} \right] \prod_{j=0}^{i-1} M'_j \right. 
\\ && + \alpha_H (H_0 \mathbf{x}_0 - \mathbf{y}^o_0)^T \mathbf{R}_0^{-1} H_0 + \alpha_C \mathbf{x}_0^T {C} \Bigg\} \delta \mathbf{x}_{0}, \end{eqnarray*} which implies in turn that \begin{eqnarray} \mitbf{\nabla}_{\mathbf{x}_0} J &=& \sum_{i=1}^n \left[ \alpha_H (H_i \mathbf{x}_i - \mathbf{y}^o_i)^T \mathbf{R}_i^{-1} H_i + \alpha_C \mathbf{x}_i^T {C} \right] \prod_{j=0}^{i-1} M'_j \nonumber\\ && + \alpha_H (H_0 \mathbf{x}_0 - \mathbf{y}^o_0)^T \mathbf{R}_0^{-1} H_0 + \alpha_C \mathbf{x}_0^T {C}. \label{gradrow} \end{eqnarray} \subsection{The adjoint model} The computation of $\mitbf{\nabla}_{\mathbf{x}_0} J $ via Eq.~\eqref{gradrow} is injected into an iterative method to adjust the initial state of the system to try and minimize $J$. The $(l+1)$-th step of this algorithm is given in general terms by \begin{equation} \mathbf{x}_0^{l+1} = \mathbf{x}_0^{l} - \rho^l \mathbf{d}^l, \end{equation} in which $\mathbf{d}$ is a descent direction, and $\rho^l$ an appropriately chosen scalar. In the case of the steepest descent algorithm, $\mathbf{d}^l = (\mitbf{\nabla}_{\mathbf{x}_0^l} J)^T $, and $\rho^l$ is an a priori set constant. The descent direction is a column vector, hence the need to take the transpose of $\mitbf{\nabla}_{\mathbf{x}_0^l}J$. In practice, the transpose of Eq.~\eqref{gradrow} yields, at the $l$-th step of the algorithm, \begin{eqnarray} \left[\mitbf{\nabla}_{\mathbf{x}_0^l} J\right]^T &=& \sum_{i=1}^n M'^T_{0} \cdots M'^T_{i-1} \left[\alpha_H H_i^T \mathbf{R}^{-1}_i(H_i \mathbf{x}_i^l - \mathbf{y}^o_i) +\alpha_C {C} \mathbf{x}_i^l \right] \nonumber \\ && +\alpha_H H_0^T \mathbf{R}^{-1}_0 (H_0 \mathbf{x}_0^l - \mathbf{y}^o_0) + \alpha_C {C} \mathbf{x}_0^l.
\end{eqnarray} Introducing the adjoint variable $\mathbf{a}$, the calculation of $(\mitbf{\nabla}_{\mathbf{x}_0^l} J)^T$ is therefore performed practically by integrating the so-called adjoint model \begin{equation} \mathbf{a}_{i-1}^l = {M'}^T_{i-1} \mathbf{a}_i^l + \alpha_H H^T_{i-1} \mathbf{R}^{-1}_{i-1}(H_{i-1} \mathbf{x}^l_{i-1} - \mathbf{y}^o_{i-1}) + \alpha_C {C} \mathbf{x}^l_{i-1}, \label{adjm} \end{equation} starting from $\mathbf{a}^l_{n+1}=\mitbf{0}$, and going backwards in order to finally estimate \begin{equation} (\mitbf{\nabla}_{\mathbf{x}_0^l} J)^T = \mathbf{a}^l_0. \end{equation} Equation \eqref{adjm} is at the heart of variational data assimilation \citep{tal97}. Some remarks and comments concerning this so-called adjoint equation are in order: \begin{enumerate} \item It requires implementing the transpose of the tangent linear operator, the so-called adjoint operator, ${M'}^T_{i}$. If the discretized forward model is cast in terms of matrix-matrix and/or matrix-vector products, then this implementation can be rather straightforward (see Sect.~\ref{sec:toy}). Still, for realistic applications, deriving the discrete adjoint equation can be rather convoluted \cite[e.g.][Chap. 4]{ben02}. \item The discrete adjoint equation (Eq.~\ref{adjm}) is based on the already discretized model of the secular variation. Such an approach is essentially motivated by practical reasons, assuming that we already have a numerical model of the geomagnetic secular variation at hand. We should mention here that a similar effort can be performed at the continuous level, before discretization. The misfit can be defined at this level; the calculus of its variations then gives rise to the Euler--Lagrange equations, one of which is the continuous backward, or adjoint, equation. One could then simply discretize this equation, using the same numerical approach as the one used for the forward model, and use this tool to adjust $\mathbf{x}_0$.
According to \cite{ben02}, though, the ``discrete adjoint equation'' is not the ``adjoint discrete equation'', the former breaking adjoint symmetry, which results in a solution being suboptimal \citep[][\S~4.1.6]{ben02}. \item Aside from the initial state $\mathbf{x}_0$, one can in principle add model parameters ($\mathbf{p}$, say) as adjustable variables, and invert jointly for $\mathbf{x}_0$ and $\mathbf{p}$, at the expense of expressing the discrete sensitivity of $J$ to $\mathbf{p}$ as well. For geomagnetic VDA, this versatility might be of interest, in order for instance to assess the importance of magnetic diffusion over the time window of the historical geomagnetic record. \item The whole sequence of core states $\mathbf{x}_{i}^l, i \in \{0,\dots,n \}$, has to be kept in memory. This memory requirement can become quite significant when considering dynamical models of the GSV. Besides, even if the computational cost of the adjoint model is by construction equivalent to the cost of the forward model, the variational assimilation algorithm presented here is at least one or two orders of magnitude more expensive than a single forward realization, because of the number of iterations needed to obtain a significant reduction of the misfit function. When tackling `real' problems in the future (as opposed to the illustrative problem of the next sections), memory and CPU time constraints might make it necessary to lower the resolution of the forward (and adjoint) models, by taking parameter values further away from the real core. A constraint such as the one imposed through Eq.~\eqref{defjc} can then appear as a way to ease the pain and not to sacrifice too much physics, at negligible extra computational cost. \end{enumerate} We give a practical illustration of these ideas and concepts in the next two sections.
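Before moving on, the backward recursion of Eq.~\eqref{adjm} and a steepest-descent update can already be exercised end to end on a deliberately simple stand-in model. In the sketch below we take $H_i = I$, $\mathbf{R}_i = I$, $\alpha_H = 1$, $\alpha_C = 0$, and a hypothetical quadratic dynamics (all simplifying assumptions, not the MHD model of the next section); the adjoint-computed gradient is checked against a finite difference of the misfit:

```python
import numpy as np

def M(x):
    """Stand-in nonlinear forward step (illustrative only)."""
    return x - 0.1 * x * np.roll(x, 1)

def MpT(x, a):
    """Transpose of the tangent linear of M about x, applied to a."""
    return a - 0.1 * (a * np.roll(x, 1) + np.roll(x * a, -1))

def misfit_and_grad(x0, y, n):
    """J = 1/2 sum_i |x_i - y_i|^2 and (grad_{x0} J)^T via the adjoint loop,
    i.e. Eq. (adjm) with H = I, R = I, alpha_H = 1, alpha_C = 0."""
    xs = [x0]
    for _ in range(n):
        xs.append(M(xs[-1]))
    J = 0.5 * sum(np.sum((xi - yi) ** 2) for xi, yi in zip(xs, y))
    a = xs[n] - y[n]                       # a_n, since a_{n+1} = 0
    for i in range(n, 0, -1):              # a_{i-1} = M'^T_{i-1} a_i + residual
        a = MpT(xs[i - 1], a) + (xs[i - 1] - y[i - 1])
    return J, a

rng = np.random.default_rng(2)
x0 = rng.standard_normal(6)
y = [rng.standard_normal(6) for _ in range(5)]   # observations at t_0,...,t_4
J, g = misfit_and_grad(x0, y, 4)

# finite-difference check of the adjoint gradient in a random direction
d, eps = rng.standard_normal(6), 1e-6
fd = (misfit_and_grad(x0 + eps * d, y, 4)[0] - J) / eps

# one steepest-descent step, x_0 <- x_0 - rho (grad J)^T, lowers the misfit
Jnew, _ = misfit_and_grad(x0 - 1e-3 * g, y, 4)
```

In an actual assimilation the descent direction would rather come from a conjugate-gradient update, as done in Sect.~\ref{sec:sae}.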
\section{Application to a one-dimensional nonlinear MHD model} \label{sec:toy} We consider a conducting fluid, whose state is fully characterized by two scalar fields, $u$ and $b$. Formally, $b$ represents the magnetic field (it can be observed), and $u$ is the velocity field (it is invisible). \subsection{The forward model} \subsubsection{Governing equations} The conducting fluid has density $\rho$, kinematic viscosity $\nu$, electrical conductivity $\sigma$, magnetic diffusivity $\eta$, and magnetic permeability $\mu$ ($\eta=1/\mu \sigma$). Its pseudo-velocity $u$ and pseudo-magnetic field $b$ are both scalar fields, defined over a domain of length $2L$, $[-L,L]$. We refer to pseudo fields here since these fields are not divergence-free. If they were so, they would have to be constant over the domain, which would considerably limit their interest from the assimilation standpoint. Bearing this remark in mind, we shall omit the `pseudo' adjective in the remainder of this study. We choose $L$ as the length scale, the magnetic diffusion time scale $L^2/\eta$ as the time scale, $B_0$ as the magnetic field scale, and $B_0/\sqrt{\rho \mu}$ as the velocity scale (i.e. the Alfv\'en wave speed). Accordingly, the evolution of $u$ and $b$ is controlled by the following set of non-dimensional equations: \begin{eqnarray} \forall (x,t) \in ]-1,1[ \times [0,T], \nonumber \\ \partial_t u + S \ u \partial_x u &=& S\ b \partial_x b + Pm\partial_x^2 u \label{mom}, \\ \partial_t b +S \ u \partial_x b &=& S \ b \partial_x u + \partial_x^2 b, \label{ind} \end{eqnarray} supplemented by the boundary and initial conditions \begin{eqnarray} u(x,t) &=& 0 \mbox{ if } x = \pm 1, \\ b(x,t) &=& \pm 1 \mbox{ if } x = \pm 1, \\ &+& \mbox{ given } u(\cdot,t=0), b(\cdot,t=0). \end{eqnarray} Eq.~\eqref{mom} is the momentum equation: the rate of change of the velocity is controlled by advection, magnetic forces and diffusion. 
Similarly, in the induction equation \eqref{ind}, the rate of change of the magnetic field results from the competition between inductive effects and ohmic diffusion. Two non-dimensional numbers define this system, $$ S = \sqrt{\mu/\rho} \sigma B_0 L, $$ which is the Lundquist number (ratio of the magnetic diffusion time scale to the Alfv\'en time scale), and $$ Pm = \nu/\eta, $$ which is the magnetic Prandtl number, a material property very small for liquid metals: $Pm~\sim~10^{-5}$ for earth's core \citep[e.g.][]{jpp88}. \subsubsection{Numerical model} \label{sec:num} Fields are discretized in space using one Legendre spectral element of order $N$. In such a framework, basis functions are the Lagrangian interpolants $h_i^N$ defined over the collection of $N+1$ Gauss--Lobatto--Legendre (GLL) points $\xi_i^N, i \in \{0,\dots,N\}$ \citep[for a comprehensive description of the spectral element method, see][]{dfm02}. Figure~\ref{fig:bf} shows such a basis function for $i=50$, $N=150$. Having basis functions defined everywhere over $[-1,1]$ makes it straightforward to define numerically the observation operator $H$ (see Sect. \ref{sec:obstrue}). We now drop the superscript $N$ for the sake of brevity. The semi-discretized velocity and magnetic fields are column vectors, denoted with bold fonts \begin{eqnarray} \mathbf{u}(t)&=&\left[u(\xi_0=-1,t),u(\xi_1,t),\dots,u(\xi_N=1,t) \right]^T, \\ \mathbf{b}(t)&=&\left[b(\xi_0=-1,t),b(\xi_1,t),\dots,b(\xi_N=1,t) \right]^T. \end{eqnarray} \begin{figure} \centerline{\includegraphics[width=.5\linewidth]{./npg-2007-0005-f01.pdf}} \caption{\label{fig:bf} An example of a basis function used to discretize the MHD model in space.
This particular Lagrangian interpolant, $ h_{50}^{150}$, is obtained for a polynomial order $N=150$, and it is attached to the 51st Gauss--Lobatto--Legendre point.} \end{figure} Discretization is performed in time with a semi-implicit finite-differencing scheme of order $1$, explicit for nonlinear terms, and implicit for diffusive terms. As in the previous section, assuming that $\Delta t$ is the time step size, we define $t_i = i \Delta t, \mathbf{u}_i = \mathbf{u}(t=t_i), \mathbf{b}_i = \mathbf{b}(t=t_i), i \in \{0,\dots,n\}.$ As a result of discretization in both space and time, the model is advanced in time by solving the following algebraic system \begin{equation} \ub{i+1} = \left[ \begin{array}{cc} \mathbf{\sf H}_u^{-1} & 0 \\ 0 & \mathbf{\sf H}_b^{-1} \\ \end{array} \right] \fub{i}, \end{equation} where \begin{eqnarray} \mathbf{\sf H}_u &=& \mathbf{\sf M}/\Delta t + Pm \mathbf{\sf K}, \\ \mathbf{\sf H}_b &=& \mathbf{\sf M}/\Delta t + \mathbf{\sf K}, \\ \mathbf{f}_{u,i} &=& \mathbf{\sf M} \left(\mathbf{u}_i/\Delta t -S \mathbf{u}_i \odot \mathbf{\sf D} \mathbf{u}_i +S \mathbf{b}_i \odot \mathbf{\sf D} \mathbf{b}_i \right ), \\ \mathbf{f}_{b,i} &=& \mathbf{\sf M} \left(\mathbf{b}_i /\Delta t -S \mathbf{u}_i \odot \mathbf{\sf D} \mathbf{b}_i +S \mathbf{b}_i \odot \mathbf{\sf D} \mathbf{u}_i \right), \end{eqnarray} are the Helmholtz operators acting on velocity field and the magnetic field, and the forcing terms for each of these two, respectively. 
We have introduced the following definitions: \begin{itemize} \item $\mathbf{\sf M}$, which is the diagonal mass matrix, \item $\mathbf{\sf K}$, which is the so-called stiffness matrix (it is symmetric positive definite), \item $\odot$, which denotes the Hadamard product: $(\mathbf{b}\odot \mathbf{u})_k = (\mathbf{u}\odot \mathbf{b})_k = b_k u_k$, \item and $\mathbf{\sf D}$, the so-called derivative matrix \begin{equation}\mathbf{\sf D}_{ij} = \frac{dh^{N}_{i}}{dx}|_{x=\xi_j}, \end{equation} \end{itemize} the knowledge of which is required to evaluate the nonlinear terms. Advancing in time requires inverting both Helmholtz operators, which we do directly, resorting to standard linear algebra routines \citep{laug}. Let us also bear in mind that the Helmholtz operators are symmetric (i.e. self-adjoint). In assimilation parlance, and according to the conventions introduced in the previous section, the state vector $\mathbf{x}$ is consequently equal to $[\mathbf{u},\mathbf{b}]^T$, and its dimension is $s=2(N-1)$ (since the values of both the velocity and magnetic fields are prescribed on the boundaries of the domain). \subsection{The true state} \label{true} Since we are dealing in this paper with synthetic observations, it is necessary to define the true state of the 1D system as the state obtained via the integration of the numerical model defined in the preceding paragraph, for a given set of initial conditions, and specific values of the Lundquist and magnetic Prandtl numbers, $S$ and $Pm$. The true state (denoted with the superscript `$t$') will always refer to the following initial conditions \begin{eqnarray} u^t(x,t=0) &=& \sin(\pi x) + (2/5) \sin (5 \pi x), \label{ut}\\ b^t(x,t=0) &=& \cos(\pi x) + 2 \sin [\pi(x+1)/4], \label{bt} \end{eqnarray} along with $S=1$ and $Pm=10^{-3}$. The model is integrated forward in time until $T=0.2$ (a fifth of a magnetic diffusion time).
The polynomial order used to compute the true state is $N=300$, and the time step size $\Delta t=2\ 10^{-3}$. Figure \ref{fig:true} shows the velocity (left) and magnetic field (right) at initial (black curves) and final (red curves) model times. \begin{figure*} \centerline{\includegraphics[width=.8\linewidth]{./npg-2007-0005-f02.pdf}} \caption{\label{fig:true} The true state used for synthetic variational assimilation experiments. Left: the first, $t=0$ (black) and last, $t=T$ (red) velocity fields. Right: the first, $t=0$ (black) and last, $t=T$ (red) magnetic fields.} \end{figure*} The low value of the magnetic Prandtl number $Pm$ reflects itself in the sharp velocity boundary layers that develop near the domain boundaries, while the magnetic field exhibits in contrast a smooth profile (the magnetic diffusivity being three orders of magnitude larger than the kinematic viscosity). To properly resolve these Hartmann boundary layers there must be enough points in the vicinity of the domain boundaries: we benefit here from the clustering of GLL points near the boundaries \citep{dfm02}. Besides, even if the magnetic profile is very smooth, one can nevertheless point out here and there kinks in the final profile. These kinks are associated with sharp velocity gradients (such as the one around $x=0.75$) and are a consequence of the nonlinear $b \partial_x u$ term in the induction Eq.~\eqref{ind}. \subsection{Observation of the true state} \label{sec:obstrue} In order to mimic the situation relevant for the earth's core and geomagnetic secular variation, assume that we have knowledge of $b$ at discrete locations in space and time, and that the velocity $u$ is not measurable. For the sake of generality, observations of $b$ are not necessarily made at collocation points, hence the need to define a spatial observation operator $H^{\mbox{spa}}_i$ (at discrete time $t_i$) consistent with the numerical approximation introduced above. 
If $n^{\mbox{S}}_i$ denotes the number of virtual magnetic stations at time $t_i$, and $\xi^o_{i,j}$ their locations ($j \in \{1,\dots, n^{\mbox{S}}_i \}$), $H^{\mbox{spa}}_i$ is a rectangular $n^{\mbox{S}}_i \times (N+1) $ matrix, whose coefficients read \begin{equation} H^{\mbox{spa}}_{i,jl} = h_l^N(\xi_{i,j}^o). \end{equation} A database of magnetic observations $\mathbf{y}^o_i=\mathbf{b}^o_i$ is therefore produced at discrete time $t_i$ via the matrix-vector product \begin{equation} \mathbf{b}^o_i = H^{\mbox{spa}}_i \mathbf{b}_i^t. \end{equation} Integration of the adjoint model also requires the knowledge of the transpose of the observation operator (Eq.~\ref{adjm}), the construction of which is straightforward according to the previous definition. To construct the set of synthetic observations, we take for simplicity the observational noise to be zero. During the assimilation process, we shall assume that estimation errors are uncorrelated, and that the level of confidence is the same for each virtual observatory. Consequently, \begin{equation} \mathbf{R}_i=\mathbf{\sf I}^o, \end{equation} in which $\mathbf{\sf I}^o$ is the $n^{\mbox{S}}_i \times n^{\mbox{S}}_i$ identity matrix, throughout the numerical experiments. As an aside, let us note that magnetic observations could equivalently consist of an (arbitrarily truncated) set of spectral coefficients, resulting from the expansion of the magnetic field on the basis of Legendre polynomials. Our use of stations is essentially motivated by the fact that our forward model is built in physical space. For real applications, a spectral approach is interesting since it can naturally account for the potential character of the field in a source-free region; however, it is less amenable to the spatial description of observation errors, if these do not vary smoothly.
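For concreteness, the forward sweep and the station-based observation of $b$ can be sketched with a heavily simplified stand-in for the solver above: centered differences on a uniform grid and explicit Euler time stepping replace the Legendre spectral element and the semi-implicit scheme of Sect.~\ref{sec:num}, and linear interpolation stands in for the Lagrangian interpolants $h_l^N$ (all simplifying assumptions made for brevity):

```python
import numpy as np

# parameters of Sect. 3 (S, Pm as in the true state); dt is chosen small
# enough for the explicit scheme used here to remain stable
N, S, Pm, dt = 64, 1.0, 1e-3, 1e-4
x = np.linspace(-1.0, 1.0, N + 1)
h = x[1] - x[0]

def step(u, b):
    """One explicit step of Eqs. (mom)-(ind), simplified discretization."""
    du, db = np.gradient(u, h), np.gradient(b, h)
    d2u, d2b = np.gradient(du, h), np.gradient(db, h)
    un = u + dt * (-S * u * du + S * b * db + Pm * d2u)
    bn = b + dt * (-S * u * db + S * b * du + d2b)
    un[0] = un[-1] = 0.0               # u(x = +-1) = 0
    bn[0], bn[-1] = -1.0, 1.0          # b(x = +-1) = -+1
    return un, bn

u = np.sin(np.pi * x) + 0.4 * np.sin(5 * np.pi * x)        # Eq. (ut)
b = np.cos(np.pi * x) + 2.0 * np.sin(np.pi * (x + 1) / 4)  # Eq. (bt)
for _ in range(100):
    u, b = step(u, b)

# station observations b^o = H^spa b at 20 virtual observatories
stations = np.linspace(-0.9, 0.9, 20)
b_obs = np.interp(stations, x, b)
```

Only $b$ is observed; $u$ enters the sketch solely through the dynamics, as in the assimilation experiments of Sect.~\ref{sec:sae}.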
\subsection{The adjoint model} \subsubsection{The tangent linear operator} As stated in the previous section, the tangent linear operator $M'_i$ at discrete time $t_i$ is obtained at the discrete level by linearizing the model about the current solution $(\mathbf{u}_i,\mathbf{b}_i)$. By perturbing these two fields \begin{eqnarray} \mathbf{u}_i \rightarrow \mathbf{u}_i + \delta \mathbf{u}_i,\\ \mathbf{b}_i \rightarrow \mathbf{b}_i + \delta \mathbf{b}_i, \end{eqnarray} we get (after some algebra) $$ \dub{i+1} = \left[\begin{array}{cc} \mathbf{\sf A}_i & \mathbf{\sf B}_i \\ \mathbf{\sf C}_i & \mathbf{\sf E}_i \end{array} \right]\dub{i} $$ having introduced the following $(N+1)^2$ matrices \begin{eqnarray} \mathbf{\sf A}_i &=& \mathbf{\sf H}_u^{-1} \mathbf{\sf M} \left( \mathbf{\sf I}/\Delta t -S \mathbf{\sf D} \mathbf{u}_i \odot - S \mathbf{u}_i \odot \mathbf{\sf D} \right), \\ \mathbf{\sf B}_i &=& \mathbf{\sf H}_u^{-1} \mathbf{\sf M} \left( S \mathbf{b}_i \odot \mathbf{\sf D} + S \mathbf{\sf D} \mathbf{b}_i \odot \right),\\ \mathbf{\sf C}_i &=& \mathbf{\sf H}_b^{-1} \mathbf{\sf M} \left( -S \mathbf{\sf D} \mathbf{b}_i \odot -S \mathbf{b}_i \odot \mathbf{\sf D} \right),\\ \mathbf{\sf E}_i &=& \mathbf{\sf H}_b^{-1} \mathbf{\sf M} \left( \mathbf{\sf I}/\Delta t -S \mathbf{u}_i \odot \mathbf{\sf D} + S \mathbf{\sf D} \mathbf{u}_i \odot \right). \end{eqnarray} Aside from the $(N+1)^2$ identity matrix $\mathbf{\sf I}$, matrices and notations appearing in these definitions have already been introduced in \S \ref{sec:num}. In connection with the general definition introduced in the previous section, $\delta \mathbf{x}_{i+1} = M'_i \delta \mathbf{x}_{i}$, $M'_i$ is the block matrix \begin{equation} M'_i = \left[\begin{array}{cc} \mathbf{\sf A}_i & \mathbf{\sf B}_i \\ \mathbf{\sf C}_i & \mathbf{\sf E}_i \end{array} \right].
\label{eq:tlo} \end{equation} \subsubsection{Implementation of the adjoint equation} The sensitivity of the model to its initial conditions is computed by applying the adjoint operator, $M_i'^T$, to the adjoint variables (see Eq.~\eqref{adjm}). According to Eq.~\eqref{eq:tlo}, one gets \begin{equation} M_i'^T = \left[\begin{array}{cc} \mathbf{\sf A}_i^T & \mathbf{\sf C}_i^T \\ \mathbf{\sf B}_i^T & \mathbf{\sf E}_i^T \end{array} \right], \label{eq:adjb} \end{equation} with each transpose given by \begin{eqnarray} \mathbf{\sf A}_i^T &=& \left( \mathbf{\sf I}/\Delta t -S \mathbf{u}_i \odot \mathbf{\sf D}^T - S \mathbf{\sf D}^T \mathbf{u}_i \odot \right) \mathbf{\sf M} \mathbf{\sf H}_u^{-1}, \\ \mathbf{\sf B}_i^T &=& \left( S \mathbf{\sf D}^T \mathbf{b}_i \odot + S \mathbf{b}_i \odot \mathbf{\sf D}^T \right)\mathbf{\sf M} \mathbf{\sf H}_u^{-1}, \\ \mathbf{\sf C}_i^T &=& \left( - S \mathbf{b}_i \odot \mathbf{\sf D}^T -S \mathbf{\sf D}^T \mathbf{b}_i \odot \right)\mathbf{\sf M} \mathbf{\sf H}_b^{-1}, \\ \mathbf{\sf E}_i^T &=& \left(\mathbf{\sf I}/\Delta t -S \mathbf{\sf D}^T \mathbf{u}_i \odot + S \mathbf{u}_i \odot \mathbf{\sf D}^T \right)\mathbf{\sf M} \mathbf{\sf H}_b^{-1}. \end{eqnarray} In writing the equation in this form, we have used the symmetry properties of the Helmholtz and mass matrices, and introduced the transpose of the derivative matrix, $\mathbf{\sf D}^T$. Programming the adjoint model is very similar to programming the forward model, provided that one has cast the latter in terms of matrix-matrix, matrix-vector, and Hadamard products. \section{Synthetic assimilation experiments} \label{sec:sae} Having all the numerical tools at hand, we start out by assuming that we have imperfect knowledge of the initial model state, through an initial guess $\mathbf{x}_0^g$, with the model parameters and resolution equal to the ones that helped us define the true state of \S \ref{true}.
We wish here to quantify how assimilation of observations can help improve the knowledge of the initial (and subsequent) states, with particular emphasis on the influence of spatial and temporal sampling. In the series of results reported in this section, the initial guess at model initial time is: \begin{eqnarray} u^g(x,t=0) &=& \sin(\pi x), \label{ug}\\ b^g(x,t=0) &=& \cos(\pi x) + 2 \sin [\pi(x+1)/4] + (1/2) \sin (2 \pi x). \label{bg} \end{eqnarray} With respect to the true state at the initial time, the first guess is missing the small-scale component of $u$, i.e. the second term on the right-hand side of Eq.~\eqref{ut}. In addition, our estimate of $b$ has an extra parasitic small-scale component (the third term on the right-hand side of Eq.~\eqref{bg}), a situation that could occur when dealing with the GSV, for which the importance of unmodeled small-scale features has been recently put forward given the accuracy of satellite data \citep{eh05}. Figure \ref{fig:gvst} shows the initial and final $u^g$ and $b^g$, along with $u^t$ and $b^t$ at the same epochs for comparison, and the difference between the two, multiplied by a factor of five. \begin{figure*} \centerline{\includegraphics[width=.8\linewidth]{./npg-2007-0005-f03.pdf}} \caption{\label{fig:gvst} Initial guesses used for the variational assimilation experiments, plotted against the corresponding true state variables. Also plotted is five times the difference between the two. a: velocity at time $0$. b: velocity at final time $T$. c: magnetic field at time $0$. d: magnetic field at final time $T$. In each panel, the true state is plotted with the black line, the guess with the green line, and the magnified difference with the blue line. } \end{figure*} Differences in $b$ are not pronounced. Over the time window considered here, the parasitic small-scale component has undergone considerable diffusion.
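As a consistency check, the initial misfit between guess and truth can be computed directly from the analytical expressions of Eqs.~\eqref{ut}, \eqref{bt}, \eqref{ug} and \eqref{bg}, independently of the solver. The sketch below uses a plain Riemann-sum quadrature in place of the model's Gauss--Lobatto--Legendre quadrature (an assumption made for simplicity) and recovers the relative errors quoted in the text:

```python
import numpy as np

x = np.linspace(-1.0, 1.0, 20001)
dx = x[1] - x[0]

def l2(f):
    """||f|| = sqrt(int_{-1}^{1} f^2 dx), simple Riemann-sum quadrature."""
    return np.sqrt(np.sum(f * f) * dx)

u_t = np.sin(np.pi * x) + 0.4 * np.sin(5 * np.pi * x)          # Eq. (ut)
b_t = np.cos(np.pi * x) + 2.0 * np.sin(np.pi * (x + 1) / 4)    # Eq. (bt)
u_g = np.sin(np.pi * x)                                        # Eq. (ug)
b_g = b_t + 0.5 * np.sin(2 * np.pi * x)                        # Eq. (bg)

e_u0 = l2(u_t - u_g) / l2(u_t)   # about 0.371, i.e. e^u_0 = 37.1%
e_b0 = l2(b_t - b_g) / l2(b_t)   # about 0.216, i.e. e^b_0 = 21.6%
```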
To quantify the differences between the true state and the guess, we resort to the $L_2$ norm $$\ltwo{f}= \sqrt{\int_{-1}^{+1} f^2 dx}, $$ and define the relative magnetic and fluid errors at time $t_i$ by \begin{eqnarray} e^b_i &=& \ltwo{b^t_i-b^f_i}/\ltwo{b^t_i},\\ e^u_i &=& \ltwo{u^t_i-u^f_i}/\ltwo{u^t_i}. \end{eqnarray} The initial guess given by Eqs.~\eqref{ug} and \eqref{bg} is characterized by the following errors: $e^b_0 = 21.6 \%, e^b_n=2.9 \%, e^u_0=37.1 \%$, and $e^u_n=37.1 \%$. \subsection{Improvement of the initial guess with no a priori constraint on the state} \subsubsection{Regular space and time sampling} Observations of $b^t$ are performed at $n^{\mbox{S}}$ virtual observatories which are equidistant in space, at a number of epochs $nt$ evenly distributed over the time interval. Assuming no a priori constraint on the state, we set $\alpha_C = 0$ in Def. \ref{defj}. The other constant is $\alpha_H = 1/(ntn^{\mbox{S}}).$ The minimization problem is tackled by means of a conjugate gradient algorithm, \`a la Polak--Ribi\`ere \citep{she94}. Iterations are stopped either when the misfit has decreased by $8$ orders of magnitude relative to its initial value, or when the iteration count exceeds 5,000. In most cases, it was the latter criterion that was met in our simulations. A typical minimization is characterized by a fast decrease in the misfit during the first few tens of iterations, followed by a slowly decreasing (almost flat) behaviour. Even if the solution keeps on getting better (i.e. closer to the synthetic reality) during this slow convergence period, practical considerations (having in mind the cost of future geomagnetic assimilations) prompted us to stop the minimization. \begin{figure*} \centerline{\includegraphics[width=.8\linewidth]{./npg-2007-0005-f04.pdf}} \caption{\label{fig:typ} Synthetic assimilation results. a): velocity at initial model time $t=0$. b): velocity at final time $t=T$. c): magnetic field at initial time $t=0$. d): magnetic field at final time $t=T$.
In each panel, the true field is plotted in black, the assimilated field (starting from the guess shown in Fig. \ref{fig:gvst}) in green, and the difference between the two, multiplied by a factor of 5, is shown in blue. The red triangles indicate the location of the $n^{\mbox{S}}$ virtual magnetic observatories ($n^{\mbox{S}}=20$ in this particular case).} \end{figure*} A typical example of a variational assimilation result is shown in Fig. \ref{fig:typ}. In this case, $n^{\mbox{S}}=20$ and $nt=20$. The recovery of the final magnetic field $b_n$ is excellent (see Fig. \ref{fig:typ}d), the relative $L_2$ error being $1.8\times 10^{-4}$. The benefit here is twofold: first, the network of observatories is dense enough to sample the field properly, and second, a measurement is made exactly at this discrete time instant, leaving no time for error fields to develop. When error fields do have time to develop, the recovered fields can be contaminated by small-scale features, that is features that have length scales smaller than the spatial sampling scale. We see this happening in Fig. \ref{fig:typ}c), in which the magnified difference between the recovered and true $b_0$, shown in blue, appears indeed quite spiky; $e^b_0$ has nevertheless decreased from an initial value of $21.6 \%$ (Fig. \ref{fig:gvst}c) down to $1.2 \%$. Results for velocity are shown in Figs. \ref{fig:typ}a and \ref{fig:typ}b. The recovered velocity is closer to the true state than the initial guess: this is the expected benefit from the nonlinear coupling between velocity and magnetic field in Eqs.~\eqref{mom}-\eqref{ind}. The indirect knowledge we have of $u$, through the observation of $b$, is sufficient to get better estimates of this field variable. At the end of the assimilation process, $e^u_0$ and $e^u_n$, which were approximately equal to $37 \%$ with the initial guess, have been brought down to $8.2$ and $4.7$ \%, respectively. The velocity at present time (Fig.
\ref{fig:typ}b) is remarkably close to the true velocity, save for the sharp structure of the left boundary layer, which is undersampled (see the distribution of red triangles). \begin{figure*} \centerline{\includegraphics[width=\linewidth]{./npg-2007-0005-f05.pdf}} \caption{\label{fig:dyne} Dynamical evolution of $L_2$ errors (logarithmic value) for the magnetic field (a) and the fluid velocity (b). Dashed lines: errors for initial guesses. Solid lines: errors after variational assimilation. Circles represent instants at which magnetic observations are made. In this particular case, $nt=20$ and $n^{\mbox{S}}=20$.} \end{figure*} We further document the dynamical evolution of $L_2$ errors by plotting on Fig.~\ref{fig:dyne} the temporal evolution of $e^b$ and $e^u$ for this particular configuration. Instants at which observations are made are represented by circles, and the temporal evolution of the guess errors is also shown for comparison. The guess for the initial magnetic field is characterized by a decrease of the error that occurs over $\approx 0.1$ diffusion time scale, that is roughly the time it takes for most of the parasitic small-scale error component to diffuse away, the error then being dominated at later epochs by advection errors, originating from errors in the velocity field. The recovered magnetic field (Fig.~\ref{fig:dyne}a, solid line) is in very good agreement with the true field as soon as measurements are available ($t\ge 1\%$ of a magnetic diffusion time, see the circles on Fig.~\ref{fig:dyne}a). Even though no measurements are available for the initial epoch, the initial field has also been corrected significantly, as discussed above. In the latter parts of the record, oscillations in the magnetic error field are present; they disappear if the minimization is pushed further (not shown). The unobserved velocity field does not exhibit such a drastic reduction in error as soon as observations are available (Fig.~\ref{fig:dyne}b, solid line).
Still, it is worth noticing that the velocity error is significantly smaller in the second part of the record, in connection with the physical observation that most of the parasitic small-scale component of the field has decayed away (see above): advection errors dominate in determining the time derivative of $b$ in Eq.~\eqref{ind}, leaving room for a better assessment of the value of $u$. For other cases (different $n^{\mbox{S}}$ and $nt$), we find a similar behaviour (not shown). We comment on the effects of an irregular time sampling on the above observations in section \ref{irregtime}. Having in mind what one gets in this particular $(nt,n^{\mbox{S}})$ configuration, we now summarize in Fig.~\ref{fig:sys} results obtained by varying systematically these $2$ parameters. After assimilation, the logarithmic value of the $L_2$ velocity and magnetic field errors, at the initial and final stages ($i=0$ and $i=n$), are plotted versus $nt$, using $n^{\mbox{S}}=5, 10, 20, 50,$ and $100$ virtual magnetic stations. As far as temporal sampling is concerned, $nt$ can be equal to $1$ (having one observation at present time only), $10$, $20$, $50$ or $100$. \begin{figure*} \centerline{\includegraphics[width=.8\linewidth]{./npg-2007-0005-f06.pdf}} \caption{\label{fig:sys} Systematic study of the response of the one-dimensional MHD system to variational assimilation. Shown are the logarithms of $L_2$ errors for the $t=0$ (a) and $t=T$ (b) magnetic field, and the $t=0$ (c) and $t=T$ (d) velocity field, versus the number of times observations are made over [0,T], $nt$, using spatial networks of varying density ($n^{\mbox{S}}=5,10,20,50$ and $100$).} \end{figure*} Inspection of Fig.~\ref{fig:sys} leads us to make the following comments: \begin{itemize} \item Regarding $b$ : \begin{itemize} \item 50 stations are enough to properly sample the magnetic field in space. 
In this case $nt=1$ is sufficient to properly determine $\mathbf{b}_n$, and no real improvement is made when increasing $nt$ (Fig.~\ref{fig:sys}b). During the iterative process, the value of the field is brought to its observed value at every station of the dense network, and this is it: no dynamical information is needed. \item When, on the other hand, spatial sampling is not good enough, information on the dynamics of $b$ helps improve partially its knowledge at present time. For instance, we get a factor of $5$ reduction in $e^b_n$ with $n^{\mbox{S}}=20$, going from $nt=1$ to $nt=10$ (Fig.~\ref{fig:sys}b, circles). The improvement then stabilizes upon increasing $nt$: spatial error dominates. \item This also applies for the initial magnetic field $\mathbf{b}_0$, see Fig.~\ref{fig:sys}a. As a matter of fact, having no dynamical information about $b$ ($nt=1$) precludes any improvement on $\mathbf{b}_0$, for any density of the spatial network. Improvement occurs for $nt>1$. If the spatial coverage is good enough ($n^{\mbox{S}}>50$), no plateau is reached, since the agreement between the assimilated and true fields keeps on getting better, as it should. \end{itemize} \item Regarding $u$ : \begin{itemize} \item The recovered $u$ is always sensitive to spatial resolution, even for $nt=1$ (Figs.~\ref{fig:sys}c and \ref{fig:sys}d). \item If $nt$ is increased, the error decreases and reaches a plateau which is again determined by spatial resolution. This holds for $e^u_0$ and $e^u_n$. For the reason stated above, $\mathbf{u}_n$ is better known than $\mathbf{u}_0$. The error is dominated in both cases by a poor description of the left boundary layer (see the blue curves in Figs.~\ref{fig:typ}a and \ref{fig:typ}b). The gradient associated with this layer is not sufficiently well constrained by magnetic observations (one reason being that the magnetic diffusivity is three times larger than the kinematic viscosity). 
Consequently, we can speculate that the error made in this specific region at the final time is retro-propagated and amplified going backwards in time, through the adjoint equation, resulting in $e^u_0>e^u_n$. \end{itemize} \end{itemize} \subsubsection{Irregular spatial sampling} We have also studied the effect of an irregular spatial sampling by performing a suite of simulations identical to the ones described above, save that we assumed that stations were located only in the left half of the domain (i.e. the $[-1,0]$ segment). The global conclusion is then the following: assimilation results in an improvement of estimates of $b$ and $u$ in the sampled region, whereas no benefit is visible in the unconstrained region. To illustrate this tendency (and keep a long story short), we only report in Fig.~\ref{fig:bias} the recovered $u$ and $b$ for $(n^{\mbox{S}},nt)= (10,20)$, which corresponds to the ``regular'' case depicted in Fig. \ref{fig:typ}, deprived of its $10$ stations located in $[0,1]$. \begin{figure*} \centerline{\includegraphics[width=.8\linewidth]{./npg-2007-0005-f07.pdf}} \caption{\label{fig:bias} Synthetic assimilation results obtained with an asymmetric network of virtual observatories (red triangles). Other model and assimilation parameters as in Fig.~\ref{fig:typ}. a): velocity at initial model time $t=0$. b): velocity at final model time $t=T$. c): magnetic field at $t=0$. d): magnetic field at $t=T$. In each panel, the true field is plotted in black, the assimilated field in green, and the difference between the two, multiplied by a factor of $5$, is shown in blue.} \end{figure*} The lack of transmission of information from the left-hand side of the domain to its right-hand side is related to the short duration of model integration ($0.2$ magnetic diffusion time, which corresponds to $0.2$ advection time with our choice of $S=1$).
We shall comment further on the relevance of this remark for the assimilation of the historical geomagnetic secular variation in the discussion. The lack of observational constraint on the right-hand side of the domain sometimes results in final errors larger than the initial ones (compare in particular Figs.~\ref{fig:bias}a and \ref{fig:bias}b, with Figs.~\ref{fig:typ}a and \ref{fig:typ}b). We also note large error oscillations located at the interface between the left (sampled) and right (not sampled) regions, particularly at initial model time (Figs.~\ref{fig:bias}a and \ref{fig:bias}c). The contrast in spatial coverage is likely to be the cause of these oscillations (for which we do not have a formal explanation); this type of behaviour should be kept in mind for future geomagnetic applications. \subsubsection{Irregular time sampling} \label{irregtime} We can also assume that the temporal sampling rate is not constant (keeping the spatial network of observatories homogeneous), for instance by drastically restricting the epochs at which observations are made to the last $10$ \% of model integration time, the sampling rate being ideal over that window (that is, performing observations at each model step). Not surprisingly, we are penalized by our total ignorance of the remaining $90$ per cent of the record. We illustrate the results obtained after assimilation with our now well-known array of $n^{\mbox{S}}=20$ stations by plotting the evolution of the errors in $b$ and $u$ (as defined above) versus time in Fig.~\ref{fig:dyne_irreg}.
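The error curves shown in these figures are the relative $L_2$ errors $e^b_i$ and $e^u_i$ defined earlier. As a minimal sketch of their evaluation (using a trapezoidal rule in place of the Gauss--Lobatto--Legendre quadrature weights of the spectral code; illustration only):

```python
import numpy as np

def l2_norm(f, x):
    """Trapezoidal approximation of ||f|| = sqrt( int_{-1}^{+1} f^2 dx )."""
    g = f**2
    return np.sqrt(np.sum(0.5 * np.diff(x) * (g[:-1] + g[1:])))

def relative_error(f_true, f_assim, x):
    """e_i = ||f^t - f^f|| / ||f^t|| at a given epoch, cf. the definitions above."""
    return l2_norm(f_true - f_assim, x) / l2_norm(f_true, x)
```

Evaluating these quantities at every model step produces curves such as those of Fig.~\ref{fig:dyne} and Fig.~\ref{fig:dyne_irreg}.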
\begin{figure*} \centerline{\includegraphics[width=\linewidth]{./npg-2007-0005-f08.pdf}} \caption{\label{fig:dyne_irreg} Same as Fig.~\ref{fig:dyne}, save that the $nt=20$ epochs at which measurements are made are concentrated over the last $10\%$ of model integration time.} \end{figure*} Although the same amount of information ($n^{\mbox{S}} nt = 400$) has been collected to produce Figs.~\ref{fig:dyne} and \ref{fig:dyne_irreg}, the uneven temporal sampling of the latter has dramatic consequences on the improvement of the estimate of $b$. In particular, the initial error $e^b_0$ remains large. The error then decreases linearly with time until the first measurement is made. We also observe that the minimum $e^b$ is obtained in the middle of the observation era. The poor quality of the temporal sampling, coupled with the insufficient spatial resolution obtained with these $20$ stations, does not allow us to reach error levels as small as the ones obtained in Fig. \ref{fig:dyne}, even at epochs during which observations are made. The velocity is less sensitive to this effect, with velocity errors being roughly $2$ times larger in Fig.~\ref{fig:dyne_irreg} than in Fig.~\ref{fig:dyne}. \subsection{Imposing an a priori constraint on the state} \label{sec:const} As stated in Sect.~\ref{sec:metho}, future applications of variational data assimilation to the geomagnetic secular variation might require one to try and impose a priori constraints on the core state. In a kinematic framework, this is currently done in order to restrict the extent of the null space when trying to invert for the core flow responsible for the GSV \citep{backus1968,lemouel1984}. Assume for instance that we want to try and minimize the gradients of the velocity and magnetic fields, in a proportion given by the ratio of their diffusivities, that is the magnetic Prandtl number $Pm$, at any model time.
The associated cost function is written \begin{equation} J_C = \sum_{i=0}^{n} \left[\mathbf{b}_i^T \mathbf{\sf D}^T \mathbf{\sf D} \mathbf{b}_i +Pm \left(\mathbf{u}_i^T \mathbf{\sf D}^T \mathbf{\sf D} \mathbf{u}_i\right)\right], \label{defconst} \end{equation} in which $\mathbf{\sf D}$ is the derivative matrix introduced in \S\ref{sec:num}. The total misfit reads, according to Eq.~\eqref{defj}, $$ J = \alpha_H J_H + \alpha_C J_C, $$ with $\alpha_H=1/(ntn^{\mbox{S}})$ as before, and $\alpha_C = \beta / [n (N-1)]$, in which $\beta$ is the parameter that controls the constraint-to-observation weight ratio. \begin{figure} \centerline{\includegraphics[width=\linewidth]{./npg-2007-0005-f09.pdf}} \caption{\label{fig:const} Influence of an a priori imposed constraint (in this case aiming at reducing the gradients in the model state) on the results of variational assimilation. Shown are the difference fields (arbitrary scales) between the assimilated and true states, for the velocity field (left panel) and the magnetic field (right panel), at initial model time. Again, as in Fig.~\ref{fig:typ}, we have made $nt=20$ measurements at $n^{\mbox{S}}=20$ evenly distributed stations. $\beta$ measures the relative ratio of the constraint to the observations. Indicated for reference are the $L_2$ errors corresponding to each configuration. The grey line is the zero line.} \end{figure} The response of the assimilated model to the imposed constraint is illustrated in Fig.~\ref{fig:const}, using the $(nt=20,n^{\mbox{S}}=20)$ reference case of Fig.~\ref{fig:typ}, for three increasing values of the $\beta$ parameter: $10^{-1},1,$ and $10^1$, also showing for reference what happens when $\beta=0$. We show the error fields (the scale is arbitrary, but the same for all curves) at the initial model time, for velocity (left panel) and magnetic field (right panel).
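The penalty $J_C$ is inexpensive to evaluate once the derivative matrix is available. A sketch (assuming the trajectories are stored as arrays whose rows are the snapshots $\mathbf{u}_i$ and $\mathbf{b}_i$; illustration only, not the authors' code):

```python
import numpy as np

def smoothing_penalty(U, B, D, Pm):
    """J_C of Eq. (defconst): sum_i [ b_i^T D^T D b_i + Pm u_i^T D^T D u_i ].

    U, B : arrays of shape (n+1, N+1), one model snapshot per row;
    D    : derivative matrix; Pm : magnetic Prandtl number.
    """
    dU = U @ D.T   # row i holds D u_i
    dB = B @ D.T   # row i holds D b_i
    return np.sum(dB * dB) + Pm * np.sum(dU * dU)
```

This value is then weighted by $\alpha_C = \beta/[n(N-1)]$ before being added to the observational misfit.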
The $L_2$ errors for each field at the end of assimilation indicate that this particular constraint can result in a marginally better estimate of the initial state of the model, provided that the value of the parameter $\beta$ is kept small. For $\beta=10^{-1}$, the initial magnetic field is much smoother than the one obtained without the constraint and makes more physical sense (Fig.~\ref{fig:const}d). The associated velocity field remains spiky, with peak-to-peak error amplitudes strongly reduced in the heart of the computational domain (Fig.~\ref{fig:const}c). This results in smaller errors (reduction of about $20 \%$ for $b_0$ and $10 \%$ for $u_0$). Further increasing the value of $\beta$ leads to a magnetic field that is too smooth (and an error field even dominated by large-scale oscillations, see Fig.~\ref{fig:const}h), simply because too much weight has been put on the large-scale components of $b$. The velocity error is now also smooth (Fig.~\ref{fig:const}g), at the expense of a velocity field that is further from the sought solution ($e^u_0=11.7$\%), especially in the left Hartmann boundary layer. \begin{figure} \centerline{\includegraphics[width=\linewidth]{./npg-2007-0005-f10.pdf}} \caption{\label{fig:conv} Convergence behaviour for different constraint levels $\beta$. The ratio of the current value of the misfit $J^l$ (normalized by its initial value $J^0$) is plotted against the iteration count $l$. $\beta$ measures the strength of the constraint imposed on the state relative to the observations.} \end{figure} In the case of real data assimilation (as opposed to the synthetic case considered here, whose true state we know and departures from which we can easily quantify), the true state is unknown. To get a feeling for the response of the system to the imposition of an extra constraint, it is nevertheless possible to monitor for instance the convergence behaviour during the descent.
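In practice, such monitoring simply amounts to recording $J^l/J^0$ along the descent. A generic Polak--Ribi\`ere loop with the stopping rules quoted earlier ($8$ orders of magnitude or $5{,}000$ iterations) might look as follows; the fixed step length replaces the line search of a production code, and the callbacks `J` and `grad_J` (misfit and adjoint-computed gradient) are hypothetical:

```python
import numpy as np

def minimize_pr(J, grad_J, x0, step=1e-2, drop=1e-8, maxiter=5000):
    """Polak-Ribiere conjugate gradient with the stopping rules of the text.

    Returns the final state and the history of J^l / J^0.
    """
    x = np.asarray(x0, dtype=float)
    g = grad_J(x)
    d = -g
    J0 = J(x)
    history = [1.0]
    for _ in range(maxiter):
        x = x + step * d                   # fixed step; a line search in practice
        g_new = grad_J(x)
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))  # Polak-Ribiere(+)
        d = -g_new + beta * d
        g = g_new
        history.append(J(x) / J0)
        if history[-1] < drop:             # misfit down by 8 orders of magnitude
            break
    return x, history
```

Plotting `history` on a log--log scale is precisely the diagnostic of Fig.~\ref{fig:conv}: a curve that flattens early signals a constraint the model cannot accommodate.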
In Fig.~\ref{fig:conv}, the ratio of the misfit to its initial value is plotted versus the iteration number in the conjugate gradient algorithm (log-log plot). If $\beta$ is small, the misfit keeps on decreasing, even after 5,000 iterations (green curve). On the other hand, too strong a constraint (blue and red curves in Fig.~\ref{fig:conv}) is not well accommodated by the model and results in a rapid flattening of the convergence curve, showing that convergence behaviour can be used as a proxy to assess the efficacy of an a priori imposed constraint. Again, we have used the constraint given by Eq.~\eqref{defconst} for illustrative purposes, and do not claim that this specific low-pass filter is mandatory for the assimilation of GSV data. Similar types of constraints are used to solve the kinematic inverse problem of GSV \citep{blox91}; see also \cite{paisetal2004} and \cite{ao2004} for recent innovative studies on the subject. The example developed in this section aims at showing that a formal continuity exists between the kinematic and dynamical approaches to the GSV. \subsection{Convergence issues} In most of the cases presented above, the iteration counts had reached $5,000$ before the cost function had decreased by $8$ orders of magnitude. Even though the aim of this paper is not to address specifically the matter of convergence acceleration algorithms, a few comments are in order, since $5,000$ is too large a number when considering two- or three-dimensional applications. \begin{itemize} \item In many cases, a reduction of the initial misfit by only $4$ orders of magnitude gives rise to decent solutions, typically obtained in a few hundred iterations. For example, in the case corresponding to Fig.~\ref{fig:typ}, a decrease of the initial misfit by $4$ orders of magnitude is obtained after $475$ iterations. The resulting error levels are already acceptable: $e^u_0=12\times 10^{-2}$, $e^u_n=7.5\times 10^{-2}$, $e^b_0=1.8\times 10^{-2}$, and $e^b_n=3.0\times 10^{-4}$.
\item More importantly, in future applications, convergence will be sped up through the introduction of a background error covariance matrix $\mathbf{B}$, resulting in an extra term \citep{icgl97} $$ \frac{1}{2}[\mathbf{x}_0 - \mathbf{x}_b ]^T \mathbf{B}^{-1} [\mathbf{x}_0 - \mathbf{x}_b ] $$ added to the cost function (Eq.~\eqref{defj}). Here, $\mathbf{x}_b$ denotes the background state at model time~$0$, the definition of which depends on the problem of interest. In order to illustrate how this extra term can accelerate the inversion process, we have performed the following assimilation experiment: we take the network of virtual observatories of Fig.~\ref{fig:typ}, and define the background state at model time $0$ to be zero for the velocity field (which is not directly measured), and the polynomial extrapolation of the $t=0$ magnetic observations made at the $n^{\mbox{S}}=20$ stations on the $N+1$ GLL grid points for the magnetic field (resorting to Lagrangian interpolants defined by the network of stations). The background error covariance matrix is chosen to be diagonal, without cross-covariance terms. This approach enables a misfit reduction by $5$ orders of magnitude in $238$ iterations, with the following $L_2$ error levels: $e^u_0=13\times 10^{-2}$, $e^u_n=11.9\times 10^{-2}$, $e^b_0=2.6\times 10^{-5}$, and $e^b_n=2.6\times 10^{-4}$. This rather crude approach is beneficial for a) the computational cost and b) the estimate of the magnetic field. The recovery of the velocity is not as good as it should be, because we have made no assumption at all on the background velocity field. In future applications of VDA to the GSV, some a priori information on the background velocity field inside the core will have to be introduced in the assimilation process. The exact nature of this information is beyond the scope of this study.
\end{itemize} \section{Summary and conclusion} \label{sec:dis} We have laid the theoretical and technical bases necessary to apply variational data assimilation to the geomagnetic secular variation, with the intent of improving the quality of the historical geomagnetic record. For the purpose of illustration, we have adapted these concepts (well established in the oceanographic and atmospheric communities) to a one-dimensional nonlinear MHD model. Leaving aside the technical details exposed in section \ref{sec:toy}, we can summarize our findings and ideas for future developments as follows: \begin{itemize} \item Observations of the magnetic field always have a positive impact on the estimate of the invisible velocity field, even if these two fields live at different length scales (as could be expected from the small value of the magnetic Prandtl number). \item With respect to a purely kinematic approach, having successive observations dynamically related by the model allows one to partially overcome errors due to a poor spatial sampling of the magnetic field. This is particularly encouraging in the prospect of assimilating main geomagnetic field data, the resolution of which is limited to spherical harmonic degree $14$ (say), because of (remanent or induced) crustal magnetization. \item Over the model integration time ($20$ \% of an advection time), regions poorly covered exhibit poor recoveries of the true fields, since information does not have enough time to be transported there from well covered regions. In this respect, model dynamics clearly controls assimilation behaviour. Concerning the true GSV, the time window we referred to in the introduction has a width of roughly a quarter of an advective time scale. Again, this is rather short to circumvent the spatial limitations mentioned above, if advective transport controls the GSV catalog. 
This catalog, however, could contain the signature of global hydromagnetic oscillations \citep{1966Hide,fj2003}, in which case our hope is that problems due to short duration and coarse spatial sampling should be alleviated. This issue is currently under investigation in our simplified framework, since the toy model presented here supports Alfv\'en waves. \item A priori imposed constraints (such as the low-pass filter of Sect.~\ref{sec:const}) can improve assimilation results. They show that variational data assimilation stands in formal continuity with the kinematic geomagnetic inverse problems addressed by the geomagnetic community over the past $40$ years. \end{itemize} \begin{figure*} \centerline{\includegraphics[width=\linewidth]{./npg-2007-0005-f11.pdf}} \caption{\label{fig:gufm} Dynamical evolution of $L_2$ errors (logarithmic value) for the magnetic field (a) and the fluid velocity (b). Black lines: errors for initial guesses. Green (red) lines: errors for assimilation results that do (not) incorporate the data obtained by a dense virtual network of magnetic stations, which aims at mimicking the satellite era (the blue segment on each panel), spanning the last $5$ \% of model integration time.} \end{figure*} Finally, in order to illustrate the potential interest of applying VDA techniques to try and improve the recent GSV record, we show in Fig.~\ref{fig:gufm} the results of two synthetic assimilation experiments. These are analogous to the ones described at great length in Sect.~\ref{sec:sae} (same physical and numerical parameters, constraint parameter $\beta=10^{-1}$). In both cases, observations are made by a network of $6$ evenly distributed stations during the first half of model integration time (the logbooks era, say). The second half of the record is then produced by a network of $15$ stations for case A (the observatory era).
For case B, this is also the case, save that the last $5$\% of the record are obtained via a high-resolution network of $60$ stations. The two records therefore only differ in the last $5$\% of model integration time. Case B is meant to estimate the potential impact of the recent satellite era on our description of the historical record. The evolution of the magnetic error $e^b$ backwards in time (Fig.~\ref{fig:gufm}a) shows that the benefit due to the dense network is noticeable over three quarters of model integration time, with an error reduction of roughly a factor of $5$. The velocity field is (as usual) less sensitive to the better quality of the record; still, it responds well to it, with an average decrease of $e^u$ on the order of $20$\%, evenly distributed over the time window. Even if obtained with a simplified model (bearing in particular in mind that real geomagnetic observations are only available at the core surface), these results are promising and indicate that VDA should certainly be considered as the natural way of using high-quality satellite data to refine the historical geomagnetic record in order to `reassimilate' \citep{tal97} pre-satellite observations. To do so, a good initial guess is needed, which is already available \citep{gufm}; also required is a forward model (and its adjoint) describing the high-frequency physics of the core. This model could either be a full three-dimensional model of the geodynamo, or a two-dimensional, specific model of short-period core dynamics, based on the assumption that this dynamics is quasi-geostrophic \citep{jault2006}. The latter possibility is under investigation. \vspace{1.cm} {\noindent \bfseries \large Acknowledgements} We thank Andrew Tangborn and an anonymous referee for their very useful comments, and \'Elisabeth Canet, Dominique Jault, Alexandra Pais, and Philippe Cardin for stimulating discussions. 
AF also thanks \'Eric Beucler for sharing his knowledge of inverse problem theory, and \'Elisabeth Canet for her very careful reading of the manuscript. This work has been partially supported by a grant from the Agence Nationale de la Recherche ("white" research program VS-QG, grant reference BLAN06-2\_155316). All graphics were produced using the freely available {\tt pstricks} and {\tt pstricks-add} packages. \input{./fea2007.bbl} \end{document}
\section{Introduction} \label{sec:intro} In \cite{Datta:2017ert}, backgrounds of the form \begin{equation}\label{background} {\rm AdS}_3 \times \Bigl( {\rm S}^3 \times \mathbb{T}^4 \Bigr) / D_n \ , \end{equation} were shown to have ${\cal N}=(2,2)$ spacetime supersymmetry after orbifolding by dihedral groups $D_n$, $n=1,2,3,4,6$. Here the generators of $D_n$ act geometrically on the two $\mathbb{T}^2$'s in $\mathbb{T}^4 \cong \mathbb{T}^2 \times \mathbb{T}^2$, and the reflection generators of $D_n$ rotate the ${\rm S}^3$ by $180$ degrees. It was furthermore proposed that the CFT dual of this string background should lie on the same moduli space as \begin{equation}\label{symorb} {\rm Sym}_N\bigl(\mathbb{T}^4 / D_n \bigr) \ , \end{equation} where the $D_n$ action on $\mathbb{T}^4$ is the same as above. Without the $D_n$ orbifolds, the $\mathcal{N} = (4,4)$ duality was originally proposed in \cite{Maldacena:1997re}, see \cite{David:2002wn} for a review, and it was recently understood from a microscopic viewpoint in \cite{Gaberdiel:2018rqv,Eberhardt:2018ouy}. To this end the string background with pure NS-NS flux was considered, in which case an exact worldsheet description is available \cite{Maldacena:2000hw,Maldacena:2000kv,Maldacena:2001km} (for the generalisation to the supersymmetric setup see also \cite{Giveon:1998ns,Israel:2003ry,Raju:2007uj,Ferreira:2017pgt}) in terms of a WZW model based on $\mathfrak{sl}(2,\mathds{R})$. It was proposed in \cite{Gaberdiel:2018rqv} that the theory with $k=1$ should be exactly dual to the symmetric orbifold of $\mathbb{T}^4$. The construction of this model in the RNS language is a bit problematic, but can be made sense of in the hybrid formalism of \cite{Berkovits:1999im}, where the worldsheet fields organise themselves (for pure NSNS flux) into a WZW model based on the superalgebra $\mathfrak{psu}(1,1|2)_k$. 
The latter was used in \cite{Eberhardt:2018ouy} to demonstrate an exact agreement between the spacetime spectrum of the hybrid theory and that of the symmetric orbifold of $\mathbb{T}^4$. Subsequently, it was shown in \cite{Eberhardt:2019qcl} that the operator algebra of the symmetric orbifold can also be reconstructed from the worldsheet. In the same paper, it was noted that one may generalise the analysis to $k>1$, for which the long string spectrum of the string theory is matched with the symmetric orbifold of (${\cal N}=4$ Liouville theory) $\times \, \mathbb{T}^4$. \smallskip The aim of this paper is to perform a similar analysis for the orbifolds of the form (\ref{background}). We shall mainly concentrate on the case with $k=1$, for which we expect again a direct match to the symmetric orbifold in (\ref{symorb}). The action of the $D_n$ generators on the RNS worldsheet fields was already worked out in \cite{Datta:2017ert}. The relation between these degrees of freedom and those appearing in the hybrid formalism of \cite{Berkovits:1999im} was spelled out in \cite{Eberhardt:2019qcl}, and this allows us to determine the $D_n$ action on the fields of the hybrid string. It is then straightforward to perform an analysis similar to that of \cite{Eberhardt:2018ouy}, resulting again in an exact match of the spectra. This gives strong evidence for the duality between (\ref{background}) and (\ref{symorb}). \medskip The paper is organised as follows. In Section~\ref{sec:Dnaction} we show that the $D_n$ action on the fields in the RNS formalism can be expressed in terms of rotation generators of various ${\rm SU}(2)$ symmetry groups of the background. This then allows us to translate the $D_n$ action to the hybrid formulation, see Section~\ref{sec:hybridaction}. In order to keep track of the group action on the full spectrum, we generalise the analysis of \cite{Eberhardt:2018ouy} by introducing the corresponding chemical potentials in Section~\ref{sec:chemical}.
This is relatively straightforward, except for the behaviour of the ghost fields which requires some explanation (see Section~\ref{sec:3.1}). Section~\ref{sec:orbifold} is concerned with calculating the spacetime spectrum of the world-sheet orbifold for $k=1$, and we show that this reproduces indeed the symmetric orbifold spectrum of (\ref{symorb}). We explain in Section~\ref{sec:k>1} how our analysis generalises for $k>1$, for which the dual symmetric orbifold is given by (\ref{dualorb}), and we end in Section~\ref{sec:concl} with some conclusions. There is one appendix in which some aspects of the representation theory of $D_n$ are summarised. \section{The $D_n$ action}\label{sec:Dnaction} Let us begin by reviewing the description of the orbifold theory in the RNS formalism. Before orbifolding the degrees of freedom of the world-sheet theory consist of \begin{equation} \mathfrak{sl}(2, \Rs)^{(1)}_k [J, \psi] \ \oplus \ \mathfrak{su} (2)^{(1)}_k[K, \chi] \ \oplus \ (\mathbb{T}^4)^{(1)}[\partial X, \lambda] \ \oplus \ \mathrm{Fock}[b,c,{\beta},{\gamma}]\ , \label{RNSworldsheet} \end{equation} where the $(1)$ superscript indicates that these are ${\cal N}=1$ superaffine models (with the notation for the relevant fields in square brackets), and the $(b,c)$ and $({\beta},{\gamma})$ denote the conformal and superconformal ghosts, respectively. The dihedral group generators act on the various world-sheet fields as follows. 
The bosons and fermions of the $\mathbb{T}^4$ torus transform in the fundamental representation of ${\rm SO}(4)$, and we can define the $D_n$ action on them by using that $D_n$ is naturally a subgroup of the orthogonal group in two dimensions ${\rm O}(2)$, together with \begin{equation} D_n \subset {\rm O}(2)_{\rm diag} \subset S\bigl( {\rm O}(2) \times {\rm O}(2) \bigr) \subset {\rm SO}(4) \ , \end{equation} where $S\bigl( {\rm O}(2) \times {\rm O}(2) \bigr)$ is the subgroup of ${\rm O}(2) \times {\rm O}(2)$ for which the product of determinants is $+1$. In terms of the representation theory of $D_n$ (that is reviewed in Appendix~\ref{app:dihedral}), this means that both the bosons and the fermions transform in \begin{equation}\label{Dntorus} [\partial X, \lambda ] \in \rho_1 \oplus \rho_1 \ . \end{equation} The action of $D_n$ on the Narain lattice $\Gamma^{4,4}$ of momentum and winding states was discussed in detail in \cite{Datta:2017ert}, see in particular Section~2.3 of that paper, and it turns out that there are two inequivalent choices $D_n^{(1,2)}$ for $n=1,2,3$. Finally, the fields associated to ${\rm AdS}_3$ are invariant under $D_n$, while the generators of the sphere get rotated by $180$ degrees if the generator of $D_n$ is a reflection generator (and are invariant otherwise). \smallskip For the following it will be convenient to write the above group actions in terms of `current' generators (so that we can determine the trace with the insertion of a group element in terms of the corresponding chemical potentials). Actually, it will be convenient (and sufficient) to introduce chemical potentials only for the fermionic degrees of freedom, as well as for the bosonic degrees of freedom associated to ${\rm AdS}_3\times {\rm S}^3$, since the bosonic torus modes are unaffected by the transformation from the RNS formalism to the hybrid formalism (and hence can be treated as in the original RNS case).
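As a quick numerical sanity check (ours, not part of the original discussion), one can verify that the diagonal embedding of an ${\rm O}(2)$ element into ${\rm O}(2)\times{\rm O}(2)$ always lands in ${\rm SO}(4)$, and that the rotation and reflection generators so obtained satisfy the dihedral relations $U^n = P^2 = 1$ and $P U P^{-1} = U^{-1}$. A minimal Python sketch (the helper names `rot` and `diag_embed` are ours):

```python
import numpy as np

def rot(phi):
    # SO(2) rotation by angle phi
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[c, -s], [s, c]])

REFL = np.array([[1.0, 0.0], [0.0, -1.0]])  # reflection in O(2), det = -1

def diag_embed(m):
    # embed an O(2) element diagonally into O(2) x O(2), acting on T^2 x T^2
    out = np.zeros((4, 4))
    out[:2, :2] = m
    out[2:, 2:] = m
    return out

n = 6
U = diag_embed(rot(2 * np.pi / n))   # rotation generator of D_n
P = diag_embed(REFL)                 # reflection generator of D_n

# the diagonal embedding lands in SO(4): the product of the two determinants is +1
assert np.isclose(np.linalg.det(U), 1.0)
assert np.isclose(np.linalg.det(P), 1.0)

# dihedral relations: U^n = 1, P^2 = 1, P U P^{-1} = U^{-1}
I4 = np.eye(4)
assert np.allclose(np.linalg.matrix_power(U, n), I4)
assert np.allclose(P @ P, I4)
assert np.allclose(P @ U @ np.linalg.inv(P), np.linalg.inv(U))
```

The same check goes through for any of the crystallographic values $n=1,2,3,4,6$.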
For the (fermionic) torus degrees of freedom, the relevant current algebra is $\mathfrak{so}(4)_1$ that is generated by the bilinears of the four real fermions. The corresponding zero mode algebra (that is relevant for this discussion) decomposes as \begin{equation}\label{so4} \mathfrak{so}(4) \cong \mathfrak{su}(2)_{+} \oplus \mathfrak{su}(2)_{-} \ , \end{equation} and we denote the two sets of $\mathfrak{su}(2)$ generators by $M^a_\pm$. The $D_n$ action given by $\rho_1 \oplus \rho_1$, see eq.~(\ref{Dntorus}), actually takes values in ${\rm SO}(4)$, and hence can be written in terms of exponentials of $\mathfrak{su}(2)_{+} \oplus \mathfrak{su}(2)_-$ generators, \begin{equation} \rho_{\,\mathbb{T}^4} (P) = \mathrm{Ad}(e^{\pi i ( M_{+}^{1} + M_{-}^{1})})\ , \qquad \rho_{\,\mathbb{T}^4}(U) = \mathrm{Ad}(e^{\frac{4 \pi i}{n} M_{+}^3}) \ , \label{DnTorusFermionsInnerAutomorphism} \end{equation} where $U$ is the rotation generator, and $P$ the reflection generator of $D_n$. Here we work with the convention that \begin{equation} e^{\pi i M^1} = - i\, \left( \begin{matrix} 0 & 1 \cr 1 & 0 \end{matrix} \right) \ , \qquad e^{\frac{4\pi i}{n} M^3} = \left(\begin{matrix} e^{\frac{2\pi i}{n}} & 0 \cr 0 & e^{-\frac{2\pi i}{n}} \end{matrix} \right) \ , \end{equation} and it is easy to see that this describes (an equivalent representation to) $\rho_1 \oplus \rho_1$, see eq.~(\ref{eqn:DefRepDn}) and \cite[Appendix~A]{Datta:2017ert}. Note that this construction just reflects that the fundamental representation $\mathbf{4}$ of $\mathfrak{so}(4)$ corresponds to $\mathbf{4} \cong (\mathbf{2},\mathbf{2})$ in terms of $\mathfrak{su}(2)_+ \oplus \mathfrak{su}(2)_-$.
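The explicit $2\times 2$ matrices above can also be checked directly: $U$ generates a $\mathbb{Z}_n$ rotation which $P$ inverts, while $P$ squares to minus the identity, so on the fermions the matrices realise $D_n$ only projectively; the adjoint action $\mathrm{Ad}$ used in the text is nonetheless a genuine $D_n$ action. A short sanity check (ours, not from the paper):

```python
import numpy as np

n = 4
P2 = -1j * np.array([[0, 1], [1, 0]])  # e^{pi i M^1}
U2 = np.diag([np.exp(2j * np.pi / n), np.exp(-2j * np.pi / n)])  # e^{(4 pi i/n) M^3}

I2 = np.eye(2)

# U generates a Z_n rotation and P inverts it, as required for D_n
assert np.allclose(np.linalg.matrix_power(U2, n), I2)
assert np.allclose(P2 @ U2 @ np.linalg.inv(P2), np.linalg.inv(U2))

# P squares to -1, i.e. the matrices realise D_n only projectively, but the
# adjoint (conjugation) action is insensitive to this central sign
assert np.allclose(P2 @ P2, -I2)

def Ad(g, X):
    return g @ X @ np.linalg.inv(g)

X = np.array([[0.3, 1.2 - 0.7j], [0.5 + 0.1j, -0.3]])  # arbitrary test matrix
assert np.allclose(Ad(P2, Ad(P2, X)), X)  # Ad(P)^2 = 1
```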
Finally, the rotation action on ${\rm S}^3$ (by $180$ degrees for the case of $P$, and trivial in the case of $U$) can be written in terms of the current generators associated to the $\mathfrak{su}(2)^{(1)}_k$ algebra (whose global algebra we shall refer to as $\mathfrak{su}(2)_{\rm R}$ in the following with generators $K^a$) \begin{equation} \rho_{\mathfrak{su} (2)_k^{(1)}} (P) = \mathrm{Ad}(e^{\pi i K^3})\ , \qquad \rho_{\mathfrak{su} (2)_k^{(1)}} (U) = 1\ . \label{DnInnerAutoS3} \end{equation} Thus the $t^3$-valued currents (and fermions) are invariant, while the $t^\pm$-valued currents (and fermions) transform in the $\rho_-$ representation of $D_n$. Altogether the $D_n$ action on the RNS fields (except for the torus bosons) is therefore given by \begin{equation}\label{DnActionRNS} \rho_{\, {\rm RNS}} (P) = \mathrm{Ad}(e^{\pi i ( K^3 + M_{+}^1 + M_{-}^1)})\ , \qquad \rho_{\, {\rm RNS}}(U) = \mathrm{Ad}(e^{\frac{4 \pi i}{n} M_{+}^3}) \ . \end{equation} \subsection{Translation to the hybrid fields}\label{sec:hybridaction} Next we want to translate these group actions to the hybrid fields. In the hybrid formalism of \cite{Berkovits:1999im}, the RNS world-sheet CFT of \eqref{RNSworldsheet} is reorganised as \begin{equation} \mathfrak{psu}(1,1\vert 2)_k [J,K,S] \oplus \mathbb{T}^4_{\mathrm{twisted}} [\partial X, \Psi] \oplus \mathrm{Fock}[b,c,\rho] \ , \label{HybridWorldsheet} \end{equation} where the fermions of $\mathbb{T}^4_{\mathrm{twisted}}$ have conformal dimension $h=1$ or $h=0$ (`topologically twisted'), and $\rho$ is a boson of negative metric and background charge $Q=3$. (We are using the conventions of \cite{Eberhardt:2019qcl}). More specifically, the reformulation only affects the fermions (and the ghosts), but does not touch the (decoupled) bosonic generators of the RNS formalism. 
The fermions of the hybrid description can be re-expressed in terms of the RNS fields as, see Section~3.2 of \cite{Eberhardt:2019qcl} \begin{align} p^{A\alpha\beta} &= e^{\frac{A}{2}H_1 + \frac{\alpha}{2}H_2 + \frac{\beta}{2}(H_4 + H_5) + \frac{\beta}{2} ( A \alpha H_3 - \phi)}\ , \label{pdef}\\ \Psi^{\mu \beta} &= e^{\frac{\mu}{2}(H_4 -H_5) +\frac{\beta}{2}(H_4 + H_5) + \beta (-\phi + \chi)}\ , \label{Psidef} \end{align} where $\frac{1}{2}A, \frac{1}{2}\alpha, \frac{1}{2}\mu, \frac{1}{2}\beta \in \{\pm\frac{1}{2}\}$ are the spins of these fermionic fields with respect to the global $\mathfrak{sl}(2, \Rs) \oplus \mathfrak{su} (2)_{\rm R} \oplus \mathfrak{su} (2)_{+} \oplus \mathfrak{su} (2)_{-}$, respectively. As a consequence, the $p^{A\alpha\beta}$ transform in the $(\mathbf{2},\mathbf{2},\mathbf{2})$ of $\mathfrak{sl}(2, \Rs) \oplus \mathfrak{su} (2)_{\rm R} \oplus \mathfrak{su} (2)_{-}$, while the $\Psi^{\mu\beta}$ transform in the $(\mathbf{2}_+,\mathbf{2}_-)$ of $\mathfrak{su} (2)_{+} \oplus \mathfrak{su} (2)_{-}$.\footnote{We will sometimes use the notation $\mathbf{2}_\pm$ and $\mathbf{2}_{\rm R}$ to indicate with respect to which $\mathfrak{su}(2)$ algebra the relevant states transform.} The eight fields $p^{A\alpha\beta}$ can be separated into four fields $p^{A\alpha} := p^{A\alpha +}$ with conformal weight $h=1$, and their four conjugate fields $\theta^{A\alpha} := p^{A\alpha -}$ with conformal weight $h=0$. This defines four fermionic first order systems with $\lambda = 1$, which can be combined with the bosonic currents of $\mathfrak{sl}(2, \Rs)_{k+2} \oplus \mathfrak{su} (2)_{k-2}$ to produce a free field (Wakimoto) representation of $\mathfrak{psu}(1,1\vert 2)_k$.
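As a consistency check of the weights quoted here ($p^{A\alpha}$ with $h=1$, $\theta^{A\alpha}$ with $h=0$, and the topologically twisted torus fermions with $h=1$ or $h=0$), one can tally the conformal weights of the exponentials in (\ref{pdef}) and (\ref{Psidef}) with the usual free-boson formulas. The background-charge conventions below are our assumption (the standard RNS bosonization ones) and should be compared with \cite{Eberhardt:2019qcl}:

```python
from fractions import Fraction as F
from itertools import product

half = F(1, 2)

# Assumed bosonization conventions (standard RNS ones):
#   h(e^{a H_i}) = a^2/2        (fermion bosons H_1..H_5)
#   h(e^{q phi}) = -q(q+2)/2    (superconformal ghost boson, background charge -2)
#   h(e^{p chi}) = p(p-1)/2     (eta-xi boson: h(xi) = 0, h(eta) = 1)
def h_H(a):
    return a * a / 2

def h_phi(q):
    return -q * (q + 2) / 2

def h_chi(p):
    return p * (p - 1) / 2

# p^{A alpha beta}: exponents read off from the displayed formula (pdef)
for A, al, be in product([1, -1], repeat=3):
    h = (h_H(A * half) + h_H(al * half) + h_H(be * half) + h_H(be * half)
         + h_H(be * A * al * half) + h_phi(-be * half))
    # p^{A alpha} (beta = +) has h = 1, its conjugate theta^{A alpha} has h = 0
    assert h == (1 if be == 1 else 0)

# Psi^{mu beta}: exponents read off from (Psidef)
for mu, be in product([1, -1], repeat=2):
    h = h_H((mu + be) * half) + h_H((be - mu) * half) + h_phi(F(-be)) + h_chi(F(be))
    # topologically twisted torus fermions have h = 1 or h = 0
    assert h == (1 if be == 1 else 0)
```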
In particular, the supercurrents $S^{A\alpha\beta}$ transforming in the $(\mathbf{2},\mathbf{2},\mathbf{2})$ of $\mathfrak{sl}(2, \Rs) \oplus \mathfrak{su} (2)_{\rm R} \oplus \mathfrak{su} (2)_{-}$ are given by \begin{align} S^{A\alpha+} &= p^{A\alpha}, \\ S^{A\alpha-} &= k\partial \theta^{A\alpha} +J^a c_a \tensor{(\sigma_a)}{^A_B} \,\theta^{B\alpha} - K^a \tensor{(\sigma_a)}{^\alpha_\beta} \,\theta^{A\beta}. \label{s3:2:3:waki_S_A_ag_minus} \end{align} These fields are uncharged with respect to $\mathfrak{su}(2)_+$, and hence we find from (\ref{DnTorusFermionsInnerAutomorphism}) and (\ref{DnInnerAutoS3}) that they transform under the $D_n$ action as \begin{equation} \rho_{\mathfrak{psu}(1,1\vert 2)_k} (P) = \mathrm{Ad}(e^{\pi i (K^3 +M^1_{-})})\ , \qquad \rho_{\mathfrak{psu}(1,1\vert 2)_k} (U) = 1\ . \label{DnActionPSU} \end{equation} \smallskip The topologically twisted fermions $\Psi^{\mu\beta}$ transform in the $(\mathbf{2}_+,\mathbf{2}_-)$ with respect to $\mathfrak{su}(2)_+ \oplus \mathfrak{su}(2)_-$, just like their RNS counterparts, and hence their action is also described by $\rho_{\,\mathbb{T}^4}$, see eq.~\eqref{DnTorusFermionsInnerAutomorphism}. Altogether we therefore get the $D_n$ action on the hybrid fields \begin{equation}\label{DnActionhybrid} \rho_{\, {\rm hybrid}} (P) = \mathrm{Ad}(e^{\pi i ( K^3 + M_{+}^1 + M_{-}^1)})\ , \qquad \rho_{\, {\rm hybrid}}(U) = \mathrm{Ad}(e^{\frac{4 \pi i}{n} M_{+}^3}) \ , \end{equation} which, by construction, agrees with that on the RNS fields, see eq.~(\ref{DnActionRNS}). Finally, while the $\rho$-ghost is expressed in terms of $\mathbb{T}^4$ degrees of freedom, \begin{align}\label{rhodef} \partial \rho = - (\partial H_4 + \partial H_5) + 2 \partial \phi - \partial \chi\ , \end{align} it remains invariant under the induced $D_n$ action. (Note that $\partial H_4 + \partial H_5$ is the bilinear fermionic current associated with $M^3_-$.) 
\section{Introducing chemical potentials}\label{sec:chemical} In order to proceed it is convenient to introduce appropriate chemical potentials into the original analysis of \cite{Eberhardt:2018ouy} since this will allow us to keep track of the various group actions relatively easily. We will work with the conventions that \begin{equation} \mathrm{ch}(u,v,z,t;\tau) := \mathrm{tr} \, e^{2\pi i (u M^3_{+} + v M^3_{-} )} y^{K^3} x^{J_0^3} q^{L_0 - \frac{c}{24}}\ , \end{equation} where \begin{equation} q = e^{2\pi i \tau} \ , \qquad y = e^{2\pi i z} \ , \qquad x= e^{2\pi i t} \ . \end{equation} In particular, the action of the $D_n$ generators $P$ and $U$ inside the trace can be absorbed into shifting the chemical potentials as\footnote{Note that since the $P$ action involves $M^1_\pm$, it is convenient to work in the basis where $M^1_\pm$ rather than $M^3_\pm$ is diagonal in $P$-twisted sectors, but for the calculation of the character this is immaterial.} \begin{align} P: \qquad & (u,v,z,t) \mapsto \bigl(u+\tfrac{1}{2}, v+\tfrac{1}{2}, z+\tfrac{1}{2}, t\bigr) \label{Pchar}\\ U: \qquad & (u,v,z,t) \mapsto \bigl( u+\tfrac{2}{n}, v, z, t\bigr) \ , \label{Rchar} \end{align} as follows directly from eq.~(\ref{DnActionhybrid}). We shall mainly be interested in the case where $k=1$, for which the representation theory of $\mathfrak{psu}(1,1\vert 2)_k$ is very restrictive, see \cite[Section~4.2]{Eberhardt:2018ouy} for a detailed analysis. In particular, only the $j=\frac{1}{2}$ continuous representations (at the `bottom' of the continuum) are allowed, together with their spectrally flowed versions. These representations are thus labelled by the spectral flow parameter $w$, as well as $\lambda\in [0,1)$, defining the eigenvalues of $J^3_0$ modulo integers.
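As a toy illustration (entirely our own construction) of why a group insertion in the trace is equivalent to the chemical potential shifts in (\ref{Pchar}) and (\ref{Rchar}), consider a single complex NS fermion with charge $\pm 1$ under a Cartan generator $J$: inserting $e^{2\pi i \theta J}$ into the trace is identically the same as shifting $z \to z+\theta$. This can be checked by brute force over a truncated Fock space:

```python
import cmath
from itertools import product

def trace_with_insertion(theta, z, tau, N=6):
    # brute-force trace over a complex NS fermion Fock space, truncated to the
    # modes r = 1/2, ..., N - 1/2:  tr  e^{2 pi i theta J} y^J q^{L_0}
    q = cmath.exp(2j * cmath.pi * tau)
    y = cmath.exp(2j * cmath.pi * z)
    rs = [n - 0.5 for n in range(1, N + 1)]
    total = 0
    # each mode of charge +1 and each of charge -1 is either empty or filled
    for occ in product([0, 1], repeat=2 * N):
        plus, minus = occ[:N], occ[N:]
        L0 = sum(o * r for o, r in zip(plus + minus, rs + rs))
        J = sum(plus) - sum(minus)
        total += cmath.exp(2j * cmath.pi * theta * J) * y**J * q**L0
    return total

def character(z, tau, N=6):
    # same trace written as a product over modes
    q = cmath.exp(2j * cmath.pi * tau)
    y = cmath.exp(2j * cmath.pi * z)
    out = 1
    for n in range(1, N + 1):
        out *= (1 + y * q**(n - 0.5)) * (1 + q**(n - 0.5) / y)
    return out

tau, z, theta = 0.1 + 0.9j, 0.23, 0.5   # theta = 1/2 mimics the P insertion
lhs = trace_with_insertion(theta, z, tau)
rhs = character(z + theta, tau)
assert abs(lhs - rhs) < 1e-10
```

The identity holds mode by mode, since each excitation of charge $\pm 1$ simply picks up the phase $e^{\pm 2\pi i\theta}$.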
For the calculation of the characters (that possess many null-vectors) it will be convenient to use the free field realisation of $\mathfrak{psu}(1,1\vert 2)_1$ in terms of two complex fermions and two pairs of symplectic bosons, see \cite[Section~4.5]{Eberhardt:2018ouy}. Here the symplectic boson bilinears generate $\mathfrak{sl}(2, \Rs)_1$, the fermionic bilinears generate $\mathfrak{su} (2)_1$, while the boson-fermion bilinears define the supercharges. The $D_n$ action given in (\ref{DnActionPSU}) can be lifted to the free fields by taking the symplectic bosons to be invariant, while the complex fermions transform as $(\mathbf{2}_{\rm R},\mathbf{2}_-)$ with respect to $\mathfrak{su}(2)_{\rm R} \oplus \mathfrak{su} (2)_-$. Then the relevant characters take the form, see \cite[Section 5]{Eberhardt:2019niq} \begin{equation} \mathrm{ch}_{w,\lambda} (v,z,t;\tau) = \sum_{m \in{\mathbb{Z}} + \lambda} q^{-mw + w^2/2}x^m \, \frac{\vartheta_1 (\frac{t+z + v}{2};\tau) \, \vartheta_1 (\frac{t-z + v}{2};\tau)}{ \eta (\tau)^4}\ . \label{s3:4:1:TensionlessPsuCharacter} \end{equation} \subsection{The ghost contribution}\label{sec:3.1} While the discussion so far is relatively straightforward, there is one subtle point we need to explain in more detail. Naively, one may have guessed that the ghosts are invariant under the $D_n$ action, but this is, in some sense, not quite correct. The basic reason for this can be read off from the DDF analysis of \cite{Eberhardt:2019qcl}.
To be specific, let us for example consider the DDF generators that correspond to the fermionic torus directions in the $-\frac{1}{2}$ picture (cf.\ eqs.~(5.8) and (5.9) of \cite{Eberhardt:2019qcl}) \begin{equation} \Lambda_r^{(-\frac{1}{2}) \mu \alpha} = k^{-\frac{1}{4}} \, \oint dz \, \Bigl( p^{+\alpha -} \, \Psi^{\mu-} e^{-\rho} \gamma^{r+\frac{1}{2}} + p^{-\alpha -} \, \Psi^{\mu-} e^{-\rho} \gamma^{r-\frac{1}{2}} \Bigr) \ , \end{equation} where $p^{A\alpha\beta}$, $\Psi^{\mu\beta}$ and $\rho$ were defined in eqs.~(\ref{pdef}), (\ref{Psidef}) and (\ref{rhodef}), respectively. The original torus excitations, i.e.\ the $\Psi^{\mu\beta}$, obviously only carry charge with respect to $\mathfrak{su}(2)_+\oplus\mathfrak{su}(2)_-$. However, after combining with the other hybrid fields to form DDF operators (that map physical states to physical states), they acquire charge with respect to the $\mathfrak{su}(2)_{\rm R}$ symmetry coming from the ${\rm S}^3$, i.e. they now have an $\alpha$ index instead of a $\beta$ index.\footnote{The $\mathfrak{su}(2)_{\rm R}$ becomes the R-symmetry of the spacetime CFT, and the torus fermions of the spacetime theory are indeed charged under it.} If we want to describe this effect in terms of ghosts (that eliminate the unphysical degrees of freedom), then we must take the ghosts to have some effective charge with respect to the various $\mathfrak{su}(2)$'s (despite the fact that, on the face of it, they are uncharged with respect to any of these $\mathfrak{su}(2)$'s.) In order to do this more quantitatively, we shall proceed as follows. We know that, at least for $k\geq 2$, the RNS formalism and the hybrid formalism are equivalent once physical state conditions are imposed. We also know the $\mathfrak{su}(2)$ transformation properties (and hence those with respect to $D_n$) of the RNS and hybrid fields before imposing the physical state condition. 
This will allow us to deduce how the hybrid ghosts behave `effectively' with respect to these $\mathfrak{su}(2)$ charges, and hence also under the $D_n$ action. To start with, recall that in the RNS formalism the world-sheet fields consist of \begin{equation} \mathfrak{sl}(2, \Rs)_k^{(1)} \oplus \mathfrak{su}(2)_k^{(1)} \oplus (\mathbb{T}^4)^{(1)} \oplus \mathrm{Fock}[b,c,\beta,\gamma]\ , \end{equation} where the superscript $(1)$ indicates that these are all ${\cal N}=1$ superconformal algebras. After decoupling the fermions, imposing the physical state condition on the fermions (so as to reduce their number from $10$ to $8$), and interpreting them from the spacetime perspective --- this can be either done by using the abstruse identity, see e.g.\ the discussion in \cite[Section~2.3]{Gaberdiel:2018rqv}, or by using the DDF construction from above --- these degrees of freedom transform as \begin{equation}\label{NSgen} \mathfrak{sl}(2, \Rs)_{k+2} \oplus \mathfrak{su}(2)_{k-2} \oplus \mathbb{T}^4_{\rm bos} \oplus \mathrm{Fock}[b,c] \oplus \mathrm{Fock}[\bigl((\mathbf{2}_{\rm R},\mathbf{2}_+) \oplus (\mathbf{2}_{\rm R},\mathbf{2}_-)\bigr)\, \mathrm{fermions}] \ . \end{equation} This is to be compared with the analysis in the hybrid formalism where the degrees of freedom transform as \begin{equation} \mathfrak{psu}(1,1\vert 2)_k \oplus \mathbb{T}^4_{\rm bos} \oplus \mathrm{Fock}[b,c,\rho] \oplus \mathrm{Fock}[(\mathbf{2}_{+},\mathbf{2}_-)\, \mathrm{fermions}] \ , \end{equation} where the last term comes from the topologically twisted fermions $\Psi^{\mu\beta}$.
Using the Wakimoto representation of $\mathfrak{psu}(1,1\vert 2)_k$ that was described above, see the discussion around eq.~(\ref{s3:2:3:waki_S_A_ag_minus}), the $\mathfrak{psu}(1,1\vert 2)_k$ factor corresponds to \begin{equation} \mathfrak{psu}(1,1\vert 2)_k \ \cong \ \mathfrak{sl}(2, \Rs)_{k+2} \oplus \mathfrak{su}(2)_{k-2} \oplus \mathrm{Fock}[2 \cdot (\mathbf{2}_{\rm R},\mathbf{2}_{-}) \, \mathrm{fermions}] \ . \end{equation} Thus the effective ghost contribution in the hybrid formalism must remove one set of $(\mathbf{2}_{\rm R},\mathbf{2}_{-})$ fermions and one set of $(\mathbf{2}_{+},\mathbf{2}_{-})$ fermions, and replace them by one set of $(\mathbf{2}_{\rm R},\mathbf{2}_{+})$ fermions, \begin{equation} \{\mathrm{eff.\ ghost} \} \sim \frac{\mathrm{Fock}[\{(\mathbf{2}_{\rm R},\mathbf{2}_+)\, \mathrm{fermions}\}]}{ \mathrm{Fock}[\{(\mathbf{2}_{\rm R},\mathbf{2}_-)\, \mathrm{fermions}\}] \cdot \mathrm{Fock}[\{(\mathbf{2}_{+},\mathbf{2}_-)\, \mathrm{fermions}\}]} \ . \end{equation} In terms of characters it must therefore take the form \begin{equation}\label{ghostcharacter} Z^{\mathrm{gh}} (u,v,z,t;\tau) = \underbrace{\Bigl| \frac{\eta(\tau)^2}{\vartheta_1(\frac{t+z+v}{2};\tau) \vartheta_1(\frac{t-z+v}{2};\tau)} \Bigr|^2}_{Z^{\mathrm{gh}}_1} \cdot \underbrace{\Bigl| \frac{\vartheta_1(\frac{t+z+u}{2};\tau) \vartheta_1(\frac{t-z+u}{2};\tau)} {\vartheta_1(\frac{t+u+v}{2};\tau) \vartheta_1(\frac{t-u+v}{2};\tau)} \Bigr|^2}_{Z^{\mathrm{gh}}_2} \ . \end{equation} Obviously, this identification is only formal since the BRST cohomology will mix all the fields together, and we cannot just extract the contribution from the additional $\rho$ ghost in this manner. However, on the level of characters this identity will be correct, and it will allow us to keep track of the $D_n$ transformation properties of the fields.
In particular, we see from this analysis that for $k=1$, for which the $\mathfrak{psu}(1,1\vert 2)_1$ character is given by eq.~(\ref{s3:4:1:TensionlessPsuCharacter}), the first factor $Z^{\mathrm{gh}}_1$ of the ghost contribution (\ref{ghostcharacter}) cancels all the oscillator contributions of $\mathfrak{psu}(1,1\vert 2)_1$, \begin{equation}\label{psugh1} \left( Z^{\mathfrak{psu}(1,1\vert 2)_1} \cdot Z_1^{\mathrm{gh}} \right) (v,z,t;\tau) = \abs{\sum_{m \in \mathbb{Z} + \lambda} x^m q^{-mw + \frac{w^2}{2}}}^2. \end{equation} Thus we are only left with the zero mode sum that will be fixed by the mass-shell condition. As a consequence, the ${\rm AdS}_3 \times {\rm S}^3$ factor becomes `topological' for $k=1$, and all the remaining degrees of freedom come from the $\mathbb{T}^4$ part of the theory, as already argued in \cite{Eberhardt:2018ouy}. Finally, the second factor $Z^{\mathrm{gh}}_2$ of (\ref{ghostcharacter}) makes sure that the resulting fermionic degrees of freedom have the correct charge by transmuting the topologically twisted fermions transforming as $(\mathbf{2}_{+},\mathbf{2}_-)$ into the spacetime fermions transforming as $(\mathbf{2}_{\rm R},\mathbf{2}_+)$. \section{Calculating the orbifold}\label{sec:orbifold} As we have explained in Section~\ref{sec:hybridaction}, the action of $D_n$ on the hybrid fields can be described in terms of group rotations, see (\ref{DnActionhybrid}) as well as (\ref{Pchar}) and (\ref{Rchar}). In order to respect these group symmetries, the same action must then also be applied to the effective ghost contribution, see eq.~(\ref{ghostcharacter}). As a consequence, the cancellation between $Z^{\mathrm{gh}}_1$ and the $\mathfrak{psu}(1,1\vert 2)_1$ character at level $k=1$ continues to hold also for the orbifold theory, see (\ref{psugh1}).
Since the resulting expression is independent of the chemical potentials $(u,v,z)$, it is unaffected by the $D_n$ action.\footnote{Recall that (\ref{DnActionPSU}) describes the full $D_n$ action on the $\mathfrak{psu}(1,1\vert 2)$ superalgebra (including the action on the bosonic degrees of freedom).} It will therefore again be fixed by the mass-shell condition, exactly as in the case without orbifold, see also below. The other factor of the world-sheet partition function (before orbifolding) equals \begin{equation} \left( Z^{\mathbb{T}^4} \cdot Z_1^{\mathrm{gh}} \right) (u,v,z,t;\tau) = Z^{\mathbb{T}^4}_{\rm bos}(\tau) \, Z^{\mathbb{T}^4}_{\rm fer}(u,z,t;\tau) \ , \end{equation} where $Z^{\mathbb{T}^4}_{\rm bos}$ and $Z^{\mathbb{T}^4}_{\rm fer}$ are the bosonic and fermionic contributions coming from the $\mathbb{T}^4$, respectively \begin{equation} Z^{\mathbb{T}^4}_{\rm bos}(\tau) = \frac{Z_\Theta(\tau)}{|\eta(\tau)|^4} \ , \qquad Z^{\mathbb{T}^4}_{\rm fer}(u,z,t;\tau) = \Bigl| \frac{\vartheta_1(\frac{t+z+u}{2};\tau) \vartheta_1(\frac{t-z+u}{2};\tau)}{\eta^2(\tau)} \Bigr|^2 \ . \label{Zfer} \end{equation} Here $Z_\Theta(\tau)$ is the lattice theta function of the torus (that accounts for the winding and momentum excitations), and in the second term we have used that, because of the second factor of (\ref{ghostcharacter}), the fermions transform effectively in $(\mathbf{2}_{\rm R},\mathbf{2}_+)$, see the discussion at the end of the previous section. It is now straightforward to calculate the various orbifold contributions of the off-shell partition function. In particular, in the $(h,g)$ sector (where $h$ labels the twisted sector, and $g$ the insertion of the group element) we have \begin{equation} \partfunc{h}{g} (u,v,z,t;\tau) = \abs{ \sum_{m\in\mathbb{Z}+\lambda}x^m q^{-mw+\frac{w^2}{2}}}^2 \partfunc{h}{g}^{\mathbb{T}^4, \mathrm{bos}}(\tau) \, \, \partfunc{h}{g}^{\mathbb{T}^4,\mathrm{ferm}}(u,z,t;\tau)\ .
\label{s4:2:ghsector_zeroandT4} \end{equation} The orbifold contribution coming from the bosonic degrees of freedom on the torus can be calculated directly from the $D_n$ action on the torus lattice, and this works exactly as in \cite{Datta:2017ert}. On the other hand, for the fermionic degrees of freedom we can keep track of the $D_n$ action by using (\ref{Pchar}) and (\ref{Rchar}), and this leads to \begin{equation} \partfunc{h_{l,\beta}}{g_{k,\alpha}}^{\mathbb{T}^4,\, \mathrm{ferm}}(u,z,t;\tau) = Z_{\mathbb{T}^4}^{\mathrm{ferm}} \left(u+\bigl(\frac{\beta}{2}+\frac{2l}{n}\bigr)\tau + \frac{\alpha}{2}+\frac{2k}{n}\,,\ z+ \frac{\beta}{2}\tau + \frac{\alpha}{2}\,, \, t\,; \, \tau\right)\ , \label{FermionOrbifoldFormula} \end{equation} where we have labelled an arbitrary group element in $D_n$ by \begin{equation} g_{k,\alpha} = U^k P^\alpha \ , \qquad k = 0,1,\ldots,n-1\ , \quad \alpha=0,1 \ , \end{equation} and similarly for $h_{l,\beta}$. For $h=e$, i.e.\ $l=\beta=0$, this follows directly from eqs.~(\ref{Pchar}) and (\ref{Rchar}), and most of the other cases can be obtained using the modular transformation properties of these twisted twining characters, \begin{equation} \tau \rightarrow \frac{a\tau + b}{c\tau + d}\ , \qquad \partfunc{h}{g} \mapsto \partfunc{h^a g^b}{h^cg^d} \ , \end{equation} together with the modular properties of (\ref{Zfer}); in particular we need the identities \begin{equation} \begin{array}{rclrcl} \vartheta_1\bigl(\frac{z}{\tau};-\frac{1}{\tau}\bigr) & = & -i \,\sqrt{-i\tau} \, e^{ \frac{\pi i z^2}{\tau}}\, \vartheta_1(z;\tau) \qquad \qquad & \eta\bigl(-\frac{1}{\tau}\bigr) & = & \sqrt{-i\tau} \, \eta(\tau) \ , \\[4pt] \vartheta_1(z;\tau+1) & = & e^{i\frac{\pi}{4}} \, \vartheta_1(z;\tau) \qquad & \eta(\tau+1) & = & e^{i\frac{\pi}{12}} \, \eta(\tau) \ , \end{array} \end{equation} see \cite[Appendix~F]{Eberhardt:2018ouy} for our conventions. 
In fact, there is only one ${\rm SL}(2,\mathbb{Z})$ orbit of sectors that is not fixed by this argument, containing the representatives \begin{equation} \partfunc{P}{U^k}^{\mathbb{T}^4,\, \mathrm{ferm}}(u,z,t;\tau) = Z_{\mathbb{T}^4}^{\mathrm{ferm}} \left(u+\frac{\tau}{2}+ \frac{2k}{n}\,,\ z+ \frac{\tau}{2}\,, \, t\,; \, \tau\right)\ . \end{equation} The expression for this sector can be obtained by noting that in the $P$-twisted sector two of the four fermions are half-integer moded (while the other two have integer modes), and that the $U^k$ generators act as before. The final step consists of imposing the physical state conditions $L_0 = 0 = \bar{L}_0$, i.e.\ picking out the term $q^0 \bar{q}^{\, 0}$ in eq.~(\ref{s4:2:ghsector_zeroandT4}). As in \cite{Eberhardt:2018ouy}, only one term in the sum survives, namely the one for which \begin{equation} m = \frac{h^{\mathbb{T}^4}}{w} + \frac{w}{2} \ , \end{equation} and similarly for the right-movers. Thus the $(h,g)$ sector of the string partition function becomes \begin{equation} Z_{\rm string}^{(h,g)} (u,z;t) = \sum_{w=1}^{\infty} x^{\frac{w}{2}} \bar{x}^{\frac{w}{2}} \, \partfunc{h}{g}^{\mathbb{T}^4, \mathrm{bos}}\bigl(\frac{t}{w}\bigr) \, \, \partfunc{h}{g}^{\mathbb{T}^4,\mathrm{ferm}}\bigl(u,z,t;\frac{t}{w}\bigr) ' \ , \end{equation} where the prime indicates that only those states contribute for which $m-\bar{m}\in\mathbb{Z}$. 
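For orientation, the surviving zero mode and the appearance of the modular parameter $\frac{t}{w}$ can be spelled out schematically (this is just the bookkeeping behind the last two displayed equations):

```latex
% Mass-shell bookkeeping (schematic): in the w-spectrally flowed sector the
% net power of q multiplying a T^4 state of weight h^{T^4} is
% -mw + w^2/2 + h^{T^4}, so L_0 = 0 picks out
\[
  m \,=\, \frac{h^{\mathbb{T}^4}}{w} + \frac{w}{2}\ .
\]
% Substituting back into x^m = e^{2\pi i t m} gives
\[
  x^{m} \,=\, x^{\frac{w}{2}}\, e^{2\pi i \frac{t}{w}\, h^{\mathbb{T}^4}}\ ,
\]
% so the sum over T^4 states reassembles the torus partition function at
% modular parameter t/w, which is why the seed characters in the formula for
% Z_string^{(h,g)} above are evaluated at t/w.
```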
Using the same theta function identities as in \cite[eq.~(5.8)]{Eberhardt:2018ouy}, we can then finally rewrite this as\footnote{Here the symbols $\mathrm{R}$ and $\mathrm{NS}$ describe the moding of the fermions before considering the $h$-twisted sector, i.e.\ the twisting by $h$ changes the moding of the fermions relative to an integer moding (for the case of $\mathrm{R}$) and a half-integer moding (for the case of $\mathrm{NS}$).} \begin{equation} Z_{\rm string}^{(h,g)} (u,z;t) = \sum_{w \in 2\mathbb{N}} \abs{x^{\frac{w}{4}}}^2 \partfunc{h}{g}^{\mathbb{T}^4,\mathrm{R}}\bigl(u,z; \frac{t}{w}\bigr) + \sum_{w \in 2\mathbb{N}-1} \abs{x^{\frac{w}{4}}}^2 \partfunc{h}{g}^{\mathbb{T}^4,\mathrm{NS}} \bigl(u,z; \frac{t}{w}\bigr)\ . \label{s4:2:3:FullStringPartfunc_gh} \end{equation} This agrees exactly with the single particle (single cycle) part of the partition function of the symmetric orbifold of $\mathbb{T}^4 /D_n$, where $w$ describes the length of the single cycle. \section{Some comments about $k>1$}\label{sec:k>1} The analysis for $k>1$ actually works very similarly. In that case we can directly use the RNS description, and there is no need to introduce the hybrid fields. There are no null-vectors for $\mathfrak{sl}(2,\mathbb{R})_k$, so the underlying characters are the Verma module characters. If we concentrate on the long string sector, the analysis of \cite{Eberhardt:2019qcl} (together with the refinement explained in the previous section) goes through essentially unmodified, and the spacetime spectrum turns out to match exactly with \begin{equation}\label{dualorb} \mathrm{Sym}_N \Bigl( \bigl[ (\hbox{${\cal N}=4$ Liouville at $c=6(k-1)$}) \times \mathbb{T}^4 \bigr] / D_n \Bigr) \ . \end{equation} It remains to explain though how $D_n$ acts on the seed theory.
To this end we recall the description of the world-sheet degrees of freedom from (\ref{NSgen}), \begin{equation}\label{NSgen1} \mathfrak{sl}(2, \Rs)_{k+2} \oplus \mathfrak{su}(2)_{k-2} \oplus \mathbb{T}^4_{\rm bos} \oplus \mathrm{Fock}[b,c] \oplus \mathrm{Fock}[\bigl((\mathbf{2}_{\rm R},\mathbf{2}_+) \oplus (\mathbf{2}_{\rm R},\mathbf{2}_-)\bigr)\, \mathrm{fermions}] \ . \end{equation} The fermions in the $(\mathbf{2}_{\rm R},\mathbf{2}_+)$ combine with the bosons from $\mathbb{T}^4_{\rm bos}$ to give the supersymmetric $\mathbb{T}^4$ theory, on which $D_n$ acts as in (\ref{Dntorus}). The $(b,c)$ ghosts cancel two (of the three) bosonic degrees of freedom from $\mathfrak{sl}(2, \Rs)_{k+2}$, and thus the remaining degrees of freedom are \begin{equation}\label{5.3} \hbox{free boson} \oplus \mathfrak{su}(2)_{k-2} \oplus \mathrm{Fock}[(\mathbf{2}_{\rm R},\mathbf{2}_-) \, \mathrm{fermions}] \ , \end{equation} which just give rise to $\hbox{${\cal N}=4$ Liouville at $c=6(k-1)$}$, see the discussion in \cite[Section~6.2]{Eberhardt:2019qcl}. They transform under $D_n$ as follows. First the free boson arises from $\mathfrak{sl}(2, \Rs)_{k+2}$, and hence is invariant under $D_n$. On the remaining degrees of freedom, the action of $D_n$ can be read off from (\ref{DnActionRNS}): the rotation generator $U$ of $D_n$ acts trivially (since none of these fields are charged under $\mathfrak{su}(2)_+$), while the reflection generator $P$ rotates $\mathfrak{su}(2)_{k-2}$ by $180$ degrees (one of the currents, say $K^3$, is invariant, while the other two pick up a sign). As regards the fermions, we note that the reflection generator is embedded diagonally into $\mathfrak{su}(2)_{\rm R}\oplus\mathfrak{su}(2)_{-}$, see eq.~(\ref{DnActionRNS}).
The fermions in $(\mathbf{2}_{\rm R},\mathbf{2}_-)$ transform in the tensor product $\mathbf{2} \otimes \mathbf{2} = \mathbf{3} \oplus \mathbf{1}$ with respect to this diagonal $\mathfrak{su}(2)$, and hence their transformation property coincides with the geometric $D_n$ action on ${\rm S}^3$. (This is to say that three of the four fermions transform exactly as the currents $K^a$, while the remaining fermion is invariant.) In particular, we can combine the spacetime fields of (\ref{5.3}) into \begin{equation}\label{su2orb} \mathfrak{su}(2)^{(1)}_{k} \oplus \mathfrak{u}(1)^{(1)} \ , \end{equation} where the superscript $(1)$ refers to the ${\cal N}=1$ superconformal affine algebra (for which the fermions transform in the adjoint representation). The $P$ generator of $D_n$ then acts by rotating the $\mathfrak{su}(2)^{(1)}_{k}$ factor (including the fermions) by $180$ degrees, while leaving $\mathfrak{u}(1)^{(1)}$ invariant.\footnote{This agrees with eq.~(\ref{DnInnerAutoS3}), where now $\mathfrak{su}(2)^{(1)}_{k}$ is part of the spacetime Liouville theory.} This then defines the $D_n$ action on the ${\cal N}=4$ Liouville factor in (\ref{dualorb}), and hence specifies the dual CFT. This analysis applies irrespective of whether $k$ is even or odd. This is to be compared with the original analysis of \cite{Datta:2017ert}, where the duality only worked for $k$ odd since the $\mathfrak{u}(1)$ charge quantisation did not match between world-sheet and dual CFT. The reason why things are different here is that, unlike the case considered in \cite{Datta:2017ert}, the symmetric orbifold of the spacetime theory now also contains a Liouville factor which, in particular, includes the $\mathfrak{su}(2)_{k}^{(1)}$ algebra of eq.~(\ref{su2orb}). As we have just seen, this algebra is also orbifolded, thus providing an extra contribution to the charge quantisation in the twisted sector.
As a consequence the charges between the two descriptions now match irrespective of whether $k$ is even or odd.\footnote{We thank Lorenz Eberhardt for a useful discussion about this issue.} \smallskip Finally, for $k>1$ the DDF operators can be directly constructed in the RNS formalism \cite{Giveon:1998ns,Eberhardt:2019qcl}, and their $D_n$ transformation properties follow directly from those of the RNS fields. As in \cite{Eberhardt:2019qcl}, see in particular Sections~2.5 and 2.6, the relevant modes will be $w$-fractionally moded in the $w$-spectrally flowed sector, exactly as one should expect for the $w$-cycle twisted sector of the dual CFT. As a result, the operator algebra of the symmetric orbifold (\ref{dualorb}) can also be reproduced from the world-sheet. It also seems to imply that the spacetime theory is supersymmetric for any value of $k$, in contradiction to what was argued in \cite{Datta:2017ert}. \section{Conclusions}\label{sec:concl} In this paper we have shown that the duality between string theory on ${\rm AdS}_3 \times {\rm S}^3 \times \mathbb{T}^4$ with minimal NSNS flux ($k=1$) on the one hand, and the symmetric orbifold of $\mathbb{T}^4$ \cite{Eberhardt:2018ouy} on the other hand, extends also to the ${\cal N}=(2,2)$ supersymmetric $D_n$ orbifolds of these same backgrounds studied in \cite{Datta:2017ert}. Our results give strong support to the duality proposal of \cite{Datta:2017ert}, and they also demonstrate that the techniques of \cite{Eberhardt:2018ouy} are more widely applicable. It would be interesting to show that a similar analysis can be done for the ${\cal N}=(3,3)$ orbifold of \cite{Eberhardt:2018sce}; this should follow from the duality for ${\rm AdS}_3 \times {\rm S}^3 \times {\rm S}^3 \times {\rm S}^1$ that was first proposed in \cite{Eberhardt:2017pty} and then derived microscopically in \cite{Eberhardt:2019niq}. More fundamentally, it would be helpful to make these dualities more manifest. 
In this context it is curious to note that the Drinfel'd-Sokolov (or quantum Hamiltonian) reduction of the $\mathfrak{psu}(1,1\vert 2)_k$ supergroup WZW model leads exactly to the same $\mathcal{N} = 4$ Liouville theory (including its central charge) that appears for $k>1$ in \eqref{dualorb}, see e.g.~\cite{Ito92}, in particular the discussion around equation (19) of that paper with reference to table 1.\footnote{Incidentally, an analogous statement holds for the Drinfel'd-Sokolov reduction of $\mathfrak{d}(2,1;\alpha)$ (also given in \cite{Ito92}), and the Liouville factor in the ${\rm AdS}_3 \times {\rm S}^3 \times {\rm S}^3 \times {\rm S}^1$ string spectrum of \cite{Eberhardt:2019niq}.} If one could express the BRST charge of these Drinfel'd-Sokolov reductions in terms of the worldsheet BRST operator, this should lead to a more conceptual (and less background dependent) derivation of these dualities. It should also allow for a more direct derivation of the ``effective ghost" contribution of Section~\ref{sec:3.1}. \section*{Acknowledgements} We thank Lorenz Eberhardt for many useful discussions. The results of this paper are largely based on the Master thesis of one of us (JAM). We gratefully acknowledge the support of the NCCR SwissMAP that is funded by the Swiss National Science Foundation.
\section{Introduction} Recent years have seen tremendous advances in NLP systems' ability to handle discourse level phenomena, including discourse unit segmentation and connective detection \cite{zeldes-etal-2019-disrpt} as well as discourse relation classification (e.g.~\citealt{lin_ng_kan_2014, braud-etal-2017-cross, kobayashi-etal-2021-improving}). For segmentation and connective detection, the current state of the art (SOTA) is provided by models using Transformer-based, pretrained contextualized word embeddings \cite{MullerBraudMorey2019}, focusing on large context windows without implementation of hand-crafted features. For relation classification, SOTA performance on the English RST-DT benchmark \cite{CarlsonEtAl2003} has been achieved by neural approaches \citep{guz-carenini-2020-coreference, Kobayashi_Hirao_Kamigaito_Okumura_Nagata_2020, nguyen-etal-2021-rst, kobayashi-etal-2021-improving}. For PDTB-style data, the 2015 and 2016 CoNLL shared tasks on shallow discourse parsing \cite{xue-etal-2015-conll, xue-etal-2016-conll} have motivated work on both explicit (e.g.~\citealt{kido-aizawa-2016-discourse}) and implicit (e.g.~\citealt{liu-etal-aaai-implicit, wang-lan-2016-two, rutherford-etal-2017-systematic, kim-etal-2020-implicit, liang-etal-2020-extending, zhang-etal-2021-context}) discourse relation classification in English PDTB-2 \cite{PrasadEtAl2008} and PDTB-3 \cite{PrasadWebberLeeEtAl2019} as well as on the PDTB-style Chinese newswire corpus (CDTB, \citealt{zhou-xue-2012-pdtb, cdtb-zhou-etal}) such as \citet{schenk-etal-2016-really} and \citet{weiss-bajec-2016-discourse}. Our system for DISRPT 2021, called DisCoDisCo{} ({\bf Dis}trict of {\bf Co}lumbia {\bf Dis}course {\bf Co}gnoscente), extends the current SOTA architecture by introducing hand-crafted categorical and numerical features that represent salient aspects of documents' structural and linguistic properties.
While Transformer-based contextualized word embeddings (CWEs) have proven to be rich in linguistic features, they are not perfect \citep{rogers-etal-2020-primer}, and there are some textual features---such as position of a sentence within a document, or the number of identical words occurring in two discourse units---which are difficult or impossible for a typical Transformer-based CWE model to know. We therefore supplement CWEs with hand-crafted features in our model, with special attention paid to features we expect CWEs to have a poor grasp of. We implement our system with a pretrained Transformer-based contextualized word embedding model at its core, and dense embeddings of our hand-crafted features incorporated into it. Our exact approach varies by task: we use a tokenwise classification approach for EDU segmentation, a CRF-based sequence tagger for connective detection, and a BERT pooling classifier for relation classification. Our system is implemented in PyTorch \citep{pytorch} using the framework AllenNLP \cite{GardnerEtAl2018}. Our results show SOTA scores exceeding comparable numbers from the 2019 shared task, and ablation studies indicate the contribution of features beyond CWEs. \section{Previous Work} \paragraph{Segmentation and Connective Detection} Following the era of rule-based segmenters (e.g. \citealt{Marcu2000}, \citealt{ThanhAbeysingheHuyck2004}), \citet{SoricutMarcu2003} used probabilistic models over constituent trees for token-wise binary classification (i.e.~boundary/no-boundary). \citet{SporlederLapata2005} used a two-level stacked boosting classifier on chunks, POS tags, tokens and sentence lengths, among other features. \citet{HernaultPrendingerEtAl2010} used an SVM over token and POS trigrams as well as phrase structure trees. 
More recently, \citet{BraudLacroixSoegaard2017} used bi-LSTM-CRF sequence labeling on dependency parses, with words, POS tags, dependency relations, parent, grandparent, and dependency direction, achieving an F$_1$ of 89.5 on the English RST-DT benchmark \cite{CarlsonEtAl2003} with parser-predicted syntax. Approaches using CWEs as the only input feature \cite{MullerBraudMorey2019} have achieved an F$_1$ of 96.04 on the same dataset with gold sentence splits and 93.43 without, while for some smaller English and non-English datasets, approaches incorporating features and word embeddings remain superior (e.g.~for English STAC and GUM, as well as Dutch RST data, \citealt{YuZhuLiuEtAl2019}; and for Chinese, \citealt{bourgonje-schafer-2019-multi}; for more on these datasets see below). For connective detection, \citet{pitler2009using} used a MaxEnt classifier with syntactic features extracted from gold Penn Treebank \cite{MarcusSantoriniMarcinkiewicz1993} parses of PDTB \cite{PrasadEtAl2008} articles. \citet{patterson2013predicting} presented a logistic regression model trained on eight relation types from PDTB, with features in three categories: \textit{Relation-level} such as the connective signaling the relation; \textit{Argument-level} such as size or complexity of argument spans; and \textit{Discourse-level} features, targeting dependencies between the relation and its neighboring relations in the text (cf.~our approach to featurizing overall utilization of argument spans in the data below). \citet{polepalli2012automatic} used SVM and CRF to identify connectives in biomedical texts \cite{prasad2011biomedical}, with features such as POS tags, dependencies, and domain-specific semantic features from several biomedical gene/species taggers, in addition to predicted biomedical NER features.
Current SOTA approaches rely on sequence labeling in a BIO scheme with CWEs from either plain text \cite{MullerBraudMorey2019} or integrating word embeddings and dependency tree features (POS, dependencies, phrase spans, \citealt{YuZhuLiuEtAl2019}), depending on the dataset and availability of gold standard features. \paragraph{Discourse Relation Classification} Generally speaking, discourse relation classification assigns a relation label to two pieces of text from a set of predefined coherence or rhetorical relation labels \cite{stede2011discourse}, which varies across different discourse frameworks, corpora, and languages. Given different perspectives and theoretical frameworks, the implementation and evaluation of the relation classification task varies considerably. In Rhetorical Structure Theory (RST, \citealt{mann1988rhetorical}), discourse relations hold between spans of text and are hierarchically represented in a tree structure \cite{Zeldes2018book}. Performance is evaluated and reported using the micro-averaged, standard Parseval scores for a binary tree representation, following \citet{morey-etal-2017-much}. Current SOTA performance \cite{kobayashi-etal-2021-improving} on the English RST-DT benchmark \cite{CarlsonEtAl2003} with gold segmentation achieved a micro-averaged original Parseval score of 54.1 by utilizing both a span-based neural parser \cite{Kobayashi_Hirao_Kamigaito_Okumura_Nagata_2020} and a two-stage transition-based SVM parser \cite{wang-etal-2017-two} as well as leveraging silver data.
Since PDTB is a lexically grounded framework, discourse relation classification is also called sense classification in PDTB-style discourse parsing: a sense label is assigned to the discourse connective between two text spans when a discourse connective is present (i.e.~explicit relation classification) or a label is assigned to an adjacent pair of sentences when no discourse connective is present (i.e.~implicit relation classification) \cite{slp3chapter22}. Explicit relation classification is easier as the presence of the connective itself is considered the best signal of the relation label. Most systems from the 2016 CoNLL shared task on shallow discourse parsing adopted machine learning techniques such as SVM and MaxEnt with hand-crafted features \cite{xue-etal-2016-conll}. For instance, for the English PDTB-2 \cite{PrasadEtAl2008}, \citet{kido-aizawa-2016-discourse} achieved the best performance (an F$_1$ = 90.22) on the explicit relation classification task by implementing a majority classifier and a MaxEnt classifier while \citet{wang-lan-2016-two} achieved the best performance (F$_1$ = 40.91) on implicit relation classification using a convolutional neural network. \citet{wang-lan-2016-two} also achieved the best performance on the Chinese CDTB dataset \cite{zhou-xue-2012-pdtb} in the implicit relation classification task. More recent work on implicit relation classification has adopted a graph-based context tracking network to model the necessary context for interpreting the discourse and has gained better performance on PDTB-2 \cite{zhang-etal-2021-context}. In addition, the increase in the number of implicit relation instances in PDTB-3 \cite{PrasadWebberLeeEtAl2019} has sparked more interest in exploring their recognition, such as \citet{kim-etal-2020-implicit} and \citet{liang-etal-2020-extending}. 
\citet{kim-etal-2020-implicit} presented the first set of results on implicit relation classification for both top-level senses (four labels) and more fine-grained level-2 senses (amounting to 11 labels) in PDTB-3 from two strong sentence encoder models using BERT \cite{devlin-etal-2019-bert} and XLNet \cite{NEURIPS2019_dc6a7e65-xlnet}. Due to the novelty of the DISRPT 2021 relation classification task, which combines implicit and explicit relation classification across frameworks for an unlabeled graph structure, comparable scores do not yet exist at the time of writing. \section{Approach} Our system comprises two main components: one targeting segmentation and connective detection using neural sequence tagging (as binary classification and BIO tagging respectively), and one targeting relation classification using BERT \citep{devlin-etal-2019-bert} fine-tuning. We further enhance both components with the use of hand-crafted categorical and numeric features by encoding them in dense embeddings and introducing them into our neural models. \subsection{Segmentation and Connective Detection} Our model for segmentation and connective detection is structured as a sequence tagging model, as might be used for a task like POS tagging or entity recognition: the text is embedded, encoded with a single bi-LSTM, and decoded.
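To make the two output formats concrete, the following minimal sketch illustrates tokenwise binary labels for segmentation and BIO labels for connective spans; the label strings and helper names here are illustrative rather than the exact shared task encoding:

```python
# Illustrative sketch (our label strings, not necessarily the exact
# shared task format): segmentation is tokenwise binary classification,
# while connective detection labels spans with a BIO scheme.

def segmentation_labels(tokens, edu_starts):
    """One binary label per token: does an EDU begin here?"""
    return ["BeginSeg" if i in edu_starts else "O" for i in range(len(tokens))]

def connective_labels(tokens, conn_spans):
    """BIO labels for connective spans given as (start, end) token
    offsets, end exclusive."""
    labels = ["O"] * len(tokens)
    for start, end in conn_spans:
        labels[start] = "B-Conn"
        for i in range(start + 1, end):
            labels[i] = "I-Conn"
    return labels
```

A multi-token connective such as ``as soon as'' would thus receive the tags B-Conn I-Conn I-Conn.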
\newcommand\smallmodel[1]{{\tt\scriptsize #1}} \newcommand\model[1]{{\tt\footnotesize #1}} \newcommand\twoline[2]{\begin{tabular}{@{}c@{}}#1 \\ #2\end{tabular}} \begin{table*}[h!tb] \centering \begin{tabular}{lll} \hline \textbf{Lng.} & \textbf{Segmentation/Connective Detection} & \textbf{Relation Classification} \\ \hline \smallmodel{deu} & \smallmodel{xlm-roberta-large} & \smallmodel{bert-base-german-cased} \\ \smallmodel{eng} & \smallmodel{google/electra-large-discriminator} & \smallmodel{bert-base-cased} \\ \smallmodel{eus} & \smallmodel{ixa-ehu/berteus-base-cased} & \smallmodel{ixa-ehu/berteus-base-cased} \\ \smallmodel{fas} & \smallmodel{HooshvareLab/bert-fa-base-uncased} & \smallmodel{HooshvareLab/bert-fa-base-uncased} \\ \smallmodel{fra} & \smallmodel{xlm-roberta-large} & \smallmodel{dbmdz/bert-base-french-europeana-cased} \\ \smallmodel{nld} & \smallmodel{pdelobelle/robbert-v2-dutch-base} & \smallmodel{GroNLP/bert-base-dutch-cased} \\ \smallmodel{por} & \smallmodel{neuralmind/bert-base-portuguese-cased} & \smallmodel{neuralmind/bert-base-portuguese-cased} \\ \smallmodel{rus} & \smallmodel{DeepPavlov/rubert-base-cased} & \smallmodel{DeepPavlov/rubert-base-cased-sentence} \\ \smallmodel{spa} & \smallmodel{dccuchile/bert-base-spanish-wwm-cased} & \smallmodel{dccuchile/bert-base-spanish-wwm-cased} \\ \smallmodel{tur} & \smallmodel{dbmdz/bert-base-turkish-cased} & \smallmodel{dbmdz/bert-base-turkish-cased} \\ \smallmodel{zho} & \smallmodel{bert-base-chinese} & \smallmodel{hfl/chinese-bert-wwm-ext} \\ \hline \end{tabular} \caption{\label{tab:lms} CWE Models used, by language. All models were obtained from \url{huggingface.co}'s registry. Note that there is one exception for relation classification: on \model{eng.sdrt.stac}, \model{bert-base-uncased} is used.} \end{table*} \begin{table*}[ht] \centering \begin{tabular}{llll} \hline \textbf{Feature} & \textbf{Type} & \textbf{Example} & \textbf{Description} \\\hline UPOS tag & Cat. 
& PROPN & UD POS tag \\ XPOS tag & Cat. & NNP & Language-specific POS tag \\ UD deprel & Cat. & advmod & UD dependency relation \\ Head distance & Num. & 5 & Distance from a word to its head in its UD tree \\ Sentence type & Cat. & subjunctive & Captures mood and other high-level sentential features \\ Genre & Cat. & reddit & Genre of a document (where available, as in \model{eng.rst.gum}) \\ Sentence length & Num. & 23 & Length, in tokens, of a sentence. \\ \hline \end{tabular} \caption{Summary of 7 of the 12 features used for the segmentation and connective detection module. Every categorical feature is embedded in a space whose size is the square root of the total number of labels for the feature, and numerical features are log scaled.} \label{tab:segfeats} \end{table*} In the embedding layer, we rely on three kinds of embeddings: bi-LSTM encoded character embeddings ($d$ = 64); language-specific fastText \citep{bojanowski-etal-2017-enriching} static word embeddings ($d$ = 300); and language-specific contextualized word embeddings from pretrained models posted publicly on HuggingFace's model registry at \url{huggingface.co} and used via HuggingFace's \texttt{transformers} library \citep{wolf-etal-2020-transformers} ($d$ = 768/1024). The fastText embeddings are kept frozen during training, but the pretrained Transformer model's parameters are trainable, at a lower learning rate. Average pooling is used to obtain word-level representations from CWE sub-word representations. Multiple CWEs were evaluated for each language, and the one that yielded the best performance on the validation splits of the corpora for that language was selected, shown in \Cref{tab:lms}. These three representations are concatenated, yielding a vector of size $d_{\mathrm{emb}}$ for each word. In the next layer, we encode the embeddings along with a variety of features and a representation of the preceding and following sentence (see below). 
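The sub-word pooling and concatenation just described can be sketched as follows; this is a pure-Python illustration with function names of our own choosing, not our actual PyTorch implementation:

```python
# Minimal sketch (function names are ours): average-pool sub-word CWE
# vectors back to the word level, then concatenate the three embedding
# types into a single d_emb-dimensional vector per word.

def average_pool(subword_vecs, word_ids):
    """Group sub-word vectors by the word they belong to and average them."""
    grouped = {}
    for vec, wid in zip(subword_vecs, word_ids):
        grouped.setdefault(wid, []).append(vec)
    return [
        [sum(dim) / len(vecs) for dim in zip(*vecs)]
        for wid, vecs in sorted(grouped.items())
    ]

def word_embedding(char_vec, fasttext_vec, cwe_vec):
    """d_emb = d_char + 300 (fastText) + d_CWE."""
    return list(char_vec) + list(fasttext_vec) + list(cwe_vec)
```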
The features we compute are tokenwise, and cover a variety of grammatical and textual information that we expected would be useful for the task. Some of the features are described in \Cref{tab:segfeats}. In order to convert these features into tensors, every categorical feature is embedded in a space as big as the square root of the total number of labels for the feature, and every numerical feature is log scaled. This yields an additional $d_{\mathrm{feat}}$ dimensions for each word. In addition to the features, we also compute a representation of the current sentence's two neighboring sentences by embedding them and using a bi-LSTM to summarize them into a relatively low-dimensional ($d_{\mathrm{neighbors}}$ = 400) vector, which is concatenated onto every word's vector. Combining the feature dimensions and the neighboring sentences' dimensions, our input to the encoder is of size $d_{\mathrm{enc}} = d_{\mathrm{emb}} + d_{\mathrm{feat}} + d_{\mathrm{neighbors}}$. The sequence is fed through a bi-LSTM, and the label for each token is then predicted either by a linear projection layer or conditional random fields: CRF is used for connective detection datasets, and the linear projection layer is used for segmentation datasets.\footnote{We initially used a CRF on all datasets, but our experiments showed a small degradation on segmentation datasets when using CRF.} \begin{table*}[ht] \centering \begin{tabular}{llll} \hline \textbf{Feature} & \textbf{Type} & \textbf{Example} & \textbf{Description} \\\hline Genre & Cat. & reddit & Genre of a document (where available, as in eng.rst.gum) \\ Children* & Num. & 2 & No. of child discourse units each unit in the pair has \\ Discontinuous* & Cat. & false & Whether the unit's tokens are not all contiguous in the text \\ Is Sentence* & Cat. & true & Whether the unit is a whole sentence \\ Length Ratio & Num. & 0.3 & Ratio of unit 1 and unit 2's token lengths \\ Same Speaker & Cat. 
& true & Whether the same speaker produced unit 1 and unit 2 \\ Doc. Length & Num. & 214 & Length of the document, in tokens \\ Position* & Num. & 0.4 & Position of the unit in the document, between 0.0 and 1.0 \\ Distance & Num. & 7 & No. of other discourse units between unit 1 and unit 2 \\ Lexical Overlap & Num. & 3 & No. of overlapping non-stoplist words in unit 1 and unit 2 \\ \hline \end{tabular} \caption{Sample of features used for the relation classification module. Asterisked features apply twice for each instance, once for each unit. Combination of features varies per corpus---see code for full details. } \label{tab:relfeats} \end{table*} For the plain text segmentation scenario, we generate automatic sentence splits and Universal Dependencies parses using the Transformer-based sentence splitter used in the AMALGUM corpus \cite{GesslerEtAl2020} trained on the treebanked shared task training data, tagged using Stanza \cite{qi-etal-2020-stanza} and parsed using DiaParser\footnote{https://github.com/Unipisa/diaparser} \cite{AttardiEtAl2021}. For \model{fas.rst.prstc} and \model{zho.rst.sctb}, we split the text based on punctuation (on ‘.’, ‘!’, ‘?’ and Chinese equivalents) since experiments revealed that this approach yields better sentence boundaries. \subsection{Relation Classification} Our relation classification module has a simple architecture: a pretrained BERT model is used (again varying by language---cf.~\Cref{tab:lms}), and a linear projection and softmax layer is used on the output of the pooling layer to predict the label of the relation. The two units involved in every relation are prepared just as if they were being prepared for BERT's Next Sentence Prediction (NSP) task: a [CLS] token begins the sequence, a [SEP] token separates the two units in the sequence, and another [SEP] token appears at the end of the sequence. As an example, consider this instance from \model{eng.sdrt.stac}: \[ \texttt{\small[CLS] do we start ? 
[SEP] no [SEP]} \] Though this model was originally intended as a baseline, further experiments with e.g.~a separate encoder proved to be much less competitive. Our exact choice of pretrained model differs in most cases from the one used in the segmentation and connective detection task, primarily due to superior performance by models that were pretrained using the NSP task and had a pretrained pooler layer. This restricts the LM choice: for example, most models that are styled after RoBERTa \citep{liu2019roberta} are not pretrained using an NSP task. We select models using the same process as before, based on optimal performance on the validation (dev) sets of the corpora. The system is further enhanced with features. First, the direction feature on each relation is encoded using pseudo-tokens: if the direction of the relation is left to right (1>2), we insert the tokens {\tt \}} and {\tt >} around the first unit. In the example above, the direction of the relation is left to right (1>2), and the resulting sequence with pseudo-tokens is: \[ \texttt{\small[CLS] \} do we start ? > [SEP] no [SEP]} \] The same is done for right-to-left units, where the characters {\tt \{} and {\tt <} are used instead, but surrounding the second unit: \[ \texttt{\small[CLS] thanks [SEP] < im ok \{ [SEP]} \] Our motivation in doing this is to represent directionality for the BERT encoder in its native feature space, and experimental data show that it is helpful. Second, we introduce hand-crafted features in a step between the BERT model's embedding and encoder layers. Recall that BERT has a static embedding layer which projects each word-piece into its initial vector representation. Just before this input is sent to the Transformer encoder blocks, we expand the sequence by inserting a new vector in between the [CLS] token and the first token of unit 1. 
This feature vector bears sequence-level information, where categorical and numerical features have been encoded into a vector just as for the segmentation and connective detection module: numerical features are optionally log scaled or binned and embedded, and categorical features are embedded. The remaining dimensions after all features have been added to the vector are padded with 0. Unlike our approach for segmentation and connective detection, we change which features we use on a per-corpus basis, as preliminary experiments showed that using all features for all corpora can produce significant degradations, which we hypothesize are caused by feature sparseness in the training split leading to overfitting. A sample of the features we used is in \Cref{tab:relfeats}, and the list of which features were used for which corpus is available in our code. Specifically, for the \textsc{Lexical Overlap} feature in the table, we used the freely available stoplists used by the Python library spaCy \citep{spacy}. The \textsc{Same Speaker} feature has proven very useful in the STAC dataset, which is a corpus of strategic chat conversations \cite{asher-etal-2016-discourse}. The \textsc{Distance} feature is used in half of the datasets and has shown effectiveness regardless of annotation framework. Similarly, the \textsc{Position} feature has been shown to be beneficial for half of the corpora. The \textsc{Length Ratio} feature proved to be effective for the three PDTB-style datasets. For RST-style corpora, the number of \textsc{Children} of a nucleus or satellite unit is more effective. Moreover, the \textsc{Discontinuous} feature has also contributed to performance gain in several RST-style corpora such as \model{eng.rst.gum}, \model{eng.rst.rstdt}, \model{por.rst.cstn}, \model{spa.rst.rststb}, \model{zho.rst.sctb}, and \model{fas.rst.prstc}. The \textsc{Genre} feature is beneficial in corpora that have a wide range of text types such as \model{eng.rst.gum}. 
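The directional pseudo-token markup described above can be sketched as follows; the function name is ours, while the bracketing characters and the [CLS]/[SEP] layout follow the examples in the text (direction labels written here as 1>2 and 1<2):

```python
# Sketch of the directional pseudo-token markup (function name is ours).
# For a left-to-right (1>2) relation, "}" and ">" bracket unit 1; for a
# right-to-left (1<2) relation, "<" and "{" bracket unit 2.

def mark_direction(unit1, unit2, direction):
    if direction == "1>2":
        unit1 = ["}"] + unit1 + [">"]
    else:  # "1<2"
        unit2 = ["<"] + unit2 + ["{"]
    return ["[CLS]"] + unit1 + ["[SEP]"] + unit2 + ["[SEP]"]
```

On the STAC example above, `mark_direction(["do", "we", "start", "?"], ["no"], "1>2")` reproduces the sequence `[CLS] } do we start ? > [SEP] no [SEP]`.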
The direction feature was also included in the feature vector, as experiments showed that including it was helpful, despite the fact that the pseudo-tokens were already expressing it to the BERT encoder. \section{Results} \def\P{\phantom{$-$}} \begin{table*}[ht]% \centering \begin{subtable}{0.3985\textwidth} \resizebox{\columnwidth}{!}{% \begin{tabular}{l|c|c|c|l|c} \hline \textbf{Corpus} & \textbf{P} & \textbf{R} & \textbf{F$_1$} & \textbf{2019 Best F$_1$} & \textbf{vs. 2019} \\ \hline deu.rst.pcc & 97.07 & 94.15 & \textbf{95.58} & 94.99 (ToNy) & \P0.59\phantom{$-$} \\ eng.rst.gum & 93.90 & 94.43 & \textbf{94.15} & -- & -- \\ eng.rst.rstdt & 96.39 & 96.89 & \textbf{96.64} & 96.04 (ToNy) & \P0.60\phantom{$-$} \\ eng.sdrt.stac & 96.25 & 93.63 & 94.91 & \textbf{95.32} (GumDrop) & $-$0.41\phantom{$-$} \\ eus.rst.ert & 93.42 & 87.73 & \textbf{90.46} & -- & -- \\ fas.rst.prstc & 92.79 & 93.10 & \textbf{92.94} & -- & -- \\ fra.sdrt.annodis & 89.43 & 90.65 & \textbf{90.02} & -- & -- \\ nld.rst.nldt & 97.50 & 94.50 & \textbf{95.97} & 95.45 (GumDrop) & \P0.52\phantom{$-$} \\ por.rst.cstn & 93.18 & 95.56 & \textbf{94.35} & 92.92 (ToNy) & \P1.43\phantom{$-$} \\ rus.rst.rrt & 85.57 & 86.89 & \textbf{86.21} & -- & -- \\ spa.rst.rststb & 92.53 & 91.96 & \textbf{92.22} & 90.74 (ToNy) & \P1.48\phantom{$-$} \\ spa.rst.sctb & 83.44 & 81.55 & 82.48 & \textbf{83.12} (ToNy) & $-$0.64\phantom{$-$} \\ zho.rst.sctb & 90.30 & 77.38 & \textbf{83.34} & 81.67 (DFKI) & \P1.67\phantom{$-$} \\ \hline eng.pdtb.pdtb & 92.93 & 91.15 & \textbf{92.02} & -- & \phantom{$-$}--\phantom{$-$} \\ tur.pdtb.tdb & 93.71 & 94.53 & \textbf{94.11} & -- & \phantom{$-$}--\phantom{$-$} \\ zho.pdtb.cdtb & 89.19 & 85.95 & \textbf{87.52} & -- & \phantom{$-$}--\phantom{$-$} \\ \hline \textbf{mean} & 92.35 & 90.63 & 91.43 & -- & -- \\ \hline \end{tabular} } \caption{Results for Gold Treebanked Data.} \label{tab:seggold} \end{subtable}% \hspace{0.04\textwidth}% \begin{subtable}{0.4615\textwidth}%
\resizebox{\columnwidth}{!}{% \begin{tabular}{l|c|c|c|l|c|c} \hline \textbf{Corpus} & \textbf{P} & \textbf{R} & \textbf{F$_1$} & \textbf{2019 Best F$_1$} & \textbf{vs. 2019} & \textbf{vs. Gold} \\ \hline deu.rst.pcc & 95.15 & 92.86 & 93.94 & \textbf{94.68} (ToNy) & $-$0.74\phantom{$-$} & $-$1.64 \\ eng.rst.gum & 92.65 & 92.59 & \textbf{92.61} & -- & -- & $-$1.54\\ eng.rst.rstdt & 96.80 & 95.92 & \textbf{96.35} & 93.43 (ToNy) & \P2.92\phantom{$-$} & $-$0.28 \\ eng.sdrt.stac & 91.77 & 92.06 & \textbf{91.91} & 83.99 (ToNy) & \P7.92\phantom{$-$} & $-$3.00 \\ eus.rst.ert & 92.70 & 88.38 & \textbf{90.47} & -- & -- & \phantom{$-$}0.01\\ fas.rst.prstc & 92.95 & 92.78 & \textbf{92.86} & -- & -- & $-$0.08\\ fra.sdrt.annodis & 87.95 & 83.79 & \textbf{85.78} & -- & -- & $-$4.24\\ nld.rst.nldt & 96.97 & 92.54 & \textbf{94.69} & 92.32 (ToNy) & \P2.37\phantom{$-$} & $-$1.29\\ por.rst.cstn & 93.21 & 95.03 & \textbf{94.11} & 91.86 (ToNy) & \P2.25\phantom{$-$} & $-$0.25\\ rus.rst.rrt & 87.31 & 84.24 & \textbf{85.74} & -- & -- & $-$0.47\\ spa.rst.rststb & 93.30 & 90.30 & \textbf{91.76} & 89.60 (ToNy) & \P2.16\phantom{$-$} & $-$0.46\\ spa.rst.sctb & 83.97 & 77.98 & 80.86 & \textbf{81.65} (ToNy) & $-$0.79\phantom{$-$} & $-$1.62\\ zho.rst.sctb & 84.04 & 70.00 & \textbf{76.21} & 73.10 (GumDrop) & \P3.08\phantom{$-$} & $-$7.13\\ \hline eng.pdtb.pdtb & 94.29 & 90.92 & \textbf{92.56} & -- & \phantom{$-$}--\phantom{$-$} & \phantom{$-$}0.54 \\ tur.pdtb.tdb & 91.98 & 95.22 & \textbf{93.56} & -- & \phantom{$-$}--\phantom{$-$} & $-$0.55 \\ zho.pdtb.cdtb & 90.27 & 86.54 & \textbf{88.35} & -- & \phantom{$-$}--\phantom{$-$} & \phantom{$-$}0.83 \\ \hline \textbf{mean} & 91.58 & 88.82 & 90.11 & -- & -- & $-$1.32 \\ \hline \end{tabular} } \caption{Results for Plain Tokenized Data.} \end{subtable} \caption{Segmentation and Connective Detection Results. All numbers are averaged over five runs in order to accommodate instability in the training process which leads to varying performance. 
If a corpus was included in the 2019 shared task and has not been significantly modified since then, we also include the best result on the corpus in 2019 for comparison. }% \label{tab:seg}% \end{table*} \subsection{Segmentation and Connectives} Table \ref{tab:seg} gives scores on EDU segmentation and connective detection in the two shared task scenarios: treebanked and plain text, as well as the best previously reported score and system for datasets which are unchanged from 2019 (see \citealt{zeldes-etal-2019-disrpt} for details). We find strong performance in both the treebanked and plain tokenized data scenarios: our system nearly always outperforms the best score from 2019, and we observe especially large gains for connective detection. On treebanked data, the results show that performance has improved since 2019 on nearly all unchanged datasets, with degradations of only around 0.5\% for \model{eng.sdrt.stac} and \model{spa.rst.sctb} compared to the previous best systems, GumDrop and ToNy respectively. For some datasets, gains are dramatic, most notably for Turkish (14.5\% gain) and Chinese connective detection (8.4\%), which is perhaps due to the availability of better language models and our use of conditional random fields. On average the improvement on treebanked data is close to 3\% for datasets represented in 2019. On plain tokenized data, the improvement from 2019 is even more pronounced, with an average gain of 3.7\% compared to 2.8\% for treebanked data. While performance on some corpora was roughly constant regardless of whether data was treebanked or plain tokenized (e.g. \model{eng.rst.rstdt}, \model{por.rst.cstn}), it dropped considerably for some corpora on plain tokenized data. This effect is most dramatic for \model{zho.rst.sctb}, where we see a degradation of 7.1\%. This effect cannot be explained just by the amount of training data available: correlation between training token count and degradation is low (Pearson's $r$ = 0.092, $p$ = 0.74). 
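For reference, the correlation reported above is a plain Pearson product-moment coefficient, computable as in the following generic sketch (the per-corpus token counts and degradations themselves are not repeated here):

```python
from math import sqrt

def pearson_r(xs, ys):
    """Plain Pearson product-moment correlation between two samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```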
We speculate that these degradations are primarily due to idiosyncrasies of the corpora. \model{eng.sdrt.stac}, for instance, has a mild degradation ($-$3\%), which we believe to be primarily due to the lack of punctuation and capitalization compared to e.g.~a newswire corpus like \model{eng.rst.rstdt}, which exhibited very little degradation. From this we might infer that degradation in the absence of treebanked data will be correlated with the degree to which predicting sentence splits from plain text is difficult. We additionally hypothesize that a lack of gold sentence breaks affects RST datasets more than PDTB datasets, since the beginning of a sentence is almost always the beginning of a new elementary discourse unit, while connectives are mainly identified lexically, and need to be identified regardless of the relative position of sentence splits. \paragraph{Ablation Study} \begin{table} \resizebox{\columnwidth}{!}{% \begin{tabular}{lccccc} \hline \textbf{Corpus} & \textbf{F$_1$} (all) & \multicolumn{2}{c}{\textbf{F$_1$} (no feats.)} & \multicolumn{2}{c}{\textbf{F$_1$} (CWE only)} \\ & & {\it abs.} & {\it gain} & {\it abs.} & {\it gain} \\ \hline deu.rst.pcc & 95.58 & 96.28 & -0.70 & 95.81 & -0.23 \\ eng.rst.gum & 94.15 & 89.74 & \hphantom{0}4.41 & 92.23 & \hphantom{0}1.92 \\ eng.rst.rstdt & 96.64 & 91.41 & \hphantom{0}5.23 & 94.59 & \hphantom{0}2.05 \\ eng.sdrt.stac & 94.91 & 94.54 & \hphantom{0}0.37 & 94.54 & \hphantom{0}0.37 \\ eus.rst.ert & 90.46 & 91.01 & -0.54 & 90.76 & -0.30 \\ fas.rst.prstc & 92.94 & 92.99 & -0.05 & 93.24 & -0.30 \\ fra.sdrt.annodis & 90.02 & 88.80 & \hphantom{0}1.22 & 88.48 & \hphantom{0}1.54 \\ nld.rst.nldt & 95.97 & 95.48 & \hphantom{0}0.50 & 95.11 & \hphantom{0}0.86 \\ por.rst.cstn & 94.35 & 92.65 & \hphantom{0}1.71 & 93.93 & \hphantom{0}0.42 \\ rus.rst.rrt & 86.21 & 86.18 & \hphantom{0}0.03 & 86.01 & \hphantom{0}0.20 \\ spa.rst.rststb & 92.22 & 92.04 & \hphantom{0}0.18 & 92.39 & -0.17 \\ spa.rst.sctb & 82.48 &
84.21 & -1.73 & 83.32 & -0.84 \\ zho.rst.sctb & 83.34 & 84.87 & -1.54 & 82.60 & \hphantom{0}0.74 \\ \hline eng.pdtb.pdtb & 92.02 & 87.72 & \hphantom{0}4.30 & 82.42 & \hphantom{0}9.60 \\ tur.pdtb.tdb & 94.11 & 94.02 & \hphantom{0}0.10 & 93.54 & \hphantom{0}0.57 \\ zho.pdtb.cdtb & 87.52 & 88.53 & -1.01 & 88.14 & -0.62 \\ \hline \textbf{mean} & 91.43 & 90.65 & \hphantom{0}0.78 & 90.45 & \hphantom{0}0.99 \\ \hline \end{tabular} } \caption{F$_1$ scores for ablations on gold treebanked data: next to normal scores from \Cref{tab:seggold}, we report scores without handcrafted features, and without character embeddings and fastText static word embeddings, as well as the ``gain'' for each (non-ablated score -- ablated score). Due to time constraints, ablations are based on {\it three} runs instead of the standard five.} \label{tab:seg_abl}% \end{table} In order to assess the importance of the various modules of our segmentation and connective detection system, we conduct an ablation study. Due to the large computational expense of conducting full runs over all datasets, we choose only two ablation conditions. In the first, we remove all handcrafted features described in \Cref{tab:segfeats}. In the second, we remove character embeddings and fastText static word embeddings, leaving only contextualized word embeddings. The results of this study are given in \Cref{tab:seg_abl}. The general trend in the results of the ablation study seems to be that both handcrafted features and supplementary word embeddings are helpful on average, though they may sometimes lead to minor degradations, and have a dramatically pronounced effect on a few corpora in particular. Handcrafted features have a mild effect on most corpora but lead to large gains for GUM, RST-DT, and PDTB. It is not immediately clear why this might be: performance on GUM, which is diverse with respect to genre, probably benefits from having a genre feature, but RST-DT and PDTB are homogeneous with respect to genre.
We also note that GUM, RST-DT, and especially PDTB are large corpora, so perhaps the explanation lies in their size, but RRT is also very large and has multiple genres, yet handcrafted features led to nearly no gain on this dataset. Turning now to the CWE-only ablation, we see a similar pattern: most corpora are only mildly affected by the inclusion of non-CWE embeddings, with a couple (GUM, RST-DT) showing a moderate gain of 2\%, and one corpus (PDTB) showing an anomalous gain of 10\%. Just as with the handcrafted feature ablation, it is difficult to know what could explain these corpora's divergent behavior. Ordinarily, static word embeddings might benefit small corpora with OOV items in the test set, since the embedding space will be stable in the unseen data -- however, PDTB is very large and homogeneous (newswire), making this explanation unlikely. Since no other corpus showed such a dramatic drop with non-CWE embeddings ablated, and since other CWE-based systems at DISRPT 2021 score around what our system would have scored if the drop had been more in line with what was observed for other corpora (2\% drop, for a score in the low 90s, as was achieved by disCut and SegFormers), we speculate that the 10\% drop observed here is due to some kind of implementation error or statistical fluke due to the nondeterminism of training models on GPUs, though the effect survives in the system reproduction on the Shared Task evaluators' machine. In sum, our ablation study for segmentation and connective detection suggests that neither handcrafted features nor non-CWE embeddings are silver bullets, though they are often helpful. Degradations were seen more often on smaller datasets, which perhaps indicates that in low-data situations these additional resources may serve primarily as a source of overfitting.
But both were on average responsible for a 1\% gain,\footnote{Assuming that they can be treated independently, which is an idealization.} which shows that both are useful, and invites the question of whether there might be even better handcrafted features, which could be tailored more accurately to properties of specific target languages and genres. \subsection{Relation Classification} \begin{table}[t] \resizebox{\columnwidth}{!}{% \begin{tabular}{lcccc} \hline \textbf{Corpus} & \textbf{\# Relations} & \twoline{\textbf{Accuracy}}{\textbf{(w/ feats.)}} & \twoline{\textbf{Accuracy}}{\textbf{(w/o feats.)}} & \twoline{\textbf{Feature}}{\textbf{Gain}} \\\hline deu.rst.pcc & 26 & 39.23 & 33.85 & \phantom{0}5.38 \\ eng.pdtb.pdtb & 23 & 74.44 & 75.63 & -1.19 \\ eng.rst.gum & 23 & 66.76 & 62.65 & \phantom{0}5.55 \\ eng.rst.rstdt & 17 & 67.10 & 66.45 & \phantom{0}0.65 \\ eng.sdrt.stac & 16 & 65.03 & 59.67 & \phantom{0}5.36 \\ eus.rst.ert & 29 & 60.62 & 59.59 & \phantom{0}1.03 \\ fra.sdrt.annodis & 18 & 46.40 & 48.32 & -1.92 \\ nld.rst.nldt & 32 & 55.21 & 52.15 & \phantom{0}3.06 \\ por.rst.cstn & 32 & 64.34 & 67.28 & -2.94 \\ rus.rst.rrt & 22 & 66.44 & 65.46 & \phantom{0}0.98 \\ spa.rst.rststb & 29 & 54.23 & 54.23 & \phantom{0}0.00 \\ spa.rst.sctb & 25 & 66.04 & 61.01 & \phantom{0}5.03 \\ tur.pdtb.tdb & 23 & 60.09 & 57.58 & \phantom{0}2.51 \\ zho.pdtb.cdtb & 9 & 86.49 & 87.34 & -0.85 \\ zho.rst.sctb & 26 & 64.15 & 64.15 & \phantom{0}0.00 \\ fas.rst.prstc & 17 & 52.53 & 51.18 & \phantom{0}1.35 \\\hline \textbf{mean} & & 61.82 & 60.41 & \phantom{0}1.41 \\ \hline \end{tabular} } \caption{Relation Classification Results. The score for each corpus is averaged over 5 runs. Also included is the score {\it without} any hand-crafted features.} \label{tab:rel-clf-results} \end{table} Table \ref{tab:rel-clf-results} gives scores (averaged over 5 runs for each corpus) on relation classification on all 16 corpora.
We include performance on all corpora without any hand-crafted features added in order to assess their utility, and we find that they appreciably boost performance, granting on average a 1.5\% accuracy gain, with some of the biggest gains on small corpora with many labels like \model{deu.rst.pcc} ($+$5.38\%) and \model{spa.rst.sctb} ($+$5.03\%). Since the difficulty of classification increases with the number of labels, we also include the number of relation types for each corpus in order to contextualize the scores. The \model{zho.pdtb.cdtb} corpus achieved the highest accuracy score, as there are only 9 relation types to classify,% \footnote{Unlike the other two PDTB-style corpora (i.e.~\model{eng.pdtb.pdtb} and \model{tur.pdtb.tdb}), where the predicted labels are truncated at Level-2 (e.g.~\textsc{Temporal.Asynchronous}), the relation labels in \model{zho.pdtb.cdtb} only contain one level (e.g.~\textsc{Temporal}).} and scores tend to be lower for corpora with many relations like \model{nld.rst.nldt}. There is much variance in how much performance on each corpus was able to benefit from having additional features. Many of the corpora that had the largest gains are small, but this is not always the case: \model{tur.pdtb.tdb}, one of the larger corpora, has its score improved by 2.5\%. On the other hand, while small corpora generally seem to benefit more from features, not all are able to: \model{fra.sdrt.annodis}, a small corpus, sees a degradation of 2\% with features. We expect that much of the differential benefit of features is to be explained by the nature of the label sets used in different corpora, and the available features. No two of these corpora use exactly the same label set, and label sets vary quite a bit in the linguistic phenomena that they encode and are sensitive to.
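To make the role of instance-level features concrete, the following is a minimal, hypothetical sketch of how shared and framework-dependent features of the kind discussed here (distance between units, unit lengths, position in document, genre, number of children) might be assembled per instance. The function and field names are our own illustrative choices, not the system's actual implementation.

```python
# Hypothetical sketch of instance-level feature assembly; field and function
# names are illustrative, not DisCoDisCo's actual implementation.
def pair_features(unit1, unit2, doc, framework):
    feats = {
        # Formalism-agnostic features, available for all corpora.
        "distance": abs(unit2["index"] - unit1["index"]),
        "len_unit1": len(unit1["tokens"]),
        "len_unit2": len(unit2["tokens"]),
        "doc_position": unit1["index"] / max(1, doc["n_units"]),
    }
    # Corpus-dependent features: only some datasets provide genre metadata.
    if doc.get("genre") is not None:
        feats["genre"] = doc["genre"]
    # Framework-dependent features: number of children is not meaningful
    # for PDTB-style (shallow) annotations.
    if framework != "pdtb":
        feats["n_children"] = unit1.get("n_children", 0)
    return feats

u1 = {"index": 3, "tokens": ["But", "it", "rained"], "n_children": 2}
u2 = {"index": 5, "tokens": ["so", "we", "stayed", "home"]}
doc = {"n_units": 40, "genre": "fiction"}
rst_feats = pair_features(u1, u2, doc, framework="rst")
pdtb_feats = pair_features(u1, u2, {"n_units": 40}, framework="pdtb")
```

The point of the sketch is only that feature availability differs by corpus and framework, so gains from features are not directly comparable across datasets.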
Additionally, different corpora have different features available, such as genre (GUM, RRT, RSTSTB, PRSTC, SCTB -- yes; PCC, PDTB, RST-DT, TDB -- no), gold speaker information (STAC), discontinuity (Annodis, ERT, GUM, PDTB, RST-DT -- yes; PCC, NLDT, STAC -- no), etc., meaning that looking at gain across datasets is not comparing like with like. While some features are available for all corpora, such as distance, unit lengths, or position in document, others are restricted by framework, such as number of children, which is not relevant for PDTB-style data. A set of formalism-agnostic features (e.g.~length\_ratio, is\_sentence, and the direction of the dependency head of the unit) was used for PDTB-style data across the board and was only effective for TDB: we hypothesize that the English PDTB dataset is so big that generic features do not add much value; for CDTB, as we found in our error analysis, the 9 relations are sometimes not very distinct from each other, and these generic features do not help with disambiguation in those cases. Overall, the picture for relations is similar to the one that arose for segmentation and connective detection: features are helpful on the whole, slightly harmful in some cases, and especially helpful on some corpora. More work remains to be done in understanding the contribution of individual features and how these relate to the frameworks and data types available in each language. \section{Discussion} \begin{figure*}% \centering \begin{subfigure}{0.48\textwidth} \includegraphics[width=9cm, trim={0cm 0.5cm 1cm 0cm}, clip]{confmat_gum.pdf} \caption{eng.rst.gum (GUM)} \end{subfigure} \begin{subfigure}{0.48\textwidth} \includegraphics[width=9cm, trim={0cm 0.5cm 1cm 0cm}, clip]{confmat_stac.pdf} \caption{eng.sdrt.stac (STAC)} \end{subfigure} \caption{Confusion Matrices for Common Relations in the Highest and Lowest Scoring EDU Datasets.
}% \label{fig:confmat}% \end{figure} Figure \ref{fig:confmat} shows confusion matrices for common relations in the highest and lowest scoring EDU datasets, \model{eng.rst.gum} (GUM) and \model{eng.sdrt.stac} (STAC). Both panels reveal issues with over-prediction of the most common labels, which can be thought of as `defaults': the most common relation, \textsc{elaboration}, in GUM and the second most common, \textsc{Comment}, in STAC. The actual most common relation in STAC, \textsc{Question}, does not suffer from false positives, likely due to the frequent and reliable question mark cue, \textit{wh}-words or subject-verb inversion, combined with the availability of gold speaker and direction information (\textsc{Question} only links units from different speakers, left to right). The same is true for \textsc{question} in GUM, which also obtained a comparatively high score. Conversely, rarer relations are hardly predicted, with \textsc{antithesis} in GUM being predicted for less than half of its true instances, and similarly for \textsc{Result} in STAC, suggesting a class imbalance problem, in particular given that both these relations are sometimes marked by overt discourse markers. Although EDU datasets (RST/SDRT) do not distinguish explicit and implicit relations, our analysis suggests that explicit signals are important. For GUM, the model scored high on medium-frequency relations with clear cues such as \textsc{attribution}, which are always signalled by attribution verbs such as \textit{believe} and \textit{say}. This also holds true for the \model{eng.rst.rstdt} corpus, where \textsc{attribution} is the highest-scoring relation.
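The per-label analysis behind Figure \ref{fig:confmat} can be reproduced with a few lines of code; the labels and counts below are toy values chosen to mimic the over-prediction of a frequent `default' label, not the actual GUM or STAC data.

```python
from collections import Counter

def confusion_matrix(gold, pred, labels):
    """Tally (gold, predicted) label pairs into a matrix (rows = gold)."""
    counts = Counter(zip(gold, pred))
    return [[counts[(g, p)] for p in labels] for g in labels]

def per_label_recall(matrix, labels):
    """Fraction of gold instances of each label predicted correctly."""
    recall = {}
    for i, lab in enumerate(labels):
        total = sum(matrix[i])
        recall[lab] = matrix[i][i] / total if total else 0.0
    return recall

# Toy data: a frequent "default" label absorbs instances of rarer labels.
labels = ["elaboration", "attribution", "antithesis"]
gold = ["elaboration"] * 6 + ["attribution"] * 3 + ["antithesis"] * 3
pred = ["elaboration"] * 6 + ["attribution", "attribution", "elaboration"] \
     + ["elaboration", "elaboration", "antithesis"]
m = confusion_matrix(gold, pred, labels)
recall = per_label_recall(m, labels)
```

In this toy setting the rare label is recovered for only one of its three gold instances, while the frequent default attracts the rest -- the same pattern visible in the matrices for \textsc{elaboration} and \textsc{Comment}.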
GUM's \textsc{elaboration} and \textsc{joint}, as well as \textsc{sequence} and \textsc{joint}, are the two pairs of relation labels most frequently confused with each other: the former pair contains the two most overgeneralized labels, while the latter pair contains two \textsc{multinuclear} relation types that are easily confused when no explicit connective or lexical item indicating a sequential order of actions or events is present \cite{liu-2019-beyond}. Relations with relatively unambiguous markers, such as \textsc{condition}, show good results in both GUM and STAC, indicating that even relatively rare relations can be identified if they are usually explicitly signalled. Relations such as \textsc{justify}, \textsc{result}, and \textsc{cause} scored low in both matrices, since such instances in the test data often lack explicit discourse markers that would help identify the rhetorical relation between the units in context. In the presence of an ambiguous discourse marker, predictions prefer the relation that is more prototypically associated with that marker: for instance, the gold relation label for \ref{gum_error_1} is \textsc{result}, whereas the model classified it as \textsc{sequence}, likely because the discourse marker \textit{then} tends to be a strong signal for \textsc{sequence}. \ex. \textbf{Then} she lets go and falls . You scream . [GUM\_fiction\_falling] \label{gum_error_1} In fact, if it were not the case that RST mandates a single outgoing relation per discourse unit, it would be possible to claim a concurrent sequential relation (likely to a previous unit) next to the annotated \textsc{result} relation for the pair in the example (see \citealt{Stede2008} on concurrent relations in RST). \section{Conclusion} We have presented DisCoDisCo, a system for all tasks in the DISRPT 2021 Shared Task: EDU segmentation, connective detection, and relation classification.
Our system relies on sequence tagging and sentence pair classification architectures powered by CWEs and supported by rich, handcrafted, instance-level features, such as position in the document, distance between units, gold speaker information, document metadata, and more. Our results suggest that powerful pretrained language models are the main drivers of performance, with additional features providing small to medium improvements (with some exceptions, such as the high importance of speaker information for chat data as in STAC). For relation classification, CWEs pretrained using an NSP task proved to be superior. Our error analysis suggests, unsurprisingly, that class imbalances, especially in the case of relations that tend to be implicit (i.e.~lack overt lexical signals), lead to over-prediction of majority classes, suggesting a need for more training data for the minority ones. However, we are encouraged by improvements on datasets that were featured in the 2019 Shared Task \cite{zeldes-etal-2019-disrpt}, and by the overall high scores obtained by the system across a range of datasets, all while including some correct predictions for relatively rare relations. We hope that the growing availability of annotated data, coupled with architectures that can harness pre-trained models, will lead to further improvements in the near future.
\section{Introduction} Digital images provide a powerful and intuitive way to represent the physical world. Unfortunately, noise is inevitable in data that are acquired or transmitted. When recovering an underlying image from its corrupted measurements, one requires a \textit{fidelity} term to properly model the discrepancy in the image formation model, as well as a \textit{regularization} term to refine the solution space of this inverse problem. The choice of such a data fidelity term often depends on the specific application, in particular on the assumed noise distribution~\cite{bungert2020variational}. For example, a standard approach for additive Gaussian noise is least-squares fitting. Using the maximum a posteriori (MAP) estimation, Aubert and Aujol \cite{aubert2008variational} formulated a non-convex data fidelity term for multiplicative noise, which can be solved via a difference of convex algorithm \cite{li2016variational}. In photon-counting devices such as x-ray computed tomography (CT) \cite{elbakri2002statistical,kak2001principles} and positron emission tomography (PET) \cite{vardi1985statistical}, the number of photons collected by a device follows a Poisson distribution; the resulting noise is thus referred to as Poisson noise. Following the MAP estimation under Poisson statistics, the data discrepancy for Poisson noise can be modeled by a log-likelihood form \cite{chowdhury2020non,chowdhury2020poisson,le2007variational}. Since the nonlinearity of such a data fidelity term causes computational difficulties, a popular approach in CT reconstruction adopts a weighted least-squares model \cite{thibault2007three} as the data fitting term. To date, research in the image processing community has largely focused on developing regularization methods that exploit the prior knowledge and/or the special structures of an imaging problem.
For instance, the classic Tikhonov regularization~\cite{tikhonov1943stability} returns a smooth output in an attempt to remove the noise, but at the cost of smearing out important structures and edges. Total variation (TV) \cite{rudin1992nonlinear} is an edge-preserving regularization in that it tends to diffuse along edges rather than across them, but it causes staircasing (blocky) artifacts. As remedies, total generalized variation (TGV) \cite{bredies2010total} and fractional-order TV (FOTV) \cite{zhang2015total} were proposed to preserve higher-order smoothness. In addition, non-local regularizations \cite{lou2010image,zhang2010bregmanized} based on patch similarities \cite{buades2005review} work well for textures and repetitive patterns in an image. Instead of proposing explicit regularization models, we reveal in this paper that implicit regularization effects can be achieved by using the $L^2$-based Sobolev norms as a data fidelity term. Recall that a Sobolev space is a vector space of functions equipped with a norm that combines the $L^p$ norms of the function and its derivatives up to a given order. We are particularly interested in the $L^2$-based Sobolev spaces, often referred to as the $H^s$ spaces for $s\in \mathbb R,$ as they are well-studied and widely used. Note that an $H^s$ space is also a Hilbert space with a well-defined inner product. Its associated norm, which we refer to as the $H^s$ norm, is naturally equipped with a particular form of weighting in the Fourier domain. The order of biasing (e.g., towards either low or high frequencies) and the strength of biasing can both be controlled by the choice of $s\in\mathbb R$. When $s=0$, it reduces to the standard $L^2$ norm, with equal weights on all frequencies due to Parseval's identity.
Since the $H^s$ norm is a generalization of the $L^2$ norm, using it naturally leads to improved results when the parameter $s$ is appropriately chosen according to prior information, e.g., the noise spectrum. On the other hand, the $H^s$ norms offer additional flexibility by choosing $s$ to achieve either smoothing ($s<0$) or sharpening ($s>0$) effects depending on the noise type in an input image. It was shown in~\cite{engquist2020quadratic} that using the class of $H^s$ norms as objective functions has a preconditioning effect, thus altering the stability of the original inverse problem. In~\cite{yang2020anderson}, a particular frequency bias of the $H^s$ norm was utilized to accelerate fixed-point iterations when seeking numerical solutions to elliptic partial differential equations (PDEs). The introduction of Sobolev spaces was significant for the development of functional analysis~\cite{sobolev1963applications} and various applications related to PDEs~\cite{evans98} such as the finite element method~\cite{szabo1991finite}. There is related work on Sobolev norms in image processing and inverse problems. For example, the $H^{-1}$ semi-norm is closely related to the quadratic Wasserstein ($W_2$) metric from optimal transportation~\cite{villani2003topics} in both the asymptotic regime~\cite{otto2000generalization} and the non-asymptotic regime~\cite{peyre2018comparison}. This connection has been utilized in many applications~\cite{engquist2020quadratic,papadakis2014optimal} such as Bayesian inverse problems~\cite{dunlop2020stability}. Another close connection comes from work on the Sobolev gradient~\cite{neuberger2009sobolev}, in which the gradient of a given functional is taken with respect to the inner product induced by the underlying Sobolev norm~\cite{calderMY10,sundaramoorthi2007sobolev}, with demonstrated sharpening and edge-preserving effects.
We build connections between the $H^s$ norm as an energy term and the gradient flow induced by the $H^s$ norms, showing the equivalence between the two types of methods under certain scenarios. The main contributions of this work are threefold. First, we propose to use the $H^s$ norms as a novel data-fitting term to effectively utilize their implicit regularization effects for noise removal. Second, we analyze the connections of the $H^s$ norms to the $W_2$ distance from optimal transportation and to the Sobolev gradient flow. Such analysis contributes to a better understanding of the effectiveness of optimal transport-based image processing. Third, we propose a series of efficient algorithms, including computational approaches to calculate the $H^s$ norms under different setups as well as a combination with the TV regularization. The rest of the paper is organized as follows. \Cref{sect:analysis} is devoted to the analysis of the Sobolev norms, including the implicit regularization effects and the connections to two related topics: the $W_2$ distance and the Sobolev gradient. We describe three approaches for computing the $\cH^s$ norm in \Cref{sec:Hs_dist} under different boundary conditions and choices of $s$. The combination of the $\cH^s$ norm as a data-fitting term with the TV regularization is discussed in \Cref{sect:Hs+TV}. In \Cref{sect:exp}, we conduct experiments on image denoising and deblurring examples to demonstrate different scenarios where the weak norm ($s<0$) and the strong norm ($s>0$) are preferred, respectively. Conclusions follow in \Cref{sect:conclusion}. \section{Analysis on Sobolev Norms} \label{sect:analysis} In this section, we briefly review the definitions and properties of the $L^2$-based Sobolev norms, followed by a discussion of their implicit regularization effects and their potential for solving linear inverse problems through mathematical analysis and numerical examples.
We draw connections of the Sobolev norms to the quadratic Wasserstein distance~\cite{villani2003topics} in Section~\ref{sect:W2} and the Sobolev gradient~\cite{calderMY10} in Section~\ref{sect:sob_grad}. \subsection{$\H^s$ Sobolev Space} \label{sec:Hs_def} We remark that there are two common ways to define the $L^2$-based Sobolev norm. One is based on the Sobolev space $W^{k,p}(\R^d)$ for a nonnegative integer $k$; see \cref{def:Wkp_def}. \begin{defn}[Sobolev Space $W^{k,p}(\R^d)$\label{def:Wkp_def}] Let $1\leq p<\infty$ and $k$ be a nonnegative integer. If a function $f$ and its weak derivatives $D^{\alpha}f=\frac{\partial^{|\alpha|}f}{\partial x_1^{\alpha_1}\cdots \partial x_d^{\alpha_d}}$, $|\alpha|\leq k$ all lie in $L^p(\R^d)$, where $\alpha$ is a multi-index and $|\alpha|=\sum_{i=1}^d \alpha_i$, we say $f\in W^{k,p}(\R^d)$ and define the $W^{k,p}(\R^d)$ norm of $f$ as \begin{equation}\label{eq:Wkp_def} \|f\|_{W^{k,p}(\R^d)} := \left(\sum_{|\alpha|\leq k} \| D^{\alpha} f \|_{L^p(\R^d)}^p\right)^{1/p}. \end{equation} \end{defn} In this work we focus on the $L^2$-based Sobolev space $W^{k,2}$, which is a Hilbert space. While~\Cref{def:Wkp_def} is concerned with integer-regularity spaces, there exists a natural extension to a more general $L^2$-based Sobolev space $W^{s,2}(\R^d)$ for an arbitrary scalar $s\in\R$ through the Fourier transform. This leads to the second definition of the Sobolev space. Specifically, we define \begin{eqnarray} \F f(\xi) = \hat{f}(\xi) = (2\pi)^{-\frac{d}{2}} \int_{\mathbb{R}^d} f(x) e^{-ix\cdot \xi}dx, \end{eqnarray} where $\F$ denotes the Fourier transform. We further denote $\F^{-1}$ as the inverse Fourier transform, $I$ as the identity operator, $\aver{\xi} := \sqrt{1 + |\xi|^2}$, and $\mathcal{S}^\prime(\R^d)$ as the space of tempered distributions. 
\begin{defn}[Sobolev Space $\cH^{s}(\R^d)$\label{def:Hs_def}] Let $s \in \R$; the Sobolev space $\cH^{s}$ over $\R^d$ is given by \begin{align} \cH^{s}(\R^d) := \left\{ f \in \mathcal{S}'(\R^d) : \F^{-1} \left[ \aver{\xi}^{s} \F f \right] \in L^2(\R^d) \right\}. \end{align} The space $ \cH^{s}(\R^d)$ is equipped with the norm \begin{align} \label{eq:Hs_def} \|f\|_{\cH^{s}(\R^d)} := \left\| \F^{-1} \left[ \aver{\xi}^{s} \F f \right] \right\|_{L^2(\R^d)} = \left\| \mathcal{P}_sf \right\|_{L^2(\R^d)}, \end{align} where the operator $\mathcal{P}_s:=(I-\Delta)^{s/2}$. \end{defn} When $s=0$, the $H^{s}(\R^d)$ space (norm) reduces to the standard $L^2$ space (norm). One can show that $W^{k,2}(\R^d) = H^k(\R^d)$ for any integer $k$~\cite{arbogastmethods}. We remark that $ \|f\|_{H^{k}(\R^d)} \neq \|f\|_{W^{k,2}(\R^d)} $ for the same $k$ in general, but the two norms are equivalent, which can be shown through Fourier transforms. Hereafter, we mainly focus on $H^{s}(\R^d)$ for $s\in \R$, due to its greater generality. \subsection{Implicit Regularization Effects of the $H^s$ Norms} \label{sec:Hs_analysis} We consider the following data formation model, \begin{equation} \label{EQ:Lin IP} f_\sigma = \mathcal{A} u + n_\sigma, \end{equation} where $f_\sigma$ denotes the noisy measurements with an additive Gaussian noise $n_\sigma$ of standard deviation $\sigma$, and $\mathcal A$ denotes a linear degradation operator. A general inverse problem is posed as recovering an underlying image $u$ from the data $f_\sigma$ with the knowledge of $\mathcal A.$ If $\mathcal A$ is the identity operator, i.e., $\mathcal A = I,$ this problem is referred to as \textit{denoising}.
If $\mathcal A$ can be formulated as a convolution operator with a blurring kernel, it is called image \textit{deblurring} or \textit{deconvolution.} We assume the linear operator $\mathcal{A}$ is asymptotically diagonal in the Fourier domain such that \begin{equation}\label{EQ:IP Symbol} \widehat{\mathcal{A} u}(\xi)\sim \aver{\xi}^{-\alpha} \hat{u}(\xi), \end{equation} where $\alpha\in\R$, the hat symbol denotes the Fourier transform with frequency coordinate $\xi$, and $\sim$ refers to the relationship that both sides are asymptotically on the same order of magnitude. When $\alpha> 0$, we say the operator $\mathcal{A}$ is ``smoothing''. The value of $\alpha$ describes to some extent the degree of ill-conditioning (or difficulty) of solving an inverse problem~\cite{bal2012introduction}, in the sense that the larger $\alpha$ is, the more ill-posed the associated inverse problem becomes. We examine the regularization effects of using the $\cH^s$ norm defined in~\eqref{eq:Hs_def} to quantify the data misfit. In other words, we seek a solution of the inverse problem \eqref{EQ:Lin IP} by minimizing \begin{equation}\label{eq:Hs_obj_old} \Phi_{\cH^s}(u) := \frac{1}{2}\|\mathcal{A}u-f_\sigma\|^2_{\cH^s}=\frac{1}{2}\|\mathcal{P}_s(\mathcal{A}u-f_\sigma)\|^2_{L^2}=\frac{1}{2}\int_{\R^d}\aver{\xi}^{2s}|\widehat{\mathcal{A}u}(\xi) - \widehat{f_\sigma}(\xi)|^2 d\xi, \end{equation} without any additional regularization term. The minimizer of $\Phi_{\cH^s}(u)$ has a closed-form expression, i.e., \begin{equation}\label{EQ:Hs Inversion Phy} u =\Big(\mathcal{A}^* \mathcal{P}_s^*\mathcal{P}_s \mathcal{A} \Big)^{-1} \mathcal{A}^* \mathcal{P}_s^*\mathcal{P}_s f_\sigma, \end{equation} where $\mathcal{A}^*$ is the adjoint operator of $\mathcal{A}$ under the $L^2$ inner product and $\mathcal{P}_s= (I-\Delta)^{s/2}.$ Note that $\mathcal{P}_s^* = \mathcal{P}_s$ as $\mathcal{P}_s$ is self-adjoint.
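As a concrete illustration of the Fourier-domain weighting in \eqref{eq:Hs_def}, the following sketch evaluates a discrete analogue of the $\cH^s$ norm of a sampled periodic signal via the FFT. The grid and frequency conventions (unitary transform, $\xi_k = 2\pi k/L$) are our own choices for this illustration.

```python
import numpy as np

def hs_norm(f, s, L=2 * np.pi):
    """Discrete analogue of ||f||_{H^s} = || <xi>^s f_hat ||_{L^2} for a
    function sampled on a uniform periodic grid of length L (our convention)."""
    n = len(f)
    f_hat = np.fft.fft(f, norm="ortho")          # unitary FFT: Parseval holds
    xi = 2 * np.pi * np.fft.fftfreq(n, d=L / n)  # frequencies xi_k = 2*pi*k/L
    weight = (1.0 + xi**2) ** (s / 2.0)          # <xi>^s
    return np.linalg.norm(weight * f_hat)

x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
f = np.sin(3 * x)                                # single Fourier mode, |xi| = 3
l2 = np.linalg.norm(f)
```

For $s=0$ this recovers the $L^2$ norm of the samples (Parseval), and for the single mode $|\xi|=3$ the norm scales exactly by $\langle 3\rangle^{s}=10^{s/2}$, making the spectral bias of the weak ($s<0$) and strong ($s>0$) norms explicit.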
By comparing \eqref{EQ:Hs Inversion Phy} with the standard least-squares solution, we conclude that the $\cH^s$-based inversion can be seen as a weighted least-squares method if $s\neq 0$. \begin{rmk} A variant of~\eqref{eq:Hs_obj_old} is to use the $\dot{H}^s$ semi-norm instead of the standard $H^s$ norm. That is, we replace $\aver{\xi}^{2s} = (1 + |\xi|^2)^s$ by $|\xi|^{2s}$, and the objective function becomes \begin{equation}\label{eq:Hs_obj_semi} \Phi_{\dot{\cH}^s}(u)= \frac{1}{2}\|\mathcal{A}u-f_\sigma\|^2_{\dot{\cH}^s}:=\frac{1}{2}\int_{\R^d} |\xi|^{2s}|\widehat{\mathcal{A}u}(\xi) - \widehat{f_\sigma}(\xi)|^2 d\xi. \end{equation} The frequency bias from $\Phi_{\dot{\cH}^s}$ is more straightforward to analyze than the one from $\Phi_{\cH^s}(u)$, as the weight in front of each frequency is precisely an algebraic factor $|\xi|^{s}$. If $f\in \cH^{s}$ for $s>0$, we have $\|f\|_{\dot{\cH}^s} < \infty$. However, this is not the case for $s<0$. For example, a function $f$ may have a finite $\cH^{-1}$ norm, but if $\int f dx \neq 0$, it does not have a well-defined $\dot{\cH}^{-1}$ norm. \end{rmk} \begin{rmk} If $s_1,s_2\in \R$ and $s_1 < s_2$, then $\cH^{s_2} \subset \cH^{s_1}$ is continuously embedded. In other words, the $\cH^s$ spaces are nested, e.g., $H^2 \subset H^1 \subset L^2\subset H^{-1} \subset H^{-2}$. \end{rmk} We consider the following three scenarios to illustrate the implicit regularization effects of $\Phi_{\cH^s}$ as an objective function. A similar analysis applies to $\Phi_{\dot{\cH}^s}$. \begin{itemize} \item When $s=0$, the solution \eqref{EQ:Hs Inversion Phy} reduces to the standard least-squares solution, i.e., $u= \mathcal{A}^\dagger f_\sigma,$ where $\mathcal{A}^\dagger$ is the Moore--Penrose inverse operator of $\mathcal{A}$. Without any regularization term, this solution inevitably overfits the noise in the observation $f_\sigma$.
\item When $s>0$, $\mathcal{P}_s$ can be regarded as a differential operator, which amplifies high-frequency contents of $f_\sigma$. If the noise in $f_\sigma$ is also high-frequency, the overfitting phenomenon caused by $\mathcal{P}_s$ is even worse than for the standard least-squares solution. On the other hand, if $f_\sigma$ is corrupted by lower-frequency noise, the weighted least-squares formulation would avoid overfitting. \item When $s<0$, $\mathcal{P}_s$ is an integral operator, meaning that applying $\mathcal{P}_s$ to $f_\sigma$ suppresses high-frequency components. The noisy content in $f_\sigma$ does not fully ``propagate'' into the reconstructed solution $u$. The inverse problem is less sensitive to the high-frequency noise in $f_\sigma$, indicating improved well-posedness. Again, this property becomes disadvantageous if $f_\sigma$ is subject to lower-frequency noise. \end{itemize} Based on the above three scenarios, it is clear that the $H^s$ norm places a particular weighting on the frequency contents of the input function according to the choice of $s$. We will later refer to this property as the \textit{spectral bias} of the $H^s$ norm. \begin{rmk}\label{rmk:goodbad} To summarize, if the data is polluted with high-frequency noise, using a weak norm as the objective function alone improves the well-posedness of the inverse data-fitting problem without the help of any regularization term. On the other hand, a potential disadvantage of the weaker norm is that the objective function not only implicitly suppresses the higher-frequency noisy content but also the higher-frequency component of the noise-free data. Consequently, the reconstruction loses the high-frequency resolution, as illustrated in~\cite[Figure 4]{engquist2020quadratic}.
\end{rmk} Next, we demonstrate the aforementioned properties regarding $s=0$, $s>0$ and $s<0$ through numerical examples of reconstructing a (discrete) image $u$ from \eqref{EQ:Lin IP} by minimizing the discretized objective function $$\Phi_{\cH^s}(u) = \frac{1}{2}\|P_s(Au-f_\sigma)\|^2_{L^2},$$ where $P_s$ is a proper discretization of the continuous operator $\mathcal{P}_s$, and $A$ denotes the linear operator $\mathcal{A}$ in the matrix form; please refer to Section~\ref{sec:Hs_dist} for discretization details. Applying the gradient descent algorithm with a fixed step size $\eta$ to minimize the objective function $\Phi_{\cH^s}(u)$ yields the following iterative step: \begin{equation}\label{eq:Hs_GD} u^{(n+1)} = u^{(n)} - \eta \nabla \Phi(u^{(n)}) = u^{(n)} - \eta A^TP_s^T P_s (Au^{(n)}-f_\sigma). \end{equation} \begin{figure} \centering \subfloat[Blurry Input]{\includegraphics[width = 0.15\textwidth]{Compare-NoTV-NoNoise-All-s-input.png}\label{fig:square_no_noise_input}} \hspace{0.1cm} \subfloat[$s=1$]{\includegraphics[width = 0.15\textwidth]{Compare-NoTV-NoNoise-All-s100.png}} \hspace{0.1cm} \subfloat[$s=0.5$]{\includegraphics[width = 0.15\textwidth]{Compare-NoTV-NoNoise-All-s50.png}} \hspace{0.1cm} \subfloat[$s=0$]{\includegraphics[width = 0.15\textwidth]{Compare-NoTV-NoNoise-All-s0.png}} \hspace{0.1cm} \subfloat[$s=-0.5$]{\includegraphics[width = 0.15\textwidth]{Compare-NoTV-NoNoise-All-s-50.png}} \hspace{0.1cm} \subfloat[$s=-1$]{\includegraphics[width = 0.15\textwidth]{Compare-NoTV-NoNoise-All-s-100.png}} \caption{Effects of minimizing $\Phi_{\cH^s}$ with different choices of $s$. The reconstructed solutions gradually transition from sharp to blurry after the same number of gradient descent iterations, showing that strong norms ($s>0$) are better at sharpening. \label{fig:square_deblur_nonoise_different_s}} \end{figure} We apply~\eqref{eq:Hs_GD} to a simple example of image deblurring. 
Consider a binary image of size $100\times 100$ with a black square in the middle as the ground truth, referred to as the Square image. The linear operator $A$ can be formulated as a convolution with a $15\times 15$ Gaussian kernel of standard deviation $1$, which can be implemented through {\tt fspecial(`gaussian',15,1)} in Matlab. The blurry image is further corrupted by an additive Gaussian noise with standard deviation $\sigma$. When $\sigma=0$, the input image is blurry but not noisy, as seen in~\Cref{fig:square_no_noise_input}. We show reconstructed images by minimizing $\Phi_{\cH^s}$ with different choices of $s$ via~\eqref{eq:Hs_GD}. The five values of $s$ in~\Cref{fig:square_deblur_nonoise_different_s} cover all scenarios: $s=0$, $s>0$ and $s<0$. After running $100$ iterations of the gradient descent algorithm~\eqref{eq:Hs_GD} with the same step size $\eta=1$, we observe in \Cref{fig:square_deblur_nonoise_different_s} a gradual transition from sharp to blurry reconstruction results as $s$ decreases from $s=1$ to $s=-1$. This is aligned with our earlier discussion that the operator $\mathcal P_s$ for positive $s$ is a differential operator, which boosts the higher-frequency content of $A^T (Au^{(n)}-f_\sigma)$, i.e., the gradient when the $L^2$ norm is used as the objective function. Consequently, it accelerates the convergence of the gradient descent algorithm to the sharp ground truth, since the only missing information in the blurry input is precisely in the high frequencies. In summary, strong norms ($s>0$) are good at sharpening. We then examine the influence of noise on the reconstructions obtained by minimizing the $\Phi_{\cH^s}$ functional. For this purpose, we add different amounts of noise, i.e., $\sigma = 0.1$ and $\sigma=0.5$, to the same blurry image (shown in~\Cref{fig:square_no_noise_input}), leading to the noisy and blurry data shown in~\Cref{fig:square-blur01} and~\Cref{fig:square-blur05}, respectively.
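The noise-free experiment can be reproduced in miniature. The following sketch runs the iteration \eqref{eq:Hs_GD} on a 1D analogue of the Square image, applying both $A$ (a periodic Gaussian blur) and $P_s^TP_s$ diagonally in the Fourier domain; the discretization conventions (unit grid spacing, frequencies $\xi\in(-\pi,\pi]$) are our own, while the step size $\eta=1$ and the 100 iterations mirror the 2D experiment.

```python
import numpy as np

def hs_gradient_descent(f_sigma, a_hat, s, eta=1.0, iters=100):
    """Fourier-domain form of the update u <- u - eta*A^T P_s^T P_s (A u - f),
    for a periodic convolution A with transfer function a_hat."""
    n = len(f_sigma)
    xi = 2 * np.pi * np.fft.fftfreq(n)   # radians per sample, our convention
    w = (1.0 + xi**2) ** s               # symbol of P_s^T P_s, i.e. <xi>^{2s}
    u_hat = np.zeros(n, dtype=complex)   # start from the zero image
    f_hat = np.fft.fft(f_sigma)
    for _ in range(iters):
        u_hat -= eta * np.conj(a_hat) * w * (a_hat * u_hat - f_hat)
    return np.fft.ifft(u_hat).real

# 1D analogue of the Square image: a box signal blurred by a Gaussian kernel.
n = 100
u_true = np.zeros(n); u_true[40:60] = 1.0
kernel = np.exp(-0.5 * (np.arange(n) - n // 2) ** 2)   # std 1, centered
kernel /= kernel.sum()
a_hat = np.fft.fft(np.roll(kernel, -n // 2))           # zero-phase transfer fn
f = np.fft.ifft(a_hat * np.fft.fft(u_true)).real       # blurry, noise-free data

rec_strong = hs_gradient_descent(f, a_hat, s=1.0)      # sharpens quickly
rec_weak = hs_gradient_descent(f, a_hat, s=-1.0)       # stays blurry
```

After 100 iterations the strong norm's reconstruction is markedly closer to the box than the weak norm's, matching the sharp-to-blurry transition in \Cref{fig:square_deblur_nonoise_different_s}.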
Again, we reconstruct the images by running 100 iterations of gradient descent with the same step size. The top row of~\Cref{fig:square_deblur} corresponds to a smaller noise level ($\sigma = 0.1$). The $L^2$-based method, i.e., $s=0,$ clearly suffers from overfitting the noise, as the reconstruction is even noisier than the input. The best result is achieved at $s= -0.5,$ while the reconstructed images become over-smoothed as $s$ decreases further. This set of tests shows both advantages and potential limitations of weak norms ($s<0$) as addressed in~\Cref{rmk:goodbad}. The bottom row of~\Cref{fig:square_deblur} corresponds to a larger noise level ($\sigma = 0.5$), for which the overfitting phenomenon is more severe not only for the $L^2$ norm, but also for the cases of $s= -0.5$ and $s=-0.25$. The best reconstruction occurs at $s=-1$, where the spectral bias of the objective function towards lower-frequency contents of the residual (the difference between the blurred current iterate $Au^{(n)}$ and the input $f_\sigma$) is the strongest. That is, the weighting coefficients on the low-frequency components are much larger than those on the high-frequency ones due to the rapid decay of the function $\langle \xi\rangle^{-1}$ compared to $\langle \xi \rangle ^{-0.5}$. The comparison between the two noise levels also implies that the best choice of $s$ is data-dependent. One heuristic principle is that the noisier the input is, the weaker the objective function (i.e., the smaller the $s$) one should choose to avoid overfitting the noise. In~\Cref{fig:square_deblur_zoom_in}, we show the cross-sections of the 2D images; the location of the cross-section is indicated by the red lines in~\Cref{fig:square-blur01} and~\Cref{fig:square-blur05}. In~\Cref{fig:zoomin_01}, the 1D plots clearly show the over-smoothing artifact for $s=-1,$ and the reconstruction with $s=-0.5$ is closest to the ground truth. 
In contrast, the case $s=-0.5$ is no longer good enough to ``smooth'' out the stronger noise in~\Cref{fig:zoomin_05}, and the result from $s=-1$ turns out to be the best fit. \begin{figure} \centering \subfloat[Noisy Input]{\includegraphics[width = 0.15\textwidth]{Compare_NoTV_Input_01.png}\label{fig:square-blur01}} \hspace{0.1cm} \subfloat[$s=-1$]{\includegraphics[width = 0.15\textwidth]{Compare_NoTV_s-100.png}} \hspace{0.1cm} \subfloat[$s=-0.75$]{\includegraphics[width = 0.15\textwidth]{Compare_NoTV_s-75.png}} \hspace{0.1cm} \subfloat[$s=-0.5$]{\includegraphics[width = 0.15\textwidth]{Compare_NoTV_s-50.png}} \hspace{0.1cm} \subfloat[$s=-0.25$]{\includegraphics[width = 0.15\textwidth]{Compare_NoTV_s-25.png}} \hspace{0.1cm} \subfloat[$s=0$ ($L^2$)]{\includegraphics[width = 0.15\textwidth]{Compare_NoTV_s0.png}}\\ \subfloat[Noisy Input]{\includegraphics[width = 0.15\textwidth]{Compare_NoTV_Input_05.png}\label{fig:square-blur05}} \hspace{0.1cm} \subfloat[$s=-1$]{\includegraphics[width = 0.15\textwidth]{Compare_NoTV_BIg_Noise_s-100.png}} \hspace{0.1cm} \subfloat[$s=-0.75$]{\includegraphics[width = 0.15\textwidth]{Compare_NoTV_BIg_Noise_s-75.png}} \hspace{0.1cm} \subfloat[$s=-0.5$]{\includegraphics[width = 0.15\textwidth]{Compare_NoTV_BIg_Noise_s-50.png}} \hspace{0.1cm} \subfloat[$s=-0.25$]{\includegraphics[width = 0.15\textwidth]{Compare_NoTV_BIg_Noise_s-25.png}} \hspace{0.1cm} \subfloat[$s=0$ ($L^2$)]{\includegraphics[width = 0.15\textwidth]{Compare_NoTV_BIg_Noise_s0.png}} \caption{Deblurring the Square image by minimizing $\Phi_{\cH^s}(u)$. The top row presents the blurry noisy input with $\sigma = 0.1$ and reconstruction results of different $s$ values. A noisier case ($\sigma = 0.5$) is illustrated in the bottom row. 
\label{fig:square_deblur}} \end{figure} \begin{figure} \centering \subfloat[$\sigma=0.1$]{\includegraphics[width = 1.0\textwidth]{Compare_NoTV_Small_Noise_ZoomIn.png}\label{fig:zoomin_01}}\\ \subfloat[$\sigma=0.5$]{\includegraphics[width = 1.0\textwidth]{Compare_NoTV_Big_Noise_ZoomIn.png}\label{fig:zoomin_05}} \caption{The zoom-in view for different choices of $s$ at the cross-section (the red line) illustrated in~\Cref{fig:square-blur01} and~\Cref{fig:square-blur05}, respectively.} \label{fig:square_deblur_zoom_in} \end{figure} \subsection{Relationship with the $W_2$ Distance}\label{sect:W2} Here, we discuss a connection between the Sobolev norms and the quadratic Wasserstein ($W_2$) distance~\cite{villani2003topics} to provide a better understanding of both metrics. The Wasserstein distance defined below is associated with the cost function $c(x,y) = |x-y|^p$ in the optimal transportation problem. \begin{definition}[Wasserstein Distance] We denote by $\mathscr{P}_p(\Omega)$ the set of probability measures with finite moments of order $p$. For $1\leq p<\infty$, \begin{equation}\label{eq:static} W_p(\mu,\nu)=\left( \inf _{T_{\mu,\nu}\in \mathcal{M}}\int_{\Omega}\left|x-T_{\mu,\nu}(x)\right|^p d\mu(x)\right) ^{\frac{1}{p}},\quad \mu, \nu \in \mathscr{P}_p(\Omega), \end{equation} where $\mathcal{M}$ is the set of all maps that push forward $\mu$ into $\nu$. Note that $W_2$ corresponds to the case $p=2$. \end{definition} An asymptotic connection between the $W_2$ metric and the $\cH^s$ norm was first provided in~\cite{otto2000generalization}, provided that the two probability distributions under comparison are close enough that the linearization error is small. Consider a probability measure $\mu$ and an infinitesimal perturbation $d\pi$ with zero total mass. Then \begin{equation}\label{EQ:W2-Hm1 Asym} W_2(\mu, \mu+d\pi)=\|d\pi\|_{\dot{\cH}_{(d\mu)}^{-1}}+\smallO (d\pi). 
\end{equation} We remark that $\dot{\cH}_{(d\mu)}^{-1}$ is the weighted $\dot{\cH}^{-1}$ semi-norm~\cite{villani2003topics}, which is defined as follows. Given a strictly positive probability density $f\,dx= d\mu$, we define a Laplace-type linear operator \[ \mathcal{L}_f = -\Delta +\nabla(-\log f)\cdot \nabla, \] which satisfies the integration by parts formula \[ \int_{\mathbb{R}^d} (\mathcal{L}_fh_1)h_2 d\mu = \int_{\mathbb{R}^d} h_1(\mathcal{L}_fh_2) d\mu = \int_{\mathbb{R}^d} \nabla h_1 \cdot \nabla h_2 d\mu,\quad \forall h_1,h_2\in H^1_{(d\mu)}. \] Thus, we can define the weighted $L^2_{(d\mu)}$, $\dot{\cH}_{(d\mu)}^{1}$ and $\dot{\cH}_{(d\mu)}^{-1}$ norms as \begin{align*} \|h\|^2_{L^2_{(d\mu)}} = \int_{\mathbb{R}^d} h^2 d\mu,\quad \|h\|^2_{\dot{\cH}^1_{(d\mu)}} = \int_{\mathbb{R}^d} |\nabla h|^2 d\mu,\quad \|h\|^2_{\dot{\cH}^{-1}_{(d\mu)}} = \int_{\mathbb{R}^d} h(\mathcal{L}_f^{-1}h) d\mu. \end{align*} If $f=1$, the last equation recovers the standard $\dot{\cH}^{-1}$ semi-norm based on the Lebesgue measure, i.e., \eqref{eq:Hs_obj_semi} with $s=-1$. A connection between $W_2$ and $\dot{\cH}^{-1}$ under a non-asymptotic regime was later presented in~\cite{peyre2018comparison}. If both densities $f$ (with $f\,dx=d\mu$) and $g$ (with $g\,dx=d\nu$) are bounded below and above by positive constants $c_1$ and $c_2$, we have the following \emph{non-asymptotic} equivalence between $W_2$ and $\dot{\cH}^{-1}$~\cite{peyre2018comparison}, \begin{equation}\label{EQ:W2-Hm1} \frac{1}{c_2} \|f-g\|_{\dot {\cH}^{-1}} \le W_2(\mu, \nu) \le \frac{1}{c_1} \|f-g\|_{\dot {\cH}^{-1}}. \end{equation} Note that in both the asymptotic and the non-asymptotic regimes, the $W_2$ metric shares a similar spectral bias as the $\dot {\cH}^{-1}$ semi-norm, up to a weighting function. Thus, the implicit regularization properties for the case $s=-1$ discussed in Section~\ref{sec:Hs_analysis} extend to the quadratic Wasserstein metric. 
This finding explains the improved stability of the Wasserstein metric in inverse problems from various applied fields, including machine learning~\cite{arjovsky2017wasserstein}, parameter identification~\cite{yang2021optimal}, and full-waveform inversion~\cite{yang2018application}. \subsection{Relationship with the Sobolev Gradient Flow} \label{sect:sob_grad} The well-known heat equation $u_t = \Delta u$, where $u:\Omega \to \mathbb{R}$ ($\Omega$ is an open subset of $ \mathbb{R}^2$ with smooth boundary $\partial \Omega$), can be seen as the gradient flow of the energy functional \[ E(u) = \frac 1 2 \int_\Omega |\nabla u|^2 dx = \frac 1 2\|\nabla u\|_{L^2}^2 , \] with respect to the $L^2$ inner product $\langle v, w \rangle_{L^2} = \int_{\Omega} v\, w\,dx$. A different gradient flow can be derived from a more general inner product, for example, based on the Hilbert space $\cH^{s}$ in~\Cref{def:Hs_def} for any $s\in\R$. An inner product on the Sobolev space $\cH^1(\Omega)$~\cite{evans98,sundaramoorthi2007sobolev} can be defined as $$ g_\lambda(v, w) = (1-\lambda) \langle v, w \rangle_{L^2} + \lambda \langle v, w \rangle_{\cH^1} = \langle v, w \rangle_{L^2} + \lambda \langle v, w \rangle_{\dot{\cH}^1} , $$ for any $\lambda \geq 0$ and $\langle v, w \rangle_{\dot{\cH}^1} = \langle \nabla v, \nabla w \rangle_{L^2}$. If we are only interested in periodic functions on the domain $\Omega$, the gradient operators considered here are equipped with the periodic boundary condition. When $\lambda=0$, $g_\lambda(v, w)$ reduces to the conventional $L^2$ inner product, and when $\lambda = 1$, it becomes the standard $\cH^1$ inner product: $\langle v,w \rangle_{{\cH}^1} = \langle v, w \rangle_{L^2} + \langle \nabla v, \nabla w \rangle_{L^2}$. 
Calder \emph{et al.}~\cite{calderMY10} studied a general Sobolev gradient flow for image processing and established the well-posedness of the Sobolev gradient flow $ u_t = (I-\lambda\Delta)^{-1}\Delta u$ in both the forward and the backward directions of minimizing $E(u)$. It is worth noting that the backward direction can be regarded as a sharpening operator \cite{liu2020image,lou2013video}. Without loss of generality, we set $\lambda=1$ when studying a connection between the Sobolev gradient and the gradient of the $\cH^s$ norm as the energy functional. Given any energy (objective) functional $E(u)$, an inner product based on the Sobolev metric $\cH^1(\Omega)$ gives a specific gradient formula \begin{equation} \nabla_{\cH^1} E(u) =(I-\Delta)^{-1} \nabla_{L^2} E(u), \end{equation} such that \begin{equation} \langle \nabla_{\cH^1} E(u) , v \rangle_{\cH^1} = \langle \nabla_{L^2} E(u) , v \rangle_{L^2} = \lim_{\epsilon\rightarrow 0} \frac{E(u+\epsilon v) - E(u)}{\epsilon},\quad \forall v\in \cH^1(\Omega) \subset L^2(\Omega). \end{equation} If we consider the energy functionals $\Phi_{L^2}(u)$ (i.e., $\Phi_{\cH^{0}}(u)$) and $\Phi_{\cH^{-1}}(u)$ defined in~\eqref{eq:Hs_obj_old}, we have \begin{align*} &\nabla_{L^2} \Big( \Phi_{L^2}(u) \Big)= \mathcal{A}^*(\mathcal{A}u - f_\sigma),\quad \nabla_{\cH^1} \Big( \Phi_{L^2}(u) \Big)= (I-\Delta)^{-1}\mathcal{A}^*(\mathcal{A}u - f_\sigma), \\ &\nabla_{L^2} \Big( \Phi_{\cH^{-1}}(u) \Big)= \mathcal{A}^*(I-\Delta)^{-1}(\mathcal{A}u - f_\sigma). 
\end{align*} Correspondingly, we have the following three gradient flow equations: \begin{eqnarray} u_t &=& -\mathcal{A}^*(\mathcal{A}u - f_\sigma)\hspace{2.1cm} (\text{$L^2$ gradient flow of $\Phi_{L^2}(u)$}), \label{eq:L2_L2}\\ u_t &=& -(I-\Delta)^{-1} \mathcal{A}^*(\mathcal{A}u - f_\sigma) \quad (\text{$\cH^{1}$ gradient flow of $\Phi_{L^2}(u)$}), \label{eq:H1_L2}\\ u_t &=& -\mathcal{A}^* (I-\Delta)^{-1}(\mathcal{A}u - f_\sigma)\quad (\text{$L^2$ gradient flow of $\Phi_{\cH^{-1}}(u)$}). \label{eq:L2_Hm1} \end{eqnarray} If $\mathcal{A}^*$ shares the same set of eigenfunctions as the Laplace operator $\Delta$, then $\mathcal{A}^* (I-\Delta)^{-1} = (I-\Delta)^{-1} \mathcal{A}^*$, and hence~\eqref{eq:H1_L2} is exactly equivalent to~\eqref{eq:L2_Hm1}. Even if $\mathcal{A}^*$ does not commute with $(I-\Delta)^{-1}$, one can still view $(I-\Delta)^{-1}$ as a smoothing (integral) preconditioning operator acting on the residual $\mathcal{A}u - f_\sigma$, which we wish to reduce to zero regardless of whether the objective function is $\Phi_{L^2}(u)$ or $\Phi_{\cH^{-1}}(u)$. To sum up, \eqref{eq:H1_L2} and~\eqref{eq:L2_Hm1} are similar in nature in terms of the spectral bias of the resulting gradient descent dynamics, which demonstrates the equivalence between changing the gradient flow and changing the objective function under certain circumstances. In contrast to~\eqref{eq:L2_L2}, both \eqref{eq:H1_L2} and~\eqref{eq:L2_Hm1} are equipped with the smoothing property due to the additional $(I-\Delta)^{-1}$ operator. \section{Numerical Computation of the $H^s$ Norms}\label{sec:Hs_dist} In this section, we present three numerical methods for computing the general $H^s$ norms for any $s\in \R$. The first one (in Section~\ref{sect:firstHs}) applies to periodic functions defined on a domain, which is either the entire $\R^d$ or a compact subset of $\R^d$, denoted by $\Omega$. 
We are mainly interested in periodic functions to align with a fast implementation of convolution that assumes the periodic boundary condition. In addition, we discuss the functions with zero Neumann boundary condition in Section~\ref{sect:secondHs} and integer-valued $s$ in Section~\ref{sect:thirdHs}. \subsection{Through the Discrete Fourier Transform}\label{sect:firstHs} Recall that the Hilbert space $\cH^s(\mathbb{R}^d)$, $s\in\R$, is equipped with the norm~\eqref{eq:Hs_def}. If we compute the $H^s$ norm of a periodic function $f \in \cH^s$ defined on the entire $\R^d$, or equivalently, defined on $\Omega\subset \R^d$, we have \begin{equation}\label{eq:Hs_periodic} \left\| f\right\|_{\cH^{s}(\mathbb{R}^d)} = \left\| \mathcal{P}_s f\right\|_{L^2(\mathbb{R}^d)} \approx \left\| {P}_s f\right\|_{L^2(\mathbb{R}^d)}, \end{equation} where $\mathcal{P}_sf = \mathcal{F}^{-1}\left[(1+|\xi|^2)^{s/2}\mathcal{F}f\right]$ and ``$\approx$'' indicates the approximation by discretization. The discretization of the linear operator $\mathcal{P}_s$, denoted as $P_s$, can be computed explicitly through diagonalization, or implicitly, through the fast Fourier transform. For the former, the discretization of $\mathcal{F}$ is the discrete Fourier transform (DFT) matrix, while the discretization of $\mathcal{F}^{-1}$ is its conjugate transpose. The discretization of $(1+|\xi|^2)^{s/2}$ is correspondingly a diagonal matrix. \subsection{Through the Discrete Cosine Transform}\label{sect:secondHs} If we are interested in computing the $H^s$ norm of non-periodic functions on the domain $\Omega$ that is a compact subset of $\R^d$, we adopt the zero Neumann boundary condition~\cite{schechter1960negative} as the boundary condition for the Laplacian operator. 
As a result, rather than the DFT, a consistent discretization is through the discrete cosine transform (DCT) due to its relationship with the discrete Laplacian on a regular grid associated with the zero Neumann boundary condition, i.e., \begin{equation} \label{eq:Hs_DCT} \|f\|_{\cH^{s}(\Omega)} \approx \| \widehat{P}_s f\|_{L^2(\Omega)},\quad \widehat{P}_s = {C}^{-1} (I-\Lambda)^{s/2} {C}, \end{equation} where ${C}$ and ${C}^{-1}$ are matrices representing the DCT and its inverse, respectively. Moreover, $I$ is the identity matrix and $\Lambda$ is a diagonal matrix whose diagonal entries are the eigenvalues of the discrete Laplacian with the zero Neumann boundary condition. One may observe that~\eqref{eq:Hs_DCT} closely resembles the DFT-based discretization in~\eqref{eq:Hs_periodic}, except that the DFT is replaced with the DCT and the diagonal matrix varies according to the eigenvectors and eigenvalues of the discrete Laplacian under different boundary conditions. \subsection{Through Solving a Partial Differential Equation}\label{sect:thirdHs} Let $\Omega \subset \mathbb{R}^d$ be a bounded Lipschitz-smooth domain. The Hilbert space $\cH^{s}(\Omega)$ is the same as the Sobolev space $W^{s,2}(\Omega)$ for all integers $s \in \mathbb{Z}$; see~\cite[Sec.~7]{arbogastmethods}, i.e., \[ W^{s,2}(\Omega) = \{f|_\Omega: f\in W^{s,2}(\R^d)\} = \{f|_\Omega: f\in H^s(\R^d)\} = H^{s}(\Omega). \] Consequently, we can define an equivalent norm for functions in $H^{s}(\Omega)$ through $\|\cdot\|_{W^{s,2}(\Omega) }$, which involves differential operators with the zero Neumann boundary conditions~\cite{schechter1960negative}. When $s\in \mathbb{N}$, the computation of the $W^{s,2}(\Omega)$ norm should follow its definition in~\cref{def:Wkp_def} while the differential operators involved should be handled with the zero Neumann boundary condition. 
In this case, one explicit definition of $\|f\|_{\cH^{-s}(\Omega)}$ via the Laplace operator~\cite{schechter1960negative,yang2020anderson} is given by \begin{equation} \label{eq:dual_Hs} \|f\|_{\cH^{-s}(\Omega)} = \|u\|_{\cH^{s}(\Omega)}, \end{equation} where $u(x)$ is the solution to the following partial differential equation with the zero Neumann boundary condition~\cite[Section 3]{schechter1960negative}, \begin{equation}\label{eq:dual_Hs_PDE} \begin{cases} \mathfrak{L}^{s} u(x) =f(x), &x\in\Omega, \\ \nabla u\cdot {\bf n} = 0, &x\in \partial\Omega, \end{cases} \end{equation} for $\mathfrak{L}^{s} = \sum\limits_{|\alpha|\leq s} (-1)^{|\alpha|} D^{2\alpha}$. We may define the operator $\mathfrak{L}^{-s}$ by setting $u = \mathfrak{L}^{-s} f$. Combining~\eqref{eq:dual_Hs} and~\eqref{eq:dual_Hs_PDE}, we have \begin{equation} \|f\|^2_{\cH^{-s}(\Omega)} = \langle u, f \rangle_{L^2(\Omega)} = \langle \mathfrak{L}^{-s} f, f \rangle_{L^2(\Omega)} = \| \widetilde{\mathcal{P}}_s f\|^2_2,\quad \text{where}\quad \widetilde{\mathcal{P}}_s^* \widetilde{\mathcal{P}}_s = \mathfrak{L}^{-s}. \end{equation} We may also denote $\widetilde{\mathcal{P}}_s = \mathfrak{L}^{-s/2}$. The numerical discretization of $\widetilde{\mathcal{P}}_s $ is denoted as $\widetilde{P}_s$. Note that~\eqref{eq:Hs_def} and~\eqref{eq:dual_Hs} do not yield precisely the same norm given $f\in \cH^s(\R^d)$, where $s\in\mathbb{Z}$. For example, when $s=-2$ and $d=2$, the definition~\eqref{eq:Hs_def} depends on the integral operator $(I-\Delta)^{-1}$ based on the definition of the $\cH^{-s}(\Omega)$ norm while the definition~\eqref{eq:dual_Hs} depends on the integral operator $(I-\Delta + \Delta^2)^{-1/2}$ based on the definition of the $W^{-s,2}(\Omega)$ norm in~\eqref{def:Wkp_def}. However, the leading terms in both definitions match. Thus, they are equivalent norms for functions that belong to the same functional space $\cH^{s}(\Omega) = W^{s,2}(\Omega)$ given a fixed $s$. 
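The DFT- and DCT-based computations of Sections~\ref{sect:firstHs} and~\ref{sect:secondHs} can be sketched as follows (a minimal Python illustration assuming unit grid spacing; the function names are our own):

```python
import numpy as np
from scipy.fft import dctn

def hs_norm_dft(f, s):
    # ||f||_{H^s} ~ ||P_s f||_{L^2} for periodic f: P_s is diagonal in the
    # DFT basis with symbol (1 + |xi|^2)^(s/2); the unitary ("ortho") DFT
    # makes Parseval's identity exact.
    xi = np.meshgrid(*[np.fft.fftfreq(n, d=1.0 / n) for n in f.shape],
                     indexing="ij")
    symbol = (1.0 + sum(x**2 for x in xi)) ** (s / 2.0)
    return np.linalg.norm(symbol * np.fft.fftn(f, norm="ortho"))

def hs_norm_dct(f, s):
    # Zero Neumann boundary condition: hat{P}_s = C^{-1} (I - Lambda)^{s/2} C,
    # where Lambda holds the eigenvalues -4 sin^2(pi k / (2n)) of the
    # discrete Laplacian diagonalized by the orthonormal DCT-II.
    lam = np.meshgrid(*[-4.0 * np.sin(np.pi * np.arange(n) / (2.0 * n)) ** 2
                        for n in f.shape], indexing="ij")
    symbol = (1.0 - sum(l for l in lam)) ** (s / 2.0)
    return np.linalg.norm(symbol * dctn(f, type=2, norm="ortho"))
```

For $s=0$ both reduce to the Euclidean norm, since the orthonormal DFT and DCT preserve the $\ell^2$ norm; only the diagonal weights differ between the two boundary conditions.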
We remark that the $H^s$ norms with non-integer $s$ cannot be calculated through PDEs; instead, one should refer to Section~\ref{sect:secondHs}. \section{Combining the $H^s$ Data Fitting Term with the TV regularization} \label{sect:Hs+TV} We propose the use of the $H^s$ norm to measure the data misfit, as opposed to the traditional $L^2$ norm. As a proof of concept, we incorporate the celebrated TV regularization~\cite{rudin1992nonlinear} by minimizing the following energy functional, \begin{equation}\label{eq:image_model} J(u) = \frac{\lambda}{2} \|\mathcal{A} u -f_\sigma\|_{H^s}^2 + \mu \| \nabla u \|_1, \end{equation} where $\lambda,\mu\in \R^+$ are scalars balancing the data fitting term and the regularization term. We include both parameters $\lambda$ and $\mu$ for ease of disabling either one of them in experiments. We consider that the linear operator $\mathcal{A}$ is either the identity operator for the denoising task or a convolution operator for the deblurring task, and $f_\sigma$ is the noisy (blurry) data. We discuss the discretization of the model \eqref{eq:image_model}. Suppose a two-dimensional (2D) image is defined on an $m\times n$ Cartesian grid. By using a standard linear index, we can represent a 2D image as a vector, i.e., the $((i-1)m+j)$-th component denotes the intensity value at pixel $(i,j).$ We define a discrete gradient operator, \begin{equation}\label{eq:gradient} \mathbf{D} u:= \left[\begin{array}{l} D_x\\ D_y \end{array} \right] u, \end{equation} where $D_x$ and $D_y$ are the finite forward difference operators with the periodic boundary condition in the horizontal and vertical directions, respectively. We adopt the periodic boundary condition for the finite difference scheme to align with the periodic boundary condition used when implementing the discrete convolution operator $A$ by the fast Fourier transform (FFT). 
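On a 2D array, the forward-difference operators $D_x$, $D_y$ and the adjoint $\mathbf{D}^T$ can be realized with circular shifts (a minimal Python sketch; the function names are ours):

```python
import numpy as np

def grad(u):
    # Forward differences D_x, D_y with the periodic boundary condition,
    # stacked into a field of shape (2, m, n).
    return np.stack([np.roll(u, -1, axis=0) - u,
                     np.roll(u, -1, axis=1) - u])

def grad_t(p):
    # The adjoint D^T of the stacked forward-difference operator:
    # backward differences with the periodic boundary condition.
    px, py = p
    return (np.roll(px, 1, axis=0) - px) + (np.roll(py, 1, axis=1) - py)
```

In particular, $-\mathbf{D}^T\mathbf{D}$ recovers the standard periodic five-point discrete Laplacian, consistent with the remark made for the $u$-subproblem below.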
We denote $N := mn$ and the Euclidean spaces by $\mathcal X:=\mathbb{R}^{N}, \mathcal Y:=\mathbb{R}^{2N}$; then $u\in \mathcal X,$ $Au\in \mathcal X,$ and $\mathbf{D} u\in \mathcal Y$. The $\cH^s$ norm can be expressed in terms of a weighted norm, which amounts to multiplication by $\mathbf{P}_s$, the discrete representation of the operator $\mathcal{P}_s$. Given the choice of $s$ and the particular boundary condition, we can select a preferable way of implementing $\mathbf{P}_s$ as any of the three types of matrices ${P}_s$, $\widehat {P}_s$, and $\widetilde {P}_s$ discussed in Section~\ref{sec:Hs_dist}. To align with the periodic boundary condition used for $\mathbf D$ and $A$, we choose $\mathbf{P}_s = P_s$. In summary, we obtain the following objective function in a discrete form, \begin{equation}\label{eq:Hs_obj} J(u) = \frac{\lambda}{2} \| \mathbf{P}_s(A u -f_\sigma)\|_{2}^2 + \mu \| \mathbf{D} u \|_1. \end{equation} There are a number of optimization algorithms available to minimize $J(u)$, such as Newton's method, the conjugate gradient descent method, and various quasi-Newton methods~\cite{esser2010general,goldstein2009split,nocedal2006numerical}. Here, we present the alternating direction method of multipliers (ADMM)~\cite{boyd2011distributed,glowinski1975approximation}, by introducing an auxiliary variable $d$ and studying an equivalent form of~\eqref{eq:Hs_obj} \begin{equation}\label{equ:split_model_uncon} \min_{u \in\mathcal X, d\in\mathcal Y} \quad \mu \| d \|_1+\frac{\lambda}{2} \|\mathbf{P}_s(A u -f_\sigma)\|_2^2 \quad \mathrm{s.t.} \quad d = \mathbf{D} u. 
\end{equation} The corresponding augmented Lagrangian function is expressed as \begin{equation}\label{eq:AL4L1uncon} \mathcal{L}(u, d; v) = \mu\| d \|_1+\frac{\lambda}{2} \|\mathbf{P}_s(A u -f_\sigma)\|_2^2+\langle \rho v,\mathbf{D} u - d\rangle + \frac{\rho}{2}\| d - \mathbf{D} u \|_2^2, \end{equation} with a dual variable $ v$ and a positive parameter $\rho.$ The ADMM framework involves the following iterations, \begin{equation} \label{ADMML1_uncon} \left\{\begin{array}{l} u^{(k+1)}=\arg\min_u \mathcal{L}(u, d^{(k)}; v^{(k)}),\\ d^{(k+1)}=\arg\min_{ d} \mathcal{L}(u^{(k+1)}, d; v^{(k)}),\\ v^{(k+1)} = v^{(k)} + \mathbf{D} u^{(k+1)} - d^{(k+1)}. \end{array}\right. \end{equation} By setting the derivative of $\mathcal{L}$ with respect to $u$ to zero, we obtain a closed-form solution of the $u$-subproblem in \eqref{ADMML1_uncon}, i.e., \begin{equation}\label{ADMM_l1con_u} u^{(k+1)} = \left(\lambda A^T \mathbf{P}_s^T \mathbf{P}_s A + \rho \mathbf{D}^T \mathbf{D} \right)^{-1}\left(\lambda A^T \mathbf{P}_s^T \mathbf{P}_s f_\sigma + \rho\, \mathbf{D}^T \big( d^{(k)} - v^{(k)} \big)\right). \end{equation} We remark that $-\mathbf{D}^T \mathbf{D}$ is the discrete Laplacian operator with the periodic boundary condition. In this case, the discrete operators (matrices) $A$, $A^T$, $\mathbf{P}_s^T \mathbf{P}_s$ and $\mathbf{D}^T \mathbf{D}$ all have the discrete Fourier modes as eigenvectors. As a result, the matrix $\lambda A^T \mathbf{P}_s^T \mathbf{P}_s A + \rho \mathbf{D}^T \mathbf{D}$ in~\eqref{ADMM_l1con_u} also has the Fourier modes as eigenvectors, and its inverse can be computed efficiently by the FFT. The $d$-subproblem in \eqref{ADMML1_uncon} also has a closed-form solution given by \begin{equation}\label{ADMM_l1con_d} d^{(k+1)} = \mathbf{shrink}\left(\mathbf{D} u^{(k+1)} + v^{(k)}, \frac{\mu}{\rho}\right), \end{equation} where $ \mathbf{shrink}( v, \beta) = \mathrm{sign}( v)\circ \max\left\{| v|-\beta, 0\right\} $ with the Hadamard (elementwise) product $\circ$. 
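Under the periodic boundary condition, every step of the ADMM iteration is either a diagonal solve in the Fourier domain or an elementwise shrinkage. A minimal Python sketch follows (the parameter defaults are illustrative, and the function names are ours, not the code used for the experiments):

```python
import numpy as np

def shrink(v, beta):
    # Soft-thresholding, the proximal operator of beta * ||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - beta, 0.0)

def admm_tv_hs(f_sigma, blur_hat, s, lam=1.0, mu=0.1, rho=1.0, iters=100):
    # min_u lam/2 ||P_s(A u - f)||^2 + mu ||D u||_1 via ADMM; A, P_s and D
    # are all diagonalized by the DFT under periodic boundary conditions.
    # Assumes blur_hat does not vanish at the zero frequency.
    m, n = f_sigma.shape
    dx_hat = (np.exp(2j * np.pi * np.fft.fftfreq(m)) - 1.0)[:, None]
    dy_hat = (np.exp(2j * np.pi * np.fft.fftfreq(n)) - 1.0)[None, :]
    kx, ky = np.meshgrid(np.fft.fftfreq(m, d=1.0 / m),
                         np.fft.fftfreq(n, d=1.0 / n), indexing="ij")
    w2 = (1.0 + kx**2 + ky**2) ** s                  # symbol of P_s^T P_s
    denom = (lam * w2 * np.abs(blur_hat) ** 2
             + rho * (np.abs(dx_hat) ** 2 + np.abs(dy_hat) ** 2))
    rhs0 = lam * np.conj(blur_hat) * w2 * np.fft.fftn(f_sigma)
    d = np.zeros((2, m, n))
    v = np.zeros((2, m, n))
    for _ in range(iters):
        # u-subproblem: closed-form diagonal solve in the Fourier domain.
        t = d - v
        t_hat = (np.conj(dx_hat) * np.fft.fftn(t[0]) +
                 np.conj(dy_hat) * np.fft.fftn(t[1]))
        u_hat = (rhs0 + rho * t_hat) / denom
        # d-subproblem: soft-thresholding of D u + v.
        du = np.stack([np.real(np.fft.ifftn(dx_hat * u_hat)),
                       np.real(np.fft.ifftn(dy_hat * u_hat))])
        d = shrink(du + v, mu / rho)
        v = v + du - d                               # dual ascent step
    return np.real(np.fft.ifftn(u_hat))
```

The symbols of $D_x$ and $D_y$ are $e^{2\pi i k/m}-1$ and $e^{2\pi i k/n}-1$, so the denominator assembles the diagonalized matrix $\lambda A^T\mathbf{P}_s^T\mathbf{P}_s A + \rho\mathbf{D}^T\mathbf{D}$ directly.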
Finally, $ v^{(k+1)}$ is updated based on $u^{(k+1)}$ and $ d^{(k+1)}$. The iterative process continues until reaching the stopping criteria or the maximum number of iterations. \section{Experiments}\label{sect:exp} We start the section on numerical experiments by expanding the deblurring example in Section~\ref{sec:Hs_analysis}. In particular, we conduct a comprehensive study of the $H^s$ norms with different choices of $s$ under a variety of noise levels, both with and without the TV regularization term in the objective function. We quantitatively measure the reconstruction performance in terms of the peak signal-to-noise ratio (PSNR), which is defined by \begin{equation*} \mbox{PSNR}( u^\ast, \tilde{u}) := 20 \log_{10} \frac{\sqrt{N}\, M}{\| u^\ast-\tilde{ u}\|_2}, \end{equation*} where $u^\ast$ is the restored image, $\tilde{ u}$ is the ground truth, and $N, \ M$ are the number of pixels and the maximum peak value of $\tilde{ u},$ respectively. The PSNR values in different settings of deblurring the Square image are recorded in \Cref{tab:Hs_Compare}. \setlength{\tabcolsep}{6pt} \begin{table*}[t] \centering \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline Add $TV$ & Noise $\sigma$ & input & $s = 0$ & $s =-0.25 $ & $s =-0.5 $ & $s =-0.75 $ & $s =-1$\\ \hline \hline No & 0 & 24.25 & 194.57 & 194.57 & 194.57 & 194.57 & 194.57\\ No & 0.1 & 18.61& 9.46 & 19.62 & 21.63 & 17.38 & 14.09\\ No & 0.5 & 5.95 & -14.57 & -3.05 & 7.54 & 16.29 & 18.12\\ Yes & 0.1 & 18.61& 39.03 & 39.49 & 39.85 & 40.16 & 40.39\\ Yes & 0.5 & 5.95 & 27.67 & 27.99 & 28.23 & 28.39 & 28.44\\ \hline \end{tabular} \caption{Deblurring the Square image: comparison among different $\cH^s$ norms in terms of PSNR. Visual results corresponding to the second and the third rows are shown in \cref{fig:square_deblur}.} \label{tab:Hs_Compare} \end{table*} The first row of~\Cref{tab:Hs_Compare} reports the reconstruction without TV from noise-free data, i.e., $\sigma = 0$. 
All the PSNR values are over 190, which implies perfect recovery (subject to numerical round-off errors). In this noise-free case, the reconstruction is a standard (weighted) least-squares solution. Furthermore, the choice of the data-fitting term does not affect the minimizer of the optimization problem, though the convergence rate may differ. As seen in~\Cref{fig:square_deblur_nonoise_different_s}, the same number of gradient descent iterations yields different sharpness when $s$ varies. Still without the regularization term, we examine the reconstruction results using the noisy blurry data and record the PSNR values in the second and the third rows of~\Cref{tab:Hs_Compare}. These quantitative values reflect that the reconstruction results after a fixed number of gradient descent iterations \eqref{eq:Hs_GD} differ drastically with respect to different $s$ values, as also illustrated in~\Cref{fig:square_deblur}. We plot the PSNR values for more choices of $s$ in~\Cref{fig:square_deblur_diff_s} than those documented in~\Cref{tab:Hs_Compare}, which further illustrates that the optimal choice of $s$ depends on the noise level. The effect of the TV regularization is presented in the last two rows of~\Cref{tab:Hs_Compare}. On one hand, TV significantly improves the results over the model without TV. On the other hand, using the optimal $\cH^s$ norm as the data-fitting term together with TV outperforms the classic TV with the $L^2$ norm, as the former has an extra degree of freedom. \begin{figure} \centering \includegraphics[width = 1.0\textwidth]{Compare-NoTV-Noises-Diff-S.png} \caption{Illustrating how the PSNR value depends on different $\cH^s$ norms for deblurring the Square image without regularization. The optimal $s$ varies with the noise intensity. 
For a larger noise variance, it is preferable to select a weaker norm (corresponding to a smaller $s$).} \label{fig:square_deblur_diff_s} \end{figure} The remainder of this section presents denoising results for high-frequency Gaussian noise in Section~\ref{sect:denoising} and for low-frequency noise arising in geophysical images in Section~\ref{sect:exp_lf_noise}, followed by deblurring examples in Section~\ref{sect:deblurring}. \setlength{\tabcolsep}{10pt} \begin{table*}[t] \centering \begin{tabular}{ |c|c|c|c|c|c|c|c|} \hline Test image & $\sigma$ & input & TV & NLM& BM3D & WNNM & proposed \\ \hline \hline \multirow{3}[6]*{\bf Shape} & 0.1 & 19.95 & 34.15 & 32.36 & 34.61 & 35.23 & 37.56 \bigstrut\\ \cline{2-8} & 0.2 & 14.03 & 25.38 & 27.65 & 28.11 & 29.65 & 33.11 \bigstrut\\ \hline \hline \multirow{3}[5]*{\bf Peppers} & 0.1 & 19.99 & 27.71 & 28.28& 29.98& 30.20 & 28.41\bigstrut\\ \cline{2-8} & 0.2 & 13.98 &24.15 & 24.27 & 26.23 & 26.84 & 24.71\bigstrut\\ \hline \hline \end{tabular} \caption{Image denoising comparison in terms of PSNR.} \label{tab:psnr_denoise} \end{table*} \begin{figure} \centering \subfloat[Noisy Input]{\includegraphics[width = 0.30\textwidth]{noisy_shape_02_input.png}} \hspace{0.1cm} \subfloat[TV]{\includegraphics[width = 0.30\textwidth]{noisy_shape_02_TV.png}} \hspace{0.1cm} \subfloat[NLM]{\includegraphics[width = 0.30\textwidth]{noisy_shape_02_NLM.png}}\\ \subfloat[BM3D]{\includegraphics[width = 0.30\textwidth]{noisy_shape_02_BM3d.png}} \hspace{0.1cm} \subfloat[WNNM]{\includegraphics[width = 0.30\textwidth]{noisy_shape_02_WNNM.png}} \hspace{0.1cm} \subfloat[proposed]{\includegraphics[width = 0.30\textwidth]{noisy_shape_02_TVHs.png}}\\ \caption{Comparison of denoising the Shape image with additive Gaussian noise of $\sigma = 0.2.$\label{fig:shape_denoise}} \end{figure} \begin{figure} \centering \subfloat[Noisy Input]{\includegraphics[width = 0.30\textwidth]{noisy_peppers_02_input.png}} \hspace{0.1cm} \subfloat[TV]{\includegraphics[width = 
0.30\textwidth]{noisy_peppers_02_TV.png}} \hspace{0.1cm} \subfloat[NLM]{\includegraphics[width = 0.30\textwidth]{noisy_peppers_02_NLM.png}}\\ \subfloat[BM3D]{\includegraphics[width = 0.30\textwidth]{noisy_peppers_02_BM3d.png}} \hspace{0.1cm} \subfloat[WNNM]{\includegraphics[width = 0.30\textwidth]{noisy_peppers_02_WNNM.png}} \hspace{0.1cm} \subfloat[proposed]{\includegraphics[width = 0.30\textwidth]{noisy_peppers_02_TVHs.png}} \caption{Comparison of denoising the Peppers image with additive Gaussian noise of $\sigma = 0.2.$\label{fig:peppers_denoise}} \end{figure} \begin{figure} \centering \subfloat[Noisy Input, PSNR= 27.53]{\includegraphics[width = 0.5\textwidth]{Marm-Mig-input}\label{fig:Marm_input}} \subfloat[$\dot{\cH}^1$, PSNR= 36.34]{\includegraphics[width = 0.5\textwidth]{Marm-Mig-H1}\label{fig:Marm_H1}}\\ \subfloat[$\dot{\cH}^2$, PSNR= 37.54]{\includegraphics[width = 0.5\textwidth]{Marm-Mig-H2}\label{fig:Marm_H2}} \subfloat[$\dot{\cH}^3$, PSNR= 37.97]{\includegraphics[width = 0.5\textwidth]{Marm-Mig-H3}\label{fig:Marm_H3}} \caption{Marmousi RTM image denoising using different $\dot{\cH}^s$ semi-norms to measure the data fidelity term.\label{fig:Marm_denoise}} \end{figure} \subsection{Image Denoising}\label{sect:denoising} For the case of denoising, i.e., $A=I$, we consider two testing images, labeled as Shape and Peppers, each under two noise levels: $\sigma = 0.1$ and $0.2$ as the standard deviation of the additive Gaussian random noise. We compare the proposed approach TV+$H^s$ with the classic TV+$L^2$ (TV) \cite{rudin1992nonlinear}, non-local means (NLM) \cite{buades2005review}, block-matching and 3D filtering (BM3D) \cite{dabov2007image}, and weighted nuclear norm minimization (WNNM) \cite{gu2014weighted}. We use the ADMM framework \eqref{ADMML1_uncon} to solve both TV+$H^s$ and TV+$L^2$ ($s=0$). We call the Matlab command {\tt imnlmfilt} for NLM, while using respective authors' codes for BM3D and WNNM. 
We set the default parameters for NLM, BM3D, and WNNM. As for TV+$H^s$, we fix $\mu=1$ and select the optimal parameters $\lambda$ and $\rho$ that achieve the highest PSNR for each combination of testing image and noise level. \Cref{tab:psnr_denoise} reports the quantitative performance in terms of PSNR for all the scenarios, showing that the proposed approach significantly outperforms the state-of-the-art for denoising the Shape image. The ground truth image of Shape mainly contains low-frequency components in the Fourier domain, while the added Gaussian noise is high-frequency in nature. Since there is a well-separated scale, applying the $H^s$-based weighting for $s<0$ can effectively suppress the high-frequency noise while still imposing a large weight on the actual image content. For a more complex image (Peppers), TV is insufficient to preserve fine structures, as it yields worse results than other modern regularizations such as BM3D and WNNM. As TV+$H^s$ is a generalization of TV+$L^2$, using $H^s$ as the data fidelity term with a proper choice of $s$ performs at least as well as the standard $L^2$ one. Visual results of Shape and Peppers are presented in \cref{fig:shape_denoise,fig:peppers_denoise}, respectively, both at the higher noise level $\sigma = 0.2$. The proposed approach clearly produces piecewise constant output thanks to the TV regularization, while TV+$L^2$ yields an unsatisfactory result since the $L^2$ metric overfits the noise in the input image. The use of the $H^s$ norms for $s<0$ improves the stability and reduces the overfitting phenomenon, leading to a better reconstruction than the classical $L^2$ norm. Although BM3D and WNNM achieve higher PSNR than TV+$H^s$ for the Peppers image, both reconstructions suffer from ringing artifacts. \subsection{Geophysical Image Denoising}\label{sect:exp_lf_noise} We present another denoising example from a seismic application, in which the noise is mostly of low frequencies. 
Reverse-time migration (RTM)~\cite{claerbout1971toward} is a prestack two-way wave-equation migration method for imaging complex structures, especially strong-contrast geological interfaces such as environments involving salts. Conventional RTM uses an imaging condition which is the zero time-lag cross-correlation between the source and the receiver wavefields. It overcomes the difficulties of ray theory and further improves image resolutions by replacing the semi-analytical solutions to the wave equation with fully numerical solutions for the full wavefield. However, artifacts are produced by the cross-correlation of source-receiver wavefields propagating in the same direction. Specifically, migration artifacts appear at shallow depths, above strong reflectors, and severely mask the migrated structures; see~\Cref{fig:Marm_input}. They are generated by the cross-correlation of reflections, backscattered waves, head waves, and diving waves~\cite{zhang2009practical}. We are interested in reducing the strong low-frequency noise in the input data by minimizing the objective function~\eqref{eq:Hs_obj_semi}, where the linear operator $\mathcal{A}$ is the identity. In contrast to the Gaussian noise in the previous section, this example involves low-frequency noise. Based on the discussion in Section~\ref{sec:Hs_analysis}, it is beneficial to use strong norms (i.e., $s>0$) to suppress the low-frequency noise. Here, we consider $\dot{\cH}^1$, $\dot{\cH}^2$, and $\dot{\cH}^3$, with the corresponding results shown in~\Crefrange{fig:Marm_H1}{fig:Marm_H3}, respectively. According to PSNR, using the $\dot{\cH}^3$ norm as the objective function produces the best recovery. We also demonstrate that all three strong semi-norms can effectively suppress the low-frequency noise in~\Cref{fig:Marm_input} without changing the reflecting features of the underlying image.
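The frequency weighting behind these choices can be checked with a small numerical sketch. The snippet below (Python/NumPy, a hypothetical stand-alone discretization assuming periodic boundary conditions and an orthonormal FFT scaling; it is not the code used in our experiments) evaluates a discrete $\dot{H}^s$ semi-norm and confirms that a strong norm ($s>0$) weights a broadband noise-like image far more heavily than a smooth low-frequency image, while a weak norm ($s<0$) does the opposite.

```python
import numpy as np

def hs_seminorm(r, s):
    """Discrete homogeneous H^s semi-norm of an image r, computed in the
    Fourier domain as the sum of |xi|^(2s) |r_hat(xi)|^2 (DC term dropped)."""
    n0, n1 = r.shape
    xi0 = 2 * np.pi * np.fft.fftfreq(n0)
    xi1 = 2 * np.pi * np.fft.fftfreq(n1)
    XI0, XI1 = np.meshgrid(xi0, xi1, indexing="ij")
    mag2 = XI0**2 + XI1**2                    # |xi|^2 on the frequency grid
    r_hat = np.fft.fft2(r, norm="ortho")      # orthonormal FFT scaling
    w = np.zeros_like(mag2)
    nz = mag2 > 0
    w[nz] = mag2[nz]**s                       # weight |xi|^(2s); skip DC
    return float(np.sum(w * np.abs(r_hat)**2))

# A smooth low-frequency image versus broadband Gaussian noise.
n = 64
low = np.cos(2 * np.pi * np.arange(n) / n)[:, None] * np.ones((1, n))
noise = np.random.default_rng(0).standard_normal((n, n))

# H^s energy relative to the L^2 energy (s = 0) of the same image.
def ratio(r, s):
    return hs_seminorm(r, s) / hs_seminorm(r, 0)
```

Here `ratio(noise, 1)` greatly exceeds `ratio(low, 1)`, while the ordering reverses for $s=-1$, mirroring the use of strong norms against low-frequency noise and weak norms against high-frequency noise.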
\subsection{Image Deblurring}\label{sect:deblurring} For image deblurring, we test on two images: Circles and Cameraman. The blurring kernel is fixed as a $7\times 7$ Gaussian function with standard deviation 1. Assuming the periodic boundary condition and using the Convolution Theorem, the linear operator $A$ can be implemented via the FFT. We also consider two noise levels: $\sigma = 0.1$ and $0.2$ as the standard deviation of the additive Gaussian random noise. We compare the proposed approach TV+$H^s$ with TV, a hyper-Laplacian model (Hyper) \cite{krishnan2009fast}, a modification of BM3D from denoising to deblurring \cite{dabov2008image}, and a weighted anisotropic and isotropic (WAI) regularization proposed in~\cite{lou2015weighted}. We use the online codes of the competing methods: Hyper, BM3D, and WAI. For all the methods, we tune the parameters so that they achieve the highest PSNR for each combination of testing image and noise level. We record the PSNR values in \cref{tab:psnr_deblur} and present the visual results under the lower noise level ($\sigma=0.1$) in \cref{fig:circle_deblur,fig:cameraman_deblur}. Similar to the denoising case, the proposed approach works particularly well for images with simple geometries such as Circles, and is comparable to the state-of-the-art deblurring methods for the Cameraman image.
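As a concrete illustration of implementing $A$ through the FFT, the following sketch (Python/NumPy, an illustrative stand-alone implementation, not the code used in the experiments) builds a $7\times 7$ Gaussian kernel, converts it to a transfer function under the periodic boundary condition, and validates the result against direct circular convolution.

```python
import numpy as np

def gaussian_kernel(size=7, sigma=1.0):
    ax = np.arange(size) - size // 2
    X, Y = np.meshgrid(ax, ax, indexing="ij")
    k = np.exp(-(X**2 + Y**2) / (2.0 * sigma**2))
    return k / k.sum()                      # normalize to unit mass

def kernel_to_otf(k, shape):
    """Zero-pad the kernel to the image size and shift its center to the
    origin, so that pointwise multiplication in the Fourier domain equals
    circular (periodic) convolution with the kernel."""
    pad = np.zeros(shape)
    pad[:k.shape[0], :k.shape[1]] = k
    pad = np.roll(pad, (-(k.shape[0] // 2), -(k.shape[1] // 2)), axis=(0, 1))
    return np.fft.fft2(pad)

def blur(x, otf):
    """Apply A (and, since the kernel is symmetric, A^T) via the FFT."""
    return np.real(np.fft.ifft2(otf * np.fft.fft2(x)))

def circ_conv(x, k):
    """Reference implementation: direct circular convolution."""
    c0, c1 = k.shape[0] // 2, k.shape[1] // 2
    out = np.zeros_like(x, dtype=float)
    for p in range(k.shape[0]):
        for q in range(k.shape[1]):
            out += k[p, q] * np.roll(x, (p - c0, q - c1), axis=(0, 1))
    return out

# Sanity check on a small random image.
k = gaussian_kernel(7, 1.0)
x = np.random.default_rng(1).standard_normal((16, 16))
otf = kernel_to_otf(k, x.shape)
y_fft, y_direct = blur(x, otf), circ_conv(x, k)
```

Since the normalized kernel sums to one, the operator also preserves constant images, which is a quick consistency check on the OTF construction.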
\setlength{\tabcolsep}{10pt} \begin{table*}[t] \centering \begin{tabular}{ |c|c|c|c|c|c|c|c|} \hline Test image & $\sigma$ & input & TV & Hyper & BM3D & WAI & proposed \\ \hline \hline \multirow{3}[6]*{\bf Circles} & 0.1 & 19.78& 32.56& 30.61& 32.52& 31.96& 32.93 \bigstrut\\ \cline{2-8} & 0.2 & 13.91 & 29.84 & 28.10 & 29.97 & 29.78 & 30.03 \bigstrut\\ \hline \hline \multirow{3}[5]*{\bf Cameraman} & 0.1 & 18.96& 24.52& 24.54 & 25.49& 24.40& 24.53\bigstrut\\ \cline{2-8} & 0.2 & 13.65& 22.89 & 22.75& 23.53 & 22.92 & 22.96 \bigstrut\\ \hline \hline \end{tabular} \caption{Image deblurring comparison in terms of PSNR.} \label{tab:psnr_deblur} \end{table*} \begin{figure}[t] \centering \subfloat[Noisy Input]{\includegraphics[width = 0.30\textwidth]{blurry_circle_01_input.png}} \hspace{0.1cm} \subfloat[TV]{\includegraphics[width = 0.30\textwidth]{blurry_circle_01_TV.png}} \hspace{0.1cm} \subfloat[Hyper]{\includegraphics[width = 0.30\textwidth]{blurry_circle_01_L23.png}}\\ \subfloat[BM3D]{\includegraphics[width = 0.30\textwidth]{blurry_circle_01_BM3d.png}} \hspace{0.1cm} \subfloat[WAI]{\includegraphics[width = 0.30\textwidth]{blurry_circle_01_L12.png}} \hspace{0.1cm} \subfloat[proposed]{\includegraphics[width = 0.30\textwidth]{blurry_circle_01_TVHs.png}}\\ \caption{Comparison of deblurring the Circles image with a $7\times 7$ Gaussian blur and additive Gaussian noise of $\sigma = 0.1.$\label{fig:circle_deblur}} \end{figure} \begin{figure}[t] \centering \subfloat[Noisy Input]{\includegraphics[width = 0.30\textwidth]{blurry_cameraman_01_input.png}} \hspace{0.1cm} \subfloat[TV]{\includegraphics[width = 0.30\textwidth]{blurry_cameraman_01_TV.png}} \hspace{0.1cm} \subfloat[Hyper]{\includegraphics[width = 0.30\textwidth]{blurry_cameraman_01_L23.png}}\\ \subfloat[BM3D]{\includegraphics[width = 0.30\textwidth]{blurry_cameraman_01_BM3d.png}} \hspace{0.1cm} \subfloat[WAI]{\includegraphics[width = 0.30\textwidth]{blurry_cameraman_01_L12.png}} \hspace{0.1cm} 
\subfloat[proposed]{\includegraphics[width = 0.30\textwidth]{blurry_cameraman_01_TVHs.png}} \caption{Comparison of deblurring the Cameraman image with a $7\times 7$ Gaussian blur and additive Gaussian noise of $\sigma = 0.1.$\label{fig:cameraman_deblur}} \end{figure} \section{Conclusions}\label{sect:conclusion} In this paper, we proposed a novel idea of using the Sobolev ($H^s$) norms as a data fidelity term for imaging applications. We revealed implicit regularization effects offered by the proposed data fitting term. Specifically, one should choose a weak norm ($s<0$) for high-frequency noise and a strong norm ($s>0$) for low-frequency noise. We clarified the relationship of the Sobolev norms to two related concepts: the Wasserstein distance and the Sobolev gradient flow. We presented three numerical schemes to compute the $H^s$ norms under different domains and boundary conditions. We also discussed an efficient ADMM framework to incorporate the total variation with the proposed $H^s$ norm. Experimental results showed that TV+$H^s$ works particularly well for images with simple geometries, and always outperforms the standard TV+$L^2.$ \bibliographystyle{plain}
\section{Introduction} \subsection{Motivation \& Contributions} Consider the reconstruction of an unknown $N$-dimensional sparse signal vector $\boldsymbol{x}\in\mathbb{R}^{N}$ from a linear and noisy $M$-dimensional measurement vector $\boldsymbol{y}\in\mathbb{R}^{M}$~\cite{Donoho06,Candes061}, given by \begin{equation} \label{model} \boldsymbol{y} = \boldsymbol{A}\boldsymbol{x} + \boldsymbol{w}. \end{equation} In (\ref{model}), $\boldsymbol{A}\in\mathbb{R}^{M\times N}$ denotes a known sensing matrix. The additive white Gaussian noise (AWGN) vector $\boldsymbol{w}\in\mathbb{R}^{M}$ is composed of independent zero-mean Gaussian elements with variance $\sigma^{2}$. For simplicity, the signal vector $\boldsymbol{x}=(x_{1},\ldots,x_{N})^{\mathrm{T}}$ is assumed to have independent and identically distributed (i.i.d.) zero-mean elements with unit variance. A promising approach to this reconstruction problem is message-passing (MP). As a powerful and low-complexity instance of MP, Donoho {\it et al}.~\cite{Donoho09} proposed approximate message-passing (AMP). Bayes-optimal AMP can be regarded as an exact approximation of belief propagation (BP) in the large system limit~\cite{Kabashima03}, where both $M$ and $N$ tend to infinity while the compression rate $\delta=M/N$ is kept ${\mathcal O}(1)$. AMP was proved to be Bayes-optimal in a certain region of $\delta$ via state evolution (SE)~\cite{Bayati11,Bayati15} if the sensing matrix has zero-mean sub-Gaussian i.i.d.\ elements. However, AMP fails to converge for ill-conditioned~\cite{Rangan191} or non-zero mean~\cite{Caltagirone14} sensing matrices unless damping is used. To resolve this convergence issue, orthogonal AMP (OAMP)~\cite{Ma17} and the equivalent vector AMP (VAMP)~\cite{Rangan192} were proposed. A prototype of OAMP/VAMP was presented in a pioneering paper~\cite[Appendix~D]{Opper05}. Bayes-optimal OAMP/VAMP can be regarded as an exact approximation~\cite{Cespedes14,Takeuchi201} of expectation propagation (EP)~\cite{Minka01} in the large system limit.
OAMP/VAMP was proved to be Bayes-optimal in a certain region of $\delta$ via SE~\cite{Rangan192,Takeuchi201} if the sensing matrix is right-orthogonally invariant. A disadvantage of Bayes-optimal OAMP/VAMP is its use of the linear minimum mean-square error (LMMSE) filter. While the singular-value decomposition (SVD) of the sensing matrix circumvents per-iteration computation of the LMMSE filter~\cite{Rangan192}, the SVD itself has high complexity unless the sensing matrix has a special structure. This paper proposes novel MP to solve the convergence issue in AMP and the high-complexity issue in OAMP/VAMP. The main idea is a generalization of conventional MP to long-memory MP (LM-MP)~\cite{Takeuchi19}. LM-MP exploits messages in all preceding iterations for each message update while conventional MP uses only those in the latest iteration. LM-MP aims to improve the convergence property of AMP ultimately up to that of OAMP/VAMP, without using the LMMSE filter. As a solvable instance of LM-MP, convolutional AMP (CAMP) has been proposed in \cite{Takeuchi202}. CAMP replaces the so-called Onsager correction in AMP with a convolution of messages in all preceding iterations while it uses the same low-complexity matched filter (MF) as AMP. Tap coefficients in the convolution are designed so as to guarantee asymptotic Gaussianity of estimation errors for all right-orthogonally invariant sensing matrices. However, Bayes-optimal denoisers were not designed in CAMP because \cite{Takeuchi202} presented no SE analysis to evaluate the mean-square errors (MSEs) before/after denoising. The main contribution of this paper is the SE analysis to establish CAMP with Bayes-optimal denoisers. Two-dimensional difference equations---called SE equations---are derived to describe the dynamics of the MSEs before/after denoising. The SE equations are utilized to compute variance parameters that are used in the Bayes-optimal denoisers.
CAMP with Bayes-optimal denoisers is proved to achieve the same performance as OAMP/VAMP if the SE equations converge. Thus, CAMP with Bayes-optimal denoisers is called Bayes-optimal CAMP. \subsection{Related Works} As related works, a similar approach to LM-MP~\cite{Takeuchi19} was considered via non-rigorous dynamical functional theory~\cite{Opper16} and rigorous SE~\cite{Fan20}. LM-MP~\cite{Takeuchi19} imposes the orthogonality between estimation errors before/after denoising as considered in OAMP~\cite{Ma17}, while \cite{Opper16,Fan20} requires no restriction. The orthogonality enables a systematic design of LM-MP that satisfies asymptotic Gaussianity of estimation errors. As another related work, memory AMP (MAMP)~\cite{Liu20} was proposed after submission of a long version~\cite{Takeuchi203} of this paper. MAMP utilizes all preceding messages in damping before/after denoising while CAMP exploits them in the Onsager correction. The LM-MP framework~\cite{Takeuchi19} was used to design the LM damping in MAMP. \section{Convolutional Approximate Message-Passing} CAMP~\cite{Takeuchi202} computes an estimator $\boldsymbol{x}_{t}\in\mathbb{R}^{N}$ of the signal vector $\boldsymbol{x}$ in iteration~$t$ from the information about the measurement vector $\boldsymbol{y}$ and the sensing matrix $\boldsymbol{A}$ in (\ref{model}), \begin{equation} \label{denoising} \boldsymbol{x}_{t+1} = f_{t}(\boldsymbol{x}_{t} + \boldsymbol{A}^{\mathrm{T}}\boldsymbol{z}_{t}), \end{equation} with $\boldsymbol{x}_{0}=\boldsymbol{0}$ and $\boldsymbol{z}_{0}=\boldsymbol{y}$. For $t>0$, \begin{equation} \label{z} \boldsymbol{z}_{t} = \boldsymbol{y} - \boldsymbol{A}\boldsymbol{x}_{t} + \sum_{\tau=0}^{t-1}\xi_{\tau}^{(t-1)}\left( \theta_{t-\tau}\boldsymbol{A}\boldsymbol{A}^{\mathrm{T}} - g_{t-\tau}\boldsymbol{I}_{M} \right)\boldsymbol{z}_{\tau}. \end{equation} In (\ref{denoising}), a Lipschitz-continuous scalar denoiser $f_{t}:\mathbb{R}\to\mathbb{R}$ is applied element-wise.
In (\ref{z}), the notation $\xi_{t'}^{(t)} =\prod_{\tau=t'}^{t}\xi_{\tau}$ for $t'\leq t$ is defined via \begin{equation} \label{xi} \xi_{t} = \left\langle f_{t}'(\boldsymbol{x}_{t} + \boldsymbol{A}^{\mathrm{T}}\boldsymbol{z}_{t}) \right\rangle, \end{equation} where $\langle \boldsymbol{v}\rangle= N^{-1}\sum_{n=1}^{N}v_{n}$ denotes the arithmetic average of $\boldsymbol{v}=(v_{1},\ldots,v_{N})^{\mathrm{T}}$. The last term on the right-hand side (RHS) in (\ref{z}) is called the Onsager correction, which is the convolution of the messages $\{\boldsymbol{z}_{0}, \ldots,\boldsymbol{z}_{t-1}\}$ in all preceding iterations. The tap coefficients $\{g_{t}\}$ are designed via the LM-MP framework~\cite{Takeuchi19} so as to guarantee asymptotic Gaussianity of the estimation error $\boldsymbol{x}_{t}-\boldsymbol{x}$. The other tap coefficients $\{\theta_{t}\}$ have been introduced in this paper to improve the convergence property of CAMP. Note that $\theta_{t}=0$ for any $t\geq1$ was used in the original CAMP~\cite{Takeuchi202}. Once asymptotic Gaussianity is established, the Bayes-optimal denoiser $f_{t}(\boldsymbol{u}_{t}) = \mathbb{E}[\boldsymbol{x} | \boldsymbol{u}_{t}]$ can be designed via the virtual AWGN observation, \begin{equation} \label{AWGN} \boldsymbol{u}_{t} =\boldsymbol{x}+\boldsymbol{\omega}_{t},\quad \boldsymbol{\omega}_{t}\sim\mathcal{N}(\boldsymbol{0},a_{t,t} \boldsymbol{I}_{N}). \end{equation} Since $\boldsymbol{x}$ has been assumed to have i.i.d.\ elements, the Bayes-optimal denoiser is separable. The purpose of this paper is to evaluate the variance $a_{t,t}$ in the AWGN observation~(\ref{AWGN}) via SE. 
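To make the recursion concrete, here is a minimal Python sketch of (\ref{denoising})--(\ref{z}). It is illustrative only: soft-thresholding stands in for a generic Lipschitz denoiser, and the tap coefficients $\{\theta_{t}\}$ and $\{g_{t}\}$ are passed in as arrays rather than designed; their proper design is the subject of the next section.

```python
import numpy as np

def soft(u, tau):
    """Soft-thresholding, standing in for a generic Lipschitz denoiser f_t."""
    return np.sign(u) * np.maximum(np.abs(u) - tau, 0.0)

def camp(y, A, theta, g, n_iter=10, thresh=1.0):
    """Sketch of the CAMP recursion: x_{t+1} = f_t(x_t + A^T z_t) with
    z_t = y - A x_t
          + sum_{tau<t} xi_tau^{(t-1)} (theta_{t-tau} A A^T - g_{t-tau} I) z_tau,
    where xi_t is the empirical average of f_t' at the denoiser input.
    theta and g are arrays of length >= n_iter with theta[0] = g[0] = 1."""
    M, N = A.shape
    x = np.zeros(N)
    zs = [y.astype(float)]        # z_0 = y
    xis = []                      # xi_0, xi_1, ...
    for t in range(n_iter):
        if t > 0:
            z = y - A @ x
            for tau in range(t):
                xi_prod = float(np.prod(xis[tau:t]))  # xi_tau^{(t-1)}
                z = z + xi_prod * (theta[t - tau] * (A @ (A.T @ zs[tau]))
                                   - g[t - tau] * zs[tau])
            zs.append(z)
        u = x + A.T @ zs[t]
        x = soft(u, thresh)
        xis.append(np.mean(np.abs(u) > thresh))  # <f_t'> for soft-threshold
    return x

# For theta = (1, 0, ...) and g = (1, -1/delta, 0, ...), the Onsager
# convolution collapses to the usual AMP correction (cf. the zero-mean
# i.i.d. Gaussian case); the instance below is purely illustrative.
rng = np.random.default_rng(0)
M, N = 100, 200
A = rng.standard_normal((M, N)) / np.sqrt(M)
x0 = (rng.random(N) < 0.1) * rng.standard_normal(N) * np.sqrt(10.0)
y = A @ x0 + 0.01 * rng.standard_normal(M)
theta = np.r_[1.0, np.zeros(15)]
g = np.r_[1.0, -N / M, np.zeros(14)]
xhat = camp(y, A, theta, g, n_iter=10)
```

Note that at $t=0$ the recursion returns $f_{0}(\boldsymbol{A}^{\mathrm{T}}\boldsymbol{y})$, matching the initialization $\boldsymbol{x}_{0}=\boldsymbol{0}$, $\boldsymbol{z}_{0}=\boldsymbol{y}$.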
\section{Design} \subsection{Asymptotic Gaussianity} The tap coefficients $\{g_{t}\}$ in (\ref{z}) are determined so as to realize the asymptotic Gaussianity of the estimation error $\boldsymbol{h}_{t}= \boldsymbol{x}_{t} + \boldsymbol{A}^{\mathrm{T}}\boldsymbol{z}_{t} - \boldsymbol{x}$ before denoising in (\ref{denoising}), i.e.\ almost surely \begin{IEEEeqnarray}{rl} &\lim_{M=\delta N\to\infty}\frac{1}{N}\left\{ f_{t}(\boldsymbol{x} + \boldsymbol{h}_{t}) - \boldsymbol{x} \right\}^{\mathrm{T}} \left\{ f_{t'}(\boldsymbol{x} + \boldsymbol{h}_{t'}) - \boldsymbol{x} \right\} \nonumber \\ =& \mathbb{E}\left[ \{f_{t}(x_{1} + h_{t}) - x_{1}\}\{f_{t'}(x_{1} + h_{t'}) - x_{1}\} \right]\equiv d_{t+1,t'+1}, \nonumber \\ \label{Gaussianity} \end{IEEEeqnarray} where $(h_{t}, h_{t'})^{\mathrm{T}}\sim\mathcal{N}(\boldsymbol{0}, \boldsymbol{\Sigma})$ is an independent zero-mean Gaussian vector with covariance \begin{equation} \boldsymbol{\Sigma} = \begin{bmatrix} a_{t,t} & a_{t,t'} \\ a_{t',t} & a_{t',t'} \end{bmatrix}, \end{equation} with \begin{equation} \label{a_tt} a_{t,t'}\equiv\lim_{M=\delta N\to\infty}\frac{1}{N} \mathbb{E}[\boldsymbol{h}_{t}^{\mathrm{T}}\boldsymbol{h}_{t'}]. \end{equation} The asymptotic Gaussianity~(\ref{Gaussianity}) implies that the estimation errors $\boldsymbol{h}_{t}$ and $\boldsymbol{h}_{t'}$ can be treated as if they followed the zero-mean Gaussian distribution with $\mathbb{E}[\boldsymbol{h}_{t}\boldsymbol{h}_{t'}^{\mathrm{T}}] =a_{t,t'}\boldsymbol{I}_{N}$, as long as the covariance on the left-hand side (LHS) of (\ref{Gaussianity}) is considered. The asymptotic Gaussianity is proved via a unified framework of SE~\cite{Takeuchi19}, which proposed a general error model and proved the asymptotic Gaussianity of the estimation error before denoising in the general error model. Thus, it is sufficient to prove that the general error model in \cite{Takeuchi19} contains the error model of CAMP. 
An important assumption in the general error model is orthogonal invariance of the sensing matrix $\boldsymbol{A}$. \begin{definition} An orthogonal matrix $\boldsymbol{V}$ is said to be Haar-distributed if $\boldsymbol{V}$ is orthogonally invariant, i.e.\ $\boldsymbol{V}\sim\boldsymbol{\Phi}\boldsymbol{V}\boldsymbol{\Psi}$ for all orthogonal matrices $\boldsymbol{\Phi}, \boldsymbol{\Psi}$ independent of $\boldsymbol{V}$. \end{definition} \begin{assumption} \label{assumption_A} The sensing matrix $\boldsymbol{A}$ is right-orthogonally invariant, i.e.\ $\boldsymbol{A}\sim\boldsymbol{A}\boldsymbol{\Psi}$ for any orthogonal matrix $\boldsymbol{\Psi}$ independent of $\boldsymbol{A}$. More precisely, the $N\times N$ orthogonal matrix $\boldsymbol{V}$ in the SVD $\boldsymbol{A}=\boldsymbol{U}\boldsymbol{\Sigma} \boldsymbol{V}^{\mathrm{T}}$ is Haar-distributed and independent of $\boldsymbol{U}\boldsymbol{\Sigma}$. Furthermore, the empirical eigenvalue distribution of $\boldsymbol{A}^{\mathrm{T}}\boldsymbol{A}$ converges almost surely to a compactly supported deterministic distribution with unit first moment in the large system limit. \end{assumption} Throughout this paper, Assumption~\ref{assumption_A} is postulated. Assumption~\ref{assumption_A} holds when $\boldsymbol{A}$ has zero-mean i.i.d.\ Gaussian elements with variance $1/M$. The asymptotic Gaussianity depends heavily on the Haar assumption of $\boldsymbol{V}$. The Haar orthogonal transform $\boldsymbol{V}\boldsymbol{a}$ of any vector $\boldsymbol{a}\in\mathbb{R}^{N}$ is distributed as $N^{-1/2}\|\boldsymbol{a}\|\boldsymbol{z}$ in which $\boldsymbol{z}\sim\mathcal{N}(\boldsymbol{0},\boldsymbol{I}_{N})$ is a standard Gaussian vector and independent of $\|\boldsymbol{a}\|$. When the amplitude $N^{-1/2}\|\boldsymbol{a}\|$ tends to a constant as $N\to\infty$, the vector $\boldsymbol{V}\boldsymbol{a}$ looks like a Gaussian vector. This is a rough intuition on the asymptotic Gaussianity. 
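This intuition is easy to check numerically. The sketch below (Python/NumPy, an illustrative check only) draws a Haar-distributed orthogonal matrix via a QR decomposition with the standard sign correction, applies it to a deterministic vector, and verifies that the empirical moments of the result are close to those of a standard Gaussian vector.

```python
import numpy as np

def haar_orthogonal(n, rng):
    """Draw a Haar-distributed orthogonal matrix: QR of an i.i.d. Gaussian
    matrix, absorbing the signs of R's diagonal into Q to remove QR bias."""
    Q, R = np.linalg.qr(rng.standard_normal((n, n)))
    return Q * np.sign(np.diag(R))

rng = np.random.default_rng(0)
N = 2000
a = np.ones(N)                    # deterministic vector with N^{-1/2}||a|| = 1
v = haar_orthogonal(N, rng) @ a   # should look like N(0, I_N)

mean = v.mean()                   # ~ 0
second = np.mean(v**2)            # exactly ||a||^2 / N = 1 by orthogonality
fourth = np.mean(v**4)            # ~ 3, the Gaussian fourth moment
```

The second moment matches exactly because an orthogonal transform preserves the norm, while the first and fourth moments match only up to $O(N^{-1/2})$ fluctuations, consistent with the large-system intuition above.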
To present a closed-form of $\{g_{t}\}$ realizing the asymptotic Gaussianity, we define the $\eta$-transform of the asymptotic eigenvalue distribution of $\boldsymbol{A}^{\mathrm{T}}\boldsymbol{A}$~\cite{Tulino04} as \begin{equation} \eta(x) = \lim_{M=\delta N\to\infty}\frac{1}{N}\mathrm{Tr}\left\{ \left( \boldsymbol{I}_{N} + x\boldsymbol{A}^{\mathrm{T}}\boldsymbol{A} \right)^{-1} \right\}. \end{equation} Furthermore, let $G(z)$ denote the generating function of the tap coefficients $\{g_{t}\}$, \begin{equation} \label{G} G(z) = \sum_{t=0}^{\infty}g_{t}z^{-t}, \quad g_{0}=1. \end{equation} Similarly, the generating function $\Theta(z)$ of $\{\theta_{t}\}$ is defined in the same manner, with $\theta_{0}=1$. The tap coefficients $\{g_{t}\}$ are given via the generating function $G(z)$. \begin{theorem} \label{theorem1} For a fixed generating function $\Theta(z)$, suppose that the generating function $G(z)$ of $\{g_{t}\}$ satisfies \begin{equation} \label{tap_coefficient_g} \eta\left( \frac{1 - (1-z^{-1})\Theta(z)}{(1-z^{-1})G(z)} \right) = (1 - z^{-1})\Theta(z). \end{equation} Then, the asymptotic Gaussianity~(\ref{Gaussianity}) holds. \end{theorem} \begin{IEEEproof} See \cite[Theorems~1, 2, and 3]{Takeuchi203}. \end{IEEEproof} The proof of Theorem~\ref{theorem1} is essentially the same as in the original CAMP~\cite{Takeuchi202} with $\theta_{0}=1$ and $\theta_{t}=0$ for $t>0$. For special sensing matrices $\boldsymbol{A}$, the tap coefficients $\{g_{t}\}$ are explicitly given as follows: \begin{corollary} \label{corollary1} Suppose that the sensing matrix $\boldsymbol{A}$ has independent Gaussian elements with mean $\sqrt{\gamma/M}$ and variance $(1-\gamma)/M$ for any $\gamma\in[0,1)$. Then, (\ref{tap_coefficient_g}) reduces to \begin{equation} g_{t} = \left( 1 - \frac{1}{\delta} \right)\theta_{t} + \frac{1}{\delta}\sum_{\tau=0}^{t}(\theta_{\tau} - \theta_{\tau-1})\theta_{t-\tau}. \end{equation} \end{corollary} \begin{IEEEproof} See \cite[Corollary~1]{Takeuchi203}. 
\end{IEEEproof} Corollary~\ref{corollary1} implies that CAMP reduces to AMP for $\theta_{0}=1$ and $\theta_{t}=0$ for $t>0$. Thus, CAMP has no ability to resolve the non-zero mean case~\cite{Caltagirone14}. \begin{corollary} Suppose that the sensing matrix $\boldsymbol{A}$ has $M$ identical singular values for $M\leq N$, i.e.\ $\boldsymbol{A}\boldsymbol{A}^{\mathrm{T}} =\delta^{-1}\boldsymbol{I}_{M}$. Then, (\ref{tap_coefficient_g}) with $\theta_{t}=0$ for $t>0$ reduces to $g_{t}=1-\delta^{-1}$ for all $t>0$. \end{corollary} \begin{IEEEproof} See \cite[Corollary~2]{Takeuchi203}. \end{IEEEproof} Note that $\theta_{t}=0$ can be set for $t>0$, without loss of generality, since $\theta_{t-\tau}\boldsymbol{A}\boldsymbol{A}^{\mathrm{T}} - g_{t-\tau}\boldsymbol{I}_{M} = (\delta^{-1}\theta_{t-\tau} - g_{t-\tau}) \boldsymbol{I}_{M}$ holds on the Onsager term in (\ref{z}). To present the non-identical singular-value case, we define the convolution operator $*$ as \begin{equation} \label{convolution} a_{t+i}*b_{t+j} = \sum_{\tau=0}^{t}a_{\tau+i}b_{t-\tau+j} \end{equation} for two sequences $\{a_{\tau}, b_{\tau}\}_{\tau=0}^{\infty}$, in which $a_{\tau}=0$ and $b_{\tau}=0$ are assumed for all $\tau<0$. \begin{corollary} \label{corollary3} Suppose that the sensing matrix $\boldsymbol{A}$ has non-zero singular values $\sigma_{0}\geq\cdots\geq\sigma_{M-1}>0$ satisfying condition number $\kappa=\sigma_{0}/\sigma_{M-1}>1$, $\sigma_{m}/\sigma_{m-1}=\kappa^{-1/(M-1)}$, and $\sigma_{0}^{2}=N(1-\kappa^{-2/(M-1)})/(1-\kappa^{-2M/(M-1)})$, and that there is some $t_{1}\in\mathbb{N}$ such that $\theta_{t}=0$ holds for all $t>t_{1}$. Let $\alpha_{0}^{(j)}=1$ and \begin{equation} \alpha_{t}^{(j)} = \left\{ \begin{array}{cl} \frac{C^{t/j}}{(t/j)!}\bar{\theta}_{j}^{t/j} & \hbox{if $t$ is divisible by $j$,} \\ 0 & \hbox{otherwise} \end{array} \right. \end{equation} for $t\in\mathbb{N}$ and $j\in\{1,\ldots,t_{1}\}$, with $\bar{\theta}_{t}=\theta_{t-1}-\theta_{t}$ and $C=2\delta^{-1}\ln\kappa$. 
Define $p_{0}=\bar{q}_{0}=1$ and \begin{equation} p_{t} = - \frac{\beta_{t}^{(t_{1})}}{\kappa^{2}-1}, \end{equation} \begin{equation} \bar{q}_{t} = \frac{1}{\bar{\theta}_{1}}\left( \frac{\beta_{t+1}^{(t_{1})}}{C} - \sum_{\tau=1}^{t_{1}}\bar{\theta}_{\tau+1}\bar{q}_{t-\tau} \right) \end{equation} for $t>0$, with $\beta_{t}^{(t_{1})}=\alpha_{t}^{(1)}*\alpha_{t}^{(2)}*\cdots* \alpha_{t}^{(t_{1})}$. Then, (\ref{tap_coefficient_g}) reduces to \begin{equation} \label{g_CAMP} g_{t} = p_{t} - \sum_{\tau=1}^{t}q_{\tau}g_{t-\tau}, \end{equation} with \begin{equation} q_{t} = \bar{q}_{t} - \bar{q}_{t-1}. \end{equation} \end{corollary} \begin{IEEEproof} See \cite[Corollary~3]{Takeuchi203}. \end{IEEEproof} Note that $G(z)=P(z)/Q(z)$ holds, with $P(z)$ and $Q(z)$ denoting the generating functions of $\{p_{t}\}$ and $\{q_{t}\}$ in Corollary~\ref{corollary3}, respectively. \subsection{State Evolution} We have so far designed the tap coefficients $\{g_{t}\}$ that realize the asymptotic Gaussianity~(\ref{Gaussianity}) for fixed tap coefficients $\{\theta_{t}\}$. We next evaluate the asymptotic MSEs $\{a_{t,t}\}$ before denoising in (\ref{a_tt}). We use SE to derive SE equations that describe the dynamics of $\{a_{t,t}\}$. The derived SE equations are two-dimensional difference equations with respect to the covariance parameters $\{a_{t,t'}\}$ and $\{d_{t,t'}\}$ before and after denoising in (\ref{a_tt}) and (\ref{Gaussianity}), respectively. To present the SE equations, we introduce several notations. In terms of their implementations, rather than the proof, we re-write the generating function~(\ref{G}) as \begin{equation} G(z)=\frac{\sum_{t=0}^{\infty}p_{t}z^{-t}}{\sum_{t=0}^{\infty}q_{t}z^{-t}}, \end{equation} with $p_{0}=1$, $q_{0}=1$. In particular, $g_{t}=p_{t}$ holds when $q_{t}=0$ is selected for all $t>0$. 
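The recursion~(\ref{g_CAMP}) is simply long division of the power series $P(z)$ by $Q(z)$. A short sketch (Python/NumPy, with made-up coefficient sequences rather than those of Corollary~\ref{corollary3}) recovers $\{g_{t}\}$ from $\{p_{t}\}$ and $\{q_{t}\}$ and verifies the identity $Q(z)G(z)=P(z)$ term by term.

```python
import numpy as np

def taps_from_pq(p, q, T):
    """Compute g_t = p_t - sum_{tau=1}^{t} q_tau g_{t-tau}, i.e. the first T
    coefficients of G(z) = P(z)/Q(z) with p_0 = q_0 = 1."""
    g = np.zeros(T)
    for t in range(T):
        acc = p[t] if t < len(p) else 0.0
        for tau in range(1, min(t, len(q) - 1) + 1):
            acc -= q[tau] * g[t - tau]
        g[t] = acc
    return g

# Illustrative (made-up) series with p_0 = q_0 = 1.
p = np.array([1.0, -0.5, 0.25, 0.1])
q = np.array([1.0, 0.3, -0.2])
g = taps_from_pq(p, q, T=12)

# Q(z) * G(z) should reproduce P(z) up to order T - 1.
qg = np.convolve(q, g)[:12]
```

In particular, with `q = [1.0]` the loop is skipped and `g` equals `p`, matching the remark that $g_{t}=p_{t}$ when $q_{t}=0$ for all $t>0$.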
We define $\bar{\xi}_{t}$ as an asymptotic variable associated with $\xi_{t}$ given in (\ref{xi}), \begin{equation} \label{xi_bar} \bar{\xi}_{t} = \mathbb{E}[f_{t}'(x_{1}+h_{t})]. \end{equation} In (\ref{xi_bar}), $h_{t}$ is an independent zero-mean Gaussian random variable with variance $a_{t,t}$. Thus, $\bar{\xi}_{t}$ is a function of $a_{t,t}$. The notation $\bar{\xi}_{t'}^{(t)}=\prod_{\tau=t'}^{t}\bar{\xi}_{\tau}$ is defined in the same manner as in $\xi_{t'}^{(t)}$. Furthermore, we use the notational convention $\bar{\xi}_{t'}^{(t)}=1$ for $t'>t$. \begin{theorem} \label{theorem2} Suppose that the generating functions of the tap coefficients $\{g_{t}\}$ and $\{\theta_{t}\}$ satisfy the condition~(\ref{tap_coefficient_g}) in Theorem~\ref{theorem1}. Define $r_{t}=q_{t}*\theta_{t}$ via the convolution~(\ref{convolution}) and \begin{IEEEeqnarray}{rl} \mathfrak{D}_{\tau',\tau} &= (p_{\tau'+\tau} - p_{\tau'+\tau+1})*q_{\tau} + (p_{\tau} - p_{\tau-1})*q_{\tau'+\tau+1} \nonumber \\ +& (p_{\tau-1}-p_{\tau})*r_{\tau'+\tau+1} + (r_{\tau} - r_{\tau-1})*p_{\tau'+\tau+1} \nonumber \\ +& p_{\tau}*(r_{\tau'+\tau} - \delta_{\tau',0}r_{\tau}) - r_{\tau}*(p_{\tau'+\tau} - \delta_{\tau',0}p_{\tau}), \label{D} \end{IEEEeqnarray} where $\delta_{t,t'}$ denotes the Kronecker delta. Then, the covariance parameters $\{a_{t,t'}\}$ in (\ref{a_tt}) satisfy \begin{IEEEeqnarray}{rl} \sum_{\tau'=0}^{t'}\sum_{\tau=0}^{t}\bar{\xi}_{t'-\tau'}^{(t'-1)}\bar{\xi}_{t-\tau}^{(t-1)} \Big\{ \mathfrak{D}_{\tau',\tau}a_{t'-\tau',t-\tau}& \nonumber \\ - (p_{\tau}*r_{\tau'+\tau+1} - r_{\tau}*p_{\tau'+\tau+1})d_{t'-\tau',t-\tau}& \nonumber \\ - \sigma^{2}\left[ (q_{\tau'}q_{\tau})*(\theta_{\tau'+\tau} - \theta_{\tau'+\tau+1}) \right]&\Big\} =0, \label{SE_equation} \end{IEEEeqnarray} with \begin{IEEEeqnarray}{rl} &(q_{t'}q_{t})*(\theta_{t'+t} - \theta_{t'+t+1}) \nonumber \\ =& \sum_{\tau'=0}^{t'}\sum_{\tau=0}^{t}q_{\tau'}q_{\tau} (\theta_{t'-\tau'+t-\tau} - \theta_{t'-\tau'+t-\tau+1}). 
\end{IEEEeqnarray} In these expressions, all variables with negative indices are set to zero. The covariance parameters $\{d_{t,t'}\}$ are given in (\ref{Gaussianity}) with notational convention $f_{-1}(\cdot)=0$. \end{theorem} \begin{IEEEproof} See \cite[Theorem~4]{Takeuchi203}. \end{IEEEproof} The coupled SE equations~(\ref{Gaussianity}) and (\ref{SE_equation}) describe the dynamics of the covariance parameters $\{a_{t,t'}\}$ and $\{d_{t,t'}\}$. The SE equations can be solved with the initial condition $d_{0,0}=N^{-1}\mathbb{E}[\|\boldsymbol{x}\|^{2}]=1$ and the boundary conditions $a_{t,t'}=d_{t,t'}=0$ for $t<0$ or $t'<0$. We next optimize the denoiser $f_{t}$ to establish Bayes-optimal CAMP. The asymptotic Gaussianity~(\ref{Gaussianity}) indicates that the posterior mean estimator $f_{t}(\boldsymbol{u}_{t}) =\mathbb{E}[\boldsymbol{x}|\boldsymbol{u}_{t}]\equiv f_{\mathrm{opt}}(\boldsymbol{u}_{t}; a_{t,t})$ of $\boldsymbol{x}$ given the AWGN observation~(\ref{AWGN}) minimizes the asymptotic MSE~$d_{t+1,t+1}$ in (\ref{Gaussianity}) after denoising in CAMP. Thus, we use the posterior mean estimator as the Bayes-optimal denoiser. \begin{theorem} \label{theorem3} Use the Bayes-optimal denoiser. Suppose that the SE equations~(\ref{Gaussianity}) and (\ref{SE_equation}) converge, i.e.\ $\lim_{t',t\to\infty}a_{t',t}=a_{\mathrm{s}}$ and $\lim_{t',t\to\infty}d_{t',t}=d_{\mathrm{s}}$. 
If $\Theta(\xi_{\mathrm{s}}^{-1})=1$ and $1+(\xi_{\mathrm{s}}-1)d\Theta(\xi_{\mathrm{s}}^{-1})/(dz^{-1})\neq0$ hold for $\xi_{\mathrm{s}}=d_{\mathrm{s}}/a_{\mathrm{s}}$, then the fixed-point (FP) $(a_{\mathrm{s}}, d_{\mathrm{s}})$ satisfies \begin{equation} \label{fixed_point} a_{\mathrm{s}} = \frac{\sigma^{2}} {R_{\boldsymbol{A}^{\mathrm{T}}\boldsymbol{A}}(-d_{\mathrm{s}}/\sigma^{2})}, \; d_{\mathrm{s}}=\mathbb{E}\left[ \{f_{\mathrm{opt}}(x_{1}+h_{\mathrm{s}}; a_{\mathrm{s}}) - x_{1}\}^{2} \right], \end{equation} with $h_{\mathrm{s}}\sim\mathcal{N}(0,a_{\mathrm{s}})$, where $R_{\boldsymbol{A}^{\mathrm{T}}\boldsymbol{A}}(x)$ denotes the R-transform of the asymptotic eigenvalue distribution of $\boldsymbol{A}^{\mathrm{T}}\boldsymbol{A}$~\cite{Tulino04}. \end{theorem} \begin{IEEEproof} Without loss of generality, we assume $p_{t}=g_{t}$ and $q_{t}=\delta_{t,0}$. Since the SE equations have been assumed to converge, (\ref{SE_equation}) reduces to \begin{IEEEeqnarray}{rl} \sum_{\tau'=0}^{\infty}\sum_{\tau=0}^{\infty}\xi_{\mathrm{s}}^{\tau'+\tau} &\Big\{ \mathfrak{D}_{\tau',\tau}a_{\mathrm{s}} - (g_{\tau}*\theta_{\tau'+\tau+1} - \theta_{\tau}*g_{\tau'+\tau+1})d_{\mathrm{s}} \nonumber \\ &- \sigma^{2}(\theta_{\tau'+\tau} - \theta_{\tau'+\tau+1}) \Big\} =0, \label{SE_equation_FP} \end{IEEEeqnarray} as $t, t'\to\infty$, with \begin{IEEEeqnarray}{rl} \mathfrak{D}_{\tau',\tau} =& g_{\tau'+\tau} - g_{\tau'+\tau+1} +(g_{\tau-1}-g_{\tau})*\theta_{\tau'+\tau+1} \nonumber \\ &+ (\theta_{\tau} - \theta_{\tau-1})*g_{\tau'+\tau+1} + g_{\tau}*(\theta_{\tau'+\tau} - \delta_{\tau',0}\theta_{\tau}) \nonumber \\ &- \theta_{\tau}*(g_{\tau'+\tau} - \delta_{\tau',0}g_{\tau}). \label{D_tmp} \end{IEEEeqnarray} We next re-write (\ref{SE_equation_FP}) via the Z-transform. Define the Z-transform of a two-dimensional array $\{h_{t',t}\}_{t',t=0}^{\infty}$ as \begin{equation} H(y,z) = \sum_{t',t=0}^{\infty}h_{t',t}y^{-t'}z^{-t}. 
\end{equation} It is an elementary exercise to confirm that the Z-transform of $\{\mathfrak{D}_{t',t}\}$ given in (\ref{D_tmp}) is equal to~\cite{Takeuchi203} \begin{IEEEeqnarray}{rl} F_{G,\Theta}(y,z) =& (y^{-1}+z^{-1}-1)[G(z)\Delta_{\Theta} - \Theta(z)\Delta_{G}] \nonumber \\ &+ \Delta_{G_{1}} - \Delta_{G}, \label{F_G} \end{IEEEeqnarray} where $G(z)$ and $\Theta(z)$ are the Z-transforms of $\{g_{t}\}$ and $\{\theta_{t}\}$, respectively, with \begin{equation} F_{1}(z)=z^{-1}F(z),\quad \Delta_{F(y)} = \frac{F(y) - F(z)}{y^{-1}-z^{-1}}. \end{equation} Representing the remaining terms in (\ref{SE_equation_FP}) with the Z-transform, we obtain \begin{equation} F_{G,\Theta}(y,z)\frac{d_{\mathrm{s}}}{\xi_{\mathrm{s}}} =\{G(z)\Delta_{\Theta} - \Theta(z)\Delta_{G}\}d_{\mathrm{s}} + (\Delta_{\Theta_{1}} - \Delta_{\Theta})\sigma^{2} \label{SE_equation_FP_Z} \end{equation} in the limit $y, z\to\xi_{\mathrm{s}}^{-1}$, where we have used the identity $\xi_{\mathrm{s}}=d_{\mathrm{s}}/a_{\mathrm{s}}$ for the Bayes-optimal denoiser. We simplify (\ref{SE_equation_FP_Z}) via series expansion. Series-expanding $\Delta_{F}$ and $\Delta_{F_{1}}$ with respect to $z^{-1}$ at $z=y$ up to the first order, we have \begin{equation} \left\{ 1 + (\xi_{\mathrm{s}}-1) \frac{d\Theta}{dz^{-1}}(\xi_{\mathrm{s}}^{-1}) \right\}\left\{ \frac{G(\xi_{\mathrm{s}}^{-1})d_{\mathrm{s}}}{\xi_{\mathrm{s}}} - \sigma^{2} \right\} = 0 \end{equation} under the assumption $\Theta(\xi_{\mathrm{s}}^{-1})=1$. Since $1+(\xi_{\mathrm{s}}-1)d\Theta(\xi_{\mathrm{s}}^{-1})/(dz^{-1})\neq0$ has been assumed, we arrive at \begin{equation} \label{G_idenity} \frac{G(\xi_{\mathrm{s}}^{-1})}{\xi_{\mathrm{s}}} = \frac{\sigma^{2}}{d_{\mathrm{s}}}.
\end{equation} To prove the FP~(\ref{fixed_point}), we utilize the following relationship between the $\eta$-transform and the R-transform~\cite[Eq.~(2.74)]{Tulino04}: \begin{equation} \label{relationship} \eta(x) = \frac{1}{1 + xR_{\boldsymbol{A}^{\mathrm{T}}\boldsymbol{A}}(-x\eta(x))}. \end{equation} Using (\ref{tap_coefficient_g}) to evaluate (\ref{relationship}) at $x=x^{*}$ given by \begin{equation} \label{pole} x^{*} = \frac{1-(1-z^{-1})\Theta(z)}{(1-z^{-1})G(z)}, \end{equation} we obtain \begin{equation} G(z) = \Theta(z) R_{\boldsymbol{A}^{\mathrm{T}}\boldsymbol{A}}\left( -\frac{1 - (1-z^{-1})\Theta(z)}{G(z)}\Theta(z) \right). \end{equation} Letting $z=\xi_{\mathrm{s}}^{-1}$ and applying the assumption $\Theta(\xi_{\mathrm{s}}^{-1})=1$ yield \begin{equation} G(\xi_{\mathrm{s}}^{-1}) = R_{\boldsymbol{A}^{\mathrm{T}}\boldsymbol{A}}\left( - \frac{\xi_{\mathrm{s}}}{G(\xi_{\mathrm{s}}^{-1})} \right). \end{equation} Substituting (\ref{G_idenity}) into this identity and using $\xi_{\mathrm{s}}=d_{\mathrm{s}}/a_{\mathrm{s}}$, we arrive at the FP~(\ref{fixed_point}). \end{IEEEproof} The FP~(\ref{fixed_point}) is equal to that of the Bayes-optimal performance~\cite{Takeda06,Tulino13,Barbier18}. This implies that CAMP with the Bayes-optimal denoiser is Bayes-optimal if it converges as $t\to\infty$ and if the FP~(\ref{fixed_point}) is unique. Thus, we refer to CAMP with the Bayes-optimal denoiser as Bayes-optimal CAMP or simply as CAMP. The original CAMP~\cite{Takeuchi202} with $\theta_{0}=1$ and $\theta_{t}=0$ for $t>0$ satisfies the conditions with respect to $\Theta(z)$ in Theorem~\ref{theorem3}. Thus, the parameters $\{\theta_{t}\}$ only contribute to the convergence properties of CAMP. Numerical evaluation in the next section implies that using non-zero $\theta_{t}$ improves the stability of CAMP. \section{Numerical Results} In all numerical results, we assume the Bernoulli-Gaussian (BG) prior with signal density $\rho\in[0, 1]$. 
Each signal $x_{n}$ takes zero with probability $1-\rho$. Otherwise, $x_{n}$ is sampled from the zero-mean Gaussian distribution with variance $1/\rho$. As a practical alternative to $\boldsymbol{V}$ in Assumption~\ref{assumption_A}, we used Hadamard matrices with random permutation in numerical simulations. See Corollary~\ref{corollary3} for the details of singular values. For simplicity, we assume $\theta_{t}=0$ for all $t>2$. To impose the condition $\Theta(a_{\mathrm{s}}/d_{\mathrm{s}})=1$ in Theorem~\ref{theorem3}, we use $\theta_{0}=1$, $\theta_{1}=-\theta d_{\mathrm{s}}/a_{\mathrm{s}}$, and $\theta_{2}=\theta\in\mathbb{R}$, in which $(a_{\mathrm{s}}, d_{\mathrm{s}})$ is a solution to the FP equations~(\ref{fixed_point}). In particular, the CAMP reduces to the original CAMP in \cite{Takeuchi202} for $\theta=0$. We first solve the SE equations~(\ref{Gaussianity}) and (\ref{SE_equation}) to investigate the convergence property of CAMP. Figure~\ref{fig1} shows the asymptotic MSEs of the original CAMP with $\theta=0$~\cite{Takeuchi202} and of the proposed CAMP with $\theta=-0.7$. The proposed CAMP can achieve the Bayes-optimal MSE~\cite{Takeda06,Tulino13,Barbier18} while the original CAMP with $\theta=0$ fails to converge. This implies that using a non-zero $\theta$ improves the stability of CAMP. \begin{figure}[t] \begin{center} \includegraphics[width=\hsize]{fig1.eps} \caption{ Asymptotic MSE $d_{t+1,t+1}$ versus the number of iterations~$t$ for the CAMP. $\delta=0.5$, $\rho=0.1$, condition number~$\kappa=17$, and $1/\sigma^{2}=30$~dB. } \label{fig1} \end{center} \end{figure} We next investigate what occurs for the original CAMP with $\theta=0$. In Theorem~\ref{theorem3}, we have assumed the convergence of the asymptotic covariance~$d_{\tau,t}$ given in (\ref{Gaussianity}). As shown in Fig.~\ref{fig2}, a soliton-like wave propagates for $\theta=0$ while the convergence assumption in Theorem~\ref{theorem3} is valid for $\theta=-0.7$.
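As a quick sanity check, the parameterization of $\{\theta_{t}\}$ given above satisfies $\Theta(a_{\mathrm{s}}/d_{\mathrm{s}})=1$ identically, for any value of $\theta$. A minimal numerical sketch (the values of $\theta$, $a_{\mathrm{s}}$ and $d_{\mathrm{s}}$ below are arbitrary placeholders, not solutions of the FP equations):

```python
def Theta(z, thetas):
    """Z-transform of the finite sequence {theta_t}, evaluated at z."""
    return sum(th * z ** (-t) for t, th in enumerate(thetas))

# Placeholder values; the identity holds for any theta, a_s, d_s.
theta, a_s, d_s = -0.7, 0.8, 0.05
thetas = [1.0, -theta * d_s / a_s, theta]   # theta_0, theta_1, theta_2

# Condition Theta(a_s/d_s) = 1 from Theorem 3, i.e. at z = xi_s^{-1}.
assert abs(Theta(a_s / d_s, thetas) - 1.0) < 1e-12
```

The cancellation is exact: the $\theta_1$ and $\theta_2$ contributions are equal and opposite at $z=a_{\mathrm{s}}/d_{\mathrm{s}}$, so the check passes for any $\theta$.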
The reason why the soliton-like wave occurs is as follows: As the condition number $\kappa$ increases, $a_{\tau,t}$ in the SE equation~(\ref{SE_equation}) becomes unstable. On the other hand, the forgetting factor $\bar{\xi}_{t}\in(0,1]$ in (\ref{xi_bar}) decreases in general as $a_{t,t}$ grows. As a result, a soliton-like quasi-steady wave appears when $a_{\tau,t}$ is unstable. It is an interesting future issue to analyze the stability of the SE equation~(\ref{SE_equation}). \begin{figure}[t] \begin{center} \includegraphics[width=\hsize]{fig2.eps} \caption{ Asymptotic covariance $d_{\tau, t}$ versus $\tau$ for the CAMP. $\delta=0.5$, $\rho=0.1$, condition number~$\kappa=17$, and $1/\sigma^{2}=30$~dB. } \label{fig2} \end{center} \end{figure} We finally present numerical simulations for the CAMP, AMP~\cite{Donoho09}, and OAMP/VAMP~\cite{Ma17,Rangan192}. See Table~\ref{table1} for the complexity of these algorithms. As long as the number of iterations $t$ is sufficiently smaller than $M$, CAMP has the same complexity as AMP, while OAMP/VAMP requires higher complexity. \begin{table}[t] \begin{center} \caption{ Complexity in $M\leq N$ and the number of iterations~$t$. } \label{table1} \begin{tabular}{|c|c|c|} \hline & Time complexity & Space complexity \\ \hline CAMP & ${\cal O}(tMN + t^{2}M + t^{4})$ & ${\cal O}(MN + tM + t^{2})$ \\ \hline AMP & ${\cal O}(tMN)$ & ${\cal O}(MN)$ \\ \hline OAMP/VAMP & ${\cal O}(M^{2}N+tMN)$ & ${\cal O}(N^{2}+MN)$ \\ \hline \end{tabular} \end{center} \end{table} We used a damping technique to improve the convergence properties of the three algorithms. See \cite[Sec.~III-E and Sec.~IV-A]{Takeuchi203} for the details of damping. The parameter $\theta$ and damping factor were optimized via exhaustive search. As shown in Fig.~\ref{fig3}, AMP approaches the Bayes-optimal MSE only for small condition numbers. On the other hand, the CAMP is Bayes-optimal for low-to-moderate condition numbers. 
However, the performance of CAMP degrades for high condition numbers, for which OAMP/VAMP still achieves the Bayes-optimal MSE. These results imply that, while CAMP improves on the convergence properties of AMP, it still has room for improvement for high condition numbers. \begin{figure}[t] \begin{center} \includegraphics[width=\hsize]{fig3.eps} \caption{ MSE versus the condition number~$\kappa$ for the CAMP, AMP, and OAMP/VAMP. $M=2^{10}$, $N=2^{11}$, $\rho=0.1$, $1/\sigma^{2}=30$~dB, $100$ iterations, and $10^{5}$ independent trials. } \label{fig3} \end{center} \end{figure} \section*{Acknowledgment} The author was supported in part by the Grant-in-Aid for Scientific Research~(B) (JSPS KAKENHI Grant Numbers 18H01441 and 21H01326), Japan. \balance \bibliographystyle{IEEEtran}
\section{Introduction} Gauge theories of fundamental interactions have been the cornerstone of describing the physical world at the most basic level. Their enormous success primarily lies in the region where the coupling strength is small enough and the tools of perturbation theory are reliable. However, not all interesting phenomena can be accessed in this approximation scheme. In the non perturbative sector of quantum chromodynamics (QCD), two major phenomena emerge: 1) color confinement, and 2) dynamical chiral symmetry breaking (DCSB). For studying strongly interacting bound states, a reliable understanding of these phenomena is essential. However, it can be achieved solely through non perturbative techniques such as lattice QCD, SDEs,~\cite{Dyson:1949ha,Schwinger:1951ex}, chiral perturbation theory and effective quark models. Keeping this in mind, our interest is focussed on the study of the physically acceptable truncations of SDEs beyond perturbation theory. SDEs are the fundamental equations of motion of any quantum field theory (QFT). They form an infinite set of coupled integral equations that relate the $n$-point Green function to the $(n+1)$-point function. As the simplest example, propagators are related to the three point vertices, the latter to the four point functions and so on,~\textit{ad infinitum}. As their derivation requires no assumption regarding the strength of the interaction, they are ideally suited for studying interactions like QCD, where one single theory has diametrically opposed perturbative and non perturbative facets in the ultraviolet and infrared regimes of momenta, respectively. Unfortunately, being an infinite set of coupled equations, they are intractable without some simplifying assumptions. Typically, in the non perturbative region, SDEs are truncated at the level of two-point Green functions (propagators). We must then use an \textit{ansatz} for the full three point vertex. This has to be done carefully. 
Otherwise, solutions can be in conflict with some of the key features of a QFT, such as gauge invariance of physical observables and renormalizability of the divergent functions involved, thus jeopardizing the credibility of the truncation scheme employed. In contrast with the complicated non abelian scenario of QCD, quantum electrodynamics (QED) has proved to be a good starting point in studying the non perturbative regime of the SDEs. Better yet, in the absence of Dirac matrices, scalar QED (sQED) can offer an even more attractive model to construct acceptable non perturbative \textit{ans\"atze} for the vertices involved. In this article, we set out to construct a scalar-photon three point vertex which must comply with the following key criteria: \begin{itemize} \item It must satisfy the {\bf Ward-Fradkin-Green-Takahashi identity} (WFGTI),~\cite{Ward:1950xp,Green:1953te,Takahashi:1957xn}. \end{itemize} Just like in spinor QED and QCD, Ball and Chiu,~\cite{Ball:1980ay}, provide the non perturbative form of the longitudinal three point vertex in sQED, which explicitly satisfies the WFGTI,~\cite{Ward:1950xp,Green:1953te,Takahashi:1957xn}. We take it as our starting point. \begin{itemize} \item It must satisfy the {\bf local gauge covariance} properties of the theory. \end{itemize} Note that although the WFGTI is a consequence of gauge invariance, it is insufficient to ensure the local gauge covariance relation of the scalar propagator. In order to ensure the latter, we demand the transverse part of the vertex to be constrained by the LKFTs,~\cite{Landau:1955zz,Fradkin:1955jr,Johnson:1959zz,Zumino:1959wt}. The LKFTs are a well defined set of transformations which describe the response of the Green functions to an arbitrary gauge transformation. These transformations leave the SDEs and the WFGTI form-invariant and ensure that the chiral quark condensate is gauge invariant in spinor QED and QCD, a feature not guaranteed by satisfying the WFGTI alone.
Therefore, LKFTs potentially play an important role in guiding us toward an improved {\em ansatz} for the three point vertex and imposing gauge invariant chiral symmetry breaking; see, for example, Refs.~\cite{Burden:1993gy,Bashir:2000ur,Bashir:2002sp,Bashir:2004hh,Bashir:2006ga,Bashir:2005wt,Bashir:2008ej,Bashir:2009fv,Aslam:2015nia}. More recently, these transformations have also been studied in the world line formalism, where we generalize LKFTs to arbitrary amplitudes in sQED,~\cite{Ahmadiniaz:2015kfq}. The role of the truncation scheme in preserving gauge invariance of observables has also been studied in simpler gauge theories such as QED3, e.g.,~\cite{Maris:1996zg,Bashir:2002dz,Bashir:2002sp,Bashir:2004yt,Fischer:2004nq,Lo:2010fm}. These works involve introducing constraints of gauge invariance in the truncations. In Ref.~\cite{Fischer:2004nq}, it was shown that even the most sophisticated full Curtis-Pennington (CP) or Ball-Chiu (BC) vertices, naively employed in different covariant gauges, are not sufficient to ensure gauge invariant results for physical observables and the expected gauge covariance properties of the fermion propagator. However, in later articles~\cite{Bashir:2005wt,Bashir:2008fk}, the need to incorporate the LKFT correctly was emphasized in order to obtain gauge invariance of corresponding physical observables, such as the chiral quark condensate and the confinement-deconfinement phase transition as a function of the number of fermion flavors ($n_f$). \begin{itemize} \item It must ensure the {\bf multiplicative renormalizability} (MR) of the two point propagator. \end{itemize} Studies in massless scalar and spinor QED as well as in QCD demonstrate that the LKFT of the wavefunction renormalization implies an MR form of a power law,~\cite{Bashir:2004hh,Bashir:2002sp,Bashir:2004mu,Bashir:2006ga,Aslam:2015nia}.
We would like to reiterate that this solution can be reproduced only with an appropriate choice of the electron-photon three point vertex, as demonstrated first in Ref.~\cite{Curtis:1990zs}. There has been a series of works, spanning a couple of decades, which construct the electron-photon vertex, implementing the LKFT and MR of the electron propagator,~\cite{Curtis:1990zs,Curtis:1991fb,Dong:1994jr,Bashir:1994az,Bashir:1995qr,Bashir:1997qt,Bashir:1999bd,Bashir:2000rv,Kizilersu:2009kg,Bashir:2011ij,Bashir:2011dp}. In Ref.~\cite{Bashir:2011dp}, MR was implemented for the fermion propagator, which simultaneously ensures the gauge invariance of the critical coupling above which chiral symmetry is dynamically broken. In this article, we impose the conditions of MR on the three point scalar-photon vertex in sQED. It involves an unknown function $W(x)$ of a dimensionless ratio $x$ of momenta, satisfying an integral constraint which guarantees the MR of the scalar propagator. In this construction, we assume that the transverse vertex has no dependence on the angle between the incoming and outgoing momenta of the scalar particle, an approximation which can be readily undone through defining an effective transverse vertex. \begin{itemize} \item It should reduce to its {\bf perturbation theory} Feynman expansion in the limit of weak coupling. \end{itemize} A truncation of the complete set of SDEs that maintains gauge invariance and MR of a gauge theory at every level of approximation is perturbation theory. Physically meaningful solutions of the SDEs must agree with perturbative results in the weak coupling regime. We use one loop perturbative calculations as a guiding principle for the three point vertex,~\cite{Kizilersu:1995iz,Davydychev:2000rt,Bashir:2007qq}. In our construction in terms of the function $W$ mentioned above, we explore how perturbation theory provides an additional constraint.
Using a one loop calculation of the scalar-photon three point vertex presented in Refs.~\cite{Bashir:2007qq,Bashir:2009xx}, we derive a perturbative constraint on $W(x)$ to ${\cal{O}}(\alpha)$, in the leading logarithms approximation (LLA). We ensure that our non perturbative construction of the said vertex satisfies this constraint. \begin{itemize} \item It must have the same {\bf symmetry properties} as the bare vertex under charge conjugation, parity and time reversal. \item One loop perturbation theory suggests that it should be free of any {\bf kinematic singularities}. Following Ball and Chiu,~\cite{Ball:1980ay}, we shall enforce this requirement. \end{itemize} The scalar-photon three point vertex $\Gamma^{\mu}(k,p)$ must be symmetric under the exchange of momenta $k$ and $p$. Moreover, we do not expect it to have kinematic singularities as $k^2 \Rightarrow p^2$. We build these features into our construction. The paper is organized as follows: in~\sect{sec:SDE-SP} we introduce the SDE for the massless scalar propagator in quenched sQED. We define the longitudinal and transverse parts of the scalar-photon vertex and simplify the SDE by performing angular integration. In~\sect{sec:SP-LKFT}, we study the LKFT for the scalar propagator to obtain a non perturbative expression for the wavefunction renormalization which defines this propagator. We introduce and explain the concept of MR in~\sect{sec:SP-MR}. We deduce a power law solution for the wavefunction renormalization of the scalar propagator and compare it with the findings of the LKFT in~\sect{sec:SP-LKFT}. \sect{sec:Vertex} contains details of how we impose constraints of the LKFT and MR on the three point transverse scalar-photon vertex in terms of the function $W(x)$. In~\sect{sec:PT}, we add additional constraints of one loop perturbation theory, symmetry properties and the lack of kinematic singularities. 
We also construct an explicit example of a non perturbative massless three point scalar-photon vertex. We present our conclusions and discussion in~\sect{sec:Conc}. \newpage \section{\label{sec:SDE-SP} The SDE for the Scalar Propagator} The explicit form of the sQED Lagrangian is: \begin{eqnarray} {\cal L}_{\rm sQED} &=& -\frac{1}{4} F_{\mu \nu} F^{\mu \nu} - \frac{1}{2 \xi} \, \left( \partial^{\mu} A_{\mu} \right)^2 + \left( \partial^{\mu} \varphi^* \right) \left( \partial_{\mu} \varphi \right) \nonumber \\ &-& m^2 \varphi^* \varphi -i e \left( \varphi^* \partial^{\mu} \varphi - \varphi \partial^{\mu} \varphi^* \right) A_{\mu} \nonumber \\ &+& 2 e^2 \varphi^* A^{\mu} \varphi A_{\mu} - \frac{\lambda}{4} (\varphi^* \varphi)^2 \,. \end{eqnarray} The detailed derivation of the SDEs for relevant Green functions for this sQED Lagrangian already exists in literature,~\cite{Binosi:2006da}. The SDE for the scalar propagator $S(k)$, in the quenched approximation, is shown in Fig.~\ref{ScalarPropagatorSDE}: \begin{figure}[ht] \includegraphics[width=0.45\textwidth]{PropagatorSDEComplete-Modified.pdf} \caption{The SDE for the scalar propagator. The color-filled solid blobs labelled with $S$ and $\Gamma_{\nu}$ stand for the full scalar propagator and the full scalar-photon vertex, respectively. 
The dots ($\cdots$) represent all the diagrams whose contribution begins at the two loop level.} \label{ScalarPropagatorSDE} \end{figure} \noindent Mathematically, this is written as: \begin{eqnarray} -iS^{-1}(k) & = & -iS_{0}^{-1}(k) \nonumber \\ &+& e^{2} \int_{M}{ \frac{d^{4} \omega}{(2\pi)^{4}} (\omega + k)^{\mu} S(\omega) \Gamma^{\nu}(\omega,k) \Delta_{\mu\nu}(q) } \nonumber \\ &-& e^{2} \int_{M}{ \frac{d^{4} \omega}{(2\pi)^{4}} \Gamma^{\mu\nu}_0 (k,-\omega,k,\omega) \Delta_{\mu\nu}(\omega) } \nonumber \\ &-& \int_{M}{ \frac{d^{4} \omega}{(2\pi)^{4}} S(\omega) \Gamma_0(k,\omega) } + \cdots \,, \label{gap equation} \end{eqnarray} where $e$ is the electromagnetic coupling, $q=\omega - k$, and the subscript $M$ indicates integration over the entire Minkowski space. $\Delta_{\mu\nu}(\omega)$ and $S_{0}(k)$ are the bare photon and scalar propagators. $S(k)$ is the full scalar propagator. For massless scalars, $S(k)$ can be expressed in terms of the so-called wavefunction renormalization $F(k^{2},\Lambda^{2})$, so that \begin{equation} S(k)=\frac{F(k^{2},\Lambda^{2})}{k^{2}} \,, \label{scalar propagator} \end{equation} where $\Lambda$ is the ultraviolet cut-off used to regularize the divergent integrals involved. The bare scalar propagator is given by $S_{0}(k)=1/k^{2}$. The bare photon propagator is \begin{equation} \Delta_{\mu\nu}(q)=\frac{1}{q^{2}} \left[ g_{\mu\nu} +(\xi-1) \frac{q_{\mu}q_{\nu}}{q^{2}} \right] \,, \label{photon propagator} \end{equation} and it remains unrenormalized in the quenched approximation. $\Gamma^{\mu\nu}_0 (k,-\omega,k,\omega)=2 i e^2 g^{\mu \nu}$ and $\Gamma_0(k,\omega) = - i \lambda$ are the bare four point scalar-scalar-photon-photon and the four-scalar vertices, respectively. The last two diagrams of the gap equation, Eq.~(\ref{gap equation}), will be referred to as the photon and the scalar bubble diagrams, in that order. 
$\Gamma^{\nu}(\omega,k)$ is the full three point scalar-photon vertex, for which we must make an \textit{ansatz} in order to solve Eq.~(\ref{gap equation}). The WFGTI for this vertex, i.e., \begin{equation} q_{\mu}\Gamma^{\mu}(\omega,k)=S^{-1}(\omega)-S^{-1}(k) \,, \label{WGTI for the 3-point vertex} \end{equation} allows us to decompose it as a sum of longitudinal and transverse components, as suggested by Ball and Chiu,~\cite{Ball:1980ay}: \begin{equation} \Gamma^{\mu}(\omega,k)=\Gamma_{L}^{\mu}(\omega,k)+\Gamma_{T}^{\mu}(\omega,k) \,. \label{Ball-Chiu vertex decomposition} \end{equation} The \textit{longitudinal} part $\Gamma_{L}^{\mu}(\omega,k)$ satisfies the WFGTI, Eq.~(\ref{WGTI for the 3-point vertex}), by itself, and the \textit{transverse} part $\Gamma_{T}^{\mu}(\omega,k)$, which remains completely undetermined, is naturally constrained by \begin{equation} q_{\mu}\Gamma_{T}^{\mu}(\omega,k)=0 \,. \label{transverse part definition} \end{equation} Moreover, \begin{equation} \Gamma_{T}^{\mu}(k,k)=0 \,. \label{kklimit} \end{equation} In order to satisfy the WFGTI in a manner free of kinematic singularities, we follow Ball and Chiu and write \begin{equation} \Gamma_{L}^{\mu}(\omega,k) = \frac{S^{-1} (\omega)-S^{-1}(k) }{ \omega^{2} - k^{2} } (\omega + k)^{\mu} \,. \label{Longitudinal vertex} \end{equation} This construction implies that the ultraviolet divergences solely reside in the longitudinal part. Moreover, recall the following relations between the renormalized and bare quantities: \begin{eqnarray} S^{R}(p) = {\cal Z}_2^{-1} S(p) \,, \; \Gamma^{\mu}_{R}(k,p) = {\cal Z}_1 \Gamma^{\mu}(k,p) \,. \end{eqnarray} Thus, the form of the longitudinal vertex in Eq.~(\ref{Longitudinal vertex}) guarantees the relation ${\cal Z}_1 = {\cal Z}_2$. Consequently, the running of the coupling is dictated by the corrections to the photon propagator alone. In the approximation of quenched sQED, the coupling does not run. 
If we unquench the theory, it is easy to calculate ${\cal Z}_3$ and the running coupling constant with the well-known expression: \begin{eqnarray} \alpha(Q^2) = \frac{\alpha(Q_0^2)}{1 - \left( \alpha(Q_0^2)/12 \pi \right) {\rm ln}(Q^2/Q_0^2) } \,. \end{eqnarray} The ultraviolet finite transverse vertex can be expanded out in terms of one unknown function $\tau(\omega^{2},k^{2},q^{2})$,~\cite{Ball:1980ay}: \begin{equation} \Gamma_{T}^{\mu}(\omega,k) = \tau(\omega^{2},k^{2},q^{2}) T^{\mu}(\omega,k) \,, \label{Transverse vertex} \end{equation} where \begin{equation} T^{\mu}(\omega,k) = \left( \omega \cdot q \right) k^{\mu} - \left( k \cdot q \right) \omega^{\mu} \label{Tensor_mu} \end{equation} is the transverse basis vector in the Minkowski space and fulfils Eqs.~(\ref{transverse part definition},\ref{kklimit}). To begin with, the form factor $\tau(\omega^{2},k^{2},q^{2})$ is an unconstrained scalar function (representing an $8$-fold simplification of the spinor QED/QCD case). Following the non perturbative vertex construction/truncation of Refs.~\cite{Ball:1980ay,Bashir:2007qq}, our analysis ensures that gauge invariance (in terms of the WFGTI) for the scalar propagator and the scalar-photon vertex is satisfied. Within our truncation, another source of gauge non-invariance in the scalar propagator could be the lack of implementation of the LKFT, a feature of the bare as well as BC vertices. We make sure that our {\em ansatz} for the transverse part satisfies this constraint non perturbatively. The photon propagator also has its own Ward identity, but we work throughout in the quenched approximation. Therefore, within the confines of our assumptions, it receives no corrections and hence the four point diagrams we have discarded do not affect the correct gauge invariance properties of the scalar propagator. They will be essential, for example, in ensuring the transversality of the photon propagator in unquenched sQED, not investigated in the present work.
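As an aside, the unquenched running coupling quoted above is the resummed solution of the one-loop flow $d\alpha/d\ln Q^{2}=\alpha^{2}/(12\pi)$ implied by that expression. A minimal numerical check (the initial condition $\alpha(Q_0^2)$ and the finite-difference step are arbitrary placeholders):

```python
import math

def alpha_running(Q2, alpha0, Q02):
    """One-loop resummed running coupling quoted in the text."""
    return alpha0 / (1.0 - (alpha0 / (12.0 * math.pi)) * math.log(Q2 / Q02))

alpha0, Q02 = 0.1, 1.0        # placeholder initial condition
L, h = 3.0, 1e-6              # L = ln(Q^2/Q0^2); h = finite-difference step

# Central difference for d(alpha)/d(ln Q^2) at ln(Q^2/Q0^2) = L.
lhs = (alpha_running(Q02 * math.exp(L + h), alpha0, Q02)
       - alpha_running(Q02 * math.exp(L - h), alpha0, Q02)) / (2.0 * h)
# One-loop flow implied by the resummed expression.
rhs = alpha_running(Q02 * math.exp(L), alpha0, Q02) ** 2 / (12.0 * math.pi)
assert abs(lhs - rhs) < 1e-8
```

The agreement holds for any $L$ below the Landau pole at $L=12\pi/\alpha(Q_0^2)$, where the denominator vanishes.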
Furthermore, there are also restrictions of the gauge transformations on how the three point vertex is related to the four point vertex, constraining the form of the latter. In Ref. ~\cite{Bashir:2009xx}, two of the present authors exploited these constraints to carry out its non perturbative construction consistent with WFGTI which relates three point vertices to the four point ones. There is an undetermined part which is transverse to one or both the external photons, and needs to be evaluated through perturbation theory. They present in detail how the transverse part at the one loop order can be evaluated for completely general kinematics of momenta involved in covariant gauges and dimensions. In this article, our focus is on constraining the non perturbative three point scalar-photon vertex, capturing its key features, in particular its gauge covariance properties, its perturbative expansion in the LLA as well as the MR of the scalar propagator. We make use of Eqs.~(\ref{WGTI for the 3-point vertex}, \ref{Ball-Chiu vertex decomposition},\ref{Longitudinal vertex}, \ref{Transverse vertex},\ref{Tensor_mu}) in the gap equation, i.e., Eq.~(\ref{gap equation}), and then Wick rotate it to the Euclidean space to write: \begin{eqnarray} \hspace{-1cm} \frac{1}{F(k^{2},\Lambda^{2})} & = & 1-\frac{\alpha}{4\pi^{3}} \frac{1}{k^{2}} \int_{E}{ d^{4}\omega \frac{1}{q^{2}} \left\{ \left[ 1-\frac{S(\omega)}{S(k)} \right] \right. } \nonumber \\ & & \hspace{-1cm} \times \left[1 + (\xi -1) \frac{\omega^{2}-k^{2}}{q^{2}} +2\frac{k^{2}}{\omega^{2}-k^{2}} \right. \nonumber \\ & & \hspace{-1cm} \left. \left. +2\frac{\omega \cdot k}{\omega^{2}-k^{2}} \right] - 2S(\omega) \tau(\omega^{2},k^{2},q^{2}) \Delta^{2} \right\} \,, \label{equation for F without assumptions} \end{eqnarray} where $\Delta^2=\left( \omega \cdot k \right)^{2} - \omega^{2} k^{2}$, $\alpha=e^{2}/4\pi$ is the bare coupling constant, and the subscript $E$ indicates integration over the whole Euclidean space. 
Note that we have neglected the photon and the scalar bubble diagrams as well as the diagrams whose contribution begins at the two loop level, since they do not contribute to leading logs in the one loop calculation, as we shall discuss later. At this stage, it appears impossible to proceed any further because of the dependence of $\tau$ on the angle between the incoming and outgoing momenta $\omega$ and $k$ of the scalar particle. We shall assume that the transverse vertex has no dependence on this angle, i.e., it is independent of $q^{2}$. Consequently, this vertex is only an effective one which will allow us to capture many key features of the theory in a simple manner. This assumption allows us to carry out the angular integration in Eq.~(\ref{equation for F without assumptions}). In this sense, we are calculating an \textit{effective} transverse vertex. Note that it is easy to undo this independent angle approximation exactly. This has been explained and employed in Refs.~\cite{Bashir:1997qt,Bashir:2011vg} for the case of spinor QED. Based upon the results found in these articles and our cross-check for sQED, we conclude that undoing this approximation has insignificant qualitative implications for the three point scalar-photon vertex {\em ansatz}, and hence we do not report the corresponding findings.
The angular integration leads us to \begin{eqnarray} \frac{1}{F(k^{2},\Lambda^{2})} & = & 1 - \frac{\alpha}{4\pi} \int_{0}^{ k^{2}}{ d\omega^{2} \frac{\omega^{2}}{k^{2}} \left[ 1-\frac{S(\omega)}{S(k)} \right] } \nonumber \\ & & \hspace{1cm} \times \left[ \frac{(2 - \xi)}{k^{2}} + \frac{1}{\omega^{2}-k^{2}} \left( 2 + \frac{\omega^{2}}{k^{2}} \right) \right] \nonumber \\ & & \hspace{-.5cm} - \frac{\alpha}{4\pi} \int_{ k^{2}}^{\Lambda^{2}}{ d\omega^{2} \left[ 1-\frac{S(\omega)}{S(k)} \right] \left\{ \frac{3}{\omega^{2} - k^{2}} + \frac{\xi}{k^{2}} \right\} } \nonumber \\ & & \hspace{-.5cm} + \frac{\alpha}{8\pi} \int_{0}^{ k^{2}}{ d\omega^{2} \omega^{2} S(\omega) \tau(\omega^{2},k^{2}) \left( \frac{\omega^{4}}{k^{4}} - 3 \frac{\omega^{2}}{k^{2}} \right) } \nonumber \\ & & \hspace{-.5cm} + \frac{\alpha}{8\pi} \int_{ k^{2}}^{\Lambda^{2}}{ d\omega^{2} \omega^{2} S(\omega) \tau(\omega^{2},k^{2}) \left( \frac{k^{2}}{\omega^{2}} - 3 \right) } \,. \nonumber \\ & & \label{equation for F with effective vertex} \end{eqnarray} At this point, it is obvious that we require the knowledge of the form factor $\tau(\omega^{2},k^{2})$ to find the wavefunction renormalization $F(k^{2},\Lambda^{2})$. However, this problem can be inverted. The requirements of LKFT and the MR of $F(k^{2},\Lambda^{2})$ can tightly constrain the function $\tau(\omega^{2},k^{2})$. We would like to stress that these constraints will be valid only within our truncation scheme which consists of the set of assumptions and hypotheses we have detailed before. We study them in the next section. \section{\label{sec:SP-LKFT} Scalar Propagator and LKFT} These transformations have the simplest structure in the Euclidean coordinate space. 
Therefore, we start by defining the Fourier transformations between the scalar propagators in coordinate and momentum spaces: \begin{eqnarray} {\cal S}_E(x;\xi)&=& \int \frac{d^dk}{(2\pi)^d} \ {\rm e}^{-i\mathbf{k} \cdot \mathbf{x}} \, S_E(k;\xi)\,,\label{f2x}\\ S_E(k;\xi)&=& \int d^dx \ {\rm e}^{i\mathbf{k} \cdot \mathbf{x}} \, {\cal S}_E(x;\xi)\;. \label{f2p} \end{eqnarray} Notice a slight modification of notation that we shall use in this section: $S(p) \Rightarrow S(p;\xi)$ for the sake of clarity. Moreover, we use the notation ${\cal S}$ for the propagator in the coordinate space in order to specify that its functional dependence is different from that of $S$, the same propagator in the momentum space. The subscript $E$ stands for the Euclidean space. The LKFT relating the coordinate space scalar propagator in a given gauge $\xi_0$ to the one in an arbitrary covariant gauge $\xi$ reads: \begin{eqnarray} {\cal S}_E^{\rm LKFT}(x;\xi) = {\cal S}_E(x;\xi_0){\rm e}^{-i [\Delta(0)-\Delta(x)]} \;,\label{LKprop} \end{eqnarray} where \begin{eqnarray} \Delta (x)&=&-i (\xi-\xi_0) e^2 (\mu x)^{4-d} \int \frac{d^dk}{(2\pi)^d} \frac{{\rm e}^{-i\mathbf{k} \cdot \mathbf{x}}}{k^4}\nonumber\\ &=&-\frac{i (\xi-\xi_0) e^2}{16 (\pi)^{d/2}} (\mu x)^{4-d} \Gamma\left(\frac{d}{2}-2\right) \;.\label{deltad} \end{eqnarray} Here, $\mu$ is a mass scale introduced for dimensional purposes; it ensures that in every dimension $d$, the coupling $e$ is dimensionless. For the four dimensional case, we expand around $d=4- 2 \epsilon$ and use \begin{eqnarray} \Gamma \left( - {\epsilon} \right) &=& - \frac{1}{\epsilon} - \gamma + {\cal O}(\epsilon) \;, \nonumber \\ x^{\epsilon} &=& 1 + \epsilon {\rm ln} x + {\cal O}(\epsilon^2) \; . \end{eqnarray} Therefore, \begin{eqnarray} \Delta(x)=i\frac{ (\xi-\xi_0) e^2}{16\pi^{2-\epsilon}}\Big[\frac{1}{\epsilon}+\gamma+2\ln \mu x+\mathcal{O}\,(\epsilon)\Big]\,. \end{eqnarray} Note that in the term proportional to $\ln x$, one cannot simply put $x=0$. 
Therefore, we need to introduce a cutoff scale $x_{\min}$. We then arrive at \begin{eqnarray} \Delta(x_{\rm min})-\Delta(x)=-i\ln \left(\frac{x^2}{x_{\rm min}^2}\right)^\nu\,, \end{eqnarray} with $\nu={\alpha (\xi-\xi_0)}/{(4\pi)}$. If we have the knowledge of the propagator in one gauge, we can transform it to any other gauge dictated by the LKFT: \begin{eqnarray} {\cal S}_E^{\rm LKFT}(x;\xi) &=& {\cal S}_E(x;\xi_0)\, {\rm e}^{-i\big(\Delta(x_{\min})-\Delta(x)\big)} \nonumber \\ &=& {\cal S}_E(x;\xi_0)\, \Big(\frac{x^2}{x_{\rm min}^2}\Big)^{-\nu}\,. \label{propagator} \end{eqnarray} Let us start from the tree level massive scalar propagator \begin{eqnarray} S_E(k;\xi_0) = - \frac{1}{k^2 +m^2} \,. \end{eqnarray} Its Fourier transformation into the coordinate space is: \begin{eqnarray} {\cal S}_E(x;\xi_0) = -\frac{m}{4\pi^2x} K_1(mx)\;, \end{eqnarray} where $K_1(mx)$ is the modified Bessel function of the second kind. The LKFT readily yields: \begin{eqnarray} {\cal S}_E^{\rm LKFT}(x;\xi) &=& -\frac{m}{4\pi^2x} K_1(mx)\left(\frac{x^2}{x^2_{\rm min}} \right)^{-\nu}\;. \end{eqnarray} We can Fourier transform this result back to the momentum space to get \begin{eqnarray} S_E^{\rm LKFT}(k;\xi) &=& -\frac{1}{m^2}\left( \frac{m^2}{ -\Lambda^2}\right)^{\nu} \Gamma(1-\nu) \Gamma(2-\nu) \nonumber \\ && \times \; _2F_1\left(1-\nu,2-\nu;2;-\frac{k^2}{m^2}\right) \,, \end{eqnarray} where we have made the identification $4/x_{\rm min}^2 \rightarrow -\Lambda^2$. This is the non perturbative LKFT expression for the scalar propagator, starting from its knowledge at the tree level in the gauge $\xi_0$. 
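The exponentiation step in Eq.~(\ref{propagator}), whereby the LKFT phase factor ${\rm e}^{-i[\Delta(x_{\min})-\Delta(x)]}$ with $\Delta(x_{\rm min})-\Delta(x)=-i\ln(x^2/x_{\rm min}^2)^{\nu}$ collapses to the real power law $(x^2/x_{\rm min}^2)^{-\nu}$, can be checked numerically. A minimal sketch, with placeholder values for $\nu$, $x^{2}$ and $x_{\rm min}^{2}$:

```python
import cmath
import math

# Placeholder values; nu = alpha*(xi - xi_0)/(4*pi) in the text.
nu, x2, x2_min = 0.05, 4.0, 1e-4

# Delta(x_min) - Delta(x) = -i * ln[(x^2/x_min^2)^nu]
delta_diff = -1j * math.log((x2 / x2_min) ** nu)

# LKFT factor exp(-i [Delta(x_min) - Delta(x)]) should be purely real
# and equal to the power law (x^2/x_min^2)^(-nu).
factor = cmath.exp(-1j * delta_diff)
assert abs(factor.imag) < 1e-12
assert abs(factor - (x2 / x2_min) ** (-nu)) < 1e-12
```

The two factors of $-i$ combine to a real exponent, so the gauge "phase" is in fact a real rescaling, which is what turns the tree level propagator into a power-law-modified one.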
To evaluate it in the massless limit, we make use of the identity \begin{eqnarray} {}_2F_1 \left(a,b;c;z \right) = (1-z)^{-a} {}_2F_1 \left(a,c-b;c;\frac{z}{z-1}\right) \end{eqnarray} to rewrite the scalar propagator as follows: \begin{eqnarray} && \hspace{-1.2cm} S_E^{\rm LKFT}(k;\xi)= - \left( \frac{1}{-\Lambda^2} \right)^{\nu} \Gamma(1-\nu) \Gamma(2-\nu) \nonumber \\ && \hspace{-.5cm} \times \,(k^2 + m^2)^{\nu-1} \, _2F_1\left(1-\nu,\nu;2;-\frac{k^2}{k^2 + m^2}\right) \,. \end{eqnarray} The massless limit now yields \begin{eqnarray} S_E^{\rm LKFT}(k;\xi)&=& - \frac{1}{k^2} \; \frac{\Gamma(1-\nu)}{\Gamma(1+\nu)}\; \left( -\frac{k^2}{\Lambda^2} \right)^{\nu} \,. \end{eqnarray} This is a power law with exponent $\nu$. Expanding it out in the powers of coupling, retaining the leading logarithms and writing the result in the Minkowski space, we get: \begin{eqnarray} S^{\rm LKFT}(k;\xi)&=& \frac{1}{k^2} \left[ 1 + \, \frac{\alpha (\xi - \xi_0)}{4 \pi} \; {\rm ln} \left( \frac{k^2}{\Lambda^2} \right) \right] \,. \label{Prop-LKFT} \end{eqnarray} It implies \begin{eqnarray} F^{\rm LKFT}(k^{2},\Lambda^{2}) &=& 1 + \, \frac{\alpha (\xi - \xi_0)}{4 \pi} \; {\rm ln} \left( \frac{k^2}{\Lambda^2} \right) \,. \label{WFR-LKFT} \end{eqnarray} This result provides constraints on the transverse scalar-photon vertex through Eq.~(\ref{equation for F with effective vertex}). Before we set about exploiting this constraint, we would like to connect Eq.~(\ref{WFR-LKFT}) with perturbation theory and MR of the scalar propagator in the next section. \section{\label{sec:SP-MR} Scalar Propagator and MR} MR of the scalar propagator requires the renormalized $F_{R}$ to be related to the unrenormalized $F$ through a multiplicative factor ${\cal Z}_{2}$ by \begin{equation} F_{R}(k^{2},\mu^{2}) = {\cal Z}_{2}^{-1}(\mu^{2},\Lambda^{2}) F(k^{2},\Lambda^{2}) \,, \label{Scalar propagatos MR eq} \end{equation} where $\mu$ plays the role of an arbitrary renormalization scale. 
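The hypergeometric transformation invoked in the massless limit above is Pfaff's transformation, ${}_2F_1(a,b;c;z)=(1-z)^{-a}\,{}_2F_1(a,c-b;c;z/(z-1))$. A minimal numerical sanity check, using a pure-Python Gauss series with placeholder values of $\nu$ and $z$ (both arguments kept inside the unit disk so the series converges):

```python
def hyp2f1(a, b, c, z, terms=200):
    """Gauss series for 2F1(a, b; c; z); valid for |z| < 1."""
    total, coeff = 0.0, 1.0
    for n in range(terms):
        total += coeff
        coeff *= (a + n) * (b + n) / ((c + n) * (n + 1)) * z
    return total

nu = 0.25                            # placeholder LKFT exponent
a, b, c, z = 1.0 - nu, nu, 2.0, -0.6 # arguments as they appear in the text

lhs = hyp2f1(a, b, c, z)
rhs = (1.0 - z) ** (-a) * hyp2f1(a, c - b, c, z / (z - 1.0))
assert abs(lhs - rhs) < 1e-10
```

With $z=-k^2/(k^2+m^2)$ one has $z/(z-1)=k^2/(2k^2+m^2)$, so the transformed series stays convergent all the way to the massless limit.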
Within a truncation scheme which focuses only on logarithmic divergences, it is possible to write down the above functions as perturbative series involving terms of the form $\alpha^{n} \ln^{n}$ (the so-called leading log terms). We should keep in mind that sQED has features of a $\varphi^4$ scalar field theory as well as of spinor QED. It has both quadratic and logarithmic ultraviolet divergences. Our truncation scheme makes it resemble spinor QED or QCD, problems of our eventual interest. In the LLA, we then have \begin{eqnarray} F(k^{2},\Lambda^{2}) &=& 1 + \sum_{n=1}^{\infty} \alpha^{n} A_{n} \ln^{n} \left( \frac{k^{2}}{\Lambda^{2}} \right) \,, \label{F unrenormalized expansion} \\ {\cal Z}_{2}^{-1}(\mu^{2},\Lambda^{2}) &=& 1 + \sum_{n=1}^{\infty} \alpha^{n} B_{n} \ln ^{n} \left( \frac{\mu^{2}}{\Lambda^{2}} \right) \,, \label{Z function} \\ F_{R}(k^{2},\mu^{2}) &=& 1 + \sum_{n=1}^{\infty} \alpha^{n} C_{n} \ln ^{n} \left( \frac{k^{2}}{\mu^{2}} \right) \,. \label{F renormalized expansion} \end{eqnarray} (Note that the next to leading logs (NLL) are of the type $\alpha^n \ln^{n-1}$ and so on.) The MR condition, Eq.~(\ref{Scalar propagatos MR eq}), requires \begin{equation} A_{n}=C_{n}=(-1)^{n}B_{n}=\frac{A_{1}^{n}}{n!}\,, \label{coefficients} \end{equation} so that the functions $F$, $F_{R}$ and ${\cal Z}_{2}^{-1}$ obey a power law behavior. Thus the non perturbative solution of Eq.~(\ref{Scalar propagatos MR eq}) for $F$ in the LLA is \begin{equation} F(k^{2},\Lambda^{2}) = \left( \frac{k^{2}}{\Lambda^{2}} \right) ^{\beta} \,, \label{F unrenormalized lead log expansion} \end{equation} where the anomalous dimension $\beta$ is unknown at the non perturbative level. This is in contrast with perturbation theory, where $\beta=\alpha A_{1}$ is obvious from Eq.~(\ref{coefficients}).
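The resummation implicit in Eq.~(\ref{coefficients}) — that the coefficients $A_n=A_1^n/n!$ exponentiate into the power law of Eq.~(\ref{F unrenormalized lead log expansion}) — can be checked mechanically. A small sympy sketch (symbol names are ours; $L$ abbreviates $\ln(k^2/\Lambda^2)$):

```python
import sympy as sp

alpha, A1, L = sp.symbols('alpha A_1 L')   # L stands for ln(k^2/Lambda^2)

# The power law F = (k^2/Lambda^2)**(alpha*A1) is exp(alpha*A1*L)
F = sp.exp(alpha * A1 * L)

# Its expansion in the coupling has coefficients A_n = A1**n/n!, as required
series = F.series(alpha, 0, 6).removeO()
for n in range(1, 6):
    coeff = series.coeff(alpha, n)
    assert sp.simplify(coeff - A1**n * L**n / sp.factorial(n)) == 0
```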
It is straightforward to calculate $A_{1}$ in one loop perturbation theory: taking the tree level values $\Gamma^{\nu}(\omega,k)=(\omega + k)^{\nu}$ and $S(\omega)=1/\omega^{2}$ in the gap equation, i.e., Eq.~(\ref{gap equation}), we get, on Wick rotating it to the Euclidean space, \begin{eqnarray} \frac{1}{F(k^{2},\Lambda^{2})} &=& 1 - \frac{\alpha}{4\pi^{3}} \frac{1}{k^{2}} \int_{E}{ \frac{d^{4} \omega}{\omega^{2}} \frac{(\omega + k)^{2}}{q^{2} } } \nonumber \\ & & \hspace{-.7cm} - \frac{\alpha}{4\pi^{3}} \frac{(\xi - 1)}{k^{2}} \int_{E}{ \frac{d^{4} \omega}{\omega^{2}} \frac{(\omega^{2} - k^{2})^{2}}{q^{4}} } \,. \label{one-loop F} \end{eqnarray} Note that we have dropped the photon and the scalar bubble contributions as they do not contribute to the LLA. Angular integration of Eq.~(\ref{one-loop F}) yields \begin{eqnarray} \frac{1}{F(k^{2},\Lambda^{2})} &=& 1 + \frac{\alpha (\xi - 3)}{4\pi} \int_{k^{2}}^{\Lambda^{2}}{ \hspace{-.15cm} \frac{d\omega^{2}}{\omega^{2}} } \nonumber \\ & & \hspace{-1.5cm} + \frac{\alpha}{4\pi} \frac{(\xi - 3)}{k^{4}} \int_{0}^{k^{2}}{ \hspace{-.3cm} d\omega^{2} \, \omega^{2} } - \frac{\alpha}{4\pi} \frac{\xi}{k^{2}} \int_{0}^{\Lambda^{2}}{ \hspace{-.3cm} d\omega^{2}} \,. \label{one-loop F 2} \end{eqnarray} After carrying out the radial integration in the above Eq.~(\ref{one-loop F 2}), dropping the quadratic and quartic divergences ($\Lambda^{2}$ and $\Lambda^{4}$) coming from the last two terms on the right hand side of Eq.~(\ref{one-loop F 2}) and keeping only the logarithmic divergence (as we are interested in the LLA), we have \begin{eqnarray} F(k^{2},\Lambda^{2}) &=& 1+ \frac{\alpha (\xi - 3)}{4\pi} \ln \left( \frac{k^{2}}{\Lambda^{2}} \right) \,. \label{one-loop sacalar propagator} \end{eqnarray} Comparing Eqs.~(\ref{WFR-LKFT},\ref{one-loop sacalar propagator}), we deduce that $\xi_0=3$ is the correct choice for sQED up to one loop order in perturbation theory.
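The radial integrations leading from Eq.~(\ref{one-loop F 2}) to Eq.~(\ref{one-loop sacalar propagator}) can be reproduced mechanically: keeping only the logarithm and inverting $1/F$ to first order in $\alpha$ yields the quoted result. A sympy sketch (our own illustration; symbol names are ours):

```python
import sympy as sp

alpha, xi, w2, k2, Lam2 = sp.symbols('alpha xi omega2 k2 Lambda2', positive=True)
c = alpha / (4 * sp.pi)

# Radial integrals of Eq. (one-loop F 2):
I_log  = sp.integrate(1 / w2, (w2, k2, Lam2))     # logarithmic divergence, kept
I_fin  = sp.integrate(w2, (w2, 0, k2)) / k2**2    # dropped in the LLA
I_quad = sp.integrate(1, (w2, 0, Lam2)) / k2      # power divergence, dropped

# Keep only the logarithm and invert 1/F to first order in alpha:
Finv_LLA = 1 + c * (xi - 3) * I_log
F_LLA = sp.series(1 / Finv_LLA, alpha, 0, 2).removeO()

target = 1 + c * (xi - 3) * sp.log(k2 / Lam2)
assert sp.simplify(sp.expand_log(F_LLA - target)) == 0
```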
This is unlike the case of spinor QED, where Landau gauge $\xi=0$ works well for the same order of approximation. Comparing expression~(\ref{one-loop sacalar propagator}) with the perturbative expansion~(\ref{F unrenormalized expansion}) to one-loop order, we see that $A_{1}= (\xi - 3)/4\pi$. Therefore, perturbation theory suggests that the anomalous dimension in~(\ref{F unrenormalized lead log expansion}) is \begin{equation} \beta = \frac{\alpha (\xi - 3)}{4\pi} \,, \label{anomalous dimension} \end{equation} see also~\cite{Delbourgo:1977vh,Delbourgo:2003wd,Kreimer:2004xz,Bashir:2007qq}. One can readily note that the power behavior of~(\ref{F unrenormalized lead log expansion}), with $\beta$ given in~(\ref{anomalous dimension}), is the solution of the following integral equation: \begin{equation} \frac{1}{F(k^{2},\Lambda^{2})} = 1 + \frac{\alpha (\xi - 3)}{4\pi} \int_{k^{2}}^{\Lambda^{2}}{ \frac{d \omega^{2}}{\omega^{2}} \frac{F(\omega^{2},\Lambda^{2})}{F(k^{2},\Lambda^{2})} } \,. \label{integral equation for F} \end{equation} This term can be separated out in Eq.~(\ref{equation for F with effective vertex}) to impose the required condition of MR on the transverse form factor $\tau(\omega^2,k^2)$. This is what we study in the next section. \section{\label{sec:Vertex} The Transverse Vertex} Eq.~(\ref{integral equation for F}) imposes the following restriction on the transverse vertex through Eq.~(\ref{equation for F with effective vertex}): \begin{eqnarray} -2 \int_{0}^{k^{2}}{ \hspace{-.3cm} d \omega^{2} \left\{ \frac{3}{k^{2}} + \frac{(3-\xi)}{k^{2}} \frac{\omega^{2}}{k^{2}} +\frac{3}{\omega^{2} - k^{2}} \right. } & & \nonumber \\ \left. 
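In fact, the power law of Eq.~(\ref{F unrenormalized lead log expansion}) solves Eq.~(\ref{integral equation for F}) exactly, not merely order by order, as a short symbolic computation confirms (a sketch; we take $\beta>0$ only so that the integration proceeds without case distinctions):

```python
import sympy as sp

beta, w2, k2, Lam2 = sp.symbols('beta omega2 k2 Lambda2', positive=True)

F = lambda t: (t / Lam2)**beta   # the MR power-law solution

# Right hand side of the integral equation, with alpha*(xi-3)/(4*pi) = beta
rhs = 1 + beta * sp.integrate(F(w2) / (w2 * F(k2)), (w2, k2, Lam2))

# It reproduces 1/F(k^2) identically
assert sp.simplify(rhs - 1 / F(k2)) == 0
```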
+\frac{(\xi - 3)}{k^{2}} \frac{F(\omega^{2},\Lambda^{2})}{F(k^{2},\Lambda^{2})} -\frac{3}{\omega^{2} - k^{2}} \frac{F(\omega^{2},\Lambda^{2})}{F(k^{2},\Lambda^{2})} \right\} & & \nonumber \\ -2 \int_{k^{2}}^{\Lambda^{2}}{ \hspace{-.3cm} d \omega^{2} \left\{ \frac{3}{\omega^{2} - k^{2}} -\frac{3}{\omega^{2} - k^{2}} \frac{F(\omega^{2},\Lambda^{2})}{F(k^{2},\Lambda^{2})} +\frac{\xi}{k^{2}} \right\} } & & \nonumber \\ + \int_{0}^{k^{2}}{ \hspace{-.3cm} d \omega^{2} F(\omega^{2}) \tau(\omega^{2},k^{2}) \left( \frac{\omega^{4}}{k^{4}} - 3 \frac{\omega^{2}}{k^{2}} \right) } & & \nonumber \\ + \int_{k^{2}}^{\Lambda^{2}}{ \hspace{-.3cm} d \omega^{2} F(\omega^{2}) \tau(\omega^{2},k^{2}) \left( \frac{k^{2}}{\omega^{2}} - 3 \right) } & = & 0 \,. \nonumber \\ & & \label{transverse vertex RESTRICTION} \end{eqnarray} Recall that in the above equation, we have neglected the contributions of the photon and the scalar bubble diagrams since they do not contribute to the one loop LLA, Eq.~(\ref{one-loop sacalar propagator}). Introducing the variable $x$, where \begin{eqnarray} x= \frac{\omega^{2}}{k^{2}} & \forall & \omega^{2} \in [0, k^{2}] \,, \\ x= \frac{k^{2}}{\omega^{2}} & \forall & \omega^{2} \in [k^2, \Lambda^{2}] \,, \end{eqnarray} in Eq.~(\ref{transverse vertex RESTRICTION}), the resulting restriction can be rewritten as \begin{equation} \int_{0}^{1}{ dx \, W(x) } = 0 \,, \label{W restriction} \end{equation} with \begin{eqnarray} W(x) &=& -6 x \frac{\left(1-x^{\beta}\right)}{x-1}+6 x^{-1} \frac{\left(1-x^{-\beta}\right)}{x-1} +2 \xi\left( 1 - x^{\beta} \right) \nonumber \\ & & + \left( x-3 \right) \left( x^{\beta} + x^{-2} \right) h(x) \,. \label{W definition} \end{eqnarray} Note that we have again kept only those terms which contribute to the LLA. The lower limit $0$ of $x$ integration in Eq.~(\ref{W restriction}) encodes the fact that we have taken $\Lambda^2 \Rightarrow \infty$. 
This can be done with impunity as the all order logarithmic divergence has already been separated out to construct the MR solution for the wavefunction renormalization $F$. Moreover, we have introduced the definition \begin{equation} h(x) \equiv x k^{2} F(k^{2},\Lambda^{2}) \tau(xk^{2},k^{2}) \,, \label{H function definition} \end{equation} which is a dimensionless function satisfying the property \begin{equation} h(x^{-1}) = x^{\beta - 1} h(x) \,, \label{H function property} \end{equation} with $\beta = (\xi - 3) / 4\pi$, as prescribed by Eq.~(\ref{anomalous dimension}). Employing Eq.~(\ref{W definition}) and the property in Eq.~(\ref{H function property}), we can write \begin{eqnarray} W(x) - W(x^{-1}) &=& 4 \left( x - 1 \right) \left( x^{\beta} + x^{-2} \right) h(x) \nonumber \\ & & \hspace{-1cm} + 6x \left( 1 - x^{\beta} \right) - 6x^{-1} \left( 1 - x^{-\beta} \right) \nonumber \\ & & \hspace{-1cm} + 2 \xi \left[ \left( 1 - x^{\beta} \right) - \left( 1 - x^{-\beta} \right) \right] \,. \label{W an W inverse equation} \end{eqnarray} Taking $x=p^{2}/k^{2}$ in~(\ref{W an W inverse equation}), and using the symmetry $\tau(p^{2},k^{2}) = \tau(k^{2},p^{2})$, it is straightforward to derive the expression for $\tau(k^{2},p^{2})$ in terms of $W(x)$ and the wavefunction renormalization $F$. 
On Wick rotating it back to the Minkowski space, it acquires the following form: \begin{eqnarray} \tau(k^{2},p^{2}) & = & - \frac{3}{2} \frac{1}{\left(k^{2}-p^{2}\right)} \left[ \frac{1}{F(k^{2})} - \frac{1}{F(p^{2})} \right] \nonumber \\ & & \hspace{-2cm} - \frac{\xi}{2} \frac{1}{\left(k^{2}-p^{2}\right)} \frac{F(k^{2}) + F(p^{2})}{s(k^{2},p^{2})} \left[ \frac{1}{F(k^{2})} - \frac{1}{F(p^{2})} \right] \nonumber \\ & & \hspace{-2cm} + \frac{1}{4} \frac{1}{\left( k^{2}-p^{2}\right)} \frac{1}{s(k^{2},p^{2})} \left[ W \left( \frac{k^{2}}{p^{2}}\right) - W \left( \frac{p^{2}}{k^{2}} \right) \right] \,, \label{tau in terms of W} \end{eqnarray} where we have introduced $F(k^{2}) \equiv F(k^{2},\Lambda^{2})$ as a simplifying notation. We also introduce the definition \begin{equation} s(k^{2},p^{2}) = F(k^{2}) \frac{k^{2}}{p^{2}} + F(p^{2}) \frac{p^{2}}{k^{2}} \,. \label{S function definition} \end{equation} In the transverse form factor, Eq.~(\ref{tau in terms of W}), the scalar structure $[1/F(k^{2}) - 1/F(p^{2})]$ appears, as first reported in spinor QED by Curtis and Pennington in Ref.~\cite{Curtis:1991fb}. The exact form of the function $W$ remains unknown. Actually, there exists a whole family of $W$-functions satisfying the integral restriction, Eq.~(\ref{W restriction}). However, for the sake of simplicity we can choose the trivial solution $W(x)=0$ for any dimensionless ratio $x$ of momenta. When substituted in Eq.~(\ref{tau in terms of W}), it leads to \begin{eqnarray} \tau(k^{2},p^{2}) & = & - \frac{3}{2} \frac{1}{\left(k^{2}-p^{2}\right)} \left[ \frac{1}{F(k^{2})} - \frac{1}{F(p^{2})} \right] \nonumber \\ & & \hspace{-2cm} - \frac{\xi}{2} \frac{1}{\left(k^{2}-p^{2}\right)} \frac{F(k^{2}) + F(p^{2})}{s(k^{2},p^{2})} \left[ \frac{1}{F(k^{2})} - \frac{1}{F(p^{2})} \right] \,. \label{tau with W=0} \end{eqnarray} This vertex has already been calculated in one loop perturbation theory by Bashir \textit{et. 
al.}, Ref.~\cite{Bashir:2007qq}, using dimensional regularization, in arbitrary gauge $\xi$ and dimensions $d$. For the massless case, in dimension $d=4$, they report \begin{eqnarray} && \hspace{-0.6cm} \tau_{BCD}(k^{2},p^{2},q^{2}) = \nonumber \\ && \hspace{-0.6cm} \frac{\alpha }{8\pi \Delta ^2}\bigg\{(k^2+p^2-4k\cdot p)\left(k\cdot p J_0+\ln\left(\frac{q^4}{p^2k^2}\right)\right) \nonumber \\ && \hspace{-0.6cm} + \frac{(k^2+p^2)q^2-8p^2k^2}{p^2- k^2}\ln\left(\frac{k^2}{p^2}\right) \nonumber \\ \nonumber && \hspace{-0.6cm} + (\xi-1)\left[k^2p^2 J_0+\frac{2[k^2p^2+k\cdot p(k^2+p^2)]}{k^2- p^2}\right]\ln\left(\frac{p^2}{k^2}\right) \nonumber \\ && \hspace{-0.6cm} +\frac{2k\cdot p}{k^2-p^2}\left[k^2\ln\left(\frac{q^2} {p^2}\right)-p^2\ln\left(\frac{q^2}{k^2}\right)\right] \bigg\} \,, \label{tau Yajaira } \end{eqnarray} where \begin{equation} J_{0} = \frac{2}{i\pi^{2}} \int_{M}{ d^{4} \omega \frac{1}{\omega^{2} \left( p-\omega \right)^{2} \left( k - \omega \right)^{2}} } \,, \label{J0 definition} \end{equation} with $q=k-p$. We now see if our proposal, Eq.~(\ref{tau with W=0}), fares well against the constraints of this perturbative form factor, Eq.~(\ref{tau Yajaira }). \section{\label{sec:PT} Perturbation Theory Constraints} In order to compare the vertex {\em ansatz}, Eq.~(\ref{tau with W=0}), based upon multiplicative renormalizability, against its one loop perturbative form, Eq.~(\ref{tau Yajaira }), it is convenient to take the asymptotic limit $k^{2}\gg p^{2}$ of external momenta in the latter vertex. The resulting $\tau_{\rm BCD}$ in the LLA is \begin{equation} \tau^{\rm asym}_{\rm BCD}(k^{2},p^{2}) \stackrel{k^{2}\gg p^{2}}{=} -3 \frac{\alpha}{4\pi} \frac{1}{k^{2}} \ln \left( \frac{k^{2}}{p^{2}} \right) \,. \label{tau perturbative yajaira} \end{equation} Expectedly, it is independent of $q^2$ and hence we drop this dependence from its argument. Note that this expression is also independent of the covariant gauge parameter $\xi$. 
This is unlike spinor QED, where the leading asymptotic vertex is proportional to $\xi$. For a numerical check, we define \begin{eqnarray} \tilde{\tau}(x) = - \frac{k^2 \, \tau(k^{2},x k^{2})}{\alpha \ln x} \,, \label{dimensionless-tau} \end{eqnarray} where $x=p^2/k^2$ and we have suppressed the $q^2$ dependence for notational simplification. Thus: \begin{eqnarray} \tilde{\tau}^{\rm asym}_{\rm BCD}(x) = - \frac{3}{4\pi} \,. \label{tilde-tau} \end{eqnarray} In Fig.~(\ref{fig:-asymp}), we plot $\tilde{\tau}^{\rm asym}_{\rm BCD}(x)$ and $\tilde{\tau}_{\rm BCD}(x)$ as a function of $x$, the latter for different values of the gauge parameter $\xi$ and for a fixed value of $q^2$, chosen arbitrarily. In the asymptotic limit, all curves converge to a single value, as expected. \begin{figure}[ht] \includegraphics[width=0.53\textwidth]{Fig-asymp.pdf} \caption{\label{fig:-asymp} The analytical result, long dashed lines representing a constant value given in Eq.~(\ref{tilde-tau}) for the asymptotic transverse form factor $\tilde{\tau}^{\rm asym}_{\rm BCD}(x)$, agrees with the numerical plot of $\tilde{\tau}_{\rm BCD}(x)$ obtained from Eq.~(\ref{tau Yajaira }) in the limit $x \rightarrow 0$ for different gauges and an arbitrarily chosen value of $q^2=-0.7~{\rm GeV}^2$.} \end{figure} Using the perturbative expression, Eq.~(\ref{one-loop sacalar propagator}), for $F(k^{2})$ in Eq.~(\ref{tau with W=0}), and taking the asymptotic limit $k^{2}\gg p^{2}$, we have \begin{equation} \tau^{\rm asym}(k^{2},p^{2}) \stackrel{k^{2}\gg p^{2}}{=} \frac{3}{2} \frac{\alpha}{4\pi} \frac{\left( \xi - 3 \right)}{k^{2}} \ln \left( \frac{k^{2}}{p^{2}} \right) \label{tau perturbative Luis} \end{equation} in the LLA. Note that the transverse form factors, Eqs.~(\ref{tau perturbative yajaira}) and~(\ref{tau perturbative Luis}), have the functional form $({1}/{k^{2}}) \ln ({k^{2}}/{p^{2}})$. Furthermore, they are the same in the Feynman gauge ($\xi=1$).
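The agreement in the Feynman gauge noted above amounts to the elementary statement that the leading-log coefficients $-3/(4\pi)$ and $\tfrac{3}{2}(\xi-3)/(4\pi)$ coincide precisely at $\xi=1$, which the following trivial sympy check records:

```python
import sympy as sp

xi = sp.symbols('xi')
coeff_BCD = -3 / (4 * sp.pi)                            # Eq. (tau perturbative yajaira)
coeff_MR = sp.Rational(3, 2) * (xi - 3) / (4 * sp.pi)   # Eq. (tau perturbative Luis)

# The two coefficients agree in the Feynman gauge, and only there
assert sp.simplify(coeff_MR.subs(xi, 1) - coeff_BCD) == 0
assert sp.solve(coeff_MR - coeff_BCD, xi) == [1]
```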
In order for them to be the same in an arbitrary gauge $\xi$, we must seek a non-trivial $W$-function in Eq.~(\ref{tau in terms of W}), still satisfying the restriction in Eq.~(\ref{W restriction}), so that the corresponding perturbative vertex is consistent with Eq.~(\ref{tau perturbative yajaira}) in the asymptotic limit $k^{2}\gg p^{2}$. Perhaps the simplest such choice for $W$ is \begin{eqnarray} W \left( \frac{k^{2}}{p^{2}} \right) &=& \lambda \, \frac{k^{2}}{p^{2}} \ln \left( \frac{k^{2}}{p^{2}} \right) + \frac{\lambda}{2} \, \frac{k^{2}}{p^{2}} \,, \label{W perturbative} \end{eqnarray} with $\lambda = - 3\alpha (\xi - 1)/ 2\pi$. In the Feynman gauge ($\xi = 1$), $W=0$, i.e., there is no need for a non-trivial $W$-function since both perturbative vertices, Eqs.~(\ref{tau perturbative yajaira}) and~(\ref{tau perturbative Luis}), are already the same. Note that the second term on the right hand side of Eq.~(\ref{W perturbative}) is a convenient term to ensure MR of the scalar propagator. It drops out in the LLA. Using the variable $x=k^{2}/p^{2}$ in Eq.~(\ref{W perturbative}), we have \begin{equation} W(x) = \lambda \, x \ln x + \frac{\lambda}{2} \, x \,, \label{W ansatz} \end{equation} so that the restriction in Eq.~(\ref{W restriction}) is trivially satisfied.
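That the ansatz of Eq.~(\ref{W ansatz}) satisfies the integral restriction of Eq.~(\ref{W restriction}) rests on the cancellation $\int_0^1 x\ln x\,dx = -1/4$ against $\int_0^1 (x/2)\,dx = 1/4$, which sympy confirms:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
lam = sp.symbols('lambda')

# The ansatz of Eq. (W ansatz)
W = lam * x * sp.log(x) + sp.Rational(1, 2) * lam * x

# int_0^1 x*log(x) dx = -1/4 cancels against int_0^1 x/2 dx = 1/4
assert sp.simplify(sp.integrate(W, (x, 0, 1))) == 0
```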
Using the choice in Eq.~(\ref{W perturbative}) for $W$ in the vertex, Eq.~(\ref{tau in terms of W}), we can finally define the transverse form factor as: \begin{eqnarray} \tau(k^{2},p^{2}) & = & - \frac{3}{2} \frac{1}{\left(k^{2}-p^{2}\right)} \left[ \frac{1}{F(k^{2})} - \frac{1}{F(p^{2})} \right] \nonumber \\ & & \hspace{-2cm} - \frac{\xi}{2} \frac{1}{\left(k^{2}-p^{2}\right)} \frac{F(k^{2}) + F(p^{2})}{s(k^{2},p^{2})} \left[ \frac{1}{F(k^{2})} - \frac{1}{F(p^{2})} \right] \nonumber \\ & & \hspace{-2cm} - \frac{\left( \xi - 1 \right)}{ \left( k^{2} - p^{2} \right) s(k^{2},p^{2}) } \frac{3\alpha}{8\pi} \left[ \frac{k^{2}}{p^{2}} + \frac{p^{2}}{k^{2}} \right] \ln \left( \frac{k^{2}}{p^{2}} \right) \,. \label{tau Final} \end{eqnarray} Note that Eqs.~(\ref{Ball-Chiu vertex decomposition},\ref{Longitudinal vertex},\ref{Transverse vertex},\ref{tau Final}) define our full vertex {\em ansatz}. It ensures the following key features of sQED: \begin{table*} \begin{tabular}{|c|l|c|c|} \hline & Structure & MR & $\beta$\\ \hline Bare Vertex & $(k+\omega)^\mu$& No &\\ BC Vertex & $[ (S^{-1} (\omega)-S^{-1}(k)) (\omega + k)^{\mu}]/(\omega^{2} - k^{2}) $ & No & \\ This work & $\Gamma_{T}^{\mu}(\omega,k) =\Gamma_{L}^{\mu}(\omega,k)+ \tau(\omega^{2},k^{2},q^{2})[\left( \omega \cdot q \right) k^{\mu} - \left(k \cdot q \right)\omega^{\mu}]$ & Yes & $\alpha(\xi-3)/4\pi$ \\ \hline \end{tabular} \caption{We compare different vertex {\em ans\"{a}tze}: Bare, BC and our proposal, the last being the only vertex satisfying the constraints of LKFT and MR. The last column gives the value of the exponent $\beta$ of the multiplicatively renormalizable wave-function renormalization in Eq.~(\ref{F unrenormalized lead log expansion}).} \label{tab_struc} \end{table*} \begin{itemize} \item It satisfies the WFGTI by construction,~\citep{Ward:1950xp,Green:1953te,Takahashi:1957xn}.
\item It guarantees the LKFT property of the scalar propagator, as can be checked by employing it in its SDE. In other words, it ensures the multiplicative renormalizability (MR) of the two point scalar propagator. \item It reduces to its one loop perturbation theory Feynman expansion in the limit of small coupling and asymptotic values of momenta $k^2 \gg p^2$. \item It has the same symmetry properties as the bare vertex under charge conjugation, parity and time reversal, which imply symmetry under $k \leftrightarrow p$. \item It is free of any kinematic singularities when $k^2 \Rightarrow p^2$, i.e., \begin{eqnarray} \underset {k^2 \Rightarrow p^2}{\rm lim} \; (k^2 - p^2) \, \tau(k^2,p^2) = 0 \,. \end{eqnarray} \end{itemize} An important thing to note is that in the {\em ansatz} for $W$ given in Eq.~(\ref{W ansatz}), the MR condition is satisfied independently of the value of the parameter $\lambda$. Moreover, $\lambda$ is tied to the anomalous dimension $\beta$. To first order in $\alpha$, we have \begin{eqnarray} \beta = - \frac{\lambda}{6} - \frac{\alpha}{2 \pi} \,. \end{eqnarray} The NLL and subsequent logs can be obtained by writing out: \begin{eqnarray} \beta = \frac{\alpha (\xi-3)}{4 \pi} + c_2 {\cal O}(\alpha^2) + c_3 {\cal O}(\alpha^3) + \cdots \,. \end{eqnarray} Note that the scalar and tensor vertices present in the SDE of the scalar propagator, Eq.~(\ref{gap equation}), can start contributing at the NLL and hence are required to determine the values of the coefficients $c_i, i\geq 2$. However, the NLL and constraints from subsequent orders can be absorbed in our {\em ansatz} for the effective vector vertex. Practically, this is achieved by a new definition for $\lambda$ without affecting the MR condition. Therefore, the procedure outlined above can easily accommodate the NLL, NNLL and so on. We only require $c_i$ for $i=2,3, \cdots$, which are provided by increasing orders of perturbation theory, see for example~\cite{Capper:1985nk}.
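The absence of kinematic singularities listed above can be verified symbolically: inserting the one-loop $F$ of Eq.~(\ref{one-loop sacalar propagator}) into Eq.~(\ref{tau Final}) and multiplying by $(k^2-p^2)$, every surviving bracket vanishes at $k^2=p^2$. A sympy sketch (symbol names are ours):

```python
import sympy as sp

alpha, xi, k2, p2, Lam2 = sp.symbols('alpha xi k2 p2 Lambda2', positive=True)
eps = alpha * (xi - 3) / (4 * sp.pi)

F = lambda t: 1 + eps * sp.log(t / Lam2)   # one-loop wavefunction renormalization
s = F(k2) * k2 / p2 + F(p2) * p2 / k2      # Eq. (S function definition)

# (k^2 - p^2) * tau(k^2, p^2), from Eq. (tau Final):
expr = (
    -sp.Rational(3, 2) * (1 / F(k2) - 1 / F(p2))
    - xi / 2 * (F(k2) + F(p2)) / s * (1 / F(k2) - 1 / F(p2))
    - (xi - 1) / s * 3 * alpha / (8 * sp.pi)
      * (k2 / p2 + p2 / k2) * sp.log(k2 / p2)
)

# Every bracket vanishes as k^2 -> p^2, so (k^2 - p^2)*tau -> 0
assert sp.simplify(expr.subs(k2, p2)) == 0
```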
Note that the kinematic dependence of the vertex on $q^2$ plays no role asymptotically and the standard analysis proceeds without reference to it. In the infrared domain, however, the kinematic dependence on $q^2$ may be important. Our vertex shares this shortcoming, but its simplicity is reason enough for us to ignore this dependence. Finally, in Table~\ref{tab_struc}, we compare different vertex {\em ans\"{a}tze} as regards the correct behavior of the scalar propagator under LKFT and MR. Neither the bare vertex nor the BC vertex yield an MR solution. Our proposal is the only one satisfying this constraint with the exponent of the wavefunction renormalization in agreement with the all order LLA in perturbation theory. \section{\label{sec:Conc} Conclusions} In massless quenched sQED, we have derived a practical and easy-to-implement constraint of multiplicative renormalizability on the three point scalar-photon vertex. It leads to a family of these vertices in terms of a constrained dimensionless function $W(x)$. It has a remarkably simple non perturbative integral restriction: \begin{equation} \int_{0}^{1}{ dx \, W(x) } = 0 \,, \nonumber \end{equation} which guarantees the multiplicative renormalizability of the scalar propagator to all orders in perturbation theory. We further pin down $W$ through the constraints of one loop perturbation theory in the asymptotic limit, lack of kinematic singularities and the imposition of discrete symmetries. Finally, we construct a simple example ensuring all these key features of sQED. Though it is an example from one of the simplest QFTs, it provides a systematic procedure for constructing a three point function in terms of the corresponding two point function. This method is general and can be implemented in a similar manner to unquenched sQED as well as any other QFT of interest. In this connection, we would like to comment that an extension to the case of unquenched sQED is algebraically rather involved.
For example, for spinor QED, the unquenched version has been investigated in~\cite{Kizilersu:2009kg}. It involves the constraints of MR both on the fermion and photon propagators for massless fermions. However, the fact remains that in the limit of $n_f \rightarrow 0$, one recovers the quenched QED results. Another obvious and straightforward extension of this work is to apply the same formalism to QCD and constrain the quark-gluon vertex through the requirements of MR. It will supplement the earlier works to improve our understanding of this three point function on the lattice,~\cite{Skullerud:2003qu,Skullerud:2004gp,Kizilersu:2006et}, as well as through continuum methods,~\cite{Chang:2009zb,Qin:2013mta,Rojas:2013tza}. We naturally expect the quark-gluon vertex to involve more $W$-functions because the transverse part of this three point vertex is a lot richer than the one in sQED, with eight independent transverse vectors as compared to only one for the latter. This work is currently in progress. \\ \noindent {\bf Acknowledgements:} We are grateful to Sixue Qin, Robert Delbourgo and Alfredo Raya for helpful discussions. This work was partly supported by CIC, CONACyT and PRODEP grants.
\section{Introduction} Security-typed languages restrict the ways that classified information can flow from high-security to low-security clients. \citet[Abadi \etal]{abadi:1999} pioneered the use of \emph{idempotent monads} to deliver this restriction in their \DefEmph{dependency core calculus} (DCC), parameterized in a poset of security levels $\LVL$. Covariantly in security levels $l\in\LVL$, a family of type operations $\alert{\TpSeal{l}{A}}$ satisfying the rules of an idempotent monad are added to the language; the idea is then that sensitive data can be hidden underneath $\TpSeal{l}$ and unlocked only by a client with a type that can be equipped with a $\TpSeal{l}$-algebra structure, \ie a \DefEmph{$\LvlPol{l}$-sealed type} in our terminology.\footnote{We use the term ``sealing'' for what \citet[Abadi \etal]{abadi:1999} call ``protection''; to avoid confusion, we impose a uniform terminology to encompass both our work and that of \opcit. A final notational deviation on our part is that we will distinguish a security level $l\in\LVL$ from the corresponding syntactical entity $\LvlPol{l}$.} For instance, a high-security client can read a medium-security bit: \[ \begin{mathblock} \Con{f} : \TpSeal{\Con{M}}{\TpBool}\to \TpSeal{\Con{H}}{\TpBool}\\ \Con{f}\, u = x \leftarrow u; \TmSeal{\Con{H}}{\prn{\Con{not}\,x}} \end{mathblock} \] There is however no corresponding program of type $\TpSeal{\Con{H}}{\TpBool}\to \TpSeal{\Con{M}}{\TpBool}$, because the type $\TpSeal{\Con{M}}{\TpBool}$ of medium-security booleans is not $\LvlPol{\Con{H}}$-sealed, \ie it cannot be equipped with the structure of a $\TpSeal{\Con{H}}$-algebra. 
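Although the discipline above is enforced statically by DCC's type system, the one-way flow it permits can be sketched at runtime in a few lines: a sealed value records its level, and binding may re-seal the result only at a level at least as high. The following Python sketch is our own illustration, not DCC syntax; it checks levels dynamically rather than by typing, and all names in it are ours.

```python
from dataclasses import dataclass
from typing import Any, Callable

# A linear order of security levels standing in for the poset LVL
ORDER = {'L': 0, 'M': 1, 'H': 2}

@dataclass(frozen=True)
class Sealed:
    level: str
    value: Any

def seal(level: str, value: Any) -> Sealed:
    """The unit: seal a value at a given level."""
    return Sealed(level, value)

def bind(u: Sealed, k: Callable[[Any], Sealed]) -> Sealed:
    """x <- u; k(x), permitted only if the result is at least as secret."""
    result = k(u.value)
    if ORDER[result.level] < ORDER[u.level]:
        raise PermissionError('result is not sealed at a high enough level')
    return result

# A high-security client may read a medium-security bit...
f = lambda u: bind(u, lambda x: seal('H', not x))
print(f(seal('M', True)))   # Sealed(level='H', value=False)

# ...but the reverse direction is rejected.
try:
    bind(seal('H', True), lambda x: seal('M', not x))
except PermissionError as e:
    print('rejected:', e)
```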
In fact, up to observational equivalence it is possible to state a \DefEmph{noninterference result} that fully characterizes such programs: \begin{proposition*}[Noninterference] For any closed function $\cdot\vdash\Con{f}:\TpSeal{\Con{H}}{\TpBool}\to\TpSeal{\Con{M}}{\TpBool}$, there exists a closed $\cdot\vdash b:\TpSeal{\Con{M}}\TpBool$ such that $\Con{f}\simeq \lambda\_.{b}$. \end{proposition*} Intuitively, the noninterference result above follows because you cannot ``escape'' the monad, but to prove such a result rigorously a model construction is needed. Today the state of the art is to employ a \DefEmph{relational model} in the sense of Reynolds in which a type is interpreted as a binary relation on some domain, and a term is interpreted by a relation-preserving function. Our contribution is to introduce an \DefEmph{intrinsic} and \DefEmph{non-relational semantics} of noninterference presenting several advantages that we will argue for, inspired by the recent modal reconstruction of \emph{phase distinctions} by \citet[Sterling and Harper]{sterling-harper:2021}.
Under this notion of observation, noninterference up to observations takes a very strong character: \begin{quote} \emph{Termination-sensitive noninterference.} For a closed partial function $\cdot\vdash\Con{f}:\TpSeal{\Con{H}}\TpBool \to \Lift\,\prn{\TpSeal{\Con{M}}\TpBool}$, either $\Con{f} \simeq \lambda\_.\bot$ or there exists $\cdot\vdash b: \TpSeal{\Con{M}}\TpBool$ such that $\Con{f}\simeq\lambda\_.b$. \end{quote} If on the other hand we restrict observations to only terminating computations of type $\TpBool$, we evince a more relaxed \DefEmph{termination-insensitive} version of noninterference that allows leakage through the termination channel but \emph{not} through the ``return channel'': \begin{quote} \emph{Termination-insensitive noninterference.} For a closed partial function $\cdot\vdash\Con{f}:\TpSeal{\Con{H}}\TpBool\to\Lift\,\prn{\TpSeal{\Con{M}}\TpBool}$, given any closed $u,v$ on which $f$ terminates, we have $fu \simeq fv$. \end{quote} \subsection{Relational \emph{vs.}\ intrinsic semantics}\label{sec:relational-vs-intrinsic} To verify the noninterference property for the dependency core calculus, \citet[Abadi \etal]{abadi:1999} define a \DefEmph{relational semantics} that starts from an insecure model of computation (domain theory \emph{qua} dcpos) and restricts it by means of binary relations indexed in security levels that express the indistinguishability of sensitive bits to low-security clients. The indistinguishability relations are required to be preserved by all functions, ensuring the security properties of the model. The relational approach has an extrinsic flavor, being characterized by the \emph{post hoc} imposition of order (noninterference) on an inherently disordered computational model. We contrast the extrinsic relational semantics of \opcit with an \emph{intrinsic} denotational semantics in which the underlying computational model has security concerns ``built-in'' from the start. 
\subsection{Our contribution: intrinsic semantics of noninterference} The main contribution of our paper is to develop an \emph{intrinsic semantics} in the sense of \cref{sec:relational-vs-intrinsic}, in which termination-insensitive noninterference (\cref{sec:intro:tini}) is not bolted on but rather arises directly from the underlying computational model. To summarize our approach, instead of controlling the security properties of ordinary dcpos using a $\LVL$-indexed logical relation, we take semantics in a category of $\LVL$-indexed dcpos, \ie sheaves of dcpos on a space $\LvlTop$ in which each security level $l\in \LVL$ corresponds to an open/closed partition. Employing the viewpoint of \citet[Sterling and Harper]{sterling-harper:2021}, each of these partitions induces a \DefEmph{phase distinction} between data visible below security level $l$ (open) and data that is hidden (closed), leading to a novel account of the sealing monad $\TpSeal{l}$ as restriction to a closed subspace. Our intrinsic semantics has several advantages over the relational approach. Firstly, termination-insensitive noninterference arises directly from our computational model. Secondly, our model of secure information flow contributes to the consolidation and unification of ideas in programming languages by treating general recursion and security typing as instances of two orthogonal and well-established notions, namely \DefEmph{axiomatic \& synthetic domain theory} and \DefEmph{phase distinctions}/\DefEmph{Artin gluing} respectively. Termination-insensitivity then arises from the non-trivial interaction between these orthogonal layers. In particular, our computational model is an instance of axiomatic domain theory in the sense of \citet[Fiore]{fiore:1994}, and embeds into a sheaf model of {synthetic domain theory}~\citep{fiore-rosolini:1997,fiore-plotkin-power:1997, fiore-plotkin:1996,fiore-rosolini:1997:cpos,fiore:1997,fiore-rosolini:2001,matache-moss-staton:2021}. 
Hence the PCF fragment of DCC is interpreted exactly as in the standard Plotkin semantics of general recursion in categories of partial maps, in contrast to the relational model of Abadi~\etal. Lastly, the view of security levels as phase distinctions per \citet[Sterling and Harper]{sterling-harper:2021} advances a uniform perspective on noninterference scenarios that has already proved fruitful for resolving several problems in programming languages: \begin{enumerate} \item A generalized abstraction theorem for ML modules with strong sums~\citep{sterling-harper:2021}. \item Normalization and decidability of type checking for cubical type theory~\citep{sterling-angiuli:2021,sterling:2021:thesis} and multi-modal type theory~\citep{gratzer:normalization:2022}; guarded canonicity for guarded dependent type theory~\citep{gratzer-birkedal:2022}. \item The design and metatheory of the \textbf{calf} logical framework~\citep{niu-sterling-grodin-harper:2022} for simultaneously verifying the correctness and complexity of functional programs. \end{enumerate} The final benefit of the phase distinction perspective is that logical relations arguments can be re-cast as imposing an \emph{additional} orthogonal phase distinction between \emph{syntax} and \emph{logic/specification}, an insight originally due to Peter Freyd in his analysis of the existence and disjunction properties in terms of Artin gluing~\citep{freyd:1978}. We employ this insight in the present paper to develop a uniform treatment of our denotational semantics and its computational adequacy in terms of phase distinctions.
\section{Background: relational semantics of noninterference}\label{sec:relational-semantics} To establish noninterference for the dependency core calculus, \citet[Abadi \etal]{abadi:1999} define a relational model of their monadic language in which each type $A$ is interpreted as a dcpo $\vrt{A}$ equipped with a family of admissible binary relations $R^A_l$ indexed in security levels $l\in \LVL$. In the relational semantics, a term $\Gamma\vdash M : A$ is interpreted as a continuous function $\Mor[\vrt{M}]{\vrt{\Gamma}}{\vrt{A}}$ such that for all $l\in \LVL$, if $\gamma \mathrel{R\Sup{\Gamma}\Sub{l}} \gamma'$ then $\vrt{M}\gamma \mathrel{R\Sup{A}\Sub{l}} \vrt{M}\gamma'$. \begin{remark} Two elements $u,v\in A$ such that $u\mathrel{R}_l^Av$ have been called \emph{equivalent} in subsequent literature, but this terminology may lead to confusion as there is nothing forcing the relation to be transitive, nor even symmetric or reflexive. \end{remark} The essence of the relational model is to impose \emph{relations} between elements that should not be distinguishable by a certain security class; a type like $\TpBool$ or $\Con{string}$ whose relation is totally discrete, then, allows any security class to distinguish all distinct elements. Non-discrete types enter the picture through the sealing modality $\TpSeal{l}$: \[ \vrt{\TpSeal{l}{A}} = \vrt{A} \qquad u\mathrel{R\Sup{\TpSeal{l}{A}}\Sub{k}}v \Longleftrightarrow \begin{cases} u\mathrel{R\Sup{A}\Sub{k}}v & \text{if } l \sqsubseteq k\\ \top & \text{otherwise} \end{cases} \] Under this interpretation, the denotation of a function $\TpSeal{\Con{H}}\TpBool\to\TpSeal{\Con{M}}{\TpBool}$ must be a constant function, as $u\mathrel{R\Sup{\TpBool}\Sub{\Con{H}}}v$ if and only if $u=v$. By proving computational adequacy for this denotational semantics, one obtains the analogous \emph{syntactic} noninterference result up to observational equivalence.
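For booleans the constancy forced by this interpretation can be confirmed by brute force: $R\Sup{\TpSeal{\Con{H}}\TpBool}\Sub{\Con{M}}$ relates all pairs (since $\Con{H}\not\sqsubseteq\Con{M}$), while $R\Sup{\TpSeal{\Con{M}}\TpBool}\Sub{\Con{M}}$ is equality, so only constant maps preserve the relations. A small enumeration (our own illustration, not the semantics of \opcit):

```python
from itertools import product

BOOLS = [False, True]
ORDER = {'L': 0, 'M': 1, 'H': 2}   # a linear order standing in for LVL

def R_sealed(l, k, u, v):
    """Indistinguishability at observer level k for the type T_l Bool."""
    if ORDER[l] <= ORDER[k]:       # l below k: observer can look inside
        return u == v              # the discrete relation on Bool
    return True                    # otherwise everything is related

# All set-theoretic functions Bool -> Bool, as lookup tables
functions = [dict(zip(BOOLS, image)) for image in product(BOOLS, repeat=2)]

# Keep the relation-preserving ones: u R v at T_H implies f(u) R f(v) at T_M
secure = [f for f in functions
          if all(R_sealed('M', 'M', f[u], f[v])
                 for u, v in product(BOOLS, repeat=2)
                 if R_sealed('H', 'M', u, v))]

# Exactly the two constant functions survive
assert len(secure) == 2 and all(f[False] == f[True] for f in secure)
```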
\emph{Generalization and representation of relational semantics.} The relations imposed on each type give rise to a form of cohesion in the sense of \citet[Lawvere]{lawvere:2007}, where elements that are related are thought of as ``stuck together''. Then noninterference arises from the behavior of maps from a relatively codiscrete space into a relatively discrete space, as pointed out by \citet[Kavvos]{kavvos:2019} in his \emph{tour de force} generalization of the relational account of noninterference in terms of axiomatic cohesion. Another way to understand the relational account is by \emph{representation}, as attempted by \citet[Tse and Zdancewic]{tse-zdancewic:2004} and executed by \citet[Bowman and Ahmed]{bowman-ahmed:2015}: one may embed DCC into a polymorphic lambda calculus in which the security abstraction is implemented by \emph{actual} type abstraction. \paragraph*{Adapting the relational semantics for termination-insensitivity} In the relational semantics of the dependency core calculus, the termination-sensitive version of noninterference is achieved by interpreting the \emph{lift} of a type in the following way: \[ \vrt{A_\bot} = \vrt{A}_\bot \qquad u \mathrel{R\Sup{A_\bot}\Sub{l}} v \Longleftrightarrow \prn{\IsDefd{u,v} \land u \mathrel{R\Sup{A}\Sub{l}} v} \lor \prn{u=v=\bot} \] To adapt the relational semantics for termination-insensitivity, Abadi \etal change the interpretation of lifts to identify \emph{all} elements with the bottom element: \[ \vrt{A_\bot} = \vrt{A}_\bot \qquad u \mathrel{R\Sup{A_\bot}\Sub{l}} v \Longleftrightarrow \prn{\IsDefd{u,v} \land u \mathrel{R\Sup{A}\Sub{l}} v} \lor \prn{u = \bot} \lor \prn{v = \bot} \] That all data is ``indistinguishable'' from the non-terminating computation means that the indistinguishability relation cannot be both transitive and non-trivial, a somewhat surprising state of affairs that leads to our critique of relational semantics for information flow below and motivates our new perspective based on 
the analogy between \emph{phase distinctions} in programming languages and \emph{open/closed partitions} in topological spaces~\citep{sterling-harper:2021}. \paragraph*{Critique of relational semantics for information flow} From our perspective there are several problems with the relational semantics of \citet[Abadi \etal]{abadi:1999} that, while not fatal on their own, inspire us to search for an alternative perspective. \emph{Failure of monotonicity.} First of all, within the context of the relational semantics it would be appropriate to say that an object $\prn{\vrt{A},R^A\Sub{\bullet}}$ is $\LvlPol{l}$-sealed when $R^A_l$ is the total relation. But in the semantics of Abadi \etal, it is not necessarily the case that a $\LvlPol{l}$-sealed object is $\LvlPol{k}$-sealed when $k\sqsubseteq l$. It is true that objects that are \emph{definable} in the dependency core calculus are better behaved, but in proper denotational semantics one is not concerned with the image of an interpretation function but rather with the entire category. \emph{Failure of transitivity.} A more significant and harder-to-resolve problem is the fact that the indistinguishability relation $R^A_l$ assigned to each type cannot be construed as an equivalence relation --- despite the fact that in real life, indistinguishability is indeed reflexive, symmetric, and transitive. As we have pointed out, the adaptation of DCC's relational semantics for termination-insensitivity is evidently incompatible with using (total or partial) equivalence relations to model indistinguishability, as transitivity would ensure that no two elements of $A_\bot$ can be distinguished from one another. \emph{Where is the dominance?} Conventionally the denotational semantics for a language with general recursion begins by choosing a category of ``predomains'' and then identifying a notion of \emph{partial map} between them that evinces a \emph{dominance}~\citep{fiore:1994,rosolini:1986}. 
It is unclear in what sense DCC's relational semantics reflects this hard-won arrangement; as we have seen, the adaptation of the relational semantics for termination-insensitivity further increases the distance from ordinary domain-theoretic semantics. \textbf{Perspective.} Abadi \etal's relational semantics is based on imposing secure information flow properties on an existing insecure model of partial computation, but this is quite distinct from an \emph{intrinsic denotational semantics} for secure information flow --- which would necessarily entail new notions of predomain and partial map that are sensitive to security from the start. In this paper we report on such an intrinsic semantics for secure information flow in which termination-insensitive noninterference arises inexorably from the chosen dominance. % \section{Central ideas of this paper} In this section, we dive a little deeper into several of the main concepts that substantiate the contributions of this paper. We begin by fixing a poset $\LVL$ of security levels closed under finite meets, for example $\LVL = \brc{\LvlLow\sqsubset\LvlMed\sqsubset\LvlHigh\sqsubset\LvlGod}$. The purpose of including a security level even higher than $\LvlHigh$ will become apparent when we explain the meaning of the sealing monad $\TpSeal{l}$. \begin{notation} % Given a space $\XTop$ and an open set $U\in\Opns{\XTop}$, we will write $\Sl{\XTop}{U}$ for the open subspace spanned by $U$ and $\ClSubcat{\XTop}{U}$ for the corresponding complementary closed subspace. We will also write $\Sh{\XTop}$ for the category of sheaves on the space $\XTop$. % \end{notation} \subsection{A space of abstract behaviors and security policies} We begin by transforming the security poset $\LVL$ into a topological space $\LvlTop$ of ``abstract behaviors'' whose algebra of open sets $\Opns{\LvlTop}$ can be thought of as a lattice of \emph{security policies} that govern whether a given behavior is permitted. 
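Before the formal definitions, the situation can be previewed in a finite setting. The following Haskell fragment is our rendering (all names hypothetical): principal behaviors are upward-closed sets of levels, principal policies $\LvlPol{l}$ are lower sets, and the permission relation $U\Vdash x$ is inhabitation of the intersection $x\cap U$.

```haskell
-- Finite preview of behaviors and policies over Low < Med < High < God
-- (a rendering of ours; names hypothetical).
data Lvl = Low | Med | High | God deriving (Eq, Ord, Enum, Bounded, Show)

lvls :: [Lvl]
lvls = [minBound .. maxBound]

-- The principal abstract behavior visible at level l: the filter of
-- levels at which it is permitted (upward closed).
behavior :: Lvl -> [Lvl]
behavior l = [k | k <- lvls, l <= k]

-- The principal security policy: the lower set of levels below l.
policy :: Lvl -> [Lvl]
policy l = [k | k <- lvls, k <= l]

-- U permits x precisely when the intersection x ∩ U is inhabited.
permits :: [Lvl] -> [Lvl] -> Bool
permits u x = any (`elem` u) x
```

In this rendering, \texttt{policy l} permits \texttt{behavior m} exactly when $m\sqsubseteq l$: the open corresponding to a level is spanned by the behaviors visible at that level and above.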
\begin{definition} % An \DefEmph{abstract behavior} is a \emph{filter} on the poset $\LVL$, \ie a monotone subset $x\subseteq\LVL$ such that $\Conj{i<n}{l_i}\in x$ if and only if each $l_i\in x$. % \end{definition} \begin{definition} % A \DefEmph{security policy} is a lower set in $\LVL$, \ie an antitone subset $U\subseteq\LVL$. We will write $\alert{U\Vdash x}$ to mean $U$ permits the behavior $x$, \ie the subset $x\cap U$ is inhabited. % \end{definition} An abstract behavior $x$ denotes the set of security levels $l\in\LVL$ at which it is permitted; a security policy $U$ denotes the set of security levels \emph{above which} some behavior is permitted. \begin{construction} % We define $\LvlTop$ to be the topological space whose points are abstract behaviors, and whose open sets are of the form $\Compr{x}{U\Vdash x}$ for some security policy $U$.\footnote{Those familiar with the point-free topology of topoi~\citep{johnstone:2002,vickers:2007,anel-joyal:2021} will recognize that $\LvlTop$ is more simply described as the presheaf topos $\PrTop{\LVL}$: viewed as a space, it is the dcpo completion of $\OpCat{\LVL}$, and as a frame it is the free cocompletion of $\LVL$. The definition of $U\Vdash x$ then presents a computation of the \emph{stalk} $U_x$ of the subterminal sheaf $U\in\Sh{\LvlTop}$ at the behavior $x\in\LvlTop$.} % \end{construction} We have a meet-preserving embedding of posets $\EmbMor[\LvlPol{-}]{\LVL}{\Opns{\LvlTop}}$ that exhibits $\Opns{\LvlTop}$ as the free completion of $\LVL$ under joins. \begin{intuition}[Open and closed subspaces] % Each security level $l\in\LVL$ represents a security policy $\LvlPol{l}\in\Opns{\LvlTop}$ whose corresponding open subspace $\Sl{\LvlTop}{\LvlPol{l}}$ is spanned by the behaviors \emph{permitted} at security levels $l$ and above. 
Conversely the complementary closed subspace $\ClSubcat{\LvlTop}{\LvlPol{l}} = \LvlTop\setminus\Sl{\LvlTop}{\LvlPol{l}}$ is spanned by behaviors that are \emph{forbidden} at security level $l$ and below. % \end{intuition} \subsection{Sheaves on the space of abstract behaviors} Our intention is to interpret each type of a dependency core calculus as a \emph{sheaf} on the space $\LvlTop$ of abstract behaviors. To see why this interpretation is plausible as a basis for secure information flow, we note that a sheaf on $\LvlTop$ is the same thing as a presheaf on the poset $\LVL$, \ie a family of sets $\prn{A\Sub{l}}\Sub{l\in\LVL}$ indexed contravariantly in $\LVL$ in the sense that for $k\sqsubseteq l$ there is a chosen restriction function $A_l\to A_k$ satisfying two laws. Hence a sheaf on $\LvlTop$ determines (1) for each security level $l\in\LVL$ a choice of what data is visible under the security policy $\LvlPol{l}$, and (2) a way to \emph{redact} data as it passes under a more restrictive security policy $\LvlPol{k}\subseteq\LvlPol{l}$. \subsection{Transparency and sealing from open and closed subspaces}\label{sec:key-ideas:modalities} For any subspace $\TopIdent{Q}\subseteq\LvlTop$, a sheaf $A\in\Sh{\LvlTop}$ can be restricted to $\TopIdent{Q}$, and then extended again to $\LvlTop$. This composite operation gives rise to an \emph{idempotent monad} on $\Sh{\LvlTop}$ that has the effect of purging any data from $A\in\Sh{\LvlTop}$ that cannot be seen from the perspective of $\TopIdent{Q}$. The idempotent monads corresponding to the open and closed subspaces induced by a security level $l\in\LVL$ are named and notated as follows: \begin{enumerate} \item The \DefEmph{transparency monad} $A\mapsto \alert{\prn{\OpMod{\LvlPol{l}}{A}}}$ replaces $A$ with whatever part of it can be viewed under the policy $\LvlPol{l}$. The transparency monad is the function space $A\Sup{\LvlPol{l}}$, recalling that an open set of $\LvlTop$ is the same as a subterminal sheaf. 
When the unit is an isomorphism at $A$, we say that $A$ is \DefEmph{$\LvlPol{l}$-transparent}. \item The \DefEmph{sealing monad} $A\mapsto \alert{\prn{\ClMod{\LvlPol{l}}{A}}}$ removes from $A$ whatever part of it can be viewed under the policy $\LvlPol{l}$. The sealing monad can be constructed as the pushout $\LvlPol{l}\sqcup\Sub{\LvlPol{l}\times A}A$. When the unit is an isomorphism at $A$, we say that $A$ is \DefEmph{$\LvlPol{l}$-sealed}. \end{enumerate} \begin{figure} % \begingroup \tikzset{every node/.append style={opacity = 1}} \[ \begin{tikzpicture}[scale=.5,block/.append style = {fill=RegalBlue!60},baseline=(current bounding box.center)] \draw[block,fill opacity=.2] (0,0) rectangle node {$A\Sub{\LvlLow}$} (2,1); \draw[block,fill opacity=.4] (0,1) rectangle node {$A\Sub{\LvlMed}$} (2,2); \draw[block,fill opacity=.6] (0,2) rectangle node {$A\Sub{\LvlHigh}$} (2,3); \draw[block,fill opacity=.8] (0,3) rectangle node {$A\Sub{\LvlGod}$} (2,4); \draw[draw=none] (0,-1) rectangle node {$\Sh{\LvlTop}\ni A$} (2,0); \end{tikzpicture} \mapsto \begin{tikzpicture}[scale=.5,block/.append style = {fill=RegalBlue!60,fill opacity = .2},baseline=(current bounding box.center)] \draw[draw=none] (0,0) rectangle (2,4); \draw[draw=none] (0,0) rectangle (2,3); \draw[block,fill opacity = 0.2] (0,0) rectangle node {$A\Sub{\LvlLow}$} (2,1); \draw[block,fill opacity = 0.4] (0,1) rectangle node {$A\Sub{\LvlMed}$} (2,2); \draw[draw=none] (0,-1) rectangle node {$\vphantom{\Sh{\LvlTop}}\smash{\Sh{\Sl{\LvlTop}{\LvlPol{\LvlMed}}}}$} (2,0); \end{tikzpicture} \mapsto \begin{tikzpicture}[scale=.5,block/.append style = {fill=RegalBlue!60,fill opacity = .2},baseline=(current bounding box.center)] \draw[block,fill opacity = 0.2] (0,0) rectangle node {$A\Sub{\LvlLow}$} (2,1); \draw[block,fill opacity = 0.4] (0,1) rectangle node {$A\Sub{\LvlMed}$} (2,2); \draw[block,fill opacity = 0.4] (0,2) rectangle node {$A\Sub{\LvlMed}$} (2,3); \draw[block,fill opacity = 0.4] (0,3) rectangle node {$A\Sub{\LvlMed}$} 
(2,4); \draw[draw=none] (0,-1) rectangle node {$\Sh{\LvlTop}\ni \alert{\OpMod{\LvlPol{\LvlMed}}{A}}$} (2,0); \end{tikzpicture} \qquad \begin{tikzpicture}[scale=.5,block/.append style = {fill=RegalBlue!60,fill opacity = .2},baseline=(current bounding box.center)] \draw[block,fill opacity=0.2] (0,0) rectangle node {$A\Sub{\LvlLow}$} (2,1); \draw[block,fill opacity=0.4] (0,1) rectangle node {$A\Sub{\LvlMed}$} (2,2); \draw[block,fill opacity=0.6] (0,2) rectangle node {$A\Sub{\LvlHigh}$} (2,3); \draw[block,fill opacity=0.8] (0,3) rectangle node {$A\Sub{\LvlGod}$} (2,4); \draw[draw=none] (0,-1) rectangle node {$\Sh{\LvlTop}\ni A$} (2,0); \end{tikzpicture} \mapsto \begin{tikzpicture}[scale=.5,block/.append style = {fill=RegalBlue!60,fill opacity = .2},baseline=(current bounding box.center)] \draw[draw=none] (0,0) rectangle (2,4); \draw[draw=none] (0,0) rectangle (2,3); \draw[block,fill opacity=.8] (0,3) rectangle node {$A\Sub{\LvlGod}$} (2,4); \draw[block,fill opacity=.6] (0,2) rectangle node {$A\Sub{\LvlHigh}$} (2,3); \draw[draw=none] (0,-1) rectangle node {$\vphantom{\Sh{\LvlTop}}\smash{\Sh{\ClSubcat{\LvlTop}{\LvlPol{\LvlMed}}}}$} (2,0); \end{tikzpicture} \mapsto \begin{tikzpicture}[scale=.5,block/.append style = {fill=RegalBlue!60,fill opacity = .2},baseline=(current bounding box.center)] \draw[block,fill=gray,fill opacity=.05] (0,0) rectangle node {$\brc{\SlPt}$} (2,1); \draw[block,fill=gray,fill opacity=.05] (0,1) rectangle node {$\brc{\SlPt}$} (2,2); \draw[block,fill opacity=.6] (0,2) rectangle node {$A\Sub{\LvlHigh}$} (2,3); \draw[block,fill opacity=.8] (0,3) rectangle node {$A\Sub{\LvlGod}$} (2,4); \draw[draw=none] (0,-1) rectangle node {$\Sh{\LvlTop}\ni\alert{\ClMod{\LvlPol{\LvlMed}}{A}}$} (2,0); \end{tikzpicture} \] \endgroup \caption{The transparency and sealing monads for $\LvlMed\in\LVL$ on a sheaf $A\in\Sh{\LvlTop}$ visualized.} \label{fig:transp-and-sealing-viz} \end{figure} The transparency and sealing monads interact in two special ways, which can be made 
apparent by appealing to the visualization of their behavior that we present in \cref{fig:transp-and-sealing-viz}. \begin{enumerate} \item The $\LvlPol{l}$-transparent part of a $\LvlPol{l}$-sealed sheaf is trivial, \ie we have $\prn{\OpMod{\LvlPol{l}}{\prn{\ClMod{\LvlPol{l}}{A}}}}\cong \brc{\SlPt}$. \item Any sheaf $A\in\Sh{\LvlTop}$ can be reconstructed as the fiber product $\prn{\OpMod{\LvlPol{l}}{A}}\times\Sub{\ClMod{\LvlPol{l}}{\prn{\OpMod{\LvlPol{l}}{A}}}}\ClMod{\LvlPol{l}}{A}$. \end{enumerate} The first property above immediately gives rise to a form of noninterference, which justifies our intent to interpret DCC's sealing monad as $\alert{\TpSeal{l}{A} = \ClMod{\LvlPol{l}}{A}}$. \begin{observation}[Noninterference]\label{obs:naive-noninterference} Any map $\Mor{\ClMod{\LvlPol{l}}{A}}{\Con{bool}}$ is constant. \end{observation} \begin{proof} We may verify that the boolean sheaf $\TpBool$ is $\LvlPol{l}$-transparent for all $l\in\LVL$. \end{proof} Our sealing monad above is well-known to the type-and-topos--theoretic community as the \DefEmph{closed modality}~\citep{rijke-shulman-spitters:2020,schultz-spivak:2019,sga:4} corresponding to the open set $\LvlPol{l}\in\Opns{\LvlTop}$. In the context of (total) dependent type theory, our sealing monad has excellent properties not shared by those of \citet[Abadi \etal]{abadi:1999}, such as justifying \emph{dependent} elimination rules and commuting with identity types. In contrast to the \emph{classified sets} of \citet[Kavvos]{kavvos:2019} which cannot form a topos, our account of information flow is compatible with the full internal language of a topos. \subsection{Recursion and termination-insensitivity via sheaves of domains}\label{sec:key-ideas:recursion} To incorporate recursion into our sheaf semantics of information flow, in this section we consider \emph{internal dcpos} in $\Sh{\LvlTop}$, \ie sheaves of dcpos. 
Later in the technical development of our paper, we work in the axiomatic setting of synthetic domain theory, but all the necessary intuitions can also be understood concretely in terms of dcpos. Domain theory internal to $\Sh{\LvlTop}$ works very similarly to classical domain theory, but it must be developed without appealing to the law of the excluded middle or the axiom of choice as these do not hold in $\Sh{\LvlTop}$ except when the security poset is degenerate. \citet[De Jong and Escard\'o]{dejong-escardo:2021} explain how to set up the basics of domain theory in a suitably constructive manner, which we will not review. The sheaf-theoretic domain semantics sketched above leads immediately to a new and simplified account of termination-insensitivity. It is instructive to consider whether there is an analogue to \cref{obs:naive-noninterference} for partial continuous functions $\alert{\Mor{\ClMod{\LvlPol{l}}{A}}{\Lift\,{\TpBool}}}$. It is not the case that $\Lift\,\TpBool$ is $\LvlPol{l}$-transparent for all $l\in\LVL$, so it does not follow that any continuous map $\Mor{\ClMod{\LvlPol{l}}{A}}{\Lift\,{\TpBool}}$ is constant. A partial function restricts, however, to a total function on its domain of definition, so we may immediately conclude the following: \begin{observation}[Termination-insensitive noninterference] % For any continuous map $\Mor[f]{\ClMod{\LvlPol{l}}{A}}{\Lift\,{\TpBool}}$ and elements $u,v:\ClMod{\LvlPol{l}}{A}$ with $fu$ and $fv$ defined, we have $fu = fv$. % \end{observation} This is the sense in which termination-insensitive noninterference arises automatically from the combination of domain theory with sheaf semantics for information flow. \section{Refined dependency core calculus}\label{sec:calculus} We now embark on the technical development of this paper, beginning with a call-by-push-value (cbpv) style~\citep{levy:2004} refinement of the dependency core calculus over a poset $\LVL$ of security levels. 
We will work informally in the logical framework of locally Cartesian closed categories \emph{\`a la} \citet[Gratzer and Sterling]{gratzer-sterling:2020}; we will write $\TCat$ for the free locally Cartesian closed category generated by all the constants and equations specified herein.% \NewDocumentCommand\RuleBlockJudgmental{}{ \Tp\Pos,\Tp\Neg : \LfSort\qquad \Tm : \Tp\Pos\to\LfSort\qquad \TpU : \Tp\Neg\to\Tp\Pos\qquad \TpF : \Tp\Pos\to\Tp\Neg } \NewDocumentCommand\RuleBlockRetBind{}{ \TmRet : A \to \TpU\TpF{A}\qquad \TmBind : \TpU\TpF{A} \to \prn{A\to \TpU{X}} \to \TpU{X} } \NewDocumentCommand\RuleBlockBindEqns{}{ \TmBind\,\prn{\TmRet\,{u}}\,f \equiv\Sub{\TpU{X}} f\,u\qquad \TmBind\, u\, \TmRet \equiv\Sub{\TpU\TpF{A}} u\qquad \TmBind\, \prn{\TmBind\, u\, f}\, g \equiv\Sub{\TpU{X}} \TmBind\, u\, \prn{\lambda x. \TmBind\, \prn{f\,x}\, g} } \NewDocumentCommand\RuleBlockFix{}{ \TmFix : \prn{\TpU{X}\to \TpU{X}}\to \TpU{X}\qquad \TmFix\, {f} \equiv f\,\prn{\TmFix\, f} } \NewDocumentCommand\RuleBlockFn{}{ \TpFn : \Tp\Pos\to\Tp\Neg\to\Tp\Neg\qquad \TpFn.\Tm : \prn{A \to \TpU{X}}\cong \TpU\,\prn{\TpFn\,A\,X} } \NewDocumentCommand\RuleBlockProd{}{ \TpProd : \Tp\Pos\to\Tp\Pos\to\Tp\Pos\qquad \TpProd.\Tm : {A \times B}\cong {\TpProd\,A\,B} } \NewDocumentCommand\RuleBlockUnit{}{ \TpUnit : \Tp\Pos\qquad \TpUnit.\Tm : \ObjTerm\cong \TpUnit } \NewDocumentCommand\RuleBlockIsSealed{}{ \SealedBelow{l} : \Tp\Pos\to\LfProp\qquad \SealedBelow{l}\,A \coloneqq \LvlPol{l} \to \Compr{x:A}{\forall y:A. x\equiv_A y} } \NewDocumentCommand\RuleBlockSeal{}{ \TpSeal{l} : \Tp\Pos\to\SealedTp{l}\Pos\qquad \TmSeal{l} : A\to \TpSeal{l}\,A\qquad } \NewDocumentCommand\RuleBlockUnseal{}{ \TmUnseal{l} : \brc{B : \SealedTp{l}\Pos} \to \TpSeal{l}\,{A} \to \prn{A \to B} \to B\qquad \TmUnseal{l}\, \prn{\TmSeal{l}\, u}\, f \equiv\Sub{B} f\,{u}\qquad \TmUnseal{l}\,u\,\prn{\lambda x. 
f\, \prn{\TmSeal{l}\,x}} \equiv\Sub{B} f\,{u} } \NewDocumentCommand\RuleBlockSum{}{ \TpSum : \Tp\Pos\to \Tp\Pos\to\Tp\Pos\qquad \TmInl : A \to \TpSum\,A\,B\qquad \TmInr : B \to \TpSum\,A\,B } \NewDocumentCommand\RuleBlockCase{}{ \TmCase : \TpSum\,A\,B\to \prn{A\to C}\to\prn{B\to C} \to C\qquad \TmCase\,\prn{\TmInl\, u}\,f\,g \equiv\Sub{C} f\,u\qquad \TmCase\,\prn{\TmInr\, v}\,f\,g \equiv\Sub{C} g\,v\qquad \TmCase\,u\,\prn{\lambda x. f\,\prn{\TmInl\,x}}\,\prn{\lambda x.f\,\prn{\TmInr\,x}} \equiv\Sub{C} f\, u } \NewDocumentCommand\RuleBlockTdcl{}{ \TmTdcl{l} : \brc{A : \SealedTp{l}\Pos} \to \TpSeal{l}\TpU\TpF{A} \to \TpU\TpF{A}\qquad \TmTdcl{l}\, \prn{\TmSeal{l}\, \prn{\TmRet\, u}} \equiv\Sub{\TpU\TpF{A}} \TmRet\, u } \NewDocumentCommand\RuleBlockLvl{}{ \LvlPol{l} : \LfProp\qquad \LvlPol{k}\to \LvlPol{l}&\prn{k\leq l\in\LVL}\qquad \LvlPol{k}\to \LvlPol{l}\to \LvlPol{k\land l} } \subsection{The basic language} We have value types $A:\Tp\Pos$ and computation types $X:\Tp\Neg$; because our presentation of cbpv does not include stacks, we will not include a separate syntactic category for computations but instead access them through thunking. The sorts of value and computation types and their adjoint connectives are specified below: \[ \begin{mathblock} \def\qquad{\qquad} \RuleBlockJudgmental \end{mathblock} \] We let $A,B,C$ range over $\Tp\Pos$ and $X,Y,Z$ over $\Tp\Neg$. We will often write $A$ instead of $\Tm\,{A}$ when it causes no ambiguity. 
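As a loose functional-programming analogy (ours alone, not the calculus or its semantics), the composite $\TpU\TpF$ behaves like a monad, and the rules specified in the remainder of this section can be previewed by a Haskell sketch in which partiality stands in for the effect and laziness realizes the fixed-point equation.

```haskell
-- A loose analogy for the adjoint connectives above (hypothetical names):
-- UF a plays the role of U (F a), a thunked free computation of type a.
newtype UF a = UF { force :: Maybe a }  -- partiality as a stand-in effect
  deriving (Eq, Show)

-- Unit and extension of the induced monad; the three equations for
-- bind in this section are exactly the monad laws for this sketch.
tmRet :: a -> UF a
tmRet = UF . Just

tmBind :: UF a -> (a -> UF b) -> UF b
tmBind (UF m) f = maybe (UF Nothing) f m

-- General recursion via the defining equation fix f = f (fix f),
-- which Haskell's laziness realizes directly.
tmFix :: (UF a -> UF a) -> UF a
tmFix f = f (tmFix f)
```

This sketch is only an analogy: in the paper, computations live in a separate sort of computation types and are accessed through thunking.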
Free computation types are specified as follows: \[ \begin{mathblock} \RuleBlockRetBind \end{mathblock} \qquad \begin{mathblock} \RuleBlockBindEqns \end{mathblock} \] We support general recursion in computation types: \[ \begin{mathblock} \def\qquad{\qquad} \RuleBlockFix \end{mathblock} \] We close the universe $X:\Tp\Neg \vdash \Tm\,{\TpU{X}}$ of computation types and thunked computations under all function types $\Tm\,{A}\to \Tm\,{\TpU{X}}$ by adding a new computation type constant $\TpFn$ equipped with a universal property like so: \[ \begin{mathblock} \def\qquad{\qquad} \RuleBlockFn \end{mathblock} \] We will treat this isomorphism implicitly in our informal notation, writing $\lambda x.u\prn{x}$ for both meta-level and object-level function terms. Finite product types are specified likewise: \[ \begin{mathblock} \RuleBlockProd \end{mathblock} \qquad \begin{mathblock} \RuleBlockUnit \end{mathblock} \] Sum types must be treated specially because we do not intend them to be coproducts in the logical framework: they should have a universal property for types, not for sorts. \[ \begin{mathblock} \RuleBlockSum \end{mathblock} \qquad \begin{mathblock} \RuleBlockCase \end{mathblock} \] \subsection{The sealing modality and declassification}\label{sec:calculus:sealing} For each $l\in\LVL$, we add an \emph{abstract} proof irrelevant proposition $\LvlPol{l} : \LfProp$ to the language; this proposition represents the condition that the ``client'' has a lower security clearance than $l$. This ``redaction'' is implemented by isolating the types that are \emph{sealed} at $\LvlPol{l}$, \ie those that become singletons in the presence of $\LvlPol{l}$: \[ \begin{mathblock} \begin{mathblock} \RuleBlockLvl \end{mathblock} \qquad \begin{mathblock} \RuleBlockIsSealed \end{mathblock} \end{mathblock} \] We will write $\SealedTp{l}\Pos\subseteq\Tp\Pos$ for the subtype spanned by value types $A$ for which $\SealedBelow{l}\,A$ holds. 
As in \cref{sec:key-ideas:modalities}, we will write $\SlPt$ for the unique element of an $\LvlPol{l}$-sealed type in the presence of $u:\LvlPol{l}$. Next we add the sealing modality itself: \[ \begin{mathblock} \RuleBlockSeal \end{mathblock} \qquad \begin{mathblock} \RuleBlockUnseal \end{mathblock} \] Finally, we add a construct for declassifying the termination channel of a sealed computation: \[ \begin{mathblock} \def\qquad{\qquad} \RuleBlockTdcl \end{mathblock} \] \begin{remark} % The $\LvlPol{l}$ propositions play a purely book-keeping role, facilitating verification of program equivalences in the same sense as the ghost variables of \citet[Owicki and Gries]{owicki-gries:1976}. % \end{remark} % \section{Denotational semantics in synthetic domain theory}\label{sec:denotational-semantics} We will define our denotational semantics for information flow and termination-insensitive noninterference in a category of domains indexed in $\LVL$. To give a model of the theory presented in \cref{sec:calculus} means to define a locally Cartesian closed functor $\Mor{\TCat}{\ECat}$ where $\ECat$ is locally Cartesian closed. Unfortunately no category of domains can be locally Cartesian closed, but we can \emph{embed} categories of domains in a locally Cartesian closed category by following the methodology of \DefEmph{synthetic domain theory}~\citep{fiore-rosolini:1997,fiore-plotkin-power:1997, fiore-plotkin:1996,fiore-rosolini:1997:cpos,fiore:1997,fiore-rosolini:2001,matache-moss-staton:2021}.\footnote{In particular we focus on the style of synthetic domain theory based on Grothendieck topoi and well-complete objects. 
There is another very productive strain of synthetic domain theory based on realizability and replete objects that has different properties~\citep{hyland:1991,phoa:1991,taylor:1991,reus:1995,reus:1996,reus:1999,reus-streicher:1993,reus-streicher:1999}.} \subsection{A topos for information flow logic} Recall that $\LVL$ is a poset of security levels closed under finite meets. The presheaf topos $\LvlTop$ defined by the identification $\Sh{\LvlTop} = \brk{\OpCat{\LVL},\SET}$ contains propositions $\Yo[\LVL]{l}$ corresponding to every security level $l\in\LVL$, and is closed under both sealing and transparency modalities $\OpMod{\Yo[\LVL]{l}}E,\ClMod{\Yo[\LVL]{l}}{E}$ in the sense of \cref{sec:key-ideas:modalities}; in more traditional parlance, these are the \emph{open} and \emph{closed} modalities corresponding to the proposition $\Yo[\LVL]{l}$~\citep{rijke-shulman-spitters:2020}. It is possible to give a denotational semantics for a \emph{total} fragment of our language in $\Sh{\LvlTop}$, but to interpret recursion we need some kind of domain theory. We therefore define a topos model of synthetic domain theory that lies over $\LvlTop$ and hence incorporates the information flow modalities seamlessly. \subsection{Synthetic domain theory over the information flow topos} We will now work abstractly with a Grothendieck topos $\CmpTop$ equipped with a dominance $\Sigma\in\Sh{\CmpTop}$, called the \DefEmph{Sierpi\'nski space}, satisfying several axioms that give rise to a reflective subcategory of objects that behave like predomains. We leave the construction of $\CmpTop$ to our \PhraseAppendix, where it is built by adapting the recipe of \citet[Fiore and Plotkin]{fiore-plotkin:1996}. \begin{definition}[{\citet[Rosolini]{rosolini:1986}}] % A \DefEmph{dominion} on a category $\ECat$ is a stable class of monos closed under identity and composition. 
Given a dominion $\mathcal{M}$ on a category $\ECat$ with finite limits, a \DefEmph{dominance} for $\mathcal{M}$ is a classifier $\Mor|>->|[\top]{\ObjTerm{\ECat}}{\Sigma}$ for the elements of $\mathcal{M}$ in the sense that every $\Mor|>->|{U}{A}\in\mathcal{M}$ gives rise to a \emph{unique} map $\Mor[\chi_U]{A}{\Sigma}$ such that $U \cong \chi_U^*\top$. % \end{definition} \NewDocumentCommand\ShLub{}{\bar{\boldsymbol{\omega}}} \NewDocumentCommand\ShChain{}{\boldsymbol{\omega}} \NewDocumentCommand\InclChain{}{\boldsymbol{\iota}} If $\ECat$ is locally Cartesian closed, we may form the \DefEmph{partial element classifier} monad $\Mor[\Lift]{\ECat}{\ECat}$ for a dominance $\Sigma$, setting $\Lift{E} = \Sum{\phi:\Sigma} \OpMod{\phi}{E}$; given $e\in\Lift{E}$, we will write $\IsDefd{e}\in\Sigma$ for the termination support $\pi_1\,e$ of $e$. We are particularly interested in the case where $\Lift$ has a final coalgebra $\ShLub\cong\Lift{\ShLub}$ and an initial algebra $\Lift{\ShChain}\cong\ShChain$. When $\ECat$ is the category of sets, $\ShChain$ is just the natural numbers object $\mathbb{N}$ and $\ShLub$ is $\mathbb{N}_\infty$, the natural numbers with an infinite point adjoined. In general, one should think of $\ShChain$ as the ``figure shape'' of a formal $\omega$-chain $\Mor{\ShChain}{E}$ that takes into account the data of the dominance; then $\ShLub$ is the figure shape of a formal $\omega$-chain equipped with its supremum, given by evaluation at the infinite point $\infty\in\ShLub$. There is a canonical inclusion $\Mor|>->|[\InclChain]{\ShChain}{\ShLub}$ witnessing the \emph{incidence relation} between a chain equipped with its supremum and the underlying chain. \begin{restatable}{sdtaxiom}{AxSigmaFiniteJoins}\label{axiom:sigma-finite-joins} % $\Sigma$ has \DefEmph{finite joins} $\Disj{i<n}\phi_i$ that are preserved by the inclusion $\Sigma\subseteq\Omega$. We will write $\bot$ for the empty join and $\phi\lor\psi$ for binary joins. 
% \end{restatable} \begin{definition}[Complete types] % In the internal language of $\ECat$, a type $E$ is called \DefEmph{complete} when it is internally orthogonal to the comparison map $\Mor|>->|{\ShChain}{\ShLub}$. In the internal language, this says that for any formal chain $\Mor[e]{\ShChain}{E}$ there exists a \emph{unique} figure $\Mor[\hat{e}]{\ShLub}{E}$ such that $\hat{e}\circ\InclChain = e$. In this scenario, we write $\DLub{i\in\ShChain}{e\,i}$ for the evaluation $\hat{e}\,\infty$. % \end{definition} \begin{restatable}{sdtaxiom}{AxOmegaInductive}\label{axiom:omega-inductive} % The initial lift algebra $\ShChain$ is the colimit of the following $\omega$-chain of maps: \[ \begin{tikzpicture}[diagram] \node (0) {$\ObjInit$}; \node (1) [right = of 0] {$\Lift{\ObjInit}$}; \node (2) [right = of 1] {$\Lift^2{\ObjInit}$}; \node (3) [right = of 2] {$\ldots$}; \draw[->] (0) to node [above] {$!$} (1); \draw[->] (1) to node [above] {$\Lift{!}$} (2); \draw[->] (2) to node [above] {$\Lift^2{!}$} (3); \end{tikzpicture} \] % \end{restatable} \begin{definition} % A type $E$ is called a \DefEmph{predomain} when $\Lift{E}$ is complete. % \end{definition} \begin{restatable}{sdtaxiom}{AxSigmaPredomain}\label{axiom:sigma-predomain} % The dominance $\Sigma$ is a predomain. % \end{restatable} The category of predomains is complete, cocomplete, closed under lifting, exponentials, and powerdomains, and is a reflective exponential ideal in $\Sh{\CmpTop}$ --- thus better behaved than any classical category of predomains. The predomains with $\Lift$-algebra structure serve as an appropriate notion of \emph{domain} in which arbitrary fixed points can be interpreted by taking the supremum of formal $\ShChain$-chains of approximations $f^n\bot$; in addition to ``term-level'' recursion, we may also interpret recursive types. 
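The supremum-of-approximations recipe for fixed points can be imitated concretely at the level of sets and partial functions. In the following Haskell sketch (ours; \texttt{Maybe} stands in for lifting, and an ordinary natural number index stands in for the formal chains over $\ShChain$), the finite stages $f^n\bot$ of a fixed point are computed by iteration from the everywhere-undefined function.

```haskell
-- The everywhere-undefined partial function, playing the role of ⊥.
bot :: Int -> Maybe Integer
bot _ = Nothing

-- The n-th finite approximation f^n ⊥ of the fixed point of f.
approx :: ((Int -> Maybe Integer) -> Int -> Maybe Integer)
       -> Int -> Int -> Maybe Integer
approx f n = iterate f bot !! n

-- The factorial functional; its least fixed point is factorial.
facF :: (Int -> Maybe Integer) -> Int -> Maybe Integer
facF rec n = if n == 0 then Just 1 else fmap (fromIntegral n *) (rec (n - 1))
```

Here \texttt{approx facF n} converges to factorial as $n$ grows, and is undefined (\texttt{Nothing}) on arguments not yet covered by the $n$-th stage, in line with taking the fixed point as the supremum of the chain of approximations.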
We impose two additional axioms for information flow: \begin{restatable}{sdtaxiom}{AxLvls}\label{axiom:lvls} % The topos $\CmpTop$ is equipped with a geometric morphism $\Mor[\LvlMapCmp]{\CmpTop}{\LvlTop}$ such that the induced functor $\Mor[\LvlMapCmp^*\Yo[\LVL]]{\LVL}{\Opns{\CmpTop}}$ is fully faithful and is valued in $\Sigma$-propositions. We will write $\gl{l}$ for each $\LvlMapCmp^*\Yo[\LVL]{l}$. % \end{restatable} \Cref{axiom:lvls} ensures that our domain theory includes computations whose termination behavior depends on the observer's security level. The following \cref{axiom:finset-transparent-predomain} is used to establish the semantic noninterference property. \begin{restatable}{sdtaxiom}{AxFinTransparentPredomain}\label{axiom:finset-transparent-predomain} % Any constant object $\CmpTop^*\brk{n}\in\Sh{\CmpTop}$ for $\brk{n}$ a finite set is an $\LvlPol{l}$-transparent predomain for any $l\in\LVL$. % \end{restatable} The category $\Sh{\CmpTop}$ contains as many topos-theoretic universes~\citep{streicher:2005} as there are Grothendieck universes in the ambient set theory. For any such universe $\UU_i$, there is a subuniverse $\DDPos_i\subseteq\UU_i$ spanned by \emph{predomains}; we note that being a predomain is a property and not a structure. The object $\DDPos_i$ exists because being a predomain is a local property that can be expressed in the internal logic. In fact, the predomains can be seen to be not only a reflective subcategory but also a reflective \emph{subfibration} as they are obtained by the internal localization at a class of maps~\citep{shulman:blog:reflective-subfibrations}; therefore the reflection can be internalized as a connective $\Mor{\UU_i}{\DDPos_i}$ implemented as a quotient-inductive type~\citep{shulman:blog:localization-hit}. We may define the corresponding universe of domains $\DDNeg_i$ to be the collection of predomains in $\DDPos_i$ equipped with $\Lift$-algebra structures. We hereafter suppress universe levels. 
\subsection{The stabilizer of a predomain and its action}\label{sec:stabilizer} In this section, we work internally to the synthetic domain theory of $\Sh{\CmpTop}$; first we recall the definition of an \emph{action} for a commutative monoid. \begin{definition} % Let $\prn{M,0,+}$ be a monoid object in the category of predomains; an \DefEmph{$M$-action structure} on a predomain $A$ is given by a function $\Mor[\mathord{\parallel_A}]{M\times A}{A}$ satisfying the identities $0\parallel_A a = a$ and $m \parallel_A n\parallel_A a = \prn{m + n}\parallel_A a$. % \end{definition} Write $\Sigma^\lor$ for the additive monoid structure of the Sierpi\'nski domain, with addition given by $\Sigma$-join $\phi\lor\psi$ and the unit given by the non-terminating computation $\bot$. Our terminology below is inspired by stabilizer subgroups in algebra. \begin{definition}[The stabilizer of a predomain]\label{def:stabilizer} % Given a predomain $A$, we define the \DefEmph{stabilizer} of $A$ to be the submonoid $\Stab{A}\subseteq\Sigma^\lor$ spanned by $\phi : \Sigma^\lor$ such that $A$ is $\phi$-sealed, \ie the projection map $\Mor{A\times \phi}{\phi}$ is an isomorphism. % \end{definition} \begin{remark} % We can substantiate the analogy between \cref{def:stabilizer} and stabilizer subgroups in algebra. Up to coherence issues that could be solved using higher categories, any category $\CatIdent{P}$ of predomains closed under subterminals and pushouts can be structured with a monoid action over $\Sigma^\lor$; the action $\Mor[\mathord{\parallel\Sub{\CatIdent{P}}}]{\Sigma^\lor\times\CatIdent{P}}{\CatIdent{P}}$ takes $A$ to the $\phi$-sealed object $\phi\parallel\Sub{\CatIdent{P}} A \coloneqq \ClMod{\phi}{A}$. Up to isomorphism, the identities for a $\Sigma^\lor$-action can be seen to be satisfied. 
Then we say that the stabilizer of a predomain $A\in\CatIdent{P}$ is the submonoid $\Stab{A}\subseteq\Sigma^\lor$ consisting of propositions $\phi$ such that $\phi\parallel\Sub{\CatIdent{P}} {A} \cong {A}$. % \end{remark} \begin{lemma}\label{lem:stabilizer-action} For any predomain $A$, we may define a canonical $\Stab{A}$-action on $\Lift{A}$: \begin{align*} \parallel\Sub{\Lift{A}} &: \Stab{A}\times \Lift{A}\to\Lift{A}\\ \phi \parallel\Sub{\Lift{A}} a &= \prn{ \phi \lor \IsDefd{a}, \brk{ \phi\hookrightarrow \SlPt, \IsDefd{a}\hookrightarrow a } } \end{align*} % \end{lemma} The stabilizer action described in \cref{lem:stabilizer-action} will be used to implement declassification of termination channels in our denotational semantics. \begin{lemma}\label{lem:stab-action-preserves-ret} The stabilizer action preserves terminating computations in the sense that $\phi\parallel\Sub{\Lift{A}} u = u$ for $\phi:\Stab{A}$ and terminating $u:\Lift{A}$. \end{lemma} \begin{proof} % We observe that $\phi\lor \top = \top$, hence for terminating $a$ we have $\phi\parallel\Sub{\Lift{A}} a = a$. 
% \end{proof} \subsection{The denotational semantics}\label{sec:interpretation} We now define an algebra for the theory $\TCat$ in $\Sh{\CmpTop}$; the initial prefix of this algebra is standard: \begin{gather*} \begin{mathblock} \bbrk{\Tp\Pos} = \DDPos\\ \bbrk{\Tp\Neg} = \DDNeg\\ \bbrk{\TpU}\,X = X\\ \bbrk{\TpF}\,A= \Lift{A}\\ \bbrk{\TmRet}\,a = a\\ \bbrk{\TmBind}\,m\,f = f^\sharp\,m\\ \bbrk{\TmFix}\,f = \Con{fix}\,{f}\\ \bbrk{\TpFn}\,A\,X = A\Rightarrow X\\ \bbrk{\TpFn.\Tm} = \MathComment{canonical}\\ \end{mathblock} \begin{mathblock} \bbrk{\TpProd}\,A\,B = A\times B\\ \bbrk{\TpProd.\Tm} = \MathComment{canonical}\\ \bbrk{\TpUnit} = \ObjTerm{\DDPos}\\ \bbrk{\TpUnit.\Tm} = \MathComment{canonical}\\ \bbrk{\TpSum}\,A\,B = A+B\\ \bbrk{\TmInl}\, a = \TmInl\,a\\ \bbrk{\TmInr}\, a = \TmInr\,a\\ \bbrk{\TmCase}\, u\,f\,g = \begin{cases} f\prn{x} & \text{if}\ u = \TmInl\,x\\ g\prn{x} & \text{if}\ u = \TmInr\,x \end{cases} \end{mathblock} \end{gather*} Note that the coproduct $A+B$ above is computed in the category of predomains\footnote{Any reflective subcategory of a cocomplete category is cocomplete: first compute the colimit in the outer category, and then apply the reflection.} and need not be preserved by the embedding into $\Sh{\CmpTop}$. We next add the security levels and the sealing modality, interpreted as the pushout of predomains $\ClMod{\LvlPol{l}}{A}$, again computed in the category of predomains. We define the unsealing operator for $B : \bbrk{\SealedTp{l}\Pos}$ using the universal property of the pushout. 
\[ \begin{mathblock} \bbrk{\LvlPol{l}} = \LvlPol{l} = \LvlMapCmp^*\Yo[\LVL]{l}\\ \bbrk{\TpSeal{l}}\, A = \ClMod{\LvlPol{l}}{A}\\ \bbrk{\TmSeal{l}}\, a = \ClIntro{\LvlPol{l}}\,{a}\\ \end{mathblock} \qquad \begin{mathblock} \bbrk{\TmUnseal{l}}\,u\,f = \\ \quad \begin{cases} f\,x & \text{if}\ u = \ClIntro{\LvlPol{l}}\,x\\ \SlPt & \text{if}\ u = \SlPt \end{cases} \end{mathblock} \] \begin{observation}\label{obs:seal-universal-property} % Morphisms $\Mor{\ClMod{\LvlPol{l}}{A}}{B}$ are in bijective correspondence with morphisms $\Mor{A}{B}$ that restrict to a \emph{weakly constant} function under $\LvlPol{l}$. % \end{observation} We may now interpret the termination declassification operation. Fixing a sealed type $A : \bbrk{\SealedTp{l}\Pos}$, we must define the dotted lift below using the universal property of the pushout and the action of the stabilizer of $A$ on $\Lift{A}$, noting that $\LvlPol{l}\in\Stab{A}$ by assumption: \begin{gather*} \begin{tikzpicture}[diagram,baseline=(nw.base)] \node (nw) {$A$}; \node (ne) [right = 2.5cm of nw] {$\Lift{A}$}; \node (sw) [below = of nw] {$\ClMod{\LvlPol{l}}\Lift{A}$}; \draw[>->] (nw) to node [above] {$\eta_A$} (ne); \draw[->] (nw) to node [left] {$\ClIntro{\LvlPol{l}}\circ\eta_A$} (sw); \draw[->,exists] (sw) to node [sloped,below] {$\bbrk{\TmTdcl{l}}$} (ne); \end{tikzpicture} \qquad \begin{mathblock} \bbrk{\TmTdcl{l}}\, u =\\ \quad \begin{cases} \LvlPol{l}\parallel\Sub{\Lift{A}} x & \text{if } u = \ClIntro{\LvlPol{l}}\,x\\ \LvlPol{l}\parallel\Sub{\Lift{A}}\bot & \text{if } u = \SlPt \end{cases} \end{mathblock} \end{gather*} To see that the above is well-defined, we observe that under $\LvlPol{l}$ both branches return the (unique) computation whose termination support is $\LvlPol{l}$. With this definition, the required computation rule holds by virtue of \cref{lem:stab-action-preserves-ret}.
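As a sanity check (our own unfolding, not spelled out in the text), the operation $\parallel\Sub{\Lift{A}}$ of \cref{lem:stabilizer-action} can be seen to satisfy the monoid action identities: the unit $\bot$ of $\Sigma^\lor$ acts trivially, and iterated actions agree with the action of the join, since both sides have the same termination support and (where defined) the same value:

```latex
\begin{align*}
\bot \parallel\Sub{\Lift{A}} a
  &= \prn{\bot \lor \IsDefd{a},\ \brk{\bot\hookrightarrow \SlPt,\ \IsDefd{a}\hookrightarrow a}} = a\\
\phi \parallel\Sub{\Lift{A}} \prn{\psi \parallel\Sub{\Lift{A}} a}
  &= \prn{\phi\lor\psi\lor\IsDefd{a},\ \brk{\phi\lor\psi\hookrightarrow \SlPt,\ \IsDefd{a}\hookrightarrow a}}
   = \prn{\phi\lor\psi} \parallel\Sub{\Lift{A}} a
\end{align*}
```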
\subsection{Noninterference in the denotational semantics} \begin{definition} % A function $\Mor[u]{A}{B}$ is called \DefEmph{weakly constant}~\citep{kraus-escardo-coquand-altenkirch:2017} if for all $x,y:A$ we have $u\,x = u\,y$. A partial function $\Mor[u]{A}{\Lift{B}}$ is called \DefEmph{partially constant} if for all $x,y: A$ such that $\IsDefd{u\,x}\land\IsDefd{u\,y}$, we have $u\,x = u\,y$. % \end{definition} For the following, let $l\in\LVL$ be a security level. \begin{restatable}{lemma}{LemConstancy}\label{lem:constancy} % Let $A$ be a $\LvlPol{l}$-sealed predomain and let $B$ be a $\LvlPol{l}$-transparent predomain; then (1) any function $\Mor{A}{B}$ is weakly constant, and (2) any partial function $\Mor{A}{\Lift{B}}$ is partially constant. % \end{restatable} The following lemma follows from \cref{axiom:finset-transparent-predomain}. \begin{restatable}{lemma}{LemBoolModal}\label{lem:bool-modal} % The predomain $\bbrk{\TpBool}$ is $\LvlPol{l}$-transparent. % \end{restatable} In order for \cref{lem:constancy} to have any import as far as the equational theory is concerned, we must establish computational adequacy. This is the topic of \cref{sec:adequacy}. % \section{Adequacy of the denotational semantics}\label{sec:adequacy} We must argue that the denotational semantics agrees with the theory as far as convergence and return values are concerned. We do so using a Plotkin-style logical relations argument, phrased in the language of Synthetic Tait Computability~\citep{sterling:2021:thesis,sterling-harper:2021,sterling-angiuli:2021}. \subsection{Synthetic Tait computability of formal approximation}\label{sec:logical-relation} In this section we will work abstractly with a Grothendieck topos $\GlTop$ satisfying several axioms that will make it support a Kripke logical relation for adequacy. \begin{notation} % For each universe $\UU\in\Sh{\GlTop}$ there is a type $\ALG{\UU}$ of internal $\TCat$-algebras whose type components are valued in $\UU$.
$\ALG{\UU}$ is a dependent record containing a field for every constant in the signature by which we generated $\TCat$. Assuming enough universes, functors $\Mor{\TCat}{\Sl*{\Sh{\GlTop}}{E}}$ correspond up to isomorphism to morphisms $\Mor{E}{\ALG{\UU}}$. This is the relationship between the internal language and the \emph{functorial semantics} \`a la Lawvere~\citep{lawvere:thesis:reprint}. % \end{notation} \begin{restatable}{stcaxiom}{AxPhases}\label{axiom:syn-cmp-phases} % There are two disjoint propositions $\SynOpn,\CmpOpn\in\Opns{\GlTop}$ such that $\SynOpn\land\CmpOpn=\bot$. We will refer to these as the \DefEmph{syntactic} and \DefEmph{computational phases} respectively. We will write $\BaseOpn=\SynOpn\lor\CmpOpn$ for the disjoint union of the two phases. % \end{restatable} \begin{restatable}{stcaxiom}{AxGenericModel}\label{axiom:syn-alg} % Within the syntactic phase, there exists a $\TCat$-algebra $\SynAlg : \ALG{\Sl{\UU}{\SynOpn}}$ such that the corresponding functor $\Mor{\TCat}{\Sl*{\Sh{\GlTop}}{\SynOpn}}$ is fully faithful. % \end{restatable} \begin{restatable}{stcaxiom}{AxSdtInCmpPhase}\label{axiom:sdt-in-cmp-phase} % Within the computational phase, the axioms of $\LVL$-indexed synthetic domain theory (\cref{axiom:sigma-finite-joins,axiom:sigma-predomain,axiom:omega-inductive,axiom:lvls,axiom:finset-transparent-predomain}) are satisfied. % \end{restatable} As a consequence of \cref{axiom:sdt-in-cmp-phase}, we have a \emph{computational} $\TCat$-algebra $\CmpAlg:\ALG{\Sl{\UU}{\CmpOpn}}$ given by the constructions of \cref{sec:interpretation}. Gluing together the two models $\SynAlg,\CmpAlg$ we see that $\Sl{\GlTop}{\BaseOpn}$ supports a model $\BaseAlg = \brk{\SynOpn\hookrightarrow \SynAlg,\CmpOpn\hookrightarrow \CmpAlg}$ of $\TCat$. The final \cref{axiom:lvls-under-closed-immersion} below is needed to define the approximation structure of $\TmTdcl{l}$.
\begin{restatable}{stcaxiom}{AxLvlsUnderClosedImmersion}\label{axiom:lvls-under-closed-immersion} % For each $l\in\LVL$ we have ${\CmpAlg.\LvlPol{l}} \leq \ClMod{\BaseOpn}{\SynAlg.\LvlPol{l}}$. % \end{restatable} \begin{theorem} % There exists a topos $\GlTop$ satisfying \cref{axiom:syn-cmp-phases,axiom:syn-alg,axiom:sdt-in-cmp-phase,axiom:lvls-under-closed-immersion} containing open subtopoi $\Sl{\GlTop}{\SynOpn} = \PrTop{\TCat}$ and $\Sl{\GlTop}{\CmpOpn} = \CmpTop$ such that the complementary closed subtopos is $\ClSubcat{\GlTop}{\BaseOpn} = \LvlTop$. % \end{theorem} \begin{proof} % We may construct a topos using a variant of the Artin gluing construction of \citet[Sterling and Harper]{sterling-harper:2021}, which we detail in our \PhraseAppendix. % \end{proof} By \cref{axiom:syn-cmp-phases,axiom:syn-alg}, any such topos $\GlTop$ supports a model of the \DefEmph{synthetic Tait computability} of \citet[Sterling and Harper]{sterling-harper:2021,sterling:2021:thesis}. In the internal language of $\Sh{\GlTop}$, the phase $\BaseOpn$ induces a pair of complementary transparency/open and sealing/closed modalities that can be used to synthetically construct formal approximation relations in the sense of Plotkin between computational objects and syntactical objects. Viewing an object $E\in\Sh{\GlTop}$ as a family $x:\OpMod{\CmpOpn}{E},x':\OpMod{\SynOpn}{E}\vdash \Compr{E}{\CmpOpn\hookrightarrow x, \SynOpn\hookrightarrow x'}$ of $\BaseOpn$-sealed types over the $\BaseOpn$-transparent type $\prn{\OpMod{\BaseOpn}{E}} \cong \prn{\prn{\OpMod{\CmpOpn}{E}}\times \prn{\OpMod{\SynOpn}{E}}}$, we may think of $E$ as a \emph{proof-relevant} formal approximation relation between its computational and syntactic parts, which we might term a ``formal approximation structure''. \begin{notation}[Extension types] % We recall \DefEmph{extension types} from \citet[Riehl and Shulman]{riehl-shulman:2017}. 
Given a proposition $\phi:\Omega$ and a partial element $e : \OpMod{\phi}{E}$, we will write $\Ext{E}{\phi}{e}$ for the collection of elements of $E$ that restrict to $e$ under $\phi$, \ie the subobject $\Mor|>->|{\Compr{x : E}{\OpMod{\phi} \prn{x = e}}}{E}$. Note that $\Ext{E}{\phi}{e}$ is always $\phi$-sealed, since it becomes the singleton type $\brc{e}$ under $\phi$. % \end{notation} Each universe $\UU$ of $\Sh{\GlTop}$ satisfies a remarkable \emph{strictification} property with respect to any proposition $\phi:\Omega$ that allows one to construct codes for dependent sums of families of $\phi$-sealed types over a $\phi$-transparent type in such a way that they restrict \emph{exactly} to the $\phi$-transparent part under $\phi$. This refinement of dependent sums is called a \DefEmph{strict glue type}:\footnote{In presheaves, the universes of \citet[Hofmann and Streicher]{hofmann-streicher:1997,streicher:2005} satisfy this property directly; for sheaves, there is an alternative transfinite construction of universes enjoying this property~\citep{gratzer-shulman-sterling:2022:universes}. Our presentation in terms of transparency and sealing is an equivalent reformulation of the strictness property identified by several authors in the context of the semantics of homotopy type theory~\citep{kapulkin-lumsdaine:2021,streicher:2014:simplicial,shulman:2015:elegant,cchm:2017,orton-pitts:2016,bbcgsv:2016,shulman:2019,awodey:2021:qms}.} \begin{mathpar} \inferrule[strict glue types]{ A:\OpMod{\phi}{\UU}\\ B : \prn{\OpMod{\prn{z:\phi}}A\,z} \to \UU\\ \forall x.\ \Con{isSealed}\Sub{\phi}\,\prn{B,x} }{ \GlueFam{\phi}{x:A}{B\,x} : \Ext{\UU}{z:\phi}{A\,z}\\\\ \Con{glue}_\phi : \Ext{\prn{\prn{x:\OpMod{\prn{z:\phi}}{A\,z}}\times B\,x} \cong \GlueFam{\phi}{x:A}{B\,x}}{\phi}{\pi_1} } \end{mathpar} \begin{notation}[Strict glue types] We impose two notations assuming $A,B$ as above. 
Given $a:\OpMod{\prn{z:\phi}}{A\,z}$ and $b : B\,a$, we write $\GlueEl{b}{\phi}{a}$ for $\Con{glue}_\phi\prn{a,b}$. Given $g : \GlueFam{\phi}{x:A}{B\,x}$, we write $\Unglue{\phi}g : B\,g$ for the element $\pi_2\,\prn{\Inv{\Con{glue}_\phi}\,g}$. \end{notation} \ifextendedversion Using the strict glue types it is possible to define very strict universes $\Sl{\UU}{\phi},\ClSubcat{\UU}{\phi}$ of $\phi$-transparent and $\phi$-sealed types respectively which are themselves $\phi$-transparent and $\phi$-sealed in the next universe as in \citet[\S3.6 of Sterling's dissertation]{sterling:2021:thesis}: \[ \begin{mathblock} \Sl{\UU}{\phi} : \Ext{\VV}{\phi}{\UU}\\ \ClSubcat{\UU}{\phi} : \Ext{\VV}{\phi}{\ObjTerm} \end{mathblock} \quad \begin{mathblock} \prn{\OpMod{\phi}{-}} : \Ext{\UU\to\Sl{\UU}{\phi}}{\phi}{\lambda A.A}\\ \prn{\ClMod{\phi}{-}} : \UU\to\ClSubcat{\UU}{\phi} \end{mathblock} \] The transparent subuniverse $\Sl{\UU}{\phi}$ is canonically isomorphic to $\OpMod{\phi}{\UU}$; given an element $A:\Sl{\UU}{\phi}$, elements of $A$ are the same as partial elements $\prn{z:\phi}\to A\,z$ under the former identification. In our notation, we suppress these identifications \emph{as well as} the introduction and elimination form for these $\phi$-partial elements. The sealed subuniverse $\ClSubcat{\UU}{\phi}$ is canonically isomorphic to $\Ext{\UU}{\phi}{\ObjTerm}$. The modal universes $\Sl{\UU}{\phi},\ClSubcat{\UU}{\phi}$ serve as \emph{(weak) generic objects} in the sense of \citet[Jacobs]{jacobs:1999} for the full subfibrations of $\FibMor[\Cod\Sub{\Sh{\GlTop}}]{\Sh{\GlTop}^\to}{\Sh{\GlTop}}$ spanned by fiberwise $\UU$-small $\phi$-transparent and $\phi$-sealed families of types respectively. 
\fi \begin{notation} % Let $E$ be a type in $\Sh{\GlTop}$ and fix elements $e : \OpMod{\CmpOpn}{E}$ and $e' : \OpMod{\SynOpn}{E}$ of the computational and syntactical parts of $E$ respectively; we will write $e \lhd_E e'$, pronounced ``$e$ formally approximates $e'$'', for the extension type $\Compr{E}{\CmpOpn\hookrightarrow e, \SynOpn\hookrightarrow e'}$. % \end{notation} This is the connection between synthetic Tait computability and analytic logical relations; the open parts of an object correspond to the \emph{subjects} of a logical relation and the closed parts of an object correspond to the evidence of that relation. \begin{definition}[Formal approximation relations] % A type $E$ is called a \DefEmph{formal approximation relation} when for any $\BaseOpn$-point $e:\OpMod{\BaseOpn}{E}$, the extension type $\Ext{E}{\BaseOpn}{e}$ is a proposition, \ie any two elements of $e\lhd_E e$ are equal. % \end{definition} We will write $\Rel{\UU}\subseteq\UU$ for the subuniverse of formal approximation relations. \begin{definition}[Admissible formal approximation relations] % Let $E$ be a formal approximation relation such that $\OpMod{\CmpOpn}{E}$ is a predomain equipped with an $\Lift$-algebra structure. We say that $E$ is \DefEmph{admissible} at $x:\OpMod{\SynOpn}{E}$ when the subobject $\Ext{E}{\SynOpn}{x}\subseteq\OpMod{\CmpOpn}{E}$ is admissible in the sense of synthetic domain theory, \ie contains $\bot$ and is closed under formal suprema of formal $\ShChain$-chains. We say that $E$ is admissible when it is admissible at every such $x$. \end{definition} \begin{restatable}[Scott induction]{lemma}{LemScottInduction}\label{lem:scott-induction} % Let $X$ be a formal approximation relation such that $\OpMod{\CmpOpn}{X}$ is a domain. Let $f : X\to X$ be an endofunction on $X$ and let $x:\OpMod{\SynOpn}{X}$ be a syntactical fixed point of $f$ in the sense that $\OpMod{\SynOpn}{\prn{x=f\,{x}}}$; if $X$ is admissible at $x$, then we have $\Con{fix}\,f\lhd_X x$. 
% \end{restatable} Our goal can be rephrased now in the internal language; choosing a universe $\VV\supset \UU$, we wish to define a suitable $\VV$-valued algebra $\Alg\in\ALG{\VV}$ that restricts under $\BaseOpn$ to $\BaseAlg$, \ie an element $\Alg\in\Ext{\ALG{\VV}}{\BaseOpn}{\BaseAlg}$. This can be done quite elegantly in the internal language of $\Sh{\GlTop}$, \ie the \emph{synthetic Tait computability of formal approximation structures}. The high-level structure of our model construction is summarized as follows: \begin{quote} % We interpret value types as \DefEmph{formal approximation structures} over a syntactic value type and a predomain; we interpret computation types as \DefEmph{admissible formal approximation relations} between a syntactic computation type and a domain. % \end{quote} To make this precise, we will define $\Alg.\Tp\Pos \in \Ext{\VV}{\BaseOpn}{\BaseAlg.\Tp\Pos}$ as the collection of types that restrict to an element of $\SynAlg.\Tp\Pos$ in the syntactic phase and to an element of $\CmpAlg.\Tp\Pos = \DDPos$ in the computational phase. This is achieved using strict gluing: \[ \begin{mathblock} % \Alg.\Tp\Pos = \GlueFam{\BaseOpn}{A:\BaseAlg.\Tp\Pos}{\Ext{\UU}{\BaseOpn}{\BaseAlg.\Tm\,A}} \end{mathblock} \quad \begin{mathblock} % \Alg.\Tm = \Unglue{\BaseOpn} \end{mathblock} \] The above is well-defined because $\BaseAlg.\Tp\Pos$ is $\BaseOpn$-transparent and $\Ext{\UU}{\BaseOpn}{\BaseAlg.\Tm\,A}$ is $\BaseOpn$-sealed. We also have $\OpMod{\SynOpn}{\Alg.\Tp\Pos} = \SynAlg.\Tp\Pos$ and $\OpMod{\CmpOpn}{\Alg.\Tp\Pos} = \DDPos$. 
Next we define the formal approximation structure of computation types: \[ \begin{mathblock} % \Alg.\Tp\Neg = \GlueFam{\BaseOpn}{ X:\BaseAlg.\Tp\Neg }{ \Compr{X' : \Ext{\Rel{\UU}}{\BaseOpn}{\BaseAlg.\Tm\,\prn{\BaseAlg.\TpU\, X}}}{ X'\ \text{is admissible} } } \end{mathblock} \] To see that the above is well-defined, we must check that the family component of the gluing is pointwise $\BaseOpn$-sealed, which follows because the property of being admissible is $\BaseOpn$-sealed. To see that this is the case, we observe that it is obviously $\SynOpn$-sealed and also (less obviously) $\CmpOpn$-sealed: under $\CmpOpn$, $X'$ restricts to the ``total'' predicate on $X$ which is always admissible. To define the thunking connective, we simply forget that a given admissible approximation relation was admissible: $\Alg.\TpU\, X = \GlueEl{ \Unglue{\BaseOpn}{X} }{\BaseOpn}{\BaseAlg.\TpU\,X}$. To interpret free computation types, we proceed in two steps; first we define the formal approximation relation as an element of $\Rel{\UU}$ and then we glue it onto syntax and semantics. \[ \begin{mathblock} % \brk{\TpF}\,A = \GlueFam{\BaseOpn}{ u : \BaseAlg.\prn{\TpU\TpF}\,A }{ \prn{\OpMod{\CmpOpn}\IsDefd{u}} \Rightarrow \ClMod{\BaseOpn}{ \exists a:A. \OpMod{\BaseOpn}{ u = \BaseAlg.\TmRet\,a } } } \\% % \Alg.\TpF\,A = \GlueEl{ \brk{\TpF}\,A }{\BaseOpn}{\BaseAlg.\TpF} \end{mathblock} \] In simpler language, we have $u \lhd\Sub{\brk{\TpF}\,A} v$ if and only if, whenever $u$ terminates, $v$ terminates syntactically and the value of $u$ formally approximates the value of $v$. This is the standard clause for lifting in an adequacy proof, phrased in synthetic Tait computability; the use of the sealing modality is an artifact of synthetic Tait computability, ensuring that the relation is pointwise $\BaseOpn$-sealed. The $\TmRet,\TmBind$ operations are easily shown to preserve the formal approximation relations.
The construction of formal approximation structures for product and function spaces is likewise trivial. Using Scott induction (\cref{lem:scott-induction}) we can show that fixed points also lie in the formal approximation relations; we elide the details. Next we deal with the information flow constructs, starting by interpreting each security policy $\Alg.\LvlPol{l}$ as $\BaseAlg.\LvlPol{l}$. The sealing modality is interpreted below: \[ \begin{mathblock} % \brk{\TpSeal{l}}\,A = \GlueFam{\BaseOpn}{ u : \BaseAlg.\TpSeal{l}A }{ \ClMod{\BaseOpn}{ \ClMod{\Alg.\LvlPol{l}}{ \Compr{ a:A }{ \OpMod{\BaseOpn}{ u = \BaseAlg.\TmSeal{l}a } } } } } \\% % \Alg.\TpSeal{l}A = \GlueEl{ \brk{\TpSeal{l}}\,A }{\BaseOpn}{\BaseAlg.\TpSeal{l}} \end{mathblock} \] \begin{restatable}[Fundamental theorem of logical relations]{theorem}{ThmFTLR} % The preceding constructions arrange into an algebra $\Alg \in \Ext{\ALG{\VV}}{\BaseOpn}{\BaseAlg}$. % \end{restatable} \subsection{Adequacy and syntactic noninterference results} The following definitions and results in this section are global rather than internal. We may immediately read off from the logical relation of \cref{sec:logical-relation} a few important properties relating value terms and their denotations. The results of this section depend heavily on the assumption that the functor $\EmbMor{\TCat}{\Sl*{\Sh{\GlTop}}{\SynOpn}}$ is fully faithful (\cref{axiom:syn-alg}). \NewDocumentCommand\Converges{m}{{#1}{\Downarrow}} \NewDocumentCommand\Diverges{m}{{#1}{\Uparrow}} \begin{restatable}[Value adequacy]{theorem}{ThmValAdequacy}\label{thm:value-adequacy} % For any closed values $\Mor[u,v]{\ObjTerm{\TCat}}{\TpBool}$, we have $\bbrk{u} = \bbrk{v}$ if and only if $u\equiv\Sub{\TpBool}v$; moreover we have either $u\equiv\Sub{\TpBool}\TmTt$ or $u\equiv\Sub{\TpBool}\TmFf$. % \end{restatable} Let $\Mor[u]{\ObjTerm{\TCat}}{\TpU\TpF{A}}$ be a closed computation. 
\begin{definition}[Convergence and divergence] % We say that $u$ \DefEmph{converges} when there exists $\Mor[a]{\ObjTerm{\TCat}}{A}$ such that $u = \TmRet\,a$. Conversely, we say that $u$ \DefEmph{diverges} when there does not exist such an $a$. We will write $\Converges{u}$ to mean that $u$ converges, and $\Diverges{u}$ to mean that $u$ diverges. % \end{definition} \begin{restatable}[Computational adequacy]{theorem}{ThmCmpAdequacy}\label{thm:cmp-adequacy} % The computation $u$ converges iff $\IsDefd{\bbrk{u}} = \top$. % \end{restatable} \begin{restatable}[Termination-insensitive noninterference]{theorem}{ThmTini}\label{thm:tini} % Let $A$ be a syntactic type such that $\SealedBelow{l}\,A$ holds; fix a term $\Mor[c]{A}{\TpU\TpF\,\TpBool}$. Then for all $\Mor[x,y]{\ObjTerm{\TCat}}{A}$ such that $\Converges{c\,x}$ and $\Converges{c\,y}$, we have $c\,x \equiv\Sub{\TpU\TpF\,\TpBool} c\,y$. % \end{restatable} We give an example of a program whose termination behavior hinges on a classified bit to demonstrate that our noninterference result is non-trivial. \begin{example} % There exists a $\LvlPol{l}$-sealed type $A$ and a term $\Mor[c]{A}{\TpU\TpF\,\TpUnit}$ such that for some $\Mor[x,y]{\ObjTerm{\TCat}}{A}$ we have $\Converges{c\,x}$ and yet $\Diverges{c\,y}$.% % \end{example} \begin{proof} % Choose $A\coloneqq\TpSeal{l}\,\TpBool$ and consider the following terms: \[ \begin{mathblock} \top \coloneqq \TmRet\,\prn{}\quad \bot \coloneqq \TmFix\,\prn{\lambda z.z}\quad x \coloneqq \TmSeal{l}\,\TmTt\quad y \coloneqq \TmSeal{l}\,\TmFf\\ c \coloneqq \lambda u.\, \TmTdcl{l}\, \prn{ \TmUnseal{l}\, u\, \prn{ \lambda b.\, \TmSeal{l}\,\prn{ \TmIf\, b\, \top\, \bot } } } \end{mathblock} \] We then have $c\,x \equiv\Sub{\TpU\TpF\,\TpUnit} \top$ and therefore $\Converges{c\,x}$. On the other hand, we have $c\,y \equiv\Sub{\TpU\TpF\,\TpUnit} \TmTdcl{l}\,\prn{\TmSeal{l}\, \bot}$; executing the denotational semantics, we have $\IsDefd{\bbrk{c\,y}} = \LvlPol{l}$. 
From the full faithfulness assumption of \cref{axiom:lvls}, we know that $\LvlPol{l}$ is not globally equal to $\top$; hence we conclude from \cref{thm:cmp-adequacy} that $\Diverges{c\,y}$. % \end{proof}
\section{Introduction} The probes of the epoch of reionization (EoR) and cosmic dawn remain outstanding aims of modern cosmology. While relevant information about the era of cosmic dawn remains elusive, important strides have been made in understanding the EoR since 2000, mainly owing to the detection of the Gunn-Peterson effect at $z \simeq 6$ and of the CMB temperature and polarization anisotropies by WMAP and Planck \citep{Planck2015,Fanetal}. The discovery of the Gunn-Peterson trough indicates that the universe could be making a transition from fully ionized to neutral at $z\simeq 6$. The CMB anisotropy measurements are consistent with the universe being fully ionized at $z \simeq 8.5$. The current best bounds from Planck put strong constraints on the redshift of reionization, $z_{\rm reion} = 8.5 \pm 1$ \citep{Planck2015}. Theoretical estimates show that the first stars in the universe might have formed at $z\simeq 65$ \citep{naoz06}, thereby ending the dark age of the universe. The emission of UV light from these structures carves out ionized regions, which might have percolated at $z \simeq 9$ \citep[see e.g. ][and references therein]{Barkana2001}. However, the nature of the first sources that ionize and heat the intergalactic medium is difficult to establish within the framework of current theoretical models. The two most likely candidates are star-forming haloes and the precursors of quasars. In the latter case, the emission could be dominated by accretion onto a seed stellar-mass black hole, the case we consider in this paper. One way to probe this phase is through the detection of the redshifted hyperfine transition of neutral hydrogen (\ion{H}{1}) from this era. The past decade has seen major progress in both theoretical and experimental efforts in this direction.
Theoretical estimates show that the global \ion{H}{1} signal is observable in both absorption and emission with its strength in the range $-200\hbox{--}20 \, \rm mK$ in a frequency range of $50\hbox{--}150 \, \rm MHz$, which corresponds roughly to a redshift range $25 > z > 8$ \citep[e.g. ][]{1997ApJ...475..429M,2000ApJ...528..597T,2004ApJ...608..611G,Sethi05,pritchard08,cohenfi}. The fluctuating component of the signal is likely to be an order of magnitude smaller on scales in the range $3\hbox{--}100$~Mpc { (comoving)}, which implies angular scales in the range $\simeq 1\hbox{--}30$~arc-minutes \citep[e.g. ][]{Zaldarriaga2004} \citep[for reviews see e.g. ][]{Zaroubi2013,Furlanettoetal,MoralesWyithe}. Many of the ongoing and upcoming experiments have the capability to detect this signal in hundreds of hours of integration \citep[e.g. ][]{2015aska.confE...3A,2014MNRAS.439.3262M,parsons12,mcquinn06,morales05,kulkarni16,pen09}. Upper limits on the fluctuating component of the \ion{H}{1} signal have been obtained by many ongoing experiments --- GMRT, MWA, PAPER, and LOFAR \citep{2017ApJ...838...65P,2016ApJ...833..102B,PAPER,GMRT}. In addition to the redshifted hyperfine line of HI, it might be possible to probe cosmic dawn and the EoR using other spectral lines of the primordial gas. Therefore, we also consider HI recombination lines and the hyperfine line of $^3$HeII. In this paper, we consider the impact of a {growing} black hole (BH) on the thermal and ionization state of the IGM in the redshift range {$8 < z <25$}. {There is copious observational evidence of the existence of supermassive black holes with masses up to $M\sim 10^9~M_\odot$ at $z\simeq 7$ \citep[see e.g.,][]{mortlock11,banados,shellqs-survey,viking-survey,panstarrs-survey,wu-bhs15}\footnote{http://www.homepages.ucl.ac.uk/~ucapeib/list\_of\_all\_quasars.htm}.
The presence of such ``monstrous'' black holes in the young Universe with ages less than 500 Myr seems challenging because of strong radiative and wind feedback \citep[see][]{bhgrowth-illustris,massiveblack-sim,bhgrowth-eagle,bhgrowth-horizonAGN,negri17,gaspari17,bhgrowth-illustristng,latif16,latif18}. In this paper we address the question of whether the regions around these growing BHs can be observed in 21~cm emission, the helium hyperfine line, and hydrogen recombination lines. } In the next section, we describe our model of photon emission from a BH that forms in the redshift range $20\hbox{--}25$ and subsequently grows owing to accretion. In section~\ref{sec:obser} we discuss possible observables that can probe the thermal and ionization evolution of the gas influenced by emission from the BH. In section~\ref{sec:resu} we present our main results. In section~\ref{sec:sumcon} we summarize our findings and make concluding remarks. Throughout this paper, we assume the spatially-flat $\Lambda$CDM model with the following parameters: $\Omega_m = 0.254$, $\Omega_B = 0.049$, $h = 0.67$ and $n_s = 0.96$, with the overall normalization corresponding to $\sigma_8 = 0.83$ \citep{Planck2015}. \section{Description of the model} Accretion onto a black hole (BH) is expected to be a source of UV/X-ray photons. {Supermassive black holes (SMBH) with masses $\lower.5ex\hbox{\gtsima} 10^9M_\odot$ are known to exist at redshifts as high as $z>7$ \citep{mortlock11,banados}. One can expect that during their growth phase their predecessors would contribute to heating and ionization of the Universe.
For a stellar-mass BH seed to grow into a $10^9M_\odot$ SMBH, nearly continuous accretion at the Eddington rate is the most efficient regime, under which the BH mass $M_{BH}$ grows as} \citep{shapiro05,volonteri05,volonteri06}: \begin{equation} \label{epsi} M_{BH}(t) = M_{BH,t=0} {\rm exp}\left({1-\epsilon \over \epsilon} {t \over t_{E}}\right) \end{equation} where $M_{BH,t=0}$ is the initial BH mass, $t_{E}=0.45$~Gyr, and $\epsilon$ is the radiative efficiency{ ---the efficiency of conversion of rest-mass energy to luminous energy by accretion onto a black hole of mass $M$ \citep{shapiro05}}; $\epsilon \simeq 0.1$ is taken as the fiducial value; we discuss the impact of varying $\epsilon$ in later sections. { Following \citet{shapiro05} we assume that the efficiency of accretion luminosity $\epsilon_L \equiv L/L_E$, where $L_E$ is the Eddington luminosity, is equal to unity in our calculations;} { $\epsilon_L=1$ is thought to represent the upper observed limit of quasar luminosity \citep{mclure04}. } The spectrum of the ionizing radiation emitted during accretion is assumed to be a power-law: \begin{equation} L_\nu = L_0 \left({\nu \over \nu_{\rm H}}\right)^{\alpha} \label{lum} \end{equation} where $\alpha = -1.5$,{ which is assumed as the fiducial value in our calculations}, and $L_0$ is a normalization coefficient, fixed so that the bolometric luminosity of the BH, $L_{BH} = 1.25\times 10^{38} M_{BH}$~erg/s, is emitted in the energy range from 13.6 to $10^4$~eV. The bolometric luminosity is assumed to be equal to the Eddington limit. The spectral energy distribution slope of active galactic nuclei is measured to be from $-1.7$ to $-1.4$ \citep{telfer02,scott04,shull12,stevans14,lusso15}, and we consider how this affects our results below. Current theoretical models of the first (Population III) stars in the Universe favor an IMF dominated by massive objects in the range from tens to hundreds of solar masses \citep[e.g.,][]{abel02,bromm02,yoshida08}.
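As an illustrative sketch (not part of the original text; the function and variable names are ours), the normalization $L_0$ of Eq.~(\ref{lum}) can be computed by requiring that the power law, integrated over the band $13.6\hbox{--}10^4$~eV, equal the Eddington bolometric luminosity $L_{BH} = 1.25\times 10^{38} M_{BH}$~erg/s:

```python
import math

NU_H = 3.288e15       # Hz, frequency of the hydrogen ionization edge (13.6 eV)
R = 1.0e4 / 13.6      # ratio of the upper to lower energy cutoff of the band


def l0_normalization(m_bh, alpha=-1.5):
    """L0 (erg/s/Hz) such that int_{nu_H}^{R nu_H} L0 (nu/nu_H)^alpha dnu
    equals the Eddington bolometric luminosity 1.25e38 * M_BH erg/s."""
    l_bol = 1.25e38 * m_bh
    if abs(alpha + 1.0) < 1e-12:                 # alpha = -1: logarithmic case
        band = NU_H * math.log(R)
    else:
        band = NU_H * (R ** (alpha + 1.0) - 1.0) / (alpha + 1.0)
    return l_bol / band


def l_nu(nu, m_bh, alpha=-1.5):
    """Specific luminosity of Eq. (2) at frequency nu (Hz)."""
    return l0_normalization(m_bh, alpha) * (nu / NU_H) ** alpha
```

For the fiducial seed $M_{BH}=300~M_\odot$ and $\alpha=-1.5$ this gives $L_0$ of roughly $6\times10^{24}$~erg~s$^{-1}$~Hz$^{-1}$; the branch for $\alpha=-1$ handles the logarithmically divergent case separately.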
At the lower-mass end (tens of solar masses) stars can form either due to various feedbacks \citep{tan04,hosokawa11}, or due to atomic cooling in metal-free gas with the virial temperature $T>10^4$~K \citep[e.g.,][]{becerra15}, or owing to cooling by metals/dust in a weakly enriched gas \citep[e.g.,][]{bromm01,dopcke13}. The seeds for BHs are the final product of the evolution of Population III stars with $M \sim 30\hbox{--}260M_\odot$ \citep[see e.g.,][]{woosley02}. Low-mass stars, $M \sim 30M_\odot$, are likely more numerous and might be more common seeds for BHs. However, only a small mass fraction, $\sim 10$\%, of their progenitors collapses to a BH, and therefore these stars do not contribute significantly to the growth of supermassive black holes. Even though this fraction increases for higher mass stars, it still remains less than 50\% for $M \lower.5ex\hbox{\ltsima} 100M_\odot$. However, stars with $M\lower.5ex\hbox{\gtsima} 260$~$M_\odot$ leave remnant black holes only slightly less massive than the progenitor. Growth at the Eddington rate is relatively slow: as seen from Eq. (\ref{epsi}), accretion with $\epsilon=0.1$ increases the BH mass by a factor of 10 from $z=50$ to $z\simeq 20$ and by a factor of 30 by $z\simeq 17$. Therefore, only stellar progenitors of $M\lower.5ex\hbox{\gtsima} 260$~$M_\odot$ are capable of giving rise to SMBHs. Based on these considerations, in our calculations we adopt $M_{BH}=300~M_\odot$ as the fiducial value for BH seeds, though deviations from this value are also discussed. It is worth noting that in low-mass halos \citep[apparently $<10^6M_\odot$,][]{wfryer12,jeon12,ricot11}, radiative and mechanical feedback can inhibit the growth of a supermassive BH from a stellar-mass BH seed. However, currently there are too few numerical simulations for firm conclusions about inhibitive feedback on the growth of SMBHs in more massive ($\geq 10^8M_\odot$) minihalos \citep{wise18}.
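The exponential growth law of Eq.~(\ref{epsi}) is easy to evaluate directly. The following Python sketch (function and constant names are ours) computes the mass amplification over a given interval; small differences from the growth factors quoted in this section reflect rounding of the elapsed time between the quoted redshifts.

```python
import math

T_EDD_GYR = 0.45  # Eddington timescale t_E from Eq. (1), in Gyr

def growth_factor(epsilon, t_gyr):
    """Mass amplification M(t)/M(0) for continuous Eddington accretion, Eq. (1)."""
    return math.exp((1.0 - epsilon) / epsilon * t_gyr / T_EDD_GYR)

# Growth over a 400 Myr interval for the radiative efficiencies discussed here:
for eps in (0.2, 0.1, 0.05):
    print(eps, growth_factor(eps, 0.4))
```

For $\epsilon=0.2$, $0.1$ and $0.05$ this gives amplification factors of order $3\times 10^1$, $3\times 10^3$ and $2\times 10^7$, respectively, illustrating how strongly the required gas reservoir depends on $\epsilon$.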
Stellar progenitors of BHs are formed in minihalos with total masses $M\sim 10^5-10^7~M_\odot$ \citep{haiman96,tegmark97}, \citep[see also the review by][]{Barkana2001}. {Eventually, depending on specific conditions in a minihalo, a single very massive star and/or several less massive stars form and produce a copious amount of ionizing photons \citep{tum00,brom04}. As a result, a significant fraction of the gas in the host minihalo becomes ionized, and the escape fraction of Lyman continuum photons into the IGM can grow substantially \citep[see the reviews by][]{ciardi05rev,ferrara13}}. To model absorption of ionizing photons inside the halo we assume that the average total column density of HI inherent to the host galaxy is $N_{\rm HI}^{h}$, with the primordial abundance of elements: $X=0.76$, $Y_{\rm He}=0.24$. { In the calculations we include not only absorption in the host galaxy, but in the circumgalactic gas within several virial radii ($\simeq 1\hbox{--}3$) as well. The neutral hydrogen fraction in the interstellar medium (ISM) of the host halo is determined by the detailed evolution of the halo, e.g. gas cooling/heating, possible star formation and BH feedback, and so on. On the other hand, density and velocity profiles outside the halo might be altered by tidal interactions and merging with other halos. Therefore, finding a closer connection between $N_{\rm HI}^h$ and the underlying galactic ISM is very challenging \citep[see e.g.,][]{bromm03,whalen04,greif07,whalen08,v08,v12}, and lies outside the scope of this paper. However, it is obvious that on the much larger scales where the diagnostics discussed in this paper arise, e.g. the 21~cm signal, the details of the gas distribution around the host halo play a minor role, and only the average value of $N_{\rm HI}^h$ might suffice to model the absorption inside the halo.} Therefore, we consider several values of $N_{\rm HI}^h$ to model the host halo, with $N_{\rm HI}^h = 10^{20}$~cm$^{-2}$ as the fiducial value.
This choice is consistent with the fact that the total column density of minihalos with $M\lower.5ex\hbox{\ltsima} 10^9M_\odot$ formed at $z\simeq 10\hbox{--}20$ is less than $10^{21}$~cm$^{-2}$ (assuming a top-hat density profile, for simplicity). As we expect the gas inside halos to be partially ionized by both the stellar progenitors and the BH itself, the adopted value of the HI column density seems a reasonable conservative estimate. In Sec. \ref{ipar} we discuss the dependence of our results on its variation. The radiation from the growing BH can also be attenuated by the neutral IGM gas. { The optical depth at distance $r$ from the BH is $\tau_\nu(r) = \int {\sigma_\nu^k(r) n^k(r) dr }$, where $k = {\rm HI, \ HeI, \ HeII}$ and $\sigma_\nu^k$ are the cross-sections at frequency $\nu$ \citep{cen92,glover07}; the values $\sigma_\nu^k(r)$ and $n^k(r)$} depend on the ionization and thermal history of the IGM, whose evolution is described below. { Then, t}he flux { of ionizing radiation} at a distance $r$ from the BH is \begin{equation} F_\nu = {L_\nu \over 4 \pi r^2} {\rm exp}(-\tau_{\rm h}-\tau_{\rm IGM}) \end{equation} where the first term in the exponent is due to the attenuation in the host galaxy (it depends on $N_{\rm HI}^h$ and we assume it to remain constant during the evolution), and the second term is determined by absorption in the medium surrounding the BH and in the intergalactic medium. In the hierarchical structure formation scenario, minihalos undergo mergers. {In some of these mergers the seeds of intermediate-mass BHs form, and they can grow efficiently only when a considerable reservoir of gas is available. This suggests that minihalos with growing BHs undergo frequent mergers and collect a sufficient gas mass for feeding their BHs.
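To illustrate why the host column mainly blocks photons near the Lyman edge while keV photons escape essentially unattenuated, here is a minimal sketch of the host-halo attenuation term, assuming the standard near-threshold approximation $\sigma_{\rm HI}\propto\nu^{-3}$ and neglecting helium opacity; all names and the simplified cross-section are our assumptions, not the exact rates used in the calculations.

```python
import math

SIGMA_H0 = 6.30e-18   # HI photoionization cross-section at 13.6 eV, cm^2
NU_H = 3.29e15        # HI ionization threshold frequency, Hz

def tau_host(nu, N_HI=1e20):
    """Approximate host-halo optical depth; sigma ~ nu^-3 above threshold."""
    if nu < NU_H:
        return 0.0
    return SIGMA_H0 * (nu / NU_H) ** -3 * N_HI

def attenuated_flux(L_nu, r_cm, nu, N_HI=1e20):
    """F_nu = L_nu/(4 pi r^2) exp(-tau); host absorption only, IGM term omitted."""
    return L_nu / (4.0 * math.pi * r_cm**2) * math.exp(-tau_host(nu, N_HI))

# The fiducial column N_HI^h = 1e20 cm^-2 is opaque at threshold (tau ~ 600)
# but transparent to ~keV photons (tau ~ 1e-3):
print(tau_host(NU_H), tau_host(100 * NU_H))
```

This is the reason why, in the model, the heating far from the BH is dominated by hard photons.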
The lower the radiative efficiency during BH growth, the larger the mass necessary to maintain the growth, and this mass may exceed the initial baryon mass of the host minihalo.} For instance, for $\epsilon=0.2$ the BH mass grows by about 33 times in $\sim$400~Myr. For a seed with $M_{BH}=300~M_\odot$ such growth can be maintained in a host minihalo {as small as $M\lower.5ex\hbox{\ltsima} 10^5~M_\odot$. However, for $\epsilon=0.1$ in the same period} the BH mass increases by about $2.5\times 10^3$ times, whereas for $\epsilon=0.05$ the ratio reaches values as high as $1.5\times 10^7$. {Therefore, only those minihalos with accretion rates $\dot M\geq 0.4~M_\odot$~yr$^{-1}$ could host a growing BH with such a low radiative efficiency.} In the $\Lambda$CDM model, star-forming minihalos in a wide range of masses $< 10^8 \, \rm M_\odot$ merge and virialize at $z\lower.5ex\hbox{\gtsima} 25$ as (3--4)$\sigma$ density peaks. {The merging rate of such minihalos seems to be sufficient} to provide the sites feeding the growth of massive BHs \citep{volonteri05}. Based on these considerations we start the evolution at $z_0=25$ and continue it for 400~Myr, which corresponds to a final redshift $z\simeq10$, close to the era at which reionization of the Universe is completed. Therefore we explicitly assume that black holes grow at a nearly steady Eddington accretion rate on cosmological time scales. Such a consideration excludes the recently widely discussed direct monolithic collapse of supermassive black holes \citep{begel06}, as these objects apparently keep the accretion rate close to the Eddington limit only on a very short time scale, $\sim 1$~Myr after formation \citep[see e.g.][]{john11}. How numerous could the high-redshift BHs be? The space density of haloes that can host BHs at high redshifts can be computed using the Press-Schechter formalism.
Assuming a typical halo mass of $10^7 M_\odot$, the comoving density of such haloes increases from $10^{-2} \, \rm Mpc^{-3}$ to nearly $1 \, \rm Mpc^{-3}$ in the redshift range 10 to 20 \citep[e.g.][]{Barkana2001}. However, converting the space density of haloes that can host BHs to the number density of BH precursors is highly uncertain. Their comoving density could lie in the range $10^{-3}\hbox{--}10^{-10} \, \rm Mpc^{-3}$ at $z \simeq 10$ \citep[e.g. Figure~4 of][]{D14}. Two AGNs have been detected at $z>7$, and these AGNs host BHs with $M\simeq 10^9 \, \rm M_\odot$. If these BHs grew from stellar seeds, the growth could have commenced at $z\simeq 14$ to reach $M \simeq 10^9 \, \rm M_\odot$ within one Eddington time (0.45~Gyr, which is close to our final time in the calculations) for $\epsilon = 0.05$. At smaller redshifts ($z\simeq 2\hbox{--}4$) such AGNs have absolute magnitudes in the range $-26$ to $-28$ (see Figure 5 in \citet{mclure04} and Figure 13 of \citet{qso-lumfunc}). Less massive BHs, $M \simeq 10^8 M_\odot$, are expected to be more than a hundred times more numerous \citep[Figure 6 in][]{volonteri06}. These BHs could have emerged from the same mass haloes but for larger values of $\epsilon$. In our model, a minihalo with a seed BH is immersed in the IGM. {The dynamical state and structure of the transition layer between the minihalo and the surrounding gas can in general be complicated, with an inhomogeneous distribution of gas density, temperature and velocity field.} We neglect the complications of this narrow interface and {match the minihalo directly to the IGM, starting our calculations from the internal boundary of the surrounding intergalactic gas. We assume this gas to have a homogeneous distribution, with density and temperature decreasing due to cosmological expansion as $\propto (1+z)^{3}$ and $\propto (1+z)^{2}$, respectively, {until} the ionizing radiation from the BH changes its thermodynamics}.
We consider the evolution of gas enclosed in concentric static spheres with a BH in the center. The radii of the spheres extend from $10^3$ to $10^7$~pc { (all distances are expressed in physical units unless otherwise specified)}. The radii of neighbouring spheres differ by a factor $a_r=1.1$: $r_{i+1} = a_r r_i$, which yields about 100 concentric shells over the four decades between the inner and outer radii. Note that the inner radius is about three times greater than the virial radius of a minihalo with $M=10^7~M_\odot$ formed at $z=20$ \citep[e.g.][]{ciardi05rev}. In each sphere we solve for the thermal and ionization evolution of hydrogen, neutral helium and singly ionized helium. We consider the following processes for primordial plasma: collisional ionization, recombination, and photoionization by UV/X-ray radiation from the BH attenuated by both the host galaxy and the surrounding IGM gas. The thermal evolution includes cooling due to collisional ionization of HI, HeI and HeII, recombination of HII, HeII (radiative and dielectronic) and HeIII, collisional excitation of HI, HeI ($1^2S$ and $2^3S$) and HeII, free-free emission, Compton cooling/heating, and photoionization heating. The reaction and cooling/heating rates are taken from \citet{cen92,glover07}. Because we consider ionization by X-ray radiation, the influence of secondary electrons is taken into account as described by \citet{steenberg85,ricotti02}. In {the equation of thermal evolution we add the cooling term due to the Hubble expansion, in order to correctly describe the evolution} on time scales greater than the local age of the universe. We solve the equations over a time scale of 400~Myr, such that for the initial redshift $z_0=20$ our calculations end at $z=8.5$. The initial gas temperature and HII fraction for a given redshift are obtained by using the RECFAST code \citep{recfast}, while helium in the initial state is assumed to be neutral.
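The shell grid can be sketched as follows (variable names are ours); a geometric progression with ratio $a_r=1.1$ over the four decades from $10^3$ to $10^7$~pc gives $\simeq 97$ shells, consistent with the roughly 100 shells quoted above.

```python
r_in_pc, r_out_pc, a_r = 1.0e3, 1.0e7, 1.1

# Build the geometric grid of sphere radii r_{i+1} = a_r * r_i
radii = [r_in_pc]
while radii[-1] * a_r <= r_out_pc:
    radii.append(radii[-1] * a_r)

n_shells = len(radii) - 1
print(n_shells)  # number of concentric shells between adjacent spheres
```

Each shell then carries its own thermal and ionization state, evolved under the attenuated BH flux.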
\section{Observable features of the Cosmic Dawn and Epoch of Reionization} \label{sec:obser} In this section we discuss in detail the possible observables in the redshift range $8.5 < z < 25$ owing to the impact of radiation from the accreting BH. \subsection{21cm line} Atomic collisions and scattering of UV photons couple the HI spin temperature { to} the gas kinetic temperature, $T_k$, { and the color temperature, $T_c$} \citep{field,wout}: \begin{equation} T_s^{\rm HI} = {T_{CMB} + y_c T_k + y_a T_c \over 1 + y_c + y_a} \label{tspin} \end{equation} Here $y_c$ and $y_a$ determine the coupling of { the} two states of the HI hyperfine splitting owing to collisions and Lyman-{ $\alpha$} photons (Wouthuysen-Field coupling), { respectively}; $y_c = C_{10}^{\rm HI} T_\star/(A_{\rm HI}T)$ with $T_\star = h\nu_{\rm HI}/k$ and $C_{10}^{\rm HI}$ being the collisional de-excitation rate of the hyperfine line of HI. { We assume that the color temperature is coupled to the gas kinetic temperature: $T_c \simeq T_k$.} The coefficient $y_a$ is defined similarly, with the collisional de-excitation rate replaced by the de-excitation rate owing to Lyman-$\alpha$ photons. Given the geometry of our physical setting, the coupling of the expanding gas with Lyman-$\alpha$ photons from the BH needs to be discussed in detail. Lyman-$\alpha$ photons in the rest frame of the BH are strongly absorbed in the halo of column density $N_{\rm HI} \simeq 10^{20} \, \rm cm^{-2}$ surrounding the BH, as the line-center cross-section for Lyman-$\alpha$ scattering is $\simeq 10^{-13} \, \rm cm^2$ (assuming a temperature $T\simeq 5000 \, \rm K$). Using the Voigt profile one can show that photons of frequencies $\nu \simeq \nu_\alpha\pm 50\Delta\nu_D$, where $\Delta\nu_D=(\nu_\alpha/c)(2kT/m_p)^{1/2} \simeq 10^{-5} \nu_\alpha$ is the Doppler width, can escape the halo.
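Eq.~(\ref{tspin}) and its limiting regimes can be checked with a few lines of Python (a sketch; names are ours): with no coupling the spin temperature relaxes to the CMB temperature, while strong Wouthuysen-Field coupling drives it to the kinetic temperature.

```python
def spin_temperature(T_cmb, T_k, y_c, y_a, T_c=None):
    """HI spin temperature, Eq. (4); color temperature T_c ~ T_k as assumed in the text."""
    if T_c is None:
        T_c = T_k
    return (T_cmb + y_c * T_k + y_a * T_c) / (1.0 + y_c + y_a)

# No coupling -> T_s = T_CMB; strong Lyman-alpha coupling -> T_s -> T_k:
print(spin_temperature(T_cmb=50.0, T_k=10.0, y_c=0.0, y_a=0.0))
print(spin_temperature(T_cmb=50.0, T_k=10.0, y_c=0.0, y_a=1e4))
```

The second limit is the regime relevant inside the zone of influence, where the BH supplies abundant Lyman-$\alpha$ photons.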
As the medium outside the halo is expanding, the photons will redshift, and photons with frequencies larger than Lyman-$\alpha$ in the BH rest frame can get absorbed in the expanding medium. Using the local Hubble law, $v = H(z)r$, one can show that these photons get absorbed over a range of distances $0.01\hbox{--}1 \, \rm Mpc$ from the halo. This motivates us to assume that the number of Lyman-$\alpha$ photons (i.e. photons with frequencies marginally above Lyman-$\alpha$ in the BH rest frame) in the expanding medium suffers only geometric $1/r^2$ dilution. In addition, we also assume that {\it in situ} injected Lyman-$\alpha$ photons emerge due to recombinations, with number density proportional to the local photoionization rate \citep[see~Eqs.~15~and~17~in][]{miralda04}. Here, following \citet{field}, we also explicitly assume the ``color temperature'' of Ly-$\alpha$ photons to be equal to the gas kinetic temperature. In the collisional de-excitation rate we take into account collisions with H atoms \citep{kuhlen} and electrons \citep{liszt}. $y_a$ is proportional to the number density of Lyman-$\alpha$ photons at the point of scattering. The differential brightness temperature for the redshifted HI line can be estimated as \citep{miralda04,Furlanettoetal}: \begin{eqnarray} \Delta T^b_{\rm HI} = 25~{\rm mK} (1 + \delta) {n_{\rm HI} \over n} ~ {T_s^{\rm HI} - T_{CMB} \over T_s^{\rm HI}} \left({\Omega_b h \over 0.03}\right) \times \nonumber \\ \times \left({0.3 \over \Omega_m}\right)^{0.5} \left({1+z\over 10}\right)^{0.5} \left[ { H(z)/(1+z) \over dv_{||}/dr_{||} }\right] \label{tb21} \end{eqnarray} where $\delta$ is the overdensity, which is neglected as we assume { the} uniform Hubble expansion at high redshifts, so that the gradient of the proper velocity along the line of sight, $dv_{||}/dr_{||}$, equals $H(z)/(1+z)$.
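A direct numerical transcription of Eq.~(\ref{tb21}) for pure Hubble flow (the bracketed velocity-gradient factor set to unity) reads as follows; the cosmological parameters default to the values adopted in this paper, and the example temperatures are illustrative, not taken from our runs.

```python
import math

def delta_Tb_21cm(x_HI, T_s, z, omega_b=0.049, omega_m=0.254, h=0.67, delta=0.0):
    """Differential brightness temperature of the 21 cm line in mK, Eq. (5),
    for pure Hubble flow (velocity-gradient factor equal to unity)."""
    T_cmb = 2.725 * (1.0 + z)
    return (25.0 * (1.0 + delta) * x_HI
            * (T_s - T_cmb) / T_s
            * (omega_b * h / 0.03)
            * math.sqrt(0.3 / omega_m)
            * math.sqrt((1.0 + z) / 10.0))

# Saturated emission from hot, fully neutral gas at z = 9,
# and strong absorption from cold gas (T_s = 10 K) at z = 17:
em = delta_Tb_21cm(x_HI=1.0, T_s=1e4, z=9.0)
ab = delta_Tb_21cm(x_HI=1.0, T_s=10.0, z=17.0)
print(em, ab)
```

The emission case saturates near $+30$~mK, while cold gas at high redshift produces an absorption signal of order $-100$~mK or deeper, anticipating the EDGES discussion below.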
{ Near the halos hosting BHs the line broadening is dominated by peculiar velocities of the IGM gas rather than by the Hubble expansion; in this case the line-center optical depth, and consequently the brightness temperature $\Delta T^b_{\rm HI}$, decrease. } \subsubsection{Global condition on HI absorption from EDGES observation} In this work, we model the HI signal from gas surrounding an isolated accreting black hole. Such black holes are not the only sources that can emit UV radiation relevant for modelling the HI absorption and emission signal from high redshift. While we do not incorporate in our models all other possible sources in the redshift range of interest, it is important to know the physical state of the gas far from the zone of influence of the black hole. This allows us to smoothly match the HI signal from regions close to the black holes to the signal expected from the ambient gas under global conditions. The recent EDGES observation \citep{2018Natur.555...67B} shows a sky-averaged absorption feature of strength $\Delta T \simeq -500 \,\rm mK$ in the frequency range $70$--$90$~MHz, which corresponds to a redshift range $15$--$19$ for the redshifted HI line. The minimum temperature of the IGM at $z\simeq 19$ is $T \simeq 6 \, \rm K$ in the usual case (standard recombination history), and it follows from Eq.~(\ref{tb21}) that the absorption trough in the redshifted HI hyperfine line at $z\simeq 19$ should not have been deeper than around $-180 \, \rm mK$. One plausible explanation of the EDGES result relates to overcooling of baryons by elastic collisions with dark matter particles, as suggested by \citet{2018Natur.555...71B}. In this case, as seen in Eqs.~(\ref{tspin}) and~(\ref{tb21}): ({\it a}) Lyman-$\alpha$ photons globally couple the spin temperature to the matter temperature, i.e. $T y_a \gg T_{CMB}$, such that $T_s = T$ at $z\simeq 19$, and ({\it b}) $T_s \ll T_{CMB}$ as the signal is seen in absorption and is strong.
Note, though, that this explanation is still widely debated because of possible systematic errors in the EDGES result \citep{hills18}. Another possible explanation is that there is an additional radio background at $z\simeq 18$ whose temperature $T_{\rm radio}$ is higher than the CMB temperature; in this case we can replace $T_{\rm CMB}$ with $T_{\rm CMB} +T_{\rm radio}$ in Eq.~(\ref{tb21}) \citep{feng18}, and the enhancement of the observed signal is then not owing to the cooling of baryons. It is also conceivable that the observed feature is owing to radiation from spinning dust grains in the Galactic ISM \citep{draine-miralda}. We explore here the implications of coupling between DM and baryons as a possible explanation of the EDGES result. To model the global conditions implied by the EDGES observations we solve one more equation for the dark matter temperature, $T_{\rm dm}$, which can be altered due to adiabatic expansion and the interaction with baryons. In addition, the term corresponding to the interaction of matter with dark matter is added to the matter temperature equation as well \citep[for details see e.g.][]{2018arXiv180303091B}. We follow \citet{2018arXiv180303091B} in modelling $\sigma_{\rm dm}$, the energy-exchange cross-section between dark matter and matter, as having the form of Rutherford scattering between a millicharged dark matter particle and electrons \citep[e.g.][]{dm-charged1,dm-charged2,dm-charged3,dm-charged4,dm-charged5}. In this case, $\sigma_{\rm dm} = 8\pi g^2 e^4/(m_e m_{\rm dm} v^4) \log{(\Lambda)}$, where $g$ is { the} ratio of the dark matter charge to the electron charge, $v$ is the relative velocity between the two particles, and $\langle \sigma_{\rm dm} v \rangle$ corresponds to thermal averaging. The most significant aspect of such an interaction for our purposes is that it is proportional to $1/v^4$, and therefore at higher redshifts, when the temperature is higher, the interaction is negligible.
The number density of dark matter particles, $n_{\rm dm} = \rho_{\rm dm}/m_{\rm dm}$, is another free parameter. As there are three free parameters in modelling this interaction---the number density of dark matter particles (or equivalently the mass of the dark matter particle), the interaction strength between dark matter and baryons, and the initial temperature of dark matter---a rich array of scenarios is possible \citep[for details see e.g.][]{2018arXiv180303091B}. It is not our aim here to constrain these parameters but to obtain the global condition of the HI gas in the redshift range of interest. { To cool the baryons at $z\simeq 20$, we require $n_{\rm dm} \sigma_{\rm dm} v > H$, where $H$ is the expansion rate of the universe. Using the expression for $\sigma_{\rm dm}$ given above, $v$ as the average speed of thermal electrons at $z\simeq 20$ before the additional cooling sets in, and assuming $1\%$ of the DM to be milli-charged, we obtain $g>10^{-7}$ for $m_{\rm dm} \simeq 10 \, \rm MeV$, in agreement with the results of \citet{2018arXiv180303091B}.} We also need a global heating source which allows the HI to heat above the CMB temperature for $z<15$, as the EDGES observation requires. We add the corresponding term that provides an additional source of global heating; this term is modelled as photoelectric heating by X-ray photons from sources other than the BH \citep[see e.g.][]{barkanafi,cohenfi,fialkov18}. As noted above, an essential component of our modelling of the EDGES result is $T_s^{\rm HI}=T$ in Eq. (\ref{tspin}). All the parameters (global heating rate, cross-section of dark matter-baryon scattering, Lyman-$\alpha$ coupling) have been chosen to fit the 21 cm brightness temperature $T_b \sim -500$~mK at $z\sim 20$ as in \citet{2018Natur.555...67B}.
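The order-of-magnitude estimate behind the bound $g\lower.5ex\hbox{\gtsima} 10^{-7}$ can be reproduced with the sketch below. The electron temperature, Coulomb logarithm and millicharged fraction are illustrative assumptions (not fitted values), and all variable names are ours; given the $1/v^4$ scaling, the result is sensitive to the assumed thermal speed and should be read as order-of-magnitude only.

```python
import math

# cgs constants
e_esu, m_e, k_B, G = 4.803e-10, 9.109e-28, 1.381e-16, 6.674e-8
Mpc, MeV_g = 3.086e24, 1.783e-27           # cm per Mpc; 1 MeV/c^2 in grams

h, om_m, om_b = 0.67, 0.254, 0.049
H0 = 100.0 * h * 1e5 / Mpc                 # Hubble constant, s^-1
rho_crit = 3 * H0**2 / (8 * math.pi * G)

z = 20.0
T_e = 7.0                                  # K, adiabatic IGM before extra cooling (assumed)
ln_Lambda, f_mc, m_dm = 30.0, 0.01, 10.0 * MeV_g  # assumed parameters

v = math.sqrt(3 * k_B * T_e / m_e)         # thermal electron speed, cm/s
n_dm = f_mc * rho_crit * (om_m - om_b) * (1 + z)**3 / m_dm
H_z = H0 * math.sqrt(om_m * (1 + z)**3)    # matter-dominated expansion rate

def sigma_dm(g):
    """Rutherford-like millicharge cross-section, sigma ~ g^2 / v^4."""
    return 8 * math.pi * g**2 * e_esu**4 * ln_Lambda / (m_e * m_dm * v**4)

# critical charge ratio where the scattering rate matches the expansion rate
g_crit = math.sqrt(H_z / (n_dm * sigma_dm(1.0) * v))
print(g_crit)
```

With these assumptions the critical value comes out at a few times $10^{-7}$, in line with the bound quoted above.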
\subsection{$^3${\rm HeII} hyperfine line} Another important hyperfine structure transition exists in the singly ionized helium-3 isotope, $^3$HeII, at 8.67~GHz \citep{townes57,sunyaev66,goldwire67,rood79,bell00,bagla09,mcquinn09,takeuchi14}. Similarly to the HI hyperfine line, this transition is excited by collisions with atoms and electrons and by photon scattering. The rate of transition owing to collisions of singly ionized helium-3 with electrons is given by: \begin{equation} C_{10}^{\rm ^3HeII} = n_e \left( {k T \over \pi m_e c^2} \right)^{1/2} c \sigma_e^{{\rm ^3HeII}} \end{equation} where $\sigma_e^{{\rm ^3HeII}}$ is the average cross-section of the spin exchange between $^3$HeII and electrons, which is approximated as \citep{mcquinn09} \begin{equation} \sigma_e^{ {\rm ^3HeII}} \simeq {14.3 {\rm eV} \over k T} a_0^2, \end{equation} where $a_0$ is the Bohr radius. In this case, the Wouthuysen-Field coupling between the two levels is caused by photons of wavelength $\lambda = 304 \, \rm \AA$ \citep[Eq.~17 in][]{mcquinn09}. The number density of these photons at the point of scattering is computed from the spectrum of the BH emission. This allows us to calculate the differential brightness temperature of the $^3$HeII line using Eq.~(\ref{tspin}): \begin{eqnarray} \Delta T^b_{^3{\rm HeII}} = 1.7\times 10^{-3}~{\rm mK} ~ (1 + \delta) ~ {n_{\rm HeII} \over n} \times \nonumber \\ \times ~ {T_s^{\rm ^3HeII} - T_{CMB} \over T_s^{\rm ^3HeII}} \left( {Y_{\rm ^3He}\over 10^{-5}}\right) \left({\Omega_b h \over 0.03}\right) \times \nonumber \\ \times \left({0.3 \over \Omega_m}\right)^{0.5} \left({1+z\over 10}\right)^{0.5} \left[ { H(z)/(1+z) \over dv_{||}/dr_{||} }\right] \label{tbhe} \end{eqnarray} where $Y_{\rm ^3He}$ is the primordial abundance of the helium-3 isotope, which is assumed to be equal to $10^{-5}$, and $n_{\rm HeII}$ is the number density of the singly ionized helium-4 isotope.
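The collisional coupling of the 8.67~GHz line is straightforward to evaluate from the two formulas above; a sketch in cgs units (names ours), for illustrative values $T=10^4$~K and $n_e=10^{-3}$~cm$^{-3}$:

```python
import math

k_B, m_e, c, a0 = 1.381e-16, 9.109e-28, 2.998e10, 5.292e-9  # cgs
eV = 1.602e-12  # erg

def sigma_spin_exchange(T):
    """Approximate 3HeII-electron spin-exchange cross-section, (14.3 eV / kT) a0^2."""
    return (14.3 * eV / (k_B * T)) * a0**2

def C10_He3II(n_e, T):
    """Collisional de-excitation rate of the 3HeII hyperfine transition, s^-1."""
    return n_e * math.sqrt(k_B * T / (math.pi * m_e * c**2)) * c * sigma_spin_exchange(T)

print(sigma_spin_exchange(1e4), C10_He3II(1e-3, 1e4))
```

For these values the cross-section is a few times $10^{-16}$~cm$^2$ and the de-excitation rate is of order $10^{-11}$~s$^{-1}$.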
\subsection{Optical and radio recombination lines} As seen in Figure~\ref{fig-evol}, growing BHs produce regions of high ionization which can potentially be detected in hydrogen recombination lines. The frequencies of the H$nj$ lines are: $ \nu_{{\rm H}nj} = c R \left[ {1\over n^2} - {1\over j^2} \right]$ with $j>n$ and $R = 1.0968\times 10^5$~cm$^{-1}$ being the Rydberg constant for hydrogen. The emissivity of a recombination line averaged over a sphere of radius $r_i$ around the BH is: $\epsilon_{{\rm H}nj}(r_i) = q_{nj} \alpha_j n_e(r_i) n_{\rm HII}(r_i)$; $n_e(r_i)$ and $n_{\rm HII}(r_i)$ are the number densities of $e$ and HII species, respectively, $\alpha_j$ is the recombination coefficient to the $j$th state \citep[for detailed discussion and derivation see e.g.][]{rybicki-book}, and $q_{nj} \simeq A_{jn}/\sum\limits_{m<j}^{} A_{jm}$ is the probability that an atom recombined to the $j$th state emits a photon by spontaneous decay to the $n$th state. {In practice, the ${\rm H}n\alpha$ lines from transitions between the states $n$ and $n+1$ are usually considered, because they are the strongest, their $A$-coefficients being the largest \citep[e.g.][]{rybicki-book}. } The emissivity in these lines can be approximated as: $\epsilon(r_i) \simeq 3.25 n^{-2.72} \alpha_B(r_i) n_e(r_i) n_{\rm HII}(r_i)$ \citep{rule13}, where $\alpha_B$ is the case B recombination rate \citep[eqn. 14.8 in][]{drainebook}. The total luminosity in the $j$-line is $L_j = (h c / \lambda_j) \sum_i \epsilon_j(r_i) V_i$, where $V_i$ is the volume of the $i$th shell. We include all spheres with $T>100$~K and achieve reasonable convergence for the predicted luminosity, {as the ionized fraction falls faster than $r_i^{-2}$}. The flux in the $j$-line is \begin{equation} F_{\nu_j} = {1 \over \Delta\nu_j} ~ { (1+z) L_j \over 4 \pi d_L^2} \label{flux} \end{equation} where $d_L$ is the luminosity distance.
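The line frequencies follow directly from the Rydberg formula above; the sketch below (names ours) also includes the approximate ${\rm H}n\alpha$ emissivity with a commonly used power-law fit for $\alpha_B(T)$, which is our assumption rather than the exact rate used in the calculations. As a sanity check, it recovers the Balmer-$\alpha$ wavelength ($\simeq 656$~nm) and the ${\rm H}100\alpha$ radio recombination line near 6.5~GHz.

```python
R_H = 1.0968e5   # Rydberg constant for hydrogen, cm^-1
c = 2.998e10     # speed of light, cm/s

def nu_line(n, j):
    """Frequency (Hz) of the H(n,j) recombination line, j > n."""
    return c * R_H * (1.0 / n**2 - 1.0 / j**2)

def alpha_B(T):
    """Case-B recombination coefficient; approximate power-law fit (assumed)."""
    return 2.59e-13 * (T / 1e4) ** -0.7   # cm^3 s^-1

def emissivity_Hnalpha(n, T, n_e, n_HII):
    """Approximate Hn-alpha emissivity: eps ~ 3.25 n^-2.72 alpha_B n_e n_HII."""
    return 3.25 * n ** -2.72 * alpha_B(T) * n_e * n_HII

lam_Halpha_nm = 1e7 * c / nu_line(2, 3)   # Balmer-alpha wavelength, nm
nu_H100a_GHz = nu_line(100, 101) / 1e9    # H100alpha radio recombination line
print(lam_Halpha_nm, nu_H100a_GHz)
```

High-$n$ radio lines are redshifted into bands accessible to cm-wave interferometers, which motivates the flux estimate of Eq.~(\ref{flux}).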
{The line width is $\Delta\nu_j = \max\left({1\over 2\pi}\sum_{i<j} A_{ji},\ \Delta\nu_D\right)$, i.e. the larger of the natural and Doppler widths.} {In all cases of interest here the Doppler broadening dominates, such that $\Delta\nu_j$ is} given by the Doppler line width: \begin{equation} \Delta\nu_j = {\nu_{j0} \over c} \sqrt{2 k T \over m_p}. \label{width} \end{equation} \section{Results} \label{sec:resu} \subsection{{\rm HI} 21 cm observables} \label{sib:21cm} \begin{figure} \center \includegraphics[width=75mm]{evol-tgas-m300z20.eps} \break \includegraphics[width=75mm]{evol-xHII-m300z20.eps} \break \includegraphics[width=75mm]{evol-xHeII-m300z20.eps} \caption{ The radial distribution (radius is in physical, not comoving units) of the kinetic temperature (upper), HII fraction (middle), and HeII fraction (lower) around a BH {with initial mass $M_{BH,z_0}=300M_\odot$ and radiative efficiency $\epsilon=0.1$} starting its evolution at $z_0 = 20$, for several redshifts: $z = 16.5, 12.5, 10.5, 8.5$ (lines from left to right); {dashed lines correspond to a BH with a constant mass {$M_{BH}=300M_\odot$}. } } \label{fig-evol} \end{figure} Figure~\ref{fig-evol} shows the thermal and ionization evolution around both non-growing (constant mass) and growing BHs with the initial redshift $z_0 = 20$. The non-growing BH is surrounded by {a zone of influence} -- the region of (physical) size $r\lower.5ex\hbox{\ltsima} 10^{5}$~pc in which the gas temperature and the ionized fractions of hydrogen and helium differ significantly from the background values. {In the central part, $r\sim 10$~kpc,} the ionized fraction of hydrogen reaches close to unity and the temperature exceeds $10^4$~K. {The zone of ionized gas shows} a slow evolution with redshift, {$r\propto t$}. The growing BH produces more ionizing photons and the { zone of influence increases in time much faster than in the previous case, nearly as $r\propto t^{1.6}$}.
For $\epsilon=0.1$, the size of the sphere where the gas temperature and ionized fraction differ markedly from the background values becomes, at redshift $z=8.5$, more than an order of magnitude larger than that for a non-growing BH. The physical {size of the zone of influence} of the growing BH increases from nearly 10~kpc to 300~kpc during the growth of the central BH. The {ionization fraction of hydrogen can reach a few per cent within the region with temperature exceeding 300~K}. The zone of influence of helium is generally smaller and reaches at most about 100~kpc. First, we consider the case when the implications of the EDGES results are not taken into account. Figure~\ref{figh1ya} shows radial profiles of the brightness temperature for static (dashed lines) and growing (solid lines) BHs for two different values of the radiative efficiency, $\epsilon=0.1$ and $0.05$. As seen, the brightness temperature peaks in the region with sufficiently high kinetic gas temperature $T$ and high fraction of atomic hydrogen, i.e. where the product $Tx_{\rm HI}$ peaks. On the contrary, the 21 cm signal vanishes where the Lyman-$\alpha$ coupling becomes inefficient. As seen from Eq.~(\ref{epsi}), a lower $\epsilon$ implies a higher accretion rate and a higher luminosity. Consequently, the zones of influence are greater and the 21 cm line emission is brighter at a given time, such that its appearance becomes more clearly pronounced. \begin{figure} \center \includegraphics[width=85mm]{evol-tb21-m300z20.eps} \includegraphics[width=85mm]{evol-tb21-m300z20-e0p05.eps} \caption{ {The brightness temperature in the 21 cm HI line as a function of radius is plotted for different cases around a BH with initial mass $M_{BH,z_0}=300~M_\odot$, starting its evolution at $z_0 = 20$ with the radiative efficiency $\epsilon = 0.1$ (upper panel) and 0.05 (lower panel). The dashed lines {shown in the upper panel} correspond to a non-growing BH with constant BH mass $M_{BH}=300~M_\odot$.
}} \label{figh1ya} \end{figure} \begin{figure} \center \includegraphics[width=85mm]{r_tbext-vs-e_m300z20.eps} \caption{ The radii of spheres around a BH {with the initial mass $M_{BH,z_0}=300~M_\odot$} starting its evolution at $z_0 = 20$ versus the radiative efficiency $\epsilon$ at two redshifts: $z = 10.5$ (dashed lines) and 8.5 (solid lines). The red lines show the radius at which the brightness temperature in the 21 cm HI line, $\Delta T_b$, reaches its maximum, and the green lines depict the radius within which $\Delta T_b$ is positive (see Figure~\ref{figh1ya}). } \label{fig-rad} \end{figure} \begin{figure} \center \includegraphics[width=85mm]{ang-fre-H-m300e01-z20-50.eps} \caption{ The angular diameter versus observed frequency for the spheres emitting in the HI 21 cm line around a growing BH with radiative efficiency $\epsilon = 0.1$ starting its evolution at redshifts $z_0 = 50$, 30 and 20 (lines from left to right). The diameter of the spheres is determined where the brightness temperature in the 21 cm HI line, $\Delta T_b^{\rm HI}$, reaches its maximum (see Figure~\ref{figh1ya}). } \label{fig-adia-fre-HI} \end{figure} Figure~\ref{fig-rad} shows the radius at which the brightness temperature in the 21 cm HI line, $\Delta T_b$, reaches its maximum (red lines) and the radius beyond which $\Delta T_b$ becomes negative (green lines) versus the radiative efficiency $\epsilon$, at two redshifts, $z = 10.5$ (dashed lines) and 8.5 (solid lines), for a BH starting its evolution at $z_0 = 20$. It is clearly seen that the region influenced by growing BHs is larger for smaller $\epsilon$: for $\epsilon\simeq 0.05$ it extends up to $\sim 1$~Mpc, corresponding to a comoving scale $\simeq 10 \, \rm Mpc$, which is close to the spatial resolution of ongoing radio interferometers like LOFAR. For $\epsilon \sim 0.1\hbox{--}0.25$ it reaches around $0.1$~Mpc at $z\lower.5ex\hbox{\ltsima} 10.5$.
This is also comparable to the mean distance between minihalos with $M\lower.5ex\hbox{\gtsima} 10^6~M_\odot$ at $z\sim 10$, which means that a growing BH can affect star formation in neighbouring minihalos \citep{haiman00}, which can result in a stronger signal in the 21 cm line. The evolution of such a region can be expressed in terms of observable quantities. Figure~\ref{fig-adia-fre-HI} shows how the angular diameter of the region emitting in the 21 cm line depends on the observed frequency $\nu_o = 1420~{\rm MHz}/(1+z)$ for a growing BH with radiative efficiency $\epsilon = 0.1$ starting its evolution at redshifts $z_0 = 50$, 30 and 20. The diameter of the emitting sphere is defined as that where the brightness temperature in the 21 cm HI line, $\Delta T_b^{\rm HI}$, reaches its maximum (see Figure~\ref{fig-evol}). One can note that the angular size of the regions becomes greater than 1~arcmin at $\nu \sim 110-150$~MHz. An increase of the radiative efficiency obviously leads to a larger angular size, e.g. it grows up to 1.7~arcmin at 150~MHz ($z=8.5$). Ongoing radio interferometers such as LOFAR and the upcoming SKA1-LOW have the capability of detecting the contrast between HI brightness temperatures on angular scales of a few arcminutes. { This contrast could be detected statistically, e.g. by measuring the two-point correlation function of the intensity of the redshifted HI line, or by imaging\footnote{for a discussion on sensitivities for these two observables in radio interferometry, see e.g. \citet{2008ApJ...673....1S}}. Our analysis can be extended to predict the two-point functions of the spatial distribution of HI, but we do not attempt it here, partly because these functions depend on the fraction of the universe in which the thermal and ionization history is impacted by early BHs \citep[e.g.][]{Zaldarriaga2004}; this fraction cannot be reliably computed because the space density of the precursors of these BHs is highly uncertain, as already noted above.
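The mapping from redshift to observed frequency and angular size can be sketched as follows (flat $\Lambda$CDM with the parameters adopted in this paper; simple trapezoidal integration for the comoving distance; the 0.3~Mpc physical size is the zone-of-influence scale quoted above, and all names are ours).

```python
import math

h, om_m = 0.67, 0.254
c_km = 2.998e5
DH = c_km / (100.0 * h)                  # Hubble distance, Mpc

def E(z):
    """Dimensionless expansion rate for flat LCDM."""
    return math.sqrt(om_m * (1 + z)**3 + (1 - om_m))

def comoving_distance(z, n=4000):
    """D_C = D_H * int_0^z dz'/E(z'), trapezoidal rule, in Mpc."""
    dz = z / n
    s = 0.5 * (1.0 / E(0.0) + 1.0 / E(z)) + sum(1.0 / E(i * dz) for i in range(1, n))
    return DH * s * dz

def nu_obs_MHz(z):
    """Observed frequency of the redshifted 21 cm line."""
    return 1420.4 / (1.0 + z)

def angular_size_arcmin(l_phys_mpc, z):
    """Angular size of a physical length l at redshift z."""
    d_A = comoving_distance(z) / (1.0 + z)   # angular-diameter distance, Mpc
    return l_phys_mpc / d_A * (180.0 / math.pi) * 60.0

print(nu_obs_MHz(8.5), angular_size_arcmin(0.3, 8.5))
```

At $z=8.5$ this gives $\nu_o\simeq 150$~MHz and an angular size of order 1~arcmin for a 0.3~Mpc (physical) region, consistent with the scales discussed above.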
For imaging, the projected sensitivity of SKA is expected to reach a few millikelvin on angular scales from 1--10 arcminutes \citep{2015aska.confE...1K}}. The expected contrast (Figure~\ref{figh1ya}), particularly in light of the recent EDGES result, is likely to reach a few hundred millikelvin, which is easily detectable by SKA1-LOW and possibly by LOFAR. \begin{figure} \center \includegraphics[width=85mm]{evol-tb21-e0p1m300z40-dm.eps} \caption{The brightness temperature in the 21 cm HI line vs radius around a BH with initial mass $M_{BH,z_0}=300~M_\odot$, $\epsilon = 0.1$ and $z_0 = 40$, with altered thermodynamics of baryons due to elastic scattering with cold dark matter. } \label{figh2ya} \end{figure} \begin{figure} \center \includegraphics[width=85mm]{evol-tb21-e0p1m300z40-heat-nodm.eps} \caption{ Same as in Figure \ref{figh2ya} without baryon cooling due to elastic collisions with dark matter, but with an additional heating coming from energy released by stars in the first episode of star formation. } \label{fig-rad-vsz040} \end{figure} \begin{figure} \center \includegraphics[width=85mm]{evol-tb21-m300z40-heat-compare.eps} \caption{ { The brightness temperature in the 21 cm HI line vs radius around a BH with initial mass $M_{BH,z_0}=300~M_\odot$, $\epsilon = 0.1$ and $z_0 = 40$, without (solid lines) and with (dashed lines) an additional heating coming from energy released by stars in the first episode of star formation. } } \label{fig-rad-vsz040-heatc} \end{figure} \subsubsection{Altered thermodynamics from DM cooling} \label{susub:dm} We next discuss the impact on cosmological observables of the altered thermal history caused by additional cooling of baryons in elastic interactions with dark matter \citep{2018Natur.555...67B}. We show the impact of the global thermal state of the neutral gas implied by this result in Figure~\ref{figh2ya}. Closer to the BH the thermal and ionization state of the gas is determined by the emission from the BH.
However, unlike the case shown in Figure~\ref{figh1ya}, the HI signal is seen in strong absorption far away from the BH in the redshift range 15--19, in agreement with the EDGES results discussed above. For comparison, in Figure~\ref{fig-rad-vsz040} we show a similar model without baryon cooling forced by elastic interactions with dark matter, but with heating from energy released in initial episodes of star formation as in \citet{barkanafi,cohenfi}. An obvious distinctive feature of models with DM cooling is that outside the zone of influence the brightness temperature follows the global behavior of the HI spin temperature. This situation causes a strong spatial contrast in HI brightness temperature in the redshift range of interest, which makes it easier to observe this signal using ongoing and future radio interferometers. \subsection{Dependences on initial parameters} \label{ipar} Minihalos are thought to form in high peaks of the cosmological density field. Even though for higher redshifts such peaks become rarer, minihalos can form as early as $z\sim 50$ \citep{gao-first05}. Such minihalos can host the first BHs, which in turn can become progenitors of the supermassive BHs $M\sim 10^9~M_\odot$ found at $z\sim 6-7$ \citep[e.g.][]{mortlock11,wu-bhs15}. {We briefly discuss possible observational manifestations of BHs that began growing at higher redshifts.} One obvious consequence of a BH growing at higher redshift is the larger radius of the zone of influence at a given $z$. For instance, the size of the zone around a BH growing from $z_0=25$ is greater than that of a BH at $z_0=20$ by about 60\% {at $z=16.5$ (the corresponding 21 cm line shifts to 80~MHz) and 30\% at $z=9$ (the corresponding line peak frequency is 142~MHz).} The brightness temperature magnitude decreases from 5.7~mK at 80~MHz to 1.8~mK at 142~MHz, with weak dependence on the initial redshift $z_0$. Another important issue concerns the mass budget of a growing BH.
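This mass budget can be estimated from Eddington-limited growth, for which $M(t)=M_{BH,t=0}\exp\left[\frac{1-\epsilon}{\epsilon}\frac{t}{t_{E}}\right]$ with $t_{E}=\sigma_T c/(4\pi G m_p)\approx 450$~Myr; the following rough numerical sketch (the exponential law and the value of $t_{E}$ are standard assumptions, not taken from the accretion model of this paper) reproduces the growth factors quoted below to within a factor of $\sim 2$:

```python
import math

T_EDD_MYR = 450.0  # Eddington (Salpeter) timescale sigma_T c / (4 pi G m_p), in Myr

def growth_factor(eps, t_myr):
    """Mass growth factor M(t)/M_0 for Eddington-limited accretion
    with radiative efficiency eps, over a time t_myr (in Myr)."""
    return math.exp((1.0 - eps) / eps * t_myr / T_EDD_MYR)

# ~400 Myr of growth (z_0 = 20 down to z ~ 8.5):
print(f"eps = 0.10: M/M0 ~ {growth_factor(0.10, 400.0):.1e}")  # a few 10^3
print(f"eps = 0.05: M/M0 ~ {growth_factor(0.05, 400.0):.1e}")  # ~10^7
```

The strong dependence on $\epsilon$ in the exponent is what makes the baryon supply of the host halo so much more critical at low radiative efficiency.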
Dark matter halos that host BHs should have a sufficient amount of baryons to feed the BH. This requirement is especially critical for {lower values} of the radiative efficiency $\epsilon$. As mentioned above, for $\epsilon=0.1$ a BH mass grows by roughly a factor $2.5\times 10^3$ in $\sim$400~Myr of evolution, but the factor reaches $1.5\times 10^7$ for $\epsilon=0.05$. In such halos the HI column density might be higher than the fiducial value adopted here, $N_{\rm HI} = 10^{20}$~cm$^{-2}$. {An increase of the HI column density makes the brightness temperature radial profiles shrink. For example, for $N_{\rm HI} = 10^{21}$~cm$^{-2}$}, the peak of the 21 cm brightness temperature for $z=16.5$ shifts from $r\sim 7$~kpc, corresponding to the fiducial HI column density, to $\sim 3$~kpc. However, later on this difference diminishes, e.g. the ratio between the radii becomes about 1.3 at $z=8.5$. The difference in sizes of the zone where $\Delta T_b>0$ is smaller and becomes negligible at the final redshift. The masses of stellar BHs formed by very massive stars remain uncertain. It is conceivable that the initial mass of a BH may be { either lower or} higher than the fiducial value $M_{BH,t=0} = 300~M_\odot$. As expected, more massive BHs produce larger zones of influence. { The radius of the zone at which the brightness temperature in the HI 21 cm line reaches maximum depends on the initial mass of a BH seed as $r \sim M_{BH,t=0}^{0.38}{\rm exp}(z^{-1.16})$ for $\epsilon=0.1$ and $M_{BH,t=0}=(30-10^3)~M_\odot$. For example, } its radius increases by about a factor 1.5 for $M_{BH,t=0} = 10^3~M_\odot$ until $z=8.5$ if a BH starts growing at $z_0=20$. The size of the zone of influence around a growing BH depends on the slope $\alpha$ of the spectral energy distribution (\ref{lum}), which might vary from $-1.7$ to $-1.4$. A flatter spectrum leads to a larger radius of the zone, whereas a steeper one produces a smaller zone.
For instance, the zone around a growing BH with $\alpha=-1.7$ is $\sim 20-25$\% smaller than that for the fiducial value $\alpha=-1.5$. Finally, we consider how heating and the Ly$\alpha$ background affect the evolution of zones of influence around growing BHs. Resonance and high-energy photons produced in the initial episode of star formation provide a homogeneous background in this case. Figures~\ref{figh1ya} and \ref{fig-rad-vsz040} show the HI signal around a halo with a growing BH immersed in the IGM evolved adiabatically and exposed to both X-ray and Ly$\alpha$ background photons as in \citet{barkanafi,cohenfi}. These models represent extreme cases: in the former there is no external background radiation, whereas the latter includes a strong (maximum in the sense that $T\simeq T_s$) Ly$\alpha$ pumping rate and heating from background ionizing photons. We combine the expected HI signals for these models in Figure~\ref{fig-rad-vsz040-heatc}. The growth of the brightness temperature in the model with heating is due to the strong Ly$\alpha$ background. The size of the influence zone increases with decreasing redshift in the presence of the background. At $z=10$ the size doubles as compared to that in the model without the background radiation. At high redshifts, where heating is weak, such an increase is small. \subsection{$^3${\rm HeII} hyperfine line} \label{sub:3he} As discussed above, the other potential observable is the $^3$HeII hyperfine line. Unlike massive stars, { which can also ionize HeII \citep[e.g.,][]{tum00}}, BHs can form {large} HeII and even HeIII ionization zones. Figure~\ref{fig-evol} presents the radial distribution of the HeII fraction around BHs with a constant and a growing mass. The size of the HeII region around a BH with a constant mass of several hundred solar masses is about 1-3~kpc, which is comparable to the virial radius of the host dark matter minihalo. However, it increases by several tens or even hundreds of times around a growing BH.
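Both the HI signal discussed above and the $^3$HeII signal considered next are observed at the redshifted frequency $\nu_{\rm obs}=\nu_{\rm rest}/(1+z)$; a minimal sketch (the rest frequencies, 1420.4~MHz for the HI 21 cm line and 8665~MHz for the $^3$HeII 3.46~cm hyperfine line, are standard values):

```python
NU_HI_MHZ = 1420.4     # rest frequency of the HI 21 cm line
NU_3HEII_MHZ = 8665.0  # rest frequency of the 3HeII hyperfine (3.46 cm) line

def nu_obs(nu_rest_mhz, z):
    """Observed frequency (in MHz) of a line emitted at redshift z."""
    return nu_rest_mhz / (1.0 + z)

print(f"HI    at z = 16.5: {nu_obs(NU_HI_MHZ, 16.5):7.1f} MHz")    # ~80 MHz
print(f"HI    at z =  9.0: {nu_obs(NU_HI_MHZ, 9.0):7.1f} MHz")     # ~142 MHz
print(f"3HeII at z =  7.7: {nu_obs(NU_3HEII_MHZ, 7.7):7.1f} MHz")  # ~1 GHz
```

These are the frequencies at which the angular diameters of the emitting spheres are plotted in Figures~\ref{fig-adia-fre-HI} and \ref{fig-adia-fre-3HeII}.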
Such zones can emit in the hyperfine structure line of the singly ionized helium-3 isotope. The brightness temperature in the $^3$HeII line reaches several tens of nanokelvin (Figure~\ref{figh1he3}), and the size of the emission zone can extend up to more than 10~kpc. The angular size of such zones at frequency $\sim 1$ GHz is $0.3\hbox{--}0.4$ arcmin, as shown in Figure~\ref{fig-adia-fre-3HeII}. The upcoming radio interferometer SKA1-MID can reach sub-microjansky flux sensitivity at such frequencies on these angular scales \citep{2015aska.confE...3A}, which corresponds to a brightness-temperature sensitivity that is still orders of magnitude larger than the expected signal. Therefore it is unlikely that this signal will be detected by upcoming radio interferometers. \begin{figure} \center \includegraphics[width=85mm]{evol-tb3HeII-m300_z20.eps} \includegraphics[width=85mm]{evol-tb3HeII-m300_z20-e0p05.eps} \caption{{Same as in Figure~\ref{figh1ya} but for the $^3$HeII 3.46~cm line.}} \label{figh1he3} \end{figure} \begin{figure} \center \includegraphics[width=85mm]{ang-fre-He-m300e01-z20-50.eps} \caption{ The dependence ``angular diameter -- observed frequency'' for spheres emitting in the $^3$HeII 3 cm line around a growing BH with radiative efficiency $\epsilon = 0.1$ starting its evolution at redshifts $z_0 = 50$, 30 and 20 (lines from left to right). The diameter of the spheres is defined as that where the brightness temperature in the line $\Delta T_b^{\rm 3HeII}$ reaches maximum (see Figure~\ref{figh1he3}).
} \label{fig-adia-fre-3HeII} \end{figure} \begin{figure*} \center \includegraphics[width=85mm]{flxHa-vs-z_m300z20.eps} \includegraphics[width=85mm]{flxHn30a-vs-z_m300z20.eps} \caption{ {The fluxes in the H$\alpha$ (left panel) and Hn$\alpha$, $n=30$ (right panel) recombination lines that can be detected from partially ionized spheres around a BH {with the initial mass $M_{BH,z_0}=300~M_\odot$} starting its evolution at $z_0 = 20$ for several radiative efficiencies $\epsilon = 0.05, 0.1, 0.2, 0.4, 0.6$ and a static BH mass {$M_{BH}=300~M_\odot$} (lines from top to bottom)}. } \label{fig-flx} \end{figure*} \subsection{$n\alpha$ {\rm HI} recombination lines} We next consider recombination lines arising from the ionized regions surrounding the accreting BH. Figure~\ref{fig-flx} presents the flux in the H$\alpha$ line (left panel) from partially ionized spheres around a BH starting its evolution at $z_0 = 20$. The size of the regions that dominate the emission in the H$\alpha$ line is $\simeq 10 \, \rm kpc$. The flux for $\epsilon \sim 0.05$ exceeds $1~\mu$Jy at $z\lower.5ex\hbox{\ltsima} 11$. The angular size of such a region is $\simeq 0.2''$, which can be resolved by JWST, which has an angular resolution of $\simeq 0.05''$. The redshifted H$\alpha$ line has wavelength $\simeq 7.8$~micron in this case, which is accessible to the Mid-Infrared Instrument (MIRI) aboard JWST. Around this wavelength, a source of flux $\simeq 0.2 \, \rm \mu$Jy can be detected with signal-to-noise $S/N=10$ in an integration time of $10^4 \, \rm sec$. As this sensitivity corresponds to a source within the resolution element of the instrument ($\simeq 0.05''$) and the source angular size is nearly four times the resolution, the sensitivity of detection degrades by nearly a factor of 4. A comparison of this estimate of sensitivity with the fluxes shown in Figure~\ref{fig-flx} shows that the regions of influence around BHs with $\epsilon \sim 0.05$ (starting their growth at $z_0=20$) can be detected for $z\simeq 10\hbox{--} 12$.
For BHs with higher radiative efficiency $\epsilon \sim 0.1$ the surrounding gas might be observable in H$\alpha$ at $z\sim 8.5$. Expected fluxes from transitions for $n=30$ (Figure~\ref{fig-flx}, right panel) are near the thresholds of modern radio telescopes only around very rapidly growing BHs ($\epsilon=0.05$). It should be noted that it is much easier to detect an $(n,n-1)$ transition for smaller $n$, as the line flux scales as $\varepsilon \sim n^{-2.72}$. \section{Conclusions} \label{sec:sumcon} In this paper, we considered the impact of a {growing} black hole on the thermal and ionization state of the IGM in the redshift range $8 < z <25$, and discussed possible observables that can probe this influence. We have found that the sizes of {zones of ionized gas} around growing BHs are greater than that for a non-growing BH: for accretion with radiative efficiency $\epsilon=0.1$ they are more than an order of magnitude larger at redshift $z=8.5$. The physical size of a zone of influence increases from nearly 10~kpc to 300~kpc during the growth of a BH. Most of this region contains hydrogen ionized up to a sizeable fraction of unity, with temperature exceeding 300~K. The helium ionization region is generally smaller and reaches a maximum of 100~kpc. We consider three observables as probes of growing primordial BHs. We show that the influence region of 21~cm emission around an accreting BH with radiative efficiency $\epsilon\lower.5ex\hbox{\gtsima} 0.05\hbox{--}0.1$ could be in the range of a few hundred kiloparsecs to 1 Mpc (Figure~\ref{figh1ya}). The angular scale of this emission and the spatial contrast of the HI signal are accessible to ongoing and upcoming radio telescopes such as SKA1-LOW. We also consider the impact of the recent EDGES observation \citep{2018Natur.555...67B} and show that it greatly enhances the expected contrast (Figure~\ref{figh2ya}).
We also study the emission of the hyperfine line of $^3$HeII ($\lambda = 3.4 \, \rm cm$) from regions surrounding the growing BH. The brightness temperature in this line could reach tens of nanokelvin. Taking into account the sizes of these regions, we anticipate that this emission cannot be detected by the upcoming radio telescope SKA1-MID. We finally consider hydrogen recombination lines $(n,n-1)$ from ionized regions surrounding growing BHs. The H$\alpha$ line provides the best prospect of detection (Figure~\ref{fig-flx}); JWST can detect this line with $S/N=10$ in ten thousand seconds of integration. Expected fluxes from transitions between higher levels (e.g. Figure~\ref{fig-flx} for $n=30$) are near the thresholds of modern radio telescopes only around very rapidly growing BHs. In sum, we model the emission from an accreting primordial BH and study its impact on the ionization and thermal state of the surrounding medium. We also consider the prospects of detecting this dynamical process in the redshift range $8.5 < z < 25$. In conclusion, we note that the observability of the features we discuss in the paper would be greatly boosted if the precursors of supermassive black holes could be detected at high redshifts. This possibility has been studied by \citet{valiante18sta,valiante18obs}. Their analysis suggests that future missions such as JWST will be able to detect high-mass BH seeds at $z\sim 16$ directly. \vspace{1cm} We are thankful to the referee for a careful reading of the manuscript and very detailed comments. This work is supported by the joint RFBR-DST project (RFBR 17-52-45063, DST P-276). The work by YS is done under partial support from the joint RFBR-DST project (17-52-45053), and the Program of the Presidium of RAS (project code 28). The code for the thermal evolution has been developed under support by the Russian Scientific Foundation (14-50-00043).
\baselineskip=24pt \section{Basic properties of contact metric structures} A contact form on a $2n+1$-dimensional manifold $M$ is a one-form $\eta$ such that $\eta\wedge (d\eta )^n$ is a volume form on $M$. Given a contact manifold $(M,\eta )$, there exist tensor fields $(\xi ,\phi , g)$, where $g$ is a Riemannian metric, $\xi$ is a unit vector field, called the Reeb field of $\eta$, and $\phi$ is an endomorphism of the tangent bundle of $M$, such that \begin{itemize} \item[(i)] $\eta (\xi )=1,~\phi^2 =-Id +\eta\otimes\xi ,~\phi\xi =0$ \item[(ii)] $d\eta =2g(.,\phi .)$ \end{itemize} The data $(M,\eta ,\xi ,\phi ,g)$ is called a contact metric structure; see \cite{BLA} for more details. Denoting by $\nabla$ the Levi-Civita connection of $g$, and by $$R(X,Y)Z=\nabla_X\nabla_YZ-\nabla_Y\nabla_XZ-\nabla_{[X,Y]}Z$$ its curvature tensor, a contact metric structure $(M,\eta ,\xi ,\phi ,g)$ is called Sasakian if the condition $$(\nabla_X\phi )Y=g(X,Y)\xi -\eta (Y)X$$ is satisfied for all tangent vectors $X$ and $Y$. A well-known curvature characterization of the Sasakian condition is as follows: \begin{pro} {\it A contact metric structure $(M,\eta , \xi ,\phi ,g)$ is Sasakian if and only if $$R(X,Y)\xi =\eta (Y)X-\eta (X)Y$$ for all tangent vectors $X$ and $Y$.}\end{pro} A condition weaker than the Sasakian one is the K-contact condition. A contact metric structure $(M,\eta ,\xi ,\phi ,g)$ is called K-contact if the tensor field $h=\frac{1}{2}L_\xi \phi$ vanishes identically. Here, $L_\xi \phi$ stands for the Lie derivative of $\phi$ in the direction of $\xi$. The above K-contact condition is known to be equivalent to the Reeb vector field $\xi$ being a $g$-infinitesimal isometry, or a Killing vector field. The tensor field $h$ is known to be symmetric and to anticommute with $\phi$.
An equally well-known curvature characterization of K-contactness is as follows: \begin{pro}{\it A contact metric structure $(M,\eta ,\xi ,\phi ,g)$ is K-contact if and only if $$R(X,\xi )\xi =X-\eta (X)\xi$$ for all tangent vectors $X$.}\end{pro} The notation ``$l$'' is common for the tensor $$lX=R(X,\xi )\xi$$ \begin{pro}\label{prop33} On a contact metric structure $(M,\eta ,\xi ,\phi ,g)$, the following identities hold: \begin{equation}\nabla_\xi h=\phi -h^2\phi -\phi l \label{bl1}\end{equation} \begin{equation}\phi l\phi -l=2(h^2+\phi^2) \label{bl2}\end{equation} \begin{equation}L_\xi h=\nabla_\xi h+2\phi h+2\phi h^2\label{bl3}\end{equation} \end{pro} \proof The first two identities appear in Blair's book \cite{BLA}. We establish the third one. $$\begin{array}{rcl}( L_\xi h) X&=&[\xi ,hX]-h[\xi ,X]\\&=&\nabla_\xi(hX)-\nabla_{hX}\xi -h(\nabla_\xi X-\nabla_X\xi )\\&=&(\nabla_\xi h)X+h\nabla_\xi X-[-\phi hX-\phi h^2X]-h\nabla_\xi X+h[-\phi X-\phi hX]\\ &=&(\nabla_\xi h)X+h\nabla_\xi X+\phi hX+\phi h^2X-h\nabla_\xi X-h\phi X+\phi h^2 X\\ &=&(\nabla_\xi h)X+2\phi hX+2\phi h^2X \end{array} $$ $\qed$ Given a contact metric structure $(M,\eta ,\xi , \phi , g)$, its $D_a$-homothetic deformation is a new contact metric structure $(M,\overline{\eta}, \overline{\xi }, \overline{\phi}, \overline{g})$ determined by a real number $a>0$: $$\overline{\eta}=a\eta,~~\overline{\xi}=\frac{\xi}{a},~~\overline{\phi}=\phi$$ $$\overline{g}=ag+a(a-1)\eta\otimes\eta$$ D-homothetic deformations preserve the K-contact and Sasakian conditions.
\section{Weakly $(\kappa ,\mu )$-spaces} A direct calculation shows that under a $D_a$-homothetic deformation, the curvature tensor transforms as follows: $$\begin{array}{rcl}a\overline{R}(X,Y)\overline{\xi}&=&R(X,Y)\xi -(a-1)[(\nabla_X\phi )Y-(\nabla_Y\phi )X+\eta (X)(Y+hY)\\&&-\eta (Y)(X+hX)]+(a-1)^2[\eta (Y)X-\eta (X)Y] \end{array} $$ Letting $Y=\xi$ and recalling $\nabla_\xi\phi =0$, we get: $$\begin{array}{rcl}a\overline{R}(X,\xi )\overline{\xi}&=&R(X,\xi )\xi-(a-1)[(\nabla_X\phi )\xi +\eta (X)\xi -(X+hX)]\\&&+(a-1)^2[X-\eta (X)\xi ] \end{array} $$ On any contact metric manifold, the following identity holds: $$(\nabla_X\phi)\xi =-\phi\nabla_X\xi=-X+\eta (X)\xi -hX.$$ Taking this identity into account, we see that the curvature tensor deforms as follows: $$a^2\overline{R}(X,\overline{\xi})\overline{\xi}=R(X,\xi )\xi +(a^2-1)(X-\eta (X)\xi )+2(a-1)hX$$ Equivalently, since $\xi =a\overline{\xi}$ and $h=a\overline{h}$, \begin{equation}\overline{R}(X,\overline{\xi})\overline{\xi}=\frac{1}{a^2}R(X,\xi )\xi +\frac{a^2-1}{a^2}(X-\overline{\eta} (X)\overline{\xi})+\frac{2a-2}{a}\overline{h}X\label{k6}\end{equation} It follows from (\ref{k6}) that, under a $D_a$-homothetic deformation, the condition $R(X,\xi )\xi =0$ transforms into $$\overline{R}(X,\overline{\xi} )\overline{\xi} =\kappa (X-\overline{\eta} (X)\overline{\xi} )+\mu \overline{h}X$$ where $\kappa =\frac{a^2-1}{a^2}$ and $\mu =\frac{2a-2}{a}$. As a generalization of both $R(X,\xi )\xi =0$ and the K-contact condition, $R(X,\xi )\xi =X-\eta (X)\xi $, we consider $$R(X,\xi )\xi =\kappa (X-\eta(X)\xi ) +\mu hX.$$ We call this the weak $(\kappa , \mu )$ condition. The same generalization was referred to as a Jacobi $(\kappa ,\mu )$-contact manifold in \cite{GHS}. Let us also point out that the strong $(\kappa ,\mu )$ condition $R(X,Y)\xi=\kappa (\eta(Y)X-\eta (X)Y)+\mu (\eta (Y)hX-\eta (X) hY)$ has been introduced in \cite{BKP}.
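The values $\kappa =\frac{a^2-1}{a^2}$, $\mu =\frac{2a-2}{a}$ obtained above are the image of $(\kappa ,\mu )=(0,0)$ under the general transformation rule $\overline{\kappa}=\frac{\kappa +a^2-1}{a^2}$, $\overline{\mu}=\frac{\mu +2a-2}{a}$; a quick numerical check of this rule, including the composition property $D_b\circ D_a=D_{ab}$ that one expects from the definition of the deformation:

```python
def deform(kappa, mu, a):
    """(kappa, mu) of the D_a-homothetic deformation of a weak (kappa, mu) structure."""
    return (kappa + a * a - 1.0) / (a * a), (mu + 2.0 * a - 2.0) / a

a = 3.0
# Deforming the R(X, xi)xi = 0 case, i.e. (kappa, mu) = (0, 0):
k1, mu1 = deform(0.0, 0.0, a)
print(k1, mu1)  # (a^2 - 1)/a^2 = 8/9 and (2a - 2)/a = 4/3

# Deforming by a and then by b agrees with deforming once by ab:
b = 1.7
stepwise = deform(*deform(0.2, 0.5, a), b)
direct = deform(0.2, 0.5, a * b)
assert all(abs(x - y) < 1e-12 for x, y in zip(stepwise, direct))
```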
Examples of weakly $(\kappa ,\mu )$ spaces which are not strongly $(\kappa ,\mu )$ are provided by the Darboux contact forms $\eta =\frac{1}{2}(dz-\sum y^idx^i)$ on ${\bf R}^{2n+1}$ with associated metric $$g=\frac{1}{4}\left(\begin{array}{ccr} \delta_{ij}+y^iy^j+\delta_{ij}z^2&\delta_{ij}z&-y^i\\\delta_{ij}z&\delta_{ij}&0\\-y^j&0&1\end{array}\right) $$ (see \cite{BLA}). Other examples of weakly $(\kappa ,\mu )$-spaces have been found on normal bundles of totally geodesic Legendre submanifolds in Sasakian manifolds (see \cite{BAN}). The two notions of $(\kappa ,\mu )$-spaces are D-homothetically invariant. It follows from identity (\ref{k6}) that, if $(M,\eta ,\xi ,\phi ,g )$ is a (weak) $(\kappa ,\mu )$ structure, then the $D_a$-homothetic deformation $( \overline{\eta}, \overline{\xi}, \phi ,\overline{g} )$ is a (weak) $(\overline{\kappa}, \overline{\mu})$-structure with: \begin{equation}\label{km} \overline{\kappa}=\frac{\kappa +a^2-1}{a^2},~~\overline{\mu}=\frac{\mu +2a-2}{a}\end{equation} The tensor fields $\phi$ and $h$ on a weakly $(\kappa ,\mu )$-space are related by the identities in the following proposition. 
\begin{pro}\label{prop4} On a weakly $(\kappa ,\mu )$-space $(M,\eta ,\xi , \phi , g)$, the following identities hold: \begin{equation} h^2=(\kappa -1)\phi^2,~~\kappa \le 1\label{ka1}\end{equation} \begin{equation}\nabla_\xi h=-\mu\phi h\label{ka2}\end{equation} \begin{equation} L_\xi h=(2-\mu )\phi h+2(1-\kappa )\phi\label{eq6}\end{equation} \end{pro} \proof Starting with identity (\ref{bl2}), which is valid on any contact metric structure, one has, for any tangent vector $X$: $$\begin{array}{rcl}2h^2X+2\phi^2X&=&\phi (\kappa \phi X+\mu h\phi X)-(\kappa (-\phi^2X)+\mu hX)\\&=&\kappa\phi^2X+\mu\phi h\phi X+\kappa\phi^2X-\mu hX\\&=&\kappa\phi^2X-\mu h\phi^2X+\kappa\phi^2X-\mu hX\\&=&2\kappa\phi^2X\end{array} $$ Hence, grouping terms, $$2h^2X=(2\kappa -2)\phi^2X$$ So $$h^2=(\kappa -1)\phi^2$$ But, since $h$ is symmetric, $h^2$ must be a non-negative operator, hence $\kappa \le 1$, proving (\ref{ka1}). From identity (\ref{bl1}) combined with $lX=\kappa (X-\eta (X)\xi )+\mu hX$, we see that $$\begin{array}{rcl}(\nabla_\xi h)X&=&\phi X-h^2\phi X-\phi (\kappa (X-\eta (X)\xi )+\mu hX)\\&=&\phi X-(\kappa -1)\phi^3X-\kappa\phi X-\mu\phi hX\\&=&(1-\kappa +(\kappa -1))\phi X-\mu\phi hX\\&=&-\mu \phi hX \end{array} $$ proving (\ref{ka2}). Next, combining identities (\ref{bl3}), (\ref{ka2}) and (\ref{ka1}), one has: $$\begin{array}{rcl}L_\xi h&=&\nabla_\xi h+2\phi h+2\phi h^2\\&=&-\mu \phi h+2\phi h+2\phi h^2\\ &=&-\mu \phi h+2\phi h+2\phi (\kappa -1)\phi^2\\&=&-\mu \phi h+2\phi h-2(\kappa -1)\phi\\&=&(2-\mu )\phi h+2(1-\kappa )\phi\end{array} $$ proving (\ref{eq6}). $\qed$ The structure of the tangent bundle of a weakly $(\kappa ,\mu )$-space is described by the following theorem: \begin{Theorem} Let $(M^{2n+1},\eta , \xi ,\phi , g)$ be a weakly $(\kappa ,\mu )$ contact metric manifold. Then $\kappa \le 1$. If $\kappa =1$, then the structure is K-contact.
If $\kappa <1$, then the tangent bundle $TM$ decomposes into three mutually orthogonal distributions $D(0)$, $D(\lambda )$ and $D(-\lambda )$, the eigenbundles determined by the eigenspaces of the tensor $h$, where $\lambda =\sqrt{1-\kappa }$. \end{Theorem} \proof Clearly, $\kappa =1$ is exactly the K-contact condition. Suppose $\kappa <1$. Since $h\xi =0$ and $h$ is symmetric, it follows from identity (\ref{ka1}), Proposition \ref{prop4}, ($h^2=(\kappa -1)\phi^2$), that the restriction $h_{|D}$ of $h$ to the contact subbundle $D$ has eigenvalues $\lambda =\sqrt{1-\kappa}$ and $-\lambda$. By $D(\lambda )$, $D(-\lambda )$ and $D(0)$ we denote the corresponding eigendistributions. If $X\in D(\lambda )$, then $h\phi X=-\phi hX=-\lambda\phi X$. Thus $\phi X\in D(-\lambda )$, which shows that the three distributions above are mutually orthogonal. $\qed$ To shed some light on the difference between weak $(\kappa ,\mu )$ and strong $(\kappa ,\mu )$-spaces, we propose a weak semi-symmetry condition. We say that a contact metric space $(M,\eta ,\xi ,\phi , g)$ is weakly semi-symmetric if $R(X,\xi )R=0$ for all tangent vectors $X$, where $R$ is the curvature operator. We will prove the following: \begin{Theorem}\label{Theo2} Let $(M,\eta ,\xi ,\phi , g)$ be a weakly semi-symmetric contact metric weakly $(\kappa ,0)$-space. Then $(M,\eta ,\xi ,\phi ,g )$ is a strongly $(\kappa ,0)$-space. \end{Theorem} \proof The weak semi-symmetry condition means that $(R(X,\xi )R)(Y,\xi )\xi =0$ holds for any tangent vectors $X$ and $Y$.
Extending $Y$ into a local vector field, we have: $$\begin{array}{rcl} 0&=&R(X,\xi )R(Y,\xi )\xi-R(R(X,\xi )Y,\xi )\xi -R(Y,R(X,\xi )\xi )\xi -\\&&R(Y,\xi )R(X,\xi )\xi\\ &=&R(X,\xi )(\kappa (Y-\eta (Y)\xi ))-\kappa (R(X,\xi )Y-\eta (R(X,\xi )Y)\xi )-\\&&R(Y,\kappa (X-\eta (X)\xi ))\xi -R(Y,\xi )(\kappa (X-\eta (X)\xi ))\\ &=&\kappa R(X,\xi )Y-\kappa \eta (Y)R(X,\xi )\xi-\kappa R(X,\xi )Y+\kappa\eta (R(X,\xi)Y)\xi -\\&&\kappa R(Y,X)\xi +\kappa\eta (X)R(Y,\xi )\xi -\kappa R(Y,\xi )X+\kappa\eta (X)R(Y,\xi )\xi\\ &=&-\kappa^2\eta (Y)X-\kappa g(R(X,\xi )\xi ,Y)\xi -\kappa R(Y,X)\xi +\kappa^2\eta (X)Y-\\&&\kappa R(Y,\xi )X+\kappa^2\eta (X)Y-\kappa^2\eta (X)\eta (Y)\xi\\&=&-\kappa^2\eta (Y)X-\kappa g(\kappa (X-\eta (X)\xi ),Y)\xi -\kappa R(Y,X)\xi +2\kappa^2\eta (X)Y-\\&&\kappa R(Y,\xi )X-\kappa^2\eta (X)\eta (Y)\xi\\0&=&-\kappa^2\eta (Y)X-\kappa^2 g(X,Y)\xi -\kappa R(Y,X)\xi +2\kappa^2\eta (X)Y-\kappa R(Y,\xi )X\end{array} $$ Equation \begin{equation}\label{w1} -\kappa^2\eta (Y)X-\kappa^2 g(X,Y)\xi -\kappa R(Y,X)\xi +2\kappa^2\eta (X)Y-\kappa R(Y,\xi )X=0\end{equation} is valid for any $X$ and $Y$. Exchanging $X$ and $Y$ leads to \begin{equation}\label{w2}-\kappa^2\eta (X)Y-\kappa^2g(Y,X)\xi -\kappa R(X,Y)\xi +2\kappa^2\eta (Y)X-\kappa R(X,\xi )Y=0\end{equation} Subtracting equation (\ref{w1}) from equation (\ref{w2}), we obtain: \begin{equation}3\kappa^2 (\eta (Y)X-\eta (X)Y)+\kappa (R(Y,X)\xi -R(X,Y)\xi )+\kappa (R(Y,\xi )X-R(X,\xi )Y)=0\label{w3}\end{equation} By the first Bianchi identity, $R(Y,\xi )X-R(X,\xi )Y=R(Y,X)\xi$ holds. Incorporating this identity into (\ref{w3}), we obtain the following: \begin{equation} 3 \kappa^2(\eta (Y)X-\eta (X)Y)+3\kappa R(Y,X)\xi =0\end{equation} which implies the strong $(\kappa ,0)$ condition $$R(X,Y)\xi =\kappa (\eta (Y)X-\eta (X)Y)$$ $\qed$ Theorem \ref{Theo2} applies to the case $\kappa =1$ and has the following interesting corollary: \begin{cor} A weakly semi-symmetric K-contact manifold is Sasakian.
\end{cor} \proof In the $\kappa =1$ case, the strong $(\kappa ,0)$ condition is exactly the Sasakian condition. $\qed$ Weakly $(\kappa ,\mu )$-structures with $\kappa =1$ (the K-contact ones) are D-homothetically fixed. A non-K-contact weakly $(\kappa ,\mu )$ structure cannot be D-homothetically deformed into a K-contact one. Weakly $(\kappa ,\mu )$-structures with $\mu =2$ are also D-homothetically fixed. As a consequence, a weakly $(\kappa , \mu )$ structure with $\mu \neq 2$ cannot be deformed into one with $\mu =2$ either. Existence of these homothetically fixed $(\kappa ,\mu )$ structures depends on an invariant that was first introduced by Boeckx for strongly $(\kappa ,\mu )$ structures in \cite{BOE}. \section{The Boeckx invariant} The Boeckx invariant $I_M$ of a weakly $(\kappa ,\mu )$ contact space is defined by $$I_M=\frac{1-\frac{\mu }{2}}{\sqrt{1-\kappa}}=\frac{1-\frac{\mu}{2}}{\lambda}.$$ $I_M$ is a D-homothetic invariant. Any two D-homothetically related weakly $(\kappa ,\mu )$-structures have the same Boeckx invariant. The following lemma is crucial in proving existence of $D$-homothetically fixed $(\kappa ,\mu )$-structures. \bl\label{lemma1} Let $(M,\eta , \xi ,\phi , g )$ be a non-K-contact, weakly $(\kappa ,\mu )$ space.\begin{itemize} \item[(i)] If $I_M>1$, then $2-\mu -\sqrt{1-\kappa}>0$ and $2-\mu +\sqrt{1-\kappa}>0$. \item[(ii)] If $I_M<-1$, then $2-\mu +\sqrt{1-\kappa}<0$ and $2-\mu -\sqrt{1-\kappa}<0.$ \item[(iii)] $|I_M|<1$ if and only if $0<2\lambda +2-\mu$ and $0<2\lambda +\mu -2$ \end{itemize} \el \proof (i). Suppose $I_M>1$. Then $1-\frac{\mu}{2}>\sqrt{1-\kappa}$ and $\mu <2$. $$\begin{array}{rcr}1-\frac{\mu}{2} >\sqrt{1-\kappa}&\Rightarrow&2-\mu >2\sqrt{1-\kappa}\\&\Rightarrow& 2-\mu >\sqrt{1-\kappa} ~and~ 2-\mu >-\sqrt{1-\kappa}\\&\Rightarrow&2-\mu-\sqrt{1-\kappa}>0~and~2-\mu +\sqrt{1-\kappa}>0\end{array} $$ (ii). Suppose $I_M<-1$. Then $1-\frac{\mu }{2}<-\sqrt{1-\kappa}$ and $\mu >2$.
$$\begin{array}{rcr} 1-\frac{\mu}{2}<-\sqrt{1-\kappa}&\Rightarrow&2-\mu <-2\sqrt{1-\kappa} ~and~ 2-\mu <\sqrt{1-\kappa}\\&\Rightarrow&2-\mu <-\sqrt{1-\kappa}~and~ 2-\mu <\sqrt{1-\kappa}\\&\Rightarrow&2-\mu +\sqrt{1-\kappa}<0~and~2-\mu -\sqrt{1-\kappa}<0\end{array} $$ (iii). $|I_M|<1$ if and only if $-1<\frac{1-\frac{\mu}{2}}{\lambda}<1$. Equivalently $$-1<\frac{2-\mu}{2\lambda}<1~~and~~-1<\frac{\mu -2}{2\lambda}<1$$ Thus $$-2\lambda <2-\mu <2\lambda~~and~~-2\lambda <\mu -2<2\lambda$$ Or, $$0<2\lambda +2-\mu ~~and~~0<2\lambda +\mu -2$$ $\qed$ \section{D-homothetically fixed structures on weakly $(\kappa ,\mu )$-spaces} \subsection{K-contact structures on weakly $(\kappa ,\mu )$ spaces} We have pointed out that D-homothetic deformations of non-K-contact weakly $(\kappa ,\mu )$ structures remain non-K-contact. However, on $(\kappa ,\mu )$-spaces with large Boeckx invariant, K-contact structures coexist with $(\kappa ,\mu )$ structures. \bt \label{theo2}Let $(M,\eta , \xi , \phi ,g )$ be a non-K-contact, weakly $(\kappa ,\mu )$-space whose Boeckx invariant $I_M$ satisfies $|I_M|>1$.
Then, $M$ admits a K-contact structure $(M,\eta , \xi, \overline{\phi}, \overline{g})$ compatible with the contact form $\eta$.\et \proof We define tensor fields $\overline{\phi}$ and $\overline{g}$ by \begin{equation}\label{def1}\overline{\phi}=\frac{\epsilon}{(1-k)\sqrt{(2-\mu )^2-4(1-k)}}(L_\xi h\circ h)\end{equation} $$\overline{g}=-\frac{1}{2}d\eta (., \overline{\phi}.)+\eta\otimes\eta$$ where $$\epsilon=\left\{\begin{array}{ll}+1&if~I_M>0\\-1&if~I_M<0\end{array}\right.$$ From the formula $h^2=-(1-k)\phi^2$ and $L_\xi h=(2-\mu )\phi h+2(1-k)\phi$ in Proposition \ref{prop4}, we obtain $$\begin{array}{rcl} (L_\xi h\circ h)^2&=&(2-\mu )^2(1-k)^2\phi^2-4(1-k)^2\phi^2h^2\\ &=&(1-k)^2((2-\mu )^2-4(1-k))\phi^2 \end{array} $$ That is: $$(L_\xi h\circ h)^2=\lambda^4\alpha (-Id+\eta\otimes \xi )$$ where $\lambda =\sqrt{1-k}$ and $\alpha =(2-\mu )^2-4(1-k).$ One sees that, if $\alpha >0$, then $\overline{\phi}=\frac{\epsilon}{\lambda^2\sqrt\alpha}(L_\xi h\circ h)$ defines an almost complex structure on the contact subbundle. Notice also that $\alpha >0$ is equivalent to $|I_M|>1$. We will show that $\overline{\phi}$ is $\xi$ invariant. For that, it suffices to show that the Lie derivative of $L_\xi h\circ h$ vanishes in the $\xi$ direction. $$\begin{array}{rcl}L_\xi (L_\xi h\circ h)&=&L_\xi ((2-\mu )(1-k)\phi +2(1-k)\phi h)\\&=&2(2-\mu )(1-k )h+4(1-k)h^2+2(1-k)[(2-\mu )\phi^2h+\\&&2(1-k)\phi^2]\\ &=&2(2-\mu )(1-k)h+4(1-k)h^2+2(1-k)(2-\mu )\phi^2h+\\&&4(1-k)^2\phi^2\\ &=&-4(1-k)^2\phi^2+4(1-k)^2\phi^2 =0 \end{array} $$ Next, we will show that $\overline{g}=-\frac{1}{2}d\eta (., \overline{\phi}. )+\eta\otimes\eta$ is an adapted Riemannian metric for the structure tensors $(\eta , \overline{\xi}, \overline{\phi})$. 
That is, $\overline{g}$ is a bilinear, symmetric, positive definite tensor with $$d\eta =2 \overline {g}(., \overline{\phi}.).$$ Writing $\alpha =(2-\mu )^2-4(1-\kappa )$ as above, the definition of $\overline{\phi}$ reduces to $\overline{\phi}=\frac{\epsilon}{\sqrt\alpha}((2-\mu )\phi +2\phi h)$, so that, for arbitrary tangent vectors $X$ and $Y$: $$\begin{array}{rcl} \overline{g}(X,Y)&=& -\frac{1}{2}d\eta (X,\overline{\phi}Y)+\eta (X)\eta (Y)\\ &=&-\frac{\epsilon}{2\sqrt\alpha}d\eta (X,((2-\mu )\phi +2\phi h)Y)+\eta (X)\eta (Y)\\&=& \frac{\epsilon (2-\mu )}{\sqrt\alpha}g(X,Y)+\frac{2\epsilon}{\sqrt\alpha}g(X,hY)+(1-\frac{\epsilon (2-\mu )}{\sqrt\alpha})\eta(X)\eta(Y)\\&=&\frac{\epsilon (2-\mu )}{\sqrt\alpha}g(Y,X)+\frac{2\epsilon}{\sqrt\alpha}g(Y,hX)+(1-\frac{\epsilon (2-\mu )}{\sqrt\alpha})\eta(Y)\eta(X)\\&=&\overline{g}(Y,X) \end{array} $$ proving symmetry of $\overline{g}$. We used the symmetry of $h$ in the step before the last. For $\overline{g}$'s positive definiteness, first observe that $\overline{g}(\xi ,\xi )=1>0$. Then for any non-zero tangent vector $X$ in the contact bundle $D$, using the definition of $\overline{\phi }$ in (\ref{def1}) and the formula for $L_\xi h$ from identity (\ref{eq6}) in Proposition \ref{prop4}, we have: $$\begin{array}{rcl}\overline{g}(X,X)&=&-\frac{1}{2}d\eta (X,\overline{\phi }X)\\&=&-\frac{\epsilon (2-\mu )}{2\sqrt{(2-\mu )^2-4(1-\kappa )}}d\eta (X,\phi X)-\frac{\epsilon}{\sqrt{(2-\mu )^2-4(1-\kappa )}}d\eta (X,\phi hX)\\ &=&\frac{\epsilon (2-\mu )}{\sqrt{(2-\mu )^2-4(1-\kappa )}}g(X,X)+\frac{2\epsilon}{\sqrt{(2-\mu )^2-4(1-\kappa )}}g(X,hX)\\&=& \frac{\epsilon}{\sqrt{(2-\mu )^2-4(1-\kappa )}}((2-\mu )g(X,X)+2g(X,hX)) \end{array} $$ If $X\in D(\lambda )$, then $$\overline{g}(X,X)=\frac{\epsilon}{\sqrt{(2-\mu )^2-4(1-\kappa )}}((2-\mu )+2\sqrt{1-\kappa })g(X,X).$$ By Lemma \ref{lemma1}, (i), (ii), the inequality $$\epsilon ((2-\mu )+2\sqrt{1-\kappa })>0$$ holds when $|I_M|>1$. Therefore $\overline{g}(X,X)>0$.
In the same way, if $X\in D(-\lambda )$, then $$\overline{g}(X,X)=\frac{\epsilon}{\sqrt{(2-\mu )^2-4(1-\kappa )}}\left((2-\mu )-2\sqrt{1-\kappa }\right)g(X,X)$$ which is also $>0$ by Lemma \ref{lemma1}, (i) and (ii). This concludes the proof of $\overline{g}$'s positivity. We easily verify that $\overline{g}$ is an adapted metric. $$\begin{array}{rcl}2\overline{g}(X,\overline{\phi}Y)&=&-d\eta (X,\overline{\phi}^2Y)\\&=&-d\eta (X,-Y+\eta (Y)\xi )\\&=&d\eta (X,Y).\end{array} $$ $\qed$ \noindent{\bf Remark:} As a consequence of Theorem \ref{theo2}, contact forms on compact, weakly $(\kappa ,\mu )$-spaces with $|I_M|>1$ admit associated K-contact structures, hence satisfy Weinstein's conjecture about the existence of closed Reeb orbits (see \cite{RUK}). On weakly $(\kappa ,\mu )$-spaces with small Boeckx invariant, it turns out that $(\kappa ,2)$ structures coexist with $(\kappa ,\mu\ne 2)$ structures. This will be established in the next subsection. \subsection{Contact metric weakly $(\kappa , 2 )$-spaces} Given a non-K-contact, weakly $(\kappa ,\mu )$-space $(M,\eta ,\xi ,\phi ,g)$, we define the D-homothetic invariant tensor field $$\tilde{\phi}=\frac{1}{\sqrt{1-k}}h$$ \begin{Lemma}\label{lem2} Denoting by $$\tilde{h}=\frac{1}{2}L_\xi\tilde{\phi }=\frac{1}{2\sqrt{1-k}}L_\xi h,$$ the following identities are satisfied: \begin{equation}\tilde{h}=\frac{1}{2\sqrt{1-k}}((2-\mu )\phi h+2(1-k)\phi )\label{tilde1}\end{equation} \begin{equation}\tilde{h}^2=\left((1-k)-(1-\frac{\mu}{2})^2\right)\phi^2 \label{tilde2}\end{equation} \end{Lemma} \proof From the third identity in Proposition \ref{prop33}, combined with identity (\ref{eq6}), Proposition \ref{prop4}, we get $$2(\sqrt{1-k})\tilde{h}=L_\xi h=(2-\mu )\phi h+2(1-\kappa )\phi $$ so $$\tilde{h}=\frac{1}{2\sqrt{1-k}}\left((2-\mu )\phi h+2(1-\kappa )\phi\right)$$ which is (\ref{tilde1}). The proof of (\ref{tilde2}) is a straightforward calculation. $\qed$ \noindent {\bf Remark}: If $|I_M|<1$, then $1-k-(1-\frac{\mu}{2})^2>0$.
Therefore, identity (\ref{tilde2}) suggests that $\tilde{h}$ can be used to define a complex structure on the contact subbundle.\vskip 12pt Define the tensor field $\phi_1$ by: $$\phi_1=\frac{1}{\sqrt{1-k-(1-\frac{\mu}{2})^2}}\tilde{h}=\frac{1}{\sqrt{1-k-(1-\frac{\mu}{2})^2}}\frac{1}{2\sqrt{1-k}}((2-\mu)\phi h+2(1-k)\phi )$$ \begin{pro} \label{prop5} The tensor field $\phi_1$ satisfies $$\phi_1^2=-I+\eta\otimes \xi$$ \begin{equation} h_1=\frac{1}{2}L_\xi\phi_1=(\sqrt{1-I_M^2})h\label{phi11}\end{equation}\end{pro} \proof The identity $\phi_1^2=\phi^2=-I+\eta\otimes\xi$ follows from Lemma \ref{lem2}, (\ref{tilde2}). As for identity (\ref{phi11}), we proceed as follows: $$\begin{array}{rcl}h_1=\frac{1}{2}(L_\xi\phi_1)&=&\frac{1}{4\sqrt{(1-\kappa )(1-\kappa -(1-\frac{\mu}{2})^2)}} L_\xi ((2-\mu )\phi h+2(1-\kappa )\phi )\\&=&\frac{1}{4 \sqrt{(1-\kappa )(1-\kappa -(1-\frac{\mu}{2})^2)}}[(2-\mu)((L_\xi\phi )h+\phi L_\xi h)+\\&&2(1-\kappa )L_\xi \phi]\\&=& \frac{1}{4 \sqrt{(1-\kappa )(1-\kappa -(1-\frac{\mu}{2})^2)}}[(2-\mu )(2h^2+\phi((\mu -2)h\phi+\\&&2(1-\kappa )\phi )) +4(1-\kappa )h]\\ &=&\frac{1}{4 \sqrt{(1-\kappa )(1-\kappa -(1-\frac{\mu}{2})^2)}}[2(2-\mu )h^2-(2-\mu )^2\phi h\phi +\\&&2(2-\mu )(1-\kappa )\phi^2+4(1-\kappa )h]\\&=&\frac{1}{4 \sqrt{(1-\kappa )(1-\kappa -(1-\frac{\mu}{2})^2)}}[2(2-\mu )(\kappa -1)\phi^2-(2-\mu )^2h+\\&&2(2-\mu )(1-\kappa )\phi^2 +4(1-\kappa )h]\\&=& \frac{1}{4 \sqrt{(1-\kappa )(1-\kappa -(1-\frac{\mu}{2})^2)}}[4(1-\kappa )-(2-\mu )^2]h\\&=&\sqrt{\frac{4(1-\kappa )-(2-\mu )^2}{4(1-\kappa )}}h=(\sqrt{1-I_M^2})h \end{array} $$ $\qed$ As pointed out earlier, when a D-homothetic deformation is applied to a weakly $(\kappa , \mu )$ structure with $\mu =2$, the $\mu$ value remains the same, as is seen from one of the formulas (\ref{km}): $$\overline{\mu}=\frac{\mu +2a-2}{a}$$ As a consequence, weakly $(\kappa ,2 )$ structures cannot be obtained through D-homothetic deformations.
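Indeed, solving $\overline{\mu}=2$ for the original parameter shows that $\mu =2$ is a fixed point of D-homothetic deformations: $$\overline{\mu}=2 \Longleftrightarrow \frac{\mu +2a-2}{a}=2 \Longleftrightarrow \mu +2a-2=2a \Longleftrightarrow \mu =2,$$ so a deformation yields $\overline{\mu}=2$ only if one starts from $\mu =2$.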
In the case $|I_M|<1$, we prove the following theorem: \bt \label{theo4} Let $(M, \eta , \xi , \phi , g)$ be a non-K-contact, weakly $(\kappa , \mu )$-space with Boeckx invariant $I_M$ satisfying $ |I_M|<1$. Then, there is a weakly $(\kappa_1 ,\mu_1 )$ structure $(M,\eta , \xi , \phi_1, g_1 )$ where $\mu_1=2$ and $\kappa_1=\kappa +(1-\frac{\mu}{2})^2$.\et \proof Define $g_1$ by $$g_1(X,Y)=-\frac{1}{2}d\eta (X,\phi_1 Y)+\eta (X)\eta (Y).$$ We will show that $g_1$ is a Riemannian metric adapted to $\phi_1$ and $\eta$, i.e. $$d\eta =2g_1(.,\phi_1 .)$$ For any tangent vectors $X$ and $Y,$ $$\begin{array}{rcl}g_1(X,Y)&=&-\frac{1}{2}\frac{1}{\sqrt{1-\kappa -(1-\frac{\mu}{2})^2}}d\eta (X,\tilde{h}Y)+\eta (X)\eta (Y)\\&=&-\frac{1}{4\sqrt{1-\kappa}\sqrt{1-\kappa -(1-\frac{\mu}{2})^2}}\left(d\eta (X,(2-\mu)\phi hY)+d\eta (X,2(1-\kappa )\phi Y)\right)+\\&&\eta (X)\eta (Y)\\&=&-\frac{1}{4\sqrt{1-\kappa}\sqrt{1-\kappa -(1-\frac{\mu}{2})^2}}\left(2g(X,(2-\mu )\phi^2hY)+2g(X, 2(1-\kappa )\phi^2Y)\right)+\\&&\eta (X)\eta (Y)\\&=&-\frac{1}{4\sqrt{1-\kappa}\sqrt{1-\kappa -(1-\frac{\mu}{2})^2}}\left(2g((2-\mu )\phi^2hX,Y)+2g(2(1-\kappa )\phi^2X,Y)\right)+\\&&\eta (Y)\eta (X)\\&=& -\frac{1}{2}\frac{1}{\sqrt{1-\kappa -(1-\frac{\mu}{2})^2}}d\eta (Y,\tilde{h}X)+\eta (Y)\eta (X)\\&=& g_1(Y,X)\end{array} $$ proving that $g_1$ is a symmetric tensor. To prove positivity of $g_1$, first observe that $g_1(\xi ,\xi )=1>0$.
Next, for any $X$ in the contact distribution, $$\begin{array}{rcl}g_1(X,X)&=&-\frac{1}{2}d\eta (X,\frac{1}{2\sqrt{1-\kappa}\sqrt{1-\kappa -(1-\frac{\mu}{2})^2}}((2-\mu )\phi hX+2(1-\kappa ) \phi X))\\&=&-\frac{(2-\mu )}{4\sqrt{1-\kappa }\sqrt{1-\kappa -(1-\frac{\mu}{2})^2}}d\eta (X,\phi hX)-\frac{(1-\kappa )}{2\sqrt{1-\kappa}\sqrt{1-\kappa -(1-\frac{\mu}{2})^2}}d\eta (X,\phi X)\\&=&\frac{1}{2\sqrt{1-\kappa }\sqrt{1-\kappa -(1-\frac{\mu}{2})^2}}((2-\mu )g(X,hX)+2(1-\kappa )g(X,X))\end{array} $$ \begin{equation}\label{fr1}g_1(X,X)=\frac{1}{2\sqrt{1-\kappa }\sqrt{1-\kappa -(1-\frac{\mu}{2})^2}}((2-\mu )g(X,hX)+2(1-\kappa )g(X,X))\end{equation} If $X\in D(\lambda )$, then (\ref{fr1}) becomes $$g_1(X,X)=\frac{1}{2\sqrt{1-\kappa -(1-\frac{\mu}{2})^2}}(2\sqrt{1-\kappa }+(2-\mu ))g(X,X)>0 $$ The last inequality follows from Lemma \ref{lemma1}, (iii). If $X\in D(-\lambda )$, then (\ref{fr1}) becomes $$g_1(X,X)=\frac{1}{2\sqrt{1-\kappa-(1-\frac{\mu}{2})^2}} (2\sqrt{1-\kappa} -(2-\mu ))g(X,X)>0$$ also following from Lemma \ref{lemma1}, (iii). We now prove that $g_1$ is an adapted metric. Directly from the definition of $g_1$, $$\begin{array}{rcl}2g_1(X, \phi_1Y)&=&-d\eta (X,\phi^2_1Y)\\&=&d\eta (X,Y) \end{array} $$ $\qed$ Finally, we show that the structure $(M,\eta ,\xi ,\phi_1, g_1)$ is a weakly $(\kappa_1 ,2 )$-structure. By Proposition \ref{prop5}, (\ref{phi11}), the positive eigenvalue of $h_1$ is $$\lambda_1=\sqrt{1-I_M^2}\lambda =\sqrt{(1-\kappa )(1-I_M^2)}=\sqrt{1-\kappa -(1-\frac{\mu}{2})^2}.$$ Since $(\eta ,\xi , \phi_1, g_1)$ is a contact metric structure, identity (\ref{bl1}), Proposition \ref{prop33} holds.
$$\overline{\nabla}_\xi h_1=\phi_1-\phi_1l_1-\phi_1h_1^2.$$ For any tangent vector field $X$, one has $$\phi_1X-\phi_1l_1X-\phi_1h_1^2X=(\overline{\nabla}_\xi h_1)X$$ $$\begin{array}{rcl}\phi_1X-\phi_1l_1X-\lambda_1^2\phi_1X&=&\overline{\nabla}_\xi (h_1X)-h_1\overline{\nabla}_\xi X\\&=&\overline{\nabla}_{h_1X}\xi +[\xi , h_1X]-h_1(\overline{\nabla}_X\xi +[\xi ,X])\\&=&-\phi_1h_1X-\phi_1h_1^2X+(L_\xi h_1)X+h_1[\xi ,X]\\&&-h_1(-\phi_1X-\phi_1h_1X+[\xi ,X])\\ \phi_1X-\phi_1l_1X-\lambda_1^2\phi_1X&=&-2\phi_1h_1X-2\lambda_1^2\phi_1X+(L_\xi h_1)X \end{array} $$ Applying $\phi_1$ on both sides of the above identity, one has $$\phi_1^2X+l_1X-\lambda_1^2\phi_1^2X=2h_1X-2\lambda_1^2\phi_1^2X+\phi_1(L_\xi h_1)X$$ Solving for the tensor field $l_1$ gives \begin{equation}\label{m2}l_1X=2h_1X-(1+\lambda_1^2)\phi_1^2X+(\phi_1L_\xi h_1)X\end{equation} From Proposition \ref{prop5}, we know $L_\xi h_1=\sqrt{1-I_M^2}L_\xi h$ and $L_\xi h=(\mu -2)h\phi +2(1-\kappa )\phi$ from Proposition \ref{prop4}. Also $\phi_1=\frac{1}{2\sqrt{(1-\kappa )(1-\kappa -(1-\frac{\mu}{2})^2)}}(L_\xi h)$. A direct calculation shows that $$\begin{array}{rcl} \phi_1L_\xi h_1&=&\frac{\sqrt{1-I_M^2}}{2\sqrt{(1-\kappa )(1-\kappa -(1-\frac{\mu}{2})^2)}}(L_\xi h)^2\\ &=&\frac{\sqrt{1-I_M^2}}{2\sqrt{(1-\kappa )(1-\kappa -(1-\frac{\mu}{2})^2)}}(1-\kappa )[4(1-\kappa )-(\mu -2)^2]\phi^2\\&=&\frac{1}{2}[4(1-\kappa )-(\mu -2)^2]\phi^2\end{array} $$ Reporting this in identity (\ref{m2}), we get: $$\begin{array}{rcl}l_1X&=&2h_1X-(1+\lambda_1^2)\phi_1^2X+\frac{1}{2}[4(1-\kappa )-(2-\mu )^2]\phi^2X\\&=&2h_1X-\left(2-\kappa -(1-\frac{\mu}{2})^2\right)(-X+\eta (X)\xi )+\\&&\left(2(1-\kappa )-\frac{(2-\mu )^2}{2}\right)(-X+\eta (X)\xi )\\&=&2h_1X+\left(\kappa +\frac{(2-\mu )^2}{4}\right)(X-\eta (X)\xi) \end{array} $$ which is the $(\kappa_1, \mu_1)$ condition with $\mu_1=2$ and $\kappa_1=\kappa +\frac{(2-\mu)^2}{4}$. $\qed$
\section{Introduction} The extragalactic background light (EBL) is the diffuse and isotropic radiation field from ultraviolet to far infrared wavelengths \citep[see e.g.][for a review]{2001ARA&A..39..249H,2005PhR...409..361K}. It originates from the starlight integrated over all epochs and starlight emission reprocessed by interstellar dust. These two distinct components lead to two maxima in the spectral energy distribution (SED) of the EBL, the first at $\sim$ 1\,\murm m (starlight) and the second at $\sim$ 100\,\murm m (dust). Narrow spectral features like absorption lines are smeared out in the integration over redshift leading to a smooth shape of the EBL at $z=0$. Further contributions may come from diffuse emission from galaxy clusters \citep{2007ApJ...671L..97C}, unresolved active galactic nuclei \citep{2006A&A...451..443M}, Population III stars \citep[e.g.][]{2002MNRAS.336.1082S,2009A&A...498...25R}, or exotic sources like dark matter powered stars in the early universe \citep{MaurerDS}. The SED of the EBL at $z=0$ comprises information about the star and galaxy formation rates and the dust content of galaxies. Direct measurements of the EBL are, however, impeded, especially in the infrared, by foreground emission such as the zodiacal light \citep{1998ApJ...508...25H}. Therefore, upper and lower limits are often the only available information about the EBL density. Lower limits are derived from integrated galaxy number counts e.g. by the \textit{Hubble Space Telescope} in the optical \citep{2000MNRAS.312L...9M} and the \textit{Spitzer} telescope in the infrared \citep{2004ApJS..154...39F}. Several authors have modeled the EBL in the past \citep[e.g.][]{2005AIPC..745...23P,2006ApJ...648..774S,2008A&A...487..837F,2010A&A...515A..19K,2011MNRAS.410.2556D}. Although the approaches forecast different EBL densities at $z=0$, the most recent models more or less agree on the overall EBL shape. 
The observations of very high energy (energy $E \gtrsim 100$\,GeV; VHE) $\gamma$-rays~from extragalactic sources with imaging atmospheric Cherenkov telescopes (IACTs) have opened a new window to constrain the EBL density. Most of the extragalactic $\gamma$-ray sources are active galactic nuclei (AGN), especially blazars \citep[see e.g.][]{1995PASP..107..803U}. The $\gamma$-rays~from these objects are attenuated by the pair production mechanism: $\gamma_\mathrm{VHE} + \gamma_\mathrm{EBL} \rightarrow e^+ + e^-$ \citep{1962Nikishov,1966PhRvL..16..479J,1967PhRv..155.1404G}. If assumptions are made about the properties of the intrinsic blazar spectrum, a comparison with the observed spectrum allows one to place upper limits on the EBL intensity \citep[e.g.][]{1992ApJ...390L..49S}. In this context, the spectra of Markarian (Mkn) 501 during an extraordinary flare \citep{1999A&A...349...11A} and of the distant blazar H\,1426+482 \citep{2003A&A...403..523A} resulted in the first constraints on the EBL density from mid to far infrared (MIR and FIR) wavelengths. With the new generation of IACTs, limits were derived from the spectra of 1ES\,1101-232 and H\,2356-309 \citep{2006Natur.440.1018A} and 1ES\,0229+200 \citep{2007A&A...475L...9A} in the near infrared (NIR), and in the optical from the MAGIC observation of 3C\,279 \citep{2008Sci...320.1752M}. \citet[][henceforth MR07]{2007A&A...471..439M} use a sample of all blazars known at that time and test a large number of different EBL shapes to derive robust constraints over a large wavelength range. The authors exclude EBL densities that produce VHE spectra, characterized by $\Difft{N}{E} \propto E^{-\Gamma}$, with $\Gamma < \Gamma_\mathrm{limit}$ (with $\Gamma_\mathrm{limit} = 1.5$ for realistic and $\Gamma_\mathrm{limit} = 2/3$ for extreme scenarios) or an exponential pile up at highest energies.
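As an illustrative sketch (not code from any of the cited analyses; the function name and default threshold are ours, following the description above), the MR07-style hardness test amounts to a simple comparison of the reconstructed intrinsic photon index against a theoretical limit:

```python
# Sketch of the MR07-style hardness criterion: an EBL shape is rejected if
# de-absorbing the observed spectrum with it yields an intrinsic photon
# index Gamma harder (i.e. smaller) than the assumed theoretical limit.

GAMMA_LIMIT_REALISTIC = 1.5      # realistic scenario of MR07
GAMMA_LIMIT_EXTREME = 2.0 / 3.0  # extreme scenario of MR07

def ebl_shape_excluded(gamma_intrinsic, gamma_limit=GAMMA_LIMIT_REALISTIC):
    """Return True if the intrinsic index violates Gamma >= Gamma_limit."""
    return gamma_intrinsic < gamma_limit

print(ebl_shape_excluded(1.2))   # True: harder than Gamma_limit = 1.5
print(ebl_shape_excluded(1.8))   # False: allowed
```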
With the advent of the large area telescope (LAT) on board the \emph{Fermi} satellite \citep{2009ApJ...697.1071A} and its unprecedented sensitivity at high energies ($100\unit{MeV} \lesssim E \lesssim 100\unit{GeV}$; HE), further possibilities arose to confine the EBL density. Bounds can be derived either by considering solely \emph{Fermi}-LAT~observations of AGN and gamma-ray bursts \citep{2010ApJ...723.1082A,2010A&A...520A..34R} or by combining HE with VHE spectra \citep[e.g.][]{2010ApJ...714L.157G,2011ApJ...733...77OO}. It has also been proposed that the \emph{Fermi}-LAT~can, in principle, measure the EBL photons upscattered by electrons in lobes of radio galaxies directly \citep{2008ApJ...686L...5G}. Attenuation limits can also be estimated by modeling the entire blazar SED in order to forecast the intrinsic VHE emission \citep{2002MNRAS.336..721K,2010ApJ...715L..16M}. In this paper, results from the recently published \emph{Fermi} two year catalog \citep[][henceforth 2FGL]{2FGL} together with a comprehensive VHE spectra sample are used to place upper limits on the EBL density. This approach relies on minimal assumptions about the intrinsic spectra. The VHE sample is composed of spectra measured with different instruments, thereby ensuring that the results are not influenced by the possible systematic bias of an individual instrument or observation. The article is organized as follows. In Section \ref{sec:grid} the calculation of the attenuation is presented in order to correct the observed spectra for absorption. The resulting intrinsic spectra are subsequently described with analytical functions. Section \ref{sec:excl} outlines in detail the different approaches to constrain the EBL before the selection of VHE spectra is addressed in Section \ref{sec:samples}. The combination of VHE and HE spectra of variable sources will also be discussed. The results are presented in Section \ref{sec:results} before concluding in Section \ref{sec:concl}. 
Throughout this work a standard $\Lambda$CDM cosmology is assumed with $\Omega_m = 0.3$, $\Omega_\Lambda = 0.7$, and $h=0.72$. \section{Intrinsic VHE gamma-ray spectra} \label{sec:grid} The intrinsic energy spectrum $\Difft{N_\mathrm{int}}{E}$ of a source at redshift $z_0$ at the measured energy $E$ differs from the observed spectrum $\Difft{N_\mathrm{obs}}{E}$ due to the interaction of source photons with the photons of the EBL which is most commonly expressed as \begin{equation} \Diff{N_\mathrm{obs}}{E} = \Diff{N_\mathrm{int}}{E} \times \exp\left[-\tau_\gamma(E,z_0)\right]\label{eqn:abs}. \end{equation} The strength of the attenuation at energy $E$ is given by the optical depth $\tau_\gamma$: a threefold integral over the distance $\ell$, the cosine $\mu$ of the angle between the photon momenta, and the energy $\epsilon$ of the EBL photons \citep[e.g.][]{2005ApJ...618..657D}, \begin{eqnarray} \tau_{\gamma}(E,z_0) &=&\nonumber\\ &{}&\hskip-30pt \int\limits_0^{z_0} \mathrm d \ell(z) \int\limits_{-1}^{+1} \mathrm d \mu \frac{1 - \mu}{2} \int\limits_{\epsilon^\prime_\mathrm{thr}}^{\infty}\mathrm d \epsilon^\prime n_\mathrm{EBL}(\epsilon^\prime, z) \sigma_{\gamma\gamma}(E^\prime,\epsilon^\prime,\mu). \label{eqn:tau} \end{eqnarray} The primed values correspond to the redshifted energies and $n_\mathrm{EBL}(\epsilon^\prime, z)$ denotes the comoving EBL photon number density. The threshold energy for pair production is given by $\epsilon_\mathrm{thr}^\prime = \epsilon_\mathrm{thr}(E^\prime,\mu)$ with $E^\prime = E(1+z)$. The cross section for pair production, $\sigma_{\gamma\gamma}$, is strongly peaked at a wavelength \citep[e.g.][]{2000A&A...359..419G} \begin{equation} \lambda_\ast = \frac{hc}{\epsilon_\ast} \approx 1.24 \left(\frac{E}{\mathrm{TeV}}\right)\,\murm\mathrm{m}\label{eqn:ppwave}, \end{equation} and, therefore, VHE $\gamma$-rays~predominantly interact with EBL photons from optical to FIR wavelengths. 
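For orientation, Eq. \ref{eqn:ppwave} can be evaluated numerically; the following is a minimal sketch (the function name is ours):

```python
def peak_ebl_wavelength_um(E_TeV):
    """EBL wavelength (micron) at which the pair-production cross section
    peaks for a gamma ray of energy E_TeV (in TeV): lambda* ~ 1.24 E/TeV."""
    return 1.24 * E_TeV

# A 1 TeV photon is mainly attenuated by ~1.2 micron (NIR) EBL photons,
# while a 20 TeV photon probes the MIR around ~25 micron.
print(peak_ebl_wavelength_um(1.0))
print(peak_ebl_wavelength_um(20.0))
```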
The comoving EBL photon density is described here by splines constructed from a grid in ($\lambda{,}\nu I_\nu$)-plane (see MR07 for further details). This ensures independence from EBL model assumptions and allows for a great variety of EBL shapes to be tested. Furthermore, the usage of splines drastically reduces the effort to compute the complete threefold integral of Eq. \ref{eqn:tau} numerically as shown in MR07. Each spline is defined by the choice of knot points and weights from the grid in the $(\lambda{,}\nu I_\nu)$-plane. The grid is bound by a minimum and a maximum shape, shown in Figure \ref{fig:grid-basic}. The setup of grid points is taken from MR07. The minimum shape tested is set to reproduce the lower limits from the galaxy number counts from \emph{Spitzer} \citep{2004ApJS..154...39F} while the maximum shape roughly follows the upper limits derived from measurements. To reduce the computational costs, the extreme cases of the EBL density in the optical and near infrared (NIR) considered by MR07 are not tested. Moreover, with current VHE spectra the EBL intensity is only testable up to a wavelength $\lambda \approx 100$\,\murm m, so no additional grid points beyond this wavelength are used. In total this range of knots and weights allows for 1,920,000 different EBL shapes. A much smaller spacing of the grid points is not meaningful as small structures are smeared out in the calculation of $\tau_\gamma$ and the EBL can be understood as a superposition of black bodies that are not arbitrarily narrow in wavelength \citep[MR07,][]{RauePhD}. \begin{figure}[tb] \centering \includegraphics[angle= 270, width = 1. \linewidth]{thegrid} \caption{ Upper panel: Grid in wavelength versus the energy density of the EBL used to construct the EBL shapes for testing (red bullets). Also shown are the minimum and maximum shape tested (solid lines) and the same for the grid of MR07 (blue triangles; dashed lines).
Lower panel: Minimum and maximum EBL shape tested versus EBL limits and measurements \citep[light gray symbols, see][and references therein]{2011arXiv1106.4384R}. } \label{fig:grid-basic} \end{figure} In most previous studies, no EBL evolution with redshift is assumed when computing EBL upper limits using VHE $\gamma$-ray observations. Neglecting the evolution leads to an overestimation of the optical depth between 10\,\% ($z = 0.2$) and 35\,\% ($z= 0.5$) \citep[][]{2008IJMPD..17.1515R} and, consequently, too rigid upper limits (see Appendix \ref{sec:evo}). In this study, the $z$ evolution is accounted for by a phenomenological ansatz \citep[e.g.][]{2008IJMPD..17.1515R}: the effective cosmological photon number density scaling is changed from $n_\mathrm{EBL} \propto (1 + z)^3$ to $n_\mathrm{EBL} \propto (1 + z)^{3-f_\mathrm{evo}}$. For a value of $f_\mathrm{evo}= 1.2$ a good agreement is found between this simplified approach and complete EBL model calculations for redshifts $z \lesssim 0.7$ \citep{2008IJMPD..17.1515R}. Including the redshift evolution of the EBL in general decreases the attenuation compared to the no-evolution case and, therefore, weaker EBL limits are expected. The intrinsic $\gamma$-ray spectrum for a given EBL shape and measured $\gamma$-ray spectrum is reconstructed by solving Eq. \ref{eqn:abs} for $\Difft{N_\mathrm{int}}{E}$. For a spectrum with $n$ energy bins the relation reads \begin{equation} \left(\Diff{N_\mathrm{int}}{E}\right)_i = \left(\Diff{N_\mathrm{obs}}{E}\right)_i \times \exp\left[{\tau_\gamma(E_i,z_0)}\label{eqn:deabs}\right], \quad i = 1,\ldots,n, \end{equation} where the energy of the logarithmic bin center is denoted by $E_i$.
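The two operations described above, the evolution-corrected density scaling and the bin-wise de-absorption of Eq. \ref{eqn:deabs}, can be sketched in a few lines (function names are ours):

```python
import math

F_EVO = 1.2  # phenomenological evolution parameter (Raue & Mazin 2008)

def evolution_suppression(z, f_evo=F_EVO):
    """Ratio of the evolving EBL density scaling (1+z)^(3-f_evo) to the
    no-evolution scaling (1+z)^3, i.e. (1+z)^(-f_evo)."""
    return (1.0 + z) ** (-f_evo)

def deabsorb(dnde_obs, tau):
    """Bin-wise intrinsic spectrum, Eq. (4):
    (dN_int/dE)_i = (dN_obs/dE)_i * exp(tau_i)."""
    return [f * math.exp(t) for f, t in zip(dnde_obs, tau)]

print(evolution_suppression(0.5))   # < 1: evolution dilutes the density
print(deabsorb([1.0, 0.5, 0.1], [0.1, 1.0, 3.0]))
```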
A systematic error is introduced by using $\tau_\gamma$ calculated for the energy at the bin center since, on the one hand, the attenuation can change dramatically within relatively wide energy bins and, on the other hand, the mean attenuation actually depends on the intrinsic spectral shape in the energy bin \citep{2009ApJ...691L..91S}. The introduced error is studied by comparing $\tau_\gamma$ with an averaged value of the optical depth over the highest energy bin for the spectra that are attenuated most. These spectra are described with an analytical function $f(E)$ (a power or broken power law, cf. Table \ref{tab:models}) and the averaged optical depth $\langle \tau_\gamma \rangle$ is found to be \begin{equation} \langle \tau_\gamma \rangle = \frac{\int\limits_{\Delta E}\tau_\gamma(E,z) f(E)\,\mathrm{d} E}{\int\limits_{\Delta E} f(E)\,\mathrm{d}E}. \end{equation} The results are summarized in Table \ref{tab:diffave}. The ratios $\langle\tau_\gamma\rangle/\tau_\gamma$ are close to, but always smaller than, one and the optical depth is overestimated by $<$ 5\,\%. Thus, the simplified approach adds marginally to the uncertainties of the upper limits. 
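The bin averaging used for this check can be reproduced with a few lines of numerical integration (a sketch using the trapezoidal rule; names are ours):

```python
def bin_averaged_tau(tau_of_E, f_of_E, E_lo, E_hi, n=1000):
    """Spectrum-weighted mean optical depth over one energy bin,
    <tau> = int tau(E) f(E) dE / int f(E) dE, via the trapezoidal rule."""
    h = (E_hi - E_lo) / n
    num = den = 0.0
    for i in range(n + 1):
        E = E_lo + i * h
        w = 0.5 if i in (0, n) else 1.0  # trapezoidal end-point weights
        num += w * tau_of_E(E) * f_of_E(E)
        den += w * f_of_E(E)
    return num / den

# For a soft spectrum f ~ E^-3 the weighted average is pulled below the
# value of tau at the bin center, as found for the spectra in Table 1.
print(bin_averaged_tau(lambda E: E, lambda E: E ** -3.0, 1.0, 2.0))
```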
\begin{table}[htb] \centering \caption{ Comparison between the optical depth at the logarithmic bin center of the highest energy bin and the averaged value over the bin width.} \label{tab:diffave} \begin{scriptsize} \begin{tabular}{c|ccc|ccc} \hline \hline \multirow{2}{*}{Source} & \multicolumn{3}{c|}{Minimum EBL shape tested} & \multicolumn{3}{c}{Maximum EBL shape tested} \\ {} & $\tau_\gamma$ & $\langle\tau_\gamma\rangle$ & $\fract{\langle\tau_\gamma\rangle}{\tau_\gamma}$ & $\tau_\gamma$ & $\langle\tau_\gamma\rangle$ & $\fract{\langle\tau_\gamma\rangle}{\tau_\gamma}$ \\ \hline 3C\,279 & 3.48 & 3.34 & 0.96 & 18.33 & 17.61 & 0.96 \\ H\,1426+428 & 2.54 & 2.53 & 0.99 & 12.61 & 12.48 & 0.99 \\ 1ES\,1101-232 & 2.69 & 2.68 & 1.00 & 13.62 & 13.57 & 1.00 \\ Mkn\,501 & 3.27 & 3.21 & 0.98 & 11.86 & 11.67 & 0.98 \\ \hline \end{tabular} \end{scriptsize} \tablefoot{ The HEGRA spectrum is used here for Mkn\,501, see Table \ref{tab:samples} for the references. } \end{table} The intrinsic spectra obtained by means of Eq. \ref{eqn:deabs} will be described by analytical functions in order to test the fit parameters for their physical feasibility. An analytical description of the spectrum is determined by fitting a series of functions listed in Table \ref{tab:models}. A $\chi^2$-minimization algorithm \citep[utilizing the MINUIT package routines, see][]{Minuit} is employed, starting with the first function of the table, a simple power law. The fit is not considered valid if the corresponding probability is $P_\mathrm{fit}(\chi^2) < 0.05$. In this case the next function with more model parameters from Table \ref{tab:models} is evaluated. For a given energy spectrum of $n$ data points, only functions are examined with $n-1 > 0$ degrees of freedom. If more than one fit results in an acceptable fit probability, an $F$-Test is used to determine the preferred hypothesis. 
The parameters of the model with more fit parameters are examined if the test results in a 95\,\% probability that the description of the data has improved. \begin{table*}[htb] \caption{Analytical functions fitted to the deabsorbed spectra.} \label{tab:models} \centering \begin{tabular}{lclc} \hline \hline Description & Abbreviation & Formula $\Difft{N_\mathrm{int}}{E}$ & \# of parameters\\ \hline Power law & \multirow{2}{*}{PL} & \multirow{2}{*}{$\mathit{PL}_1$} & \multirow{2}{*}{2}\\ {} & {} & {} & {}\\ Broken power law & \multirow{2}{*}{BPL} & \multirow{2}{*}{$\mathit{PL}_1 \times \mathit{CPL}_{12}$} & \multirow{2}{*}{4}\\ with transition region& {} & {} & {} \\ {} & {} & {} & {}\\ Broken power law with transition region & \multirow{2}{*}{SEBPL} & \multirow{2}{*}{ $\mathit{PL}_1\times \mathit{CPL}_{12} \times \mathit{Pile}$ } & \multirow{2}{*}{6}\\ and super-exponential pile up & {} & {} & {}\\ {} & {} & {} & {}\\ Double broken power law & \multirow{2}{*}{DBPL} & \multirow{2}{*}{$\mathit{PL}_1\times \mathit{CPL}_{12} \times \mathit{CPL}_{23}$} & \multirow{2}{*}{6}\\ with transition region& {} & {} & {} \\ {} & {} & {} & {}\\ Double broken power law & \multirow{2}{*}{SEDBPL} & \multirow{2}{*}{ $\mathit{PL}_1\times \mathit{CPL}_{12} \times \mathit{CPL}_{23} \times \mathit{Pile}$} & \multirow{2}{*}{8}\\ with super-exponential pile up& {} & {} & {} \\ {} & {} & {} & {}\\ \hline \end{tabular} \tablefoot{The functions are a power law, a curved power law and a super exponential pile up defined as $\mathit{PL}_i = N_0\,E^{-\Gamma_i},~ \mathit{CPL}_{ij} = \left[ 1 + \left(\fract{E}{E^\mathrm{break}_i}\right)^{f_i} \right]^{\fract{(\Gamma_i - \Gamma_j)}{f_i}},~\mathrm{and}~ \mathit{Pile} = \exp\left[\left(E / E_\mathrm{pile}\right)^\beta\right]$, respectively, where all energies are normalized to 1\,TeV. The smoothness parameters $f_i$ are held constant and the break energies $E^\mathrm{break}_i$ are forced to be positive. Only positive pile up, i.e.
$E_\mathrm{pile} > 0$, is tested. } \end{table*} \section{Exclusion criteria for the EBL shapes} \label{sec:excl} In the following, arguments to exclude EBL shapes will be presented. While the first criteria are based on the expected concavity of the intrinsic VHE spectra, the second set of criteria relies on the integral of the intrinsic VHE emission. \subsection{Concavity} \label{sec:curvature} Observations have led to the commonly accepted picture that particles are accelerated in jets of AGN thereby producing non-thermal radiation. The SED of these objects is dominated by two components. The first low-energy component from infrared to X-ray energies is due to synchrotron radiation from a distribution of relativistic electrons. The second component responsible for HE and VHE emission can be explained by several different emission models. In leptonic blazar models, photons are upscattered by the inverse Compton (IC) process. The involved photon fields originate from synchrotron emission \citep[e.g.][]{1996ApJ...461..657B}, the accretion disk \citep{1993ApJ...416..458D}, or the broad line region \citep[e.g.][]{1994ApJ...421..153S}. In hadronic blazar models, on the other hand, $\gamma$-ray emission is produced either by proton synchrotron radiation \citep[e.g.][]{2001APh....15..121M} or photon pion production \citep[e.g.][]{2003APh....18..593M}. These simple emission models, which commonly describe the measured data satisfactorily, do not predict a spectral hardening in the transition from HE to VHE nor within the VHE band. This is also confirmed by observations of nearby sources. On the contrary, the spectral slope is thought to become softer with energy, due to Klein-Nishina effects in leptonic scenarios and/or a cut off in the spectrum of accelerated particles. However, in more specific scenarios a spectral hardening is possible.
If mechanisms like, e.g., second order IC scattering, internal photon absorption \citep[e.g.][]{2008MNRAS.387.1206A}, comptonization of low frequency radiation by a cold ultra-relativistic wind of particles \citep{2002A&A...384..834A}, or multiple HE and VHE $\gamma$-ray~emitting regions in the source \citep{2011arXiv1108.4568L} contribute significantly to the overall spectrum, convex curvature or an exponential pile up can indeed occur. Nevertheless, neither of these features has been observed with certainty in nearby sources. Furthermore, it would imply serious fine tuning if such components appeared in all examined sources in the transition from the optically thin, i.e. $\tau_\gamma < 1$, to the optically thick regime, $\tau_\gamma \ge 1$. This seems unlikely, considering the large number of EBL shapes tested. Consequently, EBL shapes leading to an intrinsic VHE spectrum which is not concave will be excluded. This expectation is formulated through three test criteria: \paragraph{(i) \emph{Fermi}-LAT~spectrum as an upper limit.} With the launch of the \emph{Fermi} satellite and the current generation of IACTs, there is an increasing number of broad-band AGN energy spectra measured in the HE and VHE domains. Thus, the least model dependent approach is to test spectra against a convex curvature in the transition from HE to VHE by regarding the spectral index measured by \emph{Fermi}, $\Gamma_\mathrm{HE}$, as a limit on the reconstructed intrinsic index at VHE, $\Gamma$. Hence, the intrinsic VHE spectrum is regarded as unphysical if the following condition is met, \begin{equation} \Gamma + \sigma_{\mathrm{stat}} + \sigma_\mathrm{sys} < \Gamma_\mathrm{HE} - \sigma_\mathrm{HE,~stat}. \end{equation} The statistical error $\sigma_{\mathrm{stat}}$ is estimated from the fit of an analytical function to the intrinsic spectrum, whereas the systematic uncertainty $\sigma_\mathrm{sys}$ is the one estimated by the respective instrumental team.
The statistical uncertainty $\sigma_{\mathrm{HE,~stat}}$ of the \emph{Fermi}-LAT~spectral index is given by the 2FGL or the corresponding publication, see Section \ref{sec:samples}. This exclusion criterion will be referred to as the \emph{VHE-HEIndex}~criterion in the following. Note that this is \emph{not} the same criterion as used by \citet{2011ApJ...733...77OO}. They assume that the VHE index should be equal to the index measured with the \emph{Fermi}-LAT. \paragraph{(ii) Super exponential pile up.} Furthermore, shapes will be excluded that lead to an intrinsic VHE spectrum that piles up super exponentially at highest energies. This is the case if it is best described by the analytical functions abbreviated SEBPL or SEDBPL, see Table \ref{tab:models}, and the pile-up energy is positive within a $1\sigma$ confidence, \begin{equation} E_\mathrm{pile} - \sigma_\mathrm{pile} > 0.\label{eqn:pileup} \end{equation} This additional independent exclusion criterion relies solely on VHE observations, which are subject to the attenuation in contrast to \emph{Fermi}-LAT~observations. It will be denoted as the \emph{PileUp}~criterion throughout this study. \paragraph{(iii) VHE concavity.} In the case that the intrinsic spectrum is best described by either a BPL or a DBPL, it is considered as convex if the following inequalities are \textit{not} fulfilled, \begin{eqnarray} \Gamma_1 - \sigma_1 &\leqslant& \Gamma_2 + \sigma_2 \nonumber\\ {and}\quad\Gamma_2 - \sigma_2 &\leqslant& \Gamma_3 + \sigma_3 ~\mathrm{(DBPL)},\label{eqn:convex} \end{eqnarray} and the corresponding EBL shape will be rejected. Again, 1$\sigma$ uncertainties of the fitting procedure are used. This criterion will be referred to as \textit{VHEConcavity}. It is very similar to the argument formulated in (ii) as intrinsic spectra that show an exponential rise may often be equally well described by a BPL or DBPL.
However, the \emph{VHE\-Concavity}~criterion can also exclude intrinsic spectra that show only mild convexity, i.e. no exponential pile up. \subsection{Cascade emission and energy budget} \label{sec:integral} In this Section two new approaches are introduced that are based on the integrated intrinsic emission. These methods rely on a number of parameters, whose values are, so far, not accurately determined by observations or for which only upper and lower limits exist. Therefore, the following two criteria have to be regarded as a theoretically motivated possibility to constrain the EBL in the future. As it will be shown in Section \ref{sec:results}, the final upper limits are not improved by these criteria and are, thus, independent of the model parameters chosen here. \subsubsection{Cascade emission} \label{sec:cascade} EBL photons that interact with VHE $\gamma$-rays~produce $e^+e^-$ pairs. These secondary pairs can generate HE radiation by upscattering cosmic microwave background (CMB) photons by means of the IC process. This initiates an electromagnetic cascade as these photons can again undergo pair production \citep[e.g.][]{1987MNRAS.227..403S,1994ApJ...423L...5A,2002ApJ...580L...7D,2009ApJ...703.1078D,2011arXiv1106.5508K}. The amount of cascade radiation, that points back to the source, depends on the field strength $B_\mathrm{IGMF}$ of the intergalactic magnetic field and its correlation length $\lambda_B$. The values of $B_\mathrm{IGMF}$ and $\lambda_B$ are unknown and only upper and lower limits exist \citep[see e.g.][for a compilation of limits]{2009PhRvD..80l3012N}. If the field strength is large (see Eq.
\ref{eqn:defl}) or if the correlation length is small compared to the cooling length $ct_\mathrm{cool}$ of the $e^+e^-$ pairs for IC scattering, the pairs are quickly isotropized and extended halos of $\gamma$-ray emission form around the initial source \citep[e.g.][]{1994ApJ...423L...5A,2002ApJ...580L...7D,2009PhRvD..80b3010E,2009ApJ...703.1078D}. Furthermore, the time delay of the cascade emission compared to the primary emission depends on $B_\mathrm{IGMF}$ and $\lambda_B$. VHE $\gamma$-rays~need to be produced for a sufficiently long period so that the reprocessed radiation is observable \citep[e.g.][]{2011ApJ...733L..21D}. The cascade emission has been used to place lower limits on $B_\mathrm{IGMF}$ and $\lambda_B$ by assuming a certain EBL model \citep{2010Sci...328...73N,2010MNRAS.406L..70T,2011MNRAS.tmp..570T,2011ApJ...733L..21D,2011ApJ...727L...4D,2011ApJ...735L..28H}. Conversely, one can place upper limits on the EBL density under the assumption of a certain magnetic field strength. This novel approach is followed here, whereas, in previous studies, the cascade emission was neglected when deriving upper limits on the EBL density. A higher EBL density leads to a higher production of $e^+e^-$ pairs and thus to a higher cascade emission that is potentially detectable with the \emph{Fermi}-LAT. If the predicted cascade radiation exceeds the observations of the \emph{Fermi}-LAT, the corresponding EBL shape can be excluded. Conservative upper limits are derived if the following assumptions are made: (i) The HE emission of the source is entirely due to the cascade. (ii) The observed VHE spectrum is fitted with a power law with a super exponential cut off at the highest measured energy of the spectrum. This minimizes the reprocessed emission and allows one to consider only the first generation of the cascade. (iii) The $e^+e^-$ pairs are isotropized in the intergalactic magnetic field, minimizing the reprocessed emission.
This condition is equal to the demand that the deflection angle $\vartheta$ of the particles in the magnetic field is $\approx \pi$. Assuming $\lambda_B \gg ct_\mathrm{cool}$, the deflection angle for electrons with an energy $\gamma mc^2 \approx E / 2$, where $E$ is the energy of the primary $\gamma$ ray, can be approximated by \citep{2010MNRAS.406L..70T,2010Sci...328...73N} \begin{equation} \vartheta\approx \frac{ct_\mathrm{cool}}{R_\mathrm{L}} = 1.17\left(\frac{B_\mathrm{IGMF}}{10^{-15}\unit{G}}\right)(1+z_r)^{-4}\left(\frac{\gamma}{10^6}\right)^{-2}, \label{eqn:defl} \end{equation} with $z_r$ the redshift where the IC scattering occurs and $R_\mathrm{L}$ the Larmor radius. The IC scattered $e^+e^-$ pairs give rise to $\gamma$-rays~with energy $\epsilon \approx \gamma^2h\nu_\mathrm{CMB} \approx 0.63 (E/\mathrm{TeV})^2\,\mathrm{GeV}$, with $h\nu_\mathrm{CMB} = 634$\,\murm eV the peak energy of the CMB. The $\gamma$ factor in Eq. \ref{eqn:defl} can be eliminated in favor of $\epsilon$, and, solving for $B_\mathrm{IGMF}$, the pairs are isotropized if $B_\mathrm{IGMF} \approx 4.2\times10^{-15}~(1+z_r)^4(\epsilon /\mathrm{GeV}) \unit{G} \approx 5 \times 10^{-13} \unit{G}$ for $\epsilon = 100$\,GeV, the maximum energy measured with the \emph{Fermi}-LAT~considered here, and the maximum redshift where the IC scattering can occur, i.e. the redshift of the source.\footnote{ Accordingly, this $B$-field value ensures isotropy regardless of where the IC scattering occurs.} This value of $B_\mathrm{IGMF}$ is in accordance with all experimental bounds \citep[see e.g.][especially Figures 1 and 2]{2009PhRvD..80l3012N}. For correlation lengths $\lambda_B \gg ct_\mathrm{cool} \approx 0.65 (E/\mathrm{TeV})^{-1}(1+z_r)^{-4}$\,Mpc $\approx \mathcal{O}(\mathrm{Mpc})$, the most stringent constraints come from Faraday rotation measurements \citep{1976Natur.263..653K,1999ApJ...514L..79B} which limit $B_\mathrm{IGMF} \lesssim 10^{-9}$~G.
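The numerical estimates above can be reproduced with a short script; the following is a sketch using the approximate constants quoted in the text, with illustrative function names.

```python
H_NU_CMB_EV = 634e-6   # peak energy of the CMB in eV (quoted in the text)
M_E_EV = 0.511e6       # electron rest energy in eV

def cascade_photon_energy_gev(e_tev):
    """IC-upscattered CMB photon energy, eps ~ gamma^2 h nu_CMB,
    for pairs with gamma ~ E / (2 m_e c^2) and primary energy E."""
    gamma = e_tev * 1e12 / (2.0 * M_E_EV)
    return gamma**2 * H_NU_CMB_EV / 1e9   # in GeV

def b_isotropize_gauss(eps_gev, z_r=0.0):
    """Field strength for which the deflection angle of Eq. (defl)
    reaches ~pi, i.e. the pairs are isotropized:
    B ~ 4.2e-15 (1+z_r)^4 (eps/GeV) G."""
    return 4.2e-15 * (1.0 + z_r)**4 * eps_gev
```

For $\epsilon = 100$\,GeV and $z_r = 0$ this reproduces the quoted $B_\mathrm{IGMF} \approx 4.2\times10^{-13}$\,G before the redshift factor is applied.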
Furthermore, the adopted value is excluded neither by possible observations of deflections of ultra-high energy cosmic rays \citep{1995ApJ...455L..21L} nor by constrained simulations of magnetic fields in galaxy clusters, both setting an upper limit of $B_\mathrm{IGMF} \lesssim 10^{-12}$\,G \citep{2005JCAP...01..009D,2009MNRAS.392.1008D}. For this value of $B_\mathrm{IGMF}$, the cascade emission is detectable if a steady $\gamma$-ray~emission of the source over the last $\Delta t \approx 10^6$ years is assumed \citep[see e.g.][]{2011ApJ...733L..21D,2011MNRAS.tmp..570T}. Other energy loss channels apart from IC scattering, like synchrotron radiation or plasma instabilities \citep{2011arXiv1106.5494B}, are neglected. However, if the latter are present, if the field strength is even higher, or if the lifetime of the VHE source is shorter, no significant cascade emission is produced or it has not reached earth so far. The cascade emission $F(\epsilon)$ is calculated with Eq. \ref{eqn:cascade_full} in Appendix \ref{sec:cascade-form} following \citet{2011MNRAS.tmp..570T} and \citet{2011ApJ...733L..21D}. In the isotropic case, the observed cascade emission has to be further modified with the solid angle $\Omega_c \approx \pi\theta_c^2$ into which the intrinsic blazar emission is collimated, where $\theta_c$ is the semi-aperture of the irradiated cone. For blazars, one has $\theta_c \sim 1/\Gamma_\mathrm{L}$ where $\Gamma_\mathrm{L}$ is the bulk Lorentz factor of the plasma of the jet. The observed emission is then found to be \citep{2011MNRAS.tmp..570T} \begin{equation} F_\mathrm{obs}(\epsilon) = 2 \frac{\Omega_c}{4\pi}F(\epsilon),\label{eqn:CasEmi} \end{equation} where the factor of two accounts for the contribution of both jets in the isotropic case.
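Equation \ref{eqn:CasEmi} corresponds to a simple dilution factor for the isotropized cascade; a minimal sketch, assuming $\theta_c = 1/\Gamma_\mathrm{L}$ exactly:

```python
import math

def observed_cascade_fraction(gamma_l):
    """Fraction of the isotropized cascade flux reaching the observer,
    F_obs / F = 2 * Omega_c / (4 pi), with Omega_c ~ pi * theta_c^2
    and theta_c ~ 1 / Gamma_L (Eq. CasEmi)."""
    theta_c = 1.0 / gamma_l
    omega_c = math.pi * theta_c**2
    return 2.0 * omega_c / (4.0 * math.pi)   # = 1 / (2 Gamma_L^2)
```

For the generic choice $\Gamma_\mathrm{L} = 10$, only $0.5\,\%$ of the isotropized cascade flux is observed.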
The exclusion criterion for an EBL shape at the 2$\sigma$ level reads \begin{equation} F_\mathrm{obs}(\epsilon_\mathrm{meas}) > F_\mathrm{meas} + 2\sigma_\mathrm{meas},\label{eqn:cascade} \end{equation} where $\epsilon_\mathrm{meas}$, $F_\mathrm{meas}$, and $\sigma_\mathrm{meas}$ are the measured energy, flux, and statistical uncertainty reported in the 2FGL, respectively. In the case that the source is not detected, $F_\mathrm{meas} = 0$ and $\sigma_\mathrm{meas}$ represents the $1\,\sigma$ upper limit on the flux. As an example, Figure \ref{fig:CasSpec} shows the observed and intrinsic VHE spectra of the blazar 1ES\,0229+200 for a specific EBL shape, together with the \emph{Fermi} upper limits \citep{2010MNRAS.406L..70T}. The different model curves demonstrate the degeneracy between the different parameters entering the calculation. The EBL shape used to calculate the intrinsic VHE spectrum is not excluded in the isotropic case since the emission does not overproduce the \emph{Fermi} upper limits. This is contrary to the case of $B_\mathrm{IGMF} = 10^{-20}$ G and $\Delta t = 3$ years, where the predicted cascade flux exceeds the \emph{Fermi}-LAT~upper limits in the 1--10\,GeV range. To obtain conservative upper limits on the EBL, only the isotropic case is assumed in the following, i.e. $B_\mathrm{IGMF} = 5 \times 10^{-13} \unit{G}$, which implies that the source has to be steady for a lifetime of $\Delta t \gtrsim 10^6$ years. Furthermore, a Lorentz factor of $\Gamma_\mathrm{L} = 10$ is generically assumed for all sources. \begin{figure}[htb] \centering \includegraphics[width = .8\linewidth, angle = 270]{CascadeSpectrum} \caption{Cascade emission for a certain EBL shape and the VHE spectrum of 1ES\,0229+200 \citep{2007A&A...475L...9A}. The observed spectrum (dark red points and line) is fitted with a power law with an exponential cut off and corrected for the EBL absorption (dark blue dashed line and points).
The green lines show the cascade emission resulting from the reprocessed flux (light gray shaded area) for a constant emission over the last three years and different magnetic field strengths. The red dotted line shows the reprocessed emission if the $e^+e^-$ pairs are isotropized. The latter does not overproduce the \emph{Fermi} upper limits \citep[black diamonds,][]{2010MNRAS.406L..70T} and hence the corresponding EBL shape is not excluded. The light and dark gray areas together equal the integrated flux that is compared to the Eddington luminosity (see Section \ref{sec:eddington}). } \label{fig:CasSpec} \end{figure} \subsubsection{Total energy budget} \label{sec:eddington} The jets of AGN, the production sites for HE/VHE emission, are powered by the accretion of matter onto a central black hole \citep[e.g.][]{1995PASP..107..803U}. If the radiation escapes isotropically from the black hole, balancing the gravitational and radiation forces leads to the maximum possible luminosity due to accretion, the Eddington luminosity \citep[e.g.][]{2009herb.book.....D}, \begin{equation} L_\mathrm{edd}(M_\bullet) \approx 1.26 \times 10^{38} ~\frac{M_\bullet}{M_\odot} \unit{ergs}\unit{s}^{-1}, \end{equation} where $M_\bullet$ is the black-hole mass and $M_\odot$ the mass of the sun. Assuming that the total emission of an AGN is not super-Eddington, the Eddington luminosity is the maximum power available for the two jets, $P_\mathrm{jet} \le L_\mathrm{edd}/2$, which is a sum of several contributions, each of which can be represented as \citep[e.g.][]{1993MNRAS.264..228C,2011MNRAS.410..368B} $P_i = \pi R^{\prime\,2}\Gamma_\mathrm{L}^2 \beta c U^\prime_i$ in the case that the radiation is emitted by an isotropically radiating relativistic plasma blob. The blob of radius $R^\prime$ in the comoving frame moves with a bulk Lorentz factor $\Gamma_\mathrm{L}$ and corresponding speed $\beta c$; $U^\prime_i$ is the comoving energy density.
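As a numerical illustration of the Eddington bound, the following sketch evaluates $L_\mathrm{edd}$ and the corresponding maximum energy flux. The cosmological parameters $H_0 = 70$\,km\,s$^{-1}$\,Mpc$^{-1}$, $\Omega_m = 0.3$, and $\Omega_\Lambda = 0.7$ entering the luminosity distance (given explicitly at the end of this section) are assumptions for this example, as are the function names.

```python
import math

def eddington_luminosity_erg_s(m_bh_solar):
    """Eddington luminosity, L_edd ~ 1.26e38 (M_bh / M_sun) erg/s."""
    return 1.26e38 * m_bh_solar

def luminosity_distance_mpc(z, h0=70.0, om=0.3, ol=0.7, n=1000):
    """Flat-LambdaCDM luminosity distance via trapezoidal integration."""
    c = 299792.458  # speed of light in km/s
    zs = [i * z / n for i in range(n + 1)]
    inv_e = [1.0 / math.sqrt(om * (1 + zp)**3 + ol) for zp in zs]
    integral = sum((inv_e[i] + inv_e[i + 1]) / 2 * (z / n)
                   for i in range(n))
    return (1 + z) * c / h0 * integral

def eddington_flux_limit(m_bh_solar, z, gamma_l=10.0):
    """Maximum intrinsic energy flux allowed by the Eddington argument,
    Gamma_L^2 L_edd / (2 pi d_L^2), in erg/cm^2/s."""
    mpc_cm = 3.0857e24  # cm per Mpc
    d_l = luminosity_distance_mpc(z) * mpc_cm
    return (gamma_l**2 * eddington_luminosity_erg_s(m_bh_solar)
            / (2.0 * math.pi * d_l**2))
```

For a fiducial $10^9\,M_\odot$ black hole, $L_\mathrm{edd} \approx 1.26\times10^{47}$\,erg\,s$^{-1}$.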
The energy density of the produced radiation is $U^\prime_\mathrm{r} = L^\prime/(4\pi R^{\prime\,2}c) = L / (4\pi \delta_\mathrm{D}^4R^{\prime\,2}c)$. The last equality connects the comoving luminosity with the luminosity in the lab frame via the Doppler factor $\delta_\mathrm{D} = [(1+z)\Gamma_\mathrm{L}(1-\beta\cos\theta)]^{-1} \approx 2\Gamma_\mathrm{L}/[(1+z)(1 + \theta^2\Gamma_\mathrm{L}^2)] $, where $\theta$ is the angle between the jet axis and the line of sight. The approximation holds for $\theta \ll 1$ and $\Gamma_\mathrm{L} \gg 1$. Assuming $\theta \approx \theta_c$, the Doppler factor and the bulk Lorentz factor are equal up to the redshift factor $1 + z$, $\delta_\mathrm{D} \approx \Gamma_\mathrm{L}$. The power produced in radiation is a robust lower limit for the entire power of the jet \citep[e.g.][]{2011MNRAS.410..368B}, \begin{equation} P_\mathrm{r} \approx \frac{L}{4\Gamma_\mathrm{L}^2} < P_\mathrm{jet} \le L_\mathrm{edd}/2. \end{equation} Solving the inequality for the observed luminosity, one arrives at an additional exclusion criterion for EBL shapes, namely, if the intrinsic energy flux at VHE is larger than the associated Eddington energy flux, \begin{equation} (1 + z)^{2 - \Gamma_\mathrm{int} }\int\limits_{E_\mathrm{min}}^{E_\mathrm{max}}E\,\Diff{N_\mathrm{int}}{E}\mathrm d E > \frac{\Gamma_\mathrm{L}^2L_\mathrm{edd}(M_\bullet)}{2\pi d_\mathrm{L}^2},\label{eqn:eddington} \end{equation} where $E_\mathrm{min}$ and $E_\mathrm{max}$ are the minimum and maximum energy of the intrinsic VHE spectrum, which is described with a power law with index $\Gamma_\mathrm{int}$. The factor $(1 + z)^{2-\Gamma_\mathrm{int}}$ accounts for the K-correction and $d_\mathrm{L}$ is the luminosity distance given by \begin{equation} d_\mathrm{L} = \frac{(1 + z)~c}{H_0} \int\limits_0^z \frac{\mathrm{d}z^\prime}{\sqrt{\Omega_m(1+z^\prime)^3 + \Omega_\Lambda}}.
\end{equation} For a conservative estimate, $M_\bullet + \sigma_{M_\bullet}$ is used in the calculation of $L_\mathrm{edd}$. The assumption of a non-super-Eddington luminosity is, however, somewhat speculative, as super-Eddington emission has been observed, e.g., in the variable source 3C\,454.3 \citep{2011ApJ...733L..26A}. In Section \ref{sec:results}, it will be shown that the capability of the Eddington criterion to exclude EBL shapes is extremely limited. For this reason, only steady sources (listed in Table \ref{tab:steadysrc}) will be considered for this criterion. Here, it is only emphasized that it is in principle possible to constrain the EBL with this argument. Excluding EBL shapes with the cascade emission (Eq. \ref{eqn:cascade}) and the total energy budget of the source (Eq. \ref{eqn:eddington}) will be referred to as the \textit{IntVHELumi} (short for intrinsic VHE luminosity) criterion. \section{VHE AGN Sample} \label{sec:samples} \begin{table*}[thb] \caption{VHE AGN spectra used in this study.
If not stated otherwise in the text, the \emph{Fermi} slope and variability index are taken from the 2FGL.} \label{tab:samples} \begin{scriptsize} \begin{center} \begin{tabular}{lcccccc|c|c} \hline \hline \multirow{2}{*}{Source} & \multirow{2}{*}{Redshift} & \multirow{2}{*}{Experiment} & Energy Range & VHE Slope & \emph{Fermi} Slope & \multirow{2}{*}{Variability Index} & \multirow{2}{*}{Reference} & \multirow{2}{*}{Comments}\\ {} & {} & {} & (TeV) & $\Gamma \pm \sigma_\mathrm{stat} \pm \sigma_\mathrm{sys}$ & $\Gamma \pm \sigma_\mathrm{stat}$& {} & {} & {}\\ \hline Mkn\,421& 0.031 & HESS & 1.75 -- 23.1 & $2.05~\pm~0.22$ & $1.77~\pm~0.01$ & $112.8$ & (1) & hardest index\\ Mkn\,501& 0.034 & MAGIC & 0.17 -- 4.43 & $2.79~\pm~0.12$ & $1.64~\pm~0.09$ & $72.33$ & (2) & hardest index\\ Mkn\,501& 0.034 & HEGRA & 0.56 -- 21.45 & $1.92~\pm~0.03~\pm~0.20$ & $1.64~\pm~0.09$ & $72.33$ & (3) & hardest index\\ 1ES\,2344+514& 0.044 & MAGIC & 0.19 -- 4.00 & $2.95~\pm~0.12~\pm~0.20$ & $1.72~\pm~0.08$ & $28.13$ & (4) & steady \\ Mkn\,180& 0.045 & MAGIC & 0.18 -- 1.31 & $3.25~\pm~0.66$ & $1.74~\pm~0.08$ & $19.67$ & (5) & steady \\ 1ES\,1959+650& 0.048 & HEGRA & 1.52 -- 10.94 & $2.83~\pm~0.14~\pm~0.08$ & $1.94~\pm~0.03$ & $52.30$ & (6) & hardest index\\ 1ES\,1959+650& 0.048 & MAGIC & 0.19 -- 2.40 & $2.58~\pm~0.18$ & $1.94~\pm~0.03$ & $52.30$ & (7) & hardest index\\ BL\,Lacertae& 0.069 & MAGIC & 0.16 -- 0.70 & $3.6~\pm~0.5$ & $2.11~\pm~0.04$ & $267.0$ & (8) & hardest index\\ PKS\,2005-489& 0.071 & HESS & 0.34 -- 4.57 & $3.20~\pm~0.16~\pm~0.10$ & $1.90~\pm~0.06$ & $68.86$ & (9) & hardest index\\ RGB\,J0152+017& 0.080 & HESS & 0.31 -- 2.95 & $2.95~\pm~0.36~\pm~0.20$ & $1.79~\pm~0.14$ & $27.73$ & (10) & steady \\ PKS\,2155-304& 0.116 & HESS & 0.25 -- 3.20 & $3.34~\pm~0.05~\pm~0.1$ & $1.81~\pm~0.11$ & $262.9$ & (11) & simul \\ RGB\,J0710+591& 0.125 & VERITAS & 0.42 -- 3.65 & $2.69~\pm~0.26~\pm~0.20$ & $1.53~\pm~0.12$ & $29.86$ & (12) & steady \\ H\,1426+428& 0.129 & HEGRA & 0.78 -- 5.37 & -- 
& $1.32~\pm~0.12$ & $22.16$ & (13) & steady\\ 1ES\,0806+524& 0.138 & MAGIC & 0.31 -- 0.63 & $3.6~\pm~1.0~\pm~0.3$ & $1.94~\pm~0.06$ & $37.80$ & (14) & steady \\ H\,2356-309& 0.165 & HESS & 0.23 -- 1.71 & $3.06~\pm~0.15~\pm~0.10$ & $1.89~\pm~0.17$ & $20.19$ & (15) & steady \\ 1ES\,1218+304& 0.182 & MAGIC & 0.09 -- 0.63 & $3.0~\pm~0.4$ & $1.71~\pm~0.07$ & $40.00$ & (16) & steady \\ 1ES\,1218+304& 0.182 & VERITAS & 0.19 -- 1.48 & $3.08~\pm~0.34~\pm~0.2$ & $1.71~\pm~0.07$ & $40.00$ & (17) & steady \\ 1ES\,1101-232& 0.186 & HESS & 0.18 -- 2.92 & $2.88~\pm~0.17$ & $1.80~\pm~0.21$ & $25.74$ & (18) & steady \\ 1ES\,1011+496& 0.212 & MAGIC & 0.15 -- 0.59 & $4.0~\pm~0.5$ & $1.72~\pm~0.04$ & $48.05$ & (19) & hardest index\\ 1ES\,0414+009& 0.287 & HESS & 0.17 -- 1.13 & $3.44~\pm~0.27~\pm~0.2$ & $1.98~\pm~0.16$ & $15.56$ & (20) & steady \\ PKS\,1222+21~\tablefootmark{a}& 0.432 & MAGIC & 0.08 -- 0.35 & $3.75~\pm~0.27~\pm~0.2$ & $1.95~\pm~0.21$ & $13030$ & (21) & simul\\ 3C\,279& 0.536 & MAGIC & 0.08 -- 0.48 & $4.1~\pm~0.7~\pm~0.2$ & $2.22~\pm~0.02$ & $2935$ & (22) & hardest index\\ \hline 1ES\,0229+200~\tablefootmark{b}& 0.140 & HESS & 0.60 -- 11.45 & $2.5~\pm~0.19~\pm~0.10$ & -- & -- & (23) & -- \\ \hline \end{tabular} \tablefoot{See the text for details on the \textit{Comments} column. \tablefoottext{a}{There was no simultaneous measurement during the 0.5\,h in which MAGIC detected the source. 
However, the index used at high energies was extracted from \emph{Fermi} data in the 2.5\,h before and after the MAGIC observation \citep{2011ApJ...730L...8A}.} \tablefoottext{b}{The spectrum is only tested against the \emph{IntVHELumi}~ criterion as it is not detected with the \emph{Fermi}-LAT.} } \tablebib{ (1)~{\citet{2011arXiv1106.1035T}}; (2)~{\citet{2011ApJ...727..129A}}; (3)~{\citet{1999A&A...349...11A}}; (4)~{\citet{2007ApJ...662..892A}}; (5)~{\citet{2006ApJ...648L.105A}}; (6)~{\citet{2003A&A...406L...9A}}; (7)~{\citet{2008ApJ...679.1029T}}; (8)~{\citet{2007ApJ...666L..17A}}; (9)~{\citet{2010A&A...511A..52H}}; (10)~{\citet{2008A&A...481L.103A}}; (11)~{\citet{2009ApJ...696L.150A}}; (12)~{\citet{2010ApJ...715L..49A}}; (13)~{\citet{2003A&A...403..523A}}; (14)~{\citet{2009ApJ...690L.126A}}; (15)~{\citet{2010A&A...516A..56H}}; (16)~{\citet{2006ApJ...642L.119A}}; (17)~{\citet{2009ApJ...695.1370A}}; (18)~{\citet{2006Natur.440.1018A}}; (19)~{\citet{2007ApJ...667L..21A}}; (20)~{\citet{2012arXiv1201.2044T}}; (21)~{\citet{2011ApJ...730L...8A}}; (22)~{\citet{2008Sci...320.1752M}}; (23)~{\citet{2007A&A...475L...9A}} } \end{center} \end{scriptsize} \end{table*} In the past four years, the number of discovered VHE emitting AGN has doubled. In this section, samples of VHE spectra are defined that are evaluated with the \emph{VHE-HEIndex}, \emph{PileUp}, and \emph{VHE\-Concavity}~ criteria (Section \ref{sec:sample-conc}) and with the \emph{IntVHELumi}~ criterion (Section \ref{sec:sample-int}). \subsection{Sample tested against concavity criteria} \label{sec:sample-conc} For this part of the analysis, 22 VHE spectra from 19 different sources are used. AGN are included in the sample only if their redshift is known, there is no confusion with other sources, and they are detected with the \emph{Fermi}-LAT. This excludes the known VHE sources 3C\,66A and 3C\,66B, 1ES\,0229+200, PG\,1553+113, and S5\,0716+714.
Two spectra from the same source are only considered if they cover different energy ranges. Furthermore, the radio galaxies Centaurus A and M~87 are not included since they are too close and measured at energies that are too low to yield any constraints on the EBL density. Spectra that are a combination of several instruments are not included due to possible systematic uncertainties. If two or more spectra are available for a variable source, the VHE spectrum that was measured simultaneously with \emph{Fermi}-LAT~observations is chosen. If the \emph{Fermi} spectrum is best described with a logarithmic parabola, the spectral index determined at the pivot energy is used for the comparison with the intrinsic VHE spectra. The entire AGN sample is listed in Table \ref{tab:samples} together with the redshift, the energy range, the spectral index at VHE energies, the index measured with the \emph{Fermi}-LAT, the variability index given in the 2FGL, and the corresponding references. AGN are known to be variable sources both in overall flux and spectral index. This poses a problem for the \emph{VHE-HEIndex}~ criterion as it relies on the comparison of \emph{Fermi}-LAT~and IACT spectra. To address this issue, one can roughly divide the overall source sample into three categories: \begin{enumerate} \item \emph{Steady sources in the \emph{Fermi}-LAT~energy band.} In this category, all sources are assembled that show a variability index $<$ 41.64 in the 2FGL, which corresponds to a likelihood fit probability of more than 1\,\% that the source is steady \citep{2FGL}. For these sources, simultaneous measurements are not required, regardless of whether they are steady (like 1ES\,1101-232, \citealt{2007A&A...470..475A}) or variable (like H\,1426+428, see below) at VHE. This does not affect the upper limits derived here because the \emph{Fermi} index remains valid as a lower limit independent of the VHE index. These sources are marked as ``steady'' in the last column of Table \ref{tab:samples}.
\item \emph{Variable sources with simultaneous measurements.} Some of the variable sources were observed simultaneously with the \emph{Fermi}-LAT~and IACTs in multiwavelength campaigns, namely PKS\,2155-304 with HESS \citep{2009ApJ...696L.150A}, and PKS\,1222+21 with MAGIC \citep{2011ApJ...730L...8A}. Instead of the spectral slopes given in the 2FGL, the \emph{Fermi}-LAT~spectra from these particular observations are used to test the EBL shapes. These sources are marked as ``simul'' in the last column in Table \ref{tab:samples}. Note, however, that the observation times might not be equal for the individual instruments since the sensitivities of the \emph{Fermi}-LAT~and IACTs are different. Nevertheless, the arising systematic uncertainty is negligible for the sources under consideration. In the case of PKS\,2155-304, the source was observed in a quiescent state where no fast flux variability is expected. PKS\,1222+21, on the other hand, was observed in a HE flaring state, and \emph{Fermi}-LAT~observations are not available for the 30 minutes of MAGIC observations. Instead, \citet{2011ApJ...730L...8A} derive the \emph{Fermi} spectrum from 2.5\,h of data encompassing these 30 minutes. This is justified, since the source remained in this high flux state for several days with little spectral variations \citep[cf. Figure 2 in][]{2011ApJ...733...19T}. Accordingly, the maximum time lag allowed for observations to be considered as simultaneous is of the order of an hour. \item \emph{Variable sources not simultaneously measured.} For some variable sources, no simultaneous data are available, namely, 1ES\,1011+496, 1ES\,1959+650, 3C\,279, BL\,Lacertae, the flare spectra of Mkn\,501 and Mkn\,421, and PKS\,2005-489 (see Table \ref{tab:samples} for the references). In these cases, the literature was examined for dedicated \emph{Fermi}-LAT~analyses of the corresponding sources in order to find the hardest spectral index published.
In the cases of 1ES\,1011+496, 1ES\,1959+650, PKS\,2005-489, and Mkn\,421, the indices reported in the 2FGL are the hardest published so far. The hardest indices for BL\,Lacertae and Mkn\,501 are obtained by \citet{2010ApJ...716...30A} and \citet{2011ApJ...727..129A}, respectively; see Table \ref{tab:samples} for the corresponding values. The distant quasar 3C\,279 was observed with the \emph{Fermi}-LAT~during a $\gamma$-ray flare in 2009 and the measured spectral indices vary between $\sim 2$ and $\sim 2.5$ \citep[compare Fig. 1 in][]{2010Natur.463..919A}. Thus, the catalog index of $2.22\pm0.02$ is appropriate to use. Table \ref{tab:samples} refers to all the spectra discussed here as ``hardest index'' in the last column. \end{enumerate} Additional uncertainties are introduced for VHE observations with a large time lag between the measurement and the launch of the \emph{Fermi} satellite, which is the case for Mkn\,501 and H\,1426+428. In the case of H\,1426+428, no VHE detection has been reported since the HEGRA measurement in 2002, which might suggest that the source is now in a quiescent state. The 2002 spectrum with an observed spectral index of $\Gamma = 1.93 \pm 0.47$ is used in this study. The hard spectrum promises stronger limits with the \emph{VHE-HEIndex}~ criterion than the 2000 spectrum, which has a spectral slope of $\Gamma = 2.79 \pm 0.33$. The source showed a change in flux by a factor of 2.5 between the 1999/2000 and 2002 observation runs but the spectral slope remained constant \citep{2003A&A...403..523A}. Additionally, the \emph{Fermi} index of $1.32\pm0.12$ is the hardest of the entire sample and, therefore, the source is included in the study. As for Mkn\,501, the spectrum of the major outbreak was measured up to 21\,TeV and, consequently, it is a promising VHE spectrum to constrain the EBL density at FIR wavelengths. As it turns out, it excludes most shapes due to the \emph{PileUp}~ and \emph{VHE\-Concavity}~ criteria.
These criteria are independent of the \emph{Fermi} index and, therefore, not affected by the difference in observation time. \subsection{Sample tested against intrinsic VHE luminosity} \label{sec:sample-int} For the integral criteria presented in Sections \ref{sec:cascade} and \ref{sec:eddington}, only spectra from steady sources are used in order to avoid systematic uncertainties introduced by variability. Only spectra that suffer from large attenuation and are measured at energies beyond several TeV are examined. These spectra are the most promising candidates for constraints as they show the highest values of integrated intrinsic emission. On the other hand, the spectrum of 1ES\,0229+200, which has been reanalyzed by \citet{2010MNRAS.406L..70T}, can be tested against the \emph{IntVHELumi}~ condition, as upper limits on the HE flux suffice and no spectral information is required for this criterion. Otherwise, the same selection criteria apply as for the sample tested against concavity (known redshift, etc.). The VHE spectra evaluated with the \emph{IntVHELumi}~ criterion together with the central black-hole masses of the corresponding sources are summarized in Table \ref{tab:steadysrc}. \begin{table}[tb] \caption{Sources used to exclude EBL shapes with the \emph{IntVHELumi}~ criterion.} \label{tab:steadysrc} \begin{center} \begin{tabular}{lc} \hline \hline \multirow{2}{*}{Source} & Black-hole mass \\ {} & $\log_{10}(M_\bullet / M_\odot)$ \\ \hline 1ES\,0229+200 & $9.16~\pm~0.11$\\ 1ES\,0414+009 & $9.3$\\ 1ES\,1101-232 & $9$\\ 1ES\,1218+304 & $8.04~\pm~0.24$\\ H\,1426+428 & $8.65~\pm~0.13$\\ H\,2356-309 & $8.08~\pm~0.23$\\ RGB\,J0152+017 & $9$\\ RGB\,J0710+591 & $8.25~\pm~0.22$\\ \hline \end{tabular} \end{center} \tablefoot{The black-hole masses $M_\bullet$ are taken from \citet{2008MNRAS.385..119W} except for RGB\,J0710+591 and 1ES\,0414+009, for which the masses are given in \citet{2005ApJ...631..762W} and \citet{2000ApJ...532..816U}, respectively.
No measurements of the central black-hole masses of 1ES\,1101-232 and RGB\,J0152+017 are available, so the fiducial value of $M_\bullet = 10^9M_\odot$ is used here. } \end{table} \begin{figure*}[htb] \centering \includegraphics[width = .33\linewidth , angle=270]{histexctest} \caption{Histogram of the fraction of EBL shapes excluded by the different VHE spectra. The columns show the total fraction of rejected shapes as well as the fraction excluded by the different criteria. The column labeled ``Curvature'' combines the \emph{VHE\-Concavity}~ and \emph{PileUp}~ criteria. Spectra that allow more than 90\,\% of all shapes are not shown.} \label{fig:Hist} \end{figure*} \section{Results} \label{sec:results} The upper limits on the EBL density are derived by calculating the envelope shape of all \textit{allowed} EBL shapes. The influence of the different exclusion criteria is examined by inspecting the envelope shape due to the \emph{VHE-HEIndex}~ argument alone and then successively adding the other criteria and reevaluating the resulting upper limits. Furthermore, the impact of the VHE spectra responsible for the most stringent limits in the optical, MIR, and FIR will be investigated by excluding these spectra from the sample and inspecting the change in the upper limits. Figure \ref{fig:Hist} shows a histogram of the fractions of shapes rejected by each VHE spectrum, where the different colors represent the different criteria that lead to the exclusion of an EBL shape. It should be noted that individual shapes can be rejected by several criteria at the same time, and, therefore, the different columns may add up to a number larger than indicated by the total column. Results for spectra that exclude no EBL shapes (BL\,Lacertae, 1ES\,2344+514, and Mkn\,180) or fewer than 10\,\% of all shapes (the MAGIC spectrum of 1ES\,1959+650, the HESS spectra of RGB\,J0152+017 and Mkn\,421, as well as the HEGRA spectrum of 1ES\,1959+650) are not shown.
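The construction of the upper limits from the allowed shapes amounts to a pointwise maximum over the allowed EBL realizations on the wavelength grid; a toy sketch (the shape values below are hypothetical and do not correspond to the actual grid of this study):

```python
def envelope_shape(allowed_shapes):
    """Upper limit at each grid point: the maximum EBL density among
    all allowed shapes, i.e. the envelope of the allowed realizations."""
    return [max(values) for values in zip(*allowed_shapes)]

# Toy example: three allowed shapes sampled at four wavelength grid points
# (EBL densities in arbitrary units).
shapes = [
    [10.0, 5.0, 2.0, 4.0],
    [ 8.0, 6.0, 3.0, 3.0],
    [ 9.0, 4.0, 2.5, 5.0],
]
limits = envelope_shape(shapes)   # [10.0, 6.0, 3.0, 5.0]
```

Note that, as stated below for the actual result, the envelope itself need not be an allowed shape.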
Most EBL shapes are excluded by the VHE spectra of H\,1426+428, 1ES\,1101-232, and Mkn\,501. The influence of H\,1426+428 and Mkn\,501 on the limits in the MIR and FIR, and of 3C\,279 together with PKS\,1222+21 in the optical, will be examined by excluding these spectra from the sample. These sources provide strong constraints in the respective wavelength bands. Note that removing 1ES\,1101-232 from the source sample does not change the upper limits since a number of spectra of sources with comparable redshifts (cf. Table \ref{tab:samples}) exclude the same EBL shapes as 1ES\,1101-232, e.g. 1ES\,0414+009, the VERITAS spectrum of 1ES\,1218+304, H\,2356-309, and PKS\,2005-489. \begin{figure*}[htb] \centering \includegraphics[width = .7 \linewidth, angle = 270]{Compare_Criteria_2FGL} \caption{Limits on the EBL density for different exclusion criteria. The solid line is the envelope shape of all allowed shapes from the combination of all VHE spectra, whereas the dashed curve shows the envelope shape without considering the VHE spectrum of 3C\,279. The dash-dotted line is the envelope shape without H\,1426+428 and the dotted line displays the upper limits without Mkn\,501.} \label{fig:CompCriteria} \end{figure*} Different combinations of exclusion criteria are shown in the panels of Figure \ref{fig:CompCriteria}. Each panel depicts the limits for the complete spectrum sample and, additionally, the resulting EBL constraints if the spectra discussed above are omitted. By itself, the \emph{VHE-HEIndex}~ criterion gives strong upper limits on the EBL density in the optical and MIR if all spectra are included (upper left panel of Figure \ref{fig:CompCriteria}). In the optical, the limits are dominated by the spectra of 3C\,279 and PKS\,1222+21 and, consequently, the restrictions are significantly weaker without these spectra (dashed line in Figure \ref{fig:CompCriteria}).
The spectra are influenced most by changes of the EBL density in the optical, as is inferred from the maximum energies of 480\,GeV and 350\,GeV for 3C\,279 and PKS\,1222+21, respectively. These energies translate into maximum cross sections for pair production at 0.6\,\murm m (3C\,279) and 0.43\,\murm m (PKS\,1222+21), see Eq. \ref{eqn:ppwave}. The constraints are almost unaltered if only one of these spectra is excluded from the sample. In the MIR, the spectrum of H\,1426+428 provides firm limits on the EBL density, whereas, with the \emph{VHE-HEIndex}~ criterion, scarcely any EBL shape is rejected by the spectrum of Mkn\,501. \begin{figure}[tb] \centering \includegraphics[width = 0.915\linewidth]{ConvexExample} \caption{Upper panel: Example of an EBL shape excluded by Mkn\,501 with the \emph{VHE\-Concavity}~ criterion. Lower panel: The spectra of Mkn\,501 and Mkn\,421 corrected with this particular EBL shape. The flux of the latter is scaled by $10^{-3}$ for better visibility. For Mkn\,501, a double broken power law provides the best description with a spectral index $\Gamma_3=-35$ at the highest energies, the maximum value tested in the fitting procedure. In the case of Mkn\,421, a simple power law suffices.} \label{fig:Mkn501} \end{figure} The combination of the \emph{VHE-HEIndex}~ and \emph{VHE\-Concavity}~ criterion strengthens the upper limits between 2\,\murm m and 10\,\murm m, as shown in the upper right panel of Figure \ref{fig:CompCriteria}. Convex intrinsic spectra are the result of an EBL density with a positive gradient from lower to higher wavelengths and, thus, a combination with the \emph{VHE-HEIndex}~ criterion is necessary to exclude shapes with a high EBL density that are rather constant in wavelength. Therefore, on their own, neither the \emph{PileUp}~ nor the \emph{VHE\-Concavity}~ criterion provides strong upper limits.
Combining the \emph{PileUp}~ and \emph{VHE-HEIndex}~ arguments results in very similar limits to the combination of the \emph{VHE-HEIndex}~ and \emph{VHE\-Concavity}~ criterion. This degeneracy between the \emph{PileUp}~ and \emph{VHE\-Concavity}~ criterion is also demonstrated in Figure \ref{fig:Mkn501}. The spectrum of Mkn\,501 corrected with a certain EBL shape shows a strong exponential rise at the highest energies but is best described with a double broken power law. The combination of the \emph{PileUp}~ and \emph{VHE\-Concavity}~ together with the \emph{VHE-HEIndex}~ criterion yields robust upper limits in the FIR, as displayed in the lower right panel of Figure \ref{fig:CompCriteria}. The constraints in the FIR are entirely due to the spectrum of Mkn\,501, although the spectrum of Mkn\,421 is also measured beyond 20\,TeV and both sources have a comparable redshift. However, the spectrum of Mkn\,421 rejects far fewer shapes than Mkn\,501. Indeed, an exponential rise is observed in intrinsic spectra of Mkn\,421 for certain EBL realizations (e.g. the corrected Mkn\,421 spectrum in Figure \ref{fig:Mkn501}) but a power law is found to be the best description of the spectrum. Compared to the \emph{VHE-HEIndex}~ criterion alone, the combination with the \emph{IntVHELumi}~ criterion leads to improved upper limits only if H\,1426+428 is discarded from the sample (lower left panel of Figure \ref{fig:CompCriteria}). Most shapes are rejected by the VHE spectrum of 1ES\,0229+200, which is also the only spectrum that excludes a (very limited) number of shapes with the Eddington luminosity argument. Remarkably, the spectrum excludes more than 60\,\% of all shapes. The \emph{IntVHELumi}~ criterion has the most substantial effect in the infrared part of the EBL density as the highest energies of the spectrum of 1ES\,0229+200 contribute most to the integral flux.
The maximum energy measured in the spectrum is 11.45\,TeV and thus the limits are most sensitive to changes in the EBL around 14\,\murm m. The influence of the choice of the bulk Lorentz factor $\Gamma_\mathrm{L}$ (and hence of the Doppler factor $\delta_\mathrm{D}$ since $\Gamma_\mathrm{L} \approx \delta_\mathrm{D}$ is assumed) on the envelope shape can be seen from Figure \ref{fig:CompIntegral}, where the upper limits are shown for $\Gamma_\mathrm{L} =$ 5, 10, and 50. As $\Gamma_\mathrm{L}$ enters quadratically into the calculation of the cascade flux (cf. Eq. \ref{eqn:CasEmi}) and of the Eddington luminosity (Eq. \ref{eqn:eddington}), the choice of its value is critical for the number of rejected EBL shapes. The bulk Lorentz factor is unknown for the sources tested with \emph{IntVHELumi}~~and for the combination with the other criteria $\Gamma_\mathrm{L} = 10$ is generically chosen. However, even with this oversimplified choice of $\Gamma_\mathrm{L}$, the \emph{IntVHELumi}~~criterion does not lead to improvements of the upper limits compared to the combination of the \emph{VHE-HEIndex}~, \emph{VHE\-Concavity}~~and \emph{PileUp}~~criteria. Conversely, this implies that the final upper limits will not depend on the specific choice of model parameters and assumptions that enter the evaluation of the \emph{IntVHELumi}~~criterion. \begin{figure}[tb] \centering \includegraphics[width = .75 \linewidth, angle = 270]{Comp_Integral} \caption{Upper limits solely due to the \emph{IntVHELumi}~ criterion for different values of the bulk Lorentz factor $\Gamma_\mathrm{L}$ and the Doppler factor $\delta_\mathrm{D}$ of the emitting region. With increasing Lorentz and Doppler factor, respectively, the limits become worse (see Eqs. \ref{eqn:CasEmi} and \ref{eqn:eddington}). } \label{fig:CompIntegral} \end{figure} \begin{figure*}[htb] \centering \includegraphics[width = .75 \linewidth, angle = 270]{Results} \caption{ Upper limits derived in this study. 
(a) The envelope shape (upper limits) of all allowed shapes (dark gray lines). Also shown are the grid points as light gray bullets. (b) The constraints compared to the upper limits of MR07, \citet{2006Natur.440.1018A,2007A&A...475L...9A}, and \citet{2008Sci...320.1752M}. (c) Upper limits of this study together with three EBL models \citep{2008A&A...487..837F,2010A&A...515A..19K,2011MNRAS.410.2556D}. (d) Upper limits requiring different minimum numbers of VHE spectra that exclude an EBL shape. } \label{fig:master-result} \end{figure*} The final result for the upper limits is the combination of all criteria and all VHE spectra, shown in Figure \ref{fig:master-result}. It is the envelope shape of all allowed EBL realizations, cf. Figure \ref{fig:master-result}a, which itself is excluded by several VHE spectra and should thus not be regarded as a possible level of the EBL density. For the maximum energy of all VHE spectra of 23.1\,TeV, the cross section for pair production peaks at a wavelength of the EBL photons of $\lambda_\ast\approx 29$\,\murm m. More than half of the interactions occur in a narrow interval $\Delta\lambda = (1 \pm 1/2)\lambda_\ast$ around the peak wavelength \citep[e.g.][]{2006Natur.440.1018A} and hence the constraints are not extended beyond 100\,\murm m. Even though the evolution of the EBL with redshift is included, the derived upper limits are below 5\,nW\,m$^{-2}$\,sr$^{-1}$ in the range from 8\,\murm m to 31\,\murm m. A comparison of the constraints with previous works is shown in Figure \ref{fig:master-result}b. Above 30\,\murm m, the constraints are consistent with those derived in MR07. For wavelengths between 1\,\murm m and 4\,\murm m the limits are in accordance with the results of \citet{2006Natur.440.1018A,2007A&A...475L...9A} and \citet{2008Sci...320.1752M}. The strong limits of \citet{2008Sci...320.1752M} who utilized the spectrum of 3C\,279 are not reproduced. 
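The restriction of the constraints to wavelengths below 100\,\murm m can be made explicit with a short worked example using only the numbers quoted above:

```latex
% For the maximum measured energy of 23.1 TeV the pair-production cross
% section peaks at lambda_* ~ 29 mu m, and more than half of the
% interactions occur within (1 +/- 1/2) lambda_*, i.e. roughly
% between 14.5 and 43.5 mu m -- hence the spectra carry little
% information on the EBL density beyond ~100 mu m.
\begin{equation*}
  \Delta\lambda = \left(1 \pm \tfrac{1}{2}\right)\lambda_\ast
  \approx [14.5,\,43.5]\,\mu\mathrm{m}
  \quad\text{for}\quad \lambda_\ast \approx 29\,\mu\mathrm{m}.
\end{equation*}
```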
Note, however, that these limits are derived by changing certain free parameters (e.g. fraction of UV emission escaping the galaxies) of the EBL model of \citet{2002A&A...386....1K} while the current approach allows for generic EBL shapes. Consequently, an EBL shape with a high density at UV / optical wavelengths followed by a steep decline towards optical and NIR wavelengths produces a soft intrinsic spectrum of 3C\,279 that cannot be excluded by any criterion. Furthermore, an inspection of the spectrum of 3C\,279 shows that the fit will be dominated by the first two energy bins due to the smaller error bars. Thus, a convex spectrum is often still sufficiently described with a soft power law. In general, it should be underlined that all of the above limits from recent studies use a theoretically motivated bound on the intrinsic spectral slope of $\Gamma = 1.5$. The upper limits derived here are not in conflict with the EBL model calculations of \citet{2008A&A...487..837F}, \citet{2010A&A...515A..19K} and \citet{2011MNRAS.410.2556D} and are compatible with the lower limits from galaxy number counts derived from Spitzer measurements \citep{2004ApJS..154...39F} (see Figure \ref{fig:master-result}c). In the FIR, the models of \citet{2008A&A...487..837F} and \citet{2011MNRAS.410.2556D} lie above the derived upper limits, though one should note that the EBL limit at these wavelengths relies on a single spectrum (Mkn\,501). Between $\sim$\,1\,\murm m and $\sim$\,14\,\murm m there is, however, a convergence between the upper limits and model calculations and at 13.4\,\murm m the EBL is constrained below 2.7\,nW\,m$^{-2}$\,sr$^{-1}$, just above the EBL models. This leaves little room for additional components such as Population III stars and implies that the direct measurements of \citet{2005ApJ...626...31M} are foreground dominated as discussed in \citet{2005ApJ...635..784D}. 
The EBL models, the upper limits from previous works, and the results derived here are shown together in Figure \ref{fig:master-result-app}. \begin{figure*}[htb] \centering \includegraphics[width = .6 \linewidth, angle = 270]{master-result} \caption{ Upper limits of this work together with previous limits and EBL models. } \label{fig:master-result-app} \end{figure*} In general, most of the tested EBL shapes are excluded by more than one spectrum (Figure \ref{fig:counthist}). While 0.23\,\% of all EBL shapes are excluded by only one of the spectra in the sample, the majority of shapes (93\,\%) is rejected by five spectra or more. Figure \ref{fig:master-result}d shows the limits for different minimum numbers of VHE spectra that rule out an EBL shape. From NIR to MIR wavelengths, the limits are only slightly worsened if at least two spectra are required to exclude EBL shapes. If at least five spectra are required to reject an EBL shape, the EBL density remains confined below 40\,nW\,m$^{-2}$\,sr$^{-1}$ in the optical. Thus, from optical to MIR wavelengths, the limits are robust against individual spectra that possibly have a peculiar intrinsic shape due to one of the mechanisms discussed in Section \ref{sec:curvature}. Especially in the MIR and FIR, however, the limits are weakened as they mainly depend on two spectra, H\,1426+428 and Mkn\,501. This underlines the need for more spectra measured beyond several TeV in order to draw conclusions about the EBL density in the MIR and FIR from VHE blazar measurements. \begin{figure}[htb] \centering \includegraphics[width = 0.65 \linewidth,angle = 270 ]{counthisttest} \caption{ Percentage of all shapes excluded by at least a certain number of spectra. The majority of rejected shapes is excluded by five or more spectra. } \label{fig:counthist} \end{figure} Given the similarities in procedures used, the systematic uncertainties of the limits derived in this study are similar to the ones derived in MR07 and \citet{}. 
They have been estimated to be 31\,\% in the optical to near-infrared and 32--55\,\% at mid- to far-infrared wavelengths, mainly from the grid spacing and the uncertainties on the absolute energy scale of ground based VHE instruments which is taken to be 15\,\%. Note that \citet{2010A&A...523A...2M} achieved a cross-calibration using the broadband SED of the Crab Nebula between the \emph{Fermi}-LAT~and IACTs by shifting the IACT measurements by $\sim$\,5\,\% in energy. As it turns out, additional uncertainties arise from the phenomenological description of the EBL evolution ($<$ 4\,\% for a redshift $z = 0.2$ and $<$ 10\,\% for $z = 0.5$, \citealt{2008IJMPD..17.1515R}). Uncertainties in the calculation of the cascade emission are caused by the choice of the model parameters which are, however, difficult to quantify. The same applies for the assumption that steady sources do not show super-Eddington luminosities. Since the most stringent limits do not rely on these exclusion criteria, these uncertainties do not affect the final results of the upper limits. Additionally, the measurement capabilities of the \emph{Fermi}-LAT~affect the \emph{VHE-HEIndex}~~criterion and hence the upper limits. While the 2FGL does not quote the systematic errors on the individual spectral indices, it gives a number of sources of systematic errors: the effective area, the diffuse emission model, and the handling of front and back converted events. The systematic error on the effective area is estimated to be between 5\,\% and 10\,\%, while the errors on the diffuse emission model mainly affect sources inside the galactic plane. Furthermore, the isotropic emission for front and back converted events is assumed to be equal. This leads to an underestimation of the flux below 400\,MeV and might produce harder source spectra. As harder spectra in the \emph{Fermi}-LAT~band weaken the upper limits, the results derived here can, again, be regarded as conservative. 
\section{Summary and Conclusions} \label{sec:concl} In this paper, new upper limits on the EBL density over a wide wavelength range from the optical to the far infrared are derived, utilizing the EBL attenuation of HE and VHE $\gamma$-rays~from distant AGN. A large number of possible EBL realizations is investigated, allowing for possible features from, e.g., the first stars. Evolution of the EBL density with redshift is taken into account in the calculations using a phenomenological prescription \citep[see e.g.][]{2008IJMPD..17.1515R}. A large sample of VHE spectra consisting of 23 spectra from 20 different sources with redshifts ranging from $z = 0.031$ to $0.536$ is used in the analysis. The VHE spectra are corrected for absorption and subsequently investigated for their physical feasibility. Two basic criteria are examined: (1) concavity of the high energy part of the spectrum spanning from HE to VHE and (2) total integral flux in the VHE, a novel way to probe the EBL density. For the former criterion, spectra from the \emph{Fermi}-LAT~at HE are used as a conservative upper limit, combined with criteria on the overall VHE concavity. This is a more conservative argument than a theoretically motivated bound on the intrinsic spectral index at VHE of, say, $\Gamma = 1.5$. This value, used in previous studies, is under debate, as a harder index is possible, for instance, if the underlying population of relativistic electrons is very narrow \citep{2006MNRAS.368L..52K,2009MNRAS.399L..59T,2011arXiv1106.4201L}, in the case of internal photon absorption \citep{2008MNRAS.387.1206A}, or in proton-synchrotron models \citep[e.g.][]{2000NewA....5..377A,2011ApJ...738..157Z}. For the latter criterion, the expected cascade emission is investigated and, additionally, the total intrinsic luminosity is compared to the Eddington luminosity of the AGN. Limits on the EBL density are derived using each of the criteria individually and for combinations of the criteria. 
In addition, the influence of individual data sets is tested. The obtained constraints reach from 0.4\,\murm m to 100\,\murm m and are below 5\,nW\,m$^{-2}$\,sr$^{-1}$ between 8\,\murm m and 31\,\murm m even though more conservative criteria are applied and the evolution of the EBL with redshift is accounted for. In the optical, the EBL density is limited below 24\,nW\,m$^{-2}$\,sr$^{-1}$. The limits point to a low level of the EBL density from near- to far-infrared wavelengths, as also predicted by the models of \citet{2010A&A...515A..19K} and \citet{2011MNRAS.410.2556D}, in accordance with MR07. Furthermore, the constraints exclude the direct measurements of \citet{2005ApJ...626...31M}. Certain mechanisms, however, are discussed in the literature that effectively reduce the attenuation of $\gamma$-rays~due to pair production. For instance, if cosmic rays produced in AGN are not deflected strongly in the intergalactic magnetic field they could interact with the EBL and form VHE $\gamma$-rays~that contribute to the VHE spectrum \citep{2010APh....33...81E,2010PhRvL.104n1102E,2011ApJ...731...51E}. Other suggestions are more exotic as they invoke the conversion of photons into axion like particles \citep[e.g.][]{2009MNRAS.394L..21D,2009JCAP...12..004M} or the violation of Lorentz invariance \citep[e.g.][]{2008PhRvD..78l4010J}. Future simultaneous observations of extragalactic blazars with the \emph{Fermi}-LAT~and IACTs have the potential to further constrain the EBL density. \begin{acknowledgements} MM would like to thank the state excellence cluster ``Connecting Particles with the Cosmos'' at the University of Hamburg. The authors would also like to thank the anonymous referee for helpful comments improving the manuscript. \end{acknowledgements}
\section{The results} \bigskip \subsection{Computation of characters on compact elements} For any reductive group defined over $F$ or ${\mathbb F}_{q}$ (the residue field of $F$), denoted $H$, we write $\mathfrak{h}$, in Gothic type, for its Lie algebra. Fix a continuous character $\psi_{F}:F\to {\mathbb C}^{\times}$ with conductor the maximal ideal $\varpi \mathfrak{o}$. For $\sharp=iso$ or $an$, the Lie algebra $\mathfrak{g}_{\sharp}(F)$ identifies naturally with a subspace of the algebra of endomorphisms of $V_{\sharp}$. Thus $trace (XY)$ is well defined for $X,Y\in \mathfrak{g}_{\sharp}(F)$. We introduce a Fourier transform $f\mapsto \hat{f}$ on $C_{c}^{\infty}(\mathfrak{g}_{\sharp}(F))$ by the usual formula $$\hat{f}(X)=\int_{\mathfrak{g}_{\sharp}(F)}f(Y)\psi_{F}(trace(XY))\,dY.$$ The measure $dY$ is the self-dual Haar measure on $\mathfrak{g}_{\sharp}(F)$, that is, the one for which $\hat{\hat{f}}(X)=f(-X)$ for all $f$, $X$. The exponential map $exp$ is well defined on a neighborhood of $0$ in $\mathfrak{g}_{\sharp}(F)$, with values in a neighborhood of $1$ in $G_{\sharp}(F)$. We equip $G_{\sharp}(F)$ with the Haar measure for which the Jacobian of the exponential equals $1$ at the point $0$. Let $x\in G_{\sharp}(F)$ be a strongly regular compact element (cf. \cite{W5} 1.2). Let $f\in C_{c}^{\infty}(G_{\sharp}(F))$. We define the orbital integral $$J(x,f)=D^{G_{\sharp}}(x)^{1/2}\int_{G_{\sharp}(F)}f(g^{-1}xg)\,dg,$$ where $D^{G_{\sharp}}$ is the usual Weyl discriminant. If now $f=(f_{iso},f_{an})\in C_{c}^{\infty}(G_{iso}(F))\oplus C_{c}^{\infty}(G_{an}(F))$ and $x\in G_{\sharp}(F)$ is as above, we set $J(x,f)=J(x,f_{\sharp})$. A space ${\cal R}^{par}$ was defined in \cite{W5} 1.5. We define a linear map $\Psi:{\cal R}^{par}\to C_{c}^{\infty}(G_{iso}(F))\oplus C_{c}^{\infty}(G_{an}(F))$ as follows. 
By linearity, it suffices to fix $\sharp=iso$ or $an$ and $(n',n'')\in D_{\sharp}(n)$ and to define the map on the component $C'_{n'}\otimes C''_{n'',\sharp}$ of ${\cal R}^{par}$. So let $\varphi\in C'_{n'}\otimes C''_{n'',\sharp}$. It is a function on ${\bf SO}(2n'+1;{\mathbb F}_{q})\times {\bf O}(2n'')_{\sharp}({\mathbb F}_{q})$. We introduce the subgroup $K_{n',n''}^{\pm}$ of $G_{\sharp}(F)$, cf. \cite{W5} 1.2. The group above identifies with $K_{n',n''}^{\pm}/K_{n',n''}^{u}$. Then $\varphi$ can be viewed as a function on $K_{n',n''}^{\pm}$ that is invariant under $K_{n',n''}^{u}$. We extend it by $0$ outside $K_{n',n''}^{\pm}$ and multiply it by $mes(K_{n',n''}^{\pm})^{-1}$. We obtain an element of $C_{c}^{\infty}(G_{\sharp}(F))$, which is one of the components of $\Psi(\varphi)$. The other component is zero. Let $\sharp=iso$ or $an$ and let $\pi\in Irr_{unip,\sharp}$, cf. \cite{W5} 1.3. For $f\in C_{c}^{\infty}(G_{\sharp}(F))$, we define the operator $\pi(f)$. It has finite rank, hence a trace. By a theorem of Harish-Chandra, there exists a locally integrable function $\Theta_{\pi}$ on $G_{\sharp}(F)$, locally constant on the strongly regular elements, such that $$trace(\pi(f))=\int_{G_{\sharp}(F)}\Theta_{\pi}(x)f(x)\,dx$$ for all $f$. Set $f_{\pi}=\Psi\circ proj_{cusp}\circ Res(\pi)$, with the notation of \cite{W5} 1.5. \ass{Proposition}{Let $x\in G_{\sharp}(F)$ be a strongly regular compact element. We have the equality $$D^{G_{\sharp}}(x)^{1/2}\overline{\Theta_{\pi}(x)}=J(x,f_{\pi}).$$} This follows from \cite{MW} Theorem 1.9 and Lemma 4.2. \subsection{A transfer result for functions} For $\sharp=iso$ or $an$ and $f\in C_{c}^{\infty}(G_{\sharp}(F))$, we say that $f$ is cuspidal if and only if its orbital integrals vanish at every strongly regular, non-elliptic point of $G_{\sharp}(F)$. 
We denote by $C_{cusp}^{\infty}(G_{\sharp}(F))$ the space of cuspidal functions. The map $\Psi:{\cal R}^{par}\to C_{c}^{\infty}(G_{iso}(F))\oplus C_{c}^{\infty}(G_{an}(F))$ introduced in 1.1 sends the space ${\cal R}^{par}_{cusp}$ of \cite{W5} 1.5 into $C_{cusp}^{\infty}(G_{iso}(F))\oplus C_{cusp}^{\infty}(G_{an}(F))$. For an element $f$ of the latter space, we denote by $f_{iso}$ and $f_{an}$ its two components. We must adapt the usual definitions of endoscopy to our setting, where we work simultaneously with our two groups $G_{iso}$ and $G_{an}$. For $x,y\in G_{iso}(F)\cup G_{an}(F)$, we say that $x$ and $y$ are conjugate if and only if there exists $\sharp=iso$ or $an$ such that $x,y\in G_{\sharp}(F)$ and $x$ and $y$ are conjugate by an element of $G_{\sharp}(F)$. There is a bijective correspondence between the stable conjugacy classes in $G_{iso}(F)$ consisting of elliptic, strongly regular elements and the stable conjugacy classes in $G_{an}(F)$ consisting of elliptic, strongly regular elements. We call a total stable conjugacy class (consisting of elliptic, strongly regular elements) the union of such a class in $G_{iso}(F)$ and of the class corresponding to it in $G_{an}(F)$. It is known, moreover, that each of these classes contains the same number of (ordinary) conjugacy classes. Let $f\in C_{cusp}^{\infty}(G_{iso}(F))\oplus C_{cusp}^{\infty}(G_{an}(F))$ and let $x\in G_{iso}(F)$ be a strongly regular elliptic element. We define the stable orbital integral $S(x,f)$ by $$S(x,f)=\sum_{y}J(y,f) ,$$ where $y$ runs over the elements of the total stable conjugacy class of $x$, up to conjugacy. We denote by $z(x)$ the number of terms $y$ appearing in this sum. 
For $\sharp=iso$ or $an$ and $f\in C_{c}^{\infty}(G_{\sharp}(F))$, we define $S(x,f)$ by viewing $f$ as an element of $C_{cusp}^{\infty}(G_{iso}(F))\oplus C_{cusp}^{\infty}(G_{an}(F))$ whose component on $G_{\sharp}(F)$ is the given function and whose other component is zero. Let $(n_{1},n_{2})\in D(n)$. This pair determines (up to conjugacy) an element $h\in Sp(2n;{\mathbb C})$ with $h^2=1$ and an endoscopic datum of $G_{iso}$ or $G_{an}$ whose endoscopic group is $G_{n_{1},iso}\times G_{n_{2},iso}$, cf. \cite{W5} 2.1. We denote by $\Delta_{h}$ the transfer factor relative to this datum (it is uniquely determined). Let $f\in C_{cusp}^{\infty}(G_{iso}(F))\oplus C_{cusp}^{\infty}(G_{an}(F))$ and let $(x_{1},x_{2})\in G_{n_{1},iso}(F)\times G_{n_{2},iso}(F)$ be an $n$-regular pair of elliptic elements. By $n$-regular we mean that the stable conjugacy class of $(x_{1},x_{2})$ corresponds by endoscopy to a strongly regular stable conjugacy class in $G_{iso}(F)$. We set $$J^{ endo}(x_{1},x_{2},f)=\sum_{x}\Delta_{h}((x_{1},x_{2}),x)J(x,f) ,$$ where $x$ runs over the strongly regular elliptic elements of $G_{iso}(F)\cup G_{an}(F)$, up to conjugacy. Of course, we may restrict to the elements of the total stable conjugacy class corresponding to $(x_{1},x_{2})$ (outside of it, the transfer factors vanish). The sum is therefore finite. In the case where $n_{1}=n$ and $n_{2}=0$, in which case $(x_{1},x_{2})$ reduces to a single element $x=x_{1}$, we recover the previous definition: $J^{endo}(x_{1},x_{2},f)=S(x,f)$. As above, for $\sharp=iso$ or $an$ and $f\in C_{c}^{\infty}(G_{\sharp}(F))$, we define $J^{ endo}(x_{1},x_{2},f)$ by viewing $f$ as an element of $C_{cusp}^{\infty}(G_{iso}(F))\oplus C_{cusp}^{\infty}(G_{an}(F))$ one of whose components is zero. 
Let $(n_{1},n_{2})\in D(n)$, $f\in C_{cusp}^{\infty}(G_{iso}(F))\oplus C_{cusp}^{\infty}(G_{an}(F))$, $f_{1}\in C_{cusp}^{\infty}(G_{n_{1},iso}(F))\oplus C_{cusp}^{\infty}(G_{n_{1},an}(F))$ and $f_{2}\in C_{cusp}^{\infty}(G_{n_{2},iso}(F))\oplus C_{cusp}^{\infty}(G_{n_{2}, an}(F))$. We say that $f_{1}\otimes f_{2}$ is a transfer of $f$ relative to $(n_{1},n_{2})$ (or to $h$) if and only if the equality $$S(x_{1},f_{1})S(x_{2},f_{2})=J^{endo}(x_{1},x_{2},f)$$ holds for every $n$-regular pair $(x_{1},x_{2})\in G_{n_{1},iso}(F)\times G_{n_{2},iso}(F)$ of elliptic elements. Let $(r',r'',N',N'')\in \Gamma$, cf. \cite{W5} 1.8, and let $\varphi'\in {\mathbb C}[\hat{W}_{N'}]_{cusp}$ and $\varphi''\in {\mathbb C}[\hat{W}_{N''}]_{cusp}$. For a real number $x$, we denote by $[x]$ its integer part. Set $$r'_{1}=sup([\frac{r'+r''}{2}],-[\frac{r'+r''}{2}]-1),\,\, r''_{1}=\vert [\frac{r'+r''+1}{2}]\vert ,$$ $$r'_{2}=sup([\frac{r'-r''}{2}],-[\frac{r'-r''}{2}]-1),\,\, r''_{2}=\vert [\frac{r'-r''+1}{2}]\vert .$$ $$n_{1}=r_{1}^{'2}+r'_{1}+r^{''2}_{1}+N',\,\,n_{2}=r_{2}^{'2}+r'_{2}+r^{''2}_{2}+N'',$$ $$(N'_{1},N''_{1})=(N',0),\,\, (N'_{2},N''_{2})=(N'',0),$$ $$\gamma_{1}=(r'_{1},r''_{1},N'_{1},N''_{1}),\,\, \gamma_{2}=(r'_{2},r''_{2},N'_{2},N''_{2}).$$ By relation (2) of paragraph 4 below, the definition of $n_{1}$ and $n_{2}$ can be rewritten $$n_{1}=\frac{(r'+r'')^2+(r'+r''+1)^2-1}{4}+N',\,\, n_{2}=\frac{(r'-r'')^2+(r'-r''+1)^2-1}{4}+N''.$$ One checks that $(n_{1},n_{2})\in D(n)$, that $\gamma_{1}\in \Gamma_{n_{1}}$ and that $\gamma_{2}\in \Gamma_{n_{2}}$. The element $\varphi'$ may be viewed as an element of ${\cal R}_{cusp}(\gamma_{1})$ and the element $\varphi''$ as an element of ${\cal R}_{cusp}(\gamma_{2})$. Set $\varphi=\varphi'\otimes \varphi''$. It is an element of ${\cal R}_{cusp}(\gamma)$. 
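The rewriting of $n_{1}$ and $n_{2}$ can also be checked directly. As a sketch (the auxiliary variable $a$ below is not used in the text), setting $a=r'+r''$, a case check for $a$ even and odd, of either sign, gives:

```latex
% With r'_1 = sup([a/2], -[a/2]-1) and r''_1 = |[(a+1)/2]|, where a = r'+r'',
% one verifies case by case (a even or odd, positive or negative) that the
% sup/floor expressions collapse to the triangular number a(a+1)/2, e.g.
% a = 3: r'_1 = 1, r''_1 = 2, 1 + 1 + 4 = 6 = 3*4/2, and
% a = -2: r'_1 = 0, r''_1 = 1, 0 + 0 + 1 = 1 = (-2)(-1)/2.
\begin{equation*}
  r_{1}'^{2}+r'_{1}+r_{1}''^{2}
  =\frac{a(a+1)}{2}
  =\frac{(r'+r'')^{2}+(r'+r''+1)^{2}-1}{4},
  \qquad a=r'+r''.
\end{equation*}
```

The same computation with $a=r'-r''$ gives the rewritten expression for $n_{2}$.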
Set $$f=\Psi\circ k\circ\rho\iota(\varphi),\,\, f_{1}=\Psi\circ k\circ\rho\iota(\varphi'),\,\, f_{2}=\Psi\circ k\circ\rho\iota(\varphi''),$$ with the notation of \cite{W5} 1.9, 1.10. Of course, in the last two definitions it is the maps $\Psi$, $k$ and $\rho\iota$ relative to $n_{1}$ and $n_{2}$ that occur. \ass{Proposition}{(i) Under the above hypotheses, $f_{1}\otimes f_{2}$ is a transfer of $f$ relative to $(n_{1},n_{2})$. (ii) For $(\bar{n}_{1},\bar{n}_{2})\in D(n)$ different from $(n_{1},n_{2})$, the transfer of $f$ relative to $(\bar{n}_{1},\bar{n}_{2})$ is zero.} We will prove this proposition in Section 3. We take it for granted in the remainder of the present section. \bigskip \subsection{Transfer of elliptic representations} Let $(n_{1},n_{2})\in D(n)$, to which we associate an element $h\in Sp(2n;{\mathbb C})$ with $h^2=1$. Let $(\lambda_{1},s_{1},1)\in \mathfrak{St}_{n_{1},unip,disc}$ and $(\lambda_{2},s_{2},1)\in \mathfrak{St}_{n_{2},unip,disc}$, cf. \cite{W5} 2.4. As in \cite{W5} 2.1, we associate to these data a triple $(\lambda,s,h)\in \mathfrak{Endo}_{unip-disc}$. \ass{Theorem}{We have the equalities $transfert_{h,iso}(\Pi_{iso}(\lambda_{1},s_{1},1)\otimes \Pi_{iso}(\lambda_{2},s_{2},1))=\Pi_{iso}(\lambda,s,h)$; $transfert_{h,an}(\Pi_{iso}(\lambda_{1},s_{1},1)\otimes \Pi_{iso}(\lambda_{2},s_{2},1))=-\Pi_{an}(\lambda,s,h)$.} Proof. Denote by $\Theta_{1,iso}$, $\Theta_{1,an}$, $\Theta_{2,iso}$, $\Theta_{2,an}$, $\Theta_{iso}$, $\Theta_{an}$ the character-functions of the representations $\Pi_{iso}(\lambda_{1},s_{1},1)$, $\Pi_{an}(\lambda_{1},s_{1},1)$, $\Pi_{iso}(\lambda_{2},s_{2},1)$, $\Pi_{an}(\lambda_{2},s_{2},1)$, $\Pi_{iso}(\lambda,s,h)$, $\Pi_{an}(\lambda,s,h)$. The equalities to be proved translate into equalities between these characters. All the representations involved are elliptic. 
By \cite{Ar2} Theorem 6.2, it therefore suffices to prove these equalities after restriction to the elliptic elements of the various groups. Let us write them out. Set $sgn_{iso}=1$ and $sgn_{an}=-1$. For $\sharp=iso$ or $an$ and for $x\in G_{\sharp}(F)$ strongly regular and elliptic, we must have $$(1) \qquad D^{G_{\sharp}}(x)^{1/2}\Theta_{\sharp}(x)=sgn_{\sharp}\sum_{x_{1},x_{2}}D^{G_{n_{1},iso}}(x_{1})^{1/2}D^{G_{n_{2},iso}}(x_{2})^{1/2}\Delta_{h}((x_{1},x_{2}),x)$$ $$\Theta_{1,iso}(x_{1})\Theta_{2,iso}(x_{2}),$$ where $x_{1}$, resp. $x_{2}$, runs over the strongly regular elliptic elements of $G_{n_{1},iso}(F)$, resp. $G_{n_{2},iso}(F)$, up to stable conjugacy. Here again, we may restrict to the pairs $(x_{1},x_{2})$ corresponding by endoscopy to the stable conjugacy class of $x$. The sum is finite. Set $f_{1}=\Psi\circ proj_{cusp}\circ Res\circ D\circ \Pi(\lambda_{1},s_{1},1)$, $f_{2}=\Psi\circ proj_{cusp}\circ Res\circ D\circ \Pi(\lambda_{2},s_{2},1)$, $f=\Psi\circ proj_{cusp}\circ Res\circ D\circ \Pi(\lambda,s,h)$. For $\sharp=iso$ or $an$, we have the equality $proj_{cusp}\circ Res\circ D\circ \Pi_{\sharp}(\lambda,s,h)=(-1)^nsgn_{\sharp}proj_{cusp}\circ Res\circ \Pi_{\sharp}(\lambda,s,h)$, cf. \cite{MW} Corollary 5.7. By Proposition 1.1, we therefore have the equality $$D^{G_{\sharp}}(x)^{1/2}\overline{\Theta_{\sharp}(x)}=(-1)^n sgn_{\sharp}J(x,f ).$$ We have analogous equalities for $\Theta_{1}(x_{1})$ and $\Theta_{2}(x_{2})$. These last equalities imply: (2) let $j=1,2$, $\sharp_{j}=iso$ or $an$, $x_{j}\in G_{n_{j},iso}(F)$ and $y_{j}\in G_{n_{j},\sharp_{j}}(F)$; suppose that $x_{j}$ and $y_{j}$ are strongly regular and elliptic and that their stable conjugacy classes are equal if $\sharp_{j}=iso$, or correspond to each other if $\sharp_{j}=an$; then $J(x_{j},f_{j})=J(y_{j},f_{j})$. 
This reflects the fact that $\Theta_{j,iso}$ is stable and that $-\Theta_{j,an}$ is its transfer, cf. \cite{W5} 2.1. Because the transfer factors take the values $\pm 1$ and are therefore real, equality (1) can be rewritten as $$(3) \qquad J(x,f)=\sum_{x_{1},x_{2}}\Delta_{h}((x_{1},x_{2}),x) J(x_{1},f_{1})J(x_{2},f_{2})$$ for every $x\in G_{iso}(F)\cup G_{an}(F)$ strongly regular and elliptic. The basic property of endoscopy is that the orbital integrals $J(x,f)$ are determined by the endoscopic integrals $J^{endo}(\bar{x}_{1},\bar{x}_{2},f)$ as $(\bar{n}_{1},\bar{n}_{2})$ runs over $D(n)$ and $(\bar{x}_{1},\bar{x}_{2})\in G_{\bar{n}_{1},iso}(F)\times G_{\bar{n}_{2},iso}(F)$ runs over the $n$-regular pairs of elliptic elements. Hence (3) is equivalent to the statement that, for all data $(\bar{n}_{1},\bar{n}_{2})$ and $(\bar{x}_{1},\bar{x}_{2})$ as above, we have the equality $$J^{endo}(\bar{x}_{1},\bar{x}_{2},f)=\sum_{x}\Delta_{\bar{h}}((\bar{x}_{1},\bar{x}_{2}),x)J(x,f)$$ $$=\sum_{x}\Delta_{\bar{h}}((\bar{x}_{1},\bar{x}_{2}),x)\sum_{x_{1},x_{2}}\Delta_{h}((x_{1},x_{2}),x) J(x_{1},f_{1})J(x_{2},f_{2}),$$ where the sum runs over the strongly regular elliptic elements $x\in G_{iso}(F)\cup G_{an}(F)$ modulo conjugacy. We have denoted by $\bar{h}$ the element of $Sp(2n;{\mathbb C})$ with $\bar{h}^2=1$ associated to $(\bar{n}_{1},\bar{n}_{2})$. We rewrite the above equality: $$J^{endo}(\bar{x}_{1},\bar{x}_{2},f)=\sum_{x_{1},x_{2}} J(x_{1},f_{1})J(x_{2},f_{2})\sum_{x}\Delta_{\bar{h}}((\bar{x}_{1},\bar{x}_{2}),x) \Delta_{h}((x_{1},x_{2}),x).$$ The inner sum over $x$ vanishes unless $(\bar{n}_{1},\bar{n}_{2})=(n_{1},n_{2})$ and, up to stable conjugacy, $(\bar{x}_{1},\bar{x}_{2})=(x_{1},x_{2})$. If these conditions are satisfied, the sum equals the number of $x$, taken up to conjugacy, occurring in it. 
This is the number $z(x)$ defined in 1.2, for any one of these $x$. Now one easily checks the equality $z(x)=z(\bar{x}_{1})z(\bar{x}_{2})$. In the case where $(\bar{n}_{1},\bar{n}_{2})=(n_{1},n_{2})$, we therefore obtain $$J^{endo}(\bar{x}_{1},\bar{x}_{2},f)=z(\bar{x}_{1})z(\bar{x}_{2})J(\bar{x}_{1},f_{1})J(\bar{x}_{2},f_{2}).$$ But property (2) implies that, for $j=1,2$, $z(\bar{x}_{j})J(\bar{x}_{j},f_{j})=S(\bar{x}_{j},f_{j})$. The above equality becomes $$J^{endo}(\bar{x}_{1},\bar{x}_{2},f)=S(\bar{x}_{1},f_{1})S(\bar{x}_{2},f_{2}).$$ In other words, $f_{1}\otimes f_{2}$ is a transfer of $f$. To summarize: equality (3) is equivalent to the assertions (4)(a) $f_{1}\otimes f_{2}$ is a transfer of $f$ relative to $(n_{1},n_{2})$; (4)(b) for $(\bar{n}_{1},\bar{n}_{2})\in D(n)$ different from $(n_{1},n_{2})$, the transfer of $f$ relative to $(\bar{n}_{1},\bar{n}_{2})$ is zero. Let us compute $f$. By Lemma 2.3 of \cite{W5} and because ${\cal F}^{par}$ is an involution, we have $proj_{cusp}={\cal F}^{par}\circ proj_{cusp}\circ \mathfrak{F}^{par}$. By the definition of $\mathfrak{F}^{par}$, we also have $\mathfrak{F}^{par}\circ Res\circ D=Res\circ D\circ {\cal F}$. Hence $$f=\Psi\circ {\cal F}^{par}\circ proj_{cusp}\circ Res\circ D\circ {\cal F}\circ \Pi(\lambda,s,h)=\Psi\circ {\cal F}^{par}\circ proj_{cusp}\circ Res\circ D\circ \Pi(\lambda,h,s).$$ The decomposition of $\lambda$ associated to $h$ is $\lambda=\lambda_{1}\cup \lambda_{2}$. 
It follows from the definitions that $$\Pi(\lambda,h,s)=\sum_{\epsilon_{1},\epsilon_{2}}\pi(\lambda_{1},\epsilon_{1},\lambda_{2},\epsilon_{2})\epsilon_{1}(s_{1})\epsilon_{2}(s_{2}),$$ where $(\epsilon_{1},\epsilon_{2})$ runs over $\{\pm 1\}^{Jord_{bp}(\lambda_{1})}\times \{\pm 1\}^{Jord_{bp}(\lambda_{2})}$ and the terms $\epsilon_{1}(s_{1})$ and $\epsilon_{2}(s_{2})$ are defined by interpreting $\epsilon_{1}$ and $\epsilon_{2}$ as elements of ${\bf Z}(\lambda_{1},1)^{\vee}$ and ${\bf Z}(\lambda_{2},1)^{\vee}$, cf. \cite{W5} 1.3. Proposition \cite{W5} 1.11 computes $Res\circ D(\pi(\lambda_{1},\epsilon_{1},\lambda_{2},\epsilon_{2}))$. {\bf Change of notation.} In the formula of that proposition there appears an isomorphism $j$ between two spaces, one of them being the space ${\cal R}$. Distinguishing these two spaces was useful to us in the second section of \cite{W5}. It no longer serves any purpose here. To simplify, we identify the two spaces in question via $j$ and drop $j$ from the notation. \bigskip We obtain $$Res\circ D\circ \Pi(\lambda,h,s)=\sum_{\epsilon_{1},\epsilon_{2}}\epsilon_{1}(s_{1})\epsilon_{2}(s_{2})Rep\circ \rho\iota(\boldsymbol{\rho}_{\lambda_{1},\epsilon_{1}}\otimes \boldsymbol{\rho}_{\lambda_{2},\epsilon_{2}}).$$ The maps $Rep$, $k$, ${\cal F}^{par}$ and $\rho\iota$ commute with the cuspidal projections. Moreover ${\cal F}^{par}\circ Rep=k$. It follows that $$f=\sum_{\epsilon_{1},\epsilon_{2}}\epsilon_{1}(s_{1})\epsilon_{2}(s_{2})\Psi\circ k\circ \rho\iota\circ proj_{cusp}(\boldsymbol{\rho}_{\lambda_{1},\epsilon_{1}}\otimes \boldsymbol{\rho}_{\lambda_{2},\epsilon_{2}}).$$ An analogous computation applies to $f_{1}$ and $f_{2}$. This time we have ${\cal F}(\lambda_{j},s_{j},1)=(\lambda_{j},1,s_{j})$ for $j=1,2$, and the decomposition of $\lambda_{j}$ associated to $1$ is $\lambda_{j}=\lambda_{j}\cup \emptyset$. In other words, the second components of the above formula disappear. 
We obtain $$f_{1}=\sum_{\epsilon_{1}}\epsilon_{1}(s_{1})\Psi\circ k\circ \rho\iota\circ proj_{cusp}(\boldsymbol{\rho}_{\lambda_{1},\epsilon_{1}}),$$ $$f_{2}=\sum_{\epsilon_{2}}\epsilon_{2}(s_{2})\Psi\circ k\circ \rho\iota\circ proj_{cusp}(\boldsymbol{\rho}_{\lambda_{2},\epsilon_{2}}),$$ where the representations $\boldsymbol{\rho}_{\lambda_{1},\epsilon_{1}}$ and $\boldsymbol{\rho}_{\lambda_{2},\epsilon_{2}}$ are regarded as tensor products whose second factor is trivial. For $(\epsilon_{1},\epsilon_{2})\in \{\pm 1\}^{Jord_{bp}(\lambda_{1})}\times \{\pm 1\}^{Jord_{bp}(\lambda_{2})}$, set $$f_{\epsilon_{1},\epsilon_{2}}=\Psi\circ k\circ \rho\iota\circ proj_{cusp}(\boldsymbol{\rho}_{\lambda_{1},\epsilon_{1}}\otimes \boldsymbol{\rho}_{\lambda_{2},\epsilon_{2}}),$$ $$f_{\epsilon_{1}}=\Psi\circ k\circ \rho\iota\circ proj_{cusp}(\boldsymbol{\rho}_{\lambda_{1},\epsilon_{1}}),$$ $$f_{\epsilon_{2}}=\Psi\circ k\circ \rho\iota\circ proj_{cusp}(\boldsymbol{\rho}_{\lambda_{2},\epsilon_{2}}).$$ Properties (4) follow from the following properties, for all $(\epsilon_{1},\epsilon_{2})$:

(5)(a) $f_{\epsilon_{1}}\otimes f_{\epsilon_{2}}$ is a transfer of $f_{\epsilon_{1},\epsilon_{2}}$ relative to $(n_{1},n_{2})$;

(5)(b) for $(\bar{n}_{1},\bar{n}_{2})\in D(n)$ different from $(n_{1},n_{2})$, the transfer of $f_{\epsilon_{1},\epsilon_{2}}$ relative to $(\bar{n}_{1},\bar{n}_{2})$ is zero.

Fix $\epsilon_{1}$ and $\epsilon_{2}$. To $(\lambda_{1},\epsilon_{1})$ and $(\lambda_{2},\epsilon_{2})$ are associated pairs $(k_{1},N_{1})$ and $(k_{2},N_{2})$, and then a quadruple $\gamma=(r',r'',N_{1},N_{2})\in \Gamma$, cf. \cite{W5} 1.11. The element $proj_{cusp}(\boldsymbol{\rho}_{\lambda_{1},\epsilon_{1}}\otimes \boldsymbol{\rho}_{\lambda_{2},\epsilon_{2}})$ belongs to ${\cal R}_{cusp}(\gamma)$. Similarly, replacing $n$ by $n_{j}$ for $j=1,2$, to $(\lambda_{j},\epsilon_{j})$ are associated analogous terms $\gamma_{1}$ and $\gamma_{2}$.
As remarked above, one must regard $(\lambda_{j},\epsilon_{j})$ as the first pair of a quadruple $(\lambda_{j}^+,\epsilon_{j}^+,\lambda_{j}^-,\epsilon_{j}^-)$ whose second pair is trivial. One sees that $\gamma_{j}$ is of the form $(r'_{j},r''_{j},N_{j},0)$. The element $proj_{cusp}(\boldsymbol{\rho}_{\lambda_{j},\epsilon_{j}})$ belongs to ${\cal R}_{cusp}(\gamma_{j})$. Examining the recipe of \cite{W5} 1.11, which computes $r',r'',r'_{1},r''_{1},r'_{2},r''_{2}$ in terms of $k_{1}$ and $k_{2}$, one sees that $r'_{1},r''_{1},r'_{2},r''_{2}$ are deduced from $(r',r'')$ by the formulas of 1.2. We are then in the situation of that paragraph, the functions $\varphi'$ and $\varphi''$ being respectively $proj_{cusp}(\boldsymbol{\rho}_{\lambda_{1},\epsilon_{1}})$ and $proj_{cusp}(\boldsymbol{\rho}_{\lambda_{2},\epsilon_{2}})$. Proposition 1.2 asserts that properties (5)(a) and (5)(b) hold. This completes the proof. $\square$

\bigskip

\subsection{Proof of Theorem 2.1 of \cite{W5}}

Let $h\in Sp(2n;{\mathbb C})$ with $h^2=1$, to which is associated a pair $(n_{1},n_{2})\in D(n)$. For $j=1,2$, let $(\lambda_{j},s_{j},1)\in \mathfrak{St}_{n_{j},tunip}$. For $j=1,2$, let $\cup_{b\in B_{j}}\{s_{j,b},s_{j,b}^{-1}\}$ denote the set of eigenvalues of $s_{j}$ other than $\pm 1$, and let $$\lambda_{j}=\lambda_{j}^+\cup \lambda^-_{j}\cup\bigcup_{b\in B_{j}}(\lambda_{j,b}\cup\lambda_{j,b})$$ denote the decomposition of $\lambda_{j}$ associated with $s_{j}$, cf. \cite{W5} 2.1. Set $\lambda_{j,0}=\lambda_{j}^+\cup \lambda_{j}^-$, $n_{j,0}=S(\lambda_{j,0})/2$ and $m_{j,b}=S(\lambda_{j,b})$ for $b\in B_{j}$. Introduce a parabolic subgroup $P_{j}$ of $G_{n_{j},iso}$ with Levi component $$M_{j}=(\prod_{b\in B_{j}}GL(m_{j,b}))\times G_{n_{j,0},iso}.$$ For every $b\in B_{j}$, let $\chi_{j,b}$ be the unramified character of $F^{\times}$ such that $\chi_{j,b}(\varpi)=s_{j,b}$.
The element $s_{j}$ restricts to an element $s_{j,0}\in Sp(2n_{j,0};{\mathbb C})$ such that $\lambda_{j,0}=\lambda_{j}^+\cup \lambda_{j}^-$ is the decomposition of $\lambda_{j,0}$ associated with this element. Introduce the representation of $M_{j}(F)$ $$\sigma_{j}=(\otimes_{b\in B_{j}}(\chi_{j,b}\circ det)st_{\lambda_{j,b}})\otimes \Pi_{iso}(\lambda_{j,0},s_{j,0},1)$$ (recall that $st_{m}$ is the Steinberg representation of $GL(m;F)$). It is known that $$\Pi_{iso}(\lambda_{j},s_{j},1)=Ind_{P_{j}}^{G_{n_{j},iso}}(\sigma_{j}).$$ Introduce the triple $(\lambda,s,h)$ associated with $h$, $(\lambda_{1},s_{1},1)$ and $(\lambda_{2},s_{2},1)$. Set $n_{0}=n_{1,0}+n_{2,0}$. Let $\sharp=iso$ or $an$. In the case $\sharp=an$, assume provisionally that $n_{0}\geq1$. We introduce a parabolic subgroup $P$ of $G_{\sharp}$ with Levi component $$M=(\prod_{j=1,2,b\in B_{j}}GL(m_{j,b}))\times G_{n_{0},\sharp}.$$ Since the element $h$ commutes with $s$, it restricts to an element $h_{0}\in Sp(2n_{0};{\mathbb C})$, and the representation $\Pi_{\sharp}(\lambda_{0},s_{0},h_{0})$ of $G_{n_{0},\sharp}(F)$ is well defined. We introduce the representation of $M(F)$ $$\sigma=(\otimes_{j=1,2,b\in B_{j}}(\chi_{j,b}\circ det)st_{\lambda_{j,b}})\otimes \Pi_{\sharp}(\lambda_{0},s_{0},h_{0}).$$ One checks that $$\Pi_{\sharp}(\lambda,s,h)=Ind_{P}^{G_{\sharp}}(\sigma).$$ For $j=1,2$, the triple $(\lambda_{j,0},s_{j,0},1)$ belongs to $\mathfrak{St}_{n_{j,0},unip-quad}$. Theorem 1.3 implies that $$\Pi_{\sharp}(\lambda_{0},s_{0},h_{0})=sgn_{\sharp}transfert_{h_{0},\sharp}(\Pi_{iso}(\lambda_{1,0},s_{1,0},1)\otimes \Pi_{iso}(\lambda_{2,0},s_{2,0},1)).$$ The transfer $transfert_{h_{0},\sharp}$ extends to a transfer between the groups $M_{1}\times M_{2}$ and $M$ (the transfer being the identity on the components $GL(m_{j,b})$). For the latter, $\sigma_{1}\otimes \sigma_{2}$ transfers to $sgn_{\sharp}\sigma$.
But it is known that transfer commutes with induction. It then follows from the above descriptions that $$\Pi_{\sharp}(\lambda,s,h)=sgn_{\sharp}transfert_{h,\sharp}(\Pi_{iso}(\lambda_{1},s_{1},1)\otimes \Pi_{iso}(\lambda_{2},s_{2},1)).$$ In the special case where $\sharp=an$ and $n_{0}=0$, there is no longer a parabolic subgroup $P$. The right-hand side above vanishes, since it is the transfer of a representation induced from a parabolic subgroup that is not relevant for $G_{\sharp}$. The left-hand side also vanishes, since, by relation \cite{W5} 1.3(1), all the irreducible components of $\Pi(\lambda,s,h)$ live on the single group $G_{iso}(F)$. The equality above therefore also holds in this special case. It proves Theorem 2.1 of \cite{W5}. $\square$

\bigskip

\section{Three transfer lemmas for Lie algebras}

\subsection{The odd special orthogonal case}

This section and the next rely on computations already carried out in \cite{MW}. We have little desire to reproduce those computations, or the complicated definitions of the objects appearing in them. We will content ourselves with indicating the necessary references. In this section, we state transfer lemmas for Lie algebras. There are three cases: the odd special orthogonal case, the even special orthogonal case and the unitary case. The proofs are similar in the three cases (and much easier in the last one). We will only write out the one concerning the even special orthogonal case, which is the most delicate. In these three paragraphs, we forget the objects $n$, $G_{iso}$ and $G_{an}$ previously fixed, so as to free these notations. We will introduce new integers $n$, and we keep the hypothesis $p>6n+4$. In this paragraph, we consider an integer $n\geq1$ and an element $\eta\in F^{\times}/F^{\times2}$.
As in \cite{W5} 1.1, we construct the two quadratic spaces $(V_{iso},Q_{iso})$ and $(V_{an},Q_{an})$ defined over $F$ such that $dim_{F}(V_{iso})=dim_{F}(V_{an})=2n+1$ and $\eta(Q_{iso})=\eta(Q_{an})=\eta$. We denote by $G_{iso}$ and $G_{an}$ their special orthogonal groups. We consider four integers $r',r'',N',N''\in {\mathbb N}$ such that $$r^{'2}+r^{''2}+2N'+2N''=2n+1,$$ $$r'\equiv 1+val_{F}(\eta)\,\,mod\,\,2{\mathbb Z},\,\, r''\equiv val_{F}(\eta)\,\,mod\,\,2{\mathbb Z}.$$ We set $N=N'+N''$. In \cite{MW} 3.11 and 3.12 we defined a map $${\cal Q}(r',r'')^{Lie}\circ \rho_{N}^*\circ \iota_{N',N''}:{\mathbb C}[\hat{W}_{N'}]_{cusp}\otimes {\mathbb C}[\hat{W}_{N''}]_{cusp}\to C_{cusp}^{\infty}(\mathfrak{g}_{iso}(F))\oplus C_{cusp}^{\infty}(\mathfrak{g}_{an}(F))$$ (in \cite{MW}, the subscript $cusp$ was replaced by $ell$, but the meaning was the same). Recall that, for $m\in {\mathbb N}$, the conjugacy classes in $W_{m}$ are parametrized by the pairs of partitions $(\alpha,\beta)$ such that $S(\alpha)+S(\beta)=m$. For $w\in W_{m}$, let $\varphi_{w}$ denote the characteristic function of the conjugacy class of $w$. Then ${\mathbb C}[\hat{W}_{m}]_{cusp}$ has as a basis the $\varphi_{w}$, where $w$ runs, up to conjugation, over the elements whose class is parametrized by a pair of partitions of the form $(\emptyset,\beta)$. We fix elements $w'\in W_{N'}$ and $w''\in W_{N''}$ whose classes are parametrized by pairs of this form. We set $$f={\cal Q}(r',r'')^{Lie}\circ\rho_{N}^*\circ\iota_{N',N''}(\varphi_{w'}\otimes \varphi_{w''}).$$ Let $(n_{1},n_{2})\in D(n)$. To this pair is associated an endoscopic datum of $G_{iso}$ and $G_{an}$. We use the same definitions as in 1.2.
These descend to the Lie algebras, that is, there is a transfer from $ C_{cusp}^{\infty}(\mathfrak{g}_{iso}(F))\oplus C_{cusp}^{\infty}(\mathfrak{g}_{an}(F))$ to $$\left( C_{cusp}^{\infty}(\mathfrak{g}_{n_{1},iso}(F))\oplus C_{cusp}^{\infty}(\mathfrak{g}_{n_{1},an}(F))\right)\otimes \left( C_{cusp}^{\infty}(\mathfrak{g}_{n_{2},iso}(F))\oplus C_{cusp}^{\infty}(\mathfrak{g}_{n_{2},an}(F))\right).$$ We denote it $transfert_{n_{1},n_{2}}$. The transfer factor is uniquely defined. Formulas for it are given in \cite{W3} Proposition X.8. One should remove from those formulas the Weyl discriminants, which we have incorporated into the orbital integrals. The $\eta$ of Definition X.7 of \cite{W3} is not our present $\eta$; it is $(-1)^n\eta$. Fix $\eta_{1},\eta_{2}\in F^{\times}/F^{\times2}$ such that $\eta_{1}\eta_{2}=\eta$. Let $t'_{1}$ denote the element of the set $\{\frac{r'+r''+1}{2},\frac{r'+r''-1}{2}\}$ that has the same parity as $1+val_{F}(\eta_{1})$. Let $t''_{1}$ denote the other element. Let $t'_{2}$ denote the element of the set $\{\frac{\vert r'-r''\vert +1}{2},\frac{\vert r'-r''\vert -1}{2}\}$ that has the same parity as $1+val_{F}(\eta_{2})$. Let $t''_{2}$ denote the other element. Define two integers $n_{1}$ and $n_{2}$ by the formulas $$2n_{1}+1=t^{'2}_{1}+t^{''2}_{1}+2N',\,\,2n_{2}+1=t^{'2}_{2}+t^{''2}_{2}+2N'',$$ which are equivalent to $$n_{1}=\frac{(r'+r'')^2-1}{4}+N',\,\,n_{2}=\frac{(r'-r'')^2-1}{4}+N''.$$ One checks that $n_{1}+n_{2}=n$. For $j=1,2$, one can regard $G_{n_{j},iso}$ and $G_{n_{j},an}$ as the special orthogonal groups of quadratic spaces of discriminant $\eta_{j}$. One can therefore apply the same construction as above, where the pair $(N',N'')$ is replaced by $(N',0)$ if $j=1$ and by $(N'',0)$ if $j=2$.
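As a quick sanity check (our own verification, not part of the original argument), the equality $n_{1}+n_{2}=n$ follows directly from the defining relation $r^{'2}+r^{''2}+2N'+2N''=2n+1$:

```latex
\begin{align*}
n_{1}+n_{2}&=\frac{(r'+r'')^2-1}{4}+N'+\frac{(r'-r'')^2-1}{4}+N''\\
&=\frac{2r^{'2}+2r^{''2}-2}{4}+N=\frac{r^{'2}+r^{''2}-1}{2}+N=n,
\end{align*}
```

since $(r'+r'')^2+(r'-r'')^2=2r^{'2}+2r^{''2}$ and $r^{'2}+r^{''2}-1=2n-2N$.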
This allows us to define the elements $$f_{1}={\cal Q}(t'_{1},t''_{1})^{Lie}\circ\rho^*_{N'}\circ\iota_{N',0}(\varphi_{w'})\in C_{cusp}^{\infty}(\mathfrak{g}_{n_{1},iso}(F))\oplus C_{cusp}^{\infty}(\mathfrak{g}_{n_{1},an}(F)),$$ $$f_{2}={\cal Q}(t'_{2},t''_{2})^{Lie}\circ\rho^*_{N''}\circ\iota_{N'',0}(\varphi_{w''})\in C_{cusp}^{\infty}(\mathfrak{g}_{n_{2},iso}(F))\oplus C_{cusp}^{\infty}(\mathfrak{g}_{n_{2},an}(F)).$$ Let $sgn$ denote the unique nontrivial character of $\mathfrak{o}^{\times}/\mathfrak{o}^{\times2}$. Define a constant $C$ by the following formulas:

if $r''\leq r'$, $C=sgn(-1)^{\frac{r'+r''-1}{2}}sgn(-\eta_{2}\varpi^{-val_{F}(\eta_{2})})^{val_{F}(\eta)}$;

if $r'<r''$, $C=sgn(-1)^{\frac{r'+r''-1}{2}}sgn_{CD}(w'')sgn(-\eta_{2}\varpi^{-val_{F}(\eta_{2})})^{1+val_{F}(\eta)}$.

\ass{Lemma}{(i) We have the equality $transfert_{n_{1},n_{2}}(f)=Cf_{1}\otimes f_{2}$.

(ii) Let $(\bar{n}_{1},\bar{n}_{2})\in D(n)$ be a pair different from $(n_{1},n_{2})$. Then $transfert_{\bar{n}_{1},\bar{n}_{2}}(f)=0$.}

\bigskip

\subsection{The even special orthogonal case}

In this paragraph, we consider an integer $n\geq1$ and an element $\eta\in F^{\times}/F^{\times2}$. We exclude the case $n=1$, $\eta=1$. As in \cite{W5} 1.1, we construct the two quadratic spaces $(V_{iso},Q_{iso})$ and $(V_{an},Q_{an})$ defined over $F$ such that $dim_{F}(V_{iso})=dim_{F}(V_{an})=2n$ and $\eta(Q_{iso})=\eta(Q_{an})=\eta$. We denote by $G_{\eta,iso}$ and $G_{\eta,an}$ their special orthogonal groups. We consider four integers $r',r'',N',N''\in {\mathbb N}$ such that $$r^{'2}+r^{''2}+2N'+2N''=2n,$$ $$r'\equiv r''\equiv\,\,val_{F}(\eta)\,\,mod\,\,2{\mathbb Z} .$$ We set $N=N'+N''$.
In \cite{MW} 3.11 and 3.12 we defined a map $${\cal Q}(r',r'')^{Lie}\circ \rho_{N}^*\circ \iota_{N',N''}:{\mathbb C}[\hat{W}_{N'}]_{cusp}\otimes {\mathbb C}[\hat{W}_{N''}]_{cusp}\to C_{cusp}^{\infty}(\mathfrak{g}_{\eta,iso}(F))\oplus C_{cusp}^{\infty}(\mathfrak{g}_{\eta,an}(F)).$$ We fix elements $w'\in W_{N'}$ and $w''\in W_{N''}$ whose conjugacy classes are parametrized by pairs of partitions of the form $(\emptyset,\beta')$ and $(\emptyset,\beta'')$. We set $$f={\cal Q}(r',r'')^{Lie}\circ\rho_{N}^*\circ\iota_{N',N''}(\varphi_{w'}\otimes \varphi_{w''}).$$

{\bf Remark.} In the case where $r'=r''=0$, this function vanishes if the pair $(w',w'')$ does not satisfy the relation $sgn(\eta\varpi^{-val_{F}(\eta)})sgn_{CD}(w')sgn_{CD}(w'')=1$.

\bigskip

Let $(n_{1},n_{2})\in D(n)$ and let $\eta_{1},\eta_{2}\in F^{\times}/F^{\times2}$ be such that $\eta=\eta_{1}\eta_{2}$. To these objects is associated an endoscopic datum of $G_{\eta,iso}$ and $G_{\eta,an}$. The endoscopic group of this datum is $G_{n_{1},\eta_{1},iso}\times G_{n_{2},\eta_{2},iso}$. Again, there is a transfer from $ C_{cusp}^{\infty}(\mathfrak{g}_{\eta,iso}(F))\oplus C_{cusp}^{\infty}(\mathfrak{g}_{\eta,an}(F))$ to $$\left( C_{cusp}^{\infty}(\mathfrak{g}_{n_{1},\eta_{1},iso}(F))\oplus C_{cusp}^{\infty}(\mathfrak{g}_{n_{1},\eta_{1},an}(F))\right)\otimes \left( C_{cusp}^{\infty}(\mathfrak{g}_{n_{2},\eta_{2},iso}(F))\oplus C_{cusp}^{\infty}(\mathfrak{g}_{n_{2},\eta_{2},an}(F))\right).$$ We denote it $transfert_{n_{1},\eta_{1},n_{2},\eta_{2}}$. There is a natural choice of transfer factor. Formulas for it are given in \cite{W3} Proposition X.8. One should again remove the Weyl discriminants from them. The $\eta$ of Definition X.7 of \cite{W3} is not our present $\eta$; it is $(-1)^n\eta$. We denote this transfer factor by $\Delta_{n_{1},\eta_{1},n_{2},\eta_{2}}$.
Set $$t'_{1}=t''_{1}=\frac{r'+r''}{2},\,\, t'_{2}=t''_{2}=\frac{\vert r'-r''\vert }{2}.$$ Define two integers $n_{1}$ and $n_{2}$ by the formulas $$2n_{1}=t^{'2}_{1}+t^{''2}_{1}+2N',\,\,2n_{2}=t^{'2}_{2}+t^{''2}_{2}+2N'',$$ which are equivalent to $$n_{1}=\frac{(r'+r'')^2}{4}+N',\,\, n_{2}=\frac{(r'-r'')^2}{4}+N''.$$ One checks that $n_{1}+n_{2}=n$ (indeed $n_{1}+n_{2}=\frac{(r'+r'')^2+(r'-r'')^2}{4}+N=\frac{r^{'2}+r^{''2}}{2}+N=n$). Fix $\eta_{1},\eta_{2}\in F^{\times}/F^{\times2}$ such that $\eta_{1}\eta_{2}=\eta$ and $$(1) \qquad val_{F}(\eta_{1})\equiv t'_{1}=t''_{1}\,\,mod\,\,2{\mathbb Z},\,\,val_{F}(\eta_{2})\equiv t'_{2}=t''_{2}\,\,mod\,\,2{\mathbb Z}.$$ For $j=1,2$, one can apply the same construction as above to $n_{j}$ and $\eta_{j}$, where the pair $(N',N'')$ is replaced by $(N',0)$ if $j=1$ and by $(N'',0)$ if $j=2$. This allows us to define the elements $$f_{1,\eta_{1}}={\cal Q}(t'_{1},t''_{1})^{Lie}\circ\rho^*_{N'}\circ\iota_{N',0}(\varphi_{w'})\in C_{cusp}^{\infty}(\mathfrak{g}_{n_{1},\eta_{1},iso}(F))\oplus C_{cusp}^{\infty}(\mathfrak{g}_{n_{1},\eta_{1},an}(F)),$$ $$f_{2,\eta_{2}}={\cal Q}(t'_{2},t''_{2})^{Lie}\circ\rho^*_{N''}\circ\iota_{N'',0}(\varphi_{w''})\in C_{cusp}^{\infty}(\mathfrak{g}_{n_{2},\eta_{2},iso}(F))\oplus C_{cusp}^{\infty}(\mathfrak{g}_{n_{2},\eta_{2},an}(F)).$$ Define a constant $C_{\eta_{1},\eta_{2}}$ by the following formulas:

if $r''\leq r'$, $C_{\eta_{1},\eta_{2}}= sgn(\eta_{2}\varpi^{-val_{F}(\eta_{2})})^{val_{F}(\eta)}$;

if $r'<r''$, $C_{\eta_{1},\eta_{2}}=sgn(-1)^{ val_{F}(\eta_{2})}sgn_{CD}(w'') sgn(\eta_{2}\varpi^{-val_{F}(\eta_{2})})^{1+val_{F}(\eta)} $.

\ass{Lemma}{Let $\eta_{1},\eta_{2}\in F^{\times}/F^{\times2}$ be such that $\eta_{1}\eta_{2}=\eta$.

(i) If (1) holds, we have the equality $transfert_{n_{1},\eta_{1},n_{2},\eta_{2}}(f)=C_{\eta_{1},\eta_{2}}f_{1,\eta_{1}}\otimes f_{2,\eta_{2}}$.

(ii) Let $(\bar{n}_{1},\bar{n}_{2})\in D(n)$. Suppose that $(\bar{n}_{1},\bar{n}_{2})\not=(n_{1},n_{2})$ or that (1) does not hold.
Then $transfert_{\bar{n}_{1},\eta_{1},\bar{n}_{2},\eta_{2}}(f)=0$.}

The proof of this lemma occupies paragraphs 2.4 to 2.8.

\bigskip

\subsection{The unitary case}

In this paragraph, we fix a tower of finite unramified extensions $E/E^{\natural}/F$, with $[E:E^{\natural}]=2$, and an integer $d\geq1$. We consider the vector spaces $V$ over $E$, of dimension $d$, equipped with a nondegenerate hermitian form $Q$ relative to the extension $E/E^{\natural}$. Again, there are two isomorphism classes of pairs $(V,Q)$, distinguished by the valuation of the determinant of $Q$ (this determinant is an element of $E^{\natural,\times}/norme_{E/E^{\natural}}(E^{\times })$; its valuation is the image of this term in ${\mathbb Z}/2{\mathbb Z}$ under the map $val_{E^{\natural}}$). We denote by $(V_{iso},Q_{iso})$ the one for which this valuation is even and by $(V_{an},Q_{an})$ the one for which it is odd. We denote by $G_{iso}$ and $G_{an}$ the unitary groups of these hermitian spaces. For $m\in {\mathbb N}$, it is known that the conjugacy classes in $\mathfrak{S}_{m}$ are parametrized by the partitions of $m$. We denote by ${\mathbb C}[\hat{\mathfrak{S}}_{m}]_{U-cusp}$ the space of functions on $\mathfrak{S}_{m}$, invariant under conjugation, supported on conjugacy classes parametrized by partitions all of whose nonzero terms are odd. Let $(d',d'')\in D(d)$. In \cite{MW} 3.1, 3.2 and 3.3, we defined a map $${\cal Q}(d',d'')^{Lie}\circ\rho^*_{d}\circ \iota_{d',d''}:{\mathbb C}[\hat{\mathfrak{S}}_{d'}]_{U-cusp}\otimes {\mathbb C}[\hat{\mathfrak{S}}_{d''}]_{U-cusp}\to C_{cusp}^{\infty}(\mathfrak{g}_{iso}(E^{\natural}))\oplus C_{cusp}^{\infty}(\mathfrak{g}_{an}(E^{\natural})) .$$ Fix $w'\in \mathfrak{S}_{d'}$ and $w''\in \mathfrak{S}_{d''}$ whose conjugacy classes are parametrized by partitions all of whose nonzero terms are odd.
We denote by $\varphi_{w'}$ and $\varphi_{w''}$ the characteristic functions of these conjugacy classes. We set $$f={\cal Q}(d',d'')^{Lie}\circ\rho^*_{d}\circ \iota_{d',d''}(\varphi_{w'}\otimes \varphi_{w''}).$$ Let $(d_{1},d_{2})\in D(d)$. This pair determines an endoscopic datum of $G_{iso}$ and $G_{an}$. The endoscopic group of this datum is $G_{d_{1},iso}\times G_{d_{2},iso}$. There is a transfer from $C_{cusp}^{\infty}(\mathfrak{g}_{iso}(E^{\natural}))\oplus C_{cusp}^{\infty}(\mathfrak{g}_{an}(E^{\natural}))$ to $$\left(C_{cusp}^{\infty}(\mathfrak{g}_{d_{1},iso}(E^{\natural}))\oplus C_{cusp}^{\infty}(\mathfrak{g}_{d_{1},an}(E^{\natural}))\right)\otimes \left(C_{cusp}^{\infty}(\mathfrak{g}_{d_{2},iso}(E^{\natural}))\oplus C_{cusp}^{\infty}(\mathfrak{g}_{d_{2},an}(E^{\natural}))\right).$$ We denote it $transfert_{d_{1},d_{2}}$. There is a natural choice of transfer factor, cf. \cite{W3} Proposition X.8. As before, we remove the Weyl discriminants. The $\eta$ of Definition X.7 of \cite{W3} is a unit of $E^{\natural}$ if $d$ is odd, and a unit of $E$ with trace zero in $E^{\natural}$ if $d$ is even. In the case where $(d_{1},d_{2})=(d',d'')$, one can apply the above construction, replacing $d$ by $d'$, resp. $d''$, and $(d',d'')$ by $(d',0)$, resp. $(d'',0)$. We thus define $$f_{1}={\cal Q}(d',0)^{Lie}\circ\rho^*_{d'}\circ \iota_{d',0}(\varphi_{w'}),$$ $$f_{2}={\cal Q}(d'',0)^{Lie}\circ\rho^*_{d''}\circ \iota_{d'',0}(\varphi_{w''}).$$

\ass{Lemma}{(i) We have the equality $transfert_{d',d''}(f)=f_{1}\otimes f_{2}$.

(ii) For a pair $(d_{1},d_{2})\in D(d)$ different from $(d',d'')$, we have the equality $transfert_{d_{1},d_{2}}(f)=0$.}

\bigskip

\subsection{An expression in terms of Fourier transforms of orbital integrals}

We begin the proof of Lemma 2.2. We use the notations of that paragraph and the data fixed there.
Recall that, for $\sharp=iso$ or $an$, the notion of a topologically nilpotent element is defined in the space $\mathfrak{g}_{\sharp}(F)$. Identifying $\mathfrak{g}_{\sharp}(F)$ with a subset of the algebra of endomorphisms of $V_{\sharp}$, an element $Y\in \mathfrak{g}_{\sharp}(F)$ is topologically nilpotent if and only if the sequence of powers $Y^{m}$ tends to $0$ as $m$ tends to infinity. By construction of the map ${\cal Q}(r',r'')^{Lie}$, the function $f$ has topologically nilpotent support. We denote by $(\emptyset,\beta')$ and $(\emptyset,\beta'')$ the pairs of partitions parametrizing the conjugacy classes of $w'$ and $w''$. Set $\beta=\beta'\cup\beta''$, $\beta=(\beta_{1}\geq...\geq \beta_{t}>0)$. For every $k\in \{1,...,t\}$, consider the tower of unramified extensions $E_{k}/E_{k}^{\natural}/F$ such that $[E_{k}:E_{k}^{\natural}]=2$ and $[E_{k}^{\natural}:F]=\beta_{k}$. We fix an element $X_{k}\in E_{k}^{\times}$ such that $val_{E_{k}}(X_{k})=0$, $trace_{E_{k}/E_{k}^{\natural}}(X_{k})=0$ and, denoting by $\bar{X}_{k}$ the reduction of $X_{k}$ in the residue field ${\mathbb F}_{q^{2\beta_{k}}}$, all the conjugates of $\bar{X}_{k}$ under the Galois group of the extension ${\mathbb F}_{q^{2\beta_{k}}}/{\mathbb F}_{q}$ are distinct. We assume moreover that, for $k,k'\in \{1,...,t\}$ with $k\not=k'$, $\bar{X}_{k}$ is not conjugate to $\bar{X}_{k'}$. This assumption is harmless since $p>6n+4$. Set $R=sup(r',r'')$, $r=inf(r',r'')$, $J=\{1,...,R\}$, $\hat{J}=\{j\in J; j\leq R-r, j\,\,even\}$. Fix a set of representatives $\Gamma_{0}$ of $\mathfrak{o}^{\times}/(1+\varpi\mathfrak{o})$. For every $j\in J$ such that $j>R-r$, fix a set of representatives $\Gamma_{j}$ of $\mathfrak{o}^{\times}/\mathfrak{o}^{\times2}$. For every $j\in J$ such that $j\leq R-r$, set $\Gamma_{j}=\Gamma_{0}$.
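To fix ideas, here is an illustrative instance of these combinatorial definitions (the numerical values are ours, chosen only to unpack the notation, and do not come from the text):

```latex
% Illustrative values: take (r',r'')=(2,5).
% Then R=\sup(r',r'')=5, r=\inf(r',r'')=2, R-r=3, and
$$J=\{1,2,3,4,5\},\qquad \hat{J}=\{j\in J;\ j\leq 3,\ j\,\,even\}=\{2\},$$
% so \Gamma_{1}=\Gamma_{2}=\Gamma_{3}=\Gamma_{0}, while \Gamma_{4} and \Gamma_{5}
% are sets of representatives of \mathfrak{o}^{\times}/\mathfrak{o}^{\times2}.
```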
Let $\Gamma$ denote the subset of the $\gamma=(\gamma_{j})_{j\in J}\in \prod_{j\in J}\Gamma_{j}$ such that

for every $j\in \hat{J}$, $\gamma_{j-1}\not\in \gamma_{j}+\varpi\mathfrak{o}$;

(1) $sgn(\eta\varpi^{-val_{F}(\eta)}\prod_{j\in J}\gamma_{j})=sgn_{CD}(w')sgn_{CD}(w'')$.

{\bf Remarks.} The set $\Gamma$ obviously has nothing to do with that of \cite{W5} 1.8. On the other hand, in the case where $r'=r''=0$, we have $J=\emptyset$ and $\prod_{j\in J}\Gamma_{j}$ has a unique element. This element satisfies (1) if and only if $sgn(\eta\varpi^{-val_{F}(\eta)})=sgn_{CD}(w')sgn_{CD}(w'')$. Hence $\Gamma$ is empty if this equality does not hold. In the equality of the lemma below, the right-hand side is then zero. This is consistent with the remark of 2.2, which tells us that $f_{\sharp}=0$.

\bigskip

We denote by $\sigma:\Gamma\to {\mathbb C}$ the function defined by $$\sigma(\gamma)=\left(\prod_{j\in \hat{J}}(q-2+sgn(\gamma_{j-1}\gamma_{j}))sgn(\gamma_{j-1}\gamma_{j}(\gamma_{j-1}-\gamma_{j}))\right)$$ $$\prod_{j=R-r+1,...,R; \,j\,\, odd}sgn(-\gamma_{j}).$$ Let $\bar{F}$ denote an algebraic closure of $F$. For $\gamma\in \Gamma$ and $j\in J$, we fix $a_{j}(\gamma)\in \bar{F}^{\times}$ such that

if $j\in \hat{J}$, $a_{j}(\gamma)^{2j-2}=\varpi\gamma_{j}$;

if $j\leq R-r$ and $j$ is odd, $a_{j}(\gamma)^{2j}=\varpi\gamma_{j}$;

if $j> R-r$, $a_{j}(\gamma)^{2(2j-R+r-1)}=\varpi\gamma_{j}$.

We denote by $F_{j}(\gamma)$ the extension $F[a_{j}(\gamma)]$, by $F_{j}^{\natural}(\gamma)=F[a_{j}(\gamma)^2]$ the subextension of index $2$, by $\mathfrak{p}_{j}(\gamma)$ the maximal ideal of the ring of integers of $F_{j}(\gamma)$, and set $$A_{j}(\gamma)=\{a_{j}\in a_{j}(\gamma)+\mathfrak{p}_{j}(\gamma)^2; trace_{F_{j}(\gamma)/F_{j}^{\natural}(\gamma)}(a_{j})=0\}.$$ We set $A(\gamma)=\prod_{j\in J}A_{j}(\gamma)$. Note that $A(\gamma)$ is naturally a principal homogeneous space under a certain group.
We equip $A(\gamma)$ with a measure invariant under the action of this group and of total mass $1$. Consider elements $\gamma\in \Gamma$ and $a=(a_{j})_{j\in J}\in A(\gamma)$. Consider families $c=(c_{j})_{j\in J}$ and $u=(u_{k})_{k=1,...,t}$ such that $c_{j}\in F_{j}^{\natural}(\gamma)^{\times}$ for $j\in J$ and $u_{k}\in E^{\natural,\times}_{k}$ for $k=1,...,t$. For $j\in J$, we construct a quadratic space $(V_{j},Q_{j})$: $V_{j}=F_{j}(\gamma)$ and, for $v,v'\in V_{j}$, $Q_{j}(v,v')=[F_{j}(\gamma)/F]^{-1}trace_{F_{j}(\gamma)/F}(\tau_{j}(v)v'c_{j})$, where $\tau_{j}$ is the unique nontrivial element of the Galois group of $F_{j}(\gamma)/F_{j}^{\natural}(\gamma)$. For $k=1,...,t$, we construct a quadratic space $(V_{k},Q_{k})$: $V_{k}=E_{k}$ and, for $v,v'\in V_{k}$, $Q_{k}(v,v')=[ E_{k}/F]^{-1}trace_{ E_{k}/F}(\tau_{k}(v)v'u_{k})$, where $\tau_{k}$ is the unique nontrivial element of the Galois group of $ E_{k}/E_{k}^{\natural}$. Using the relation $r'\equiv r''\equiv val_{F}(\eta)$ imposed in 2.2 and relation (1) above, one shows that the direct sum of the $(V_{j},Q_{j})$ and the $(V_{k},Q_{k})$ is isomorphic to one of the two spaces of 2.2, say to $(V_{\sharp},Q_{\sharp})$. We fix an isomorphism. We then define an element $X\in \mathfrak{g}_{\sharp}(F)$: it preserves each of the subspaces $V_{j}$ and $V_{k}$; for $j\in J$, it acts on $V_{j}$ by multiplication by $a_{j}$ and, for $k=1,...,t$, it acts on $V_{k}$ by multiplication by $X_{k}$. The conjugacy class of $X$ under the orthogonal group $O(Q_{\sharp};F)$ is completely determined by our initial data. There is a small difficulty caused by the parity of the dimension of $V_{\sharp}$: this conjugacy class under $O(Q_{\sharp};F)$ decomposes into two conjugacy classes under $G_{\sharp}(F)$. Instead of $X$, we therefore fix elements $X^+$ and $X^-$ in each of these two classes.
One sees easily that the constructions depend only on the images of the $c_{j}$ in the groups $F_{j}^{\natural}(\gamma)^{\times}/norme_{F_{j}(\gamma)/F_{j}^{\natural}(\gamma)}(F_{j}(\gamma)^{\times})$ and on the images of the $u_{k}$ in the groups $E_{k}^{\natural,\times}/norme_{E_{k}/E_{k}^{\natural}}(E_{k}^{\times})$. Because the $F_{j}(\gamma)/F$ are totally ramified and the $E_{k}/F$ are unramified, these groups are identified respectively with $\mathfrak{o}^{\times}/\mathfrak{o}^{\times2}$ and ${\mathbb Z}/2{\mathbb Z}$. Setting ${\cal E}=(\mathfrak{o}^{\times}/\mathfrak{o}^{\times2})^J$ and ${\cal U}=({\mathbb Z}/2{\mathbb Z})^t$, we may therefore replace the data $c=(c_{j})_{j\in J}$ and $u=(u_{k})_{k=1,...,t}$ above by families $c=(c_{j})_{j\in J}\in {\cal E}$ and $u=(u_{k})_{k=1,...,t}\in {\cal U}$. More precisely, let us denote by $X^+(a,c,u)$ and $X^-(a,c,u)$ the elements associated above with such families. Let $\gamma\in \Gamma$ and $e\in {\cal E}$. We define a new family $c [\gamma,e]=(c[\gamma,e]_{j})_{j\in J}\in {\cal E}$ by the following equalities, for $j\in J$:

if $r''\leq r'$, $c[\gamma,e]_{j}=(-1)^j\gamma_{j}e_{j}$;

if $r'<r''$, $c[\gamma,e]_{j}=(-1)^{j+1}e_{j}$

\noindent (more precisely, the images modulo $\mathfrak{o}^{\times2}$ of the indicated expressions). For $a\in A(\gamma)$ and $\zeta=\pm$, we henceforth denote by $X^{\zeta}(a,e,u)$ the element $X^{\zeta}(a,c[\gamma,e],u)$ introduced above. It belongs to $\mathfrak{g}_{\sharp}(F)$ for a certain index $\sharp$, which we denote more precisely by $\sharp(a,e,u)$. Let $\sharp=iso$ or $an$. As in 1.1, we introduce a Fourier transform on $C_{c}^{\infty}(\mathfrak{g}_{\sharp}(F))$. Let us recall a result of Harish-Chandra. Let $X\in \mathfrak{g}_{\sharp}(F)$ be a regular elliptic element.
There exists a function $Y\mapsto \hat{i}_{\sharp}(X,Y)$, defined and locally constant on the set of elliptic regular elements of $\mathfrak{g}_{\sharp}(F)$, such that, for every such element $Y$ and every $\phi\in C_{c}^{\infty}(\mathfrak{g}_{\sharp}(F))$, we have the equality $$J(X ,\hat{\phi})=\int_{\mathfrak{g}_{\sharp}(F)}\hat{i}_{\sharp} (X,Y)\phi(Y)D^{G_{\sharp}}(Y)^{-1/2}dY.$$ The elements $X^{\zeta}(a,e,u)$ above such that $\sharp(a,e,u)=\sharp$ are elliptic regular in $\mathfrak{g}_{\sharp}(F)$. We denote by $\hat{i}^{\zeta}_{\sharp}[a,e,u]$ the function $Y\mapsto \hat{i}_{\sharp}(X^{\zeta}(a,e,u),Y)$. We set $$\hat{i}_{\sharp}[a,e,u]=\frac{1}{2}(\hat{i}_{\sharp}^+[a,e,u]+\hat{i}_{\sharp}^-[a,e,u]).$$ For a triple $(a,e,u)$ such that $\sharp(a,e,u)\not=\sharp$, we set $\hat{i}_{\sharp}[a,e,u]=0$. We fix a decomposition $\{1,...,t\}=K'\sqcup K''$ such that $\beta'$, resp. $\beta''$, consists of the $\beta_{k}$ for $k\in K'$, resp. $k\in K''$. We define a character $\kappa^{{\cal U}}$ of ${\cal U}$ by $$\kappa^{{\cal U}}(u)=(-1)^{\sum_{k\in K''}u_{k}}.$$ We will restrict to a subgroup ${\cal E}^0$ of ${\cal E}$. It is the subgroup of families $e=(e_{j})_{j\in J}\in {\cal E}$ such that, for every $j\in \hat{J}$ with $j<R$, $e_{j-1}=e_{j}$.

{\bf Remark.} The condition $j<R$ is automatic if $r>0$.

\bigskip

We define the character $\kappa^0$ of the group ${\cal E}^0$ by $$\kappa^0(e)=\prod_{j\in \hat{J}}sgn(e_{j-1}).$$ Finally, we must define a few constants. We denote by ${\cal O}(w')$ and ${\cal O}(w'')$ the conjugacy classes of $w'$ in $W_{N'}$ and of $w''$ in $W_{N''}$. We set $$C(w')=\vert {\cal O}(w')\vert \vert W_{N'}\vert ^{-1}q^{N'/2}\prod_{k\in K'}(q^{\beta_{k}}+1)^{-1},$$ and define $C(w'')$ similarly.
We set $\alpha(r',r'',w',w'')=sgn((-1)^{(r'+r'')/2}\eta\varpi^{-val_{F}(\eta)})$ if $val_{F}(\eta)$ is even; $\alpha(r',r'',w',w'')= sgn((-1)^{(r'+r'')/2}\eta\varpi^{-val_{F}(\eta)})sgn_{CD}(w')sgn_{CD}(w'')$ if $val_{F}(\eta)$ is odd; $$C(r',r'')=2^{1-r'-r''}\left((q-1)^2(q-3)\right)^{(r-R)/2}.$$

\ass{Lemma}{For $\sharp=iso$ or $an$ and for every element $Y\in \mathfrak{g}_{\sharp}(F)$ that is elliptic, regular and topologically nilpotent, we have the equality $$J(Y,f_{\sharp})=C(w')C(w'')C(r',r'')\alpha(r',r'',w',w'')\sum_{\gamma\in \Gamma}\sigma(\gamma)$$ $$\int_{A(\gamma)}\sum_{e\in {\cal E}^0}\sum_{u\in {\cal U}}\kappa^0(e)\kappa^{{\cal U}}(u)\hat{i}_{\sharp}[a,e,u](Y) \,da.$$}

Proof. Set $N=N'+N''$. In \cite{MW} 3.15 (1), we wrote an equality $$(2) \qquad f_{\sharp}=\sum_{(n',n'')\in D(N)}\sum_{(v',v'')\in {\cal V}_{n',n'',\sharp}} b(v',v''){\cal Q}(r',r'';v',v'')_{\sharp}^{Lie}.$$ The ${\cal Q}(r',r'';v',v'')_{\sharp}^{Lie}$ are functions on $\mathfrak{g}_{\sharp}(F)$ and the $b(v',v'')$ are coefficients. The set ${\cal V}_{n',n'',\sharp}$ is a set of representatives of the conjugacy classes in $W_{n'}\times W_{n''}$ parametrized by pairs of partitions of the form $(\emptyset,\delta')$, $(\emptyset,\delta'')$ with $\delta'\cup \delta''=\beta$. There is a restriction (hypothesis 3.13(1) of \cite{MW}), which we remove by setting ${\cal Q}(r',r'';v',v'')_{\sharp}^{Lie}=0$ when it is not satisfied (in \cite{MW}, this function is only defined under that hypothesis 3.13(1)). With this restriction removed, the set ${\cal V}_{n',n'',\sharp}$ becomes independent of the index $\sharp$, and we drop that index.
Il y a une application naturelle $$\delta:{\cal U}\to \cup_{(n',n'')\in D(N)}{\cal V}_{n',n''}.$$ Pour $u=(u_{k})_{k=1,...,t}\in {\cal U}$, on note $\delta'(u)$ la partition form\'ee des $\beta_{k}$ pour $k$ tel que $u_{k}=0$ et $\delta''(u)$ celle form\'ee des $\beta_{k}$ pour $k$ tel que $u_{k}=1$. L'application ci-dessus envoie $u$ sur le couple de classes de conjugaison param\'etr\'ees par $(\emptyset,\delta'(u))$ et $(\emptyset,\delta''(u))$. L'\'egalit\'e 3.15(6) de \cite{MW} et les calculs qui la suivent montrent que, pour $(v',v'')\in \cup_{(n',n'')\in D(N)}{\cal V}_{n',n''}$, on a l'\'egalit\'e $$(3) \qquad b(v',v'')=C_{1}\sum_{u\in \delta^{-1}(v',v'')}\kappa^{{\cal U}}(u),$$ o\`u $$(4) \qquad C_{1}=\vert {\cal O}(w')\vert \vert W_{N'}\vert ^{-1}\vert {\cal O}(w'')\vert \vert W_{N''}\vert ^{-1}.$$ Fixons $(v',v'')\in \cup_{(n',n'')\in D(N)}{\cal V}_{n',n''}$. L'int\'egrale orbitale $J(Y,{\cal Q}(r',r'';v',v'')_{\sharp}^{Lie})$ est calcul\'ee par une application successive des propositions 3.13 et 3.14 de \cite{MW}, o\`u l'on remplace $(w',w'')$ par $(v',v'')$. On obtient $$(5) \qquad J(Y,{\cal Q}(r',r'';v',v'')_{\sharp}^{Lie})=C_{2}(r',r'',v',v'')\sum_{\gamma\in \Gamma}\sigma(\gamma)$$ $$\int_{A(\gamma)}\sum_{e\in {\cal E}^0_{MW}}\kappa^0_{MW}(e)\hat{i}_{\sharp,MW}[a,e,v',v''](Y)\,da,$$ o\`u $C_{2}(r',r'',v',v'')$ est une certaine constante sur laquelle nous allons revenir; $\Gamma$ et, pour $\gamma\in \Gamma$, $A(\gamma)$ et $\sigma(\gamma)$ sont les termes que l'on a d\'efinis ci-dessus; les autres termes ${\cal E}^0_{MW}$ etc... sont quelque peu diff\'erents des n\^otres, on y a ajout\'e un indice $MW$ pour les en distinguer. On constate que le couple $(v',v'')$ intervient dans (5) d'une part par la constante, d'autre part par la fonction $\hat{i}_{\sharp,MW}[a,e,v',v'']$. Pour ce qui est de la constante, le couple $(v',v'')$ intervient via des termes $\vert T'\vert \vert T''\vert $ et des produits $sgn_{CD}(v')sgn_{CD}(v'')$, cf. 
les formules de \cite{MW}. Il r\'esulte de la d\'efinition ci-dessus que ce dernier produit vaut $sgn_{CD}(w')sgn_{CD}(w'')$. En se reportant \`a la d\'efinition des tores $T'$ et $T''$, on calcule explicitement $$(6) \qquad \vert T'\vert \vert T''\vert =\prod_{k=1,...,t}(q^{\beta_{k}}+1).$$ En conclusion, $C_{2}(r',r'',v',v'')$ ne d\'epend pas de $(v',v'')$ mais seulement de $(w',w'')$. On la note plut\^ot $C_{2}(r',r'',w',w'')$. Quant \`a la fonction $\hat{i}_{\sharp,MW}[a,e,v',v'']$, elle ne d\'epend de $(v',v'')$ que par un \'el\'ement $X$ d\'efini en \cite{MW} 3.13. On a un certain choix pour cet \'el\'ement. On constate que, pour tout $u\in \delta^{-1}(v',v'')$, on peut prendre pour $X$ un \'el\'ement construit comme ci-dessus \`a l'aide des donn\'ees $(X_{k})_{k=1,...,t}$ et $u$. Notons $\hat{i}_{\sharp,MW}[a,e,u]$ la fonction associ\'ee \`a ce choix. L'\'egalit\'e suivante r\'esulte alors de (2), (3) et (5): $$(7) \qquad J(Y,f_{\sharp})=C_{1}C_{2}(r',r'',w',w'')\sum_{\gamma\in \Gamma}\sigma(\gamma)\int_{A(\gamma)}\sum_{e\in {\cal E}^0_{MW}}\sum_{u\in {\cal U}}\kappa^0_{MW}(e)\kappa^{{\cal U}}(u)\hat{i}_{\sharp,MW}[a,e,u](Y)\,da.$$ Supposons $(r',r'')\not=(0,0)$. Il y a une application naturelle de notre ensemble ${\cal E}^0$ dans l'ensemble ${\cal E}^0_{MW}$. A $e\in {\cal E}^0$, elle associe l'unique \'el\'ement $e_{MW}\in {\cal E}^0_{MW}$ tel que $e_{MW,j}=e_{j}$ pour $j=1,...,R-1$. L'\'el\'ement $e_{MW,R}$ est d\'efini par les \'egalit\'es $\prod_{j=R-r+1,...,R}e_{MW,j}=1$ si $r>0$; $e_{MW,R}=e_{MW,R-1}$ si $r=0$. On constate que $\kappa^0(e)=\kappa^0_{MW}(e_{MW})$. Fixons $\gamma\in \Gamma$ et $a\in A(\gamma)$. Dans l'\'enonc\'e du lemme, on peut \'evidemment limiter la somme en $e$ et $u$ \`a la somme sur les couples $(e,u)$ tels que $\sharp(a,e,u)=\sharp$: si cette condition n'est pas v\'erifi\'ee, $\hat{i}_{\sharp}[a,e,u]=0$. Notons $({\cal E}^0\times{\cal U})_{a,\sharp}$ ce sous-ensemble. 
On constate alors que l'application $$\begin{array}{ccc}({\cal E}^0\times{\cal U})_{a,\sharp}&\to&{\cal E}_{MW}^0\times {\cal U}\\ (e,u)&\mapsto& (e_{MW},u)\\ \end{array}$$ est bijective et que, pour $(e,u)\in ({\cal E}^0\times {\cal U})_{a,\sharp}$, on a l'\'egalit\'e $\hat{i}_{\sharp,MW}[a,e_{MW},u]=\hat{i}_{\sharp}[a,e,u]$. En utilisant ces propri\'et\'es, l'\'egalit\'e (7) devient alors l'\'egalit\'e de l'\'enonc\'e, aux constantes pr\`es. Il ne reste donc plus qu'\`a d\'emontrer l'\'egalit\'e $$(8) \qquad C_{1}C_{2}(r',r'',w',w'')=C(w')C(w'')C(r',r'')\alpha(r',r'',w',w'').$$ On a suppos\'e $(r',r'')\not=(0,0)$. Si $(r',r'')=(0,0)$, les termes $\Gamma$, ${\cal E}^0$ etc... disparaissent et la formule (7) se compare directement \`a celle de l'\'enonc\'e. De nouveau, il ne reste plus qu'\`a comparer les constantes. La constante $C_{2}(r',r'',w',w'')$ est le produit de la constante $C\alpha(w',w'')_{\sharp}$ de la proposition 3.14 de \cite{MW} et de l'inverse de la constante de la proposition 3.13 de cette r\'ef\'erence. Malheureusement, il y a des erreurs dans ces constantes. On a d\'ej\`a effectu\'e quelques corrections dans \cite{W2} page 335: la constante de la proposition 3.13 est $$(9)\qquad q^{-N/2}2^{-\beta(r',r'')}\gamma(r',r'')_{\sharp}\vert T'\vert \vert T''\vert $$ (en fait, $T'$ et $T''$ sont associ\'es \`a un couple $(v',v'')\in {\cal V}_{n',n''}$ mais, d'apr\`es (5), le produit de leurs nombres d'\'el\'ements en est ind\'ependant); on a l'\'egalit\'e $$C=2^{1-r'-r''-\beta(r',r'')}\left((q-1)^2(q-3)\right)^{(r-R)/2}.$$ A l'aide de (4) et (6), on voit que le produit de $C_{1}$, $C$ et de l'inverse de (9) vaut $C(w')C(w'')C(r',r'')\gamma(r',r'')_{\sharp}^{-1}$. Rappelons qu'en \cite{W5} 1.1, on a associ\'e \`a $Q_{\sharp}$ des discriminants $\eta'(Q_{\sharp})$ et $\eta''(Q_{\sharp})$, notons-les simplement $\eta'_{\sharp}$ et $\eta''_{\sharp}$. 
D'apr\`es \cite{MW} 3.13, on a $\gamma(r',r'')_{\sharp}=sgn((-1)^{(r'+r'')/2} \eta'_{\sharp}\eta''_{\sharp})$, si $r'$ et $r''$ sont pairs, c'est-\`a-dire si $val_{F}(\eta)$ est pair; $\gamma(r',r'')_{\sharp}=1$ si $val_{F}(\eta)$ est impair. Il y a une autre correction \`a apporter \`a \cite{MW}. On a $\alpha(w',w'')_{\sharp}=1$ si $val_{F}(\eta)$ est pair; $\alpha(w',w'')_{\sharp}=sgn((-1)^{(r'-r'')/2}\eta'_{\sharp}\eta''_{\sharp})sgn_{CD}(w')sgn_{CD}(w'')$ si $val_{F}(\eta)$ est impair. Cette constante provient en fait de \cite{W3} paragraphes VII.25 et VII.26. Dans cette r\'ef\'erence, on avait suppos\'e $r'$ et $r''$ pairs et obtenu la constante $1$. Dans \cite{MW}, on a abusivement adopt\'e cette valeur m\^eme quand $r'$ et $r''$ sont impairs. En reprenant la preuve de \cite{W3} VII.26 et en y supposant $r'$ et $r''$ impairs, on obtient la constante ci-dessus. Signalons que, dans le cas sp\'ecial orthogonal impair, il n'y a (\`a notre connaissance) pas de correction \`a apporter \`a la formule de \cite{MW} pour la constante $\alpha(w',w'')_{\sharp}$. \bigskip D'apr\`es \cite{W5} 1.1, on a l'\'egalit\'e $\eta'_{\sharp}\eta''_{\sharp}(-1)^{r''}=\eta\varpi^{-val_{F}(\eta)}$. En la reportant dans la formule ci-dessus, on obtient l'\'egalit\'e $$\gamma(r',r'')_{\sharp}^{-1}\alpha(w',w'')_{\sharp}=\alpha(r',r'',w',w'').$$ En mettant ces calculs bout-\`a-bout, on obtient l'\'egalit\'e (8), ce qui ach\`eve la d\'emonstration. $\square$ \bigskip \subsection{Transformation de l'expression pr\'ec\'edente} On note ${\cal L}$ l'ensemble des couples $(L_{1},L_{2})$ de sous-ensembles de $\{1,...,R-r\}$ tels que $\{1,...,R-r\} $ est r\'eunion disjointe de $L_{1}$ et $L_{2}$; pour tout $j\in \hat{J}$, les intersections $L_{1}\cap \{j-1,j\}$ et $L_{2}\cap\{j-1,j\}$ ont un unique \'el\'ement; on note cet \'el\'ement $l_{1}(j/2)$, resp. $l_{2}(j/2)$; si $r=0$ et $R>0$, $l_{2}(R/2)=R-1$. 
Pour un tel couple, on d\'efinit un caract\`ere $\kappa^{L_{2}}$ de ${\cal E}$ par $$\kappa^{L_{2}}(e)=\prod_{j=1,...,(R-r)/2}sgn(e_{l_{2}(j)}).$$ \ass{Lemme}{ Pour $\sharp=iso$ ou $an$ et pour tout \'el\'ement $Y\in \mathfrak{g}_{\sharp}(F)$ qui est elliptique, r\'egulier et topologiquement nilpotent, on a l'\'egalit\'e $$J(Y,f_{\sharp})=C(w')C(w'')C(r',r'')\alpha(r',r'',w',w'')\vert {\cal L}\vert ^{-1}\sum_{(L_{1},L_{2})\in {\cal L}}\sum_{\gamma\in \Gamma}\sigma(\gamma)$$ $$\int_{A(\gamma)}\sum_{e\in {\cal E}}\sum_{u\in {\cal U}}\kappa^{L_{2}}(e)\kappa^{{\cal U}}(u)\hat{i}_{\sharp}[a,e,u](Y) \,da.$$} Preuve. En vertu de l'\'enonc\'e pr\'ec\'edent, il suffit de prouver que, pour $e\in {\cal E}$, on a les \'egalit\'es (1) $\sum_{(L_{1},L_{2})\in {\cal L}}\kappa^{L_{2}}(e)=0$ si $e\not\in {\cal E}^0$; (2) $\sum_{(L_{1},L_{2})\in {\cal L}}\kappa^{L_{2}}(e)=\vert {\cal L}\vert \kappa^0(e)$, si $e\in {\cal E}^0$. Supposons $e\not\in {\cal E}^0$. Alors il existe $j\in \{1,...,(R-r)/2\}$, avec $j<R/2$ si $r=0$ et $R>0$, de sorte que $e_{2j-1}\not=e_{2j}$. On fixe un tel $j$. On regroupe les \'el\'ements de ${\cal L}$ en paires. Si $(L_{1},L_{2})$ est un \'el\'ement d'une paire, l'autre \'el\'ement $(L'_{1},L'_{2})$ est obtenu en \'echangeant $l_{1}(j)$ et $l_{2}(j)$ et en ne changeant pas les autres $l_{1}(h)$, $l_{2}(h)$ pour $h\not=j$. On a alors $$\kappa^{L_{2}}(e)=sgn(e_{l_{2}(j)})\prod_{h\not=j}sgn(e_{l_{2}(h)}),$$ $$\kappa^{L'_{2}}(e)=sgn(e_{l_{1}(j)})\prod_{h\not=j}sgn(e_{l_{2}(h)}).$$ Puisque $e_{l_{2}(j)}\not=e_{l_{1}(j)}$, la somme de ces termes est nulle, ce qui prouve (1). Supposons $e\in {\cal E}^0$. Pour tout $j=1,...,(R-r)/2$ et tout $(L_{1},L_{2})\in {\cal L}$, on a l'\'egalit\'e $sgn(e_{l_{2}(j)})=sgn(e_{2j-1})$. Cela r\'esulte de l'\'egalit\'e $e_{2j-1}=e_{2j}$ impos\'ee aux \'el\'ements de ${\cal E}^0$, sauf dans le cas o\`u $r=0$, $R>0$ et $j=R/2$. Mais dans ce cas, $l_{2}(j)=2j-1$ par d\'efinition de ${\cal L}$. 
En cons\'equence, $\kappa^{L_{2}}(e)=\prod_{j=1,...,(R-r)/2}sgn(e_{2j-1})=\kappa^0(e)$. D'o\`u (2). $\square$ \bigskip \subsection{Calcul de facteurs de transfert} Pour $\gamma\in \Gamma$, $a\in A(\gamma)$, $e\in {\cal E}$, $u\in {\cal U}$ et $\zeta=\pm$, on a introduit un \'el\'ement $X^{\zeta}(a,e,u)$. Fixons $\gamma$ et $a$. Quand $e$, $u$ et $\zeta$ varient, ces \'el\'ements $X^{\zeta}(a,e,u)$ d\'ecrivent un ensemble de repr\'esentants des classes de conjugaison contenues dans deux classes totales de conjugaison stable dans $\mathfrak{g}_{iso}(F)\cup \mathfrak{g}_{an}(F)$. Ces deux classes se d\'eduisent l'une de l'autre par conjugaison par des \'el\'ements de d\'eterminant $-1$ des groupes orthogonaux. On n'a pas donn\'e de crit\`ere pour distinguer $X^+(a,e,u)$ de $X^-(a,e,u)$. On voit que l'on peut effectuer ces choix de signes de sorte que, pour $\zeta$ fix\'e et quand $e$ et $u$ varient, les \'el\'ements $X^{\zeta}(a,e,u)$ d\'ecrivent un ensemble de repr\'esentants des classes de conjugaison contenues dans l'une de nos deux classes totales de conjugaison stable. On fixe un \'el\'ement dans cette classe qui appartienne \`a $\mathfrak{g}_{iso}(F)$, on le note $X^{\zeta}(a,w',w'')$ . On a d\'efini $t'_{1}$, $t''_{1}$, $t'_{2}$, $t''_{2}$ en 2.2. On a les \'egalit\'es $t'_{1}=t''_{1}=(R+r)/2$ et $t'_{2}=t''_{2}=(R-r)/2$. Fixons $(L_{1},L_{2})\in {\cal L}$. Soit $\gamma\in \Gamma$. On d\'efinit deux suites $\gamma^{L_1}=(\gamma^{L_1}_{j})_{j=1,...,t'_{1}}$ et $\gamma^{L_2}=(\gamma^{L_2}_{j})_{j=1,...,t'_{2}}$ par les formules suivantes: pour $j\in\{1,...,R-r\} $, $\gamma^{L_1}_{j}=\gamma_{l_{1}(j)}$, $\gamma^{L_2}_{j}=\gamma_{l_{2}(j)}$; pour $j\in \{(R-r)/2+1,...,(R+r)/2\}$, $\gamma^{L_1}_{j}=\gamma_{j+(R-r)/2}$. Pour $a\in A(\gamma)$, on d\'efinit de m\^eme des suites $a^{L_1}=(a^{L_1}_{j})_{j=1,...,t'_{1}}$ et $a^{L_2}=(a^{L_2}_{j})_{j=1,...,t'_{2}}$. 
On d\'efinit un \'el\'ement $\eta[L_{2},\gamma]\in F^{\times}/F^{\times2}$ par les deux propri\'et\'es: $val_{F}(\eta[L_{2},\gamma])\equiv t'_{2}\,\,mod\,\,2{\mathbb Z}$; $sgn(\eta[L_{2},\gamma]\varpi^{-val_{F}(\eta[L_{2},\gamma])}\prod_{j=1,...,t'_{2}}\gamma^{L_2}_{j})=sgn_{CD}(w'')$. On d\'efinit $\eta[L_{1},\gamma]$ par l'\'egalit\'e $\eta[L_{1},\gamma]\eta[L_{2},\gamma]=\eta$. On constate que les donn\'ees $\gamma^{L_1}$, $a^{L_1}$, $(N',0)$, $w'_{1}=w'$ et $w''_{1}=\emptyset$ v\'erifient les m\^emes conditions que $\gamma$, $a$, $(N',N'')$, $w'$ et $w''$, le couple $(n,\eta)$ \'etant remplac\'e par $(n_{1},\eta[L_{1},\gamma])$. De m\^eme pour les donn\'ees $\gamma^{L_2}$, $a^{L_2}$, $(N'',0)$, $w'_{2}=w''$ et $w''_{2}=\emptyset$, le couple $(n,\eta)$ \'etant remplac\'e par $(n_{2},\eta[L_{2},\gamma])$. De m\^eme que l'on a construit ci-dessus des \'el\'ements $X^{\zeta}(a,w',w'')$, on construit dans $\mathfrak{g}_{n_{1},\eta[L_{1},\gamma],iso}(F)$ un \'el\'ement $X^{\zeta}(a^{L_1},w')$ et dans $\mathfrak{g}_{n_{2},\eta[L_{2},\gamma],iso}(F)$ un \'el\'ement $X^{\zeta}(a^{L_2},w'')$. La correspondance endoscopique entre classes de conjugaison stable est un peu perturb\'ee par les signes $\zeta$. Pr\'ecis\'ement, pour $\zeta_{1},\zeta_{2}=\pm$, il existe $\zeta=\pm$ tel que la classe de conjugaison stable de $(X^{\zeta_{1}}(a^{L_1},w'),X^{\zeta_{2}}(a^{L_2},w''))$ dans $\mathfrak{g}_{n_{1},\eta[L_{1},\gamma],iso}(F)\times\mathfrak{g}_{n_{2},\eta[L_{2},\gamma],iso}(F)$ corresponde \`a la classe totale de conjugaison stable de $X^{\zeta}(a,w',w'')$. Ainsi, pour $e\in {\cal E}$ et $u\in {\cal U}$, le facteur de transfert $\Delta_{n_{1},\eta[L_{1},\gamma],n_{2},\eta[L_{2},\gamma]} ((X^{\zeta_{1}}(a^{L_1},w'),X^{\zeta_{2}}(a^{L_2},w'')),X^{\zeta}(a,e,u))$ est d\'efini et non nul. {\bf Remarque.} Il peut arriver que $n_{1}=0$ ou $n_{2}=0$. Dans ce cas, les termes $X^{\zeta_{1}}(a^{L_1},w')$ ou $X^{\zeta_{2}}(a^{L_2},w'')$ disparaissent. 
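La d\'efinition de $\eta[L_{2},\gamma]$ par ses deux propri\'et\'es, puis celle de $\eta[L_{1},\gamma]$, se pr\^etent \`a un petit calcul explicite. Esquisse minimale, sous l'hypoth\`ese d'une caract\'eristique r\'esiduelle impaire : une classe de $F^{\times}/F^{\times2}$ y est cod\'ee par le couple (valuation mod $2$, $sgn$ de la partie unit\'e), et le param\`etre hypoth\'etique `prod_sgn_gamma` d\'esigne $sgn(\prod_{j=1,...,t'_{2}}\gamma^{L_2}_{j})$ :

```python
def eta_class_mul(x, y):
    # Multiplication dans F^x/F^{x2} (caracteristique residuelle impaire):
    # une classe est codee par (valuation mod 2, sgn de la partie unite).
    return ((x[0] + y[0]) % 2, x[1] * y[1])

def eta_L2(t2, prod_sgn_gamma, sgn_CD_w2):
    # Unique classe de valuation congrue a t2 mod 2 telle que
    # sgn(eta * pi^{-val(eta)}) * prod_sgn_gamma = sgn_CD(w'').
    return (t2 % 2, sgn_CD_w2 * prod_sgn_gamma)

def eta_L1(eta, t2, prod_sgn_gamma, sgn_CD_w2):
    # eta[L_1,gamma] est defini par eta[L_1,gamma] * eta[L_2,gamma] = eta;
    # chaque classe etant son propre inverse, on multiplie simplement eta par eta[L_2,gamma].
    return eta_class_mul(eta, eta_L2(t2, prod_sgn_gamma, sgn_CD_w2))
```

On v\'erifie sur ce mod\`ele que chaque classe est son propre inverse et que le produit $\eta[L_{1},\gamma]\eta[L_{2},\gamma]$ redonne bien $\eta$.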
\bigskip Posons $$d(r',r'',\gamma,L_{2})=sgn(\eta\varpi^{-val_{F}(\eta)})^{(R-r)/2}sgn_{CD}(w')^{(R-r)/2}sgn_{CD}(w'')^{(R-r)/2+val_{F}(\eta)}$$ $$\left(\prod_{j\in \hat{J}}sgn(\gamma_{j-1}\gamma_{j})^{j/2-1}sgn(\gamma_{j-1}-\gamma_{j})\right)\left(\prod_{j=R-r+1,...,R}sgn(\gamma_{j})^{(R-r)/2}\right)$$ $$sgn(-1)^{val_{F}(\eta[L_{2},\gamma])B}sgn(\eta[L_{2},\gamma]\varpi^{-val_{F}(\eta[L_{2},\gamma])})^Bsgn_{CD}(w'')^B,$$ o\`u $B=0$ si $r'\geq r''$, $B=1$ si $r'<r''$. \ass{Lemme}{Sous les hypoth\`eses ci-dessus, on a l'\'egalit\'e $$\Delta_{n_{1},\eta[L_{1},\gamma],n_{2},\eta[L_{2},\gamma]} ((X^{\zeta_{1}}(a^{L_1},w'),X^{\zeta_{2}}(a^{L_2},w'')),X^{\zeta}(a,e,u))=d(r',r'',\gamma,L_{2})\kappa^{L_{2}}(e)\kappa^{{\cal U}}(u).$$ } Preuve. Pour tout polyn\^ome $Q$, on note $Q'$ son polyn\^ome d\'eriv\'e. Notons $P$ le polyn\^ome caract\'eristique de $X^{\zeta}(a,e,u)$, vu comme endomorphisme de $V_{\sharp(a,e,u)}$. Pour $j\in \{1,...,t'_{2}\}$, posons $$C_{l_{2}(j)}=(-1)^n\eta[F_{l_{2}(j)}(\gamma):F]c[\gamma,e]_{l_{2}(j)}a_{l_{2}(j)}^{-1}P'(a_{l_{2}(j)}),$$ et notons $sgn_{l_{2}(j)}$ le caract\`ere quadratique de $F^{\natural}_{l_{2}(j)}(\gamma)^{\times}$ associ\'e \`a l'extension quadratique $F_{l_{2}(j)}(\gamma)$. Pour $k\in K''$, posons $$C_{k}=(-1)^n\eta[E_{k}:F] \varpi^{u_{k}}X_{k}^{-1}P'(X_{k}),$$ et notons $sgn_{k}$ le caract\`ere quadratique de $E^{\natural}_{k}$ associ\'e \`a l'extension quadratique $E_{k}$. {\bf Remarque.} Les formules ci-dessus ne d\'efinissent $C_{l_{2}(j)}$ et $C_{k}$ que modulo les groupes $norme_{F_{l_{2}(j)}(\gamma)/F^{\natural}_{l_{2}(j)}(\gamma)}(F_{l_{2}(j)}(\gamma)^{\times})$, resp. $norme_{E_{k}/E_{k}^{\natural}}( E^{\times}_{k})$. C'est sans importance pour la suite. 
\bigskip D'apr\`es \cite{W3} proposition X.8, on a l'\'egalit\'e $$(1) \qquad \Delta_{n_{1},\eta[L_{1},\gamma],n_{2},\eta[L_{2},\gamma]} ((X^{\zeta_{1}}(a^{L_1},w'),X^{\zeta_{2}}(a^{L_2},w'')),X^{\zeta}(a,e,u))=$$ $$\left(\prod_{j=1,...,t'_{2}}sgn_{l_{2}(j)}(C_{l_{2}(j)})\right)\left(\prod_{k\in K''}sgn_{k}(C_{k})\right).$$ Pour $k\in K''$, le caract\`ere $sgn_{k}$ est non ramifi\'e. Pour calculer $sgn_{k}(C_{k})$, il suffit de calculer la valuation de $C_{k}$ modulo $2{\mathbb Z}$. On voit facilement que la valuation de $(-1)^n[E_{k}:F] X_{k}^{-1}P'(X_{k})$ est nulle. Celle de $\eta\varpi^{u_{k}}$ est $val_{F}(\eta)+u_{k}$. D'o\`u $sgn_{k}(C_{k})=(-1)^{val_{F}(\eta)+u_{k}}$. Le produit des $(-1)^{u_{k}}$ pour $k\in K''$ est \'egal \`a $\kappa^{{\cal U}}(u)$. Le produit des $(-1)^{val_{F}(\eta)}$ est $(-1)^{\vert K''\vert val_{F}(\eta)}$. Mais $(-1)^{\vert K''\vert }=sgn_{CD}(w'')$. D'o\`u $$(2) \qquad \prod_{k\in K''}sgn_{k}(C_{k})=\kappa^{{\cal U}}(u)sgn_{CD}(w'')^{val_{F}(\eta)}.$$ Fixons $j=1,...,t'_{2}$. Posons pour simplifier $l_{1}=l_{1}(j)$, $l_{2}=l_{2}(j)$ et notons $l$ l'\'el\'ement de $\hat{J}$ tel que $\{l_{1},l_{2}\}=\{l-1,l\}$. Le polyn\^ome $P$ se d\'ecompose en produit des polyn\^omes caract\'eristiques $P_{k}$ des $X_{k}$, pour $k=1,...,t$, et des polyn\^omes caract\'eristiques $P_{h}$ des $a_{h}$ pour $h=1,...,R$. On a $$P'(a_{l_{2}})=P'_{l_{2}}(a_{l_{2}})\left(\prod_{h=1,...,R; h\not=l_{2}}P_{h}(a_{l_{2}})\right)\left(\prod_{k=1,...,t}P_{k}(a_{l_{2}})\right).$$ On v\'erifie que tous les termes ci-dessus sauf le premier appartiennent \`a $F^{\natural}_{l_{2}}(\gamma)^{\times}$. Pour le premier, son produit avec $a_{l_{2}}$ appartient \`a ce groupe. On a calcul\'e les images par $sgn_{l_{2}}$ de presque tous ces termes en \cite{W3} page 400. Pour $k=1,...,t$, on a $sgn_{l_{2}}(P_{k}(a_{l_{2}}))=-sgn(-1)^{[E_{k}:F]/2}$. Parce que $[E_{k}:F]/2=\beta_{k}$, on a $\sum_{k=1,...,t}[E_{k}:F]/2=N$. On a aussi $(-1)^t=sgn_{CD}(w')sgn_{CD}(w'')$. 
On obtient $$sgn_{l_{2}}(\prod_{k=1,...,t}P_{k}(a_{l_{2}}))=sgn(-1)^Nsgn_{CD}(w')sgn_{CD}(w'').$$ Soit $h\in \{1,...,l-2\}$. On a $sgn_{l_{2}}(P_{h}(a_{l_{2}}))=sgn(-1)^{[F_{h}(\gamma):F]/2}$. Par construction, tous les $[F_{h}(\gamma):F]/2$ sont impairs et le terme ci-dessus vaut $sgn(-1)$. Puisque $l$ est pair, le produit de ces termes sur $h=1,...,l-2$ vaut $1$. Pour $h\in \{l+1,...,R\}$, on a $sgn_{l_{2}}(P_{h}(a_{l_{2}}))=sgn(-1)^{1+[F_{l_{2}}(\gamma):F]/2}sgn(\gamma_{l_{2}}\gamma_{h})$. Comme ci-dessus, $[F_{l_{2}}(\gamma):F]/2$ est impair et le terme ci-dessus vaut $sgn(\gamma_{l_{2}}\gamma_{h})$. Puisque $l$ est pair, le produit de ces termes sur $h=l+1,...,R$ vaut $sgn(\gamma_{l_{2}})^R\prod_{h=l+1,...,R}sgn(\gamma_{h})$. On a $sgn_{l_{2}}(a_{l_{2}}P'_{l_{2}}(a_{l_{2}}))=sgn((-1)^{[F_{l_{2}}(\gamma):F]/2}[F_{l_{2}}(\gamma):F])$, ou encore $sgn(-[F_{l_{2}}(\gamma):F])$. Il reste un dernier terme $P_{l_{1}}(a_{l_{2}})$, dont l'image par $sgn_{l_{2}}$ n'est pas calcul\'ee dans \cite{W3}. Le polyn\^ome caract\'eristique de $a_{l_{1}}$ est $P_{l_{1}}(Y)=Y^{2l-2}-\varpi\gamma_{l_{1}}$. Donc $P_{l_{1}}(a_{l_{2}})=a_{l_{2}}^{2l-2}-\varpi\gamma_{l_{1}}=\varpi(\gamma_{l_{2}}-\gamma_{l_{1}})$. Mais $-\varpi^{-1}\gamma_{l_{2}}$ est une norme de l'extension $F_{l_{2}}(\gamma)/F_{l_{2}}^{\natural}(\gamma)$ (c'est la norme de $\varpi^{-1}a_{l_{2}}^{l-1}$). On a donc $$sgn_{l_{2}}(P_{l_{1}}(a_{l_{2}}))=sgn_{l_{2}}(-\varpi^{-1}\gamma_{l_{2}}\varpi(\gamma_{l_{2}}-\gamma_{l_{1}}))=sgn_{l_{2}}(-\gamma_{l_{2}}(\gamma_{l_{2}}-\gamma_{l_{1}}))=sgn(-\gamma_{l_{2}}(\gamma_{l_{2}}-\gamma_{l_{1}})),$$ puisque $sgn_{l_{2}}$ co\"{\i}ncide avec $sgn$ sur $\mathfrak{o}^{\times}$. 
En rassemblant ces calculs, on obtient $$sgn_{l_{2}}(a_{l_{2}}P'(a_{l_{2}}))=sgn(-1)^Nsgn_{CD}(w')sgn_{CD}(w'')sgn([F_{l_{2}}(\gamma):F])sgn(\gamma_{l_{2}}-\gamma_{l_{1}})$$ $$sgn(\gamma_{l_{2}})^{R+1}\prod_{h=l+1,...,R}sgn(\gamma_{h}).$$ Le terme $C_{l_{2}}$ est le produit de $a_{l_{2}}P'(a_{l_{2}})$ et de $(-1)^n\eta[F_{l_{2}}(\gamma):F]c[\gamma,e]_{l_{2}}a_{l_{2}}^{-2}$. Le terme $a_{l_{2}}^{-2}$ se remplace par $-1$ car $-a_{l_{2}}^{-2}$ est la norme de $a_{l_{2}}^{-1}$. Le facteur $[F_{l_{2}}(\gamma):F]$ compense celui intervenant dans la formule pr\'ec\'edente. On a $$sgn_{l_{2}}(\eta)=sgn_{l_{2}}(\eta\varpi^{-val_{F}(\eta)})sgn_{l_{2}}(\varpi^{val_{F}(\eta)}).$$ Le premier facteur se remplace par $sgn(\eta\varpi^{-val_{F}(\eta)})$. Puisque $-\varpi^{-1}\gamma_{l_{2}}$ est une norme, le deuxi\`eme facteur se remplace par $sgn(-\gamma_{l_{2}})^{val_{F}(\eta)}$, ou encore, puisque $val_{F}(\eta)$ est de la m\^eme parit\'e que $R$, par $sgn(-\gamma_{l_{2}})^R$. En se reportant \`a la d\'efinition de $c[\gamma,e]_{l_{2}}$, on peut \'ecrire $c[\gamma,e]_{l_{2}}=(-1)^{l_{2}}\gamma_{l_{2}}e_{l_{2}}(-\gamma_{l_{2}})^{B}$, o\`u $B$ a \'et\'e d\'efini avant l'\'enonc\'e. D'o\`u $$sgn_{l_{2}}(c[\gamma,e]_{l_{2}})=sgn((-1)^{l_{2}}\gamma_{l_{2}}e_{l_{2}})sgn(\gamma_{l_{2}})^B.$$ On obtient alors $$sgn_{l_{2}}(C_{l_{2}})=sgn(-1)^{N+n+R+1+l_{2}}sgn(\eta\varpi^{-val_{F}(\eta)})sgn_{CD}(w')sgn_{CD}(w'') sgn(e_{l_{2}}) sgn(\gamma_{l_{2}}-\gamma_{l_{1}})$$ $$ sgn(-\gamma_{l_{2}})^B \prod_{h=l+1,...,R}sgn(\gamma_{h}).$$ On a $n-N=(r^{'2}+r^{''2})/2$. Puisque $r'$ et $r''$ sont de m\^eme parit\'e, on constate que $n-N$ est de la m\^eme parit\'e que $r'$ et $r''$, ou encore de $R$. Le premier terme se simplifie donc en $sgn(-1)^{1+l_{2}}$. On constate aussi que $(-1)^{1+l_{2}}(\gamma_{l_{2}}-\gamma_{l_{1}})=\gamma_{l-1}-\gamma_{l}$. 
D'o\`u $$(3) \qquad sgn_{l_{2}}(C_{l_{2}})= sgn(\eta\varpi^{-val_{F}(\eta)})sgn_{CD}(w')sgn_{CD}(w'') sgn(e_{l_{2}}) sgn(\gamma_{l-1}-\gamma_{l})$$ $$ sgn(-\gamma_{l_{2}})^B \prod_{h=l+1,...,R}sgn(\gamma_{h}).$$ R\'etablissons l'indice $j$ et faisons le produit de ces expressions sur $j=1,...,t'_{2}=(R-r)/2$. Les premiers termes donnent $sgn(\eta\varpi^{-val_{F}(\eta)})^{(R-r)/2}sgn_{CD}(w')^{(R-r)/2}sgn_{CD}(w'')^{(R-r)/2}$. Le produit des $sgn(e_{l_{2}(j)})$ est \'egal \`a $\kappa^{L_{2}}(e)$. Le produit des termes suivants est $\prod_{j\in \hat{J}}sgn(\gamma_{j-1}-\gamma_{j})$. On a $$\prod_{j=1,...,t'_{2}}sgn(-\gamma_{l_{2}(j)})=sgn(-1)^{t'_{2}}sgn(\prod_{j=1,...,t'_{2}}\gamma^{L_2}_{j}).$$ Puisque $t'_{2}$ est de la m\^eme parit\'e que $val_{F}(\eta[L_{2},\gamma])$, le premier terme vaut $sgn(-1)^{val_{F}(\eta[L_{2},\gamma])}$. Par d\'efinition de $\eta[L_{2},\gamma]$, le deuxi\`eme terme vaut $sgn(\eta[L_{2},\gamma]\varpi^{-val_{F}(\eta[L_{2},\gamma])})sgn_{CD}(w'')$. Consid\'erons le dernier produit de la formule (3). Pour $h\in \{R-r+1,...,R\}$, le terme $sgn(\gamma_{h})$ intervient pour tout $j$. Le produit en $j$ donne $sgn(\gamma_{h})^{(R-r)/2}$. Pour $h\in \hat{J}$, les termes $sgn(\gamma_{h-1})$ et $sgn(\gamma_{h})$ interviennent pour les $j< h/2$. Le produit donne $sgn(\gamma_{h-1}\gamma_{h})^{h/2-1}$. D'o\`u $$(4)\qquad \prod_{j=1,...,t'_{2}}sgn_{l_{2}(j)}(C_{l_{2}(j)})=\kappa^{L_{2}}(e)\left(sgn(\eta\varpi^{-val_{F}(\eta)})sgn_{CD}(w')sgn_{CD}(w'')\right)^{(R-r)/2}$$ $$\left(\prod_{j\in \hat{J}}sgn(\gamma_{j-1}\gamma_{j})^{j/2-1}sgn(\gamma_{j-1}-\gamma_{j})\right)\left(\prod_{j=R-r+1,...,R}sgn(\gamma_{j})^{(R-r)/2}\right)$$ $$sgn(-1)^{val_{F}(\eta[L_{2},\gamma])B}sgn(\eta[L_{2},\gamma]\varpi^{-val_{F}(\eta[L_{2},\gamma])})^Bsgn_{CD}(w'')^B.$$ Le lemme r\'esulte de (1), (2) et (4). 
$\square$ \bigskip \subsection{ D\'emonstration du (ii) du lemme 2.2} Soient $(\bar{n}_{1},\bar{n}_{2})\in D(n)$ et $\eta_{1},\eta_{2}\in F^{\times}/F^{\times2}$ tels que $\eta_{1}\eta_{2}=\eta$. Soient $\bar{Y}_{1}$, resp. $\bar{Y}_{2}$ un \'el\'ement r\'egulier, elliptique et topologiquement nilpotent de $\mathfrak{g}_{\bar{n}_{1},\eta_{1}}(F)$, resp. $\mathfrak{g}_{\bar{n}_{2},\eta_{2}}(F)$. On va calculer l'int\'egrale orbitale endoscopique $J^{endo}(\bar{Y}_{1},\bar{Y}_{2},f)$, cf. 1.2 dont les d\'efinitions s'adaptent aux alg\`ebres de Lie. Soit $\sharp=iso$ ou $an$. Pour $(L_{1},L_{2})\in {\cal L}$, $\gamma\in \Gamma$ et $a\in A(\gamma)$, posons $$(1) \qquad E_{\sharp}[a,L_{1},L_{2}](\bar{Y}_{1},\bar{Y}_{2})=d(r',r'',\gamma,L_{2})\sum_{Y}\Delta_{\bar{n}_{1},\eta_{1},\bar{n}_{2},\eta_{2}}((\bar{Y}_{1},\bar{Y}_{2}),Y)$$ $$\sum_{e\in {\cal E}}\sum_{u\in {\cal U}}\kappa^{L_{2}}(e)\kappa^{\cal U}(u)\hat{i}_{\sharp}[a,e,u](Y).$$ En utilisant le lemme 2.5, on calcule $$ J^{endo}(\bar{Y}_{1},\bar{Y}_{2},f_{\sharp})=\vert {\cal L}\vert ^{-1}\sum_{(L_{1},L_{2})\in {\cal L}}\sum_{\gamma\in \Gamma} C(\gamma,L_{2})\int_{A(\gamma)} E_{\sharp}[a,L_{1},L_{2}](\bar{Y}_{1},\bar{Y}_{2})\,da,$$ o\`u on a pos\'e $$C(\gamma,L_{2}) =d(r',r'',\gamma,L_{2})C(w')C(w'')C(r',r'')\alpha(r',r'',w',w'')\sigma(\gamma).$$ Par d\'efinition, on a l'\'egalit\'e $$ J^{endo}(\bar{Y}_{1},\bar{Y}_{2},f)=J^{endo}(\bar{Y}_{1},\bar{Y}_{2},f_{iso})+J^{endo}(\bar{Y}_{1},\bar{Y}_{2},f_{an}).$$ D'o\`u l'\'egalit\'e $$(2)\qquad J^{endo}(\bar{Y}_{1},\bar{Y}_{2},f)=\vert {\cal L}\vert ^{-1}\sum_{(L_{1},L_{2})\in {\cal L}}\sum_{\gamma\in \Gamma} C(\gamma,L_{2})\int_{A(\gamma)} E[a,L_{1},L_{2}](\bar{Y}_{1},\bar{Y}_{2})\,da,$$ o\`u on a pos\'e $$E[a,L_{1},L_{2}](\bar{Y}_{1},\bar{Y}_{2})=E_{iso}[a,L_{1},L_{2}](\bar{Y}_{1},\bar{Y}_{2})+E_{an}[a,L_{1},L_{2}](\bar{Y}_{1},\bar{Y}_{2}).$$ Fixons $ \sharp=iso$ ou $an$, $(L_{1},L_{2})\in {\cal L}$, $\gamma\in \Gamma$ et $a\in A(\gamma)$. 
Rappelons que, pour $e\in {\cal E}$ et $u\in {\cal U}$, on a l'\'egalit\'e $\hat{i}_{\sharp}[a,e,u]=\frac{1}{2}(\hat{i}^+_{\sharp}[a,e,u]+\hat{i}^-_{\sharp}[a,e,u])$. Supposons $n_{1}\not=0$ et $n_{2}\not=0$. On a d\'efini en 2.6 une application qui, \`a $\zeta_{1},\zeta_{2}=\pm$ associe $\zeta=\pm$ de sorte que les classes de conjugaison stable de $(X^{\zeta_{1}}(a^{L_1},w'),X^{\zeta_{2}}(a^{L_2},w''))$ et $X^{\zeta}(a,e,u)$ se correspondent. Notons-la $Z$. Elle est surjective et ses fibres ont deux \'el\'ements. On a donc $$\hat{i}_{\sharp}[a,e,u]=\frac{1}{4}\sum_{\zeta_{1},\zeta_{2}=\pm}\hat{i}^{Z(\zeta_{1},\zeta_{2})}_{\sharp}[a,e,u].$$ Utilisons le lemme 2.6. Alors $$ \kappa^{L_{2}}(e)\kappa^{{\cal U}}(u)\hat{i}_{\sharp}[a,e,u]=\frac{1}{4}\sum_{\zeta_{1},\zeta_{2}=\pm}d(r',r'',\gamma,L_{2})$$ $$\Delta_{n_{1},\eta[L_{1},\gamma],n_{2},\eta[L_{2},\gamma]}((X^{\zeta_{1}}(a^{L_1},w'),X^{\zeta_{2}}(a^{L_2},w'')),X^{Z(\zeta_{1},\zeta_{2})}(a,e,u))\hat{i}^{Z(\zeta_{1},\zeta_{2})}_{\sharp}[a,e,u].$$ L'\'egalit\'e (1) se r\'ecrit $$(3) \qquad E_{\sharp}[a,L_{1},L_{2}](\bar{Y}_{1},\bar{Y}_{2})=\frac{1}{4}\sum_{\zeta_{1},\zeta_{2}=\pm} \sum_{Y}\Delta_{\bar{n}_{1},\eta_{1},\bar{n}_{2},\eta_{2}}((\bar{Y}_{1},\bar{Y}_{2}),Y) E_{\sharp}^{\zeta_{1},\zeta_{2}}[a,L_{1},L_{2}](Y),$$ o\`u $$E_{\sharp}^{\zeta_{1},\zeta_{2}}[a,L_{1},L_{2}](Y)=$$ $$ \sum_{e\in {\cal E}}\sum_{u\in {\cal U}}\Delta_{n_{1},\eta[L_{1},\gamma],n_{2},\eta[L_{2},\gamma]}((X^{\zeta_{1}}(a^{L_1},w'),X^{\zeta_{2}}(a^{L_2},w'')),X^{Z(\zeta_{1},\zeta_{2})}(a,e,u))\hat{i}^{Z(\zeta_{1},\zeta_{2})}_{\sharp}[a,e,u](Y).$$ Fixons $\zeta_{1}$ et $\zeta_{2}$. Dans la formule d\'efinissant $E_{\sharp}^{\zeta_{1},\zeta_{2}}[a,L_{1},L_{2}](Y)$, la somme est en fait limit\'ee aux $(e,u)$ tels que $\sharp(a,e,u)=\sharp$ (pour les autres, $\hat{i}^{Z(\zeta_{1},\zeta_{2})}_{\sharp}[a,e,u]$ est nulle). 
Les $X^{Z(\zeta_{1},\zeta_{2})}(a,e,u)$ parcourent un ensemble de repr\'esentants des classes de conjugaison par $G_{\sharp}(F)$ dans la classe de conjugaison stable correspondant \`a celle de $(X^{\zeta_{1}}(a^{L_1},w'),X^{\zeta_{2}}(a^{L_2},w''))$. On voit qu'aux diff\'erences de notations pr\`es et \`a une constante pr\`es, $E_{\sharp}^{\zeta_{1},\zeta_{2}}[a,L_{1},L_{2}](Y)$ n'est autre que le membre de droite de l'\'egalit\'e de la conjecture 2 de \cite{W4} VIII.7. Cette conjecture est d\'emontr\'ee depuis que Ngo Bao Chau a d\'emontr\'e le lemme fondamental. On va appliquer cette conjecture. Posons $$C_{\sharp}(n_{1},\eta[L_{1},\gamma],n_{2},\eta[L_{2},\gamma])=\gamma_{\psi_{F}}(\mathfrak{g}_{n_{1},\eta[L_{1},\gamma],iso}(F))\gamma_{\psi_{F}}(\mathfrak{g}_{n_{2},\eta[L_{2},\gamma],iso}(F))\gamma_{\psi_{F}}(\mathfrak{g}_{\sharp}(F))^{-1},$$ avec les d\'efinitions de \cite{W4} VIII.5. Pour $j=1,2$ et pour un \'el\'ement r\'egulier $Y_{j}$ de $\mathfrak{g}_{n_{j},\eta[L_{j},\gamma],iso}(F)$, notons $z_{iso}(Y_{j})$ le nombre de classes de conjugaison par $G_{n_{j},\eta[L_{j},\gamma],iso}(F)$ dans la classe de conjugaison stable de $Y_{j}$. Rappelons que $X^{\zeta_{1}}(a^{L_{1}},w')$ est un \'el\'ement d'une certaine classe de conjugaison stable, cf. 2.6. Un ensemble de repr\'esentants des classes de conjugaison par $G_{n_{1},\eta[L_{1},\gamma],iso}(F)$ dans cette classe de conjugaison stable est form\'e d'\'el\'ements $X^{\zeta_{1}}(a^{L_{1}},e_{1},u_{1})$, o\`u $e_{1}$ et $u_{1}$ d\'ecrivent des ensembles ${\cal E}_{1}$ et ${\cal U}_{1}$ analogues \`a ${\cal E}$ et ${\cal U}$, avec la restriction $\sharp(a^{L_{1}},e_{1},u_{1})=iso$. On d\'efinit pour ces \'el\'ements une fonction $\hat{i}^{\zeta_{1}}_{iso}[a^{L_{1}},e_{1},u_{1}]$ similaire \`a $\hat{i}^{\zeta}_{\sharp}[a,e,u]$. Si la condition $\sharp(a^{L_{1}},e_{1},u_{1})=iso$ n'est pas v\'erifi\'ee, on pose $\hat{i}^{\zeta_{1}}_{iso}[a^{L_{1}},e_{1},u_{1}]=0$. 
On pose des d\'efinitions analogues en rempla\c{c}ant l'indice $1$ par $2$ (et $w'$ par $w''$). Alors la conjecture 2 de \cite{W4} VIII.7 fournit l'\'egalit\'e $$(4) \qquad E_{\sharp}^{\zeta_{1},\zeta_{2}}[a,L_{1},L_{2}](Y)=C_{\sharp}(n_{1},\eta[L_{1},\gamma],n_{2},\eta[L_{2},\gamma])\sum_{ e_{1}\in {\cal E}_{1},u_{1}\in {\cal U}_{1},e_{2}\in {\cal E}_{2},u_{2}\in {\cal U}_{2}}\sum_{Y_{1},Y_{2}}$$ $$z_{iso}(Y_{1})^{-1}z_{iso}(Y_{2})^{-1}\Delta_{n_{1},\eta[L_{1},\gamma],n_{2},\eta[L_{2},\gamma]}((Y_{1},Y_{2}),Y)\hat{i}_{iso}^{\zeta_{1}}[a^{L_{1}},e_{1},u_{1}](Y_{1})\hat{i}_{iso}^{\zeta_{2}}[a^{L_{2}},e_{2},u_{2}](Y_{2}),$$ o\`u l'on somme sur les $(Y_{1},Y_{2})$, elliptiques r\'eguliers dans $\mathfrak{g}_{n_{1},\eta[L_{1},\gamma],iso}(F) \times \mathfrak{g}_{n_{2},\eta[L_{2},\gamma],iso}(F)$, \`a conjugaison pr\`es par $G_{n_{1},\eta[L_{1},\gamma],iso}(F)\times G_{n_{2},\eta[L_{2},\gamma],iso}(F)$. Calculons $C_{\sharp}(n_{1},\eta[L_{1},\gamma],n_{2},\eta[L_{2},\gamma])$. D'apr\`es \cite{MW} 3.15, on a les \'egalit\'es $\gamma_{\psi_{F}}(\mathfrak{g}_{\sharp}(F))=\gamma_{\psi_{F}}^{n}sgn(\eta\varpi^{-val_{F}(\eta)})$, si $val_{F}(\eta)$ est pair; $\gamma_{\psi_{F}}(\mathfrak{g}_{\sharp}(F))=\gamma_{\psi_{F}}^{n-1}$ si $val_{F}(\eta)$ est impair. Le terme $\gamma_{\psi_{F}}$ est une constante de Weil \'el\'ementaire. Il v\'erifie l'\'egalit\'e $\gamma_{\psi_{F}}^2=sgn(-1)$. On voit tout d'abord que les termes ci-dessus ne d\'ependent pas de l'indice $\sharp$. La constante $C_{\sharp}(n_{1},\eta[L_{1},\gamma],n_{2},\eta[L_{2},\gamma])$ non plus, on la note simplement $C(n_{1},\eta[L_{1},\gamma],n_{2},\eta[L_{2},\gamma])$. Pour $j=1,2$, on a des formules analogues pour $\gamma_{\psi_{F}}(\mathfrak{g}_{n_{j},\eta[L_{j},\gamma],iso}(F))$. 
On obtient les \'egalit\'es $$ C(n_{1},\eta[L_{1},\gamma],n_{2},\eta[L_{2},\gamma])=\left\lbrace\begin{array}{cc}1,& pour\,\ val_{F}(\eta[L_{1},\gamma]) \,\,pair\\ &et\,\,val_{F}(\eta[L_{2},\gamma])\,\,pair;\\ sgn(\eta[L_{1},\gamma]\varpi^{-val_{F}(\eta[L_{1},\gamma])}),&pour\,\ val_{F}(\eta[L_{1},\gamma]) \,\,pair\\ &et\,\,val_{F}(\eta[L_{2},\gamma])\,\,impair;\\ sgn(\eta[L_{2},\gamma]\varpi^{-val_{F}(\eta[L_{2},\gamma])}),&pour\,\ val_{F}(\eta[L_{1},\gamma]) \,\,impair\\ &et\,\,val_{F}(\eta[L_{2},\gamma])\,\,pair;\\ sgn(-\eta\varpi^{-val_{F}(\eta)}),&pour\,\ val_{F}(\eta[L_{1},\gamma]) \,\,impair\\ &et\,\,val_{F}(\eta[L_{2},\gamma])\,\,impair.\\ \end{array}\right.$$ Ins\'erons l'\'egalit\'e (4) dans la formule (3). Pour $j=1,2$, posons $$\hat{i}_{iso}[a^{L_{j}},e_{j},u_{j}]=\frac{1}{2}(\hat{i}_{iso}^{+}[a^{L_{j}},e_{j},u_{j}]+\hat{i}_{iso}^{-}[a^{L_{j}},e_{j},u_{j}]).$$ On obtient $$ E_{\sharp}[a,L_{1},L_{2}](\bar{Y}_{1},\bar{Y}_{2})= C(n_{1},\eta[L_{1},\gamma],n_{2},\eta[L_{2},\gamma]) \sum_{ e_{1}\in {\cal E}_{1},u_{1}\in {\cal U}_{1},e_{2}\in {\cal E}_{2},u_{2}\in {\cal U}_{2}}\sum_{Y_{1},Y_{2}}$$ $$z_{iso}(Y_{1})^{-1}z_{iso}(Y_{2})^{-1}S_{\sharp,L_{1},L_{2},\gamma}(\bar{Y}_{1},\bar{Y}_{2},Y_{1},Y_{2})\hat{i}_{iso}[a^{L_{1}},e_{1},u_{1}](Y_{1})\hat{i}_{iso}[a^{L_{2}},e_{2},u_{2}](Y_{2}),$$ o\`u $$S_{\sharp,L_{1},L_{2},\gamma}(\bar{Y}_{1},\bar{Y}_{2},Y_{1},Y_{2})=\sum_{Y}\Delta_{\bar{n}_{1},\eta_{1},\bar{n}_{2},\eta_{2}}((\bar{Y}_{1},\bar{Y}_{2}),Y)\Delta_{n_{1},\eta[L_{1},\gamma],n_{2},\eta[L_{2},\gamma]}((Y_{1},Y_{2}),Y).$$ Rappelons que $Y$ d\'ecrit ici les \'el\'ements elliptiques r\'eguliers de $\mathfrak{g}_{\sharp}(F)$, \`a conjugaison pr\`es par $G_{\sharp}(F)$. 
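Le tableau \`a quatre cas donnant $C(n_{1},\eta[L_{1},\gamma],n_{2},\eta[L_{2},\gamma])$ se d\'eduit des formules rappel\'ees ci-dessus pour $\gamma_{\psi_{F}}(\mathfrak{g}_{\sharp}(F))$; on peut le v\'erifier m\'ecaniquement. Esquisse minimale, o\`u l'on ne suit que l'exposant de $\gamma_{\psi_{F}}$ (avec $\gamma_{\psi_{F}}^2=sgn(-1)$) et les facteurs de signe, sous les hypoth\`eses $n=n_{1}+n_{2}$ et $val_{F}(\eta)\equiv val_{F}(\eta[L_{1},\gamma])+val_{F}(\eta[L_{2},\gamma])$ mod $2$ :

```python
def gamma_weil(n, val_parity, s):
    # gamma_{psi_F}(g_{n,eta}(F)) code par (exposant de gamma_psi, facteur de signe):
    # gamma_psi^n * sgn(eta pi^{-val(eta)}) si val(eta) est paire, gamma_psi^{n-1} sinon.
    return (n, s) if val_parity % 2 == 0 else (n - 1, 1)

def reduce_const(exp, sign, eps):
    # gamma_psi^2 = sgn(-1) = eps; l'exposant doit etre pair pour obtenir un signe.
    assert exp % 2 == 0
    return eps ** ((exp // 2) % 2) * sign

def C_endo(n1, v1, s1, n2, v2, s2, eps):
    # C = gamma(g_{n1}) gamma(g_{n2}) gamma(g)^{-1}, avec n = n1 + n2,
    # val(eta) = v1 + v2 mod 2 et sgn(eta pi^{-val(eta)}) = s1 * s2.
    e1, g1 = gamma_weil(n1, v1, s1)
    e2, g2 = gamma_weil(n2, v2, s2)
    e, g = gamma_weil(n1 + n2, (v1 + v2) % 2, s1 * s2)
    return reduce_const(e1 + e2 - e, g1 * g2 * g, eps)  # g^{-1} = g pour un signe

def C_table(v1, s1, v2, s2, eps):
    # Le tableau a quatre cas du texte, selon les parites des valuations.
    if v1 % 2 == 0 and v2 % 2 == 0:
        return 1
    if v1 % 2 == 0:
        return s1
    if v2 % 2 == 0:
        return s2
    return eps * s1 * s2  # sgn(-eta pi^{-val(eta)})
```

Une \'enum\'eration de tous les cas (parit\'es, signes, valeurs de $sgn(-1)$) confirme que les deux calculs co\"{\i}ncident.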
Puisque $E[a,L_{1},L_{2}](\bar{Y}_{1},\bar{Y}_{2})$ est la somme de $E_{iso}[a,L_{1},L_{2}](\bar{Y}_{1},\bar{Y}_{2})$ et de $E_{an}[a,L_{1},L_{2}](\bar{Y}_{1},\bar{Y}_{2})$, on a $$(5) \qquad E[a,L_{1},L_{2}](\bar{Y}_{1},\bar{Y}_{2})= C(n_{1},\eta[L_{1},\gamma],n_{2},\eta[L_{2},\gamma]) \sum_{ e_{1}\in {\cal E}_{1},u_{1}\in {\cal U}_{1},e_{2}\in {\cal E}_{2},u_{2}\in {\cal U}_{2}}\sum_{Y_{1},Y_{2}}$$ $$z_{iso}(Y_{1})^{-1}z_{iso}(Y_{2})^{-1}S_{L_{1},L_{2},\gamma}(\bar{Y}_{1},\bar{Y}_{2},Y_{1},Y_{2})\hat{i}_{iso}[a^{L_{1}},e_{1},u_{1}](Y_{1})\hat{i}_{iso}[a^{L_{2}},e_{2},u_{2}](Y_{2}),$$ o\`u $$S_{L_{1},L_{2},\gamma}(\bar{Y}_{1},\bar{Y}_{2},Y_{1},Y_{2})=S_{iso,L_{1},L_{2},\gamma}(\bar{Y}_{1},\bar{Y}_{2},Y_{1},Y_{2})+S_{an,L_{1},L_{2},\gamma}(\bar{Y}_{1},\bar{Y}_{2},Y_{1},Y_{2}).$$ On a suppos\'e $n_{1}\not=0$ et $n_{2}\not=0$. Supposons que ces conditions ne sont pas v\'erifi\'ees, par exemple $n_{2}=0$. On reprend le calcul. Les seules diff\'erences sont que les termes index\'es par $2$ doivent dispara\^{\i}tre. En particulier les $\zeta_{2}$. Les $\frac{1}{4}$ figurant dans les calculs sont remplac\'es par des $\frac{1}{2}$. On obtient encore la formule (5), dont on fait dispara\^{\i}tre les termes index\'es par $2$. Les propri\'et\'es d'inversion des facteurs de transfert nous disent que $S_{L_{1},L_{2},\gamma}(\bar{Y}_{1},\bar{Y}_{2},Y_{1},Y_{2})=0$ sauf si $\bar{n}_{1}=n_{1}$, $\eta_{1}=\eta[L_{1},\gamma]$, $\bar{n}_{2}=n_{2}$ et $\eta_{2}=\eta[L_{2},\gamma]$. Supposons que $(\bar{n}_{1},\bar{n}_{2})\not=(n_{1},n_{2})$ ou que $(\bar{n}_{1},\bar{n}_{2})=(n_{1},n_{2})$ mais que $(\eta_{1},\eta_{2})$ ne v\'erifie pas la condition (1) de 2.2 (c'est-\`a-dire $val_{F}(\eta_{1})\equiv t'_{1}=t''_{1}\,\,mod \,\,2{\mathbb Z}$ et $val_{F}(\eta_{2})\equiv t'_{2}=t''_{2}\,\,mod \,\,2{\mathbb Z}$). Remarquons que les couples $(\eta[L_{1},\gamma],\eta[L_{2},\gamma])$ d\'ependent de $L_{1}$, $L_{2}$ et $\gamma$ mais v\'erifient par construction cette condition. 
Then $S_{L_{1},L_{2},\gamma}(\bar{Y}_{1},\bar{Y}_{2},Y_{1},Y_{2})$ always vanishes, hence so does $E[a,L_{1},L_{2}](\bar{Y}_{1},\bar{Y}_{2})$. Formula (2) implies $J^{endo}(\bar{Y}_{1},\bar{Y}_{2},f)=0$. This being true for all $\bar{Y}_{1},\bar{Y}_{2}$, we get $transfert_{\bar{n}_{1},\eta_{1},\bar{n}_{2},\eta_{2}}(f)=0$. This proves (ii) of lemma 2.2. \bigskip \subsection{Proof of (i) of lemma 2.2} We continue the computation, now assuming that $\bar{n}_{1}=n_{1}$, $\bar{n}_{2}=n_{2}$ and that $(\eta_{1},\eta_{2})$ satisfies condition (1) of 2.2. The term $E[a,L_{1},L_{2}](\bar{Y}_{1},\bar{Y}_{2})$ is nonzero only if $\eta[L_{1},\gamma]=\eta_{1}$ and $\eta[L_{2},\gamma]=\eta_{2}$. The pair $(L_{1},L_{2})$ being fixed, we write $\Gamma[L_{1},L_{2}]$ for the set of $\gamma\in \Gamma$ for which this condition holds. Suppose it does. Then $S_{L_{1},L_{2},\gamma}(\bar{Y}_{1},\bar{Y}_{2},Y_{1},Y_{2})$ is nonzero if and only if $( Y_{1},Y_{2})$ is stably conjugate to a pair deduced from $(\bar{Y}_{1},\bar{Y}_{2})$ by an automorphism of the endoscopic datum defined by $(n_{1},n_{2})$. If $(Y_{1},Y_{2})$ is such an element, $S_{L_{1},L_{2},\gamma}(\bar{Y}_{1},\bar{Y}_{2},Y_{1},Y_{2})$ equals the number of terms of the sum, that is, the number $z(\bar{Y}_{1},\bar{Y}_{2})$ of conjugacy classes in the full stable conjugacy class of $\mathfrak{g}_{iso}(F)\cup \mathfrak{g}_{an}(F)$ corresponding to that of $(\bar{Y}_{1},\bar{Y}_{2})$. One must be careful here: because we work simultaneously with the two groups $G_{iso}$ and $G_{an}$, we have refined the notion of endoscopic datum, and the notion of automorphism of such a datum must be refined accordingly. 
One sees that, if $n_{1}$ and $n_{2}$ are both nonzero, there are two automorphisms of our datum: besides the identity, there is the action of an element of $O^-(Q_{n_{1},\eta_{1},iso})\times O^-(Q_{n_{2},\eta_{2},iso})$. {\bf Remark.} In the case where $n_{1}=n_{2}$ and $\eta_{1}=\eta_{2}$, the permutation of the two groups $G_{n_{1},\eta_{1},iso}$ and $G_{n_{2},\eta_{2},iso}$ is not an automorphism for our notion of endoscopic datum. \bigskip Let $(Y_{1},Y_{2})$ be a pair stably conjugate to $(\bar{Y}_{1},\bar{Y}_{2})$. Write $(\underline{Y}_{1},\underline{Y}_{2})$ for the pair deduced from $( Y_{1},Y_{2})$ by the nontrivial automorphism. It is not stably conjugate to $(\bar{Y}_{1},\bar{Y}_{2})$. But the functions $\hat{i}_{iso}[a^{L_{1}},e_{1},u_{1}]$ and $\hat{i}_{iso}[a^{L_{2}},e_{2},u_{2}]$ are invariant under conjugation not only by the special orthogonal groups, but by the full orthogonal groups. This follows from the definition of these functions: for instance, $\hat{i}_{iso}[a^{L_{1}},e_{1},u_{1}]$ is the sum of two functions attached to $X^+(a^{L_{1}},e_{1},u_{1})$ and $X^-(a^{L_{1}},e_{1},u_{1})$, and these two elements are precisely conjugate by an element of the orthogonal group of determinant $-1$. We deduce the equality $$\hat{i}_{iso}[a^{L_{1}},e_{1},u_{1}](\underline{Y}_{1})\hat{i}_{iso}[a^{L_{2}},e_{2},u_{2}](\underline{Y}_{2})=\hat{i}_{iso}[a^{L_{1}},e_{1},u_{1}]( Y_{1})\hat{i}_{iso}[a^{L_{2}},e_{2},u_{2}](Y_{2}).$$ In formula (5) of the preceding subsection, the pairs $(Y_{1},Y_{2})$ are simply those stably conjugate to $(\bar{Y}_{1},\bar{Y}_{2})$, together with their images $(\underline{Y}_{1},\underline{Y}_{2})$ under the automorphism. The above shows that the latter contribute the same amount as the former. We may therefore sum only over the former, multiplying the whole by $2$. 
One easily computes $z(\bar{Y}_{1},\bar{Y}_{2})=4z_{iso}(\bar{Y}_{1})z_{iso}(\bar{Y}_{2})$. Setting $\beta=1$ (under our hypothesis $n_{1}\not=0$, $n_{2}\not=0$), formula (5) becomes $$E[a,L_{1},L_{2}](\bar{Y}_{1},\bar{Y}_{2})=2^{1+2\beta} C(n_{1},\eta _{1},n_{2},\eta _{2}) \sum_{(Y_{1},Y_{2})}$$ $$\sum_{ e_{1}\in {\cal E}_{1},u_{1}\in {\cal U}_{1},e_{2}\in {\cal E}_{2},u_{2}\in {\cal U}_{2}}\hat{i}_{iso}[a^{L_{1}},e_{1},u_{1}](Y_{1})\hat{i}_{iso}[a^{L_{2}},e_{2},u_{2}](Y_{2}),$$ where the sum runs over the pairs $(Y_{1},Y_{2})$ stably conjugate to $(\bar{Y}_{1} ,\bar{Y}_{2})$, taken up to conjugation. If for instance $n_{2}=0$, our endoscopic datum has no automorphism other than the identity, but this time we have the equality $z(\bar{Y}_{1})=2z_{iso}(\bar{Y}_{1})$. The computation leads to the same equality as above, now with $\beta=0$. Formula (2) of 2.7 becomes $$(1) \qquad J^{endo}(\bar{Y}_{1},\bar{Y}_{2},f)=2^{1+2\beta}C(n_{1},\eta _{1},n_{2},\eta _{2})\vert {\cal L}\vert ^{-1} \sum_{(Y_{1},Y_{2})}\sum_{(L_{1},L_{2})\in {\cal L}}\sum_{\gamma\in \Gamma[L_{1},L_{2}]}C(\gamma,L_{2})\int_{A(\gamma)}$$ $$ \sum_{ e_{1}\in {\cal E}_{1},u_{1}\in {\cal U}_{1},e_{2}\in {\cal E}_{2},u_{2}\in {\cal U}_{2}}\hat{i}_{iso}[a^{L_{1}},e_{1},u_{1}](Y_{1})\hat{i}_{iso}[a^{L_{2}},e_{2},u_{2}](Y_{2})\,da,$$ the sum over $(Y_{1},Y_{2})$ being as above. Recall that in 2.4 we fixed sets $\Gamma_{0}$ and $\Gamma_{j}$ for $j=(R-r)/2+1,...,R$. Introduce the set $\boldsymbol{\Gamma}$ of families $\Gamma^*=((\Gamma_{1,j})_{j=1,...,t'_{1}},(\Gamma_{2,j})_{j=1,...,t'_{2}})$ satisfying the following conditions: for $j=t'_{2}+1,...,t'_{1}$, $\Gamma_{1,j}=\Gamma_{j+(R-r)/2}$; for $j=1,...,t'_{2}$, $\Gamma_{1,j}$, resp. 
$\Gamma_{2,j}$, is a two-element subset of $\Gamma_{0}$ which is a set of representatives of $\mathfrak{o}^{\times}/\mathfrak{o}^{\times2}$; we require $\Gamma_{1,j}\cap \Gamma_{2,j}=\emptyset$. For such a family, write $\Gamma^*_{\eta_{1}}$ for the set of families $\gamma_{1}=(\gamma_{1,j})_{j=1,...,t'_{1}}\in \prod_{j=1,...,t'_{1}}\Gamma_{1,j} $ satisfying the condition $sgn(\eta_{1}\varpi^{-val_{F}(\eta_{1})}\prod_{j=1,...,t'_{1}}\gamma_{1,j})=sgn_{CD}(w')$. One defines the set $\Gamma^*_{\eta_{2}}$ by changing the indices $1$ to $2$ and $w'$ to $w''$. Fix $(L_{1},L_{2})\in {\cal L}$. For $(\gamma_{1},\gamma_{2})\in \Gamma^*_{\eta_{1}}\times \Gamma^*_{\eta_{2}}$, define a family $\gamma=(\gamma_{l})_{l=1,...,R}$ as follows: for $j= t'_{2}+1,...,t'_{1}$, $\gamma_{j+(R-r)/2}=\gamma_{1,j}$; for $j=1,...,t'_{2}$, $\gamma_{l_{1}(j)}=\gamma_{1,j}$ and $\gamma_{l_{2}(j)}=\gamma_{2,j}$. One checks that this family belongs to $\Gamma$. This defines a map $$\pi_{L_{1},L_{2}}:\sqcup_{\Gamma^*\in \boldsymbol{\Gamma}}\Gamma^*_{\eta_{1}}\times \Gamma^*_{\eta_{2}}\to \Gamma.$$ Let us show that (2) the image of $\pi_{L_{1},L_{2}}$ equals $\Gamma[L_{1},L_{2}]$; for $\gamma=(\gamma_{l})_{l=1,...,R}\in \Gamma[L_{1},L_{2}]$, the fiber of $\pi_{L_{1},L_{2}}$ above $\gamma$ has number of elements $$\sigma^*(\gamma)=(\frac{q-3}{4})^{t'_{2}}\prod_{l\in \hat{J}}(q-2+sgn(\gamma_{l-1}\gamma_{l})).$$ Let $\Gamma^*\in \boldsymbol{\Gamma}$ and $(\gamma_{1},\gamma_{2})\in \Gamma^*_{\eta_{1}}\times \Gamma^*_{\eta_{2}}$. Set $\gamma=\pi_{L_{1},L_{2}}(\gamma_{1},\gamma_{2})$. In 2.6 we defined elements $\gamma^{L_{1}}$ and $\gamma^{L_{2}}$. They are none other than $\gamma_{1}$ and $\gamma_{2}$. 
The term $\eta[L_{2},\gamma]$ defined in 2.6 is characterized by the relations $val_{F}(\eta[L_{2},\gamma])\equiv t'_{2}\,\,mod \,\,2{\mathbb Z}$ and $sgn(\eta[L_{2},\gamma]\varpi^{-val_{F}(\eta[L_{2},\gamma])}\prod_{j=1,...,t'_{2}}\gamma_{2,j})=sgn_{CD}(w'')$. By hypothesis, $\eta_{2}$ satisfies the first relation. By definition of $\Gamma^*_{\eta_{2}}$, $\eta_{2}$ also satisfies the second. Hence $\eta[L_{2},\gamma]=\eta_{2}$, and then $\eta[L_{1},\gamma]=\eta_{1}$ since $\eta[L_{1},\gamma]\eta[L_{2},\gamma]=\eta=\eta_{1}\eta_{2}$. By definition of $\Gamma[L_{1},L_{2}]$, $\gamma$ therefore belongs to this set, which proves the first assertion. Now let $\gamma\in \Gamma[L_{1},L_{2}]$. The first part of the proof shows that the number of elements of the fiber of $\pi_{L_{1},L_{2}}$ above $\gamma$ equals the number of elements $\Gamma^*\in \boldsymbol{\Gamma}$ such that $(\gamma^{L_{1}},\gamma^{L_{2}})$ belongs to $(\prod_{j=1,...,t'_{1}}\Gamma_{1,j} )\times (\prod_{j=1,...,t'_{2}}\Gamma_{2,j})$. This number is the product over $j=1,...,t'_{2}$ of the number of possible pairs $(\Gamma_{1,j},\Gamma_{2,j})$ whose product contains $(\gamma_{1,j},\gamma_{2,j})$. To simplify notation, set $x=\gamma_{1,j}$, $y=\gamma_{2,j}$. The elements $ x$ and $y$ belong to $\Gamma_{0}$ and are distinct, since they come from $\gamma\in \Gamma$. The possible pairs $(\Gamma_{1,j},\Gamma_{2,j})$ are of the form $\Gamma_{1,j}=\{ x,x'\}$, $\Gamma_{2,j}=\{ y,y'\}$, where $ x'$ and $y'$ are two elements of $\Gamma_{0}$ satisfying the following conditions: $x'\not\in x\mathfrak{o}^{\times2}$, $y'\not\in y\mathfrak{o}^{\times2}$, $x'\not=y$, $y'\not=x$ and $x'\not=y'$. Write $C'_{x}$ for the set of $x'\in \Gamma_{0}$ such that $x'\not\in x\mathfrak{o}^{\times2}$, and define $C'_{y}$ likewise. Suppose $y\in x\mathfrak{o}^{\times2}$, in other words $sgn(xy)=1$. Then $C'_{x}=C'_{y}$. 
The first two relations say that $x',y'\in C'_{x}$. This entails the two relations $x'\not=y$ and $y'\not=x$. The last relation says that $(x',y')$ does not lie on the diagonal of $C'_{x}\times C'_{x}$. The number of possible $(x',y')$ is therefore $\vert C'_{x}\vert ^2-\vert C'_{x}\vert $. We have $\vert C'_{x}\vert =(q-1)/2$. The number of possible pairs is therefore $(q-1)(q-3)/4$, that is, $(q-2+sgn(xy))(q-3)/4$. Suppose now $y\not\in x\mathfrak{o}^{\times2}$, in other words $sgn(xy)=-1$. Then $C'_{x}\not=C'_{y}$. The first and third conditions say that $x'\in C'_{x}-\{y\}$. Likewise, the second and fourth conditions say that $y'\in C'_{y}-\{x\}$. The fifth condition is automatic. The number of possible $(x',y')$ is therefore $(\vert C'_{x}\vert -1)(\vert C'_{y}\vert -1)=((q-3)/2)^2$, that is, $(q-2+sgn(xy))(q-3)/4$. Assertion (2) follows. A similar computation proves the following relation: (3) the number of elements of $\boldsymbol{\Gamma}$ equals $2^{-4t'_{2}}(q-1)^{2t'_{2}}(q-3)^{2t'_{2}}$. For $\Gamma^*\in \boldsymbol{\Gamma}$, $(L_{1},L_{2})\in {\cal L}$ and $(Y_{1},Y_{2})$ stably conjugate to $(\bar{Y}_{1},\bar{Y}_{2})$, set $$(4) \qquad {\cal J}_{\Gamma^*,L_{1},L_{2}}(Y_{1},Y_{2})=2^{1+2\beta}C(n_{1},\eta_{1},n_{2},\eta_{2})\sum_{\gamma_{1}\in \Gamma^*_{\eta_{1}}}\sum_{\gamma_{2}\in \Gamma^*_{\eta_{2}}}\sigma^*( \gamma)^{-1}C(\gamma,L_{2})$$ $$\int_{A(\gamma)} \sum_{ e_{1}\in {\cal E}_{1},u_{1}\in {\cal U}_{1},e_{2}\in {\cal E}_{2},u_{2}\in {\cal U}_{2}}\hat{i}_{iso}[a^{L_{1}},e_{1},u_{1}](Y_{1})\hat{i}_{iso}[a^{L_{2}},e_{2},u_{2}](Y_{2})\,da$$ where, to simplify, we have set $\gamma=\pi_{L_{1},L_{2}}(\gamma_{1},\gamma_{2})$. 
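The two counting assertions above lend themselves to a brute-force check. The following sketch is our own illustration, not part of the argument: it models $\Gamma_{0}$ by ${\mathbb F}_{q}^{\times}$ for a few small primes $q$, with $sgn$ given by the Legendre symbol, and verifies both the fiber count of assertion (2) and the per-index factor $2^{-4}(q-1)^{2}(q-3)^{2}$ implicit in assertion (3).

```python
# Brute-force check over F_q^x, for small primes q, of two counting claims:
#  - assertion (2): the number of pairs (x', y') with x' not in x*(squares),
#    y' not in y*(squares), x' != y, y' != x, x' != y' equals
#    (q - 2 + sgn(xy)) (q - 3) / 4;
#  - assertion (3), one index j: the number of pairs (Gamma_1j, Gamma_2j) of
#    disjoint two-element sets of representatives of F_q^x modulo squares
#    equals 2^{-4} (q-1)^2 (q-3)^2.
# Gamma_0 is modelled by F_q^x = {1, ..., q-1}; sgn is the Legendre symbol.
from itertools import combinations

def sgn(a, q):                      # Legendre symbol of a mod q (a nonzero mod q)
    return 1 if pow(a, (q - 1) // 2, q) == 1 else -1

for q in (7, 11, 19):
    units = range(1, q)
    # assertion (2)
    for x in units:
        for y in units:
            if x == y:
                continue
            count = sum(1 for xp in units for yp in units
                        if sgn(xp * x, q) == -1 and sgn(yp * y, q) == -1
                        and xp != y and yp != x and xp != yp)
            assert count == (q - 2 + sgn(x * y, q)) * (q - 3) // 4
    # assertion (3), one factor j
    reps = [frozenset(s) for s in combinations(units, 2)
            if sgn(s[0] * s[1], q) == -1]   # two-element representative sets
    pairs = sum(1 for g1 in reps for g2 in reps if not (g1 & g2))
    assert pairs == (q - 1) ** 2 * (q - 3) ** 2 // 16
print("counting checks passed")
```

For $q=7$, for instance, the squares are $\{1,2,4\}$, there are $9$ admissible sets $\Gamma_{1,j}$, and $36=2^{-4}\cdot6^{2}\cdot4^{2}$ disjoint pairs, as the formula predicts.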
Using (2), we obtain the equality $$(5) \qquad J^{endo}(\bar{Y}_{1},\bar{Y}_{2},f)= \vert {\cal L}\vert ^{-1}\sum_{Y_{1},Y_{2}} \sum_{(L_{1},L_{2})\in {\cal L}}\sum_{\Gamma^*\in \boldsymbol{\Gamma}}{\cal J}_{\Gamma^*,L_{1},L_{2}}(Y_{1},Y_{2}).$$ Fix $\Gamma^*\in \boldsymbol{\Gamma}$, $(L_{1},L_{2})\in {\cal L}$ and $(Y_{1},Y_{2})$ stably conjugate to $(\bar{Y}_{1},\bar{Y}_{2})$. The functions $f_{1,\eta_{1}}$ and $f_{2,\eta_{2}}$ appearing in lemma 2.2 are special cases of our function $f$. Consider for instance the case of the index $1$. One passes from $f$ to $f_{1,\eta_{1}}$ by replacing $n$, $\eta$, $r'$, $r''$, $N'$, $N''$, $w'$ by $n_{1}$, $\eta_{1}$, $t'_{1}$, $t''_{1}$, $N'$, $0$, $w'$. The term corresponding to $w''$ disappears. The orbital integral $J(Y_{1},f_{1,\eta_{1},iso})$ is computed by a formula analogous to that of lemma 2.4. We may choose the set $\Gamma^*_{\eta_{1}}$ as the analogue of $\Gamma$. Since $t'_{1}=t''_{1}$, the analogue of $\hat{J}$ is empty. The analogues of ${\cal E}^0$ and ${\cal U}$ are none other than the sets ${\cal E}_{1}$ and ${\cal U}_{1}$ already introduced. The analogue of $\kappa^0$ is trivial. Because $w''$ disappears, the analogue of $\kappa^{{\cal U}}$ is trivial. The analogue of the constant $C(r',r'')$ is $2^{1-2t'_{1}}$. We write $\alpha(t'_{1},w',\eta_{1})$ for the analogue of the constant $\alpha(r',r'',w',w'')$. We therefore have $$(6) \qquad J(Y_{1},f_{1,\eta_{1},iso})=C(w')2^{1-2t'_{1}}\alpha(t'_{1},w',\eta_{1})\sum_{\gamma_{1}\in \Gamma^*_{\eta_{1}}}\sigma(\gamma_{1})$$ $$\int_{A(\gamma_{1})}\sum_{e_{1}\in {\cal E}_{1},u_{1}\in {\cal U}_{1}}\hat{i}_{iso}[a_{1},e_{1},u_{1}](Y_{1})\,da_{1}.$$ One must however be careful: this formula is valid only if $n_{1}>0$. Indeed, if $n_{1}=0$, the left-hand term must disappear from the computations that follow; in other words, it must be taken to equal $1$. 
The sums on the right disappear, in other words they equal $1$. The two constants $C(w')$ and $\alpha(t'_{1},w',\eta_{1})$ also equal $1$. But $2^{1-2t'_{1}}$ equals $2$, and the equality does not hold. Of course, it is the product of the constants over the indices $1$ and $2$ that will intervene. The power of $2$ appearing in this product is thus $2^{2-2t'_{1}-2t'_{2}}$ if $n_{1}\not=0$ and $n_{2}\not=0$, and $2^{1-2t'_{1}-2t'_{2}}$ otherwise. In other words, it is $2^{1+\beta-2t'_{1}-2t'_{2}}$. Let $\gamma_{1}\in \Gamma^*_{\eta_{1}}$, $\gamma_{2}\in \Gamma^*_{\eta_{2}}$, and set $\gamma=\pi_{L_{1},L_{2}}(\gamma_{1},\gamma_{2})$. Let us show that we have the equality $$(7) \qquad d(r',r'',\gamma,L_{2})\sigma(\gamma)\sigma^*(\gamma)^{-1}\sigma(\gamma_{1})^{-1}\sigma(\gamma_{2})^{-1}=C_{3} ,$$ where $$C_{3}=sgn(-1)^{r(R-r)/2}sgn(\eta\varpi^{-val_{F}(\eta)})^{(R-r)/2}sgn_{CD}(w')^{(R-r)/2}sgn_{CD}(w'')^{(R-r)/2+val_{F}(\eta)}$$ $$(\frac{q-3}{4})^{-t'_{2}}sgn(-1)^{val_{F}(\eta_{2})B}sgn(\eta_{2}\varpi^{-val_{F}(\eta_{2})})^Bsgn_{CD}(w'')^B .$$ Recall the definitions: $$d(r',r'',\gamma,L_{2})=sgn(\eta\varpi^{-val_{F}(\eta)})^{(R-r)/2}sgn_{CD}(w')^{(R-r)/2}sgn_{CD}(w'')^{(R-r)/2+val_{F}(\eta)}$$ $$\left(\prod_{j\in \hat{J}}sgn(\gamma_{j-1}\gamma_{j})^{j/2-1}sgn(\gamma_{j-1}-\gamma_{j})\right)\left(\prod_{j=R-r+1,...,R}sgn(\gamma_{j})^{(R-r)/2}\right)$$ $$sgn(-1)^{val_{F}(\eta_{2})B}sgn(\eta_{2}\varpi^{-val_{F}(\eta_{2})})^Bsgn_{CD}(w'')^B;$$ $$\sigma(\gamma)=\left(\prod_{j\in \hat{J}}(q-2+sgn(\gamma_{j-1}\gamma_{j}))sgn(\gamma_{j-1}\gamma_{j}(\gamma_{j-1}-\gamma_{j}))\right)$$ $$\prod_{j=R-r+1,...,R;\, j\,\, odd}sgn(-\gamma_{j});$$ $$\sigma^*(\gamma)=(\frac{q-3}{4})^{t'_{2}}\prod_{l\in \hat{J}}(q-2+sgn(\gamma_{l-1}\gamma_{l})).$$ The formulas for $\sigma(\gamma_{1})$ and $\sigma(\gamma_{2})$ simplify since $t'_{1}=t''_{1}$ and $t'_{2}=t''_{2}$: $$\sigma(\gamma_{1})=\prod_{j=1,...,t'_{1}; \,j\,\,odd}sgn(-\gamma_{1,j}),$$ 
$$\sigma(\gamma_{2})=\prod_{j=1,...,t'_{2};\, j\,\,odd}sgn(-\gamma_{2,j}).$$ The restrictions to odd $j$ appearing in the various products can be lifted by raising the corresponding term (which is a sign) to the power $j$. Consider the contribution to the left-hand side of (7) of a term $\gamma_{j}$ for $j=R-r+1,...,R$. It enters $d(r',r'',\gamma,L_{2})$ through a factor $sgn(\gamma_{j})^{(R-r)/2}$ and $\sigma(\gamma)$ through a factor $sgn(-\gamma_{j})^{j}$. Since $\gamma_{1,j-(R-r)/2}=\gamma_{j}$, it enters $\sigma(\gamma_{1})$ through a factor $sgn(-\gamma_{j})^{j-(R-r)/2}$. The product of these contributions is $sgn(-1)^{(R-r)/2}$. Their product over all $j=R-r+1,...,R$ is $sgn(-1)^{r(R-r)/2}$. Consider now the contributions of the terms $\gamma_{j-1}$ and $\gamma_{j}$ for some $j\in \hat{J}$. The term $sgn(\gamma_{j-1}-\gamma_{j})$ occurs in $d(r',r'',\gamma,L_{2})$ and in $\sigma(\gamma)$; it cancels. The term $q-2+sgn(\gamma_{j-1}\gamma_{j})$ occurs in $\sigma(\gamma)$ and its inverse occurs in $\sigma^*(\gamma)^{-1}$; it cancels. There occurs $sgn(\gamma_{j-1}\gamma_{j})^{j/2-1}$ in $d(r',r'',\gamma,L_{2})$ and $sgn(\gamma_{j-1}\gamma_{j})$ in $\sigma(\gamma)$. Since $\{\gamma_{j-1},\gamma_{j}\}=\{\gamma_{1,j/2},\gamma_{2,j/2}\}$, there also occurs the term $sgn(\gamma_{j-1}\gamma_{j})^{j/2}$ in $\sigma(\gamma_{1})\sigma(\gamma_{2})$. The product of these terms equals $1$. In summary, the contribution of the terms depending on the $\gamma_{j}$ is $sgn(-1)^{r(R-r)/2}$. Besides this term, there remains the product of the constants. Altogether this gives formula (7). In formula (4) there occur terms $\gamma$, $a$, $a^{L_{1}}$ and $a^{L_{2}}$. We saw in the proof of (2) that $\gamma^{L_{1}}=\gamma_{1}$ and $\gamma^{L_{2}}=\gamma_{2}$. One sees that, as $a$ runs over $A(\gamma)$, the pair $(a^{L_{1}},a^{L_{2}})$ runs over $A(\gamma_{1})\times A(\gamma_{2})$. 
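The product of the three contributions of a fixed $\gamma_{j}$ can be checked mechanically. The following snippet is our own illustration, with $s$ standing for $sgn(\gamma_{j})$ and $a$ for $(R-r)/2$: it confirms that $s^{a}\,(-s)^{j}\,(-s)^{j-a}=(-1)^{a}$, regardless of the sign $s$ and of $j$.

```python
# Check of the sign computation above: for a sign s = sgn(gamma_j) and
# exponents j >= a = (R-r)/2, the three contributions
#   s^a          (from d(r', r'', gamma, L_2)),
#   (-s)^j       (from sigma(gamma)),
#   (-s)^(j-a)   (from sigma(gamma_1))
# multiply to (-1)^a, independently of s and j.
for s in (1, -1):
    for a in range(6):
        for j in range(a, a + 6):
            assert s**a * (-s)**j * (-s)**(j - a) == (-1)**a
print("sign identity verified")
```

Multiplying over the $r$ values $j=R-r+1,...,R$ then gives $sgn(-1)^{r(R-r)/2}$, as stated.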
Formula (4) can be rewritten $${\cal J}_{\Gamma^*,L_{1},L_{2}}(Y_{1},Y_{2})=2^{1+2\beta}C(n_{1},\eta_{1},n_{2},\eta_{2})\sum_{\gamma_{1}\in \Gamma^*_{\eta_{1}}}\sum_{\gamma_{2}\in \Gamma^*_{\eta_{2}}}\sigma^*( \gamma)^{-1}C(\gamma,L_{2})$$ $$\int_{A(\gamma_{1})}\sum_{e_{1}\in {\cal E}_{1},u_{1}\in {\cal U}_{1}}\hat{i}_{iso}[a_{1},e_{1},u_{1}](Y_{1})\,da_{1}\int_{A(\gamma_{2})}\sum_{e_{2}\in {\cal E}_{2},u_{2}\in {\cal U}_{2}}\hat{i}_{iso}[a_{2},e_{2},u_{2}](Y_{2})\,da_{2}.$$ Let $\gamma_{1}$, $\gamma_{2}$ and $\gamma$ occur in this formula. Recall the definition $$C(\gamma,L_{2})=d(r',r'',\gamma,L_{2})C(w')C(w'')C(r',r'')\alpha(r',r'',w',w'')\sigma(\gamma).$$ Set $$C_{4}=2^{\beta+2t'_{1}+2t'_{2}}C_{3}C(n_{1},\eta_{1},n_{2},\eta_{2})C(r',r'')\alpha(r',r'',w',w'')\alpha(t'_{1},w',\eta_{1})\alpha(t'_{2},w'',\eta_{2}).$$ By (7), we have $$2^{1+2\beta}C(n_{1},\eta_{1},n_{2},\eta_{2})\sigma^*( \gamma)^{-1}C(\gamma,L_{2})=C_{4}2^{1+\beta-2t'_{1}-2t'_{2}}C(w')C(w'')$$ $$ \alpha(t'_{1},w',\eta_{1})\alpha(t'_{2},w'',\eta_{2})\sigma(\gamma_{1})\sigma(\gamma_{2}).$$ The preceding formula then becomes $${\cal J}_{\Gamma^*,L_{1},L_{2}}(Y_{1},Y_{2})=C_{4}2^{1+\beta-2t'_{1}-2t'_{2}}C(w')C(w'') \alpha(t'_{1},w',\eta_{1})\alpha(t'_{2},w'',\eta_{2})$$ $$\sum_{\gamma_{1}\in \Gamma^*_{\eta_{1}}} \sigma(\gamma_{1})\int_{A(\gamma_{1})}\sum_{e_{1}\in {\cal E}_{1},u_{1}\in {\cal U}_{1}}\hat{i}_{iso}[a_{1},e_{1},u_{1}](Y_{1})\,da_{1}$$ $$\sum_{\gamma_{2}\in \Gamma^*_{\eta_{2}}}\sigma(\gamma_{2})\int_{A(\gamma_{2})}\sum_{e_{2}\in {\cal E}_{2},u_{2}\in {\cal U}_{2}}\hat{i}_{iso}[a_{2},e_{2},u_{2}](Y_{2})\,da_{2}.$$ In other words, using (6) and its analogue for the index $2$: $${\cal J}_{\Gamma^*,L_{1},L_{2}}(Y_{1},Y_{2})=C_{4}J(Y_{1},f_{1,\eta_{1},iso})J(Y_{2},f_{2,\eta_{2},iso}).$$ This expression depends neither on $\Gamma^*$, nor on $L_{1}$ and $L_{2}$. Let us substitute this value into equality (5). 
The sum over $(L_{1},L_{2})$, weighted by $\vert {\cal L}\vert ^{-1}$, disappears. The sum over $\Gamma^*$ turns into multiplication by $\vert \boldsymbol{\Gamma}\vert $. Finally, the sum over $(Y_{1},Y_{2})$ replaces the orbital integrals by their stable versions. Whence $$J^{endo}(\bar{Y}_{1},\bar{Y}_{2},f)=C_{4}\vert \boldsymbol{\Gamma}\vert S(\bar{Y}_{1},f_{1,\eta_{1},iso})S(\bar{Y}_{2},f_{2,\eta_{2},iso}).$$ Consider the case of the index $1$. The quadruple $(t'_{1},t''_{1},N',0)$ satisfies the hypothesis of (i) of theorem 3.20 of \cite{MW}. Hence $f_{1}$ is stable in the sense of that reference. We know moreover that the number of conjugacy classes in the stable conjugacy class of $\bar{Y}_{1}$ equals the number of conjugacy classes in the stable conjugacy class in $\mathfrak{g}_{n_{1},\eta_{1},an}(F)$ corresponding to that of $\bar{Y}_{1}$. Whence $$S(\bar{Y}_{1},f_{1,\eta_{1},iso})=\frac{1}{2}S(\bar{Y}_{1},f_{1,\eta_{1}}).$$ Once again, this equality holds only if $n_{1}\not=0$. If $n_{1}=0$, the factor $\frac{1}{2}$ disappears. The product of these constants over the indices $1$ and $2$ is therefore $2^{-1-\beta}$. Whence the equality $$J^{endo}(\bar{Y}_{1},\bar{Y}_{2},f)=2^{-1-\beta}C_{4}\vert \boldsymbol{\Gamma}\vert S(\bar{Y}_{1},f_{1,\eta_{1}})S(\bar{Y}_{2},f_{2,\eta_{2}}).$$ This being true for every pair $(\bar{Y}_{1},\bar{Y}_{2})$, it proves that $$transfert_{n_{1},\eta_{1},n_{2},\eta_{2}}(f)=2^{-1-\beta}C_{4}\vert \boldsymbol{\Gamma}\vert f_{1,\eta_{1}}\otimes f_{2,\eta_{2}}.$$ To obtain assertion (i) of lemma 2.2, it remains to prove the equality $$2^{-1-\beta}C_{4}\vert \boldsymbol{\Gamma}\vert =C_{\eta_{1},\eta_{2}}.$$ For this it suffices to unwind the definitions of all our constants. We leave this tedious but mystery-free computation to the reader. 
$\square$ \bigskip \section{Proof of proposition 1.2} \bigskip \subsection{Descent of the transfer factor} We return to our two groups $G_{iso}$ and $G_{an}$ of section 1. We say that an element $u\in G_{\sharp}(F)$ is topologically unipotent if its eigenvalues, which belong to some finite extension $F'$ of $F$, are congruent to $1$ modulo the maximal ideal of the ring of integers of $F'$. The map $E:X\mapsto E(X)=(1+X/2)(1-X/2)^{-1}$ is a bijection between the set of topologically nilpotent elements $X\in \mathfrak{g}_{\sharp}(F)$ and the set of topologically unipotent elements of $G_{\sharp}(F)$. Let $\sharp=iso$ or $an$ and let $x\in G_{\sharp}(F)$ be an elliptic, strongly regular element. One can decompose $x$ as a unique product $x=su$, where $s$ and $u$ commute with each other, $s$ is an element all of whose eigenvalues are roots of unity of order prime to $p$, and $u$ is topologically unipotent. Instead of $u$, we rather consider the topologically nilpotent element $X\in \mathfrak{g}_{\sharp}(F)$ such that $u=E(X)$. We therefore write $x=sE(X)$. Among the eigenvalues of $s$, we single out the number $1$, which occurs with an odd multiplicity written $1+2n_{+}$, and the number $-1$, which occurs with an even multiplicity written $2n_{-}$ (possibly zero). We write $V_{+}$ and $V_{-}$ for the associated eigenspaces, $Q_{+}$ and $Q_{-}$ for the restrictions of $Q_{\sharp}$ to these spaces, and we set $\eta_{+}=\eta(Q_{+})$ and $\eta_{-}=\eta(Q_{-})$. We write $G_{+}=SO(Q_{+})$, $G_{-}=SO(Q_{-})$. The Galois group $Gal(\bar{F}/F)$ acts on the set of eigenvalues of $s$ other than $\pm 1$, preserving their multiplicities. Write $I$ for the set of orbits. For $i\in I$, fix an element $s_{i}$ of that orbit and write $d_{i}$ for its multiplicity. Set $E_{i}=F[s_{i}]$; it is an unramified extension of $F$. 
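The map $E$ is a Cayley-type transform. As a loose analogy over ${\mathbb Q}$ rather than over the $p$-adic field $F$ (purely our own illustration; in the text the relevant elements are topologically nilpotent), one can check in exact arithmetic that $E$ carries a skew-symmetric matrix, i.e. an element of the orthogonal Lie algebra for the standard form, into the orthogonal group:

```python
# Sanity check over Q: for skew-symmetric X, the Cayley-type transform
# E(X) = (1 + X/2)(1 - X/2)^{-1} satisfies E(X)^t E(X) = 1, i.e. it maps
# the orthogonal Lie algebra into the orthogonal group.
from fractions import Fraction as Fr

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_inv2(A):                      # exact inverse of a 2x2 matrix
    a, b = A[0]; c, d = A[1]
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

X = [[Fr(0), Fr(3, 5)], [Fr(-3, 5), Fr(0)]]          # skew-symmetric
I = [[Fr(1), Fr(0)], [Fr(0), Fr(1)]]
P = [[I[i][j] + X[i][j] / 2 for j in range(2)] for i in range(2)]   # 1 + X/2
M = [[I[i][j] - X[i][j] / 2 for j in range(2)] for i in range(2)]   # 1 - X/2
E = mat_mul(P, mat_inv2(M))
Et = [[E[j][i] for j in range(2)] for i in range(2)]
assert mat_mul(Et, E) == I            # E(X) is orthogonal
print("Cayley check passed")
```

In the $p$-adic setting of the text, the same algebraic identity underlies the bijection between topologically nilpotent and topologically unipotent elements; convergence issues do not arise since $1-X/2$ is invertible there.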
Because $s$ belongs to an orthogonal group, there exists a subextension $E^{\natural}_{i}$ of $E_{i}$ such that $[E_{i}:E^{\natural}_{i}]=2$ and $norme_{E_{i}/E^{\natural}_{i}}(s_{i})=1$. Write $V_{i}$ for the sum of the eigenspaces associated with the conjugates of $s_{i}$, and $s_{\vert V_{i}}$ for the restriction of $s$ to this space. The algebra $F[s_{\vert V_{i}}]$ identifies with $E_{i}$ via $s_{\vert V_{i}}\mapsto s_{i}$. Since $V_{i}$ is naturally an $F[s_{\vert V_{i}}]$-module, it becomes an $E_{i}$-vector space, of dimension $d_{i}$. One shows that there exists a unique nondegenerate hermitian form $Q_{i}$ on $V_{i}$ (relative to the extension $E_{i}/E_{i}^{\natural}$) such that, for all $v,v'\in V_{i}$, we have the equality $$Q_{\sharp}(v,v')=[E_{i}:F]^{-1}trace_{E_{i}/F}(Q_{i}(v,v')).$$ Write $G_{i}$ for the unitary group of $Q_{i}$. The identity component $G_{s}$ of the centralizer of $s$ in $G_{\sharp}$ is the group $$G_{+}\times G_{-}\times\prod_{i\in I}G_{i}.$$ We have $X\in \mathfrak{g}_{s}(F)$; it is a regular elliptic element. {\bf Remarks.} (1) The ellipticity condition prevents the pair $(n_{-},\eta_{-})$ from being equal to $(1,1)$. (2) Fix an element $\xi\in \mathfrak{o}^{\times}-\mathfrak{o}^{\times2}$. For $i\in I$, write $Q_{\sharp,\vert V_{i}}$ for the restriction of $Q_{\sharp}$ to $V_{i}$. One computes easily $\eta(Q_{\sharp,\vert V_{i}})=\xi^{d_{i}}$. Because we have assumed $\eta(Q_{\sharp})=1$, we deduce the equality $$\eta_{+}\eta_{-}\xi^d=1,$$ where $d=\sum_{i\in I}d_{i}$. \bigskip Writing $X=(X_{+},X_{-},(X_{i})_{i\in I})$, we define the full stable conjugacy classes of $X_{+}$, $X_{-}$ and $X_{i}$ for $i\in I$ in the same way as in 1.2. We define the full stable conjugacy class of $X$ as the product of these classes. Consider now an element $x'$ in the full stable conjugacy class of $x$. 
It decomposes as $x'=s'E(X')$. The data $n_{+}$, $\eta_{+}$, $n_{-}$, $\eta_{-}$, $I$ and $d_{i}$ do not change when $s$ is replaced by $s'$. The forms $Q_{+}$, $Q_{-}$ and $Q_{i}$ for $i\in I$ may change. For instance, let $Q'_{+}$ be the form replacing $Q_{+}$. Either $Q'_{+}$ is isomorphic to $Q_{+}$, or the pair $(Q'_{+},Q_{+})$ equals, up to order, one of our pairs $(Q_{iso},Q_{an})$ of section 2.1. One sees that to the stable conjugacy class of $X$ in $\mathfrak{g}_{s}(F)$ corresponds a stable conjugacy class in $\mathfrak{g}_{s'}(F)$. The element $X'$ belongs to this class; in other words, $X'$ belongs to the full stable conjugacy class of $X$. More precisely, the map $x'\mapsto X'$ descends to a bijection between the set of conjugacy classes contained in the full stable conjugacy class of $x$ and the set of conjugacy classes contained in the full stable conjugacy class of $X$. As we have just said, $Q_{+}$ is one of the forms $Q_{iso}$ or $Q_{an}$ of 2.1. Set $sgn^*(X_{+})=1$ if it is $Q_{iso}$ and $sgn^*(X_{+})=-1$ if it is $Q_{an}$. Define $sgn^*(X_{-})$ and $sgn^*(X_{i})$ for $i\in I$ likewise. Set $$sgn^*(X)=sgn^*(X_{+})sgn^*(X_{-})\prod_{i\in I}sgn^*(X_{i}).$$ Recall that, for our index $\sharp$ fixed above, we have set $sgn_{\sharp}=1$ if $\sharp=iso$ and $sgn_{\sharp}=-1$ if $\sharp=an$. Let us show that (3) we have the equality $$sgn_{\sharp}= (-1)^{d\, val_{F}(\eta_{-})}sgn^*(X ) .$$ Proof. In each of our spaces $V_{+}$, $V_{-}$ and $V_{i}$ for $i\in I$, we fix an almost self-dual lattice (cf. \cite{W5} 1.1), written $L_{+}$, $L_{-}$ and $L_{i}$. We write $L$ for the direct sum of these lattices; it is an almost self-dual lattice of $V_{\sharp}$. We set $l''_{+}=L_{+}^*/L_{+}$ and define $l''_{-}$, $l''_{i}$ and $l''$ likewise. 
Each of these spaces carries a quadratic form whose normalized determinants we write $\eta''_{+}$, $\eta''_{-}$, $\eta''_{i}$ and $\eta''$ (recall that, for instance, $\eta''$ is the product of the usual determinant by $(-1)^{[dim_{{\mathbb F}_{q}}(l'')/2]}$). By definition of $V_{\sharp}$, the dimension of $l''$ over ${\mathbb F}_{q}$ is even and we have $sgn_{\sharp}=sgn(\eta'')$. One checks easily that, for $i\in I$, the dimension of $l''_{i}$ is even. By the definition of \cite{W5} 1.1, we have $sgn^*(X_{i})=sgn(\eta''_{i})$. Suppose $val_{F}(\eta_{+})$ even. The term $val_{F}(\eta_{-})$ is then also even by (2). The dimensions of $l''_{+}$ and $l''_{-}$ are then even and we have $sgn^*(X_{+})=sgn(\eta''_{+})$, $sgn^*(X_{-})=sgn(\eta''_{-})$. The unnormalized determinant of the form on $l''$ is the product of the unnormalized determinants of the forms on $l''_{+}$, $l''_{-}$ and $l''_{i}$ for $i\in I$. Since all the dimensions are even, the same holds for the normalized determinants: we have $\eta''=\eta''_{+}\eta''_{-}\prod_{i\in I}\eta''_{i}$. With the formulas above, we obtain $$sgn_{\sharp}= sgn^*(X), $$ which agrees with (3) since $val_{F}(\eta_{-})$ is even. Suppose now $val_{F}(\eta_{+})$ odd, hence also $val_{F}(\eta_{-})$ odd. Then the dimensions of $l''_{+}$ and $l''_{-}$ are odd. By the definitions of \cite{W5} 1.1, we have $sgn^*(X_{+})=sgn(\eta'_{+})$ and $sgn^*(X_{-})=sgn(\eta'_{-})$, where $\eta'_{+}$ and $\eta'_{-}$ are the normalized determinants of the quadratic forms on $L_{+}/\varpi L_{+}^*$ and $L_{-}/\varpi L_{-}^*$. Because the dimension of $V_{+}$ is odd and that of $V_{-}$ is even, we have $\eta'_{+}\eta''_{+}=\eta_{+}\varpi^{-val_{F}(\eta_{+})}$ and $\eta'_{-}\eta''_{-}=-\eta_{-}\varpi^{-val_{F}(\eta_{-})}$. 
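The bookkeeping for normalized determinants under direct sums reduces to a parity identity: since the normalized determinant multiplies the usual one by $(-1)^{[dim/2]}$, the discrepancy under a direct sum of dimensions $a$ and $b$ is $(-1)^{[(a+b)/2]-[a/2]-[b/2]}$, which is $-1$ exactly when $a$ and $b$ are both odd. A quick check (our own illustration):

```python
# Normalized determinant = usual determinant times (-1)^{[dim/2]}.
# Under a direct sum of dimensions a and b the usual determinants multiply,
# so the normalized ones multiply up to (-1)^{[(a+b)/2] - [a/2] - [b/2]}:
# this factor is +1 unless a and b are both odd, in which case it is -1.
for a in range(12):
    for b in range(12):
        extra = (-1) ** ((a + b) // 2 - a // 2 - b // 2)
        if a % 2 == 1 and b % 2 == 1:
            assert extra == -1
        else:
            assert extra == 1
print("parity check passed")
```

This is the only source of the extra sign when two odd-dimensional pieces are combined; all the even-dimensional pieces multiply without correction.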
Because $l''_{+}$ and $l''_{-}$ have odd dimension, a sign $-$ slips into the product of normalized determinants; that is, $\eta''=-\eta''_{+}\eta''_{-}\prod_{i\in I}\eta''_{i}$. Whence $$\eta''=\eta_{+}\eta_{-}\varpi^{-val_{F}(\eta_{+})-val_{F}(\eta_{-})}\eta'_{+}\eta'_{-}\prod_{i\in I}\eta''_{i}.$$ The sum $val_{F}(\eta_{+})+val_{F}(\eta_{-})$ is zero by (2). With the formulas above, we obtain $$sgn_{\sharp}= sgn(\eta_{+}\eta_{-})sgn^*(X) .$$ But $sgn(\eta_{+}\eta_{-})=(-1)^d$ by (2). Whence again equality (3), since $val_{F}(\eta_{-})$ is odd. $\square$ Let $(n_{1},n_{2})\in D(n)$, to which is associated an endoscopic datum of $G_{iso}$ and $G_{an}$, whose endoscopic group is $G_{n_{1},iso}\times G_{n_{2},iso}$. Let $(x_{1},x_{2})\in G_{n_{1},iso}(F)\times G_{n_{2},iso}(F)$ be an $n$-regular pair of elliptic elements. For $\sharp=iso$ or $an$, let $x\in G_{\sharp}(F)$ be an element of the full stable conjugacy class corresponding to $(x_{1},x_{2})$. We attach to it the data $\eta_{+}$, $n_{+}$, $\eta_{-}$, etc., as above. For $j=1,2$, we write $x_{j}=s_{j}E(\underline{X}_{j})$. We adopt the notation $\underline{X}_{j}$ rather than $X_{j}$ for a reason that will appear later. We define the data $\eta_{j,+}$, $n_{j,+}$, $\eta_{j,-}$, etc., in the same way. One checks that $n_{+}=n_{1,+}+n_{2,+}$, $\eta_{+}=\eta_{1,+}\eta_{2,+}$, $n_{-}=n_{1,-}+n_{2,-}$, $\eta_{-}=\eta_{1,-}\eta_{2,-}$, that $I=I_{1}\cup I_{2}$ and that, for $i\in I$, $d_{i}=d_{1,i}+d_{2,i}$ (with $d_{j,i}=0$ if $i\not\in I_{j})$. The pair $(n_{1,+},n_{2,+})$, resp. the quadruple $(n_{1,-},\eta_{1,-},n_{2,-},\eta_{2,-})$, resp. the pair $(d_{1,i},d_{2,i})$ for $i\in I$, defines an endoscopic datum of $G_{+}$, resp. $G_{-}$, resp. $G_{i}$. 
With the definitions we have given, the pair $(\underline{X}_{1,+},\underline{X}_{2,+})$ (for example) has no reason to belong to the endoscopic group associated with $(n_{1,+},n_{2,+})$, since the latter is $G_{n_{1,+},iso}\times G_{n_{2,+},iso}$, while $(\underline{X}_{1,+},\underline{X}_{2,+})$ belongs to a certain product $\mathfrak{g}_{n_{1,+},\sharp_{1,+}}(F)\times \mathfrak{g}_{n_{2,+},\sharp_{2,+}}(F)$. But only the full stable conjugacy class of $(\underline{X}_{1,+},\underline{X}_{2,+})$ will intervene in what follows, and we may fix an element $(X_{1,+},X_{2,+})$ of this class belonging to $\mathfrak{g}_{n_{1,+},iso}(F)\times \mathfrak{g}_{n_{2,+},iso}(F)$. We define $(X_{1,-},X_{2,-})$ and $(X_{1,i},X_{2,i})$ for $i\in I$ likewise. We set $X_{1}=X_{1,+}\oplus X_{1,-}\oplus\oplus_{i\in I}X_{1,i}$ and define $X_{2}$ likewise. We have a transfer factor $\Delta((x_{1},x_{2}),x)$ associated with the endoscopic datum defined by $(n_{1},n_{2})$. We also have a transfer factor, which we will simply write $\Delta_{+}((X_{1,+},X_{2,+}),X_{+})$, associated with the endoscopic datum defined by $(n_{1,+},n_{2,+})$, and likewise transfer factors $\Delta_{-}((X_{1,-},X_{2,-}),X_{-})$ and $\Delta_{i}((X_{1,i},X_{2,i}),X_{i})$. These last three factors are normalized as in sections 2.1, 2.2 and 2.3. We set $$\Delta((X_{1},X_{2}),X)=\Delta_{+}((X_{1,+},X_{2,+}),X_{+})\Delta_{-}((X_{1,-},X_{2,-}),X_{-})\prod_{i\in I}\Delta_{i}((X_{1,i},X_{2,i}),X_{i})$$ and $$d_{2}=\sum_{i\in I_{2}}d_{2,i}.$$ \ass{Lemma}{We have the equality $$\Delta((x_{1},x_{2}),x)=(-1)^{d_{2}val_{F}(\eta_{-})} \Delta((X_{1},X_{2}),X).$$} Proof. 
Notons $K_{+}$ l'ensemble des orbites de l'action de $Gal(\bar{F}/F)$ dans l'ensemble des valeurs propres de $X_{+}$ diff\'erentes de $0$ (la valeur propre $0$ intervient avec multiplicit\'e $1$), $K_{-}$ l'ensemble des orbites de l'action de $Gal(\bar{F}/F)$ dans l'ensemble des valeurs propres de $X_{-}$ et, pour $i\in I$, $K_{i}$ l'ensemble des orbites de l'action de $Gal(\bar{F}/E_{i})$ dans l'ensemble des valeurs propres de $X_{i}$, vu comme un endomorphisme $E_{i}$-lin\'eaire de $V_{i}$. Pour $k\in K_{+}$, resp. $k\in K_{-}$, $k\in K_{i}$, on fixe $\Xi_{k}\in k$. On pose $E(\Xi_{k})=(1+\Xi_{k}/2)(1-\Xi_{k}/2)^{-1}$. Pour $k\in K_{+}\cup K_{-}$, on pose $F_{k}=F[\Xi_{k}]$. Il existe une sous-extension $F^{\natural}_{k}$ de $F_{k}$ telle que $[F_{k}:F^{\natural}_{k}]=2$ et que $trace_{F_{k}/F^{\natural}_{k}}(\Xi_{k})=0$. Pour $i\in I$ et $k\in K_{i}$, on pose $F_{k}=E_{i}[\Xi_{k}]$. Il existe une extension $F^{\natural}_{k}$ de $E^{\natural}_{i}$ de sorte que $F_{k}$ soit le compos\'e des extensions $E_{i}$ et $F^{\natural}_{k}$ de $E^{\natural}_{i}$ et que $trace_{F_{k}/F^{\natural}_{k}}(\Xi_{k})=0$. On note $K$ l'union disjointe de $K_{+}$, $K_{-}$ et des $K_{i}$ pour $i\in I$. L'ensemble des valeurs propres de $x$ diff\'erentes de $1$ est l'ensemble des conjugu\'es par $Gal(\bar{F}/F)$ des \'el\'ements $\xi_{k}=E(\Xi_{k})$ pour $k\in K_{+}$; $\xi_{k}=-E(\Xi_{k})$ pour $k\in K_{-}$; $\xi_{k}=s_{i}E(\Xi_{k})$ pour $i\in I$ et $k\in K_{i}$. Pour $k\in K$, notons $V_{k}$ la somme des espaces propres de $x$ associ\'es \`a des conjugu\'es de $\xi_{k}$. On peut identifier $V_{k}$ \`a $F_{k}$ de sorte que l'action de $x$ dans $V_{k}$ s'identifie \`a la multiplication par $\xi_{k}$.
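{\bf Remarque.} Explicitons une cons\'equence de la condition $trace_{F_{k}/F^{\natural}_{k}}(\Xi_{k})=0$ qui servira plus loin: l'\'el\'ement $E(\Xi_{k})$ est de norme $1$ dans l'extension $F_{k}/F^{\natural}_{k}$. En effet, en notant $\tau_{k}$ l'\'el\'ement non trivial de $Gal(F_{k}/F^{\natural}_{k})$, on a $\tau_{k}(\Xi_{k})=-\Xi_{k}$, d'o\`u $$\tau_{k}(E(\Xi_{k}))E(\Xi_{k})=\frac{1-\Xi_{k}/2}{1+\Xi_{k}/2}\cdot\frac{1+\Xi_{k}/2}{1-\Xi_{k}/2}=1.$$

\bigskip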
Il existe un \'el\'ement $c_{k}\in F^{\natural}_{k}$ de sorte que la restriction de $Q_{\sharp}$ \`a $V_{k}$ s'identifie \`a la forme quadratique $$(v,v')\mapsto [F_{k}:F]^{-1}trace_{F_{k}/F}(c_{k}\tau_{k}(v')v)$$ sur $F_{k}$, o\`u $\tau_{k}$ est l'\'el\'ement non trivial de $Gal(F_{k}/F^{\natural}_{k})$. Seule compte la classe de $c_{k}$ modulo les normes de l'extension $F_{k}/F_{k}^{\natural}$. On note $sgn_{k}$ le caract\`ere quadratique de $F_{k}^{\natural,\times}$ associ\'e \`a cette extension. On note $P_{k}$ le polyn\^ome caract\'eristique de $\xi_{k}$ sur $F$. On note $P$ le produit de ces polyn\^omes. Pour tout polyn\^ome $R$, on note $R'$ son polyn\^ome d\'eriv\'e. On pose $$C_{k}=(-1)^{n+1}2[F_{k}:F]c_{k}P'(\xi_{k})P(-1)\xi_{k}^{1-n}(1+\xi_{k})(\xi_{k}-1)^{-1}.$$ On d\'efinit $K_{1}$ et $K_{2}$ comme on a d\'efini $K$. L'ensemble $K$ est l'union disjointe de $K_{1}$ et $K_{2}$. L'union est disjointe car $(x_{1},x_{2})$ est $n$-r\'egulier. D'apr\`es \cite{W1} proposition 1.10, on a l'\'egalit\'e $$(4) \qquad \Delta((x_{1},x_{2}),x)=\prod_{k\in K_{2}}sgn_{k}(C_{k}).$$ {\bf Remarque.} Dans la formule de \cite{W1} d\'efinissant $C_{k}$, il n'y a pas le facteur $[F_{k}:F]$. Mais le terme $c_{k}$ y est d\'efini diff\'eremment: c'est notre $c_{k}$ multipli\'e par $[F_{k}:F]^{-1}$. \bigskip Pour $k\in K_{+}\cup K_{-}$, on note $\mathfrak{P}_{k}$ le polyn\^ome caract\'eristique de $\Xi_{k}$ sur $F$. On pose $\mathfrak{C}_{k}= (-1)^{n_{+}}\eta_{+}[F_{k}:F]c_{k}\Xi_{k}\mathfrak{P}_{k}'(\Xi_{k})\prod_{k'\in K_{+},k'\not=k}\mathfrak{P}_{k'}(\Xi_{k})$, si $k\in K_{+}$; $\mathfrak{C}_{k}=(-1)^{n_{-}} \eta_{-}[F_{k}:F]c_{k}\Xi_{k}^{-1}\mathfrak{P}_{k}'(\Xi_{k})\prod_{k'\in K_{-},k'\not=k}\mathfrak{P}_{k'}(\Xi_{k})$, si $k\in K_{-}$. Pour $i\in I$ et $k\in K_{i}$, on note $\mathfrak{P}_{i,k}$ le polyn\^ome caract\'eristique de $\Xi_{k}$ sur $E_{i}$. 
On fixe un \'el\'ement $\eta_{i}\in E_{i}$ qui est une unit\'e et v\'erifie $\tau_{i}(\eta_{i})=(-1)^{d_{i}+1}\eta_{i}$, o\`u $\tau_{i}$ est l'unique \'el\'ement non trivial de $Gal(E_{i}/E_{i}^{\natural})$. On pose $\mathfrak{C}_{k}= \eta_{i}[F_{k}:E_{i}]c_{k} \mathfrak{P}_{i,k}'(\Xi_{k})\prod_{k'\in K_{i},k'\not=k}\mathfrak{P}_{i,k'}(\Xi_{k})$. D'apr\`es \cite{W3} lemme X.7, on a l'\'egalit\'e $$(5) \qquad \Delta((X_{1},X_{2}),X)=\prod_{k\in K_{2}}sgn_{k}(\mathfrak{C}_{k}).$$ On va d\'emontrer les propri\'et\'es suivantes: (6) pour $k\in K_{+}\cup K_{-}$, $sgn_{k}(C_{k})=sgn_{k}(\mathfrak{C}_{k})$; (7) pour $i\in I$ et $k\in K_{i}$, $sgn_{k}(C_{k})=(-1)^{[F_{k}:E_{i}]val_{F}(\eta_{-})} sgn_{k}(\mathfrak{C}_{k})$. En les admettant, on voit que le rapport entre les membres de droite de (4) et (5) est $(-1)^{D\,val_{F}(\eta_{-})}$, o\`u $D=\sum_{i\in I}\sum_{k\in K_{i}\cap K_{2}}[F_{k}:E_{i}]$. La somme int\'erieure en $k$ vaut $d_{2,i}$, donc $D=d_{2}$. Alors les \'egalit\'es (4) et (5) impliquent celle de l'\'enonc\'e. Pour d\'emontrer (6) et (7), on a besoin de quelques ingr\'edients. Soit $Z\in \bar{F}$ et $\left(\begin{array}{cc}a&b\\c&d\\ \end{array}\right)\in GL(2;F)$. Supposons $cZ+d\not=0$ et posons $z=\frac{aZ+b}{cZ+d}$. Notons $P_{z}(T)$ et $P_{Z}(T)$ les polyn\^omes caract\'eristiques de $z$ et $Z$ et notons $m$ leur degr\'e. Alors on a les \'egalit\'es $$(8) \qquad (cT+d)^mP_{z}(\frac{aT+b}{cT+d})=c^mP_{z}(\frac{a}{c})P_{Z}(T);$$ $$(9) \qquad (ad-bc)(cZ+d)^{m-2}P'_{z}(z)=c^mP_{z}(\frac{a}{c})P'_{Z}(Z).$$ Cf. \cite{Li} lemme 7.4.1. D'autre part, pour $k\in K$, on calcule facilement le d\'eterminant non normalis\'e $det_{k}\in F^{\times}/F^{\times2}$ de la forme quadratique d\'efinie plus haut sur $V_{k}$: on a $$(10)\qquad det_{k}=norme_{F_{k}/F}(\Xi_{k}).$$ Fixons une extension galoisienne finie $F'$ de $F$ contenant tous les corps $F_{k}$.
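{\bf Remarque.} L'\'egalit\'e (9) se d\'eduit de (8): en d\'erivant (8) par rapport \`a $T$, on obtient $$mc(cT+d)^{m-1}P_{z}(\frac{aT+b}{cT+d})+(ad-bc)(cT+d)^{m-2}P'_{z}(\frac{aT+b}{cT+d})=c^mP_{z}(\frac{a}{c})P'_{Z}(T),$$ puis on \'evalue en $T=Z$: le premier terme dispara\^{\i}t puisque $\frac{aZ+b}{cZ+d}=z$ est racine de $P_{z}$.

\bigskip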
Notons $\mathfrak{o}'_{1}$ le groupe multiplicatif des unit\'es de $F'$ congrues \`a $1$ modulo l'id\'eal maximal $\mathfrak{p}'$ de l'anneau des entiers de $F'$. Pour $k\in K_{+}\cup K_{-}$, introduisons la relation d'\'equivalence dans $F^{'\times}$: $x\equiv_{k} y$ si et seulement s'il existe $x'\in \mathfrak{o}'_{1}$ et $y'\in F_{k}^{\times}$ tels que $xy^{-1}=x'\,norme_{F_{k}/F_{k}^{\natural}}(y')$. On peut remplacer la relation (6) par (11) pour $k\in K_{+}\cup K_{-}$, $C_{k}\equiv_{k}\mathfrak{C}_{k}$. En effet, si cette relation est v\'erifi\'ee, il existe $x\in \mathfrak{o}'_{1}$ et $y\in F_{k}^{\times}$ tels que $C_{k}=x\,norme_{F_{k}/F_{k}^{\natural}}(y)\mathfrak{C}_{k}$. On sait que les termes $C_{k}$ et $\mathfrak{C}_{k}$ appartiennent \`a $F_{k}^{\natural,\times}$. Donc $x\in \mathfrak{o}'_{1}\cap F_{k}^{\natural,\times}$. Parce que $p$ est grand, le caract\`ere $sgn_{k}$ est mod\'er\'ement ramifi\'e, donc $sgn_{k}(x)=1$. On a aussi $sgn_{k}(norme_{F_{k}/F_{k}^{\natural}}(y))=1$. L'\'egalit\'e $sgn_{k}(C_{k})=sgn_{k}(\mathfrak{C}_{k})$ s'ensuit. Soit $k\in K_{+}$. Pour $k'\in K_{-}$ ou $k'\in K_{i}$ pour $i\in I$, la contribution de $k'$ \`a $C_{k}$ est $P_{k'}(\xi_{k})P_{k'}(-1)$. On a $\xi_{k}\in \mathfrak{o}'_{1}$ tandis que les racines de $P_{k'}$ n'appartiennent pas \`a ce groupe. On en d\'eduit $P_{k'}(\xi_{k})\equiv_{k}P_{k'}(1)\equiv_{k}P_{k'}(1)^{-1}$. Le produit $P_{k'}(1)^{-1}P_{k'}(-1)$ est \'egal \`a $norme_{F_{k'}/F}(1-\xi_{k'})^{-1}(-1-\xi_{k'})$. Parce que $norme_{F_{k'}/F_{k'}^{\natural}}(\xi_{k'})=1$, on voit que $(1-\xi_{k'})^{-1}(-1-\xi_{k'})$ est un \'el\'ement de $F_{k'}$ dont la trace dans $F_{k'}^{\natural}$ est nulle. Il existe donc $x\in F_{k'}^{\natural,\times}$ tel que $(1-\xi_{k'})^{-1}(-1-\xi_{k'})=x\Xi_{k'}$. 
D'apr\`es (10), on a donc $$P_{k'}(1)^{-1}P_{k'}(-1)\equiv_{k}norme_{F_{k'}/F}(x)det_{k'}\equiv_{k}norme_{F_{k'}^{\natural}/F}(x)^2det_{k'}\equiv_{k}det_{k'}.$$ {\bf Remarque.} Le terme $det_{k'}$ n'est d\'efini que modulo $F^{\times2}$. Disons qu'on en choisit un repr\'esentant dans $F^{\times}$. La classe d'\'equivalence de ce repr\'esentant pour la relation $\equiv_{k}$ est bien d\'efinie. \bigskip Soit $k'\in K_{+}$ avec $k'\not=k$. La contribution de $k'$ \`a $C_{k}$ est $P_{k'}(\xi_{k})P_{k'}(-1)$. On utilise (8) avec $\left(\begin{array}{cc}a&b\\c&d\\ \end{array}\right)=\left(\begin{array}{cc}-\frac{1}{2}&1\\-\frac{1}{2}&1\\ \end{array}\right)$, $Z=\Xi_{k'}$ et $T=\Xi_{k}$. L'entier $m=[F_{k'}:F]$ est pair. De plus, $\Xi_{k}\in \mathfrak{p}'$, donc $c\Xi_{k}+d\in \mathfrak{o}'_{1}$. On obtient $P_{k'}(\xi_{k})\equiv_{k}P_{k'}(-1)\mathfrak{P}_{k'}(\Xi_{k})$. D'o\`u $$P_{k'}(\xi_{k})P_{k'}(-1)\equiv_{k}\mathfrak{P}_{k'}(\Xi_{k}).$$ Un calcul analogue s'applique au terme $P'_{k}(\xi_{k})P_{k}(-1)$, en utilisant cette fois la relation (9). On obtient $$P'_{k}(\xi_{k})P_{k}(-1)\equiv_{k}\mathfrak{P}'_{k}(\Xi_{k}).$$ Evidemment $\xi_{k}^{1-n}\equiv_{k}1$ et $1+\xi_{k}\equiv_{k}2$. On calcule $$(\xi_{k}-1)^{-1}=(1-\Xi_{k}/2)\Xi_{k}^{-1}\equiv_{k}\Xi_{k}^{-1}=-\Xi_{k}(-\Xi_{k}^2)^{-1}=-\Xi_{k}norme_{F_{k}/F_{k}^{\natural}}(\Xi_{k})^{-1}\equiv_{k}-\Xi_{k}.$$ En rassemblant ces calculs, on obtient $$C_{k}\equiv_{k} (-1)^{n+n_{+}}\eta_{+}(\prod_{k'}det_{k'})\mathfrak{C}_{k},$$ o\`u le produit porte sur les $k'\in K_{-}$ et $k'\in K_{i}$ pour $i\in I$. Le produit des $det_{k'}$ sur ces $k'$ est le d\'eterminant de la restriction de $Q_{\sharp}$ \`a la somme de $V_{-}$ et des $V_{i}$ pour $i\in I$. Il est \'egal au d\'eterminant de $Q_{\sharp}$ divis\'e par le d\'eterminant de la restriction de $Q_{\sharp}$ \`a $V_{+}$. On a fix\'e le d\'eterminant de $Q_{\sharp}$ en \cite{W5} 1.1: c'est $(-1)^n$. 
Celui de la restriction de $Q_{\sharp}$ \`a $V_{+}$ est $(-1)^{n_{+}}\eta_{+}$. L'\'equivalence ci-dessus entra\^{\i}ne alors (11). Soit $k\in K_{-}$. Pour $k'\in K_{+}$ ou $k'\in K_{i}$ pour $i\in I$, la contribution de $k'$ \`a $ C_{k}$ est $P_{k'}(\xi_{k})P_{k'}(-1)$. On a $\xi_{k}\in -\mathfrak{o}'_{1}$ et les racines de $P_{k'}$ ne sont pas congrues \`a $-1$ modulo $\mathfrak{p}'$. Donc $P_{k'}(\xi_{k})\equiv_{k}P_{k'}(-1)$, puis $$P_{k'}(\xi_{k})P_{k'}(-1)\equiv_{k}P_{k'}(-1)^2\equiv_{k}1.$$ Pour $k'\in K_{-}$ avec $k'\not=k$, la contribution de $k'$ \`a $ C_{k}$ est $P_{k'}(\xi_{k})P_{k'}(-1)$. On utilise (8) avec $\left(\begin{array}{cc}a&b\\c&d\\ \end{array}\right)=\left(\begin{array}{cc}-\frac{1}{2}&-1\\-\frac{1}{2}&1\\ \end{array}\right)$, $Z=\Xi_{k'}$ et $T=\Xi_{k}$. De nouveau, $m$ est pair et $c\Xi_{k}+d\equiv_{k}1$. D'o\`u $P_{k'}(\xi_{k})\equiv_{k}P_{k'}(1)\mathfrak{P}_{k'}(\Xi_{k})$. Comme plus haut, on a $$P_{k'}(1)P_{k'}(-1)\equiv_{k}P_{k'}(1)^{-1}P_{k'}(-1)\equiv_{k}det_{k'}.$$ D'o\`u $$P_{k'}(\xi_{k})P_{k'}(-1)\equiv_{k}det_{k'}\mathfrak{P}_{k'}(\Xi_{k}).$$ Un m\^eme calcul s'applique au terme $P'_{k}(\xi_{k})P_{k}(-1)$, en utilisant cette fois la relation (9). Ici se glisse le d\'eterminant $ad-bc$ qui vaut $-1$. D'o\`u $$P'_{k}(\xi_{k})P_{k}(-1)\equiv_{k}det_{k}\mathfrak{P}'_{k}(\Xi_{k}).$$ Evidemment $\xi_{k}^{1-n}\equiv_{k}(-1)^{n-1}$ et $(\xi_{k}-1)^{-1}\equiv_{k}-2$. On a $$1+\xi_{k}=-\frac{\Xi_{k}}{1-\frac{\Xi_{k}}{2}}\equiv_{k}-\Xi_{k}\equiv_{k} -\Xi_{k}norme_{F_{k}/F_{k}^{\natural}}(\Xi_{k})^{-1}\equiv_{k}\Xi_{k}^{-1}.$$ En rassemblant ces calculs, on obtient $$C_{k}\equiv_{k}(-1)^{n_{-}}\eta_{-}(\prod_{k'\in K_{-}}det_{k'})\mathfrak{C}_{k}.$$ Le produit intervenant ici est le d\'eterminant de la restriction de $Q_{\sharp}$ \`a $V_{-}$, c'est-\`a-dire $(-1)^{n_{-}}\eta_{-}$. D'o\`u encore (11). Soient $i\in I$ et $k\in K_{i}$.
Cette fois, on d\'efinit dans $F^{'\times}$ l'\'equivalence $x\equiv_{k}y$ si et seulement s'il existe $x'\in F_{k}^{\times}$ tel que $ xy^{-1}norme_{F_{k}/F_{k}^{\natural}}(x')$ soit une unit\'e. Soit $k'\in K_{+}$ ou $k'\in K_{i'}$ pour un $i'\in I$ avec $i'\not=i$. Les racines du polyn\^ome $P_{k'}$ ne sont congrues ni \`a $-1$, ni \`a $\xi_{k}$ modulo $\mathfrak{p}'$. On en d\'eduit $P_{k'}(\xi_{k})P_{k'}(-1)\equiv_{k}1$. Soit $k'\in K_{-}$. On a $$P_{k'}(\xi_{k})P_{k'}(-1)=P_{k'}(\xi_{k})P_{k'}(1)P_{k'}(1)^{-1}P_{k'}(-1).$$ On a encore $P_{k'}(\xi_{k})P_{k'}(1)\equiv_{k}1$ et, par un calcul d\'ej\`a fait, $P_{k'}(1)^{-1}P_{k'}(-1)\equiv_{k}det_{k'}$. Soit $k'\in K_{i}$ avec $k'\not=k$. Comme pr\'ec\'edemment, $P_{k'}(-1)\equiv_{k}1$. On a $$P_{k'}(\xi_{k})=\prod_{\sigma\in Gal(F_{k'}/F)}(\xi_{k}-\sigma(\xi_{k'})).$$ Pour $\sigma\in Gal(F_{k'}/F)$, $\xi_{k}-\sigma(\xi_{k'})$ est congru modulo $\mathfrak{p}'$ \`a $s_{i}-\sigma(s_{i})$. Si $\sigma\not\in Gal(F_{k'}/E_{i})$, ce terme est une unit\'e. Si $\sigma\in Gal(F_{k'}/E_{i})$, on a $\xi_{k}-\sigma(\xi_{k'})=s_{i}(E(\Xi_{k})-\sigma(E(\Xi_{k'})))$. Notons $P_{i,k'}$ le polyn\^ome caract\'eristique de $E(\Xi_{k'})$ sur $E_{i}$. On obtient $$P_{k'}(\xi_{k})P_{k'}(-1)\equiv_{k}P_{i,k'}(E(\Xi_{k})).$$ On calcule ce terme gr\^ace \`a (8) o\`u l'on remplace le corps $F$ par $E_{i}$. On prend $\left(\begin{array}{cc}a&b\\c&d\\ \end{array}\right)=\left(\begin{array}{cc}\frac{1}{2}&1\\-\frac{1}{2}&1\\ \end{array}\right)$, $Z=\Xi_{k'}$, $T=\Xi_{k}$. On obtient $$P_{i,k'}(E(\Xi_{k}))\equiv_{k}P_{i,k'}(-1)\mathfrak{P}_{i,k'}(\Xi_{k})\equiv_{k}\mathfrak{P}_{i,k'}(\Xi_{k}),$$ d'o\`u $$P_{k'}(\xi_{k})P_{k'}(-1)\equiv_{k}\mathfrak{P}_{i,k'}(\Xi_{k}).$$ Un m\^eme calcul s'applique \`a $P'_{k}(\xi_{k})P_{k}(-1)$, en utilisant cette fois la relation (9).
D'o\`u $$P'_{k}(\xi_{k})P_{k}(-1)\equiv_{k}\mathfrak{P}'_{i,k}(\Xi_{k}).$$ Evidemment $(-1)^{n+1}2\xi_{k}^{1-n}(1+\xi_{k})(\xi_{k}-1)^{-1}\equiv_{k}1$ et $\eta_{i}\equiv_{k}1$ (tous les termes sont des unit\'es). En rassemblant ces calculs, on obtient $$C_{k}\equiv_{k}(\prod_{k'\in K_{-}}det_{k'})\mathfrak{C}_{k}.$$ Le produit intervenant ici vaut $(-1)^{n_{-}}\eta_{-}$ (comme plus haut, on fixe ici un repr\'esentant de $\eta_{-}$ dans $F^{\times}$). Il existe donc $x\in F_{k}^{\times}$ et une unit\'e $y$ de $F^{'\times}$ tels que $C_{k}=y\,norme_{F_{k}/F_{k}^{\natural}}(x)\eta_{-}\mathfrak{C}_{k}$. N\'ecessairement, $y$ appartient \`a $F_{k}^{\natural,\times}$. L'extension $F_{k}/F_{k}^{\natural}$ est non ramifi\'ee puisque $F_{k}$ est le compos\'e de $E_{i}$ et de $F_{k}^{\natural}$ sur $E_{i}^{\natural}$. Donc $sgn_{k}(y)=1$, puis $sgn_{k}(C_{k})=sgn_{k}(\eta_{-})sgn_{k}(\mathfrak{C}_{k})$. Notons $val_{F_{k}^{\natural}}$ la valuation usuelle de $F_{k}^{\natural}$. On a $sgn_{k}(x)=(-1)^{val_{F_{k}^{\natural}}(x)}$ pour tout $x\in F_{k}^{\natural, \times}$. En notant $e(F_{k}^{\natural}/F)$ l'indice de ramification de l'extension $F_{k}^{\natural}/F$, on a $val_{F_{k}^{\natural}}(\eta_{-})=e(F_{k}^{\natural}/F)val_{F}(\eta_{-})$. Puisque $E_{i}^{\natural}/F$ est non ramifi\'ee, on a $$e(F_{k}^{\natural}/F)=e(F_{k}^{\natural}/E_{i}^{\natural})=[F_{k}^{\natural}:E_{i}^{\natural}]f(F_{k}^{\natural}/E_{i}^{\natural})^{-1}=[F_{k}:E_{i}]f(F_{k}^{\natural}/E_{i}^{\natural})^{-1},$$ o\`u $f(F_{k}^{\natural}/E_{i}^{\natural})$ est le degr\'e de l'extension r\'esiduelle. Mais $E_{i}$ est l'unique extension quadratique non ramifi\'ee de $E_{i}^{\natural}$ et elle n'est pas contenue dans $F_{k}^{\natural}$ puisque $F_{k}$ est compos\'e de $E_{i}$ et de $F_{k}^{\natural}$. Donc $f(F_{k}^{\natural}/E_{i}^{\natural})$ est impair et $e(F_{k}^{\natural}/F)$ est de la m\^eme parit\'e que $[F_{k}:E_{i}]$.
D'o\`u $$sgn_{k}(\eta_{-})=(-1)^{e(F_{k}^{\natural}/F)val_{F}(\eta_{-})}=(-1)^{[F_{k}:E_{i}]val_{F}(\eta_{-})}.$$ D'o\`u l'\'egalit\'e (7), ce qui ach\`eve la d\'emonstration. $\square$ \bigskip \subsection{Calcul d'int\'egrales orbitales} Fixons $(r',r'',N',N'')\in \Gamma$, $w'\in W_{N'}$ et $w''\in W_{N''}$. Comme en 2.1, on note $\varphi_{w'}$ et $\varphi_{w''}$ les fonctions caract\'eristiques des classes de conjugaison de $w'$ et $w''$. On suppose que ces classes sont param\'etr\'ees par des couples de partitions de la forme $(\emptyset,\beta')$ et $(\emptyset,\beta'')$. On pose $f=\Psi\circ k\circ \rho\iota(\varphi_{w'}\otimes \varphi_{w''})$, cf. 1.2. On d\'efinit un couple d'entiers $(r'_{+},r'_{-})$ par les \'egalit\'es $(r'_{+},r'_{-})=(r'+1,r')$, si $r'\equiv r''\,\,mod\,\,2{\mathbb Z}$; $(r'_{+},r'_{-})=(r',r'+1)$, si $r'\not\equiv r''\,\,mod\,\,2{\mathbb Z}$. Soit $\sharp=iso$ ou $an$ et soit $x\in G_{\sharp}(F)$ un \'el\'ement elliptique fortement r\'egulier. On l'\'ecrit $x=sE(X)$ et on associe \`a $s$ les donn\'ees du paragraphe pr\'ec\'edent. En particulier, on a $$G_{s}=G_{+}\times G_{-}\times \prod_{i\in I}G_{i}.$$ On consid\`ere les hypoth\`eses (1)(a) $val_{F}(\eta_{-})\equiv r''\,\,mod\,\,2{\mathbb Z}$; (1)(b) $2n_{+}+1\geq r^{'2}_{+}+r^{''2}$, $2n_{-}\geq r^{'2}_{-}+r^{''2}$. Remarquons que, d'apr\`es 3.1(2), la condition (1)(a) \'equivaut \`a $val_{F}(\eta_{+})\equiv r''\,\,mod\,\,2{\mathbb Z}$. Supposons v\'erifi\'ees ces conditions (1)(a) et (b). On pose $$N_{+}=n_{+}-(r^{'2}_{+}+r^{''2}-1)/2,\,\, N_{-}=n_{-}-(r^{'2}_{-}+r^{''2})/2.$$ Notons $D$ l'ensemble des familles ${\bf d}=(N'_{+},N'_{-},N''_{+},N''_{-},(d'_{i},d''_{i})_{i\in I})$ d'entiers positifs ou nuls v\'erifiant les conditions $$N'_{+}+N''_{+}=N_{+},\,\,N'_{-}+N''_{-}=N_{-}$$; $ d'_{i}+d''_{i}=d_{i}$ pour tout $i\in I$; $N'_{+}+N'_{-}+\sum_{i\in I}d'_{i}f_{i}=N'$, $N''_{+}+N''_{-}+\sum_{i\in I}d''_{i}f_{i}=N''$, \noindent o\`u on a pos\'e $f_{i}=[E_{i}^{\natural}:F]$. 
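Remarquons que l'ensemble $D$ n'est non vide que si $N'+N''=N_{+}+N_{-}+\sum_{i\in I}d_{i}f_{i}$. En effet, en sommant les deux derni\`eres conditions ci-dessus et en utilisant les pr\'ec\'edentes, on obtient $$N'+N''=(N'_{+}+N''_{+})+(N'_{-}+N''_{-})+\sum_{i\in I}(d'_{i}+d''_{i})f_{i}=N_{+}+N_{-}+\sum_{i\in I}d_{i}f_{i}.$$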
Pour une telle famille, posons $$W'({\bf d})=W_{N'_{+}}\times W_{N'_{-}}\times \prod_{i\in I}\mathfrak{S}_{d'_{i}}.$$ Consid\'erons l'ensemble des \'el\'ements ${\bf v}'=(v'_{+},v'_{-},(v'_{i})_{i\in I})\in W'({\bf d})$ v\'erifiant les conditions suivantes: les classes de conjugaison de $v'_{+}$ et $v'_{-}$ sont param\'etr\'ees par des couples de partitions $(\emptyset,\beta'_{+})$ et $(\emptyset,\beta'_{-})$; pour $i\in I$, la classe de conjugaison de $v'_{i}$ est param\'etr\'ee par une partition $\beta'_{i}$ dont tous les termes non nuls sont impairs; on note $f_{i}\beta'_{i}$ la partition dont les termes sont ceux de $\beta'_{i}$ multipli\'es par $f_{i}$; $\beta'=\beta'_{+}\cup \beta'_{-}\cup\bigcup_{i\in I}f_{i}\beta'_{i}$. Cet ensemble est r\'eunion de classes de conjugaison par $W'({\bf d})$ et on fixe un ensemble de repr\'esentants ${\cal V}'({\bf d})$ de ces classes. Remarquons que (2) pour ${\bf v}'=(v'_{+},v'_{-},(v'_{i})_{i\in I})\in W'({\bf d})$, on a l'\'egalit\'e $$sgn_{CD}(w')=sgn_{CD}(v'_{+})sgn_{CD}(v'_{-})(-1)^{\sum_{i\in I}d'_{i}}.$$ En effet, $sgn_{CD}(w')=(-1)^{l(\beta')}$, o\`u $l(\beta')$ est le nombre de termes non nuls de $\beta'$. On a $l(\beta')=l(\beta'_{+})+l(\beta'_{-})+\sum_{i\in I}l(\beta'_{i})$. Pour $i\in I$, $\beta'_{i}$ est une partition de $d'_{i}$ dont tous les termes non nuls sont impairs. Donc $l(\beta'_{i})\equiv d'_{i}\,\,mod\,\,2{\mathbb Z}$. L'assertion (2) en r\'esulte. On pose les m\^emes d\'efinitions en rempla\c{c}ant les exposants $'$ par $''$. On pose ${\cal V}({\bf d})={\cal V}'({\bf d})\times {\cal V}''({\bf d})$. Soient ${\bf d}=(N'_{+},N'_{-},N''_{+},N''_{-},(d'_{i},d''_{i})_{i\in I})\in D$ et ${\bf v}=(v'_{+},v'_{-},(v'_{i})_{i\in I} ;v''_{+},v''_{-},(v''_{i})_{i\in I})\in {\cal V}({\bf d})$. Supposons $n_{+}\geq1$. Appliquons la construction de 2.1 \`a l'entier $n_{+}$, \`a l'\'el\'ement $\eta_{+}$ et au quadruplet $(r'_{+},\vert r''\vert ,N'_{+},N''_{+})$.
Les hypoth\`eses de ce paragraphe sont v\'erifi\'ees d'apr\`es (1)(a). On d\'efinit deux fonctions $f^0_{+}={\cal Q}_{r'_{+},\vert r''\vert }^{Lie}\circ\rho_{N_{+}}^*\circ\iota_{N'_{+},N''_{+}}(\varphi_{v'_{+}}\otimes \varphi_{v''_{+}})$ et $f^{1}_{+} ={\cal Q}_{r'_{+},\vert r''\vert }^{Lie}\circ\rho_{N_{+}}^*\circ\iota_{N''_{+},N'_{+}}(\varphi_{v''_{+}}\otimes \varphi_{v'_{+}})$. Elles vivent sur deux alg\`ebres de Lie dont l'une est l'alg\`ebre de Lie $\mathfrak{g}_{+}$ de la premi\`ere composante $G_{+}$ de $G_{s}$. En particulier les int\'egrales orbitales $J(X_{+},f^0_{+})$ et $J(X_{+},f^{1}_{+})$ sont bien d\'efinies. Supposons $n_{-}\geq1$. On applique cette fois la construction de 2.2 et on d\'efinit deux fonctions $f^0_{-}={\cal Q}_{r'_{-},\vert r''\vert }^{Lie}\circ\rho_{N_{-}}^*\circ\iota_{N'_{-},N''_{-}}(\varphi_{v'_{-}}\otimes \varphi_{v''_{-}})$ et $f^{1}_{-} ={\cal Q}_{r'_{-},\vert r''\vert }^{Lie}\circ\rho_{N_{-}}^*\circ\iota_{N''_{-},N'_{-}}(\varphi_{v''_{-}}\otimes \varphi_{v'_{-}})$. Les int\'egrales orbitales $J(X_{-},f^0_{-})$ et $J(X_{-},f^{1}_{-})$ sont bien d\'efinies. Enfin, pour $i\in I$ on utilise les constructions de 2.3. On d\'efinit les fonctions $f^0_{i}={\cal Q}(d'_{i},d''_{i})^{Lie}\circ \rho_{i}^*\circ \iota_{d'_{i},d''_{i}}(\varphi_{v'_{i}}\otimes \varphi_{v''_{i}})$ et $f^{1}_{i}={\cal Q}(d'_{i},d''_{i})^{Lie}\circ \rho_{i}^*\circ \iota_{d''_{i},d'_{i}}(\varphi_{v''_{i}}\otimes \varphi_{v'_{i}})$. Les int\'egrales orbitales $J(X_{i},f^0_{i})$ et $J(X_{i},f^{1}_{i})$ sont bien d\'efinies. On pose $f^0[{\bf d},{\bf v}]=f^0_{+}\otimes f^0_{-}\otimes \otimes_{i\in I}f^0_{i}$ et $$J(X,f^0[{\bf d},{\bf v}])=J(X_{+},f^0_{+})J(X_{-},f^0_{-})\prod_{i\in I}J(X_{i},f^0_{i}).$$ Dans ces formules, les termes index\'es par $+$ ou $-$ disparaissent si $n_{+}=0$ ou $n_{-}=0$. On d\'efinit de m\^eme $f^{1}[{\bf d},{\bf v}]$ et $J(X,f^{1}[{\bf d},{\bf v}])$.
On pose $b=0$ si $r''>0$ ou si $r''=0$ et $r'$ est pair; $ b=1$ si $r''<0$ ou si $r''=0$ et $r'$ est impair. Posons $$c(r',r'')=(-1)^{n+r''}sgn(-1)^{(r^{'2}-r')/2+(r^{''2}-\vert r''\vert )/2},$$ $$c_{\sharp}(r',r'',w',w'')=\left\lbrace\begin{array}{cc}1,&si\,\,0<r''\leq r'\,\,ou \,\,r''=0\,\,et \,\,r'\,\,est\,\,pair,\\ sgn_{CD}(w''),&si\,\,r'<r'',\\ sgn_{\sharp},&si\,\,-r'\leq r''<0\,\,ou\,\,r''=0\,\,et\,\,r'\,\,est\,\,impair,\\ sgn_{\sharp}sgn_{CD}(w'),&si \,\,r''<-r'.\\ \end{array}\right.$$ \ass{Proposition}{(i) Si les hypoth\`eses (1)(a) et (1)(b) ne sont pas v\'erifi\'ees, $J(x,f)=0$. (ii) Supposons ces hypoth\`eses v\'erifi\'ees. Alors on a l'\'egalit\'e $$J(x,f)=c(r',r'')c_{\sharp}(r',r'',w',w'')\sum_{{\bf d}\in D}\sum_{{\bf v}\in {\cal V}({\bf d}) }J(X,f^{b}[{\bf d},{\bf v}]).$$} C'est la proposition 3.19 de \cite{MW}. \bigskip \subsection{D\'emonstration du (ii) de la proposition 1.2} On conserve les donn\'ees $(r',r'',N',N'')\in \Gamma$, $w'\in W_{N'}$ et $w''\in W_{N''}$. On d\'efinit $f$ comme dans le paragraphe pr\'ec\'edent. Soit $(n_{1},n_{2})\in D(n)$, auquel est associ\'ee une donn\'ee endoscopique de $G_{iso}$ et $G_{an}$ dont le groupe endoscopique est $G_{n_{1},iso}\times G_{n_{2},iso}$. Soit $(x_{1},x_{2})\in G_{n_{1},iso}(F)\times G_{n_{2},iso}(F)$ un couple $n$-r\'egulier form\'e d'\'el\'ements elliptiques. On lui associe les donn\'ees du paragraphe 3.1: $n_{1,+}$, $\eta_{1,+}$ etc... A un \'el\'ement quelconque de la classe totale de conjugaison stable dans $G_{iso}(F)\cup G_{an}(F)$ correspondant \`a $(x_{1},x_{2})$ sont aussi associ\'ees des donn\'ees $n_{+}$, $\eta_{+}$ etc... On va calculer $$J^{endo}(x_{1},x_{2},f)=\sum_{x}\Delta((x_{1},x_{2}),x)J(x,f),$$ o\`u $x$ d\'ecrit cette classe totale de conjugaison stable, \`a conjugaison pr\`es. Les int\'egrales orbitales $J(x,f)$ sont calcul\'ees par la proposition pr\'ec\'edente.
On en d\'eduit imm\'ediatement (1) si les hypoth\`eses (1)(a) et (b) de 3.2 ne sont pas v\'erifi\'ees, $J^{endo}(x_{1},x_{2},f)=0$. Supposons ces hypoth\`eses v\'erifi\'ees. Comme on l'a expliqu\'e en 3.1, l'application $x\mapsto X$ identifie la sommation \`a la somme sur les $X\in \mathfrak{g}_{iso}(F)\cup \mathfrak{g}_{an}(F)$ dans la classe totale de conjugaison stable correspondant \`a celle de $(X_{1},X_{2})$. Le lemme 3.1 et la proposition 3.2 expriment les termes $\Delta((x_{1},x_{2}),x)$ et $J(x,f)$ \`a l'aide de l'\'el\'ement $X$, \`a l'exception de la constante $c_{\sharp}(r',r'',w',w'')$ car l'indice $\sharp$ est celui tel que $x$ appartienne \`a $G_{\sharp}(F)$. Mais la relation 3.1(3) calcule cet indice \`a l'aide de $X$. Remarquons que, dans cette relation, on peut remplacer $val_{F}(\eta_{-})$ par $r''$ d'apr\`es l'hypoth\`ese (1)(a) de 3.2. Rappelons que $b=0$ si $r''>0$ ou si $r''=0$ et $r'$ est pair et $b=1$ si $r''<0$ ou si $r''=0$ et $r'$ est impair. D\'efinissons $$c(r',r'',w',w'')=\left\lbrace\begin{array}{cc}1,&si\,\,0<r''\leq r'\,\,ou \,\,r''=0\,\,et \,\,r'\,\,est\,\,pair,\\ sgn_{CD}(w''),&si\,\,r'<r'',\\ (-1)^{dr''}&si\,\,-r'\leq r''<0\,\,ou\,\,r''=0\,\,et\,\,r'\,\,est\,\,impair,\\ (-1)^{dr''}sgn_{CD}(w'),&si \,\,r''<-r'.\\ \end{array}\right.$$ On a alors l'\'egalit\'e $$c_{\sharp}(r',r'',w',w'')=c(r',r'',w',w'') sgn^{*}(X)^b.$$ On peut aussi remplacer $(-1)^{d_{2}val_{F}(\eta_{-})}$ par $(-1)^{dr''}$ dans l'\'enonc\'e du lemme 3.1. Alors ce lemme et la proposition 3.2 entra\^{\i}nent l'\'egalit\'e $$(2) \qquad J^{endo}(x_{1},x_{2},f)=c(r',r'')c(r',r'',w',w'')(-1)^{d_{2}r''}\sum_{{\bf d}\in D}\sum_{{\bf v}\in {\cal V}({\bf d})}$$ $$\sum_{X} \Delta((X_{1},X_{2}),X)sgn^{*}(X)^bJ(X,f^{b}[{\bf d},{\bf v}]).$$ Fixons ${\bf d}\in D$ et ${\bf v}\in {\cal V}({\bf d})$. On a par d\'efinition une d\'ecomposition $f^{b}[{\bf d},{\bf v}]=f^b_{+}\otimes f^b_{-}\otimes \otimes_{i\in I}f^b_{i}$. 
Posons $$J^{endo,*}(X_{1,+},X_{2,+},f^b_{+})=\sum_{X_{+}}\Delta_{+}((X_{1,+},X_{2,+}),X_{+})sgn^{*}(X_{+})^bJ(X_{+},f^b_{+}),$$ o\`u $X_{+}$ parcourt la classe totale de conjugaison stable correspondant \`a celle de $(X_{1,+},X_{2,+})$. On d\'efinit de m\^eme $J^{endo,*}(X_{1,-},X_{2,-},f^b_{-})$ et $J^{endo,*}(X_{1,i},X_{2,i},f^b_{i})$ pour $i\in I$. La somme en $X$ de l'expression (2) est \'egale \`a $$(3) \qquad J^{endo,*}(X_{1,+},X_{2,+},f^b_{+})J^{endo,*}(X_{1,-},X_{2,-},f^b_{-})\prod_{i\in I}J^{endo,*}(X_{1,i},X_{2,i},f^b_{i}).$$ Supposons d'abord $b=0$. Alors $sgn^*(X_{+})^b$ dispara\^{\i}t de la d\'efinition de $J^{endo,*}(X_{1,+},X_{2,+},f^b_{+})$ et ce terme est l'int\'egrale endoscopique $J^{endo}(X_{1,+},X_{2,+},f^0_{+})$. De m\^eme pour les autres termes de (3). On applique les (ii) des lemmes 2.1, 2.2 et 2.3: le produit de ces int\'egrales endoscopiques est nul sauf si les conditions suivantes sont v\'erifi\'ees: $$(4) \left\lbrace\begin{array}{c}n_{1,+}=\frac{(r'_{+}+\vert r''\vert )^2-1}{4}+N'_{+},\\n_{2,+}=\frac{(r'_{+}-\vert r''\vert )^2-1}{4}+N''_{+},\\n_{1,-}=\frac{(r'_{-}+\vert r''\vert )^2}{4}+N'_{-},\\n_{2,-}=\frac{(r'_{-}-\vert r''\vert )^2}{4}+N''_{-},\\d_{1,i}=d'_{i},\,\,d_{2,i}=d''_{i}\,\,pour\,\,tout\,\,i\in I\\ \end{array}\right.$$ (5) $val_{F}(\eta_{1,-})\equiv \frac{r'_{-}+\vert r''\vert }{2}\,\, mod\,\,2{\mathbb Z}$, $val_{F}(\eta_{2,-})\equiv \frac{ r'_{-}-\vert r''\vert }{2}\,\, mod\,\,2{\mathbb Z}$. Notre hypoth\`ese $b=0$ implique que $\vert r''\vert =r''$. Pour $j=1,2$, on a $n_{j}=n_{j,+}+n_{j,-}+\sum_{i\in I }f_{i}d_{j,i}$. En utilisant l'\'egalit\'e $\{r'_{+},r'_{-}\}=\{r',r'+1\}$ et le fait que ${\bf d}\in D$, la condition (4) ci-dessus implique $$n_{1}=\frac{(r'+r'')^2+(r'+r''+1)^2-1}{4}+N',\,\, n_{2}=\frac{(r'-r'')^2+(r'-r''+1)^2-1}{4}+N''.$$ C'est pr\'ecis\'ement le couple $(n_{1},n_{2})$ d\'efini en 1.2. Cela prouve que si notre couple $(n_{1},n_{2})$ n'est pas celui d\'efini dans ce paragraphe, (3) est nul.
Ceci \'etant vrai pour tous ${\bf d}$, ${\bf v}$, on a $J^{endo}(x_{1},x_{2},f)=0$ d'apr\`es (2). Cela \'etant vrai pour tout $(x_{1},x_{2})$, le transfert de $f$ relatif \`a $(n_{1},n_{2})$ est nul. Supposons maintenant $b=1$. Le couple $(n_{1,+},n_{2,+})$ d\'efinit une donn\'ee endoscopique pour les deux groupes sp\'eciaux orthogonaux impairs dont les alg\`ebres de Lie contiennent nos \'el\'ements $X_{+}$. Le couple $(n_{2,+},n_{1,+})$ d\'efinit aussi une telle donn\'ee. On a donc aussi un facteur de transfert $\Delta_{+}((X_{2,+},X_{1,+}),X_{+})$. Comme on l'a dit en \cite{W5} 2.1, on a l'\'egalit\'e $$\Delta_{+}((X_{2,+},X_{1,+}),X_{+})=sgn^*(X_{+})\Delta_{+}((X_{1,+},X_{2,+}),X_{+}).$$ On voit alors que $J^{endo,*}(X_{1,+},X_{2,+},f^b_{+})=J^{endo}(X_{2,+},X_{1,+},f^1_{+})$. De m\^eme pour les autres termes de (3). Le raisonnement se poursuit comme ci-dessus. On permute les r\^oles des indices $1$ et $2$; on permute aussi $N'$ et $N''$ puisqu'on remplace la fonction $f^{0}[{\bf d},{\bf v}]$ par $f^{1}[{\bf d},{\bf v}]$; enfin, l'hypoth\`ese $b=1$ entra\^{\i}ne que $\vert r''\vert =-r''$. Ces trois modifications conduisent au m\^eme r\'esultat: le transfert de $f$ relatif \`a $(n_{1},n_{2})$ est nul si $(n_{1},n_{2})$ n'est pas le couple d\'efini en 1.2. Cela d\'emontre le (ii) de la proposition 1.2 pour les fonctions $\varphi'=\varphi_{w'}$ et $\varphi''=\varphi_{w''}$. En faisant varier $w'$ et $w''$, cela d\'emontre cette assertion pour toutes fonctions cuspidales $\varphi'$ et $\varphi''$. \bigskip \subsection{D\'emonstration du (i) de la proposition 1.2} On poursuit le calcul pr\'ec\'edent en supposant que $(n_{1},n_{2})$ est le couple d\'efini en 1.2. On suppose v\'erifi\'ees les hypoth\`eses (1)(a) et (1)(b) de 3.2. On fixe ${\bf d}\in D$ et ${\bf v}\in {\cal V}({\bf d})$. On suppose d'abord que $b=0$. 
Comme on l'a expliqu\'e, la somme en $X$ de 3.3(2) est \'egale \`a $$(1) \qquad J^{endo}(X_{1,+},X_{2,+},f_{+}^0)J^{endo}(X_{1,-},X_{2,-},f_{-}^0)\prod_{i\in I}J^{endo}(X_{1,i},X_{2,i},f_{i}^0).$$ Ce produit est nul sauf si les conditions (4) et (5) de 3.3 sont v\'erifi\'ees. L'hypoth\`ese (5) est ind\'ependante de ${\bf d}$ et ${\bf v}$. R\'ecrivons-la (en se rappelant que $r''\geq0$ puisque $b=0$): (2) $val_{F}(\eta_{1,-})\equiv \frac{r'_{-}+ r'' }{2}\,\, mod\,\,2{\mathbb Z}$, $val_{F}(\eta_{2,-})\equiv \frac{ r'_{-}- r'' }{2}\,\, mod\,\,2{\mathbb Z}$. Puisque $\eta_{-}=\eta_{1,-}\eta_{2,-}$, elle implique l'hypoth\`ese (1)(a) de 3.2. Si l'hypoth\`ese (4) de 3.3 est v\'erifi\'ee, on a les in\'egalit\'es $$(3) \qquad n_{1,+}\geq \frac{(r'_{+}+ r'')^2-1}{4},\,\, n_{2,+}\geq \frac{(r'_{+}-r'')^2-1}{4},$$ $$n_{1,-}\geq \frac{(r'_{-}+r'')^2}{4},\,\, n_{2,-}\geq \frac{(r'_{-}-r'')^2}{4}.$$ Puisque $n_{+}=n_{1,+}+n_{2,+}$, $n_{-}=n_{1,-}+n_{2,-}$, ces in\'egalit\'es impliquent l'hypoth\`ese (1)(b) de 3.2. On peut donc oublier les hypoth\`eses (1)(a) et (b) de 3.2 et supposer v\'erifi\'ees les hypoth\`eses (2) et (3) ci-dessus. Alors les relations (4) de 3.3 d\'eterminent un unique \'el\'ement ${\bf d}$ dont on v\'erifie qu'il appartient bien \`a $D$. On suppose d\'esormais que ${\bf d}$ est cet unique \'el\'ement. L'int\'egrale endoscopique $J^{endo}(X_{1,+},X_{2,+},f_{+}^0)$ est calcul\'ee par le lemme 2.1. Adaptons les notations. On note $t'_{1,+},t''_{1,+},t'_{2,+}$ et $t''_{2,+}$ les termes not\'es $t'_{1}$ etc... en 2.1 associ\'es \`a $n_{+}$, $r'_{+}$ et $\vert r''\vert $. On note $C_{+}({\bf v})$ la constante $C$ de 2.2. On pose $f_{1,+}={\cal Q}(t'_{1,+},t''_{1,+})^{Lie}\circ\rho_{N'_{+}}\circ\iota_{N'_{+},0}(\varphi_{v'_{+}})$ et $f_{2,+}={\cal Q}(t'_{2,+},t''_{2,+})^{Lie}\circ\rho_{N''_{+}}\circ\iota_{N''_{+},0}(\varphi_{v''_{+}})$. 
Alors $$J^{endo}(X_{1,+},X_{2,+},f_{+}^0)=C_{+}({\bf v})S(X_{1,+},f_{1,+})S(X_{2,+},f_{2,+}).$$ En adaptant de fa\c{c}on similaire les notations et d\'efinitions, le lemme 2.2 fournit l'\'egalit\'e $$J^{endo}(X_{1,-},X_{2,-},f_{-}^0)=C_{-}({\bf v})S(X_{1,-},f_{1,-})S(X_{2,-},f_{2,-}),$$ tandis que le lemme 2.3 fournit l'\'egalit\'e $$J^{endo}(X_{1,i},X_{2,i},f_{i}^0)=S(X_{1,i},f_{1,i})S(X_{2,i},f_{2,i}).$$ Posons $f_{1}[{\bf v}]=f_{1,+}\otimes f_{1,-}\otimes\otimes_{i\in I}f_{1,i}$ et $$S(X_{1},f_{1}[{\bf v}])=S(X_{1,+},f_{1,+})S(X_{1,-},f_{1,-})\prod_{i\in I}S(X_{1,i},f_{1,i}).$$ D\'efinissons de m\^eme $f_{2}[{\bf v}]$ et $S(X_{2},f_{2}[{\bf v}])$. Alors l'expression (1) ci-dessus vaut $$C_{+}({\bf v})C_{-}({\bf v})S(X_{1},f_{1}[{\bf v}])S(X_{2},f_{2}[{\bf v}]).$$ D\'efinissons une nouvelle constante $C(r',r'',w',w'')$ par les \'egalit\'es si $r''\leq r'$, $$C(r',r'',w',w'')=(-1)^{d_{2}r''}sgn(-1)^{\frac{r'_{+}-r''-1}{2}};$$ si $r'<r''$, $$C(r',r'',w',w'')=(-1)^{d_{2}r''}sgn(-1)^{ r'+r''+1}sgn_{CD}(w'').$$ Montrons que (4) l'expression (1) vaut $$C(r',r'',w',w'')S(X_{1},f_{1}[{\bf v}])S(X_{2},f_{2}[{\bf v}]).$$ Supposons d'abord $r''\leq r'$. Alors $r''\leq r'_{+}$ et $r''\leq r'_{-}$. En utilisant les d\'efinitions de 2.1 et 2.2, on obtient $$C_{+}({\bf v})C_{-}({\bf v})=sgn(-1)^{\frac{r'_{+}+r''-1}{2}+val_{F}(\eta_{+})}sgn(\eta_{2,+}\varpi^{-val_{F}(\eta_{2,+})})^{val_{F}(\eta_{+})}$$ $$sgn(\eta_{2,-}\varpi^{-val_{F}(\eta_{2,-})})^{val_{F}(\eta_{-})}.$$ D'apr\`es 3.1(2) et l'hypoth\`ese (1)(a) de 3.2 (qui est v\'erifi\'ee), $val_{F}(\eta_{+})=-val_{F}(\eta_{-})\equiv r''\,\,mod\,\,2{\mathbb Z}$. D'apr\`es la m\^eme relation 3.1(2) appliqu\'ee aux donn\'ees index\'ees par $2$, on a $val_{F}(\eta_{2,+})+val_{F}(\eta_{2,-})=0$ et $sgn(\eta_{2,+}\eta_{2,-})=(-1)^{d_{2}}$. On en d\'eduit l'\'egalit\'e $C_{+}({\bf v})C_{-}({\bf v})=C(r',r'',w',w'')$. Supposons maintenant $r'\leq r''-2$. Alors $r'_{+}< r''$ et $r'_{-}<r''$. 
One sees that $C_{+}({\bf v})C_{-}({\bf v})$ is equal to $$sgn(-1)^{\frac{r'_{+}+r''+1}{2}+val_{F}(\eta_{+})+val_{F}(\eta_{2,-})}$$ $$sgn(\eta_{2,+}\varpi^{-val_{F}(\eta_{2,+})})^{1+val_{F}(\eta_{+})}sgn(\eta_{2,-}\varpi^{-val_{F}(\eta_{2,-})})^{1+val_{F}(\eta_{-})} sgn_{CD}(v''_{+})sgn_{CD}(v''_{-}).$$ As above, the product of the second and third terms equals $(-1)^{ d_{2}(1+r'')}$. We have $$val_{F}(\eta_{+})+val_{F}(\eta_{2,-})=-val_{F}(\eta_{-})+val_{F}(\eta_{2,-})=-val_{F}(\eta_{1,-})\equiv \frac{r'_{-}+r''}{2}\,\,mod\,\,2{\mathbb Z}$$ by (2) above. Since $r'_{+}+r'_{-}=2r'+1$, one sees that the power of $sgn(-1)$ in the expression above coincides with the one appearing in $C(r',r'',w',w'')$. By the definition of ${\bf d}$ and the fact that ${\bf v}\in {\cal V}({\bf d})$, we have the equality $sgn_{CD}(v''_{+})sgn_{CD}(v''_{-})=sgn_{CD}(w'')sgn(\eta_{2,+}\eta_{2,-})$. This last factor equals $(-1)^{d_{2}}$, as already noted. Putting these computations together, we again obtain the equality $C_{+}({\bf v})C_{-}({\bf v})=C(r',r'',w',w'')$. Suppose now that $r'=r''-1$. Then $r'_{+}=r'<r''$ but $r'_{-}=r'+1=r''$. As in the case $r'\leq r''-2$, we would obtain the desired equality if $C_{-}({\bf v})$ were defined by the formulas of the case $r'_{-}< r''$ rather than by those of our case $r'_{-}\geq r''$. The ratio between the two formulas is $$(5) \qquad sgn(-1)^{val_{F}(\eta_{2,-})}sgn(\eta_{2,-}\varpi^{-val(\eta_{2,-})})^{val_{F}(\eta_{-})}sgn_{CD}(v''_{-}).$$ If this expression equals $1$, we are done. Suppose it equals $-1$. Since $r'_{-}=r''$, hypothesis (2) above implies that $val_{F}(\eta_{2,-})$ is even, which removes the first term of our expression. We also have $r'_{2,-}=r''_{2,-}=\frac{\vert r'_{-}-r''\vert }{2}=0$. The remark of 2.2 implies that, under our hypothesis that (5) equals $-1$, $f_{2,-}$ vanishes, and hence so does $f_{2}[{\bf v}]$.
But then $S(X_{2},f_{2}[{\bf v}])=0$ and the constant no longer matters. This proves (4). The equality 3.3(2) becomes $$(6) \qquad J^{endo}(x_{1},x_{2},f)=c(r',r'')c(r',r'',w',w'')C(r',r'',w',w'')(-1)^{d_{2}r''}$$ $$\sum_{{\bf v}\in {\cal V}({\bf d})}S(X_{1},f_{1}[{\bf v}])S(X_{2},f_{2}[{\bf v}]).$$ As in 1.2, we define integers $r'_{1},r''_{1},r'_{2},r''_{2}$ and the functions $f_{1}=\Psi\circ k\circ\rho\iota(\varphi_{w'})$ and $f_{2}=\Psi\circ k\circ\rho\iota(\varphi_{w''})$. The stable orbital integrals $S(x_{1},f_{1})$ and $S(x_{2},f_{2})$ are special cases of endoscopic integrals $J^{endo}(x_{1},x_{2},f)$. They therefore vanish if the analogues of hypotheses (2) and (3) are not satisfied. Otherwise, they are given by formulas similar to (6). In all these formulas, the terms indexed by $2$ disappear, as do those involving $N''$ and $w''$. On the other hand, the analogue of the integer $b\in \{0,1\}$ is always $0$. Indeed, by the definitions, we have $r''_{1},r''_{2}\geq0$ and if, for example, $r''_{1}=0$, then we also have $r'_{1}=0$, so $r'_{1}$ is even. One sees that the analogue of (2) for $S(x_{1},f_{1})$ is $val_{F}(\eta_{1,-})\equiv\frac{r'_{1,-}+r''_{1}}{2}\,\,mod\,\,2{\mathbb Z}$, while the analogue for $S(x_{2},f_{2})$ is $val_{F}(\eta_{2,-})\equiv\frac{r'_{2,-}+r''_{2}}{2}\,\,mod\,\,2{\mathbb Z}$. By relation (3) of paragraph 4 below, $r'_{1,-}+r''_{1}=r'_{-}+r''$, $r'_{2,-}+r''_{2}=\vert r'_{-}-r''\vert $. It follows that the conjunction of the two analogues of (2) is equivalent to the relation (2) itself.
The analogue of (3) above for $S(x_{1},f_{1})$ is $$n_{1,+}\geq \frac{(r'_{1,+}+r''_{1})^2-1}{4},\,\, n_{1,-}\geq\frac{(r'_{1,-}+r''_{1})^2}{4},$$ while the analogue for $S(x_{2},f_{2})$ is $$n_{2,+}\geq \frac{(r'_{2,+}+r''_{2})^2-1}{4},\,\, n_{2,-}\geq\frac{(r'_{2,-}+r''_{2})^2}{4}.$$ Again, computation shows that the conjunction of these conditions is equivalent to the relation (3) itself. This proves that, if (2) and (3) are not satisfied, then $S(x_{1},f_{1})S(x_{2},f_{2})=0$. Since we have already seen that $J^{endo}(x_{1},x_{2},f)=0$, we obtain $J^{endo}(x_{1},x_{2},f)=S(x_{1},f_{1})S(x_{2},f_{2})$ in this case. We now assume that our hypotheses (2) and (3) hold. Just as we determined a unique ${\bf d}\in D$, we determine a unique ${\bf d}_{1}$ relative to $S(x_{1},f_{1})$ and a unique ${\bf d}_{2}$ relative to $S(x_{2},f_{2})$. Write ${\bf d}=(N'_{+},N'_{-},N''_{+},N''_{-},(d'_{i},d''_{i})_{i\in I})$. All these integers are determined by the relation 3.3(4). Likewise, ${\bf d}_{1}$ and ${\bf d}_{2}$ are determined by the analogues of this relation. By the same computations as above, one sees that ${\bf d}_{1}=(N'_{+},N'_{-},0,0, (d'_{i},0)_{i\in I})$ while ${\bf d}_{2}=(N''_{+},N''_{-},0,0, (d''_{i},0)_{i\in I})$. We deduce that ${\cal V}'({\bf d})\simeq {\cal V}({\bf d}_{1})$ and ${\cal V}''({\bf d})\simeq {\cal V}({\bf d}_{2})$, whence ${\cal V}({\bf d})\simeq {\cal V}({\bf d}_{1})\times {\cal V}({\bf d}_{2})$. We identify these two sets. Using $r',r''$, ${\bf d}$ and an element ${\bf v}\in {\cal V}({\bf d})$, we defined a function $f_{1}[{\bf v}]$. For $j=1,2$, using $r'_{j},r''_{j}$, ${\bf d}_{j}$ and an element ${\bf v}_{j}\in {\cal V}({\bf d}_{j})$, we define a function $f_{j}[{\bf v}_{j}]$ in the same way.
The analogues of the equality (6) are $$(7)_{1}\qquad S(x_{1},f_{1})=c(r'_{1},r''_{1})c(r'_{1},r''_{1},w')C(r'_{1},r''_{1},w') \sum_{{\bf v}_{1}\in {\cal V}({\bf d}_{1})}S(X_{1},f_{1}[{\bf v}_{1}]) ;$$ $$(7)_{2}\qquad S(x_{2},f_{2})=c(r'_{2},r''_{2})c(r'_{2},r''_{2},w'')C(r'_{2},r''_{2},w'') \sum_{{\bf v}_{2}\in {\cal V}({\bf d}_{2})}S(X_{2},f_{2}[{\bf v}_{2}]),$$ where the notation for the constants has been adapted in the obvious way. Let ${\bf v}=({\bf v}_{1},{\bf v}_{2})\in {\cal V}({\bf d})$. Let us show that (8) $f_{1}[{\bf v}]=f_{1}[{\bf v}_{1}]$, $f_{2}[{\bf v}]=f_{2}[{\bf v}_{2}]$. We treat the case of the index $1$. By the definition given above, $f_{1}[{\bf v}]=f_{1,+}\otimes f_{1,-}\otimes \bigotimes_{i\in I}f_{1,i}$, with, for example, $f_{1,+}={\cal Q}(t'_{1,+},t''_{1,+})^{Lie}\circ\rho_{N'_{+}}\circ\iota_{N'_{+},0}(\varphi_{v'_{+}})$. When we replace $r',r''$, ${\bf d}$ and ${\bf v}$ by $r'_{1},r''_{1}$, ${\bf d}_{1}$ and ${\bf v}_{1}$, the terms $N'_{+}$ and $v'_{+}$ do not change. The integers $t'_{1,+}$ and $t''_{1,+}$ are defined by $\{t'_{1,+},t''_{1,+}\}=\{\frac{r'_{+}+r''+1}{2},\frac{r'_{+}+r''-1}{2}\}$ and $t'_{1,+}\equiv 1+val_{F}(\eta_{1,+})\,\,mod\,\,2{\mathbb Z}$. Relation (3) of paragraph 4 below shows that the set $ \{\frac{r'_{+}+r''+1}{2},\frac{r'_{+}+r''-1}{2}\}$ does not change when we replace $r',r''$ by $r'_{1},r''_{1}$. Nor does the required congruence. Hence $t'_{1,+}$ and $t''_{1,+}$ do not change, and neither does $f_{1,+}$. An analogous computation applies to the components $f_{1,-}$ and $f_{1,i}$ for $i\in I$. This proves (8).
From (6), $(7)_{1}$ and $(7)_{2}$ one deduces the equality $$J^{endo}(x_{1},x_{2},f)=CS(x_{1},f_{1})S(x_{2},f_{2}),$$ where $$C=c(r',r'')c(r',r'',w',w'')C(r',r'',w',w'')(-1)^{d_{2}r''}c(r'_{1},r''_{1})c(r'_{1},r''_{1},w')C(r'_{1},r''_{1},w') $$ $$c(r'_{2},r''_{2})c(r'_{2},r''_{2},w'')C(r'_{2},r''_{2},w'').$$ Note that there is no need to invert the coefficients of $(7)_{1}$ and $(7)_{2}$: they are all signs. We have the equality (9) $C=1$. Going back to the definitions of the various constants, we obtain the equality $$c(r',r'')c(r',r'',w',w'')C(r',r'',w',w'')(-1)^{d_{2}r''}=(-1)^{n}U,$$ where $U$ is defined in paragraph 4 below. The other terms are the analogues relative to the data indexed by $1$ and $2$. Obviously $n=n_{1}+n_{2}$, and the equality $C=1$ follows from the equality $U=U_{1}U_{2}$, cf. 4(4) below. Using (9), we obtain $J^{endo}(x_{1},x_{2},f)=S(x_{1},f_{1})S(x_{2},f_{2})$. This equality therefore holds with or without hypotheses (2) and (3), and it holds for all $x_{1},x_{2}$. Hence $f_{1}\otimes f_{2}$ is the transfer of $f$. This proves part (i) of Proposition 1.2 (under the hypothesis $b=0$) for the functions $\varphi'=\varphi_{w'}$ and $\varphi''=\varphi_{w''}$. As in the preceding paragraph, this implies the same assertion for all cuspidal functions $\varphi'$ and $\varphi''$. We have assumed $b=0$. Suppose now that $b=1$. As stated in 3.3, the sum over $X$ in 3.3(2) is then equal to $$(1) \qquad J^{endo}(X_{2,+},X_{1,+},f_{+}^1)J^{endo}(X_{2,-},X_{1,-},f_{-}^1)\prod_{i\in I}J^{endo}(X_{2,i},X_{1,i},f_{i}^1).$$ The computation proceeds as above, permuting the roles of the indices $1$ and $2$, permuting $N'$ and $N''$, and taking into account the equality $\vert r''\vert =-r''$. The result is the same; we leave the details to the reader. This completes the proof of Proposition 1.2.
$\square$ \bigskip \section{Appendix} We gather here some elementary computations used in the article. Let $r'\in {\mathbb N}$ and $r''\in {\mathbb Z}$. Set $r'_{1}=sup([\frac{r'+r''}{2}],-[\frac{r'+r''}{2}]-1)$, $r''_{1}=\vert [\frac{r'+r''+1}{2}]\vert $, $r'_{2}=sup([\frac{r'-r''}{2}],-[\frac{r'-r''}{2}]-1)$, $r''_{2}=\vert [\frac{r'-r''+1}{2}]\vert $. Note that if $r''$ is changed into $-r''$, the pairs $(r'_{1},r''_{1})$ and $(r'_{2},r''_{2})$ are exchanged. (1) We have $r''_{1}+r''_{2}\equiv r''\,\,mod\,\,2{\mathbb Z}$. Proof. Since $m\equiv -m\,\,mod\,\,2{\mathbb Z}$ for all $m\in {\mathbb Z}$, we may replace $r''_{1}$ by $[\frac{r'+r''+1}{2}] $ and $r''_{2}$ by $[\frac{r'-r''+1}{2}] $. We have $[\frac{r'-r''+1}{2}]=[\frac{r'+r''+1}{2}-r'']=[\frac{r'+r''+1}{2}]-r''$, whence (1). $\square$ To ease the computations that follow, we set $a=0$ if $r'+r''$ is even and $a=1$ if $r'+r''$ is odd, and $A=0$ if $\vert r''\vert \leq r'$ and $A=1$ if $r'<\vert r''\vert $. (2) We have the equalities $$r^{'2}_{1}+r'_{1}+r^{''2}_{1}=\frac{(r'+r'')^2+(r'+r''+1)^2-1}{4},$$ $$r^{'2}_{2}+r'_{2}+r^{''2}_{2}=\frac{(r'-r'')^2+(r'-r''+1)^2-1}{4}.$$ Proof. For $m\in {\mathbb Z}$, $m^2+m$ is invariant under the transformation $m\mapsto -m-1$ and $m^2$ is invariant under $m\mapsto -m$. We may therefore replace $r'_{1}$ by $x'=[\frac{r'+r''}{2}]$ and $r''_{1}$ by $x''=[\frac{r'+r''+1}{2}]$. We have $x'=\frac{r'+r''-a}{2}$, $x''=\frac{r'+r''+a}{2}$. An algebraic computation shows that $$x^{'2}+x'+x^{''2}=\frac{(r'+r'')^2+(r'+r''+1)^2-1}{4}+ \frac{a^2-a}{2}.$$ Whence the first equality, since $a^2=a$. The second equality follows by changing $r''$ into $-r''$. $\square$ We define $r'_{+}$ and $r'_{-}$ as follows: if $r'\equiv r''\,\,mod\,\,2{\mathbb Z}$, $r'_{+}=r'+1$, $r'_{-}=r'$; if $r'\not\equiv r''\,\,mod\,\,2{\mathbb Z}$, $r'_{+}=r'$, $r'_{-}=r'+1$. In other words, $r'_{+}=r'+1-a$ and $r'_{-}=r'+a$.
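As a concrete sanity check of (1) and (2) (an illustrative example, not part of the original text), take $r'=2$, $r''=3$. Then $r'_{1}=2$, $r''_{1}=3$, $r'_{2}=0$, $r''_{2}=0$, so that $r''_{1}+r''_{2}=3\equiv r''\,\,mod\,\,2{\mathbb Z}$, in accordance with (1), and $$r^{'2}_{1}+r'_{1}+r^{''2}_{1}=4+2+9=15=\frac{5^2+6^2-1}{4},\qquad r^{'2}_{2}+r'_{2}+r^{''2}_{2}=0=\frac{(-1)^2+0^2-1}{4},$$ in accordance with (2). Here $a=1$ and, since $r'\not\equiv r''\,\,mod\,\,2{\mathbb Z}$, one has $r'_{+}=2$ and $r'_{-}=3$.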
Replacing the pair $(r',r'')$ by $(r'_{1},r''_{1})$ or $(r'_{2},r''_{2})$, we define similarly $r'_{1,+}$ and $r'_{1,-}$, or $r'_{2,+}$ and $r'_{2,-}$. (3) We have the equalities $r'_{1,+}+r''_{1}=\vert r'_{+}+r''\vert $, $r'_{1,-}+r''_{1}=\vert r'_{-}+r''\vert $, $r'_{2,+}+r''_{2}=\vert r'_{+}-r''\vert $, $r'_{2,-}+r''_{2}=\vert r'_{-}-r''\vert $. Proof. Just as we defined $a$ and $A$ for $(r',r'')$, we define $a_{1}$ and $A_{1}$ for $(r'_{1},r''_{1})$ and $a_{2}$ and $A_{2}$ for $(r'_{2},r''_{2})$. Suppose $r''\geq0$. Then $r'_{1}=[\frac{r'+r''}{2}]=\frac{r'+r''-a}{2}$ and $r''_{1}=[\frac{r'+r''+1}{2}]=\frac{r'+r''+a}{2}$. If $a=0$, then $r'_{1}=r''_{1}$, so $a_{1}=A_{1}=0$. If $a=1$, then $r'_{1}=r''_{1}-1$, so $a_{1}=A_{1}=1$. Hence $a_{1}=A_{1}=a$ in all cases. Therefore $r'_{1,+}=r'_{1}+1-a_{1}=\frac{r'+r''-a}{2}+1-a$ and $r'_{1,-}=r'_{1}+a_{1}=\frac{r'+r''-a}{2}+a$. We then compute $$r'_{1,+}+r''_{1}= \frac{r'+r''-a}{2}+1-a+\frac{r'+r''+a}{2}=r'+r''+1-a=r'_{+}+r''=\vert r'_{+}+r''\vert ,$$ $$r'_{1,-}+r''_{1}= \frac{r'+r''-a}{2}+a+\frac{r'+r''+a}{2}=r'+r''+a=r'_{-}+r''=\vert r'_{-}+r''\vert.$$ These are the first two equalities of (3). Suppose $A=0$. Then $r'_{2}=[\frac{r'-r''}{2}]=\frac{r'-r''-a}{2}$ and $r''_{2}=[\frac{r'-r''+1}{2}]=\frac{r'-r''+a}{2}$. The computation is the same as before; we have simply changed $r''$ into $-r''$ in the formulas (in particular, $a_{2}=A_{2}=a$ if $A=0$). Suppose $A=1$. Then $r'_{2}=-1-[\frac{r'-r''}{2}]=-1+\frac{r''-r'+a}{2}$ and $r''_{2}=-[\frac{r'-r''+1}{2}]=\frac{r''-r'-a}{2}$. One checks that this time $a_{2}=A_{2}=1-a$. Then $r'_{2,+}=r'_{2}+1-a_{2}=r'_{2}+a=\frac{r''-r'+a}{2}+a-1$ and $r'_{2,-}=r'_{2}+a_{2}=r'_{2}+1-a=\frac{r''-r'+a}{2}-a$.
We then compute $$r'_{2,+}+r''_{2}= \frac{r''-r'+a}{2}+a-1+\frac{r''-r'-a}{2}=r''-r'+a-1=r''-r'_{+}=\vert r'_{+}-r''\vert ,$$ $$r'_{2,-}+r''_{2}= \frac{r''-r'+a}{2}-a+\frac{r''-r'-a}{2}=r''-r'-a=r''-r'_{-}=\vert r'_{-}-r''\vert.$$ These are the last two equalities of (3). We have assumed $r''\geq0$. If $r''\leq 0$, we noted that our pairs $(r'_{1},r''_{1})$ and $(r'_{2},r''_{2})$ are the same as those obtained from the pair $(r',-r'')$, except that the indices $1$ and $2$ must be exchanged. The equalities (3) for $(r',r'')$ are deduced by this exchange from the same equalities just proved for the pair $(r',-r'')$. $\square$ Define a number $u$ as follows: if $\vert r''\vert \leq r'$, $u= \frac{r^{'2}-r'}{2}+\frac{r^{''2}-\vert r''\vert }{2}+\frac{r'_{+}-\vert r''\vert -1}{2}$; if $r'< \vert r''\vert $, $u= \frac{r^{'2}-r'}{2}+\frac{r^{''2}-\vert r''\vert }{2}+r'+r''+1$. Set $U=(-1)^{r''}sgn(-1)^{u}$. We define $u_{1}$, $U_{1}$ and $u_{2}$, $U_{2}$ in the same way. (4) We have the equality $U=U_{1}U_{2}$. Proof. As in the preceding proof, we may assume $r''\geq0$. We have $(-1)^{r''}=(-1)^{r''_{1}+r''_{2}}$ by (1), and it suffices to show that $u\equiv u_{1}+u_{2}\,\,mod\,\,2{\mathbb Z}$. Consider the expression defining $u$ in the case $r''\leq r'$. Replace $r'_{+}$ there by $r'+1-a$. We may add $r'+r''+a$, which is even, and we compute $u\equiv \frac{r^{'2}+r^{''2}+r'}{2}+\frac{r'+a}{2}\,\,mod \,\, 2{\mathbb Z}$. The difference between the expression for $u$ in the case $r'< r''$ and this expression in the case $r''\leq r'$ is $$r'+r''+1-\frac{r'_{+}- r'' -1}{2}\equiv r'-r''+1-\frac{r'-a-r''}{2}\equiv \frac{r'+a-r''}{2}+1\,\,mod\,\,2{\mathbb Z}.$$ We thus obtain the congruence $$(5) \qquad u\equiv \frac{r^{'2}+r^{''2}+r'}{2}+\frac{r'+a}{2}+A(\frac{r'+a-r''}{2}+1)\,\,mod\,\,2{\mathbb Z}$$ in all cases. Half the sum of the right-hand sides of (2) equals $\frac{r^{'2}+r^{''2}+r'}{2}$.
Using (2), we obtain $$u\equiv \frac{r^{'2}_{1}+r'_{1}+r^{''2}_{1}+r^{'2}_{2}+r'_{2}+r^{''2}_{2}}{2}+\frac{r'+a}{2}+A(\frac{r'+a-r''}{2}+1)\,\,mod\,\,2{\mathbb Z},$$ then, using the analogues of (5) for $u_{1}$ and $u_{2}$, $$u\equiv u_{1}+u_{2}+\frac{r'+a-r'_{1}-a_{1}-r'_{2}-a_{2}}{2}+A(\frac{r'+a-r''}{2}+1)$$ $$-A_{1}(\frac{r'_{1}+a_{1}-r''_{1}}{2}+1)-A_{2}(\frac{r'_{2}+a_{2}-r''_{2}}{2}+1)\,\,mod\,\,2{\mathbb Z}.$$ This expression simplifies: since $r'_{1}=r''_{1}$ or $r''_{1}-1$, we always have $r'_{1,-}=r''_{1}$. Likewise, $r'_{2,-}=r''_{2}$. We also noted that $a_{1}=A_{1}=a$. Hence $$u\equiv u_{1}+u_{2}+\frac{r'-r'_{1}-r'_{2}-a_{2}}{2}+A(\frac{r'+a-r''}{2}+1)- a-A_{2} \,\,mod\,\,2{\mathbb Z}.$$ If $A=0$, we also have $a_{2}=A_{2}=a$, $r'_{1}=\frac{r'+r''-a}{2}$, $r'_{2}=\frac{r'-r''-a}{2}$. If $A=1$, we have $a_{2}=A_{2}=1-a$, $r'_{1}=\frac{r'+r''-a}{2}$, $r'_{2}=-1-\frac{r'-r''-a}{2}$. Computing the expression above in both cases, we conclude that $u\equiv u_{1}+u_{2}\,\,mod\,\,2{\mathbb Z}$. This proves (4). $\square$ \bigskip {\bf Index of notation} $C_{cusp}^{\infty}(G_{\sharp}(F))$ 1.2; ${\mathbb C}[\hat{\mathfrak{S}}_{m}]_{U-cusp}$ 2.3; ${\cal E}$ 2.4; $f_{\pi}$ 1.1; $\varphi_{w}$ 2.1; $\hat{i}_{\sharp}[a,e,u]$ 2.4; $\Psi$ 1.1; ${\cal Q}(r',r'')^{Lie}$ 2.1, 2.2, 2.3; $sgn$ 2.1; $\Theta_{\pi}$ 1.1; ${\cal U}$ 2.4; $X^{\zeta}(a,e,u)$ 2.4.
\bigskip {\bf Index of notation from \cite{W5}} ${\mathbb C}[X]$ 1.4; $C'_{n'}$ 1.5; $C^{''\pm}_{n'',\sharp}$ 1.5; $C''_{n''}$ 1.5; $C^{GL(m)}$ 1.5; ${\mathbb C}[\hat{W}_{N}]_{cusp}$ 1.8; $D(n)$ 1.2; $D_{iso}(n)$ 1.2; $D_{an}(n)$ 1.2; $D$ 1.7; $D^{par}$ 1.7; $\eta(Q)$ 1.1; $\eta^+(Q)$ 1.1; $\eta^-(Q)$ 1.1; $Ell_{unip}$ 1.4; $\mathfrak{Ell}_{unip}$ 1.4; $\mathfrak{Endo}_{tunip}$ 2.1; $\mathfrak{Endo}_{unip-quad}$ 2.2; $\mathfrak{Endo}_{unip-quad}^{red}$ 2.2; $\mathfrak{Endo}_{unip,disc}$ 2.4; ${\cal F}^L$ 1.9; ${\cal F}^{par}$ 1.9; ${\cal F}$ 2.3; $\mathfrak{F}^{par}$ 2.3; $G_{iso}$ 1.1; $G_{an}$ 1.1; $\Gamma$ 1.8; $\boldsymbol{\Gamma}$ 1.8; $\tilde{GL}(2n)$ 2.1; $Irr_{tunip}$ 1.3; $\mathfrak{Irr}_{tunip}$ 1.3; $Irr_{unip-quad}$ 1.3; $\mathfrak{Irr}_{unip-quad}$ 1.3; $Jord(\lambda)$ 1.3; $Jord_{bp}(\lambda)$ 1.3; $Jord_{bp}^{k}(\lambda)$ 1.4; $K_{n',n''}^{\pm}$ 1.2; $k$ 1.9; $L^*$ 1.1; $L_{n',n''}$ 1.2; $l(\lambda)$ 1.3; $mult_{\lambda}$ 1.3; $\mathfrak{o}$ 1.1; $O^+(Q)$ 1.1; $O^-(Q)$ 1.1; $\varpi$ 1.1; $\pi_{n',n''}$ 1.3; ${\cal P}(N)$ 1.3; ${\cal P}^{symp}(2N)$ 1.3; $\boldsymbol{{\cal P}^{symp}}(2N)$ 1.3; $\pi(\lambda,s,\epsilon)$ 1.3; $\pi(\lambda^+,\epsilon^+,\lambda^-,\epsilon^-)$ 1.3; $\pi_{ell}(\lambda^+,\epsilon^+,\lambda^-,\epsilon^-)$ 1.4; $proj_{cusp}$ 1.5; ${\cal P}(\leq n)$ 1.5; ${\cal P}_{k}(N)$ 1.8; $\Pi(\lambda,s,h)$ 2.1; $\Pi^{st}(\lambda^+,\lambda^-)$ 2.4; ${\cal P}^{symp,disc}(2n)$ 2.4; $Q_{iso}$ 1.1; $Q_{an}$ 1.1; $\rho_{\lambda}$ 1.3; ${\cal R}^{par}$ 1.5; ${\cal R}^{par,glob}$ 1.5; ${\cal R}^{par}_{cusp}$ 1.5; ${\cal R}^{par,glob}_{{\bf m}}$ 1.5; ${\cal R}^{par}_{{\bf m},cusp}$ 1.5; $res'_{m}$ 1.5; $res''_{m}$ 1.5; $res_{m}$ 1.5 and 1.8; $res_{{\bf m}}$ 1.5; ${\cal R}$ 1.8; ${\cal R}(\gamma)$ 1.8; ${\cal R}(\boldsymbol{\gamma})$ 1.8; ${\cal R}^{glob}$ 1.8; ${\cal R}_{cusp}$ 1.8; $Rep$ 1.9; $\rho\iota$ 1.10; $S(\lambda)$ 1.3; $\mathfrak{S}_{N}$ 1.8; $\hat{\mathfrak{S}}_{N}$ 1.8; $sgn$ 1.8; $sgn_{CD}$ 1.8; ${\cal S}_{n}$ 1.11;
$\mathfrak{St}_{tunip}$ 2.1; $\mathfrak{St}_{unip-quad}$ 2.4; $\mathfrak{St}_{unip,disc}$ 2.4; $sgn_{iso}$ 2.6; $sgn_{an}$ 2.6; $val_{F}$ 1.1; $V_{iso}$ 1.1; $V_{an}$ 1.1; $W_{N}$ 1.8; $\hat{W}_{N}$ 1.8; $w_{\alpha}$ 1.8; $w_{\alpha,\beta}$ 1.8; $w_{\alpha,\beta',\beta''}$ 1.8; $Z(\lambda)$ 1.3; $Z(\lambda,s)$ 1.3; ${\bf Z}(\lambda,s)$ 1.3; ${\bf Z}(\lambda,s)^{\vee}$ 1.3; $\vert .\vert _{F}$ 1.1.
\section{Introduction} \IEEEPARstart{D}{istributions} with fixed marginals have been studied extensively in the probability literature (see for example \cite{fixed} and the references therein). They are closely related to (and sometimes identified with, as will be the case in this paper) the concept of coupling, which has proven to be a very useful proof technique in probability theory \cite{coupling}, and in particular in the theory of Markov chains \cite{peres}. There is also rich literature on the geometrical and combinatorial properties of sets of distributions with given marginals, which are known as transportation polytopes in this context (see, e.g., \cite{brualdi}). Here we investigate these objects from a certain information-theoretic perspective. Our results and the general outline of the paper are briefly described below. \par Section \ref{shannon} provides definitions and elementary properties of the functionals studied subsequently -- Shannon entropy, R\'enyi entropy, conditional entropy, mutual information, and information divergence. In Section \ref{transport} we recall the definition and basic properties of couplings, i.e., bivariate distributions with fixed marginals, and introduce the corresponding notation. The notion of minimum entropy coupling, which will be useful in subsequent analysis, is also introduced here. In Section \ref{infinite} we discuss in detail continuity and related properties of the above-mentioned information measures under constraints on the marginal distributions. These results complement the rich literature on extending the statements of information theory to the case of countably infinite alphabets. \par In Section \ref{distance} we define a family of (pseudo)metrics on the space of probability distributions that is based on the minimum entropy coupling in the same way as the total variation distance is based on the so-called maximal coupling.
The relation between these distances is derived from Fano's inequality. Some other properties of the new metrics are also discussed, in particular an interesting characterization of the conditional entropy that they yield. \par In Section \ref{optimization} certain optimization problems associated with the above-mentioned information measures are studied. Most of them are, in a certain sense, reverse problems of well-known optimization problems such as the maximum entropy principle, the channel capacity, and the information projections. The general problems of (R\'enyi) entropy minimization, maximization of mutual information, and maximization of information divergence are all shown to be intractable. Since mutual information is a good measure of dependence of two random variables, this will also lead to a similar result for all measures of dependence satisfying R\'{e}nyi's axioms, and to a statistical scenario where this result might be of interest. The potential practical relevance of these problems is also discussed in this section, as well as their theoretical value. Namely, all of them turn out to be essentially restatements of well-known problems in complexity theory. \section{Information measures} \label{shannon} \par In this introductory section we recall the definitions and elementary properties of some basic information-theoretic functionals. All random variables are assumed to be discrete, with alphabet $\mathbb{N}$ -- the set of positive integers, or a subset of $\mathbb{N}$ of the form $\{1,\ldots,n\}$. \par The Shannon entropy of a random variable $X$ with probability distribution $P=(p_i)$ (we also sometimes write $P(i)$ for the masses of $P$) is defined as: \begin{equation} H(X) \equiv H(P) = -\sum_{i} p_i\log p_i \end{equation} with the usual convention $0\log0=0$ being understood. The base of the logarithm, $b>1$, is arbitrary and will not be specified.
$H$ is a strictly concave\footnote{\,To avoid possible confusion: concave means $\cap$ and convex means $\cup$.} functional in $P$ \cite{cover}. Further, for a pair of random variables $(X,Y)$ with joint distribution $S=(s_{i,j})$ and marginal distributions $P=(p_i)$ and $Q=(q_j)$, the following defines their joint entropy: \begin{equation} H(X,Y) \equiv H_{X,Y}(S) = -\sum_{i,j} s_{i,j}\log s_{i,j}, \end{equation} conditional entropy: \begin{equation} H(X|Y) \equiv H_{X|Y}(S) = -\sum_{i,j} s_{i,j}\log \frac{s_{i,j}}{q_j}, \end{equation} and mutual information: \begin{equation} I(X;Y) \equiv I_{X;Y}(S) = \sum_{i,j} s_{i,j}\log \frac{s_{i,j}}{p_{i}q_{j}}, \end{equation} again with appropriate conventions. We will refer to the above quantities as the Shannon information measures. They are all related by simple identities: \begin{equation} \label{identity} \begin{aligned} H(X,Y) &= H(X) + H(Y) - I(X;Y) \\ &= H(X) + H(Y|X) \end{aligned} \end{equation} and obey the following inequalities: \begin{align} \label{ineqH} \max\big\{H(X), H(Y)\big\}\leq H(X&,Y) \leq H(X) + H(Y), \\ \label{ineqI} \min\big\{H(X), H(Y)\big\} &\geq I(X;Y) \geq 0, \\ \label{ineqHx} 0 \leq H(X|Y) &\leq H(X). \end{align} The equalities on the right-hand sides of \eqref{ineqH}--\eqref{ineqHx} are achieved if and only if $X$ and $Y$ are independent. The equalities on the left-hand sides of \eqref{ineqH} and \eqref{ineqI} are achieved if and only if $X$ deterministically depends on $Y$ (i.e., iff $X$ is a function of $Y$), or vice versa. The equality on the left-hand side of \eqref{ineqHx} holds if and only if $X$ deterministically depends on $Y$. We will use some of these properties in our proofs; for their demonstration we point the reader to the standard reference \cite{cover}.
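As an illustrative numerical check of these identities and inequalities (a minimal sketch, not part of the original development; it assumes nothing beyond the definitions above, with logarithms taken in base $2$):

```python
import math

def H(probs):
    """Shannon entropy (base 2), with the convention 0*log 0 = 0."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A small joint distribution S of (X, Y) on {1,2} x {1,2}.
S = [[0.4, 0.1],
     [0.2, 0.3]]
P = [sum(row) for row in S]            # marginal of X: (0.5, 0.5)
Q = [sum(col) for col in zip(*S)]      # marginal of Y: (0.6, 0.4)

H_XY = H(p for row in S for p in row)  # joint entropy
H_X, H_Y = H(P), H(Q)
I = H_X + H_Y - H_XY                   # mutual information, via the identity
H_X_given_Y = H_XY - H_Y               # conditional entropy, via the identity

# The basic inequalities relating the Shannon information measures.
assert max(H_X, H_Y) <= H_XY <= H_X + H_Y + 1e-12
assert -1e-12 <= I <= min(H_X, H_Y)
assert -1e-12 <= H_X_given_Y <= H_X + 1e-12
```

Since this $S$ is not a product distribution, the mutual information comes out strictly positive and all three inequalities hold strictly on the relevant side.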
\par From identities \eqref{identity} one immediately observes the following: Over a set of bivariate probability distributions with fixed marginals (and hence fixed marginal entropies $H(X)$ and $H(Y)$), all the above functionals differ up to an additive constant (and a minus sign in the case of mutual information), and hence one can focus on studying only one of them and easily translate the results for the others. This fact will also be exploited later. \par Relative entropy (information divergence, Kullback-Leibler divergence) $D(P||Q)$ is the following functional: \begin{equation} D(P||Q) = \sum_{i} p_i \log\frac{p_i}{q_i}. \end{equation} \par Finally, the R\'enyi entropy \cite{renyi} of order $\alpha\geq0$ of a random variable $X$ with distribution $P$ is defined as: \begin{equation} H_\alpha(X)\equiv H_\alpha(P) = \frac{1}{1-\alpha}\log\sum_{i} p_i^\alpha, \end{equation} with \begin{equation} H_0(P)=\lim_{\alpha\to 0}H_\alpha(P)=\log|P| \end{equation} where $|P|=|\{i:p_i>0\}|$ denotes the size of the support of $P$, and \begin{equation} H_1(P)=\lim_{\alpha\to 1^+}H_\alpha(P)=H(P). \end{equation} One can also define: \begin{equation} H_\infty(P)=\lim_{\alpha\to\infty}H_\alpha(P)=-\log\max_i p_i. \end{equation} The joint R\'enyi entropy of the pair $(X,Y)$ having distribution $S=(s_{i,j})$ is naturally defined as: \begin{equation} H_\alpha(X,Y)\equiv H_\alpha(S) = \frac{1}{1-\alpha}\log\sum_{i,j} s_{i,j}^\alpha. \end{equation} Using the subadditivity (for $\alpha<1$) and superadditivity (for $\alpha>1$) of the function $x^\alpha$, one concludes that: \begin{equation} H_\alpha(X,Y) \geq \max\big\{H_\alpha(X),H_\alpha(Y)\big\} \end{equation} with equality if and only if $X$ is a function of $Y$, or vice versa. However, the R\'enyi analogue of the right-hand side of \eqref{ineqH} does not hold unless $\alpha=0$ or $\alpha=1$ \cite{aczel}.
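The R\'enyi family defined above is straightforward to evaluate numerically; the sketch below (illustrative only, with base-$2$ logarithms) handles the limiting orders $\alpha=0,1,\infty$ explicitly and also checks, on a concrete distribution, the standard fact that $H_\alpha$ is non-increasing in $\alpha$:

```python
import math

def renyi(P, alpha):
    """Renyi entropy of order alpha (base 2); alpha = 0, 1, inf taken as limits."""
    P = [p for p in P if p > 0]
    if alpha == 0:
        return math.log2(len(P))                  # H_0 = log of the support size
    if alpha == 1:
        return -sum(p * math.log2(p) for p in P)  # Shannon entropy
    if alpha == math.inf:
        return -math.log2(max(P))                 # min-entropy
    return math.log2(sum(p ** alpha for p in P)) / (1 - alpha)

P = [0.5, 0.25, 0.125, 0.125]
values = [renyi(P, a) for a in (0, 0.5, 1, 2, math.inf)]

# H_alpha is non-increasing in alpha (a well-known property of the family).
assert all(x >= y - 1e-12 for x, y in zip(values, values[1:]))
```

For this dyadic $P$, the values run from $H_0(P)=2$ down to $H_\infty(P)=1$, with $H_1(P)=1.75$ in between.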
In fact, no upper bound on the joint R\'enyi entropy in terms of the marginal entropies can exist for $0<\alpha<1$, as will be illustrated in Section \ref{infinite}. \section{Couplings of probability distributions} \label{transport} A coupling of two probability distributions $P$ and $Q$ is a bivariate distribution $S$ (on the product space, in our case $\mathbb{N}^2$) with marginals $P$ and $Q$. This concept can also be defined for random variables in a similar manner, and it represents a powerful proof technique in probability theory \cite{coupling}. \par Let $\Gamma_{n}^{(1)}$ and $\Gamma_{n\times m}^{(2)}$ denote the sets of one- and two-dimensional probability distributions with alphabets of size $n$ and $n\times m$, respectively: \begin{align} \Gamma_{n}^{(1)} &= \Bigg\{ (p_i)\in\mathbb{R}^{n}\,:\,p_i\geq0\,,\,\sum_{i} p_i = 1 \Bigg\} \\ \Gamma_{n\times m}^{(2)} &= \Bigg\{ (p_{i,j})\in\mathbb{R}^{n\times m}\,:\,p_{i,j}\geq 0\,,\,\sum_{i,j} p_{i,j} = 1 \Bigg\} \end{align} and let $\mathcal{C}(P,Q)$ denote the set of all couplings of $P\in\Gamma_{n}^{(1)}$ and $Q\in\Gamma_{m}^{(1)}$: \begin{equation} \mathcal{C}(P,Q) = \Bigg\{ S\in\Gamma_{n\times m}^{(2)}\,:\,\sum_{j} s_{i,j}=p_i,\, \sum_{i} s_{i,j}=q_j \Bigg\}. \end{equation} It is easy to show that the sets $\mathcal{C}(P,Q)$ are convex and closed in $\Gamma_{n\times m}^{(2)}$. They are also clearly disjoint and cover the entire $\Gamma_{n\times m}^{(2)}$, i.e., they form a partition of $\Gamma_{n\times m}^{(2)}$. Finally, they are parallel affine $(n-1)(m-1)$-dimensional subspaces of the $(n\cdot m-1)$-dimensional space $\Gamma_{n\times m}^{(2)}$. (We have in mind the restriction of the corresponding affine spaces in $\mathbb{R}^{n\times m}$ to $\mathbb{R}_{+}^{n\times m}$.) \par The set of distributions with fixed marginals is basically the set of matrices with nonnegative entries and prescribed row and column sums (only now the total sum is required to be one, but this is inessential).
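Since membership in $\mathcal{C}(P,Q)$ amounts to checking row and column sums, these sets are easy to work with numerically. A minimal sketch (illustrative, not from the original text) that builds the product coupling $P\times Q$ and checks convexity of $\mathcal{C}(P,Q)$ on a second, hand-picked coupling:

```python
def marginals(S):
    """Row and column sums of a joint distribution given as a matrix."""
    return [sum(row) for row in S], [sum(col) for col in zip(*S)]

def is_coupling(S, P, Q, tol=1e-12):
    rows, cols = marginals(S)
    return (all(abs(r - p) < tol for r, p in zip(rows, P)) and
            all(abs(c - q) < tol for c, q in zip(cols, Q)))

P = [0.2, 0.3, 0.5]
Q = [0.6, 0.4]

# The product coupling P x Q always lies in C(P, Q).
indep = [[p * q for q in Q] for p in P]
assert is_coupling(indep, P, Q)

# Another coupling with the same marginals, and a convex mixture of the two:
other = [[0.2, 0.0], [0.3, 0.0], [0.1, 0.4]]
assert is_coupling(other, P, Q)
mix = [[0.5 * a + 0.5 * b for a, b in zip(r1, r2)]
       for r1, r2 in zip(indep, other)]
assert is_coupling(mix, P, Q)
```

The mixture check reflects the convexity of $\mathcal{C}(P,Q)$ noted above: the marginal constraints are linear, so they are preserved under convex combinations.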
Such sets are special cases of the so-called transportation polytopes \cite{brualdi}. \par We shall also find it interesting to study information measures over sets of distributions in which one marginal is fixed and only the alphabet of the other is prescribed: \begin{equation} {\mathcal C}(P,m) = \bigcup_{Q\in\Gamma_{m}^{(1)}} {\mathcal C}(P,Q). \end{equation} These sets are also convex polytopes and form a partition of $\Gamma_{n\times m}^{(2)}$ for $P\in\Gamma_{n}^{(1)}$. \subsection{Minimum entropy couplings} We now introduce one special type of couplings which will be useful in subsequent analysis. \begin{definition} \emph{Minimum entropy coupling} of probability distributions $P$ and $Q$ is a bivariate distribution $S^*\in{\mathcal C}(P,Q)$ which minimizes the entropy functional $H\equiv H_{X,Y}$, i.e., \begin{equation} H(S^*) = \inf_{S\in{\mathcal C}(P,Q)} H(S). \end{equation} \end{definition} \par Minimum entropy couplings exist for any $P\in\Gamma_{n}^{(1)}$ and $Q\in\Gamma_{m}^{(1)}$ because the sets ${\mathcal C}(P,Q)$ are compact (closed and bounded) and entropy is continuous over $\Gamma_{n\times m}^{(2)}$ and hence attains its extrema. (Note, however, that they need not be unique.) From the strict concavity of entropy one concludes that the minimum entropy couplings must be vertices of ${\mathcal C}(P,Q)$ (i.e., they cannot be expressed as $aS+(1-a)T$, with $S,T\in{\mathcal C}(P,Q)$, $a\in(0,1)$). Finally, from identities \eqref{identity} it follows that the minimizers of $H_{X,Y}$ over ${\mathcal C}(P,Q)$ are simultaneously the minimizers of $H_{X|Y}$ and $H_{Y|X}$ and the maximizers of $I_{X;Y}$, and hence could also be called \emph{maximum mutual information couplings}, for example. \begin{mydef}[cont.] \emph{Minimum $\alpha$-entropy coupling} of probability distributions $P$ and $Q$ is a bivariate distribution $S^*\in{\mathcal C}(P,Q)$ which minimizes the R\'enyi entropy functional $H_\alpha$.
\end{mydef} \par Similarly to the above, the existence of minimum $\alpha$-entropy couplings is easy to establish, as is the fact that they must be vertices of ${\mathcal C}(P,Q)$ ($H_\alpha$ is concave for $0\leq\alpha\leq1$; for $\alpha>1$ it is neither concave nor convex \cite{ben}, but the claim follows from the convexity of $\sum_{i,j} s_{i,j}^\alpha$). \section{Infinite alphabets} \label{infinite} We now establish some basic properties of information measures over ${\mathcal C}(P,Q)$ and ${\mathcal C}(P,m)$, and of the sets ${\mathcal C}(P,Q)$ and ${\mathcal C}(P,m)$ themselves, in the case when the distributions $P$ and $Q$ have possibly infinite supports. The notation is similar to the finite alphabet case, for example: \begin{equation} \begin{aligned} \Gamma^{(1)} &= \Bigg\{ (p_i)_{i\in\mathbb{N}}\,:\,p_i\geq0\,,\,\sum_{i} p_i = 1 \Bigg\}, \\ \Gamma^{(2)} &= \Bigg\{ (p_{i,j})_{i,j\in\mathbb{N}}\,:\,p_{i,j}\geq0\,,\,\sum_{i,j} p_{i,j} = 1 \Bigg\}. \end{aligned} \end{equation} \par The following well-known claim will be useful. We give a proof for completeness. \begin{lemma} \label{lowersemicont} Let $f : A \to \mathbb{R}$, with $A\subseteq\mathbb{R}$ closed, be a continuous nonnegative function. Then the functional $F(x)=\sum_i f(x_i)$, $x=(x_1,x_2,\ldots)$, is lower semi-continuous in the $\ell^1$ topology. \end{lemma} \begin{proof} Suppose $\norm{x^{(n)}-x}\to0$. Then, by using the nonnegativity and continuity of $f$, we obtain \begin{equation} \begin{aligned} \liminf_{n\to\infty} F(x^{(n)}) &= \liminf_{n\to\infty} \sum_{i=1}^{\infty} f(x^{(n)}_i) \\ &\geq \liminf_{n\to\infty} \sum_{i=1}^{K} f(x^{(n)}_i) \\ &= \sum_{i=1}^{K} f(x_i) , \end{aligned} \end{equation} where the fact that $\norm{x^{(n)}-x}\to0$ implies $|x^{(n)}_i-x_i|\to0$, $\forall i$, was also used. Letting $K\to\infty$ we get \begin{equation} \liminf_{n\to\infty} F(x^{(n)}) \geq F(x), \end{equation} which was to be shown.
\end{proof} \subsection{Compactness of ${\mathcal C}(P,Q)$ and ${\mathcal C}(P,m)$} \par Let $\ell_{2}^1=\big\{(x_{i,j})_{i,j\in\mathbb{N}}:\sum_{i,j}|x_{i,j}|<\infty\big\}$. This is the familiar $\ell^1$ space, only defined for two-dimensional sequences. It clearly shares all the essential properties of $\ell^1$, completeness being the one we shall exploit. The metric understood is: \begin{equation} \norm{x-y} = \sum_{i,j} |x_{i,j} - y_{i,j}|, \end{equation} for $x,y\in\ell_{2}^1$. In the context of probability distributions, this distance is usually called the total variation distance (actually, it is twice the total variation distance, see \eqref{l1variation}). \begin{theorem} \label{comp} For any $P,Q\in\Gamma^{(1)}$ and $m\in\mathbb{N}$, ${\mathcal C}(P,Q)$ and ${\mathcal C}(P,m)$ are compact. \end{theorem} \begin{proof} A metric space is compact if and only if it is complete and totally bounded \cite[Thm 45.1]{munkres}. These facts are demonstrated in the following two propositions. \end{proof} \begin{proposition} \label{complete} ${\mathcal C}(P,Q)$ and ${\mathcal C}(P,m)$ are complete metric spaces. \end{proposition} \begin{proof} It is enough to show that ${\mathcal C}(P,Q)$ and ${\mathcal C}(P,m)$ are closed in $\ell_{2}^1$ because closed subsets of complete spaces are always complete \cite{munkres}. In other words, it suffices to show that for any sequence $S_n\in{\mathcal C}(P,Q)$ converging to some $S\in\ell_{2}^1$ (in the sense that $\norm{S_n - S}\to 0$), we have $S\in{\mathcal C}(P,Q)$. This is straightforward. If $S_n$ all have the same marginals ($P$ and $Q$), then $S$ must also have these marginals, for otherwise the distance between $S_n$ and $S$ would be lower bounded by the distance between the corresponding marginals: \begin{equation} \sum_{i,j} \left|S(i,j) - S_n(i,j)\right| \geq \sum_{i} \bigg| \sum_{j} \big(S(i,j) - S_n(i,j)\big) \bigg| \end{equation} and hence could not decrease to zero. The case of ${\mathcal C}(P,m)$ is similar.
\end{proof} \par For our next claim, recall that a set $E$ is said to be totally bounded if it has a finite covering by $\epsilon$-balls, for any $\epsilon>0$. In other words, for any $\epsilon>0$, there exist $x_1,\ldots,x_K\in E$ such that $E\subseteq\bigcup_k{\mathcal B}(x_k,\epsilon)$, where ${\mathcal B}(x_k,\epsilon)$ denotes the open ball around $x_k$ of radius $\epsilon$. The points $x_1,\ldots,x_K$ are then called an $\epsilon$-net for $E$. \begin{proposition} \label{bounded} ${\mathcal C}(P,Q)$ and ${\mathcal C}(P,m)$ are totally bounded. \end{proposition} \begin{proof} We prove the statement for ${\mathcal C}(P,Q)$, the proof for ${\mathcal C}(P,m)$ is very similar. Let $P, Q$, and $\epsilon>0$ be given. We need to show that there exist distributions $S_1,\ldots,S_K\in{\mathcal C}(P,Q)$ such that ${\mathcal C}(P,Q)\subseteq\bigcup_{k}{\mathcal B}(S_k,\epsilon)$, and this is done in the following. There exists $N$ such that $\sum_{i=N+1}^{\infty} p_i < \frac{\epsilon}{6}$ and $\sum_{j=N+1}^{\infty} q_j < \frac{\epsilon}{6}$. Observe the truncations of the distributions $P$ and $Q$, namely $(p_1,\ldots,p_N)$ and $(q_1,\ldots,q_N)$. Assume that $\sum_{i=1}^N p_i\geq\sum_{j=1}^N q_j$, and let $r=\sum_{i=1}^N p_i - \sum_{j=1}^N q_j$ (otherwise, just interchange $P$ and $Q$). Now let $P^{(N)}=(p_1,\ldots,p_N)$ and $Q^{(N,r)}=(q_1,\ldots,q_N,r)$, and observe ${\mathcal C}(P^{(N)},Q^{(N,r)})$. (Adding $r$ was necessary for ${\mathcal C}(P^{(N)},Q^{(N,r)})$ to be nonempty.) This set is closed (see the proof of Proposition \ref{complete}) and bounded in $\mathbb{R}^{N\times (N+1)}$, and hence it is compact by the Heine-Borel theorem. This further implies that it is totally bounded and has an $\frac{\epsilon}{6}$-net, i.e., there exist $T_1,\ldots,T_K\in{\mathcal C}(P^{(N)},Q^{(N,r)})$ such that ${\mathcal C}(P^{(N)},Q^{(N,r)})\subseteq\bigcup_{k}{\mathcal B}(T_k,\frac{\epsilon}{6})$. 
Now construct distributions $S_1,\ldots,S_K\in{\mathcal C}(P,Q)$ by ``padding" $T_1,\ldots,T_K$. Namely, take $S_k$ to be any distribution in ${\mathcal C}(P,Q)$ which coincides with $T_k$ on the first $N\times N$ coordinates, for example: \begin{equation} S_k(i,j) = \begin{cases} T_k(i,j) , & i,j\leq N \\ 0 , & j\leq N, i > N \\ T_k(i,N+1)\cdot{q_j}/{\sum_{j=N+1}^\infty q_j} , & i\leq N, j > N \\ p_i\cdot{q_j}/{\sum_{j=N+1}^\infty q_j} , & i,j > N . \end{cases} \end{equation} Note that $\norm{T_\ell-S_\ell}<\frac{\epsilon}{3}$ (where we understand that $T_\ell(i,j)=0$ for $i>N$ or $j>N+1$). We prove below that $S_k$'s are the desired $\epsilon$-net for ${\mathcal C}(P,Q)$, i.e., that any distribution $S\in{\mathcal C}(P,Q)$ is at distance at most $\epsilon$ from some $S_\ell$, $\ell\in\{1,\ldots,K\}$ ($\norm{S-S_\ell}<\epsilon$). Observe some $S\in{\mathcal C}(P,Q)$, and let $S'$ be its $N\times N$ truncation: \begin{equation} S'(i,j) = \begin{cases} S(i,j) , & i,j\leq N \\ 0 , & \text{otherwise}. \end{cases} \end{equation} Note that $S'$ is not a distribution, but that does not affect the proof. Note also that the marginals of $S'$ are bounded from above by the marginals of $S$, namely $q_j'=\sum_i S'(i,j)\leq q_j$ and $p_i'=\sum_j S'(i,j)\leq p_i$. Finally, we have $\norm{S-S'}<\frac{\epsilon}{3}$ because the total mass of $S$ on the coordinates where $i>N$ or $j>N$ is at most $\frac{\epsilon}{3}$. The next step is to create $S''\in{\mathcal C}(P^{(N)},Q^{(N,r)})$ by adding masses to $S'$ on the $N\times(N+1)$ rectangle. One way to do this is as follows. Let \begin{align} u_i &= \begin{cases} p_i-p_i' , & i\leq N \\ 0 , & i>N \end{cases} , \\ v_j &= \begin{cases} q_j - q_j' , & j\leq N \\ r , & j=N+1 \\ 0 , & j>N+1 \end{cases} , \end{align} and let $U=(u_i)$, and $V=(v_j)$, and $c=\sum_i u_i=\sum_j v_j$. Now define $S''$ by: \begin{equation} S'' = S' + \frac{1}{c}U\times V . 
\end{equation} It is easy to verify that $S''\in{\mathcal C}(P^{(N)},Q^{(N,r)})$ and that $\norm{S'-S''}<\frac{\epsilon}{6}$ because the total mass added is \begin{equation} \begin{aligned} c = \sum_{i=1}^{N} p_i - p_i' &= \sum_{i=1}^{N} \sum_{j=1}^{\infty} ( S(i,j) - S'(i,j) ) \\ &= \sum_{i=1}^{N} \sum_{j=N+1}^{\infty} S(i,j) \\ &\leq \sum_{j=N+1}^{\infty} q_j < \frac{\epsilon}{6} . \end{aligned} \end{equation} Now recall that $T_k$'s form an $\frac{\epsilon}{6}$-net for ${\mathcal C}(P^{(N)},Q^{(N,r)})$ and consequently that there exists some $T_\ell$, $\ell\in\{1,\ldots,K\}$, with $\norm{S''-T_\ell}<\frac{\epsilon}{6}$. To put this all together, write: \begin{equation} \begin{aligned} \norm{S-S_\ell} \leq &\norm{S-S'} + \norm{S'-S''} + \\ &\norm{S''-T_\ell} + \norm{T_\ell-S_\ell} < \epsilon , \end{aligned} \end{equation} which completes the proof. \end{proof} \subsection{Continuity of Shannon information measures} Shannon information measures are known to be discontinuous functionals in general \cite{ho,wehrl}. Imposing certain restrictions on the marginal distributions and entropies, however, ensures their continuity. \begin{theorem} \label{cont} Let $P,Q\in\Gamma^{(1)}$ and $m\in\mathbb{N}$, and assume that $Q$ has finite entropy. Then Shannon information measures are continuous over ${\mathcal C}(P,Q)$ and ${\mathcal C}(P,m)$. \end{theorem} \begin{proof} The claim can be established by using \cite[Thm 4.3]{harr} and exhibiting the cost-stable codes for these statistical models, but we also give here a more direct proof (which will then be extended to prove Theorem \ref{allshannon}). Write \begin{equation} \label{mutinf} H_Y(S) = I_{X;Y}(S) + H_{Y|X}(S) . \end{equation} The functional $H_{Y|X}(S)=\sum_{i,j} s_{i,j}\log\frac{p_i}{s_{i,j}}$ is lower semi-continuous by Lemma \ref{lowersemicont}. 
The functional $I_{X;Y}$ is also lower semi-continuous since \begin{equation} \label{KL} I_{X;Y}(S) = D(S||P\times Q), \end{equation} and information divergence $D(S||T)$ is known to be jointly lower semi-continuous in the distributions $S$ and $T$ \cite{topsoe2}. But since the sum of these two functionals is a constant $H_Y(S)=H(Q)<\infty$, both of them must be continuous. The continuity of $H_{X|Y}$ and $H_{X,Y}$ now follows from \eqref{identity}. \par Now consider ${\mathcal C}(P,m)$. In \cite{ho} it is shown that $H(Y|X)$ and $I(X;Y)$ are continuous when the alphabet of $Y$ is finite and fixed, which is what we have here. And since $H(X)=H(P)$ is fixed, $H(X|Y)$ and $H(X,Y)$ are also continuous (if $H(P)=\infty$ then they are infinite over the entire ${\mathcal C}(P,m)$, but we also take this to mean that they are continuous). \end{proof} \par In fact, since ${\mathcal C}(P,Q)$ and ${\mathcal C}(P,m)$ are compact, Shannon information measures are \emph{uniformly} continuous over these domains, for any $P,Q$ with finite entropy, and $m\in\mathbb{N}$. \par Combining the above results, we obtain the following. \begin{theorem} Let $P,Q\in\Gamma^{(1)}$ and $m\in\mathbb{N}$, and assume that $Q$ has finite entropy. Then Shannon information measures attain their extreme values over ${\mathcal C}(P,Q)$ and ${\mathcal C}(P,m)$. \end{theorem} \begin{proof} The claim follows from Theorems \ref{comp} and \ref{cont}: Every continuous function attains its infimum and supremum over a compact set \cite[Thm 4.16]{rudin}. \end{proof} \par For the maximum entropy distribution this claim is of little interest, since the maximizer is easy to find explicitly. Namely, $P\times Q=(p_iq_j)$ maximizes entropy over ${\mathcal C}(P,Q)$ as is easily seen from \eqref{ineqH}, and $P\times U_m$ is the maximizer over ${\mathcal C}(P,m)$, where $U_m$ is the uniform distribution over $\{1,\ldots,m\}$.
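The maximizers above are easy to check numerically. As an illustrative sketch (not part of the development; the marginals chosen below are arbitrary): for binary marginals, ${\mathcal C}(P,Q)$ is a segment parametrized by the single entry $t=S(1,1)$, and Shannon entropy along it peaks exactly at the product coupling $t=pq$, where it equals $H(P)+H(Q)$.

```python
import numpy as np

def shannon(S):
    """Shannon entropy (bits) of the masses in S, ignoring zeros."""
    s = np.asarray(S, dtype=float).ravel()
    s = s[s > 0]
    return float(-(s * np.log2(s)).sum())

# For P = (p, 1-p) and Q = (q, 1-q), a coupling is determined by
# t = S(1,1), which ranges over [max(0, p+q-1), min(p, q)].
p, q = 0.6, 0.3
lo, hi = max(0.0, p + q - 1.0), min(p, q)
ts = np.linspace(lo, hi, 101)
ents = [shannon([[t, p - t], [q - t, 1 - p - q + t]]) for t in ts]

# Entropy is maximized at the product coupling t = p*q, where it
# equals H(P) + H(Q).
t_star = ts[int(np.argmax(ents))]
print(t_star, p * q)
print(max(ents), shannon([p, 1 - p]) + shannon([q, 1 - q]))
```

The same one-parameter family is also convenient for visualizing the concavity of entropy over ${\mathcal C}(P,Q)$.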
But it is much harder to find the minimum entropy distribution, as we will show, and its existence is not obvious when the alphabets are unbounded. \par The argument in the proof of Theorem \ref{cont} can easily be adapted to prove the following more general claim. \begin{theorem} \label{allshannon} Let $S_n, S\in\Gamma^{(2)}$ be bivariate probability distributions. If $S_n$ converges to $S$ ($\norm{S_n - S}\to0$) in such a way that $H_X(S_n)\to H_X(S)$ and $H_Y(S_n)\to H_Y(S)$, and if at least one of these marginal entropies is finite, then we must have: \begin{equation} \label{allconverge} \begin{aligned} H_{X,Y}(S_n) \to H_{X,Y}(S) &,\quad I_{X;Y}(S_n) \to I_{X;Y}(S) \\ H_{X|Y}(S_n) \to H_{X|Y}(S) &,\quad H_{Y|X}(S_n) \to H_{Y|X}(S). \end{aligned} \end{equation} \end{theorem} \begin{proof} As in the proof of Theorem \ref{cont}, we observe that $\liminf_{n\to\infty} H_{Y|X}(S_n)\geq H_{Y|X}(S)$ (by Lemma \ref{lowersemicont}), and that $\liminf_{n\to\infty} I_{X;Y}(S_n)\geq I_{X;Y}(S)$ which follows from \eqref{KL} and the fact that when $S_n\to S$, then also $P_n\times Q_n\to P\times Q$, where $P_n, Q_n$, and $P, Q$ are the marginals of $S_n$ and $S$, respectively. But since $H_Y(S_n)\to H_Y(S)$ by assumption, one sees from \eqref{mutinf} that both of these inequalities must in fact be equalities. The remaining claims in \eqref{allconverge} then follow from \eqref{identity}. \end{proof} \par The previous claim establishes that if $\norm{S_n - S}\to0$, then a sufficient condition for the convergence of joint entropy is the convergence of marginal entropies. It is also necessary, as the following theorem shows. \begin{theorem} \label{jointmarg} Let $S_n, S$ be bivariate probability distributions such that $\norm{S_n - S}\to0$ and $H_{X,Y}(S_n)\to H_{X,Y}(S)<\infty$. Then $H_{X}(S_n) \to H_{X}(S)$ and $H_{Y}(S_n) \to H_{Y}(S)$, and conse\-quently, all claims in \eqref{allconverge} hold. 
\end{theorem} \begin{proof} The claim follows from the identity $H_{X,Y}(S_n) = H_X(S_n) + H_{Y|X}(S_n)$, and the fact that $H_X$ and $H_{Y|X}$ are both lower semi-continuous. \end{proof} \subsection{(Dis)continuity of R\'enyi entropy} R\'enyi entropy $H_\alpha$ is known to be a continuous functional for $\alpha>1$ (see, e.g., \cite{kovacevic2}) and it of course remains continuous over ${\mathcal C}(P,Q)$ and ${\mathcal C}(P,m)$. Therefore, it is also bounded and attains its extrema over these domains. It is, however, in general discontinuous for $\alpha\in[0,1]$, and its behavior over ${\mathcal C}(P,Q)$ and ${\mathcal C}(P,m)$ needs to be examined separately. The case $\alpha=1$ (Shannon entropy) has been settled in the previous subsection, so in the following we assume that $\alpha\in[0,1)$. \begin{theorem} $H_\alpha$ is continuous over ${\mathcal C}(P,m)$, for any $\alpha>0$. For $\alpha=0$ it is discontinuous for any $m\geq2$. \end{theorem} \begin{proof} Let $0<\alpha<1$. If $H_\alpha(P)=\infty$, then $H_\alpha(S)=\infty$ for any $S\in{\mathcal C}(P,m)$ and there is nothing to prove, so assume that $H_\alpha(P)<\infty$. Let $S_n$ be a sequence of bivariate distributions converging to $S$, and observe: \begin{equation} \label{eqcpm} \sum_{i,j} S_n(i,j)^\alpha. \end{equation} Since $S_n(i,j)\leq P(i)$ and $\sum_{i=1}^{\infty}\sum_{j=1}^{m} P(i)^\alpha = m\sum_{i=1}^{\infty} P(i)^\alpha<\infty$ by assumption, it follows from the Weierstrass criterion \cite[Thm 7.10]{rudin} that the series \eqref{eqcpm} converges uniformly (in $n$) and therefore: \begin{equation} \begin{aligned} \lim_{n\to\infty} \sum_{i,j} S_n(i,j)^\alpha &= \sum_{i,j} \lim_{n\to\infty} S_n(i,j)^\alpha \\ &= \sum_{i,j} S(i,j)^\alpha \end{aligned} \end{equation} which gives $H_\alpha(S_n)\to H_\alpha(S)$. 
\par As for the case $\alpha=0$, it is easy to exhibit a sequence $S_n\to S$ such that the supports of $S_n$ strictly contain the support of $S$, i.e., $|S_n|>|S|$, implying that $\lim_{n\to\infty} H_0(S_n)>H_0(S)$. The case $m=1$ is uninteresting because ${\mathcal C}(P,1)=\{P\}$. \end{proof} \par Unfortunately, continuity over ${\mathcal C}(P,Q)$ fails in general, as we discuss next. \begin{theorem} \label{unbounded} For any $\alpha\in(0,1)$ there exist distributions $P,Q$ with $H_\alpha(P)<\infty$ and $H_\alpha(Q)<\infty$, such that $H_\alpha$ is unbounded over ${\mathcal C}(P,Q)$. \end{theorem} \begin{proof} Let $P=Q=(p_i)$ and assume that the $p_i$'s are monotonically nonincreasing. Define $S_n$ with $S_n(i,j)=\frac{p_n}{n^r} + \varepsilon_{i,j}$ for $i,j\in\{1,\ldots,n\}$, where $\varepsilon_{i,j}>0$ are chosen to obtain the correct marginals and $r>1$, and $S_n(i,j)=p_i\delta_{i,j}$ otherwise, where $\delta_{i,j}$ is the Kronecker delta. Then $S_n\in{\mathcal C}(P,Q)$, and \begin{equation} \sum_{i,j} S_n(i,j)^\alpha \geq \sum_{i=1}^{n}\sum_{j=1}^{n} \left(\frac{p_n}{n^r}\right)^\alpha = n^{2-r\alpha} p_n^\alpha . \end{equation} Now, if $p_n$ decreases to zero slowly enough, the previous expression will tend to $\infty$ when $n\to\infty$ for appropriately chosen $r$. For example, let $p_n\sim n^{-\beta}$, $\beta>1$. Then whenever $2-r\alpha-\beta\alpha>0$, i.e., $r+\beta<2\alpha^{-1}$, we will have $\lim_{n\to\infty}H_\alpha(S_n)=\infty$. Furthermore, if $\beta\alpha>1$, then $H_\alpha(P)<\infty$. Therefore, for a given $\alpha\in(0,1)$, we have found distributions $P$ and $Q$ with finite entropy of order $\alpha$, such that $H_\alpha$ is unbounded over ${\mathcal C}(P,Q)$. \end{proof} \par It is known that R\'enyi entropy $H_\alpha$ satisfies $H_\alpha(X,Y)\leq H_\alpha(X)+H_\alpha(Y)$ only for $\alpha=0$ and $\alpha=1$.
Such an upper bound does not hold for $\alpha\in(0,1)$, and, in fact, no upper bound on $H_\alpha(X,Y)$ in terms of $H_\alpha(X)$ and $H_\alpha(Y)$ can exist, as Theorem \ref{unbounded} shows. \begin{corollary} \label{cordiscont} For any $\alpha\in(0,1)$ there exist distributions $P$ and $Q$ such that $H_\alpha$ is discontinuous at every point of ${\mathcal C}(P,Q)$. \end{corollary} \begin{proof} Let $P$ and $Q$ be such that $H_\alpha$ is unbounded over ${\mathcal C}(P,Q)$. Let $S$ be an arbitrary distribution from ${\mathcal C}(P,Q)$. It is enough to show that $H_\alpha$ remains unbounded in any neighborhood of $S$. Let $M>0$ be an arbitrary number, and $\epsilon\in(0,1)$. We can find $T\in{\mathcal C}(P,Q)$ with $H_\alpha(T)$ as large as desired, so assume that $\sum_{i,j} t_{i,j}^\alpha\geq M/\epsilon$. Observe the distribution $(1-\epsilon)S+\epsilon T$. It is in $2\epsilon$-neighborhood of $S$ since $\norm{S-((1-\epsilon)S+\epsilon T)}=\epsilon\norm{S-T}\leq 2\epsilon$. Also, since the function $x^\alpha$ is concave for $\alpha<1$, we get: \begin{equation} \begin{aligned} \sum_{i,j} \big((1-\epsilon)s_{i,j}&+\epsilon t_{i,j}\big)^\alpha \geq \\ &(1-\epsilon)\sum_{i,j} s_{i,j}^\alpha + \epsilon\sum_{i,j} t_{i,j}^\alpha \geq M , \end{aligned} \end{equation} which completes the proof. \end{proof} \par The case of $\alpha=0$ (Hartley entropy) remains; the proof of the following result is straightforward. \begin{theorem} $H_0$ is discontinuous over ${\mathcal C}(P,Q)$, for any distributions $P$ and $Q$ with supports of size at least two. \end{theorem} \par Note that, unlike for the Shannon information measures, we cannot claim in general that $H_\alpha$ attains its supremum over ${\mathcal C}(P,Q)$, for $\alpha<1$. However, infimum \emph{is} attained, i.e., \emph{minimum $\alpha$-entropy coupling always exists}, because R\'enyi entropy is lower semi-continuous \cite{kovacevic2}, and any such function must attain its infimum over a compact set \cite{holmes}. 
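As an illustrative sketch of the vertex property just mentioned (the marginals and the order $\alpha$ below are arbitrary choices): with binary marginals, ${\mathcal C}(P,Q)$ is again a segment, $H_\alpha$ with $\alpha\in(0,1)$ is concave along it, and a grid search therefore finds the minimum at an endpoint, i.e., at a vertex of ${\mathcal C}(P,Q)$.

```python
import numpy as np

def renyi(S, alpha):
    """Rényi entropy of order alpha (bits), for alpha not equal to 1."""
    s = np.asarray(S, dtype=float).ravel()
    s = s[s > 0]
    return float(np.log2((s ** alpha).sum()) / (1.0 - alpha))

# Binary marginals: couplings form the segment t in [lo, hi], t = S(1,1).
p, q, alpha = 0.6, 0.3, 0.5
lo, hi = max(0.0, p + q - 1.0), min(p, q)
ts = np.linspace(lo, hi, 201)
ents = [renyi([[t, p - t], [q - t, 1 - p - q + t]], alpha) for t in ts]

# H_alpha is concave on the segment, so its minimum over the grid is
# attained at one of the two endpoints (the vertices of C(P,Q)).
i_min = int(np.argmin(ents))
print(i_min in (0, len(ts) - 1))
```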
\par We next prove that, although $H_\alpha$ is discontinuous for some $P$ and $Q$, the continuity still holds for a wide class of marginal distributions. \begin{theorem} If $\sum_{i,j} \min\{p_i,q_j\}^\alpha<\infty$, then $H_\alpha$ is continuous over ${\mathcal C}(P,Q)$, for any $\alpha>0$. For $P=Q=(p_i)$, with $p_i$'s nonincreasing, this condition reduces to $\sum_i i\cdot p_i^\alpha<\infty$. \end{theorem} \begin{proof} Let $S_n\to S$, where $S_n,S\in{\mathcal C}(P,Q)$. Since, over ${\mathcal C}(P,Q)$, $S_n(i,j)\leq\min\{p_i,q_j\}$ and by assumption $\sum_{i,j}\min\{p_i,q_j\}^\alpha<\infty$, we can apply the Weierstrass criterion to conclude that $\sum_{i,j} S_n(i,j)^\alpha$ converges uniformly in $n$ and therefore that $H_\alpha(S_n)\to H_\alpha(S)$. \par Now let $P=Q$ and assume that the $p_i$'s are monotonically nonincreasing. Then $\min\{p_i,p_j\}=p_{\max\{i,j\}}$, i.e., \begin{equation} \big(\min\{p_i,p_j\}\big) = \begin{pmatrix} p_1 & p_2 & p_3 & \cdots \\ p_2 & p_2 & p_3 & \cdots \\ p_3 & p_3 & p_3 & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix} \end{equation} By observing the elements above (and including) the diagonal, it follows that: \begin{equation} \sum_{i} i\cdot p_i^\alpha \leq \sum_{i,j} \min\{p_i,p_j\}^\alpha \leq 2\sum_{i} i\cdot p_i^\alpha, \end{equation} and hence the condition $\sum_{i} i\cdot p_i^\alpha<\infty$ is equivalent to $\sum_{i,j} \min\{p_i,p_j\}^\alpha<\infty$. \end{proof} \par Finally, let us prove a result analogous to Theorem \ref{jointmarg}. \begin{theorem} Let $S_n, S$ be bivariate probability distributions such that $\norm{S_n - S}\to0$ and $H_\alpha(S_n)\to H_\alpha(S)<\infty$. Let $P_n, Q_n$ be the marginals of $S_n$, and $P, Q$ the marginals of $S$. Then $H_\alpha(P_n) \to H_\alpha(P)$ and $H_\alpha(Q_n) \to H_\alpha(Q)$. \end{theorem} \begin{proof} If $\norm{S_n - S}\to0$, then of course $\norm{P_n - P}\to0$ and $\norm{Q_n - Q}\to0$. 
Write: \begin{equation} \label{rastav} \begin{aligned} \sum_{i,j} S_n(i,j)^\alpha = &\sum_{i} P_n(i)^\alpha + \\ & \sum_{i} \Bigg( \sum_{j} S_n(i,j)^\alpha - P_n(i)^\alpha \Bigg) . \end{aligned} \end{equation} We are interested in showing that the first term on the right-hand side converges to $\sum_{i} P(i)^\alpha$, which is equivalent to saying that $H_\alpha(P_n) \to H_\alpha(P)$. Observe that this term is lower semi-continuous by Lemma \ref{lowersemicont}, meaning that \begin{equation} \label{infP} \liminf_{n\to\infty} \sum_{i} P_n(i)^\alpha \geq \sum_{i} P(i)^\alpha. \end{equation} The second term on the right-hand side of \eqref{rastav} is also lower semi-continuous for the same reason, namely: \begin{equation} \sum_{j} S_n(i,j)^\alpha - P_n(i)^\alpha \geq 0 \end{equation} because the function $x^\alpha$ is subadditive, and \begin{equation} \lim_{n\to\infty} \Bigg( \sum_{j} S_n(i,j)^\alpha - P_n(i)^\alpha \Bigg) = \sum_{j} S(i,j)^\alpha - P(i)^\alpha, \end{equation} because $H_\alpha(S_n)\to H_\alpha(S)$. Therefore, \begin{equation} \begin{aligned} \liminf_{n\to\infty} &\sum_{i} \Bigg( \sum_{j} S_n(i,j)^\alpha - P_n(i)^\alpha \Bigg) \geq \\ &\sum_{i} \Bigg( \sum_{j} S(i,j)^\alpha - P(i)^\alpha \Bigg), \end{aligned} \end{equation} or, since $\sum_{i,j} S_n(i,j)^\alpha\to\sum_{i,j} S(i,j)^\alpha$, \begin{equation} \label{supP} \limsup_{n\to\infty} \sum_{i} P_n(i)^\alpha \leq \sum_{i} P(i)^\alpha. \end{equation} Now \eqref{infP} and \eqref{supP} give $H_\alpha(P_n)\to H_\alpha(P)$, and $H_\alpha(Q_n)\to H_\alpha(Q)$ follows by symmetry. \end{proof} \par Note that the opposite implication does not hold for any $\alpha\in[0,1)$, as Corollary \ref{cordiscont} shows. Namely, if $\norm{S_n - S}\to0$, convergence of the marginal entropies ($H_\alpha(P_n) \to H_\alpha(P)$ and $H_\alpha(Q_n) \to H_\alpha(Q)$) does not imply convergence of the joint entropy $H_\alpha(S_n)\to H_\alpha(S)$.
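The sandwich bound $\sum_{i} i\cdot p_i^\alpha \leq \sum_{i,j} \min\{p_i,p_j\}^\alpha \leq 2\sum_{i} i\cdot p_i^\alpha$ underlying the continuity criterion above is easy to verify numerically; the following sketch uses a truncated power law (the truncation level $N$ and the exponents are arbitrary choices made for illustration).

```python
import numpy as np

# Truncated power law p_i proportional to i**(-beta), nonincreasing.
N, beta, alpha = 200, 2.0, 0.8
p = np.arange(1, N + 1, dtype=float) ** (-beta)
p /= p.sum()

lhs = float((np.arange(1, N + 1) * p ** alpha).sum())
mid = float(sum(min(p[i], p[j]) ** alpha for i in range(N) for j in range(N)))

# min{p_i, p_j} = p_max(i,j), and index k occurs 2k-1 times as a max,
# which yields the two-sided bound checked here.
print(lhs <= mid <= 2 * lhs)
```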
\section{Entropy metrics} \label{distance} Apart from many of their other uses, couplings are very convenient for defining metrics on the space of probability distributions. There are many interesting metrics defined via so-called ``optimal" couplings. We first illustrate this point using one familiar example, and then define new information-theoretic metrics based on the minimum entropy coupling. \par Given two probability distributions $P$ and $Q$, one could measure the ``distance" between them as follows. Consider all possible random pairs $\left(X,Y\right)$ with marginal distributions $P$ and $Q$. Then define some measure of dissimilarity of $X$ and $Y$, for example ${\mathbb{P}}(X\neq Y)$, and minimize it over all such couplings (minimization is necessary for the triangle inequality to hold). Indeed, this example yields the well-known total variation distance \cite{peres}: \begin{equation} \label{variation} d_\text{V}(P,Q) = \inf_{{\mathcal{C}}(P,Q)} {\mathbb{P}}(X\neq Y) , \end{equation} where the infimum is taken over all joint distributions of the random vector $(X,Y)$ with marginals $P$ and $Q$. Notice that a minimizing distribution (called a \emph{maximal coupling}, see, e.g., \cite{sason}) in \eqref{variation} is ``easy" to find because ${\mathbb{P}}(X\neq Y)$ is a linear functional in the joint distribution of $(X,Y)$. For the same reason, $d_\text{V}(P,Q)$ is easy to compute, but this is also clear from the identity \cite{peres}: \begin{equation} \label{l1variation} d_\text{V}(P,Q) = \frac{1}{2}\sum_{i} |p_i-q_i|. \end{equation} \par Now let us define some information-theoretic distances in a similar manner. Let $(X,Y)$ be a random pair with joint distribution $S$ and marginal distributions $P$ and $Q$. The total information contained in these random variables is $H(X,Y)$, while the information contained simultaneously in both of them (or the information they contain about each other) is measured by $I(X;Y)$. 
One is then tempted to take as a measure of their dissimilarity\footnote{\,Drawing a familiar information-theoretic Venn diagram \cite{cover} makes it clear that this is a measure of ``dissimilarity" of two random variables.}: \begin{equation} \begin{aligned} \label{delta} \Delta_1(X,Y) \equiv \Delta_1(S) &= H(X,Y) - I(X;Y) \\ &= H(X|Y) + H(Y|X). \end{aligned} \end{equation} Indeed, this quantity (introduced by Shannon \cite{shannon}, and usually referred to as the \emph{entropy metric} \cite{csiszar}) satisfies the properties of a pseudometric \cite{csiszar}. In a similar way one can show that the following is also a pseudometric: \begin{equation} \label{deltamaks} \Delta_\infty(X,Y) \equiv \Delta_\infty(S) = \max\big\{H(X|Y),H(Y|X)\big\}, \end{equation} as are the normalized variants of $\Delta_1$ and $\Delta_\infty$ \cite{metrics}. These pseudometrics have found numerous applications (see for example \cite{yao}) and have also been considered in an algorithmic setting \cite{kolmo}. \begin{remark} $\Delta_1$ is a pseudometric on the space of random variables over the same probability space. Namely, for $\Delta_1$ to be defined, the joint distribution of $(X,Y)$ must be given because joint entropy and mutual information are not defined otherwise. Equation \eqref{deltainf} below defines the distance between random variables (more precisely, between their distributions) that does not depend on the joint distribution. \end{remark} \par One can further generalize these definitions to obtain a family of pseudometrics. This generalization is akin to the familiar $\ell_p$ distances. Let \begin{equation} \label{deltap} \Delta_p(X,Y) \equiv \Delta_p(S) = \big( H(X|Y)^p + H(Y|X)^p \big)^\frac{1}{p} , \end{equation} for $p\geq1$. Observe that $\lim_{p\to\infty} \Delta_p(X,Y) = \Delta_\infty(X,Y)$, justifying the notation. \begin{theorem} $\Delta_p(X,Y)$ satisfies the properties of a pseudometric, for all $p\in[1,\infty]$. 
\end{theorem} \begin{proof} Nonnegativity and symmetry are clear, as is the fact that $\Delta_p(X,Y)=0$ if (but not only if) $X=Y$ with probability one. The triangle inequality remains. Following the proof for $\Delta_1$ from \cite[Lemma 3.7]{csiszar}, we first observe that $H(X|Y)\leq H(X|Z) + H(Z|Y)$, wherefrom: \begin{equation} \begin{aligned} \Delta_p(X,Y) \leq \Big( &\big( H(X|Z) + H(Z|Y) \big)^p + \\ &\big( H(Y|Z) + H(Z|X) \big)^p \Big)^\frac{1}{p} . \end{aligned} \end{equation} Now apply the Minkowski inequality ($\pnorm{a+b}\leq \pnorm{a} + \pnorm{b}$) to the vectors $a=(H(X|Z),H(Z|X))$ and $b=(H(Z|Y),H(Y|Z))$ to get: \begin{equation} \Delta_p(X,Y) \leq \Delta_p(X,Z) + \Delta_p(Z,Y) , \end{equation} which was to be shown. \end{proof} \par Having defined measures of dissimilarity, we can now define the corresponding distances: \begin{equation} \label{deltainf} \underline{\Delta}_p(P,Q) = \inf_{S\in{\mathcal{C}}(P,Q)} \Delta_p(S) . \end{equation} The case $p=1$ has also been analyzed in some detail in \cite{vidyasagar}, motivated by the problem of optimal order reduction for stochastic processes. \begin{theorem} $\underline{\Delta}_p$ is a pseudometric on $\Gamma^{(1)}$, for any $p\in[1,\infty]$. \end{theorem} \begin{proof} Since ${\Delta}_p$ satisfies the properties of a pseudometric, we only need to show that these properties are preserved under the infimum. $1^\text{o}$ Nonnegativity is clearly preserved, $\underline{\Delta}_p\geq 0$. $2^\text{o}$ Symmetry is also preserved, $\underline{\Delta}_p(P,Q)=\underline{\Delta}_p(Q,P)$. $3^\text{o}$ If $P=Q$ then $\underline{\Delta}_p(P,Q) = 0$. This is because $S=\text{diag}(P)$ (distribution with masses $p_i=q_i$ on the diagonal and zeroes elsewhere) belongs to ${\mathcal{C}}(P,Q)$ in this case, and for this distribution we have $H_{X|Y}(S)=H_{Y|X}(S)=0$. $4^\text{o}$ The triangle inequality is left. 
Let $X$, $Y$ and $Z$ be random variables with distributions $P$, $Q$ and $R$, respectively, and let their joint distribution be specified. We know that $\Delta_p(X,Y)\leq \Delta_p(X,Z) + \Delta_p(Z,Y)$, and we have to prove that \begin{equation} \inf_{{\mathcal C}(P,Q)} \Delta_p(X,Y) \leq \inf_{{\mathcal C}(P,R)} \Delta_p(X,Z) + \inf_{{\mathcal C}(R,Q)} \Delta_p(Z,Y). \end{equation} Since, from the above, \begin{equation} \begin{aligned} \inf_{{\mathcal C}(P,Q)} \Delta_p(X,Y) &= \inf_{{\mathcal C}(P,Q,R)} \Delta_p(X,Y) \\ &\leq \inf_{{\mathcal C}(P,Q,R)} \big\{\Delta_p(X,Z) + \Delta_p(Z,Y)\big\} \end{aligned} \end{equation} it suffices to show that \begin{equation} \label{triangle} \begin{aligned} \inf_{{\mathcal C}(P,Q,R)} \big\{\Delta_p(X,Z) + &\Delta_p(Z,Y)\big\} = \\ \inf_{{\mathcal C}(P,R)} &\Delta_p(X,Z) + \inf_{{\mathcal C}(R,Q)} \Delta_p(Z,Y). \end{aligned} \end{equation} (${\mathcal C}(P,Q,R)$ denotes the set of all three-dimensional distributions with one-dimensional marginals $P$, $Q$, and $R$, as the notation suggests.) Let $T\in{\mathcal C}(P,R)$ and $U\in{\mathcal C}(R,Q)$ be the optimizing distributions on the right-hand side (rhs) of \eqref{triangle}. Observe that there must exist a joint distribution $W\in{\mathcal C}(P,Q,R)$ consistent with $T$ and $U$ (for example, take $w_{i,j,k}=t_{i,k}u_{k,j}/r_k$). Since the optimal value of the lhs is less than or equal to the value at $W$, we have shown that the lhs of \eqref{triangle} is less than or equal to the rhs. For the opposite inequality observe that the optimizing distribution on the lhs of \eqref{triangle} defines some two-dimensional marginals $T\in{\mathcal C}(P,R)$ and $U\in{\mathcal C}(R,Q)$, and the optimal value of the rhs must be less than or equal to its value at $(T,U)$. \end{proof} \begin{remark} If $\underline{\Delta}_p(P,Q)=0$, then $P$ and $Q$ are a permutation of each other. 
This is easy to see because only in that case can one have $H_{X|Y}(S)=H_{Y|X}(S)=0$, for some $S\in\mathcal{C}(P,Q)$. Therefore, if distributions are identified up to a permutation, then $\underline{\Delta}_p$ is a metric. In other words, if we think of distributions as unordered multisets of nonnegative numbers summing up to one, then $\underline{\Delta}_p$ is a metric on such a space. \end{remark} \par Observe that the distribution defining $\underline{\Delta}_p(P,Q)$ is in fact the minimum entropy coupling. Thus minimum entropy coupling defines the distances $\underline{\Delta}_p$ on the space of probability distributions in the same way maximal coupling defines the total variation distance. However, there is a sharp difference in the computational complexity of finding these two couplings, as will be shown in the following section. \subsection{Some properties of entropy metrics} We first note that $\underline{\Delta}_p$ is a monotonically nonincreasing function of $p$. In the following, we shall mostly deal with $\underline{\Delta}_1$ and $\underline{\Delta}_\infty$, but most results concerning bounds and convergence can be extended to all $\underline{\Delta}_p$ based on this monotonicity property. \par The metric $\underline{\Delta}_1$ gives an upper bound on the entropy difference $|H(P)-H(Q)|$. Namely, since \begin{equation} \begin{aligned} |H(X)-H(Y)|&=|H(X|Y)-H(Y|X)| \\ &\leq H(X|Y)+H(Y|X) \\ &=\Delta_1(X,Y), \end{aligned} \end{equation} we conclude that: \begin{equation} \label{deltaent} |H(P)-H(Q)|\leq\underline{\Delta}_1(P,Q). \end{equation} Therefore, entropy is continuous with respect to this pseudometric, i.e., $\underline{\Delta}_1(P_n,P)\to0$ implies $H(P_n)\to H(P)$. Bounding the entropy difference is an important problem in various contexts and it has been studied extensively, see for example \cite{ho2,sason}. 
In particular, \cite{sason} studies bounds on the entropy difference via maximal couplings, whereas \eqref{deltaent} is obtained via minimum entropy couplings. \par Another useful property, relating the entropy metric $\underline{\Delta}_1$ and the total variation distance, follows from Fano's inequality: \begin{equation} H(X|Y) \leq \mathbb{P}(X\neq Y)\log(|X|-1) + h(\mathbb{P}(X\neq Y)), \end{equation} where $|X|$ denotes the size of the support of $X$, and $h(x)=-x\log_2(x) - (1-x)\log_2(1-x)$, $x\in[0,1]$, is the binary entropy function. Evaluating the rhs at the maximal coupling (the joint distribution which minimizes $\mathbb{P}(X\neq Y)$), and the lhs at the minimum entropy coupling, we obtain: \begin{equation} \underline{\Delta}_1(P,Q) \leq d_\text{V}(P,Q)\log(|P||Q|) + 2h(d_\text{V}(P,Q)). \end{equation} This relation makes sense only when the alphabets (supports of $P$ and $Q$) are finite. When the supports are also fixed it shows that $\underline{\Delta}_1$ is continuous with respect to $d_\text{V}$, i.e., that $d_\text{V}(P_n,P)\to0$ implies $\underline{\Delta}_1(P_n,P)\to0$. By Pinsker's inequality \cite{csiszar} then it follows that $\underline{\Delta}_1$ is also continuous with respect to information divergence, i.e., $D(P_n||P)\to0$ implies $\underline{\Delta}_1(P_n,P)\to0$. \par The continuity of $\underline{\Delta}_1$ with respect to $d_\text{V}$ fails in the case of infinite (or even finite, but unbounded) supports, which follows from \eqref{deltaent} and the fact that entropy is a discontinuous functional with respect to the total variation distance. One can, however, claim the following. \begin{proposition} If $P_n\to P$ in the total variation distance, and $H(P_n)\to H(P)<\infty$, then $\underline{\Delta}_1(P_n,P)\to0$. \end{proposition} \begin{proof} In \cite[Thm 17]{hoverdu} it is shown that if $d_\text{V}(P_{X_n},P_{X})\to0$ and $H(X_n)\to H(X)<\infty$, then $\mathbb{P}(X_n\neq Y_n)\to0$ implies $H(X_n|Y_n)\to0$, for any r.v.'s $Y_n$. 
Our claim then follows by specifying $P_{X_n}=P_n$, $P_{X}=P_{Y_n}=P$, and taking infimums on both sides of the implication. \end{proof} \par We also note here that sharper bounds than the above can be obtained by using $\underline{\Delta}_\infty$ instead of $\underline{\Delta}_1$. For example: \begin{equation} |H(P)-H(Q)|\leq\underline{\Delta}_\infty(P,Q), \end{equation} (with equality whenever the minimum entropy coupling of $P$ and $Q$ is such that $Y$ is a function of $X$, or vice versa), and: \begin{equation} \underline{\Delta}_\infty(P,Q)\leq d_\text{V}(P,Q)\log\max\{|P|,|Q|\} + h(d_\text{V}(P,Q)). \end{equation} \par We conclude this section with an interesting remark on the conditional entropy. First observe that the pseudometric $\Delta_p$ ($\underline{\Delta}_p$) can also be defined for random vectors (multivariate distributions). For example, $\Delta_1((X,Y),(Z))$ is well-defined by $H(X,Y|Z) + H(Z|X,Y)$. If the distributions of $(X,Y)$ and $Z$ are $S$ and $R$, respectively, then minimizing the above expression over all tri-variate distributions with the corresponding marginals $S$ and $R$ would give $\underline{\Delta}_1(S,R)$. Furthermore, random vectors need not be disjoint. For example, we have: \begin{equation} \Delta_1((X),(X,Y)) = H(X|X,Y) + H(X,Y|X) = H(Y|X), \end{equation} because the first summand is equal to zero. Therefore, the conditional entropy $H(Y|X)$ can be seen as the distance between the pair $(X,Y)$ and the conditioning random variable $X$. If the distribution of $(X,Y)$ is $S$, and the marginal distribution of $X$ is $P$, then: \begin{equation} \underline{\Delta}_1(P,S) = H_{Y|X}(S), \end{equation} because $S$ is the only distribution consistent with these constraints. In fact, we have $\underline{\Delta}_p(P,S) = H_{Y|X}(S)$ for all $p\in[1,\infty]$. 
Therefore, \emph{the conditional entropy $H(Y|X)$ represents the distance between the joint distribution of $(X,Y)$ and the marginal distribution of the conditioning random variable $X$}. \section{Optimization problems} \label{optimization} In this final section we analyze some natural optimization problems associated with information measures over ${\mathcal C}(P,Q)$ and ${\mathcal C}(P,m)$, and establish their computational intractability. The proofs are not difficult, but they have a number of important consequences, as discussed in Section \ref{conceq}, and, furthermore, they give interesting information-theoretic interpretations of well-known problems in complexity theory, such as the \textsc{Subset sum} and the \textsc{Partition} problems. Some closely related problems over ${\mathcal C}(P,Q)$, in the context of computing $\underline{\Delta}_1(P,Q)$, are also studied in \cite{vidyasagar}. \subsection{Optimization over ${\mathcal C}(P,Q)$} \label{bothmarginals} \par Consider the following computational problem, called \textsc{Minimum entropy coupling}: Given $P=(p_1,\ldots,p_n)$ and $Q=(q_1,\ldots,q_m)$ (with $p_i,q_j\in\mathbb{Q}$), find the minimum entropy coupling of $P$ and $Q$. It is shown below that this problem is NP-hard. The proof relies on the following well-known NP-complete problem \cite{garey}: \displayproblem {Subset sum} {Positive integers $d_1,\ldots,d_n$ and $s$.} {Is there a $J\subseteq\{1,\ldots,n\}$ such that $\sum_{j\in J} d_j = s$ ?} \begin{theorem} \label{minentropy} \textsc{Minimum entropy coupling} is NP-hard. \end{theorem} \begin{proof} We shall demonstrate a reduction from the \textsc{Subset sum} to the \textsc{Minimum entropy coupling}. Let there be given an instance of the \textsc{Subset sum}, i.e., a set of positive integers $s;d_1,\ldots,d_n$, $n\geq 2$. Let $D=\sum_{i=1}^{n} d_i$, and let $p_i=d_i/D$, $q=s/D$ (assume that $s<D$, the problem otherwise being trivial). Denote $P=(p_1,\ldots,p_n)$ and $Q=(q,1-q)$. 
The question we are trying to answer is whether there is a $J\subseteq\{1,\ldots,n\}$ such that $\sum_{j\in J} d_j = s$, i.e., such that $\sum_{j\in J} p_j = q$. Observe that this happens if and only if there is a matrix $S$ with row sums $P=(p_1,\ldots,p_n)$ and column sums $Q=(q,1-q)$, which has exactly one nonzero entry in every row (or, in probabilistic language, a distribution $S\in{\mathcal C}(P,Q)$ such that $Y$ deterministically depends on $X$). We know that in this case, and only in this case, the entropy of $S$ would be equal to $H(P)$ \cite{cover}, which is by \eqref{ineqH} a lower bound on entropy over ${\mathcal C}(P,Q)$. In other words, if such a distribution exists, it must be the minimum entropy coupling. Therefore, if we could find the minimum entropy coupling, we could easily decide whether it has one nonzero entry in every row, thereby solving the given instance of the \textsc{Subset sum}. \end{proof} \par Now from \eqref{identity} we conclude that the problems of minimization of the conditional entropies and maximization of the mutual information over ${\mathcal C}(P,Q)$ are also NP-hard. Furthermore, in the same way as above one can define the problem \textsc{Minimum $\alpha$-entropy coupling}, for any $\alpha\geq0$, and establish its NP-hardness. Note that the reverse problems over ${\mathcal C}(P,Q)$, entropy maximization for example, are trivial for Shannon information measures. In the case of R\'enyi entropy the problem is in general not trivial, but it can be solved by standard convex optimization methods. \par It would be interesting to determine whether the \textsc{Minimum entropy coupling} belongs to FNP\footnote{\,The class FNP captures the complexity of function problems associated with decision problems in NP, see \cite{pap}.}, but this appears to be quite difficult. Namely, given the optimal solution, it is not obvious how to verify (in polynomial time) that it is indeed optimal. 
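The reduction in the proof can be illustrated directly: for a small \textsc{Subset sum} instance one can search for the subset $J$ by brute force and, when it exists, write down the deterministic coupling it induces, whose entropy attains the lower bound $H(P)$. A minimal sketch (function names ours; the brute-force search is for illustration only, the point of the theorem being that no efficient search is expected in general):

```python
import math
from itertools import combinations

def H(dist):
    """Shannon entropy in bits (zero entries contribute nothing)."""
    return -sum(p * math.log2(p) for p in dist if p > 0)

def subset_sum_coupling(d, s):
    """Brute-force search: if some subset J of d sums to s, return the induced
    deterministic coupling of P = (d_i/D) and Q = (s/D, 1 - s/D), else None."""
    D = sum(d)
    for r in range(len(d) + 1):
        for J in combinations(range(len(d)), r):
            if sum(d[j] for j in J) == s:
                # exactly one nonzero entry per row: column 0 iff the row is in J
                return [[d[i] / D, 0.0] if i in J else [0.0, d[i] / D]
                        for i in range(len(d))]
    return None

d, s = [3, 1, 4, 2], 5                 # e.g. the subset {3, 2} sums to 5
S = subset_sum_coupling(d, s)
P = [x / sum(d) for x in d]
flat = [p for row in S for p in row]
print(H(flat), H(P))                   # equal: the coupling attains H(P)
```

The row sums of the returned matrix are $P$ and the column sums are $(s/D,\,1-s/D)$, so the instance has a solution exactly when a coupling of entropy $H(P)$ exists.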
A similar situation arises with the decision version of this problem: Given $P$ and $Q$ and a threshold $h$, is there a distribution $S\in{\mathcal C}(P,Q)$ with entropy $H(S)\leq h$? Whether this problem belongs to NP is another interesting question (which we shall not be able to answer here). The trouble with these computational problems is that $\mathbb{R}$-valued functions are involved. Verifying, for example, that $H(S)\leq h$ might not be as computationally trivial as it might seem because the numbers involved are in general irrational. We shall not go into these details further; we mention instead one closely related problem which has been studied in the literature: \displayproblem {Sqrt sum} {Positive integers $d_1,\ldots,d_n$, and $k$.} {Decide whether $\sum_{i=1}^n \sqrt{d_i} \leq k$ ?} This problem, though ``conceptually simple'' and bearing a certain resemblance to the above decision version of the entropy minimization problem, is not known to be solvable in NP \cite{etessami} (it is solvable in PSPACE). \subsection{Optimization over ${\mathcal C}(P,m)$} \label{onemarginal} Minimization of the joint entropy $H(X,Y)$ over ${\mathcal C}(P,m)$ is trivial. The reason is that $H(X,Y)\geq H(P)$ with equality iff $Y$ deterministically depends on $X$, and so the solution is \emph{any} joint distribution having at most one nonzero entry in each row (the same is true for $H_\alpha$, $\alpha\geq0$). Since $H(X)$ is fixed, this also minimizes the conditional entropy $H(Y|X)$. The other two optimization problems considered so far, minimization of $H(X|Y)$ and maximization of $I(X;Y)$, are still equivalent because $I(X;Y)=H(X)-H(X|Y)$, but they turn out to be much harder. Therefore, in the following we shall consider only the maximization of $I(X;Y)$.
\par Let \textsc{Optimal channel} be the following computational problem: Given $P=(p_1,\ldots,p_n)$ and $m$ (with $p_i\in\mathbb{Q}, m\in\mathbb{N}$), find the distribution $S\in{\mathcal C}(P,m)$ which maximizes the mutual information. This problem is the reverse of the channel capacity in the sense that now the input distribution (the distribution of the source) is fixed, and the maximization is over the conditional distributions. In other words, given a source, we are asking for the channel with a given number of outputs which has the largest mutual information. Since the mutual information is convex in the conditional distribution \cite{cover}, this is again a convex maximization problem. \par We describe next the well-known \textsc{Partition} (or \textsc{Number partitioning}) problem \cite{garey}. \displayproblem {Partition} {Positive integers $d_1,\ldots,d_n$.} {Is there a partition of $\{d_1,\ldots,d_n\}$ into two subsets with equal sums?} This is clearly a special case of the \textsc{Subset sum}. It can be solved in pseudo-polynomial time by dynamic programming methods \cite{garey}. But the following closely related problem is much harder. \displayproblem {3-Partition} {Nonnegative integers $d_1,\ldots,d_{3m}$ and $k$ with $k/4<d_j<k/2$ and $\sum_{j} d_j = mk$.} {Is there a partition of $\{1,\ldots,3m\}$ into $m$ subsets $J_1,\ldots,J_m$ (disjoint and covering $\{1,\ldots,3m\}$) such that $\sum_{j\in J_r} d_j$ are all equal? (The sums are necessarily $k$ and every $J_i$ has $3$ elements.)} This problem is NP-complete in the strong sense \cite{garey}, i.e., no pseudo-polynomial time algorithm for it exists unless P=NP. \begin{theorem} \label{channel} \textsc{Optimal channel} is NP-hard. \end{theorem} \begin{proof} We prove the claim by reducing 3-\textsc{Partition} to \textsc{Optimal channel}. Let there be given an instance of the 3-\textsc{Partition} problem as described above, and let $p_i=d_i/D$, where $D=\sum_i d_i$. 
Deciding whether there exists a partition with the described properties is equivalent to deciding whether there is a matrix $C\in\mathcal{C}(P,m)$ with the other marginal $Q$ being uniform and $C$ having at most one nonzero entry in every row (i.e., $Y$ deterministically depending on $X$). This, on the other hand, happens if and only if there is a distribution $C\in\mathcal{C}(P,m)$ with mutual information equal to $H(Q)=\log m$, which is by \eqref{ineqI} an upper bound on $I_{X;Y}$ over $\mathcal{C}(P,m)$. The distribution $C$ would therefore necessarily be the maximizer of $I_{X;Y}$. To conclude, if we could solve the \textsc{Optimal channel} problem with instance $(p_1,\ldots,p_{3m};m)$, we could easily decide whether the maximizer is such that it has at most one nonzero entry in every row, thereby solving the original instance of the 3-\textsc{Partition} problem. \end{proof} \par Note that the problem remains NP-hard even when the number of channel outputs ($m$) is fixed in advance and is not a part of the input instance. For example, maximization of $I_{X;Y}$ over ${\mathcal C}(P,2)$ is essentially equivalent to the \textsc{Partition} problem. \par It is easy to see that the transformation in the proof of Theorem \ref{channel} is in fact \emph{pseudo-polynomial} \cite{garey}, which implies that \textsc{Optimal channel} is strongly NP-hard and, unless P=NP, has no pseudo-polynomial time algorithm. \subsection{Some comments and generalizations} \label{conceq} \subsubsection{Entropy minimization} \par Entropy minimization, taken in the broadest sense, is a very important problem. Watanabe \cite{pattern} has shown, for example, that many algorithms for clustering and pattern recognition can be characterized as suitably defined entropy minimization problems. \par A much more familiar problem in information theory is that of entropy maximization.
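The pseudo-polynomial dynamic program for \textsc{Partition} mentioned earlier can be sketched in a few lines (a standard reachable-subset-sums table; names ours). No analogous algorithm exists for 3-\textsc{Partition} unless P=NP, since that problem is NP-complete in the strong sense:

```python
def partition(d):
    """Pseudo-polynomial dynamic program for PARTITION: decide whether the
    multiset d splits into two subsets with equal sums. Runs in O(n * sum(d)),
    polynomial in the magnitude (not the bit length) of the input."""
    total = sum(d)
    if total % 2:
        return False
    target = total // 2
    reachable = {0}                                   # achievable subset sums
    for x in d:
        reachable |= {r + x for r in reachable if r + x <= target}
    return target in reachable

print(partition([3, 1, 1, 2, 2, 1]))   # True: e.g. {3, 2} vs {1, 1, 2, 1}
print(partition([1, 2, 5]))            # False
```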
The so-called \emph{Maximum entropy principle} formulated by Jaynes \cite{jaynes} states that, among all proba\-bility distributions satisfying certain constraints (expressing our knowledge about the system), one should pick the one with maximum entropy. It has been recognized by Jaynes, as well as many other researchers, that this choice gives the least biased, the most objective distribution consistent with the information one possesses about the system. Consequently, the problem of maximizing entropy under constraints has been thoroughly studied (see, e.g., \cite{harr,kapur1}). It has also been argued \cite{kapur2,yuan}, however, that minimum entropy distributions can be of as much interest as maximum entropy distributions. The MinMax information measure, for example, has been introduced in \cite{kapur2} as a measure of the amount of information contained in a given set of constraints, and it is based both on maximum and minimum entropy distributions. \par One could formalize the problem of entropy minimization as follows: Given a polytope (by a system of inequalities with rational coefficients, say) in the set of probability distributions, find the distribution $S^*$ which minimizes the entropy functional $H$. (If the coefficients are rational, then all the vertices are rational, i.e., have rational coordinates. Therefore, the minimum entropy distribution has finite description and is well-defined as an output of a computational problem.) This problem is strongly NP-hard and remains such over transportation polytopes, as established above. \subsubsection{R\'enyi entropy minimization} The problem of minimization of the R\'enyi entropies $H_\alpha$ over arbitrary polytopes is also strongly NP-hard, for any $\alpha\geq0$. Note that, for $\alpha>1$, this problem is equivalent to the maximization of the $\ell^\alpha$ norm (see also \cite{maxnorm,maxnorm2} for different proofs of the NP-hardness of norm maximization). 
Interestingly, however, the minimization of $H_\infty$ is polynomial-time solvable; it is equivalent to the maximization of the $\ell^\infty$ norm \cite{maxnorm}. For $\alpha<1$, the minimization of R\'enyi entropy is equivalent to the minimization of $\ell^\alpha$ (which is not a norm in the strict sense), a problem arising in compressed sensing \cite{minnorm}. \par Hence, as we have seen throughout this section, various problems from computational complexity theory can be reformulated as information-theoretic optimization problems. (Observe also the similarity of the \textsc{Sqrt sum} and the minimization of R\'enyi entropy of order $1/2$.) \subsubsection{Other information measures} \par Maximization of mutual information is a very important problem in the general context. The so-called Maximum mutual information criterion has found many applications, e.g., for feature selection \cite{battiti} and the design of classifiers \cite{deng}. Another familiar example is that of the capacity of a communication channel which is defined precisely as the maximum of the mutual information between the input and the output of a channel. \par We have illustrated the general intractability of the problem of maximization of $I_{X;Y}$ by exhibiting two simple classes of polytopes over which the problem is strongly NP-hard (and we have argued that the same holds for the conditional entropy). \par We also mention here one possible generalization of this problem -- maximization of information divergence. Namely, since for $S\in{\mathcal C}(P,Q)$: \begin{equation} I_{X;Y}(S)=D(S||P\times Q), \end{equation} one can naturally consider the more general problem of maximization of $D(S||T)$ when $S$ belongs to some convex region and $T$ is fixed. 
Formally, let \textsc{Information divergence maximization} be the following computational problem: Given a rational convex polytope $\Pi$ in the set of probability distributions, and a distribution $T$, find the distribution $S\in\Pi$ which maximizes $D(\cdot||T)$. This is again a convex maximization problem because $D(S||T)$ is strictly convex in $S$ \cite{csiszar}. \begin{corollary} \textsc{Information divergence maximization} is NP-hard. \end{corollary} \par Note that the reverse problem, namely the minimization of information divergence, defines an information projection of $T$ onto the region $\Pi$ \cite{csiszar}. \subsubsection{Measures of statistical dependence} \label{renyiax} We conclude this section with one more generalization of the problem of maximization of mutual information. Namely, this problem can also be seen as a statistical problem of expressing the largest possible dependence between two given random variables. \par Consider the following statistical scenario. A system is described by two random variables (taking values in $\mathbb{N}$) whose joint distribution is unknown; only some constraints that it must obey are given. The set of all distributions satisfying these constraints is usually called a statistical model. \begin{example} Suppose we have two correlated information sources obtained by independent drawings from a discrete bivariate probability distribution, and suppose we only have access to individual streams of symbols (i.e., streams of symbols from either one of the sources, but not from both simultaneously) and can observe the relative frequencies of the symbols in each of the streams. We therefore ``know'' the probability distributions of both sources (say $P$ and $Q$), but we do not know how correlated they are. Then the ``model'' for this joint source would be ${\mathcal C}(P,Q)$. In the absence of any additional information, we must assume that some $S\in{\mathcal C}(P,Q)$ is the ``true'' distribution of the source.
\end{example} \par Given such a model, we may ask the following question: What is the largest possible dependence of the two random variables? How correlated can they possibly be? This question can be made precise once a dependence measure is specified, and this is done next. \par A. R\'enyi \cite{renyidep} has formalized the notion of probabilistic dependence by presenting axioms which a ``good'' dependence measure $\rho$ should satisfy. These axioms, adapted for discrete random variables, are listed below. \begin{enumerate}% \item[(A)] $\rho(X,Y)$ is defined for any two random variables $X$, $Y$, neither of which is constant with probability $1$.% \item[(B)] $0 \leq \rho(X,Y) \leq 1$.% \item[(C)] $\rho(X,Y) = \rho(Y,X)$.% \item[(D)] $\rho(X,Y) = 0$ iff $X$ and $Y$ are independent.% \item[(E)] $\rho(X,Y) = 1$ iff $X=f(Y)$ or $Y=g(X)$.% \item[(F)] If $f$ and $g$ are injective functions, then $\rho(f(X),g(Y)) = \rho(X,Y)$.% \end{enumerate}% Actually, R\'enyi considered axiom (E) to be too restrictive and demanded only the ``if part''. It has been argued subsequently \cite{bell}, however, that this is a substantial weakening. We shall find it convenient to consider the stronger axiom given above. As an example of a good measure of dependence, one could take precisely the mutual information; its normalized variant $I(X;Y)/\min\{H(X),H(Y)\}$ satisfies all the above axioms. \par We can now formalize the question asked above. Namely, let \textsc{maximal $\rho$--dependence} be the following problem: Given two probability distributions $P=(p_1,\ldots,p_n)$ and $Q=(q_1,\ldots,q_m)$, find the distribution $S\in{\mathcal C}(P,Q)$ which maximizes $\rho$. The proof of the following claim is identical to the one given for mutual information (entropy) in Section \ref{bothmarginals} and we shall therefore omit it. \begin{theorem} Let $\rho$ be a measure of dependence satisfying R\'enyi's axioms. Then \textsc{maximal $\rho$--dependence} is NP-hard.
\end{theorem} \par The intractability of the problem over more general statistical models is now a simple consequence. \IEEEtriggeratref{12}
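The normalized mutual information $I(X;Y)/\min\{H(X),H(Y)\}$ mentioned above can be checked against axioms (D) and (E) on toy joint distributions. A minimal sketch (names and example distributions are ours; entropies in bits):

```python
import math

def H(dist):
    """Shannon entropy in bits (zero entries contribute nothing)."""
    return -sum(p * math.log2(p) for p in dist if p > 0)

def rho(S):
    """Normalized mutual information I(X;Y)/min(H(X), H(Y)),
    computed from a joint distribution given as a matrix S."""
    PX = [sum(row) for row in S]
    PY = [sum(col) for col in zip(*S)]
    I = H(PX) + H(PY) - H([p for row in S for p in row])
    return I / min(H(PX), H(PY))

indep = [[0.2 * 0.5, 0.2 * 0.5],   # product distribution: X, Y independent
         [0.8 * 0.5, 0.8 * 0.5]]
determ = [[0.3, 0.0],              # Y a (here bijective) function of X
          [0.0, 0.7]]
print(rho(indep))    # ~0 (axiom D)
print(rho(determ))   # 1.0 (axiom E)
```

Maximizing this quantity over ${\mathcal C}(P,Q)$ is precisely an instance of \textsc{maximal $\rho$--dependence}, and hence NP-hard by the theorem above.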
\section*{Supplementary information} \subsection*{Initial state preparation} Our experiment began with the preparation of a unity-filling rubidium-87 Mott insulator in a two-dimensional square optical lattice with a lattice spacing of $a_{\mathrm{lat}}=\SI{532}{nm}$~\cite{sherson2010}. The typically prepared system consists of 250 atoms, and we set the initial depth of the two lattices in the atomic plane to $40\,E_{\mathrm{r}}$, where $E_{\mathrm{r}}=h^{2}/8ma^{2}_{\mathrm{lat}}$ is the recoil energy, $h$ the Planck constant and $m$ the atomic mass. At such a lattice depth, virtually no tunneling takes place, and thus the prepared state is separable. We proceeded by removing all the atoms on the odd lattice sites along the $x$ axis, thereby preparing a charge-density wave (CDW) along one direction (see schematic in Fig.~1 of the main text). To do so, we used a spatially modulated laser beam with $\sigma^{-}$ polarization and a wavelength of \SI{787.55}{nm}, which induces a differential light shift of $h \times \SI{10}{kHz}$ between the two hyperfine states $\ket{c} = \ket{F=1, m_{F}=-1}$ and $\ket{d} = \ket{F=2, m_{F}=-2}$~\cite{weitenberg2011b}. We then applied a microwave sweep to transfer the illuminated atoms to the hyperfine state $\ket{d}$ and removed them by shining resonant light on the cycling transition of the $\mathrm{D}_{2}$ line. The initial state fidelity is characterized by the imbalance between the atom numbers $N_{\mathrm{e}}$ and $N_{\mathrm{o}}$ on even and odd lattice sites, $\mathcal{I} = (N_{\mathrm{e}}-N_{\mathrm{o}})/(N_{\mathrm{e}}+N_{\mathrm{o}}) = 0.91(1)$, and the remaining atom number is $N=124(12)$. To prepare an admixture of the two hyperfine states $\ket{d}$ and $\ket{c}$, we afterwards applied a resonant microwave pulse of a certain length, which generates a state $\ket{\Psi}_{\textit{\textbf{i}}} = \sqrt{1-\eta}\,\ket{d}_{\textit{\textbf{i}}}+\sqrt{\eta}\,\ket{c}_{\textit{\textbf{i}}}$ in each lattice site.
After less than one tunneling time, inhomogeneities due to the disorder potential will lead to dephasing between the two spin states, and hence the whole system can be treated as a statistical spin mixture with a fraction $\eta$ in the $\ket{c}$ state. \subsection*{Disorder potential} After the preparation of a CDW, we quenched the system by ramping up a projected disorder potential and lowering the depth of the in-plane lattices from $40\, E_{\mathrm{r}}$ to $12\,E_{\mathrm{r}}$ in less than \SI{5}{ms}. The disorder potential is generated by the spatial modulation of a laser beam using a digital micromirror device (DMD), such that each lattice site of our 2D system features an individually programmable light shift~\cite{choi2016}. The DMD consists of a $1024 \times 768$ micromirror array with a $\SI{13.7}{\micro \meter}$ micromirror pitch, and approximately $7 \times 7$ of these mirrors oversample the point spread function (Gaussian with $\sigma=0.48(1)\, a_{\mathrm{lat}}$) of our system. To image the disorder onto the atoms we use a high-resolution objective with a numerical aperture of $\mathrm{NA}=0.68$~\cite{sherson2010}. A pseudorandom number generator is used to produce a 2D random pattern, which is chosen to be different for every experimental realization. We resolved the microwave resonances spatially, which is equivalent to extracting the local light shift~\cite{choi2016}. The histogram of all the local shifts displays a Gaussian distribution with full width at half maximum $\Delta$, which characterizes the strength of the disorder. Our imaging system introduces a finite correlation into the disorder potential due to its finite resolution, for which we measure a correlation length of $0.63(1)\,a_{\mathrm{lat}}$~\cite{choi2016}. The disorder potential can locally modify the Bose-Hubbard tunneling parameter $J$, in contrast to the bare lattice case, but in our parameter regime we expect this effect to be reasonably small.
The disorder strength is much smaller than the lattice depth ($\Delta\approx 0.02 \, V_{\mathrm{lat}}$), and the modification of the tunneling $\delta J$ will be well below $0.08J$. The disorder beam is tuned to the so-called `tune-out' wavelength of the $\ket{F=1, m_{F}=-1}$ state, such that species-dependent potentials can be tailored~\cite{Leblanc2007}. This allows for tuning to a configuration where the $\ket{c}$ component experiences a vanishing light shift, while the light shift for the $\ket{d}$ state leads to an attractive potential. Aside from the programmed on-site disorder potential $\delta_{i}$, all atoms are equally sensitive to an overall harmonic trapping potential $V_{\textit{\textbf{i}}} = m a^{2}_{\mathrm{lat}} (\omega^{2}_{x} x^{2}_{\textit{\textbf{i}}} + \omega^{2}_{y} y^{2}_{\textit{\textbf{i}}})/2 $ with frequencies $(\omega_{x},\omega_{y}) = 2 \pi \times (51(2),55(2))$Hz. \subsection*{Measuring the occupation number} At the end of each run we measure the atomic occupation in each lattice site. We first freeze the tunneling motion by increasing the lattice depth to $60\,E_{\mathrm{r}}$ in less than half a millisecond. To selectively measure only one of the two hyperfine components, we then remove all the atoms in the state that we do not wish to measure. To image the $\ket{c}$ state, we push the $\ket{d}$-state atoms using a resonant $\mathrm{D}_{2}$ light pulse, while for detecting the $\ket{d}$ state, we apply a microwave sweep to swap the two hyperfine spin states before the optical push-out pulse, thus removing the atoms which were originally in the $\ket{c}$ state. After the state selection, all lattices are ramped to their maximum depth and an optical molasses is used to scatter fluorescence photons and simultaneously cool the atoms. We expose an EMCCD camera for \SI{1}{s}, thereby obtaining single-site-resolved images of the parity-projected density distribution~\cite{sherson2010}. 
\begin{figure} \centering \includegraphics{fig-A1.pdf} \caption{ \label{fig:S1} \textbf{Atom loss and doublon formation.} The purple points show the normalized total atom number $n$ as a function of holding time. The purple line indicates a linear fit to the decay, with a typical time of $3300(300)\,\tau$. In blue we show the normalized parity-projected atom number $n_{\mathrm{CDW}}$ for the CDW evolution as a function of evolution time. The inset shows in red the estimated doublon fraction $p_{\mathrm{d}}$, obtained from the subtraction of the plotted normalized atom numbers.} \end{figure} \subsection*{Atom loss and doublon generation} An estimate of the total atom number in the system cannot be directly obtained from the in-situ fluorescence imaging, since parity projection prevents us from distinguishing empty lattice sites from doubly-occupied ones. When discarding local information, however, we can circumvent this issue. Performing a short time of flight in the 2D plane right before imaging dilutes the atom density, such that essentially no doubly-occupied sites remain in the final atomic distribution. We used this technique to measure the atom number of an initially prepared Mott insulator by turning both in-plane lattices off for \SI{1}{ms}. By measuring the total atom number $N(t)$ for different holding times, we were able to estimate the atom losses. This is shown in Fig. S1, in which the purple points represent the normalized atom number, $n(t)=N(t)/N(0)$, and the purple line is a linear fit with an atom number decay of $3300(300)\,\tau$. This means that after $ 600 \,\tau$, approximately $17\,\% $ of the initially prepared atoms have been lost, for which the main loss mechanism is induced by technical fluctuations of the lattice-beam intensities~\cite{choi2016}. We estimate the photon scattering rate from the disordered potential to be below $\gamma = 7 \cdot 10^{-5}\,\tau^{-1} $. 
Though our atom loss is comparable to that of previous work in other quantum-gas experiments~\cite{lueschen2017a}, the measured imbalance decay seems little affected by it. By comparing the parity-projection-free atom number with the one obtained from direct in-situ measurements of the CDW relaxation (Fig. 2 in the main text), we can additionally estimate the fraction of dynamically generated doublons even for long times. We define the doublon fraction as $p_{\mathrm{d}}(t)=2 \cdot N_{\mathrm{dou}}(t)/N(t)=(N(t)-N_{\mathrm{CDW}}(t))/N(t)=(n(t)-n_{\mathrm{CDW}}(t))/n(t)$, where $N_{\mathrm{dou}}(t)$ is the number of doublons, and $N_{\mathrm{CDW}}(t)$ and $n_{\mathrm{CDW}}(t)$ are respectively the absolute and normalized parity-projected atom number. For the data measured at $\Delta=28\,J$, we find that the doublon fraction remains approximately constant after the fast initial formation (see Fig. S1). We did not take into account the effect of triply-occupied sites, which at the current experimental settings is estimated to be negligible. \begin{figure*} \centering \includegraphics{fig-A2.pdf} \caption{ \label{fig:S2} \textbf{Results of the dynamics obtained by exact diagonalization.} On the left we plot the evolution of the imbalance for $\Delta=25\,J$ for a non-interacting ($U=0$) and an interacting ($U=25\,J$) case, showing the slow relaxation when interactions are present. The right panel shows the evolution of the doublon fraction, which essentially saturates after a very fast initial formation.} \end{figure*} \subsection*{Numerical simulations} To gain more insight into the slow relaxation observed for the dirty-component imbalance dynamics (Fig. 2 in the main text), we have performed numerical simulations based on exact diagonalization of a small disordered Bose-Hubbard system, for which we used the QuSpin package~\cite{Quspin}.
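The exact-diagonalization workflow can be illustrated on a much smaller toy system than the $2\times6$ ladder treated with QuSpin. The following plain-NumPy sketch (system size, disorder strength and all parameter values are ours, chosen only for illustration, and this is not the setup used for the actual computation) builds the Fock basis of a four-site disordered Bose-Hubbard chain, diagonalizes the Hamiltonian, and evolves a CDW initial state to obtain the even-odd imbalance:

```python
import itertools
import numpy as np

# Toy exact diagonalization of a 1D disordered Bose-Hubbard chain.
L_sites, N_bosons = 4, 2
J, U = 1.0, 25.0                        # hopping and on-site interaction (units of J)
rng = np.random.default_rng(0)
delta = rng.normal(0.0, 10.0, L_sites)  # toy on-site disorder (values are ours)

# Fock basis: all occupation tuples with N_bosons particles on L_sites sites.
basis = [s for s in itertools.product(range(N_bosons + 1), repeat=L_sites)
         if sum(s) == N_bosons]
index = {s: i for i, s in enumerate(basis)}
dim = len(basis)

Hmat = np.zeros((dim, dim))
for i, s in enumerate(basis):
    # diagonal part: interaction U/2 n(n-1) plus disorder delta_k n_k
    Hmat[i, i] = sum(0.5 * U * n * (n - 1) + delta[k] * n
                     for k, n in enumerate(s))
    # hopping -J (b_{k+1}^dag b_k + h.c.) on an open chain
    for k in range(L_sites - 1):
        if s[k] > 0:
            t = list(s); t[k] -= 1; t[k + 1] += 1
            j = index[tuple(t)]
            amp = -J * np.sqrt(s[k] * (s[k + 1] + 1))
            Hmat[i, j] += amp
            Hmat[j, i] += amp

# CDW initial state |1,0,1,0>, evolved via the eigendecomposition of H.
psi0 = np.zeros(dim)
psi0[index[(1, 0, 1, 0)]] = 1.0
w, V = np.linalg.eigh(Hmat)

def imbalance(t):
    """Even-odd imbalance I = (N_e - N_o)/(N_e + N_o) at time t (units of 1/J)."""
    psi = V @ (np.exp(-1j * w * t) * (V.conj().T @ psi0))
    occ = [sum(abs(psi[i]) ** 2 * s[k] for i, s in enumerate(basis))
           for k in range(L_sites)]
    n_e, n_o = sum(occ[0::2]), sum(occ[1::2])
    return (n_e - n_o) / (n_e + n_o)

print(imbalance(0.0))   # close to 1: all atoms start on even sites
```

The same loop structure scales to larger systems, at the cost of an exponentially growing basis, which is what limits such simulations to a handful of sites and particles.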
The considered system is a ladder of $2 \times 6$ lattice sites with periodic boundary conditions, populated with 5 particles in a CDW-like pattern. We choose the system parameters close to the experimental ones, with a disorder distribution given by a Gaussian with full width at half maximum of $\Delta=25\,J$, and we have considered both a non-interacting ($U = 0$) and an interacting case ($U = 25\,J$). Without interactions, the imbalance dynamics shows a very fast relaxation to $\mathcal{I} = 0.7$, which takes less than $10\,\tau$ (see Fig. S2). In contrast, for the interacting system a steady state is only reached after almost $100\,\tau$, and one can distinguish a very fast initial decay to $\mathcal{I} = 0.6$ within a few tunneling times from a much slower relaxation afterwards, quite similar to our experimental findings. Interestingly, such a slow relaxation is not observed in one-dimensional numerical simulations with the same interaction strength, not even for an intermediate disorder strength $\Delta$. The slow decay stops at a steady-state imbalance of $\mathcal{I} \approx 0.5$, which is notably higher than the one measured in our experiment ($\mathcal{I} \approx 0.15$). Such a discrepancy could be explained by finite-size effects due to the small number of sites and atoms in the simulation in comparison to the real system. From our simulations, we have also obtained the doublon-fraction dynamics, which clearly indicate a fast doublon formation during the initial time scale, followed by a much slower increase, also in a similar fashion to the experimental results. \begin{figure}[b] \centering \includegraphics{fig-A3.pdf} \caption{ \label{fig:S3}\textbf{Delocalization dynamics in a lin-lin plot.} Shown is the dynamics of the dirty-state imbalance $\mathcal{I}_{d}$ for four different bath sizes ($N_{c}=0$ in blue, $N_{c}=20$ in dark green, $N_{c}=40$ in purple and $N_{c}=90$ in red).
The dashed lines indicate exponential fits and the solid lines fits of an exponential with an offset. The horizontal dashed gray line indicates the typical statistical threshold at which the imbalance is compatible with zero. The error bars indicate one standard deviation of the mean.} \end{figure} \subsection*{Goodness of fit estimation} Here we provide a quantitative estimation of the goodness of the fits in Fig. 3 of the main text. We have calculated the reduced chi-square statistic $\chi^{2}_{\nu}$ for the single-exponential ($\mathcal{I}(t)=A_{1} e^{-t/t_{1}}$) and exponential-with-an-offset ($\mathcal{I}(t)=A_{1} e^{-t/t_{1}}+A_{2}$) fits for the four datasets (see Tab. 1). The values show that, while the single-exponential fit agrees well with the data for the two larger bath sizes, the agreement deteriorates as the bath size is reduced, as seen from the higher $\chi^{2}_{\nu}$ values for $N_{c}= 20$ and the bath-free case $N_{c}= 0$. In contrast, the fit of an exponential with an offset describes all four cases well. We also show Fig. 3 with linear axes in Fig. S3, including points (for the largest bath case) which are not included in the log-lin plot of the main text due to diverging error bars. \begin{table}[h] \begin{tabular}{|l|c|c|c|c|} \hline Clean atom number $N_{c}$ & 90 & 40 & 20 & 0 \\ \hline Single exponential $\chi^{2}_{\nu}$ & 0.88 & 1.14 & 5.12 & 9.05 \\ \hline Exponential with an offset $\chi^{2}_{\nu}$ & 0.96 & 0.99 & 0.99 & 1.6 \\ \hline \end{tabular} \caption{\textbf{Reduced chi-square statistics $\chi^{2}_{\nu}$ for the two different fits of Fig. 3 and Fig. S3.}} \end{table} \bigskip
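The reduced chi-square statistic used above is $\chi^{2}_{\nu} = \sum_i \left((y_i - f_i)/\sigma_i\right)^2/(N - k)$, with $N$ data points and $k$ fit parameters. A minimal sketch on synthetic data (all numbers are ours, for illustration only, not the measured values):

```python
import math

def reduced_chi_square(y, f, sigma, n_params):
    """chi^2_nu = sum(((y_i - f_i)/sigma_i)^2) / (N - n_params)."""
    chi2 = sum(((yi - fi) / si) ** 2 for yi, fi, si in zip(y, f, sigma))
    return chi2 / (len(y) - n_params)

# Synthetic imbalance-like data generated from A1*exp(-t/t1) + A2.
t = [0, 50, 100, 200, 400, 600]
A1, t1, A2 = 0.8, 120.0, 0.15
y = [A1 * math.exp(-ti / t1) + A2 for ti in t]
sigma = [0.02] * len(t)

fit_no_offset = [0.95 * math.exp(-ti / 180.0) for ti in t]  # 2 fit parameters
fit_offset = [A1 * math.exp(-ti / t1) + A2 for ti in t]     # 3 fit parameters

print(reduced_chi_square(y, fit_no_offset, sigma, 2))  # far above 1: misses the offset
print(reduced_chi_square(y, fit_offset, sigma, 3))     # essentially 0: matches the data
```

A value near 1 signals a statistically adequate fit given the quoted uncertainties, which is the criterion applied to the table above.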
\section{Introduction} Various physical phenomena are affected by drastic changes in the speed of sound, or more generally, in the speed of certain waves. The occurrence of these changes may be caused, for example, by the geometry of the problem. Another example is the propagation of waves in heterogeneous compressible solids. These waves can propagate with different velocities due to the local stiffness of the material. This work focuses on numerical schemes for all Mach number flows in both fluid dynamics and continuum mechanics. Phenomena of interest involve fluid flows and elastic materials whose deformations are investigated within a monolithic Eulerian framework. With this approach any material (gas, liquid or solid) can be described with the same system of conservation equations and a suitable general formulation of the constitutive law \cite{de2016cartesian,godunov2013elements,gorsse2014simple,plohr1988conservative,plohr1992conservative}. The classical Euler system used in fluid dynamics can be considered as a special case within this general framework. The challenges in constructing an all-speed scheme that can be applied in compressible as well as low Mach regimes are twofold. Firstly, the scheme has to ensure the correct numerical viscosity for all Mach number regimes. Explicit upwind schemes in particular are not suited for applications in weakly compressible regimes, since they suffer from excessive diffusion when the Mach number is small \cite{GuillardViozat1999,Klein1995}, leading to spurious solutions. Here, the excess of viscosity is observed with respect to fluctuations in the stress tensor which lead to both acoustic compression and elastic deformations. Having the correct numerical diffusion is an integral part of obtaining so-called asymptotic preserving schemes. It is well known that compressible flow equations formally converge to incompressible equations when the Mach number tends to zero, see e.g.
\cite{Dellacherie2010,KlainermanMajda1981,Schochet2005} and references therein. To obtain physically admissible solutions especially in the weakly compressible flow regime, the numerical scheme has to preserve these asymptotics. Secondly, explicit schemes have to obey a severe time step restriction to guarantee the robustness of the numerical scheme. The resulting small time steps lead to long CPU times, in particular if long time intervals are considered. As a side effect, all waves are resolved even though usually only the slow waves are of interest, and inaccuracies or a lack of resolution on fast longitudinal or acoustic waves would be acceptable. Therefore, the ability to use large time steps is desirable. This can be achieved with implicit-explicit (IMEX) schemes or fully implicit schemes. A non-exhaustive list of more recent IMEX schemes is for example given by \cite{AvgBerIolRus2019,BoscarinoRussoScandurra2018,BouchutFranckNavoret2020,ZeifangEtAl2020} and of implicit schemes \cite{AbbIolPup2017,AbbIolPup2019,BerthonKlingenbergZenk2018,viozat1997} as well as references therein. The issue of stability in connection with flux splittings used to construct IMEX schemes is addressed, for example, in \cite{SchNoe2015,ZakNoe2018}; it can be avoided using fully implicit schemes. However, IMEX and implicit schemes have to be constructed carefully to avoid the necessity of utilizing non-linear implicit solvers. The schemes we propose here are motivated by the work given in \cite{AbbIolPup2017} where a Jin-Xin relaxation method \cite{JinXin1995} is used to build a linear diffusive approximation to the original equations. In \cite{AbbIolPup2017}, a linearly fully implicit all-speed integration method for compressible materials is proposed, at the cost of introducing additional auxiliary variables that have to be updated within the scheme.
This scheme was found to be accurate in computing steady state solutions as well as in approximating material waves in various Mach regimes and different materials. It showed consistent improvement in approximating material waves at slow velocities with respect to explicit schemes. However, it proved to be quite computationally costly due to the complexity of the coupled linear system that needs to be solved implicitly. Having to store an increased number of variables is in particular problematic for large scale or multi-dimensional simulations. Here, we overcome the need to solve for auxiliary variables while preserving the properties of the linearly implicit scheme proposed in \cite{AbbIolPup2017}. To achieve this, we split the stiff relaxation source terms from the fluxes and then reformulate the homogeneous part in an elliptic form. This technique is also used in the context of constructing all-speed schemes for the Euler equations \cite{CordierDegondKumbaro2012,NoelleEtAl2014,ThoZenkPupKB2020}. This procedure yields a decoupled symmetric linear implicit system that is easy to implement and can be solved efficiently using standard linear iterative or direct solvers. To reduce the numerical diffusion of the first order scheme, we extend the time semi-discrete scheme to second order. In \cite{CoulFraHelRatSon2019} a Crank-Nicholson scheme \cite{CraNich1947} was used to obtain an implicit second order relaxation scheme for compressible flow only. Here, we use a second order stiffly accurate diagonally implicit Runge Kutta integrator \cite{HaiWan1991} which has an additional computational stage but proved to be stable also for low Mach flows. In space, a convex combination of upwinding and centred differences depending on the local Mach number, as in \cite{AbbIolPup2017}, is used, which yields the correct numerical viscosity of the scheme, as validated by several numerical tests.
Especially for low Mach flows, the centred discretization of the pressure gradient is recovered, whose importance for the correct limit behaviour was discussed e.g. in \cite{Dellacherie2010,GuillardViozat1999}. The paper is organized as follows. In Section \ref{sec:NumScheme} we briefly revisit the Jin-Xin relaxation technique described in \cite{JinXin1995} for general systems of hyperbolic conservation laws. {\color{black}For simplicity and illustration purposes, all derivations are carried out in a one-dimensional framework.} Then the numerical scheme based on this relaxation model is developed. The derivation of the scheme is kept quite generic and can be applied to general hyperbolic conservation laws. First, a first order semi-discrete scheme based on backward Euler is constructed, which is then equipped with a suitable space discretization. To reduce the diffusiveness of the backward Euler scheme, a second order time integrator is employed in the subsequent section. The first and second order schemes are numerically validated in Section \ref{sec:Numerics} applied to the Eulerian model of nonlinear elasticity that is briefly described in Section \ref{sec:NonlinElast}. It is tested in different Mach number regimes and for different materials. The results are compared to a standard local Lax-Friedrichs scheme. Final conclusions are drawn in Section \ref{sec:conclusions}. \section{Construction of {\color{black} the implicit} Jin-Xin relaxation scheme} \label{sec:NumScheme} We consider a general system of hyperbolic conservation laws, for simplicity in one space direction, given by \begin{equation} \label{Sys:Hyperbolic_1D} \partial_t {\bm{\psi}} + \partial_x {\bm{f}}({\bm{\psi}}) = 0, \end{equation} where ${\bm{\psi}} \in \mathbb{R}^k$ denotes the state vector consisting of $k$ variables and $ {\bm{f}}({\bm{\psi}}) \in \mathbb{R}^k$ denotes a (non-linear) flux function.
Especially concerning the simulation of low Mach flows, stiff terms arise in the flux function which makes the resolution of those flow regimes quite challenging. In combination with the appearance of large characteristic speeds this can cause problems in the construction of numerical schemes applicable to those flows. Here, the aim is to obtain efficient and robust numerical procedures independently of the considered system of hyperbolic conservation laws. To achieve this, we use the Jin-Xin relaxation method \cite{JinXin1995} to build a linear diffusive approximation to the original equations \eqref{Sys:Hyperbolic_1D} called the \textit{relaxation model}. By doing this, we introduce dissipation errors while, due to the linearity of the relaxation model, the numerical solver does not require sophisticated or complicated non-linear solvers, which, especially in the presence of stiff gradients, would increase the complexity of the numerical scheme. Motivated by this, we briefly describe the relaxation method in the next section following the work of \cite{JinXin1995}. The numerical scheme is constructed subsequently. \subsection{The relaxation method} \label{sec:JXrelax} Following \cite{JinXin1995}, we introduce a vector of relaxation variables $ {\bm{v}}\in \mathbb{R}^k$ and consider the following relaxation system given by \begin{subequations} \label{Sys:Relaxation_1D} \begin{align} \partial_t {\bm{\psi}} + \partial_x {\bm{v}} &= 0, \label{eq:Relax_psi}\\ \partial_t {\bm{v}} + \AA^2 \partial_x {\bm{\psi}} &= - \frac{1}{\eta} \left({\bm{v}} - {\bm{f}}({\bm{\psi}})\right), \label{eq:Relax_v} \end{align} \end{subequations} where $\eta > 0$ denotes the relaxation rate and $\AA^2$ is a diagonal matrix with positive entries given by \begin{equation} \AA^2 = \text{diag}(a_1^2, \dots, a_k^2). \end{equation} The fluxes of the relaxation model are linear whereas the relaxation source term on the right hand side of \eqref{eq:Relax_v} is non-linear and stiff for small $\eta$.
Rewriting equation \eqref{eq:Relax_v}, using a Chapman-Enskog expansion of the relaxation variables for small $\eta$, we find \begin{equation} \label{eq:Relax_v_exp} {\bm{v}} = {\bm{f}}({\bm{\psi}}) - \eta \left(\AA^2 - \bm f^\prime({\bm{\psi}})^2\right) \partial_x {\bm{\psi}}, \end{equation} where $\bm f^\prime({\bm{\psi}}) = \nabla_{\bm{\psi}} {\bm{f}}({\bm{\psi}})$ denotes the Jacobian of the flux function $\bm f$. {\color{black} We have neglected the $\mathcal{O}(\eta^2)$ terms in the expansion \eqref{eq:Relax_v_exp}, since we are only interested in the first order diffusion terms in $\eta$. } Inserting \eqref{eq:Relax_v_exp} into \eqref{eq:Relax_psi}, we have \begin{equation} \label{Sys:Parabolic_Relax} \partial_t {\bm{\psi}} + \partial_x {\bm{f}}({\bm{\psi}}) = \eta \partial_x \left(\left(\AA^2 - {\bm{f}}^\prime({\bm{\psi}})^2\right)\partial_x {\bm{\psi}}\right). \end{equation} To obtain a diffusive approximation of the original system of equations \eqref{Sys:Hyperbolic_1D}, it has to be ensured that the diffusion term on the right hand side is {\color{black}non-negative}. This yields the so called sub-characteristic condition, that for all ${\bm{\psi}}$ it has to hold \begin{equation} \AA^2 - \bm f^\prime({\bm{\psi}})^2 \geq 0 \qquad \text{({\color{black}positive semi definite})}. \end{equation} Since we are considering hyperbolic equations, we can diagonalize the Jacobian using the basis of right eigenvectors $\bm R$ and, {\color{black} setting $\bm A$ as a constant approximation of the} characteristic speeds, write \begin{equation} \AA^2 - \bm f^\prime({\bm{\psi}})^2 = \bm R(\AA^2 - \bm \Lambda^2)\bm R^{-1}, \end{equation} where $\bm \Lambda$ is a diagonal matrix containing the characteristic speeds {\color{black}$\lambda_{j}, ~j=1,\dots,k$} of the original equations \eqref{Sys:Hyperbolic_1D} {\color{black}given by the eigenvalues of the Jacobian ${\bm{f}}^\prime({\bm{\psi}})$}.
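As a small hedged illustration (the sample states and values are our own, not taken from the text), the sub-characteristic condition can be checked numerically by comparing a frozen relaxation speed against the characteristic speeds sampled over the domain; for the 1D Euler equations these are $u$ and $u \pm c$:

```python
import numpy as np

# Hedged sketch: verify the sub-characteristic condition A^2 - f'(psi)^2 >= 0
# for the 1D Euler equations by comparing a frozen speed a to the
# characteristic speeds u - c, u, u + c sampled over the computational domain.
gamma = 1.4
rho = np.array([1.0, 0.9, 1.2, 1.05])
u = np.array([0.10, -0.20, 0.15, 0.05])
p = np.array([1.00, 1.10, 0.95, 1.02])

c = np.sqrt(gamma * p / rho)                 # sound speed per cell
speeds = np.stack([u - c, u, u + c])         # eigenvalues of f'(psi)
a = np.max(np.abs(speeds))                   # frozen relaxation speed

# Since f'(psi)^2 has eigenvalues lambda_j^2, the condition reduces to
# a^2 >= lambda_j^2 for every sampled state and every characteristic field.
subchar_ok = np.all(a**2 - speeds**2 >= 0.0)
```

Taking the maximum over all sampled states makes the condition hold trivially, at the price of some extra diffusion on the slower fields.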
Choosing {\color{black} \begin{equation} a_k = \underset{j=1,\dots,k}{\max}|\lambda_j({\bm{\psi}})| \end{equation}} for each entry of $\AA$, i.e. the maximum {\color{black} absolute value} of the characteristic speeds over the computational domain $\Omega$, fulfils the sub-characteristic condition, see also \cite{JinXin1995}. Furthermore, from \eqref{eq:Relax_v_exp} and \eqref{Sys:Parabolic_Relax}, we recover at leading order the original system, namely \begin{equation} \label{eq:Relax_equi_cont} \begin{cases} {\bm{v}} = {\bm{f}}({\bm{\psi}}), \\ \partial_t {\bm{\psi}} + \partial_x {\bm{f}}({\bm{\psi}}) = 0, \end{cases} \end{equation} also referred to as the relaxation limit. \subsection{Construction of the numerical scheme} In the original paper of Jin \& Xin \cite{JinXin1995}, the construction of an explicit upwind scheme was described. Therein they detailed two approaches to treat the relaxation source term, namely the \textit{relaxing} and \textit{relaxed} strategy. In \cite{AbbIolPup2017} a fully implicit relaxing scheme was introduced. A relaxing scheme is characterized by keeping the relaxation source term including the relaxation rate $\eta$ in the numerical scheme. This implies that a value has to be assigned to the relaxation rate $\eta$ a priori, which has to be chosen carefully to still obtain the correct relaxation limit. For further details, we refer to \cite{JinLev1996} on restrictions on how to choose $\eta$. Moreover, as is detailed in \cite{AbbIolPup2019}, there are additional restrictions on how to set $\eta$ in order to obtain the correct asymptotics in the low Mach limit when considering e.g. the Euler equations. The relaxing scheme is constructed by first defining the discretization of the fluxes with an upwind scheme and then integrating in time with a backward Euler scheme.
This leads to the following scheme \begin{align} \label{eq:RelaxingScheme} \begin{split} &\frac{{\bm{\psi}}_i^{n+1}-{\bm{\psi}}_i^n}{\Delta t} + \frac{1}{\Delta x} \left(\mathcal{V}_{i+1/2}({\bm{\psi}}^{n+1},{\bm{v}}^{n+1}) - \mathcal{V}_{i-1/2}({\bm{\psi}}^{n+1},{\bm{v}}^{n+1})\right) = 0 \\ &\frac{{\bm{v}}_i^{n+1} - {\bm{v}}_i^n}{\Delta t} + \frac{1}{\Delta x} \AA^2 \left(\mathcal{P}_{i+1/2}({\bm{\psi}}^{n+1},{\bm{v}}^{n+1}) - \mathcal{P}_{i-1/2}({\bm{\psi}}^{n+1},{\bm{v}}^{n+1})\right) \\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad= -\frac{1}{\eta}\left({\bm{v}}_i^{n+1} - {\bm{f}}({\bm{\psi}}^{n+1}_i)\right) \end{split} \end{align} where $\mathcal{V}$ and $\mathcal{P}$ are the numerical fluxes associated to $\partial_x {\bm{v}}$ and $\partial_x {\bm{\psi}}$ respectively. Since they are based on upwinding, they depend both on ${\bm{\psi}}^{n+1}$ and ${\bm{v}}^{n+1}$ and \eqref{eq:RelaxingScheme} is a fully coupled implicit system with $2k$ equations. This means the number of variables that need to be solved and updated has doubled with respect to the original problem. To obtain a linear implicit system the relaxation source term was linearised using a truncated Taylor expansion. This requires the knowledge of the Jacobian of the flux function which at times is difficult to compute or not available. In addition, due to the coupling of variables, extensions to higher dimensions are very inefficient leading to even larger implicit systems since the number of variables increases for each added space dimension. Therefore the aim of the numerical scheme that is presented in the next section is to reduce the numerical cost by completely eliminating the relaxation variables from the numerical scheme. Furthermore we want to pass directly to the relaxation limit $\eta \to 0$ constructing a so called \textit{relaxed} implicit numerical method. 
In this way we ensure that we reach the correct relaxation limit as well as the low Mach limit without the need of fixing $\eta$ a priori, and with no need to update the relaxation variables. \subsubsection{Fully implicit relaxed scheme} The numerical scheme is constructed by first deriving a time semi-discrete scheme upon which a suitable space discretization is applied. The time step is given by $\Delta t = t^{n+1} - t^n$. As we are interested in the limit $\eta \to 0$ to recover the original system \eqref{Sys:Hyperbolic_1D}, the relaxation source term on the right hand side of \eqref{eq:Relax_v} is stiff and is therefore discretized implicitly. We apply an operator splitting and {\color{black} consider the following relaxation subsystem} \begin{align} \begin{cases} {\bm{\psi}}^\star &= {\bm{\psi}}^n, \\[2pt] {\bm{v}}^{\star} &= {\bm{v}}^n - \frac{\Delta t}{\eta} \left({\bm{v}}^\star - {\bm{f}}( {\bm{\psi}}^\star)\right). \end{cases} \end{align} The latter equation can be rewritten and solved analytically, yielding \begin{equation} \label{eq:Stiff_relax_impl} {\bm{v}}^\star = \frac{\eta}{\eta + \Delta t} {\bm{v}}^n + \frac{\Delta t}{\eta + \Delta t} {\bm{f}}({\bm{\psi}}^\star), \end{equation} {\color{black} where ${\bm{\psi}}^\star$ and ${\bm{v}}^\star$ denote the state and relaxation variables after the relaxation process.} Considering now the limit $\eta \to 0$, we find ${\bm{v}}^\star = {\bm{f}}( {\bm{\psi}}^\star)$ which is consistent with the relaxation equilibrium solution \eqref{eq:Relax_equi_cont}. Next, we discretize the homogeneous part of system \eqref{Sys:Relaxation_1D}. The associated characteristic speeds are given by $\pm a_j, ~j=1,\dots,k$ and depend on the considered system of equations. Especially when considering systems with fast characteristic speeds, a severe time step restriction has to be enforced to guarantee stability when using an explicit scheme as done in \cite{JinXin1995}.
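The analytically solved relaxation step \eqref{eq:Stiff_relax_impl} and its limit $\eta \to 0$ can be checked directly; the following is a hedged sketch with made-up values and an example Burgers flux:

```python
import numpy as np

# Hedged sketch of the analytically solved relaxation step
# v* = eta/(eta+dt) v^n + dt/(eta+dt) f(psi*), and its limit eta -> 0.
f = lambda psi: 0.5 * psi**2            # example flux (Burgers), our choice
psi_star = np.array([0.3, -0.1, 0.7])   # state after the (trivial) transport step
v_n = np.array([1.0, 2.0, 3.0])         # relaxation variables before the step
dt = 0.01

def relax_step(v_n, psi_star, dt, eta):
    # Convex combination of the old relaxation variables and the flux
    # evaluated at the new state; eta = 0 gives the relaxed scheme.
    return eta / (eta + dt) * v_n + dt / (eta + dt) * f(psi_star)

v_relaxed = relax_step(v_n, psi_star, dt, eta=0.0)   # relaxed limit: v* = f(psi*)
```

Setting `eta=0.0` reproduces the relaxation equilibrium ${\bm{v}}^\star = {\bm{f}}({\bm{\psi}}^\star)$ exactly, which is what allows the relaxation variables to be eliminated from the scheme.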
Examples of fast characteristic speeds are sound waves in the Euler equations for weakly compressible flows, or shear and longitudinal waves in hyperelastic materials, see Section \ref{sec:NonlinElast}. For further details we refer to \cite{dBraIolMil2017} and references therein. Therefore, we choose a fully implicit discretization using {\color{black} Diagonally Implicit Runge Kutta (DIRK) methods. More precisely, we apply {\em stiffly accurate} DIRK schemes, whose choice is motivated by results given in \cite{DimPar2014,ParRus2005} where a connection between the asymptotic preserving property and the structure of the DIRK scheme is investigated. As proven in \cite{DimPar2014}, stiffly accurate DIRK schemes are also $L$ stable and are therefore in particular suited for low Mach number flows. For a system of equations \eqref{Sys:Hyperbolic_1D}, an implicit Runge Kutta method based on a diagonal Butcher tableau with $s$ stages \begin{center} \vskip3mm \renewcommand{\arraystretch}{1.25} \begin{tabular}{c|ccc} $c_1$ & $\alpha_{11}$ & & \\ $\vdots$ & $\vdots$ & $\ddots$ & \\ $c_s$ & $\alpha_{s1}$ & $\cdots$ & $\alpha_{ss}$ \\ \hline & $\beta_1$ & $\cdots$ & $\beta_s$ \end{tabular} \end{center} is given by \begin{align} \begin{cases} {\bm{\psi}}^{(j)} &= {\bm{\psi}}^n - \Delta t \sum_{l=1}^{j} \alpha_{jl} \partial_x {\bm{f}}({\bm{\psi}}^{(l)}),\\[2mm] {\bm{\psi}}^{n+1} &= {\bm{\psi}}^n - \Delta t \sum_{j=1}^{s} \beta_j \partial_x {\bm{f}}({\bm{\psi}}^{(j)}). \\ \end{cases} \end{align} Thereby $\alpha_{jl}$ and $\beta_j$, $j = 1, \dots, s$, $l = 1, \dots, j$, denote the weights in the quadrature rule for the stages and the final update respectively and $c_j$ the corresponding nodes in the considered time interval. For stiffly accurate DIRK schemes the weights $\beta_j$ coincide with the weights $\alpha_{sj}$ of the last stage.
} {\color{black} Applying this formalism with $s$ stages on the homogeneous part of \eqref{Sys:Relaxation_1D} leads to the following time semi-discrete scheme for $j = 1, \dots, s$ given by \begin{subequations} \label{eq:Hom_relax_impl} \begin{align} &\begin{cases} {\bm{\psi}}^{(j)} &= {\bm{\psi}}^n - \Delta t \sum_{l=1}^{j} \alpha_{jl} \partial_x {\bm{v}}^{(l)},\\[2mm] {\bm{v}}^{(j)} &= {\bm{v}}^n - \Delta t \sum_{l=1}^{j} \alpha_{jl} \AA^2 \partial_x {\bm{\psi}}^{(l)}, \end{cases} \label{eq:Hom_relax_impl_stages}\\[2mm] &\begin{cases} {\bm{\psi}}^{n+1} &= {\bm{\psi}}^n - \Delta t \sum_{j=1}^{s} \beta_j \partial_x {\bm{v}}^{(j)}, \\[2mm] {\bm{v}}^{n+1} &= {\bm{v}}^n - \Delta t \sum_{j=1}^s \beta_j \AA^2 \partial_x {\bm{\psi}}^{(j)}. \end{cases} \label{eq:Hom_relax_impl_update} \end{align} \end{subequations} } Each stage in \eqref{eq:Hom_relax_impl_stages} forms a pairwise coupled linear system consisting of $2k$ variables that have to be solved implicitly. We remark that $\bm A$ is a diagonal matrix with constant entries meaning the characteristics are frozen during one time step $(t^n,t^{n+1})$. To eliminate the relaxation variables ${\bm{v}}$, we insert ${\bm{v}}^{(j)}$ given by {\color{black}the second equation of \eqref{eq:Hom_relax_impl_stages}} into the {\color{black}first equation of \eqref{eq:Hom_relax_impl_stages}} and solve for ${\bm{\psi}}^{(j)}$ only. The time semi-discrete scheme for the state variables ${\bm{\psi}}^{(j)}$ is then given as follows {\color{black}\begin{multline} \label{eq:Implicit_time_semi_1} {\bm{\psi}}^{(j)} - \Delta t^2 \alpha_{jj}^2 \AA^2 \partial_x^2 {\bm{\psi}}^{(j)} = \\ {\bm{\psi}}^n - \Delta t \alpha_{jj} \partial_x {\bm{v}}^n - \Delta t \sum_{l=1}^{j-1} \alpha_{jl} \partial_x {\bm{v}}^{(l)} + \Delta t^2 \alpha_{jj} \sum_{l=1}^{j-1} \alpha_{jl} \AA^2 \partial_x^2 {\bm{\psi}}^{(l)}.
\end{multline} At each stage, the relaxation source term is solved by using \eqref{eq:Stiff_relax_impl} leading to \begin{equation} {\bm{v}}^n = {\bm{f}}({\bm{\psi}}^n), \quad {\bm{v}}^{(j)} = {\bm{f}}({\bm{\psi}}^{(j)}). \end{equation} Thus, the obtained stage values obey an approximation to the original equations \eqref{Sys:Hyperbolic_1D}. A similar structure was obtained for a second order explicit scheme introduced by Jin \& Xin in \cite{JinXin1995}.} Using the fact that ${\bm{v}}^{(l)} = {\bm{f}}({\bm{\psi}}^{(l)}), ~l=1,\dots,j$ holds at relaxation equilibrium, we find {\color{black}\begin{multline} \label{eq:Implicit_time_semi_2} {\bm{\psi}}^{(j)} - \Delta t^2 \alpha_{jj}^2 \AA^2 \partial_x^2 {\bm{\psi}}^{(j)} \\ = {\bm{\psi}}^n - \Delta t \alpha_{jj} \partial_x {\bm{f}}({\bm{\psi}}^n) - \Delta t \sum_{l=1}^{j-1} \alpha_{jl} \partial_x {\bm{f}}({\bm{\psi}}^{(l)}) + \Delta t^2 \alpha_{jj} \sum_{l=1}^{j-1} \alpha_{jl} \AA^2 \partial_x^2 {\bm{\psi}}^{(l)}. \end{multline} The stages given in \eqref{eq:Implicit_time_semi_2} now form a decoupled linear implicit system consisting only of $k$ variables, thus half the number of variables compared to \eqref{eq:Hom_relax_impl_stages}. {\color{black} As standard for implicit Runge Kutta methods, the state variables at the new time step ${\bm{\psi}}^{n+1}$ are updated explicitly from the previously computed stages ${\bm{v}}^{(j)}, ~j=1,\dots,s$ in \eqref{eq:Hom_relax_impl_update}. Since in relaxation equilibrium ${\bm{v}}^{(j)} = {\bm{f}}({\bm{\psi}}^{(j)})$ holds, we obtain directly the solution of the relaxed system: \begin{equation} \label{eq:DIRK_final_update} {\bm{\psi}}^{n+1} = {\bm{\psi}}^n - \Delta t \sum_{j=1}^{s} \beta_j \partial_x {\bm{f}}({\bm{\psi}}^{(j)}). \end{equation} As a consequence, the update of the relaxation variables ${\bm{v}}^{n+1}$ in \eqref{eq:Hom_relax_impl_update} is redundant as we obtain immediately ${\bm{v}}^{n+1} = {\bm{f}}({\bm{\psi}}^{n+1})$ due to the relaxation process.
Therefore, in the implementation of the numerical scheme, storing the final update for the relaxation variables ${\bm{v}}^{n+1}$ can be neglected since they are given by the flux evaluation which can be computed on the fly. This halves the storage requirements with respect to the full relaxation scheme \cite{AbbIolPup2017}. Unlike for schemes directly based on the original equation \eqref{Sys:Hyperbolic_1D}, the final update \eqref{eq:DIRK_final_update} does not coincide with the last stage in \eqref{eq:Implicit_time_semi_2}. This additional step is necessary to obtain an accurate description of contact waves and corrects the diffusion on the slow waves. } {\color{black} Summarizing, the time semi-discrete scheme composed of the stages \eqref{eq:Implicit_time_semi_2} and the update \eqref{eq:DIRK_final_update}} is free of relaxation variables ${\bm{v}}$ and therefore we have obtained a scheme that depends only on the state variables ${\bm{\psi}}$ and the storing and updating of additional variables is avoided. To illustrate the scheme, we detail the first and second order time semi-discrete scheme based on a backward Euler scheme and second order method taken from \cite{HaiWan1991} which will be used to obtain the numerical results in Section \ref{sec:Numerics}. The associated Butcher tableaux are reported in \cref{tab:Butcher_DIRK}. Note that both methods are stiffly accurate diagonal implicit Runge Kutta (DIRK) methods with a single coefficient on the diagonal (SDIRK). \begin{table} \renewcommand{\arraystretch}{1.25} \centering \begin{subtable}[b]{0.3\textwidth} \centering \begin{tabular}{c|c} 1 & 1 \\ \hline & 1 \\ \end{tabular} \subcaption{Backward Euler scheme.} \end{subtable}\hfill \begin{subtable}[b]{0.65\textwidth} \centering \begin{tabular}{c|cc} $\gamma$ & $\gamma$ & 0 \\ 1 & $1-\gamma$ & $\gamma$ \\ \hline & $1-\gamma$ & $\gamma$ \\ \end{tabular} \subcaption{Second order method from \cite{HaiWan1991} p. 
106 with $\gamma = 1 - \frac{\sqrt{2}}{2}$.} \end{subtable} \caption{Butcher tableaux of the first and second order scheme.} \label{tab:Butcher_DIRK} \end{table} Following the above given procedure, the first order time semi-discrete scheme with $s=1, j=1, ~\alpha_{11} = 1, ~\beta_1 = 1$ is given by \begin{equation} \begin{cases} &{\bm{\psi}}^{(1)} - \Delta t^2 \AA^2 \partial_x^2 {\bm{\psi}}^{(1)} = {\bm{\psi}}^n - \Delta t \partial_x {\bm{f}}({\bm{\psi}}^n) \\[2mm] &{\bm{\psi}}^{n+1} = {\bm{\psi}}^n - \Delta t \partial_x {\bm{f}}({\bm{\psi}}^{(1)}) \end{cases} \end{equation} and the second order time semi-discrete scheme with $s=2, ~\alpha_{11} = \alpha_{22} =\gamma, ~\alpha_{21} = 1-\gamma, \beta_1 = \alpha_{21}, \beta_2 = \alpha_{22}$ is then given by \begin{equation} \begin{cases} &{\bm{\psi}}^{(1)} - \Delta t^2 ~\alpha_{11}^2 \AA^2 \partial_x^2 {\bm{\psi}}^{(1)} = {\bm{\psi}}^n - \Delta t ~\alpha_{11} \partial_x {\bm{f}}({\bm{\psi}}^n) \\[2mm] &{\bm{\psi}}^{(2)} - \Delta t^2 ~\alpha_{11}^2 \AA^2 \partial_x^2 {\bm{\psi}}^{(2)} = {\bm{\psi}}^n - \Delta t ~\alpha_{11} \partial_x {\bm{f}}({\bm{\psi}}^n) - \Delta t \alpha_{21} \partial_x {\bm{f}}({\bm{\psi}}^{(1)}) \\ &\hskip4.8cm + ~\Delta t^2 ~\alpha_{11} \alpha_{21} \AA^2 \partial_x^2 {\bm{\psi}}^{(1)} \\[2mm] &{\bm{\psi}}^{n+1} = {\bm{\psi}}^n - \Delta t \sum_{j=1}^{2} \alpha_{2,j} \partial_x {\bm{f}}({\bm{\psi}}^{(j)}). \end{cases} \end{equation} To further motivate the choice of L stable DIRK integrators over the one-stage well-known second order Crank-Nicholson (CN) method \cite{CraNich1947}, in the here considered context of low Mach number flows, we report the stability functions of the respective method, see also \cite{HaiWan1991} for more details. For a given $z \in \mathbb{C}$ the stability function $R(z)$ of the Crank-Nicholson method reads \begin{equation} R(z) = \frac{1 + 1/2 z}{1 - 1/2 z} \end{equation} which yields $R(z) \to -1$ in the limit $|z| \to \infty$. 
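This limiting behaviour, and the contrasting $L$ stable decay of the second order DIRK method discussed next, can be confirmed numerically; a small sketch evaluating both stability functions on the negative real axis (with $\gamma = 1 - \sqrt{2}/2$ as in the tableau above):

```python
import numpy as np

# Stability functions of Crank-Nicholson and of the two-stage second order
# SDIRK method with gamma = 1 - sqrt(2)/2, evaluated for a stiff mode.
gamma = 1.0 - np.sqrt(2.0) / 2.0

def R_cn(z):
    # Crank-Nicholson: R(z) = (1 + z/2) / (1 - z/2)
    return (1.0 + 0.5 * z) / (1.0 - 0.5 * z)

def R_dirk(z):
    # Second order SDIRK: R(z) = (1 + (1 - 2 gamma) z) / (1 - gamma z)^2
    return (1.0 + (1.0 - 2.0 * gamma) * z) / (1.0 - gamma * z) ** 2

z = -1.0e8   # a stiff mode: z = lambda * dt with |z| >> 1
# |R_cn(z)| stays near 1 (no damping of stiff oscillations), while
# R_dirk(z) tends to 0 (L stability).
```

The numbers confirm the qualitative statement: Crank-Nicholson merely flips the sign of stiff modes, whereas the SDIRK integrator damps them out.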
Therefore no damping in the oscillations occurs and the asymptotics towards the incompressible limit in the low Mach number regime are not preserved. The stability function of the second order DIRK method given in Table \ref{tab:Butcher_DIRK} is given as follows \begin{equation} R(z) = \frac{1 + (1- 2\gamma) z }{(1-\gamma z)^2} \end{equation} and yields $R(z) \to 0$ for $|z| \to \infty$ and is therefore L stable. Moreover, since the method is also stiffly accurate, the asymptotics in the singular Mach number limit are preserved which is fundamental in the context of our applications. Therefore we favour the second order DIRK integrator with two stages over the Crank-Nicholson scheme with only one stage. } \subsubsection{Space discretization} To obtain a fully discrete scheme, we consider the space discretization next. In general one is not restricted in {\color{black} the} choice of space discretization keeping in mind that the numerical diffusion should not increase when the characteristic speeds of the considered model tend to infinity as it is the case for example when the Mach number tends to zero in the Euler equations. Here, we choose the framework of finite volumes on a Cartesian grid. The computational domain $\Omega$ is divided into $N$ uniform cells $C_i = (x_{i+1/2},x_{i-1/2})$ with grid size $\Delta x$. As it is standard in the finite volume setting, we consider cell averages ${\bm{\psi}}_i^n$ defined at time $t^n$ by \begin{equation*} {\bm{\psi}}_i^n = \frac{1}{\Delta x} \int_{C_i} {\bm{\psi}}(x,t^n) dx. 
\end{equation*} For the second derivative on ${\bm{\psi}}$ and the flux derivatives we apply centred differences and obtain {\color{black}for the stages in \eqref{eq:Implicit_time_semi_2} the following fully discrete formulation \begin{multline} \label{eq:Implicit_fully_disc_stages_centered} {\bm{\psi}}_i^{(j)} - \frac{\Delta t^2}{\Delta x^2} \alpha_{jj}^2 \AA^2 \left( {\bm{\psi}}^{(j)}_{i+1} - 2 {\bm{\psi}}_i^{(j)} + {\bm{\psi}}_{i-1}^{(j)}\right) \\ = {\bm{\psi}}_i^n - \frac{\Delta t}{2 \Delta x} \alpha_{jj} \left({\bm{f}}_{i+1}^{n} - {\bm{f}}_{i-1}^n\right) - \frac{\Delta t}{2 \Delta x} \sum_{l=1}^{j-1} \alpha_{jl} \left({\bm{f}}_{i+1}^{(l)} - {\bm{f}}_{i-1}^{(l)}\right)\\ + \frac{\Delta t^2}{\Delta x^2} \alpha_{jj} \sum_{l=1}^{j-1} \alpha_{jl} \AA^2 \left( {\bm{\psi}}^{(l)}_{i+1} - 2 {\bm{\psi}}_i^{(l)} + {\bm{\psi}}_{i-1}^{(l)}\right), \end{multline}} {\color{black}where ${\bm{f}}^n_i = {\bm{f}}({\bm{\psi}}^n_i)$.} This choice is motivated especially for low Mach number flows as it yields the correct numerical diffusion in space due to centred fluxes \cite{GuillardViozat1999,Dellacherie2010}. To be able to apply the scheme also to compressible flows, obtaining a so called \textit{all-speed} scheme, a local Lax-Friedrichs flux can be used. In compressible regimes, all characteristic speeds are of the same order and the focus lies on the resolution of all waves. To obtain an all-speed scheme that can be used from compressible to weakly compressible flow regimes, we scale the diffusion by defining a function $g(M_{loc})$ based on the local Mach number $M_{loc}$ as introduced in \cite{AbbIolPup2017} to guarantee the correct numerical diffusion for all Mach regimes. In the following we define \begin{equation*} g(M_{loc}) = \begin{cases} {\color{black}\sin\left(\frac{\pi M_{loc}}{2}\right)} &\text{for } M_{loc} \in [0,1] \\ 1 &\text{for } M_{loc} > 1 \end{cases}. \end{equation*} Then we define the numerical fluxes as a convex combination of a centred and a local Lax-Friedrichs flux, where the convex parameter is given by $g(M_{loc}) \in [0,1]$.
{\color{black} It is given as follows \begin{align} \label{eq:Flux_convex} \begin{split} \bm f_{i+1/2} &=(1- g(M_{loc})) \frac{1}{2}\left(\bm f_i + \bm f_{i+1}\right) + g(M_{loc}) \left(\frac{1}{2}\left(\bm f_i + \bm f_{i+1}\right) - \frac{\lambda_{i+1/2}}{2}({\bm{\psi}}_{i+1} - {\bm{\psi}}_i)\right)\\ &=\frac{1}{2}\left(\bm f_i + \bm f_{i+1}\right) - g(M_{loc})\frac{\lambda_{i+1/2}}{2}({\bm{\psi}}_{i+1} - {\bm{\psi}}_i), \end{split} \end{align} where $\lambda_{i+1/2} = \underset{j = 1, \dots, k}{\max}\left(|\lambda_j({\bm{\psi}}_i)|, |\lambda_j({\bm{\psi}}_{i+1})|\right)$ denotes a local approximation of the maximal characteristic speed, as standard for local Lax-Friedrichs fluxes, since it is applied directly on the original flux function ${\bm{f}}$. We emphasize that if the local Mach number tends to zero, i.e. for low Mach number flows, we obtain a centred discretization and the numerical diffusion is independent of the Mach number.} Since $\AA^2$ is a constant diagonal matrix, system \eqref{eq:Implicit_fully_disc_stages_centered}, with centred fluxes or with the flux \eqref{eq:Flux_convex}, consists of $k$ decoupled linear implicit equations which can be solved in parallel. In addition, the number of variables to be solved in the numerical scheme corresponds to the number of state variables in the original system \eqref{Sys:Hyperbolic_1D}. {\color{black}The fully discrete update \eqref{eq:DIRK_final_update} is given by \begin{equation} {\bm{\psi}}_i^{n+1} = {\bm{\psi}}_i^n - \frac{\Delta t}{\Delta x} \sum_{j=1}^{s} \beta_j \left({\bm{f}}_{i+1/2}^{(j)} - {\bm{f}}_{i-1/2}^{(j)}\right). \end{equation} Note that the fully discrete scheme with the fluxes given in \eqref{eq:Flux_convex} is at most first order accurate for compressible and second order for low Mach number flows. } We wish to stress the simplicity of the scheme. Non-linear terms given by the fluxes $\bm f$ are evaluated explicitly and the stiffness of the low Mach case reduces to solving only linear decoupled equations.
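To stress this point, the following is a minimal, hedged sketch (not the authors' code; all parameter values are illustrative) of the first order relaxed scheme applied to the scalar Burgers equation on a periodic domain, using a centred stage and the Mach-weighted interface flux of \eqref{eq:Flux_convex}:

```python
import numpy as np

# Illustrative sketch of the first order relaxed scheme for Burgers'
# equation u_t + (u^2/2)_x = 0 on a periodic domain:
#   stage:  (I - dt^2 a^2 D2) u* = u^n - dt D1 f(u^n)   (linear, decoupled)
#   update: u^{n+1}_i = u^n_i - dt/dx (F_{i+1/2} - F_{i-1/2})
N, L = 128, 1.0
dx = L / N
x = np.linspace(0.0, L, N, endpoint=False)
u = 0.1 + 0.05 * np.sin(2.0 * np.pi * x)     # smooth, slow initial data

f = lambda u: 0.5 * u**2
a = np.max(np.abs(u))                        # frozen relaxation speed
dt = 2.0 * dx                                # beyond the explicit CFL limit

I = np.eye(N)                                # periodic difference operators
D1 = (np.roll(I, -1, axis=0) - np.roll(I, 1, axis=0)) / (2.0 * dx)
D2 = (np.roll(I, -1, axis=0) - 2.0 * I + np.roll(I, 1, axis=0)) / dx**2

def g(M):                                    # Mach-number weight of the diffusion
    return np.where(M < 1.0, np.sin(0.5 * np.pi * np.minimum(M, 1.0)), 1.0)

for _ in range(10):
    u_star = np.linalg.solve(I - dt**2 * a**2 * D2, u - dt * (D1 @ f(u)))
    fs, up = f(u_star), np.roll(u_star, -1)  # neighbour values u_{i+1}
    lam = np.maximum(np.abs(u_star), np.abs(up))
    M_loc = np.abs(u_star) / a               # illustrative local "Mach" number
    F = 0.5 * (fs + np.roll(fs, -1)) - g(M_loc) * 0.5 * lam * (up - u_star)
    u = u - dt / dx * (F - np.roll(F, 1))    # conservative final update
```

Each time step only requires one linear solve with a constant symmetric matrix plus explicit flux evaluations; by conservativity of the final update the cell mean is preserved exactly.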
{\color{black}Furthermore,} one needs to store and update the same number of state variables as in the original problem \eqref{Sys:Hyperbolic_1D}, and the scheme is easy to implement. In addition, due to the decoupled linear nature of the equations that need to be solved, it is possible to use a very fine grid and still obtain moderate run times. \section{Applications} \label{sec:NonlinElast} {\color{black} To illustrate the properties of the above developed schemes, we apply them to two models from fluid dynamics, namely the well-known Euler equations and an extension to treat compressible materials via non-linear elasticity in a monolithic way. For simplicity and sake of clarity in the development of our new numerical schemes, their derivation was given in a one dimensional framework. Therefore, in the description of the considered models and the subsequent numerical test cases, we focus on variations along one space direction only. \subsection{The Euler equations} We consider the Euler equations, which are given in one dimension as follows \begin{align} \label{Sys:Euler} \begin{split} \partial_t \rho + \partial_x (\rho u) &= 0, \\ \partial_t (\rho u) + \partial_x (\rho u^2 + p) &= 0,\\ \partial_t E + \partial_x ((E + p) u) &=0, \end{split} \end{align} where $\rho$ denotes the density, $u$ the velocity, $p$ the pressure and $E$ the total energy given by \begin{equation} E = \rho e + \frac{1}{2} \rho u^2. \end{equation} The system is closed by choosing either the ideal gas law \begin{equation} \label{eq:EOS_idGas} e(\rho, p) = \frac{p}{(\gamma - 1)\rho} \end{equation} or the stiffened gas equation \begin{equation} \label{eq:EOS_stiffGas} e(\rho,p) = \frac{p}{(\gamma - 1)\rho} + \frac{p_\infty}{\rho}, \end{equation} where the parameter $p_\infty$ is given by the properties of the considered fluid.
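As a hedged illustration of the two closures (the parameter values below, including the stiffened-gas water constants $\gamma = 4.4$ and $p_\infty = 6 \times 10^8$, are typical textbook choices and not taken from this paper):

```python
import numpy as np

# Specific internal energy for the ideal gas and stiffened gas closures;
# setting p_inf = 0 in the stiffened gas law recovers the ideal gas law.
def internal_energy(rho, p, gamma, p_inf=0.0):
    return p / ((gamma - 1.0) * rho) + p_inf / rho

e_air = internal_energy(rho=1.0, p=1.0, gamma=1.4)              # ideal gas
e_water = internal_energy(rho=1000.0, p=1.0e5, gamma=4.4,
                          p_inf=6.0e8)                          # stiffened gas
```

The large $p_\infty$ term dominates the internal energy of the liquid, reflecting its near-incompressibility.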
System \eqref{Sys:Euler} can be written in conservation form \eqref{Sys:Hyperbolic_1D} setting $k=3$, ${\bm{\psi}} = (\rho,\rho u,E)^T$ and ${\bm{f}}({\bm{\psi}}) = (\rho u, \rho u^2 + p, u(E+p))^T$. The characteristic speeds of the Euler equations consist of a material wave $\lambda_u = u$ and two acoustic waves $\lambda_\pm = u \pm c$, where $c$ denotes the sound speed. For the stiffened gas equation it is given by \begin{equation} c^2 = \gamma~\frac{p + p_\infty}{\rho}. \end{equation} Note that by setting $p_\infty = 0$ we recover the sound speed for the ideal gas law. The sound speed scales with the inverse of the \emph{acoustic Mach number} $M = |u|/c$, which is defined as the ratio between the absolute value of the fluid velocity and the sound speed of the fluid. Consequently, in the low Mach number regime $M \ll 1$, the acoustic wave speeds tend to infinity and are significantly faster than the fluid velocity $\lambda_u = u$. The scaling of the Euler equations with respect to the Mach number, the singular Mach number limits and the computation of characteristic speeds are extensively studied in the literature and we refer the interested reader e.g. to \cite{BoscarinoRussoScandurra2018,BouchutFranckNavoret2020,Dellacherie2010,GuillardViozat1999,KlainermanMajda1981,Klein1995,Schochet2005,ThoZenkPupKB2020,viozat1997} and references therein. } \subsection{Non-linear elasticity} {\color{black} To be able to treat compressible solids in the same framework as gases and fluids described by the Euler equations \eqref{Sys:Euler}, we consider an Eulerian model for non-linear elasticity. Even though we focus on a one dimensional setting, deformations of the solids are still considered in two directions. Details on the derivation of the model can be found in \cite{dBraIolMil2017,de2016cartesian,AbbIolPup2017,gorsse2014simple,plohr1988conservative,plohr1992conservative} and we briefly describe its most important features in the following.
} The equations are given by \begin{align} \label{Sys:2D_xDir} \begin{split} \partial_t \rho + \partial_x (\rho u_1) &= 0, \\ \partial_t (\rho u_1) + \partial_x (\rho u_1^2 - \sigma_{11}) &= 0,\\ \partial_t (\rho u_2) + \partial_x (\rho u_1 u_2 - \sigma_{21}) &= 0,\\ \partial_t Y_1^2 + \partial_x (u_1 Y_1^2 + u_2) &=0,\\ \partial_t E + \partial_x ((E-\sigma_{11}) u_1 - \sigma_{21} u_2) &=0, \end{split} \end{align} consisting of the conservation of mass $\rho$, momentum $\rho \bm u$ and total energy $E$, which is given by the sum of the internal and kinetic energy as follows \begin{equation} E = \rho e + \frac{1}{2} \rho \Vert \bm u \Vert^2 \end{equation} with $\bm{u} = (u_1, u_2)^T$. Furthermore, it contains, {\color{black}in contrast to the Euler equations \eqref{Sys:Euler},} an additional equation for the deformation gradient $[\nabla Y]$ which in the one dimensional framework reduces to \begin{equation} [\nabla Y] = \begin{bmatrix} Y_1^1 & 0 \\ Y_1^2 & 1 \\ \end{bmatrix} \end{equation} with $Y_1^1 = \rho/\rho_0$, where $\rho_0$ denotes the initial density. Therefore, the only governing equation {\color{black} for the deformation gradient $[\nabla Y]$ in \eqref{Sys:2D_xDir} is given by} the component $Y_1^2$. The system \eqref{Sys:2D_xDir} is closed by considering {\color{black} a generalised formulation of the EOS, where the gas/fluid is described by the ideal gas law \eqref{eq:EOS_idGas}/stiffened gas equation \eqref{eq:EOS_stiffGas} respectively and the compressible material as a } neohookean solid, namely \begin{equation} \label{eq:EOS} e(\rho,p,[\nabla Y]) = \frac{p}{(\gamma - 1)\rho} + \frac{p_\infty}{\rho} + \frac{\chi}{\rho} (tr \bar B - 2). \end{equation} {\color{black}Therein, $p_\infty$ denotes a material constant associated with the description of liquids, $\chi$ the shear modulus connected to the rigidity of the considered material and $B$ the right Cauchy-Green tensor \begin{equation} {B} = \left[\nabla Y\right]^{-1} \left[\nabla Y\right]^{-T}.
\end{equation} From the latter, the modified Cauchy-Green tensor can be defined as \begin{equation} \overline{B} = \frac{B}{det\left[\nabla Y \right]^{-1}}. \end{equation} Finally we obtain \begin{equation} tr \bar B = \frac{1}{\rho/\rho_0}\left(1 + (Y_1^2)^2 + (\rho/\rho_0)^2\right) \end{equation} which completes the description of the EOS \eqref{eq:EOS}. We want to stress that the parameters} $\chi,p_\infty,\gamma$ in the EOS \eqref{eq:EOS} are determined by the given material and can be found in \cref{tab:Parameters_mat_tests} for the materials considered in this work. {\color{black}To conclude the description of system \eqref{Sys:2D_xDir}, we can define the Cauchy stress tensor $\sigma$ appearing in the momentum and energy equations.} It has the following non-zero components \begin{equation} \sigma_{11} = - p + \chi \left(1 - \left(\frac{\rho}{\rho_0}\right)^2 - \left(Y_1^2\right)^2\right), \quad \sigma_{21} = -2\chi Y_1^2, \end{equation} which denote the normal and tangential stress respectively {\color{black}and the pressure can be computed from the EOS \eqref{eq:EOS} via $p = \rho^2 \frac{\partial e}{\partial \rho}$}. Model \eqref{Sys:2D_xDir} can be written in conservation form \eqref{Sys:Hyperbolic_1D} with $k=5$ by setting \begin{equation} {\bm{\psi}} = \begin{pmatrix} \rho \\ \rho u_1 \\ \rho u_2 \\ Y_1^2 \\ E \end{pmatrix}, \quad {\bm{f}}({\bm{\psi}}) = \begin{pmatrix} \displaystyle \rho u_1\\\displaystyle \rho u_1^2 +{\color{black} p - \chi \left(1 - \left(\frac{\rho}{\rho_0}\right)^2 - \left(Y_1^2\right)^2\right)}\\\displaystyle \rho u_1 u_2 + {\color{black}2\chi Y_1^2} \\u_1 Y_1^2 + u_2\\\displaystyle \left(E+{\color{black}p - \chi \left(1 - \left(\frac{\rho}{\rho_0}\right)^2 - \left(Y_1^2\right)^2\right)}\right) u_1 + {\color{black}2\chi Y_1^2} u_2 \end{pmatrix}. \end{equation} For model \eqref{Sys:2D_xDir}, we can define two flow regimes induced by two speeds. 
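As a quick sanity check on the flux vector just assembled, the stress components and ${\bm{f}}({\bm{\psi}})$ can be coded in a few lines; setting $\chi = 0$ recovers the Euler fluxes. This is a generic sketch with our own variable names, not code from the paper.

```python
def stress(p, rho, rho0, y12, chi):
    # Non-zero Cauchy stress components of the neohookean model:
    # sigma_11 = -p + chi (1 - (rho/rho0)^2 - (Y_1^2)^2), sigma_21 = -2 chi Y_1^2.
    s11 = -p + chi * (1.0 - (rho / rho0) ** 2 - y12 ** 2)
    s21 = -2.0 * chi * y12
    return s11, s21

def elastic_flux(rho, u1, u2, y12, E, p, rho0, chi):
    # Flux vector f(psi) of the 1D non-linear elasticity system.
    s11, s21 = stress(p, rho, rho0, y12, chi)
    return [rho * u1,
            rho * u1 ** 2 - s11,
            rho * u1 * u2 - s21,
            u1 * y12 + u2,
            (E - s11) * u1 - s21 * u2]
```

For $\chi = 0$ and $Y_1^2 = u_2 = 0$ the first, second and last components reduce to $\rho u_1$, $\rho u_1^2 + p$ and $(E+p)u_1$, i.e. the Euler fluxes.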
We find the classical \textit{acoustic Mach number} $M$ associated with the speed of sound \begin{equation} c(\rho,p,[\nabla Y]) = \sqrt{\gamma \ \frac{p + p_\infty}{\rho}} \end{equation} which is computed by \begin{equation} M = \frac{|u_1|}{c}. \end{equation} In addition we can define a \textit{shear Mach number} $M_\chi$ associated with an isochoric elastic speed \begin{equation} u_{iso} = \sqrt{\frac{2 \chi}{\rho}} \end{equation} which is computed by \begin{equation} M_\chi = \frac{|u_1|}{u_{iso}}. \end{equation} These Mach numbers appear in the non-dimensional formulation analysed in \cite{AbbIolPup2017} in the Cauchy tensor $\sigma$ \begin{equation} \sigma_{11} = - \frac{p}{M^2} + \chi \frac{\left(1 - \left(\frac{\rho}{\rho_0}\right)^2 - \left(Y_1^2\right)^2\right)}{2 M_\chi^2}, \quad \sigma_{21} = -\chi \frac{Y_1^2}{M_\chi^2} \end{equation} and in the internal energy \begin{equation} e(\rho,p) = \frac{p}{M^2 \rho (\gamma - 1)} + \frac{p_\infty}{M^2\rho} + \frac{\chi}{2 M_\chi^2\rho} (tr \bar B - 2). 
\end{equation} This is also reflected in the characteristic speeds, {\color{black} derived and analyzed in \cite{dBraIolMil2017,de2016cartesian,AbbIolPup2017,gorsse2014simple,plohr1988conservative,plohr1992conservative}}, which are given, {\color{black} considering the parameters $M$ and $M_\chi$,} by two longitudinal waves \begin{equation} \lambda_{1,5} = u_1 \pm \sqrt{\frac{c^2}{2 M^2} + \frac{\chi}{2 \rho M_\chi^2} \left(\alpha + 1\right) + \frac{1}{\rho} \sqrt{\left(\frac{\rho c^2}{2 M^2} + \frac{\chi}{2 M_\chi^2} \left(\alpha - 1\right)\right)^2 + \frac{\chi^2}{M_\chi^2} \left(Y_1^2\right)^2}} \end{equation} where $\alpha = \left(\frac{\rho}{\rho_0}\right)^2 + \left(Y_1^2\right)^2$, two shear waves \begin{equation} \lambda_{2,4} = u_1 \pm \sqrt{\frac{c^2}{2 M^2} + \frac{\chi}{2 \rho M_\chi^2} \left(\alpha + 1\right) - \frac{1}{\rho} \sqrt{\left(\frac{\rho c^2}{2 M^2} + \frac{\chi}{2 M_\chi^2} \left(\alpha - 1\right)\right)^2 + \frac{\chi^2}{M_\chi^2} \left(Y_1^2\right)^2}} \end{equation} and one material wave given by the flow velocity $\lambda_3 = u_1$. {\color{black}For details on the Jacobian $\nabla_{\bm{\psi}} {\bm{f}}({\bm{\psi}})$ and the calculation of the characteristic speeds, see e.g. \cite{AbbIolPup2017,AbbIolPup2019,dBraIolMil2017,de2016cartesian}.} Note that by setting {\color{black} the shear modulus, connected to compressible solids, as} $\chi = 0$, we recover the wave speeds of the classical Euler equations \eqref{Sys:Euler}, where the longitudinal waves reduce to acoustic waves and the shear waves are not present. They collapse to the material wave $\lambda_{2,4} = u_1$. {\color{black}For the Eulerian model of non-linear elasticity \eqref{Sys:2D_xDir}, we} can identify two different low Mach number limits given by the \begin{enumerate} \item \textit{acoustic and shear low Mach number regime} characterized by $M \ll 1$ and $M_\chi \ll 1$. Both pressure and deformation gradient cause the stiffness in the equations.
Thus longitudinal and shear waves are significantly faster than the material wave. Especially $\mathcal{O}(M) \simeq \mathcal{O}(M_\chi)$ means $c \simeq u_{iso}$ and $p + p_\infty \simeq \chi$, see Test 4 in Section \ref{sec:SimMatWaves}. \item \textit{acoustic low Mach number regime} characterized by $M \ll 1$ and $M \ll M_\chi$. The stiffness in the equations stems solely from the pressure, resulting in $p + p_\infty \gg \chi$ and $c \gg |u_1|$ as well as $c \gg u_{iso}$. This means longitudinal waves are significantly faster than the shear and material waves, see Test 5 in Section \ref{sec:SimMatWaves}. \end{enumerate} \section{Numerical results} \label{sec:Numerics} In this section we validate the first order (IM1) and second order (IM2) implicit relaxation schemes in all Mach number regimes, applied to the Eulerian model of non-linear elasticity \eqref{Sys:2D_xDir}. We compare the numerical results against an explicit local Lax-Friedrichs (LLF1) scheme, for which a CFL condition on the time step $\Delta t$ has to be satisfied. We define an acoustic time step associated with the CFL condition \begin{equation} \label{eq:CFL_cond} \Delta t \leq \nu_{ac} \frac{\Delta x}{| u_1 + c|}, \quad \Delta t \leq \nu_{ac} \frac{\Delta x}{|\lambda_1|} \end{equation} for gases and hyperelastic solids respectively. Analogously, we define a material CFL condition oriented towards the flow velocity $u_1$ given by \begin{equation} \Delta t \leq \nu_{mat} \frac{\Delta x}{|u_1|}. \end{equation} {\color{black} Therein, $\nu_{ac}$ and $\nu_{mat}$ denote the CFL coefficients for the acoustic and material time steps respectively.} The matrix $\bm A$ with the relaxation speeds is constructed as explained in Section \ref{sec:JXrelax} and is given by $\bm A = \diag(a_1,a_2,a_3,a_4,a_5)$ with $a_j \geq \underset{i=1,\dots,N}{\max} |\lambda_j({\bm{\psi}}_i^n)|$.
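The two time step restrictions and the construction of the diagonal relaxation matrix can be sketched as follows; this is a generic illustration with our own variable names, not the authors' code.

```python
def acoustic_dt(dx, u1, c, nu_ac):
    # dt <= nu_ac * dx / |u1 + c|  (acoustic CFL condition for gases)
    return nu_ac * dx / abs(u1 + c)

def material_dt(dx, u1, nu_mat):
    # dt <= nu_mat * dx / |u1|  (material CFL condition)
    return nu_mat * dx / abs(u1)

def relaxation_speeds(speeds_per_cell):
    # Diagonal entries a_j >= max_i |lambda_j(psi_i^n)| of the matrix A,
    # taking a list (over cells i) of lists of characteristic speeds lambda_j.
    k = len(speeds_per_cell[0])
    return [max(abs(cell[j]) for cell in speeds_per_cell) for j in range(k)]
```

In a low Mach regime ($c \gg |u_1|$) the acoustic time step is orders of magnitude smaller than the material one, which is exactly the constraint the implicit schemes avoid.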
We first perform steady test cases on the Euler equations ($\chi = 0$) by considering flows in a nozzle in different Mach number regimes. Next, we present results on the propagation of material waves in gases and hyperelastic solids. \subsection{Laval nozzle flow} The Laval nozzle is a converging-diverging duct. It is widely used for achieving steady supersonic flows in a variety of systems such as rocket motors and wind tunnels. The sketch of a nozzle is drawn in \cref{fig:nozzle_func}. The simplest analytic model for compressible flow in a Laval nozzle is the quasi one-dimensional duct flow approximation \cite{BenRub1971} {\color{black} which is a modification of the Euler equations \eqref{Sys:Euler} and is} given by \begin{equation}\label{eq:Laval1} \begin{cases} \partial_t\left( S\rho\right) + \partial_x\left( S\rho u\right) = 0 \\ \partial_t\left( S\rho u\right) + \partial_x\left(S\left(\rho u^2 + p\right)\right) = p\partial_xS \\ \partial_t\left( S\rho e\right) + \partial_x\left(Su\left(\rho e+p\right)\right) = 0, \\ \partial_t S = 0. \end{cases} \end{equation} The quasi one-dimensional assumption consists in taking the cross sectional area as a smooth function of the axial coordinate, $S=S\left(x\right)$ given in \cref{fig:nozzle_area}. Hence, all flow variables are functions of the axial coordinate only. After a few manipulations, system \eqref{eq:Laval1} can be rearranged in such a way that the Euler system \eqref{Sys:Euler} with a non linear source term is obtained \begin{equation}\label{eq:Laval2} \begin{cases} \partial_t\rho + \partial_x\left(\rho u\right)= -\rho u\dfrac{\partial_xS}{S} \\ \partial_t\left(\rho u\right) +\partial_x \left(\rho u^2 + p\right) =-\rho u^2\dfrac{\partial_xS}{S}\\ \partial_t\left(\rho e\right) + \partial_x\left(u\left(\rho e+p\right)\right) = -u\left(\rho e+p\right)\dfrac{\partial_xS}{S}\\ \partial_t S = 0. 
\end{cases} \end{equation} Formulations \eqref{eq:Laval1}-\eqref{eq:Laval2} are equivalent and both conservative because the cross section $S\left(x\right)$ of the nozzle is a smooth function of $x$. We simulate a perfect gas through a Laval nozzle in different acoustic Mach number regimes. Thereby, a steady state is reached by evolving system \eqref{eq:Laval2} in time until the difference of the solutions of two consecutive time steps falls below a certain tolerance, here of order $10^{-9}$. More details of the set-up can be found in \cite{AvgBerIolRus2019}. Since the solution is smooth, we can assess the experimental order of convergence (EOC) by calculating the $L^1$ error between the numerical and the steady state solution of \eqref{eq:Laval2}. We consider an ideal gas \eqref{eq:EOS_idGas} with $\gamma = 1.4$ and at the inlet of the nozzle total pressure and temperature are imposed as $P_{tot} = 1~Pa$ and $T_{tot} = 1~K$, see also \cref{fig:nozzle_func}. The Mach number regime of the flow can be determined by choosing the outlet pressure $p_{out}$. The outlet pressure and resulting Mach number regime are given in \cref{tab:NozzleEOC} together with the results for the {\color{black}scheme} IM2 with $\nu_{ac} = 48$ in \eqref{eq:CFL_cond}, which means that the acoustic CFL condition is largely violated. For low Mach number flows we achieve second order convergence, see \cref{tab:EOC_999,tab:EOC_99999,tab:EOC_9999999}. This is due to the fact that in those regimes $g(M_{loc})$ is vanishingly small and the second order centred differences dominate in the numerical flux. For the compressible flow in \cref{tab:EOC_9} with $p_{out}=0.9$, the convergence rates drop due to the significant contribution of $M_{loc}$, which gives an upwind numerical flux; the scheme is thus first order in space.
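The convergence rates in the tables follow from the $L^1$ errors on successively refined grids; with the number of cells doubled, the experimental order is $\log_2$ of the error ratio. A minimal sketch:

```python
from math import log

def eoc(err_coarse, err_fine, refinement=2.0):
    # Experimental order of convergence between two grids whose
    # resolutions differ by the given refinement factor (here: N and 2N).
    return log(err_coarse / err_fine) / log(refinement)
```

For instance, the velocity errors $7.844\cdot10^{-5}$ and $1.993\cdot10^{-5}$ at $N=1024$ and $N=2048$ yield a rate of about $1.98$, matching the corresponding table entry.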
It is crucial to remark that using a pure upwind scheme for the simulation of low Mach regimes yields inconsistent results, as shown in \cite{AbbIolPup2017} {\color{black} for the exact same test case} using an explicit Jin-Xin relaxation scheme. For general results on the problems arising when applying upwind schemes to low Mach flows, see \cite{Klein1995,GuillardViozat1999,GuiMur2004,Rie2010}. {\color{black}This is mainly due to the fact that the numerical viscosity scales differently on different wave types, leading to excessive diffusion and thus inaccurate solutions. } \begin{figure}[t!] \begin{subfigure}[t]{0.5\textwidth} \centering \hskip-5mm{\includegraphics[scale=0.15]{./Figures/nozzle_draw2.pdf} \caption{Laval nozzle general sketch.} \label{fig:nozzle_func}} \end{subfigure} \begin{subfigure}[t]{0.5\textwidth} \centering \hskip-5mm{\includegraphics[scale=0.325]{./Figures/area_nozzle.png} \caption{Geometry of the simulated nozzle.} \label{fig:nozzle_area}} \end{subfigure} \caption{Set-up for Laval nozzle in Test 4.1. } \label{fig:nozzle} \end{figure} \begin{table}[t!]
\renewcommand{\arraystretch}{1.25} \begin{subtable}{\textwidth} \begin{center} \begin{tabular}{ccccccc} N& $u$ & & $\rho $ & & $p$ & \\\hline\hline 1024& $1.553 \cdot 10^{-3}$ &---&$ 1.492\cdot 10^{-3}$ & --- & $1.818 \cdot 10^{-3}$& --- \\ 2048&$5.673\cdot 10^{-4}$&1.45& $5.163\cdot 10^{-4}$&1.53& $6.277\cdot 10^{-4}$&1.53\\ 4096& $2.443\cdot 10^{-4}$ &1.21& $2.012\cdot 10^{-4}$&1.35& $2.450\cdot 10^{-4}$&1.35\\\hline \end{tabular} \end{center} \caption{$p_{out} = 0.9$ Pa, $M_{min} \approx 0.39, M_{max} \approx 0.72$ (compressible flow regime).} \label{tab:EOC_9} \end{subtable} \begin{subtable}{\textwidth} \begin{center} \begin{tabular}{ccccccc} N& $u$ & & $\rho$ & & $p$ & \\\hline\hline 1024&$ 7.844\cdot 10^{-5}$ &---&$ 4.383\cdot 10^{-5}$ & ---& $6.086\cdot 10^{-5}$&---\\ 2048& $1.993\cdot 10^{-5}$ & 1.98& $ 1.095\cdot 10^{-5}$ &2.00& $1.514\cdot 10^{-5}$& 2.01\\ 4096& $4.892\cdot 10^{-6}$ &2.02&$ 2.847\cdot 10^{-6}$ &1.94& $ 3.905\cdot 10^{-6}$&1.95 \\\hline \end{tabular} \end{center} \caption{$p_{out} = 0.999$ Pa, $M_{min} \approx 3.9 \cdot 10^{-2}, M_{max} \approx 7.2 \cdot 10^{-2}$.} \label{tab:EOC_999} \end{subtable} \begin{subtable}{\textwidth} \begin{center} \begin{tabular}{ccccccc} N& $u$ & & $\rho$ & & $p$ & \\\hline\hline 1024& $8.345 \cdot 10^{-6}$ &---& $4.660\cdot 10^{-6}$ & --- & $6.522\cdot 10^{-6}$ & ---\\ 2048& $2.036\cdot 10^{-6}$ & 2.03 & $9.945\cdot 10^{-7}$ &2.22& $1.391\cdot 10^{-6}$ & 2.22\\ 4096& $4.899\cdot 10^{-7}$ & 2.05 & $2.193\cdot 10^{-7}$ & 2.18 & $3.070\cdot 10^{-7}$ & 2.18 \\\hline \end{tabular} \end{center} \caption{$p_{out} = 0.99999$ Pa, $M_{min} \approx 3.9 \cdot 10^{-3}, M_{max} \approx 7.2 \cdot 10^{-3}$.} \label{tab:EOC_99999} \end{subtable} \begin{subtable}{\textwidth} \begin{center} \begin{tabular}{ccccccc} N& $u$ & & $\rho$ & & $p$ & \\\hline\hline 1024& $6.913\cdot 10^{-7}$ &---& $3.163\cdot 10^{-7}$ &---& $4.428\cdot 10^{-7}$ & --- \\ 2048& $1.726\cdot 10^{-7}$ &2.00& $6.851\cdot 10^{-8}$ &2.21& $9.591\cdot 10^{-8}$
& 2.36 \\ 4096& $4.076\cdot 10^{-8}$ &2.08& $1.404\cdot 10^{-8}$ & 2.28& $1.965\cdot 10^{-8}$ & 2.28 \\\hline \end{tabular} \end{center} \caption{$p_{out} = 0.9999999$ Pa, $M_{min} \approx 3.9 \cdot 10^{-4}, M_{max} \approx 7.2 \cdot 10^{-4}$.} \label{tab:EOC_9999999} \end{subtable} \caption{Convergence rates of {\color{black}scheme} IM2 for Nozzle flow for different values of $p_{out}$ leading to different flow regimes with respect to the Mach number. } \label{tab:NozzleEOC} \end{table} \begin{table}[b!] \begin{center} \renewcommand{\arraystretch}{1.25} \begin{tabular}{cclccc} Test & Material & Flow regime & $\gamma$ & $p_\infty$ (Pa) & $\chi$ (Pa) \\ \hline \hline 1 & perfect gas & $M \approx 0.9$ & 1.4 & 0 & 0\\ 2 & perfect gas & $M \approx 6 \cdot 10^{-3}$ & 1.4 & 0 & 0\\ 3 & water & $M \approx 2.5 \cdot 10^{-3}$ & 4.4 & $6.8 \cdot 10^8$ & 0\\ 4 & copper & $M \approx M_\chi \approx \mathcal{O}(10^{-3})$ & 4.22 & $3.42 \cdot 10^{10}$ & $5 \cdot 10^{10}$\\ 5 & hyperelastic solid & $M \approx 3 \cdot 10^{-3}, ~M_\chi \approx 0.15$ & 4.4 & $6.8 \cdot 10^8$ & $ 8 \cdot 10^5$ \\\hline \end{tabular} \caption{Parameters for the used materials in the tests of Section \ref{sec:SimMatWaves} and the Mach number regime of the simulated contact waves.} \label{tab:Parameters_mat_tests} \end{center} \end{table} \begin{table}[h!] 
\begin{center} \renewcommand{\arraystretch}{1.15} \begin{tabular}{llllllllllll} Test & $\Omega$ & $x_0$ & $T_f$ & $\rho_L$ & $\rho_R$ & $u_{1,L}$ & $u_{1,R}$ & $u_{2,L}$ & $u_{2,R}$ & $p_L$ & $p_R$ \\[2pt] & $[m]$ & $[m] $ & $[s]$ & $\left[\frac{kg}{m^3}\right]$ & $\left[\frac{kg}{m^3}\right]$ & $\left[\frac{m}{s}\right]$ & $\left[\frac{m}{s}\right]$ & $\left[\frac{m}{s}\right]$ & $\left[\frac{m}{s}\right]$ & $\left[\frac{kg}{m s^2}\right]$ & $\left[\frac{kg}{m s^2}\right]$\\[7pt]\hline\hline 1 & $[0,1]$ & 0.5 &0.1644& 1 & 0.125 & 0 & 0 & 0 & 0 & 1 & 0.1\\ 2 & $[0,1]$ & 0.5 & 0.25 & 1 & 1 & 0 & $ 8 \cdot 10^{-3}$ & 0 & 0 & 0.4 & 0.399\\ 3 & $[0,1]$ & 0.5 & $10^{-4}$ & $1 \cdot 10^3$ & $1 \cdot 10^3$ & 0 & 15 & 0 & 0 & $10^8$ & $0.98 \cdot 10^8$\\ 4.1 & $[0,2]$ & 1 & $10^{-4}$ & $8.9\cdot10^3$ & $8.9\cdot10^3$ & 0 & 0 &0 & 100 & $10^9$ & $1 \cdot 10^5$\\ 4.2 & $[0,500]$ & 250 & $4\cdot 10^{-2}$ & $8.9\cdot10^3$ & $8.9\cdot10^3$ & 0 & 0 &0 & 100 & $10^9$ & $1 \cdot 10^5$\\ 5 & $[0,100]$ & 50 & 0.016 & $1 \cdot 10^3$ & $1\cdot10^3$ & 0 & 10 & 0 & 40 & $10^8$ & $ 0.98 \cdot 10^8$\\ \hline \end{tabular} \caption{Initial conditions for the material wave test cases in Section \ref{sec:SimMatWaves}.} \label{tab:Initial_mat_tests} \end{center} \end{table} \subsection{Simulation of material waves} \label{sec:SimMatWaves} In this section, we solve Riemann problems (RPs) for different materials, where our interest lies in the motion and accurate capturing of material waves in different Mach regimes. The set-up of the test cases is taken from \cite{AbbIolPup2017}. The parameters and initial conditions for each test case are given in \cref{tab:Parameters_mat_tests,tab:Initial_mat_tests}. For all test cases we use Neumann boundary conditions where we impose $\frac{\partial {\bm{\psi}}}{\partial x} = 0$. {\color{black} Tests 1 to 3 concern the Euler equations \eqref{Sys:Euler} whereas Tests 4 and 5 concern the model of non-linear elasticity \eqref{Sys:2D_xDir}.
To demonstrate the effect of the final update \eqref{eq:DIRK_final_update} in the DIRK formalism on the capturing of the contact wave, we show the {\em predicted} solution obtained from the stage \eqref{eq:Implicit_time_semi_2} for the first order scheme. It is indicated by IM1p. } \subsubsection{Perfect gas} \textbf{Test 1} is the Sod shock tube test case with a diatomic perfect gas in the compressible regime. The solution of the RP consists of a rarefaction, a contact discontinuity and a shock wave. Since the test is situated in the compressible regime, we apply an acoustic CFL condition with $\nu_{ac} = 0.9$. For {\color{black}scheme} IM1 we divide the computational domain into $N=2000$ grid cells and for {\color{black}scheme} IM2 into $N=1000$ grid cells. This results in the same amount of computational time, since {\color{black}scheme} IM2 consists of two stages. Furthermore, we apply a minmod reconstruction in the explicit diffusion of {\color{black}scheme} IM2 to obtain a fully second order scheme in space and time. The numerical results are given in \cref{fig:Sod} and are in good agreement with the exact solution of the RP. {\color{black}Schemes} IM1 and IM2 both capture all waves accurately with the correct shock strength and speed, whereas {\color{black}scheme} IM2 is more accurate on the contact wave than {\color{black}scheme} IM1. {\color{black}For a comparison of the results with the fully coupled scheme \eqref{eq:RelaxingScheme} given in \cite{AbbIolPup2017}, see Fig. 7 in \cite{AbbIolPup2017}.} \begin{figure}[t!] \begin{center} \includegraphics[scale=0.397]{SOD_dens.pdf} \includegraphics[scale=0.397]{SOD_veloc.pdf} \includegraphics[scale=0.397]{SOD_P.pdf} \end{center} \caption{Test 1: Sod shock tube with ideal gas on computational domain $[0,1]$.} \label{fig:Sod} \end{figure} In \textbf{Test 2} a RP in the low Mach regime with a local Mach number $M_{loc} \approx 6 \cdot 10^{-3}$ on the material wave is considered.
The left and right travelling acoustic waves are significantly faster than the contact wave, which travels only a few cells during the simulation. We compare {\color{black}schemes} IM1p and IM1 with $N=2000$ and {\color{black}scheme} IM2 with $N=1000$ grid cells to an explicit first order local Lax-Friedrichs (LLF1) scheme with $N=2000$ grid cells. The time steps for the LLF1 scheme are restricted by the fastest acoustic wave, i.e. $\nu_{ac} = 0.9$ resulting in $\Delta t = 6\cdot10^{-4}$. For the implicit relaxation schemes we use larger time steps given by $\Delta t = 3\cdot10^{-3}$ for {\color{black}scheme} IM1 and $\Delta t = 6\cdot10^{-3}$ for {\color{black}scheme} IM2. The results are given in \cref{fig:Test2}. {\color{black}As expected for large time steps, {\color{black}scheme} IM1p, using only the predicted solution $\psi^{(1)}$, is consistently diffusive on the material wave, whereas the solution {\em corrected} by the final update captures the material wave well.} {\color{black}Scheme} IM2 captures the material wave well and is more accurate on the acoustic waves than the first order schemes. {\color{black}For a comparison of the results with the fully coupled scheme \eqref{eq:RelaxingScheme} detailed in \cite{AbbIolPup2017}, see Fig. 9 in \cite{AbbIolPup2017}.} \begin{figure}[t!] \begin{center} \includegraphics[scale=0.397]{newTest2_dens.pdf} \includegraphics[scale=0.397]{newTest2_dens_zoom.pdf} \includegraphics[scale=0.397]{newTest2_veloc.pdf} \includegraphics[scale=0.397]{newTest2_P.pdf} \end{center} \caption{Test 2: Low Mach tube with ideal gas on computational domain $[0,1]$ with 1000 grid cells for {\color{black}scheme} IM2 and 2000 grid cells for {\color{black}the schemes} IM1p, IM1 and LLF1. Top right: Zoom on the contact wave in density.} \label{fig:Test2} \end{figure} To further quantify this observation, we compare the $L^1$-error of the density around the contact wave on the interval $[0.4,0.6]$ versus the needed CPU time in \cref{tab:Test2CPU}.
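The error measure used in this comparison is the discrete $L^1$ norm of the density difference restricted to the interval around the contact wave; a minimal sketch (generic code with our own names, not the authors' implementation):

```python
def l1_error_on_interval(xs, num, ref, dx, a=0.4, b=0.6):
    # Discrete L1 error dx * sum_i |num_i - ref_i| restricted to cells
    # whose centres x_i lie in [a, b] (here: around the contact wave).
    return dx * sum(abs(n - r)
                    for x, n, r in zip(xs, num, ref) if a <= x <= b)
```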
The CPU time is obtained by averaging the computational times over 50 simulations. As reference, the exact solution of the Riemann problem is used, which can be obtained by the procedures given in \cite{Toro2009}. We see that with the LLF1 scheme, we need a very fine grid to achieve similar errors on the contact wave as produced by {\color{black}scheme} IM2. A fine grid, however, imposes a very strict constraint on the CFL condition, leading to very small time steps and thus long CPU times. Since we can choose large time steps for {\color{black}scheme} IM2, we have smaller CPU times than for {\color{black}scheme} LLF1 on the same grid, even though we have to solve implicit systems in {\color{black}scheme} IM2. This demonstrates the advantage of the implicit {\color{black}scheme} IM2 over the explicit {\color{black}scheme} LLF1 for the resolution of material waves in low Mach regimes. To achieve the same error given by {\color{black}scheme} IM2, the explicit {\color{black}scheme} LLF1 needs much finer grids and much longer CPU times. \begin{table}[htbp] \renewcommand{\arraystretch}{1.25} \begin{center} \begin{tabular}{lccrc} scheme & $\Delta x$ & $\Delta t$ & CPU time [s] & $L^1$-error \\\hline\hline IM2 & $10^{-3}$ & $6.00\cdot10^{-3}$ &$0.348$ & $4.20 \cdot 10^{-6}$ \\\hline LLF1 & $10^{-3}$ & $1.19\cdot10^{-3}$&$0.371$ & $2.00 \cdot 10^{-5}$ \\ LLF1 & $10^{-4}$ & $1.19\cdot10^{-4}$ &$17.609$ & $7.22 \cdot 10^{-6}$ \\ LLF1 & $10^{-5}$ & $1.19\cdot10^{-5}$ &$1587.385$ & $3.32 \cdot 10^{-6}$ \\\hline \end{tabular} \end{center} \caption{Test 2: $L^1$-error in $\rho$ vs. CPU time on the contact wave on the interval $[0.4,0.6]$. } \label{tab:Test2CPU} \end{table} \subsubsection{Stiffened gas} \textbf{Test 3} concerns water flow in a pipe with a small pressure jump. The local Mach number on the contact wave is given by $M_{loc} \approx 2.5\cdot10^{-3}$ and the set-up is analogous to Test 2 for an ideal gas.
The time step for the LLF1 scheme must be oriented towards the fastest acoustic wave with $\nu_{ac} = 0.9$, which results in a time step of $\Delta t = 2.4\cdot10^{-7}$. For the implicit schemes we can choose larger time steps, given by $\Delta t = 2.15\cdot10^{-6}$ for {\color{black}schemes} IM1p and IM1 and $\Delta t = 4.30\cdot10^{-6}$ for {\color{black}scheme} IM2, which corresponds to $\nu_{mat} = 0.06$ or $\nu_{ac} = 8.1$, respectively, in the CFL condition. Since the sound speeds are faster in water than for an ideal gas, the time steps are significantly smaller compared to Test 2. The numerical results are given in \cref{fig:Test3}. As in the previous test case, {\color{black}scheme} IM1p is very diffusive on the acoustic as well as on the material wave, whereas {\color{black}schemes} IM1 and IM2 are sharp on the material wave and resolve it more accurately than the {\color{black}scheme} LLF1. However, {\color{black}scheme} IM2 is inaccurate on the negligible acoustic waves, where small oscillations can be observed. Their appearance is local and does not impair the results on the contact wave, which are the focus of the simulation. {\color{black}For a comparison of the results with the fully coupled scheme \eqref{eq:RelaxingScheme} given in \cite{AbbIolPup2017}, see Fig. 10 in \cite{AbbIolPup2017}.} \begin{figure}[h!] \begin{center} \includegraphics[scale=0.397]{newTest3_dens.pdf} \includegraphics[scale=0.397]{newTest3_dens_zoom.pdf} \includegraphics[scale=0.397]{newTest3_veloc.pdf} \includegraphics[scale=0.397]{newTest3_P.pdf} \end{center} \caption{Test 3: Tube with water on the computational domain $[0,1]$ with 1000 grid cells for {\color{black}scheme} IM2 and 2000 grid cells for {\color{black}the schemes} IM1p, IM1 and LLF1. Top right: Zoom on the contact wave in density.
} \label{fig:Test3} \end{figure} \subsubsection{Hyperelastic solid} The next two test cases concern deformations of hyperelastic solids using the model of non-linear elasticity \eqref{Sys:2D_xDir}. The tests illustrate the two different low Mach regimes described at the end of Section \ref{sec:NonlinElast}. \textbf{Test 4.1} simulates a deformation of a two meter pipe filled with copper. Initially the copper is at rest with normal velocity $u_1 = 0$. A higher pressure is applied on the left part of the pipe. A non-zero tangential velocity $u_2$ is imposed on the right part of the pipe which leads to a RP consisting of 5 waves. Since $p_\infty$ and $\chi$, {\color{black}given in \cref{tab:Parameters_mat_tests},} are of the same magnitude, the longitudinal and shear waves are both significantly faster than the material wave which is situated in a flow regime of $\mathcal{O}(10^{-3})$. This corresponds to a low acoustic and low shear Mach number regime. For the {\color{black}schemes} IM1p and IM1 we use $N=4000$ grid cells and for {\color{black}scheme} IM2 $N=2000$ grid cells on the domain $[0,2]$. The time step for an explicit scheme imposed by the fastest wave with a CFL condition of $\nu_{ac} = 0.9$ results in $\Delta t = 3.15\cdot10^{-8}$. However, for the implicit relaxation schemes larger time steps can be used given by $\Delta t = 1.25\cdot10^{-6}$ for {\color{black}schemes} IM1p and IM1 and $\Delta t = 2.5\cdot10^{-6}$ for {\color{black}scheme} IM2 which corresponds to $\nu_{mat} = 0.25$ or respectively $\nu_{ac} = 35.7$ in the CFL condition. The numerical results are given in \cref{fig:Test41}. {\color{black}The reference solution is obtained with an explicit second order scheme using local Lax-Friedrichs fluxes and a strong stability preserving Runge Kutta method (SSPRK2) on a fine grid with $10^5$ cells. It} captures all waves accurately at the cost of being restricted to very small time steps. 
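The SSPRK2 integrator mentioned for the reference solution is the standard two-stage, second order strong stability preserving Runge-Kutta method (Heun's method in SSP form); the single-step sketch below is a generic illustration, not the authors' implementation.

```python
def ssprk2_step(u, dt, L):
    # SSPRK(2,2): u1 = u + dt L(u);  u^{n+1} = (u + u1 + dt L(u1)) / 2.
    u1 = u + dt * L(u)
    return 0.5 * u + 0.5 * (u1 + dt * L(u1))
```

Applied to the linear test problem $u' = -u$, one step reproduces the second order stability polynomial $1 - h + h^2/2$, confirming the expected accuracy of the stepping.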
{\color{black}The predicted solution by scheme} IM1p is too diffusive to capture the material wave, whereas {\color{black}the schemes} IM1 and IM2 detect the contact wave very accurately even though time steps 36 times larger than the acoustic one are used. However, we observe local oscillations on the fast longitudinal and shear waves for scheme IM2. We want to stress that the focus of the simulation lies on a sharp resolution of the material wave, which is unaffected by the oscillations. \begin{figure}[t!] \begin{center} \includegraphics[scale=0.397]{newTest4_Density.pdf} \includegraphics[scale=0.397]{newTest4_P.pdf} \includegraphics[scale=0.397]{newTest4_U1.pdf} \includegraphics[scale=0.397]{newTest4_U2.pdf} \includegraphics[scale=0.397]{newTest4_tan_stress.pdf} \includegraphics[scale=0.397]{newTest4_norm_stress.pdf} \end{center} \caption{Test 4.1: Tube with copper on the computational domain $[0,2]$ for short times with 2000 grid cells for {\color{black}scheme} IM2 and 4000 grid cells for {\color{black}scheme} IM1. The reference solution is computed with a second order explicit LLF scheme with $10^5$ grid cells.} \label{fig:Test41} \end{figure} To assess this further, we run Test 4.1 over a longer time period. The set-up is given in \textbf{Test 4.2} in \cref{tab:Initial_mat_tests}. The length of the pipe of 500~m is chosen such that all waves are still contained in the computational domain and no boundary effects arise. We run all schemes with 10000 grid cells, which amounts to $\Delta x = 5\cdot10^{-2}$. For the implicit {\color{black}schemes} IM1, IM2 we impose a large time step of $\Delta t = 3.8 \cdot 10^{-4}$, about 45 times larger than the acoustic constraint with $\nu_{ac} = 0.9$ leading to a time step of $\Delta t = 8.5\cdot 10^{-6}$. {\color{black}In \cref{fig:Test42}, the zoom on the material wave in density and pressure for schemes IM1 and IM2 is given. Both schemes capture the position of the contact wave accurately even over long simulation times.
As reference solution, the result for the contact wave obtained with the explicit scheme SSPRK2 in Test 4.1 is used. This is motivated by the fact that the contact wave travels at most one cell on the chosen grid in the simulations with the schemes IM1 and IM2; moreover, running the whole simulation with the explicit scheme would have diffused the material waves to an extent that would render them infeasible as a reference solution. Therefore, the material wave of the reference solution is slightly smoothed out. } The numerical results on the material wave in density and pressure for the schemes IM1 and IM2 validate that the oscillations on the acoustic waves observed in \cref{fig:Test41} do not affect the material waves, {\color{black}which are the main interest of the simulation}. {\color{black}For a comparison of the results with the fully coupled scheme \eqref{eq:RelaxingScheme} given in \cite{AbbIolPup2017}, see Fig. 11 to 13 therein.} \begin{figure}[t!] \begin{center} \includegraphics[scale=0.397]{newTest4l_Density_zoom.pdf} \includegraphics[scale=0.397]{newTest4l_P_zoom.pdf} \end{center} \caption{Test 4.2: Zoom on material wave in density and pressure for long times on $[0,500]$ with $\Delta x = 5\cdot10^{-2}$ for {\color{black}the schemes} IM1 and IM2. } \label{fig:Test42} \end{figure} \textbf{Test 5} corresponds to an acoustic low Mach regime. It simulates the deformation of a hyperelastic material (rubber) where the material parameter $p_\infty$ is several orders of magnitude larger than $\chi$, see \cref{tab:Parameters_mat_tests}. The material and shear waves are almost stationary compared to the significantly faster longitudinal waves travelling towards the left and right boundaries of the domain. The space discretization for all schemes is given by $N=10000$ grid cells, resulting in $\Delta x = 10^{-2}$.
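Before turning to the time steps used for this test, we note that the quoted step sizes can be cross-checked by elementary CFL arithmetic. The snippet below is only a plausibility check and not part of the code used for the simulations; the longitudinal wave speed is inferred from the quoted explicit step rather than taken from the material data:

```python
# Plausibility check (not from the paper's code) of the Test 5 step sizes.
dx = 1e-2          # grid size, N = 10000 cells on [0,100]
dt_expl = 4.83e-6  # explicit step quoted at acoustic CFL number 0.9
nu_expl = 0.9

# Longitudinal wave speed implied by dt = nu * dx / c (an inferred value):
c_long = nu_expl * dx / dt_expl

dt_impl = 1e-4     # implicit step used for schemes IM1 and IM2
nu_ac = c_long * dt_impl / dx   # acoustic CFL number of the implicit step
c_mat = 0.4 * dx / dt_impl      # material speed implied by nu_mat = 0.4

print(f"c_long ~ {c_long:.0f}, nu_ac ~ {nu_ac:.2f}, "
      f"step ratio ~ {dt_impl / dt_expl:.1f}, c_mat ~ {c_mat:.0f}")
```

Up to the rounding of the quoted explicit step, the implied acoustic CFL number ($\approx 18.6$) is consistent with the reported $\nu_{ac} = 18.75$, and the implicit step is roughly $21$ times the explicit one.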
The time step for {\color{black}an explicit} scheme would be constrained by the longitudinal waves; with a CFL number of 0.9 this results in $\Delta t = 4.83\cdot10^{-6}$. For the implicit schemes IM1 and IM2, larger time steps can be chosen, given by $\Delta t = 1\cdot10^{-4}$, which corresponds to $\nu_{mat} = 0.4$ or, respectively, $\nu_{ac} = 18.75$ in the CFL condition. The numerical results on the whole domain are given in \cref{fig:Test5full}. To assess the influence of the time step on the oscillations on the fast longitudinal waves, we compare the results of {\color{black}scheme} IM2 for $\nu_{mat}= 0.4$ with those for a smaller time step given by $\nu_{mat} = 0.1$. We see from \cref{fig:Test5full} that the oscillations consistently reduce with a smaller time step. It is evident that the predictor {\color{black}scheme} IM1p is too diffusive to resolve the slow waves. In \cref{fig:Test5}, density, pressure and tangential stress computed with {\color{black}the schemes} IM1 and IM2 are plotted with focus on the slow waves on the domain $[46,54]$. The material and shear waves are captured accurately with {\color{black}schemes} IM1 and IM2 while the results with {\color{black}scheme} IM1p are too diffusive to distinguish between the contact and shear waves in the density. {\color{black}The reference solutions in \cref{fig:Test5full,fig:Test5} are computed with the IM2 scheme on a finer grid with $\Delta x = 2\cdot 10^{-3}$ and $\Delta t = 5\cdot 10^{-6}$. Using an explicit scheme, as done in \cref{fig:Test41} for Test 4.1, even on fine grids still yields a too diffusive profile of the contact and shear waves and is therefore not feasible as a reference solution.} {\color{black}For a comparison of the results with the fully coupled scheme \eqref{eq:RelaxingScheme} given in \cite{AbbIolPup2017}, see Fig. 14 therein.} \begin{figure}[t!]
\begin{center} \includegraphics[scale=0.38]{newTest5_Density.pdf} \includegraphics[scale=0.38]{newTest5_U2.pdf} \includegraphics[scale=0.38]{newTest5_P.pdf} \includegraphics[scale=0.38]{newTest5_tan_stress.pdf} \end{center} \caption{Test 5: Density, velocity, pressure and tangential stress on computational domain $[0,100]$ with $\Delta x = 10^{-2}$ for {\color{black}the schemes} IM1p, IM1 and IM2. } \label{fig:Test5full} \end{figure} \begin{figure}[t!] \begin{center} \includegraphics[scale=0.397]{newzoomTest5_Density.pdf} \includegraphics[scale=0.397]{newzoomTest5_U2.pdf} \includegraphics[scale=0.397]{newzoomTest5_P.pdf} \includegraphics[scale=0.397]{newzoomTest5_Y.pdf} \end{center} \caption{Test 5: Zoom on the slow waves in the center of the computational domain $[0,100]$ with 10000 grid cells. } \label{fig:Test5} \end{figure} \section{Conclusions and further developments} \label{sec:conclusions} We have proposed an implicit relaxed solver based on a Jin-Xin relaxation model for the numerical simulation of all-speed flows. The scheme was constructed for general systems of hyperbolic conservation laws and was applied to the simulation of compressible materials using {\color{black} the Euler equations and} a monolithic Eulerian model of non-linear elasticity. The scheme has proved to be accurate in the approximation of material waves in different Mach regimes as well as in the computation of steady state solutions. The presented scheme showed improvement over the implicit relaxing scheme introduced in \cite{AbbIolPup2017} due to the reduced number of variables that need to be updated and the simple linear decoupled structure of the method. However, {\color{black} in the context of Riemann problems, local oscillations on the acoustic waves in \cref{fig:Test3} and longitudinal waves in \cref{fig:Test41,fig:Test5full} appeared.
This phenomenon was also noted by the authors of \cite{PupSemVis2021} in the context of obtaining linear higher order space reconstruction methods coupled with SDIRK integrators in time for scalar equations. The authors concluded that a limiting in time is necessary to reduce those oscillations. Although the oscillations we observed in our simulations were consistent, of local nature and appeared only on the negligible fast waves, it cannot be excluded that they might cause negative pressures or densities in other applications. Therefore, we plan to incorporate ideas on efficient time limiting proposed in \cite{PupSemVis2021} for systems in our linear implicit framework in future work.} {\color{black} Another issue that presented itself during the simulation of long term monitoring of material waves as in \cref{fig:Test42,fig:Test5full} is the incomplete knowledge of boundary conditions; it is connected to the relaxation model the scheme is based on. Since the number of equations in the relaxation model is doubled with respect to the original equations, twice as many boundary conditions are necessary to describe the behaviour of the medium at the boundaries. In the relaxation limit, however, the number of characteristics reduces, but the nature of the problem changes from hyperbolic to parabolic as an elliptic operator appears for the state variables. To avoid the extension of the computational domain, as done in Tests 4 and 5 for long term simulations, it would be preferable to have an appropriate description of the flow at the boundaries. This will be the subject of future work.} \section*{Acknowledgments} A. Thomann has been partially supported by the Gutenberg Research College, JGU Mainz, and PROCOPE-MOBILITE-2021 granted by the \textit{Service pour la science et la technologie de l'Ambassade de France en Allemagne}. Further, G. Puppo acknowledges the support of PRIN2017 and Sapienza, Progetto di Ateneo RM120172B41DBF3A.
\bibliographystyle{siamplain}
\section{Introduction and Preliminaries} It is well known that the class of distribution semigroups in Banach spaces was introduced by J. L. Lions \cite{li121} in 1960 in an attempt to seek solutions of abstract first order differential equations that are not well-posed in the usual sense, i.e., whose solutions are not governed by strongly continuous semigroups of linear operators. From then on, distribution semigroups have attracted the attention of a large number of mathematicians (cf. \cite{a22}, \cite{b52}-\cite{barbu1}, \cite{fat1}-\cite{fujiwara}, \cite{lar}, \cite{peet} and \cite{yosi} for more details about distribution semigroups in Banach spaces with densely defined generators). Following the pioneering works by G. Da Prato, E. Sinestrari \cite{d81}, W. Arendt \cite{a11} and E. B. Davies, M. M. Pang \cite{d811qw} (cf. also S. \=Ouchi \cite{o192}), there has been growing interest in dropping the usually imposed density assumptions in the theory of first order differential equations and in discussing various generalizations of strongly continuous semigroups, such as integrated semigroups, $C$-regularized semigroups and $K$-convoluted semigroups (cf. \cite{knjigah} for a comprehensive survey of results). The class of distribution semigroups with not necessarily densely defined generators has been introduced independently by P. C. Kunstmann \cite{ku112} and S. W. Wang \cite{w241}, while the class of $C$-distribution semigroups has been introduced by the first named author in \cite{ko98} (cf. \cite{ki90}, \cite{knjigah}, \cite{ku101}, \cite{ku112}, \cite{kmp} and \cite{me152}-\cite{mija} for further information in this direction). Ultradistribution semigroups in Banach spaces, with densely or non-densely defined generators, and abstract Beurling spaces have been analyzed in the papers of R. Beals \cite{b41}-\cite{b42}, J. Chazarain \cite{cha}, I. Cior\u anescu \cite{ci1}, I. Cior\u anescu, L. Zsido \cite{cizi}, P. R. Chernoff \cite{chernof}, H. A.
Emami-Rad \cite{er} and H. Komatsu \cite{k92} (cf. also \cite{knjigah}, \cite{diff1}, \cite{ptica}-\cite{tica}, \cite{ku113} and \cite{me152}). On the other hand, the study of distribution semigroups in locally convex spaces has been initiated by R. Shiraishi, Y. Hirata \cite{1964}, T. Ushijima \cite{ush1} and M. Ju Vuvunikjan \cite{vuaq}. To the best knowledge of the authors, there is no significant reference which treats ultradistribution semigroups in locally convex spaces.\\ In this paper, we introduce and systematically analyze the classes of $C$-distribution semigroups and $C$-ultradistribution semigroups in the setting of sequentially complete locally convex spaces, providing additionally a large number of relevant references on the subjects under consideration. We provide a few theoretical novelties. For example, the notion of a pre-distribution semigroup and the notion of a non-dense distribution semigroup seem to be completely new and not considered elsewhere, with the exception of the classical case in which $E$ is a Banach space. The definition of stationary dense operators is a continuation of an investigation on dense distribution semigroups. By studying $A$ in a sequentially complete locally convex space $E$ we can study the behavior of $A_{\infty}$ in $D_{\infty}(A)$. Naturally, we can pose the question whether they are somehow related. The answer is affirmative. If a closed linear operator $A$ is stationary dense and satisfies some additional conditions, the information lost in passing from $A$ in $E$ to $A_{\infty}$ in $D_{\infty}(A)$ can be retrieved. In the case of dense ultradistribution semigroups, the definition of stationary dense operators cannot be considered.\\ In \cite{ku101}, stationary dense operators in a Banach space $E$ are defined and their connections to dense distribution semigroups are investigated.
When dealing with semigroups on locally convex spaces, in order for the resolvent of $A$ to exist, where $A$ is the infinitesimal generator of the semigroup, we suppose that the semigroups are equicontinuous. Further on, new examples of stationary dense operators on sequentially complete locally convex spaces will be given, and the results of \cite{ku101}, \cite{ush}, \cite{fujiwara} will be extended. The organization of the paper can be briefly described as follows. In Section 2, we analyze the $C$-wellposedness of the first order Cauchy problem in the sense of distributions and ultradistributions, with special emphasis on the study of $C$-generalized resolvents of linear operators in Subsection 2.1. Section 3 is devoted to the study of the main structural properties of $C$-distribution semigroups and $C$-ultradistribution semigroups. In Section 4, stationary dense operators in sequentially complete locally convex spaces are considered, following the investigation in \cite{ku101}. \subsection{Notation} We use the standard notation throughout the paper. Unless specified otherwise, we shall always assume that $E$ is a Hausdorff sequentially complete locally convex space over the field of complex numbers, SCLCS for short.
If $X$ is also a SCLCS, then we denote by $L(E,X)$ the space consisting of all continuous linear mappings from $E$ into $X;$ $L(E)\equiv L(E,E).$ By $\circledast_{E}$ ($\circledast$, if there is no risk of confusion), we denote the fundamental system of seminorms which defines the topology of $E.$ By $L_{\circledast}(E)$ we denote the subspace of $L(E)$ consisting of those continuous linear mappings $T$ from $E$ into $E$ satisfying that for each $p\in \circledast$ there exists $c_{p}>0$ such that $p(Tx)\leq c_{p} p(x),$ $x\in E.$ Let ${\mathcal B}$ be the family of bounded subsets\index{bounded subset} of $E,$ and let $p_{B}(T):=\sup_{x\in B}p(Tx),$ $p\in \circledast_{X},$ $B\in {\mathcal B},$ $T\in L(E,X).$ Then $p_{B}(\cdot)$ is a seminorm\index{seminorm} on $L(E,X)$ and the system $(p_{B})_{(p,B)\in \circledast_{X} \times {\mathcal B}}$ induces the Hausdorff locally convex topology on $L(E,X).$ If $E$ is a Banach space, then we denote by $\|x\|$ the norm of an element $x\in E.$ The Hausdorff locally convex topology on $E^{\ast},$ the dual space\index{dual space} of $E,$ is defined by the system $(|\cdot|_{B})_{B\in {\mathcal B}}$ of seminorms on $E^{\ast},$ where and in the sequel $|x^{\ast}|_{B}:=\sup_{x\in B}|\langle x^{\ast}, x \rangle |,$ $x^{\ast} \in E^{\ast},$ $B\in {\mathcal B}.$ Here $\langle \ , \ \rangle $ denotes the duality bracket between $E$ and $E^{\ast};$ sometimes we shall also write $\langle x , x^{\ast} \rangle$ or $x^{\ast}(x)$ to denote the value of $\langle x^{\ast}, x \rangle .$ Let us recall that the spaces $L(E)$ and $E^{\ast}$ are sequentially complete provided that $E$ is barreled\index{barreled space} (\cite{meise}).
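As a simple illustration of these seminorms (an observation included only for orientation): if $E$ is a Banach space, then every $B\in {\mathcal B}$ is contained in a multiple of the closed unit ball $B_{1}:=\{x\in E : \|x\|\leq 1\}$, so the family $(p_{B})$ is already generated by

```latex
p_{B_{1}}(T)=\sup_{\|x\|\leq 1}\|Tx\|=\|T\|_{L(E)},\qquad T\in L(E),
```

that is, the topology of uniform convergence on bounded sets on $L(E)$ reduces in this case to the operator norm topology.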
By $E^{\ast \ast}$ we denote the bidual of $E.$ Recall, the polars of nonempty sets $M\subseteq E$ and $N\subseteq E^*$ are defined as follows: $M^{\circ}:=\{y\in E^*:|y(x)|\leq 1\text{ for all } x\in M\}$ and $N^{\circ}:=\{x\in E:\;|y(x)|\leq 1\text{ for all } y\in N\}.$ If $A$ is a linear operator acting on $E,$ then the domain, kernel space and range of $A$ will be denoted by $D(A),$ $N(A)$ and $R(A),$ respectively. Since no confusion seems likely, we will identify $A$ with its graph. In the remaining part of this paragraph, we assume that the operator $A$ is closed. Set $p_{A}(x):=p(x)+p(Ax),$ $x\in D(A),$ $p\in \circledast$. Then the calibration $(p_{A})_{p\in \circledast}$ induces the Hausdorff sequentially complete locally convex topology on $D(A);$ we denote this space simply by $[D(A)].$ Set $D_{\infty}(A):=\bigcap_{n=1}^{\infty}D(A^n).$ Then the space $D_{\infty}(A),$ equipped with the following system of seminorms $p_{n}(x):=p(x)+p(Ax)+p(A^2x)+\cdot \cdot \cdot+p(A^nx),$ $x\in D_{\infty}(A)$ ($n \in {\mathbb N},$ $p\in \circledast$) becomes an SCLCS; we will denote this space by $[D_{\infty}(A)].$ Clearly, if $E$ is a Fr\' echet space, then $[D_{\infty}(A)]$ is a Fr\' echet space, as well. If $C\in L(E)$ is injective, then we define the $C$-resolvent set of $A,$ $\rho_{C}(A)$ for short, by \begin{equation}\label{C-res} \rho_{C}(A):=\Bigl\{\lambda \in {\mathbb C} : \lambda -A \mbox{ is injective and } (\lambda-A)^{-1}C\in L(E)\Bigr\}.
\end{equation} By the closed graph theorem\index{Closed Graph Theorem} \cite{meise}, the following holds: If $E$ is a webbed bornological space\index{space!webbed bornological} (this, in particular, holds if $E$ is a Fr\' echet space\index{space!Fr\' echet}), then the $C$-resolvent set\index{$C$-resolvent set} of $A$ consists of those complex numbers $\lambda$ for which the operator $\lambda -A$ is injective and $R(C) \subseteq R(\lambda -A).$ The resolvent set of $A,$ denoted by $\rho(A),$ is nothing else but the $I$-resolvent set of $A,$ where $I$ denotes the identity operator on $E.$ Unless stated otherwise, we shall always assume that $CA\subseteq AC.$ By $\sigma_{p}(A),$ $\sigma_{c}(A)$ and $\sigma_{r}(A)$ we denote the point, continuous and residual spectrum of $A,$ respectively. For a closed linear operator $A$, we introduce the subset $A^*$ of $E^*\times E^*$ by $$ A^{*}:=\Bigl\{\bigl(x^*, y^*\bigr)\in E^*\times E^*:x^*(Ax)=y^*(x)\text{ for all }x\in D(A)\Bigr\}. $$ If $A$ is densely defined, then $A^*$ is also known as the adjoint operator\index{operator!adjoint} of $A$ and it is a closed linear operator on $E^*$. The exponential region $e(a,b)$ has been defined for the first time by W. Arendt, O. El-Mennaoui and V. Keyantuo in \cite{a22}: $$ e(a,b):=\Bigl\{\lambda\in\mathbb{C}:\Re\lambda\geq b,\:|\Im\lambda|\leq e^{a\Re\lambda}\Bigr\} \ \ (a,\ b>0). $$ \subsection{Structural properties} Now we are going to explain the notions of various types of generalized function spaces used throughout the paper. We begin with the recollection of the most important properties of vector-valued distribution spaces (cf. \cite{ant1}, \cite{a43}, \cite{fat1}, \cite{komatsu}, \cite{knjigah}, \cite{kothe1}, \cite{martinez}-\cite{meise}, \cite{stan}, \cite{sch16}, \cite{yosida} and references cited therein for the basic information in this direction).
The Schwartz spaces of test functions $\mathcal{D}=C_0^{\infty}(\mathbb{R})$ and $\mathcal{E}=C^{\infty}(\mathbb{R})$ carry the usual topologies. The spaces $\mathcal{D}'(E):=L(\mathcal{D},E)$, $\mathcal{E}'(E):=L(\mathcal{E},E)$ and $\mathcal{S}'(E):=L(\mathcal{S},E)$ are topologized in the obvious way; $\mathcal{D}'_{\Omega}(E)$, $\mathcal{E}'_{\Omega}(E)$ and $\mathcal{S}'_{\Omega}(E)$ denote the subspaces of $\mathcal{D}'(E)$, $\mathcal{E}'(E)$ and $\mathcal{S}'(E)$, respectively, containing $E$-valued distributions whose supports are contained in $\Omega ;$ $\mathcal{D}'_{0}(E)\equiv \mathcal{D}'_{[0,\infty)}(E)$, $\mathcal{E}'_{0}(E)\equiv \mathcal{E}'_{[0,\infty)}(E)$, $\mathcal{S}'_{0}(E)\equiv \mathcal{S}'_{[0,\infty)}(E).$ In the case that $E={\mathbb C},$ the above spaces are also denoted by $\mathcal{D}',$ $\mathcal{E}',$ $\mathcal{S}',$ $\mathcal{D}'_{\Omega},$ $\mathcal{E}'_{\Omega},$ $\mathcal{S}'_{\Omega},$ $\mathcal{D}_0'$, $\mathcal{E}_0'$ and $\mathcal{S}_0'.$ If $\varphi$, $\psi:\mathbb{R}\to\mathbb{C}$ are measurable functions, then we define the convolution products $\varphi*\psi$ and $\varphi*_0\psi$ by $$ \varphi*\psi(t):=\int\limits_{-\infty}^{\infty}\varphi(t-s)\psi(s)\,ds\mbox{ and } \varphi*_0 \psi(t):=\int\limits^t_0\varphi(t-s)\psi(s)\,ds,\;t\;\in\mathbb{R}. $$ Notice that $\varphi*\psi=\varphi*_0\psi$, provided that supp$(\varphi)$ and supp$(\psi)$ are subsets of $[0,\infty).$ Given $\varphi\in\mathcal{D}$ and $f\in\mathcal{D}'$, or $\varphi\in\mathcal{E}$ and $f\in\mathcal{E}'$, we define the convolution $f*\varphi$ by $(f*\varphi)(t):=f(\varphi(t-\cdot))$, $t\in\mathbb{R}$. For $f\in\mathcal{D}'$, or for $f\in\mathcal{E}'$, define $\check{f}$ by $\check{f}(\varphi):=f(\varphi (-\cdot))$, $\varphi\in\mathcal{D}$ ($\varphi\in\mathcal{E}$). Generally, the convolution of two distributions $f$, $g\in\mathcal{D}'$, denoted by $f*g$, is defined by $(f*g)(\varphi):=g(\check{f}*\varphi)$, $\varphi\in\mathcal{D}$.
Then we know that $f*g\in\mathcal{D}'$ and supp$(f*g)\subseteq$supp$ (f)+$supp$(g)$. Let $G$ be an $E$-valued distribution, and let $f : {\mathbb R} \rightarrow E$ be a locally integrable function (cf. \cite[Definition 1.1.4, Definition 1.1.5]{knjigaho}). As in the scalar-valued case, we define the $E$-valued distributions $G^{(n)}$ ($n\in {\mathbb N}$) and $ hG$ ($h\in {\mathcal E}$); the regular $E$-valued distribution ${\mathbf f}$ is defined by ${\mathbf f}(\varphi):=\int_{-\infty}^{\infty}\varphi (t) f(t) \, dt$ ($\varphi \in {\mathcal D}$). We need the following auxiliary lemma whose proof can be deduced as in the scalar-valued case (\cite{stan}). \begin{lem}\label{polinomi} Suppose that $0<\tau \leq \infty,$ $n\in {\mathbb N}$. If $f : (0,\tau) \rightarrow E$ is a continuous function and $$ \int \limits^{\tau}_{0}\varphi^{(n)}(t)f(t)\, dt=0,\quad \varphi \in {\mathcal D}_{(0,\tau)}, $$ then there exist elements $x_{0},\cdot \cdot \cdot, x_{n-1}$ in $E$ such that $f(t)=\sum^{n-1}_{j=0}t^{j}x_{j},$ $t\in (0,\tau).$ \end{lem} Let $\tau>0,$ and let $X$ be a general Hausdorff locally convex space (not necessarily sequentially complete). Following L. Schwartz \cite{sch166}, it will be said that a distribution $G\in {\mathcal D}'(X)$ is of finite order on the interval $(-\tau,\tau)$ iff there exist an integer $n\in {\mathbb N}_{0}$ and an $X$-valued continuous function $f : [-\tau,\tau] \rightarrow X$ such that $$ G(\varphi)=(-1)^{n}\int^{\tau}_{-\tau}\varphi^{(n)}(t)f(t)\, dt,\quad \varphi \in {\mathcal D}_{(-\tau,\tau)}. 
$$ Let us recall that the spaces of Beurling, respectively, Roumieu ultradifferentiable functions are defined by $\mathcal{D}^{(M_p)}:=\mathcal{D}^{(M_p)}(\mathbb{R}) :=\text{indlim}_{K\Subset\Subset\mathbb{R}}\mathcal{D}^{(M_p)}_K$, respectively, $\mathcal{D}^{\{M_p\}}:=\mathcal{D}^{\{M_p\}}(\mathbb{R}) :=\text{indlim}_{K\Subset\Subset\mathbb{R}}\mathcal{D}^{\{M_p\}}_K$ (where $K$ goes through all compact subsets of $\mathbb{R}$), where $\mathcal{D}^{(M_p)}_K:=\text{projlim}_{h\to\infty}\mathcal{D}^{M_p,h}_K$, respectively, $\mathcal{D}^{\{M_p\}}_K:=\text{indlim}_{h\to 0}\mathcal{D}^{M_p,h}_K$, with \begin{align*} \mathcal{D}^{M_p,h}_K:=\bigl\{\phi\in C^{\infty}(\mathbb{R}): \text{supp}(\phi) \subseteq K,\;\|\phi\|_{M_p,h,K}<\infty\bigr\}, \end{align*} \begin{align*} \|\phi\|_{M_p,h,K}:=\sup\Biggl\{\frac{h^p\bigl|\phi^{(p)}(t)\bigr|}{M_p} : t\in K,\;p\in\mathbb{N}_0\Biggr\}. \end{align*} Henceforth the asterisk $*$ stands for both cases. Let $\emptyset \neq \Omega \subseteq {\mathbb R}.$ The spaces $\mathcal{D}'^*(E):=L(\mathcal{D}^*, E)$, $\mathcal{D}^{*}_{\Omega}$, $\mathcal{D}^{\ast}_0$, $\mathcal{E}'^{*}_{\Omega}$, $\mathcal{E}'^{*}_{0}$, $\mathcal{D}'^{*}_{\Omega}(E)$ and $\mathcal{D}'^{*}_{0}(E)$ are defined as in the case of the Schwartz spaces. An entire function of the form $P(\lambda)=\sum_{p=0}^{\infty}a_p\lambda^p$, $\lambda\in\mathbb{C}$, is of class $(M_p)$, respectively, of class $\{M_p\}$, if there exist $l>0$ and $C>0$, respectively, for every $l>0$ there exists a constant $C>0$, such that $|a_p|\leq Cl^p/M_p$, $p\in\mathbb{N};$ cf. \cite{k91} for further information. The corresponding ultradifferential operator $P(D)=\sum_{p=0}^{\infty}a_p D^p$ is of class $(M_p)$, respectively, of class $\{M_p\}$.
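For a concrete example (included here only as an illustration of the coefficient conditions above), consider

```latex
% |a_p| = 1/M_p satisfies |a_p| \le C l^p / M_p with C = l = 1, so P_1 is of
% class (M_p); |a_p| = 1/(p! M_p) satisfies the bound for every l > 0 with
% C = \sup_{p} 1/(p!\, l^p) < \infty, so P_2 is of class \{M_p\}:
P_{1}(D)=\sum_{p=0}^{\infty}\frac{1}{M_{p}}\,D^{p},\qquad
P_{2}(D)=\sum_{p=0}^{\infty}\frac{1}{p!\,M_{p}}\,D^{p}.
```

Hence $P_{1}(D)$ is an ultradifferential operator of class $(M_p)$, while $P_{2}(D)$ is of class $\{M_p\}$.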
Since $(M_{p})$ satisfies (M.2), the ultradifferential operator $P(D)$ of $\ast$-class, acting via $$ \langle P(D)G,\varphi \rangle :=\langle G, P(-D)\varphi \rangle,\quad G\in \mathcal{D}'^*(E),\ \varphi \in \mathcal{D}^*, $$ is a continuous linear mapping from $\mathcal{D}'^*(E)$ into $\mathcal{D}'^*(E).$ The multiplication by a function $a\in {\mathcal E}^{\ast}(\Omega)$, convolution of scalar valued ultradistributions (ultradifferentiable functions), and the notion of a regularizing sequence in $\mathcal{D}^*,$ are defined as in the case of distributions; we know that there exists a regularizing sequence in $\mathcal{D}$ ($\mathcal{D}^*$). If $\varphi\in\mathcal{D}^{\ast}$ ($T\in {\mathcal E}'^{\ast}$) and $G\in\mathcal{D}'^{\ast}(E)$, then $\varphi \ast G \in {\mathcal E}^{\prime \ast}(E)$ and $T \ast G \in {\mathcal D}^{\prime \ast}(E)$ (cf. \cite[p. 685]{k82}, and \cite[Definition 3.9]{k82} for the notion of the space ${\mathcal E}^{\prime \ast}(E)$). Following \cite[Definition 4.5]{k82}, we say that a vector-valued ultradistribution $G\in\mathcal{D}'^*(E)$ is bounded iff it maps any neighborhood of zero in $\mathcal{D}^*$ into a bounded subset of $E.$ A vector-valued ultradistribution $G\in\mathcal{D}'^*(E)$ is said to be locally bounded iff for every compact subset $K$ of $\Omega ,$ $G$ maps any neighborhood of zero in $\mathcal{D}^{*}_{K}$ into a bounded subset of $E.$ Recall that any vector-valued ultradistribution $G\in\mathcal{D}'^*(E)$ is bounded (locally bounded) if $E$ is metrizable (if $E$ is a (DF) space). \begin{thm}\label{1.3.1.4} (\cite[Theorem 4.6, Theorem 4.7]{k82}) \emph{(i)} Let $G\in\mathcal{D}'^*(E)$ be locally bounded.
Then, for each relatively compact non-empty open set $\Omega\subseteq\mathbb{R},$ there exists a sequence of continuous functions $(f_n)$ in $E^{\overline{\Omega}}$ such that $$ G_{|\Omega}=\sum_{n=0}^{\infty}D^nf_n $$ and there exists $L>0$ in the Beurling case, resp., for every $L>0$ in the Roumieu case, such that the set $\{M_{n}L^{n}f_n(t) : t\in\overline{\Omega},\ n\in\mathbb{N}\}$ is bounded in $E.$ \emph{(ii)} Let $G\in\mathcal{D}'^*(E)$ be locally bounded. Suppose, additionally, that $(M_p)$ satisfies \emph{(M.3)}. Then for each relatively compact non-empty open set $\Omega\subseteq\mathbb{R}$ there exist an ultradifferential operator $P(D)$ of $*$-class and a continuous function $f:\overline{\Omega}\to E$ such that $G_{|\Omega}=P(D)f$. \end{thm} \begin{thm}\label{delta} (\cite[Theorem 4.8]{k82}) Suppose that $(M_p)$ additionally satisfies \emph{(M.3)} as well as that $G\in\mathcal{D}'^*(E)$ and \emph{supp}$(G)\subseteq\{0\}$. Then there exists a sequence $(x_n)_{n\in {{\mathbb N}_{0}}}$ in $E$ such that $G(\varphi)=\sum_{n=0}^{\infty}\delta^{(n)}(\varphi)x_n$, $\varphi\in\mathcal{D}^*$ and there exists $L>0$ in the Beurling case, resp., for every $L>0$ in the Roumieu case, such that the set $\{M_{n}L^{n}x_n : n\in\mathbb{N}\}$ is bounded in $E.$ \end{thm} The characterization of vector-valued distributions supported by a point has been studied by R. Shiraishi, Y. Hirata \cite{1964} and T. Ushijima \cite{ush1}. Their results can be briefly described as follows: Suppose that $G\in\mathcal{D}'(E)$ and supp$(G)\subseteq\{0\}$. If there exists a norm $\| \cdot \|$ on $E$ satisfying $\|x\| \leq cp(x),$ $x\in E$ for some $p\in \circledast$ and $c>0,$ or if $E$ satisfies the conditions ($\ast$)-($\ast$)' stated on page 82 of \cite{1964}, then there exist $n\in\mathbb{N}$ and $x_i\in E$, $0\leq i\leq n$, such that $G(\varphi)=\sum_{i=0}^n\delta^{(i)}(\varphi)x_i$, $\varphi\in\mathcal{D}$.
If the space $E$ satisfies the property that any vector-valued distribution $G\in\mathcal{D}'(E)$ with supp$(G)\subseteq\{0\}$ can be represented as a finite sum of vector-valued distributions like $\delta^{(i)} \otimes x_{i}$ (cf. the next paragraph for the notion), then we shall simply say that $E$ is admissible. We need the following basic facts about the Laplace transform of (ultra-)distributions (cf. T. K\=omura \cite{komura}). Let $$ \hat{\varphi}(\lambda):=\frac{1}{2\pi} \int \limits^{\infty}_{-\infty}e^{\lambda t}\varphi(t)\, dt,\quad \lambda \in {\mathbb C},\ \varphi \in {\mathcal D} \ \ (\varphi \in {\mathcal D}^{\ast}). $$ Set ${\mathbf D}:=\{ \hat{\varphi} : \varphi \in {\mathcal D}\}$ and ${\mathbf D}^{\ast}:=\{ \hat{\varphi} : \varphi \in {\mathcal D}^{\ast}\},$ $\hat{\varphi} +\hat{\psi}:=\widehat{\varphi +\psi},$ $\lambda \hat{\varphi}:=\hat{\lambda \varphi}$ ($\lambda \in {\mathbb C};$ $\varphi,$ $\psi \in {\mathcal D}$ (${\mathcal D}^{\ast}$)). Then the mapping $\hat \ : {\mathcal D} \rightarrow {\mathbf D}$ ($\hat \ : {\mathcal D}^{\ast} \rightarrow {\mathbf D}^{\ast}$) is a linear isomorphism between the vector spaces ${\mathcal D}$ and ${\mathbf D}$ (${\mathcal D}^{\ast}$ and ${\mathbf D}^{\ast}$). It is said that a subset ${\mathbf T}=\{\hat{\varphi} : \varphi \in {\mathcal T}\}$ of ${\mathbf D}$ (${\mathbf D}^{\ast}$) is open iff the set ${\mathcal T}$ is open in ${\mathcal D}$ (${\mathcal D}^{\ast}$). Then the mapping $\hat{\cdot}$ becomes a linear topological homeomorphism between the Hausdorff locally convex spaces ${\mathcal D}$ (${\mathcal D}^{\ast}$) and ${\mathbf D}$ (${\mathbf D}^{\ast}$). 
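For $\varphi\in{\mathcal D}$ with supp$(\varphi)\subseteq[-k,k]$, the entire function $\hat{\varphi}$ satisfies an elementary growth estimate; we record it here (a small verification, included only for the reader's convenience) since it foreshadows the Paley-Wiener type results below:

```latex
% Using |e^{\lambda t}| = e^{t \Re\lambda} \le e^{k|\Re\lambda|} for |t| \le k:
\bigl|\hat{\varphi}(\lambda)\bigr|
 \leq\frac{1}{2\pi}\int\limits_{-k}^{k}e^{t\Re\lambda}\bigl|\varphi(t)\bigr|\,dt
 \leq\frac{e^{k|\Re\lambda|}}{2\pi}\int\limits_{-k}^{k}\bigl|\varphi(t)\bigr|\,dt,
 \qquad\lambda\in\mathbb{C}.
```

The bounds appearing in the theorems below can be viewed as (ultra-)distributional refinements of this estimate.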
Set ${\mathbf D}'(E):=L({\mathbf D},E)$ and ${\mathbf D}'^{*}(E):=L({\mathbf D}^{\ast},E).$ There exists a linear topological homeomorphism $\hat{\cdot} : {\mathcal D}'(E) \rightarrow {\mathbf D}'(E)$ ($\hat{\cdot} : {\mathcal D}'^{\ast}(E) \rightarrow {\mathbf D}'^{\ast}(E)$) defined by $\hat{G}(\hat{\varphi}):=G(\varphi),$ $\varphi \in {\mathcal D}$ ($\hat{G}(\hat{\varphi}):=G(\varphi),$ $\varphi \in {\mathcal D}^{\ast}$) for all $G\in {\mathcal D}'(E)$ ($G\in{\mathcal D}'^{\ast}(E)$). The functional $\hat{G}$ is called the generalized Laplace transform of $G.$ Set, for every non-empty subset $\Omega$ of ${\mathbb R},$ ${\mathbf D}'_{\Omega}(E):=\{\hat{G} : G\in {\mathcal D}'_{\Omega}(E)\}$ (${\mathbf D}'^{*}_{\Omega}(E):=\{\hat{G} : G\in {\mathcal D}'^{*}_{\Omega}(E)\}$); ${\mathbf D}'_{0}(E)\equiv {\mathbf D}'_{[0,\infty)}(E)$ (${\mathbf D}'^{*}_{0}(E) \equiv {\mathbf D}'^{*}_{[0,\infty)}(E)$). Then ${\mathbf D}'_{0}(E)$ (${\mathbf D}'^{*}_{0}(E)$) is a closed subspace of ${\mathbf D}'(E)$ (${\mathbf D}'^{*}(E)$) and it is topologically homeomorphic to ${\mathcal D}'_{0}(E)$ (${\mathcal D}'^{*}_{0}(E)$) by the mapping $\hat{\cdot}.$ If $F\in {\mathcal D}'$ ($F\in {\mathcal D}'^{\ast}$) and $x\in E$, then we define $F \otimes x \in {\mathcal D}'(E)$ ($F \otimes x \in {\mathcal D}'^{\ast}(E)$) and $\hat{F} \otimes x \in {\mathbf D}'(E)$ ($\hat{F} \otimes x \in {\mathbf D}'^{\ast}(E)$) by $\langle F \otimes x ,\varphi \rangle:=\langle F , \varphi \rangle x,$ $\varphi \in {\mathcal D}$ ($\varphi \in {\mathcal D}^{\ast}$) and $\langle \hat{F} \otimes x ,\hat{\varphi} \rangle :=\langle F, \varphi \rangle x,$ $\varphi \in {\mathcal D}$ ($\varphi \in {\mathcal D}^{\ast}$). If $T \in {\mathcal E}'(E)$ ($T \in {\mathcal E}'^{\ast}(E)$), then we define the Laplace transform of $T$ by $$ \hat{T}(\lambda):=\bigl \langle T(x), e^{\lambda x} \bigr \rangle,\quad \lambda \in {\mathbb C}. 
$$ \begin{thm}\label{p-w-dis} (\cite{komura}) Let $k>0.$ Then an $E$-valued entire function $f(\lambda)$ is the Laplace transform of a distribution $T\in {\mathcal D}'_{[-k,k]}(E)$ iff for every $x^{\ast}\in E^{\ast}$ there exist $n\in {\mathbb N}$ and $c>0$ such that $$ \bigl | \bigl \langle x^{\ast}, f(\lambda) \bigr \rangle \bigr|\leq c(1+|\lambda|)^{n}e^{k|\Re \lambda|},\quad \lambda \in {\mathbb C}. $$ \end{thm} The proof of the following Paley-Wiener theorem for $E$-valued ultradistributions with compact support can be deduced with the help of the corresponding assertion for scalar-valued ultradistributions \cite[Theorem 1.1]{k911} and the idea from \cite{komura}. This result can be viewed as being of some independent interest; we will include all relevant details of the proof for the sake of completeness. \begin{thm}\label{EvPWU} (\emph{The Paley-Wiener theorem for $E$-valued ultradistributions}) Let \emph{(M.1)}, \emph{(M.2)} and \emph{(M.3)} hold, let ${\mathcal R}$ denote the set consisting of all positive monotonically increasing sequences, and let $$ M_{r_p}(\rho)=\sup_{p\in {\mathbb N}}\Biggl\{\ln \frac{{\rho}^p}{M_p\prod_{i=1}^{p}r_i}\Biggr\},\quad \rho>0. $$ An $E$-valued entire function $\hat{u}(\lambda)$ is the generalized Laplace transform of an $E$-valued ultradistribution $u$ of $\ast$-class with support contained in a non-empty compact subset $K\subseteq {\mathbb R}$ iff for every $x^{\ast}\in E^{\ast}$ there exist $h>0$ and $c>0$, in the Beurling case, resp., there exist $(r_p)\in{\mathcal R}$ and $c_{r_p}>0,$ in the Roumieu case, such that \begin{align} \notag \bigl | & \bigl \langle x^{\ast}, \hat{u}(\lambda) \bigr \rangle \bigr|\leq c e^{M(\lambda/h)+H_K(i\lambda)},\quad \lambda\in{\mathbb C},\mbox{ resp.,} \\ \label{E2} & \bigl| \bigl \langle x^{\ast}, \hat{u}(\lambda) \bigr \rangle \bigr|\leq c_{r_p} e^{M_{r_p}(\lambda)+H_K(i\lambda)},\quad \lambda\in{\mathbb C}.
\end{align} Here $H_{K}(\lambda):=\sup_{x\in K} \Im (x\lambda),$ $\lambda \in {\mathbb C}.$ \end{thm} \begin{proof} The proof of necessity follows almost immediately from the Paley-Wiener theorem for scalar-valued ultradistributions (see \cite[Theorem 1.1]{k911}). In the proof of sufficiency, we will consider only the Roumieu case. So, let us assume that $\hat{u}(\lambda)$ is an $E$-valued entire function satisfying (\ref{E2}). It suffices to show (see \cite[Example, p. 267]{komura} and \cite[Lemma 3.3]{k91}) that $\hat{u}(\lambda)$ satisfies: For any continuous seminorm $p$ on $E$, there exist $({r_p})\in{\mathcal R}$ and $c_{r_p}>0$ such that \begin{equation}\label{ponom} p\bigl(\hat{u}(\lambda)\bigr)\leq c_{r_p} e^{M_{r_p}(\lambda)+H_K(i\lambda)}, \quad \lambda\in{\mathbb C}. \end{equation} Suppose to the contrary that (\ref{E2}) holds but (\ref{ponom}) does not. We will construct sequences $(r_n)$, $(c_{r_n})$, $(\varepsilon_n)$, $(\lambda_n)$ and $(x_{n}^{\ast})$ satisfying certain properties. Set $\varepsilon_{1}:=1/2.$ It is clear that there exist $\lambda_1\in{\mathbb C}$, $r_1>0$ and $c_{r_1}>0$ such that, for some continuous seminorm $q$ on $E$, one has $$ q\bigl(\hat{u}(\lambda_1)\bigr)>c_{r_1}e^{M_{r_1}(\lambda_1)+H_K(i\lambda_1)}. $$ Observe also that $q(x)=\sup_{x^{\ast}\in U^{\circ}}|\langle x^{\ast},x\rangle|$ for all $x\in E,$ where $U=\{x\in E\, :\, q(x)\leq1\}.$ Choose $x_{1}^{\ast}\in U^{\circ}$ such that $|\langle x_{1}^{\ast},\hat{u}(\lambda_1)\rangle|>c_{r_1}e^{M_{r_1}(\lambda_1)+H_K(i\lambda_1)}$. Suppose that $(r_i)$, $(c_{r_i})$, $(\varepsilon_i)$, $(\lambda_i)$ and $(x_{i}^{\ast})$ are determined for $1\leq i\leq N-1.$ Then there exist $c_{r_{N}}>c_{r_{N-1}}+1$ and $r_{N}>r_{N-1}+1$ with \begin{equation}\label{zvpar} \bigl| \bigl \langle x_{n}^{\ast},\hat{u}(\lambda) \bigr \rangle \bigr|\leq c_{r_N} e^{M_{r_N}(\lambda)+H_K(i\lambda)},\quad \lambda \in {\mathbb C},\ 1\leq n\leq N-1.
\end{equation} Having in mind that the set $\{|\langle x^{\ast}, {\hat{u}}(\lambda_i)\rangle|\, :\,\, 1\leq i\leq N-1,\ x^{\ast}\in U^{\circ}\}$ is bounded, we obtain that there exists $\varepsilon_N\leq1/2^N$ such that $$ \sup\limits_{1\leq i\leq N-1, x^{\ast}\in U^{\circ}}\bigl| \bigl \langle x^{\ast},\hat{u}(\lambda_i) \bigr \rangle \bigr |\leq\frac{1}{2^N\varepsilon_N}. $$ Now, by the assumption, there exists $\lambda_{N} \in {\mathbb C}$ such that $$ q\bigl(\hat{u}(\lambda_N)\bigr)>\frac{3c_{r_N}}{\varepsilon_N}e^{M_{r_N}(\lambda_N)+H_K(i\lambda_N)}. $$ There is an $x_{N}^{\ast}\in U^{\circ}$ such that $$ \bigl| \bigl\langle\hat{u}(\lambda_N),x_{N}^{\ast} \bigr\rangle \bigr|>\frac{3c_{r_N}}{\varepsilon_N}e^{M_{r_N}(\lambda_N)+H_K(i\lambda_N)}. $$ Set $x_{\infty}^{\ast}:=\sum_{n=1}^{\infty}\varepsilon_n x_{n}^{\ast}$. Since $U^{\circ}$ is convex, balanced and $\sigma$-compact, and since $\sum_{n=1}^{\infty}\varepsilon_n\leq 1$, we have $x_{\infty}^{\ast}\in U^{\circ}$. Hence, by (\ref{zvpar}), \begin{align*} \Bigl| \bigl \langle\hat{u}&(\lambda_N),x_{\infty}^{\ast} \bigr \rangle\Bigr|=\Biggl|\sum\limits_{n=1}^{\infty}\bigl \langle\hat{u}(\lambda_N),\varepsilon_nx_{n}^{\ast} \bigr \rangle \Biggr| \\ & \geq \Bigl| \bigl \langle\hat{u}(\lambda_N),\varepsilon_Nx_{N}^{\ast} \bigr \rangle\Bigr|-\sum\limits_{n=1}^{N-1}\Bigl| \bigl \langle\hat{u}(\lambda_N), \varepsilon_nx_{n}^{\ast} \bigr \rangle \Bigr|-\sum\limits_{n=N+1}^{\infty}\Bigl| \bigl \langle\hat{u}(\lambda_N),\varepsilon_nx_{n}^{ \ast} \bigr \rangle \Bigr| \\ & > 3c_{r_N}e^{M_{r_N}(\lambda_N)+H_K(i\lambda_N)}-\sum\limits_{n=1}^{N-1}\varepsilon_n c_{r_N}e^{M_{r_N}(\lambda_N)+H_K(i\lambda_N)} \\ & -\sum\limits_{n=N+1}^{\infty}\frac{1}{2^n}>c_{r_N}e^{M_{r_N}(\lambda_N)+H_K(i\lambda_N)}, \end{align*} which contradicts (\ref{E2}).
\end{proof} For further information concerning the Paley-Wiener type theorems for ultradifferentiable functions and infinitely differentiable functions with compact support, we refer the reader to \cite[Section 9]{k91}, \cite{yosida} and \cite[Section 11.6]{stan}. The spaces of tempered ultradistributions of the Beurling, resp., the Roumieu type, are defined in \cite{pilip} as duals of the following test spaces $$ \mathcal{S}^{(M_p)}(\mathbb{R}^{n}):=\text{projlim}_{h\to\infty}\mathcal{S}^{M_p,h}(\mathbb{R}^{n}), \mbox{ resp., }\mathcal{S}^{\{M_p\}}(\mathbb{R}^{n}):=\text{indlim}_{h\to 0}\mathcal{S}^{M_p,h}(\mathbb{R}^{n}), $$ where for each $h>0,$ \begin{align*} \mathcal{S}^{M_p,h}(\mathbb{R}^{n}):=\bigl\{\phi\in C^\infty(\mathbb{R}^{n}):\|\phi\|_{M_p,h}<\infty\bigr\}, \end{align*} \begin{align*} \|\phi\|_{M_p,h}:=\sup\Biggl\{\frac{h^{|\alpha|+|\beta|}}{M_{|\alpha|} M_{|\beta|}}\bigl(1+|x|^2\bigr)^{|\beta|/2}\bigl|\phi^{(\alpha)}(x)\bigr| : x\in\mathbb{R}^{n}, \;\alpha,\;\beta\in\mathbb{N}_{0}^{n}\Biggr\}. \end{align*} If $n=1,$ then we also write $\mathcal{S}^{(M_p)}$ ($\mathcal{S}^{\{M_p\}}$) for $\mathcal{S}^{(M_p)}(\mathbb{R}^{n})$ ($\mathcal{S}^{\{M_p\}}(\mathbb{R}^{n})$); the common abbreviation for both types of brackets will be $\mathcal{S}^{\ast}.$ For further information we refer to \cite{b42}-\cite{ckm}, \cite{cizi}, \cite{fat}, \cite{gr}, \cite{knjigah}, \cite{dusanka}, \cite{ku113}, \cite{me152}, \cite{patak} and \cite{pilip}. \section{$C$-wellposedness of first order Cauchy problem in the sense of distributions and ultradistributions} In this section, we will continue the study of T. Ushijima \cite[Section 1-Section 2]{ush1} on the well-posedness of the Cauchy problem in the spaces of abstract distributions.
We note that there exist some assertions in the existing literature on abstract differential equations in locally convex spaces, like \cite[Proposition 1.2]{komura} or \cite[Propositions 1.1, 1.3-1.4, 1.6; Theorem 2.1]{ush1}, in which the sequential completeness of the state space $E$ has not been assumed. We will not follow this general approach here. \begin{defn}\label{C-wellposed} (cf. also \cite[Definition 2.1.4]{me152} for the distribution case, with $C=I$) Let $A$ be a closed linear operator on $E,$ let $C\in L(E)$ be injective, and let $CA\subseteq AC.$ Then it is said that the operator $A$ is $C$-wellposed for the abstract Cauchy problem $u'-Au=G$ at $t=0$ in the sense of distributions (ultradistributions of $\ast$-class) if for each $G\in {\mathcal D}'_{0}(E)$ ($G\in {\mathcal D}'^{\ast}_{0}(E)$) there exists a unique $U_{G}\in {\mathcal D}'_{0}(E)$ ($U_{G} \in {\mathcal D}'^{\ast}_{0}(E)$) satisfying the following conditions: \begin{itemize} \item[(i)] $U_{G}(\varphi) \in D(A)$ for all $\varphi \in {\mathcal D}$ ($\varphi \in {\mathcal D}^{\ast}$), \item[(ii)] the mapping $G \mapsto U_{G},$ $G\in {\mathcal D}'_{0}(E)$ ($G\in {\mathcal D}'^{\ast}_{0}(E)$) belongs to $L( {\mathcal D}'_{0}(E))$ ($L({\mathcal D}'^{\ast}_{0}(E))$), \item[(iii)] $U_{G}'(\varphi)-AU_{G}(\varphi)=CG(\varphi)$ for all $\varphi \in {\mathcal D}$ ($\varphi \in {\mathcal D}^{\ast}$). \end{itemize} \end{defn} \begin{defn}\label{exp C-wellposed} (cf. also \cite[Subsection 2.1.3]{me152} for the distribution case, with $C=I$) Let $A$ be a closed linear operator on $E$, let $C\in L(E)$ be injective, and let $CA\subseteq AC$.
Then it is said that the operator $A$ is exponentially $C$-wellposed for the abstract Cauchy problem $u'-Au=G$ at $t=0$ in the sense of distributions (ultradistributions of $\ast$-class) if for each $G\in\DD'_0(E)$ ($G\in {\mathcal D}'^{\ast}_{0}(E)$) there exists a unique $U_G\in\DD'_0(E)$ ($U_{G} \in {\mathcal D}'^{\ast}_{0}(E)$) satisfying (i), (ii) and (iii) from the previous definition and the following condition: \begin{itemize} \item[(iv)] there exists $a\geq 0$ such that $e^{-a\cdot}U_G\in\SSS'(E)$ ($e^{-a\cdot}U_G\in {\mathcal S}'^{\ast}(E)$). \end{itemize} \end{defn} \subsection{$C$-generalized resolvents of linear operators} Throughout this subsection, we assume that $X$ and $Y$ are Hausdorff locally convex spaces over the field ${\mathbb K} \in \{{\mathbb R},{\mathbb C}\}$ as well as that $A$ is a linear operator on $Y$ and the operator $C\in L(Y)$ is injective. Let $Z$ be a non-trivial subspace of $L(X,Y)$ obeying the property that $CU\in Z$ whenever $U\in Z,$ and let ${\mathbf B}$ denote the family of all bounded subsets of $X.$ By $I_{X}$ ($I_{Y},$ $I_{Z}$) we denote the identity operator on $X$ ($Y$, $Z$). Then $Z$ is a Hausdorff locally convex space over the field ${\mathbb K},$ and the fundamental system of seminorms which defines the topology of $Z$ is $(P_{B})_{P\in \circledast_{Y},B\in {\mathbf B}},$ where $P_{B}(T):=\sup_{x\in B}P(Tx),$ $T\in Z$ ($P\in \circledast_{Y},$ $B\in {\mathbf B}$). In \cite[Definition 4.1]{komura}, T. K\=omura has analyzed the case $X={\mathcal D}_{(-\infty ,a]},$ $Z=L({\mathcal D}_{(-\infty ,a]},Y)$ for some $a>0,$ and $C=I_{Y},$ while on pages 96-97 of \cite{ush1} T. Ushijima has analyzed the case $X={\mathcal D},$ $Z={\mathcal D}'_{0}(Y)$ and $C=I_{Y}.$ The main aim of this subsection is to provide, on the basis of ideas from \cite{komura} and \cite{ush1} (cf. also \cite[Definition 1]{vuaq}), a very general approach for introducing the notions of $C$-generalized resolvents of linear operators.
Define a linear operator $A_{X,Z}$ on $Z$ by $$ A_{X,Z}:=\bigl\{(U,V) \in Z \times Z : Ux\in D(A)\mbox{ for all }x\in X\mbox{ and }Vx=A(Ux),\ x\in X \bigr\}. $$ Then it is checked at once that $D_{X,Z}\in L(Z)$ for any $D\in L(Y)$ satisfying that $DU\in Z$ for all $U\in Z,$ and that the operator $C_{X,Z}\in L(Z)$ is injective. Furthermore, the assumption $CA\subseteq AC$ ($C^{-1}AC=A$) implies $C_{X,Z}A_{X,Z}\subseteq A_{X,Z}C_{X,Z}$ ($C_{X,Z}^{-1}A_{X,Z}C_{X,Z}=A_{X,Z}$). If the closed graph theorem holds for the mappings from $X$ into $Y,$ then $D(A_{X,Z})$ consists exactly of those mappings $U\in Z$ for which $R(U)\subseteq D(A)$ and $AU\in Z;$ in this case, $A_{X,Z}U=AU$ for all $U\in D(A_{X,Z}).$ \begin{defn}\label{gen-res} The $C_{X,Z}$-resolvent set of $A,$ $\rho_{C_{X,Z}}(A)$ in short, is defined as the set of those scalars $\lambda \in {\mathbb K}$ for which the operator $\lambda I_{Z}-A_{X,Z}$ is injective, $R(C_{X,Z})\subseteq R(\lambda I_{Z}-A_{X,Z})$ and $(\lambda I_{Z}-A_{X,Z})^{-1}C_{X,Z}\in L(Z).$ If $C=I_{Y},$ then the $C_{X,Z}$-resolvent set of $A$ is also called the $_{X,Z}$-resolvent set of $A$ and is denoted by $\rho_{_{X,Z}}(A)$ for short.
\end{defn} In other words, the $C_{X,Z}$-resolvent set of $A$ (the $_{X,Z}$-resolvent set of $A$) is defined as the $C_{X,Z}$-resolvent set (the resolvent set) of the operator $A_{X,Z}$ in $Z.$ The $C_{X,Z}$-spectrum of $A,$ denoted by $\sigma_{C_{X,Z}}(A),$ is defined as the complement of the set $\rho_{C_{X,Z}}(A)$ in ${\mathbb K};$ in the case that $C=I_{Y},$ $\sigma_{C_{X,Z}}(A)$ is also denoted by $\sigma_{_{X,Z}}(A)$ and called the $_{X,Z}$-spectrum of $A.$ We can decompose the $_{X,Z}$-spectrum of $A$ into three disjoint subsets: \begin{itemize} \item[(i)] the point $_{X,Z}$-spectrum of $A$, shortly $\sigma_{p;_{X,Z}}(A),$ consisting of the eigenvalues of the operator $A_{X,Z},$ \item[(ii)] the continuous $_{X,Z}$-spectrum of $A$, shortly $\sigma_{c;_{X,Z}}(A),$ consisting of the scalars that are not eigenvalues of the operator $A_{X,Z},$ but make the range of $\lambda I_{Z}-A_{X,Z}$ a proper dense subset of the space $Z$, \item[(iii)] the residual $_{X,Z}$-spectrum of $A$, shortly $\sigma_{r;_{X,Z}}(A),$ consisting of all other scalars in the spectrum. \end{itemize} It can be simply verified that the closedness (closability, injectivity) of the operator $A$ on $Y$ implies the closedness (closability, injectivity) of the operator $A_{X,Z}$ on $Z$ (cf. also \cite[Proposition 4.3]{komura}), and that $\sigma_{p;_{X,Z}}(A)\subseteq \sigma_{p}(A)$. Suppose now that $\lambda \in \rho_{C}(A)$ (defined in the same way as in (\ref{C-res}), with ${\mathbb C}$ and $E$ replaced respectively by ${\mathbb K}$ and $Y$) and $(\lambda-A)^{-1}CU\in Z$ for all $U\in Z.$ Then $\lambda \in \rho_{C_{X,Z}}(A_{X,Z})$ and $(\lambda I_{Z}-A_{X,Z})^{-1}C_{X,Z}=((\lambda-A)^{-1}C)_{X,Z};$ in particular, $\rho_{C}(A)\subseteq \rho_{C_{X,Z}}(A_{X,Z})$ provided that $Z=L(X,Y).$ In both approaches, T. K\=omura's and T.
Ushijima's, the denseness of the operator $A$ implies the denseness of the operator $ A_{X,Z};$ using the proof of \cite[Proposition 4.4]{komura} and the consideration given on page 685 of \cite{k82}, it can be proved that the same conclusion holds in the case that $X={\mathcal D}^{\ast}_{(-\infty ,a]},$ $Z=L({\mathcal D}^{\ast}_{(-\infty ,a]},Y),$ for some $a>0$ and $C=I_{Y}$ (cf. \cite{komura}) or that $X={\mathcal D}^{\ast},$ $Z={\mathcal D}^{\prime \ast}_{0}(Y)$ and $C=I_{Y}$ (cf. \cite{ush1}). The following trivial example shows that the denseness of the operator $A$ does not imply the denseness of the operator $ A_{X,Z}$ in the general case, as well as that the choice of spaces $X$ and $Z$ is very important for saying anything relevant and noteworthy about the operator $A_{X,Z}.$ \begin{example}\label{denseness} Let $A$ be a densely defined linear operator on $Y,$ $C=I_{Y},$ let $U\in L(X,Y)$ satisfy that $R(U)$ is not contained in $D(A),$ and let $Z=\{\alpha U : \alpha \in {\mathbb K}\}.$ Then $D(A_{X,Z})=\{0\},$ and therefore, $A_{X,Z}$ is not densely defined in $Z.$ \end{example} Apart from this, there exist a number of well-known identities which continue to hold for $C$-generalized resolvents. For example, the validity of the inclusion $CA\subseteq AC$ implies the following: \begin{itemize} \item[(a)] Let $k\in {\mathbb N}_{0}$ and $\lambda,\ z\in \rho_{C_{X,Z}}(A_{X,Z})$ with $z\neq\lambda.$ Then \begin{align*} \notag \bigl(z&-A_{X,Z}\bigr)^{-1}C_{X,Z} \bigl(\lambda-A_{X,Z}\bigr)^{-k} C_{X,Z}^{k} \\ &=\frac{(-1)^{k}}{(z-\lambda)^{k}}\bigl(z-A_{X,Z}\bigr)^{-1}C_{X,Z}^{k+1} +\sum \limits^{k}_{i=1}\frac{(-1)^{k-i}\bigl(\lambda-A_{X,Z}\bigr)^{-i}C_{X,Z}^{k+1}}{\bigl(z-\lambda\bigr)^{k+1-i}}.
\end{align*} \item[(b)] Let $n\in {\mathbb N}$ and $U\in D(A_{X,Z}^{n}).$ Then \begin{align*} \bigl(\lambda &-A_{X,Z}\bigr)^{-1}C_{X,Z}U ={\lambda}^{-1}C_{X,Z}U+{\lambda}^{-2}C_{X,Z}A_{X,Z}U \\&+\cdot \cdot \cdot+{\lambda}^{-n}C_{X,Z}A_{X,Z}^{n-1}U+{\lambda}^{-n}\bigl(\lambda-A_{X,Z}\bigr)^{-1}C_{X,Z}A_{X,Z}^{n}U. \end{align*} \end{itemize} In K\=omura's approach \cite{komura}, let $X={\mathcal D}_{(-\infty ,a]},$ $Y=E,$ $Z=L({\mathcal D}_{(-\infty ,a]},E)$ for some $a>0,$ and $C=I_{E}.$ A linear operator $A$ in $E$ is the infinitesimal generator of a uniquely determined locally equicontinuous $C_{0}$-semigroup $(T(t))_{t\geq 0}$ in $E$ iff the following holds: \begin{itemize} \item[(1)] $A$ is a closed linear operator with a dense domain $D(A);$ \item[(2)] for any $a > 0,$ in the space ${\mathbf D}^{\prime}_{a}(E)$ the following conditions are satisfied: \begin{itemize} \item[(a)] there exists the generalized resolvent $(\lambda I_{Z}- {\mathbf A})^{-1}$ of $A;$ \item[(b)] for any fixed complex number $\lambda$, there is a continuous linear operator $R(\lambda)$ on $E$ into itself such that for any fixed $x \in E,$ $R(\lambda)x$ is an $E$-valued entire function in $\lambda$ satisfying that there exists $k\in {\mathbb N}$ such that for any continuous seminorm $p$ on $E,$ there exist an integer $N = N(p) > 0$ and a number $C = C(p) > 0$ with $p(R(\lambda) x) \leq C(1 + |\lambda|)^{N}e^{k|\Re \lambda|},$ $\lambda \in {\mathbb C},$ as well as that $R(\lambda)x$ is a representation of $(\lambda I_{Z} - {\mathbf A})^{-1}$ and the family of operators $$ \Biggl \{ \frac{\lambda^{n+1}}{n!}\frac{d^{n}}{d\lambda^{n}}R(\lambda) : \lambda >0,\ n\in {\mathbb N}_{0} \Biggr\}\subseteq L(E) \quad \mbox{is equicontinuous.} $$ \end{itemize} \end{itemize} The proof of \cite[Theorem 3]{komura} is rather long and can be trivially modified only for the class of locally equicontinuous $C$-regularized semigroups in SCLCS's (recall that there exist examples of integrated semigroups and
$C$-regularized semigroups in Banach spaces with not necessarily densely defined generators, so that we cannot expect the validity of (1) in this framework). On the other hand, some necessary and sufficient conditions for the generation of locally equicontinuous $K$-convoluted $C$-semigroups in SCLCS's, defined locally or globally, can be very simply clarified if we use the notion of asymptotic $\Theta C$-resolvents (cf. S. \=Ouchi \cite{ouchi} for the pioneering results in this direction, \cite{kuosi}, \cite{t212}, \cite{w18} and \cite{knjigah} for the Banach space case): Let $0<\tau \leq \infty,$ let $\gamma\in[0,\tau),$ and let $K\in L_{loc}^{1}([0,\tau)),$ $K\neq 0.$ Set $\Theta (t):=\int^{t}_{0}K(s)\, ds,$ $t\in [0,\tau).$ An operator family $\{L_{\gamma}(\lambda):\gamma\in[0,\tau)$, $\lambda\geq 0\}\subseteq L(E)$ is called an asymptotic $\Theta C$-resolvent for $A$ iff there exists a strongly continuous operator family $(V(t))_{t\in[0,\tau)}\subseteq L(E)$ such that the following conditions hold: \begin{itemize} \item[(i)] For every fixed element $x\in E,$ the function $\lambda\to L_{\gamma} (\lambda)x,$ $\lambda \geq 0$ belongs to $C^{\infty}([0,\infty):E)$ and the operator family \[ \Biggl\{ \frac{\lambda^n}{(n-1)!} \frac{d^{n-1}}{d\lambda^{n-1}} L_{\gamma}(\lambda) : \lambda\geq 0,\;n\in\mathbb{N} \Biggr\}\subseteq L(E) \] is equicontinuous. \item[(ii)] $L_{\gamma}(\lambda)$ commutes with $C$ and $A$ for all $\lambda\geq0$. \item[(iii)] $(\lambda-A) L_{\gamma}(\lambda)x=-e^{-\lambda\gamma}V(\gamma)x +\int^{\gamma}_0 e^{-\lambda s}K(s)Cx\,ds$, $\lambda\geq0$. \item[(iv)] $L_{\gamma}(\lambda) L_{\gamma}(\eta)=L_{\gamma}(\eta)L_{\gamma}(\lambda)$, $\lambda\geq 0$, $\eta\geq 0$. \end{itemize} Keeping this notion in mind, it can be straightforwardly verified that the assertions of \cite[Proposition 2.3.18, Theorem 2.3.19-Theorem 2.3.20]{knjigah} continue to hold in locally convex spaces with minor technical modifications.
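To fix ideas, we mention a standard special case of the above notion (recorded here only for illustrative purposes): choosing, for some $\alpha>0,$ the kernel $$ K(t)=\frac{t^{\alpha-1}}{\Gamma(\alpha)},\quad \mbox{ so that }\quad \Theta(t)=\int^{t}_{0}K(s)\, ds=\frac{t^{\alpha}}{\Gamma(\alpha+1)},\quad t\in [0,\tau), $$ one obtains the notion corresponding to the class of locally equicontinuous $\alpha$-times integrated $C$-semigroups; in this case, the condition (iii) reads $$ (\lambda-A) L_{\gamma}(\lambda)x=-e^{-\lambda\gamma}V(\gamma)x +\Biggl(\int^{\gamma}_0 e^{-\lambda s}\frac{s^{\alpha-1}}{\Gamma(\alpha)}\,ds\Biggr)Cx,\quad \lambda\geq0.
$$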
Following \cite[Definition 2.1]{ush1}, it will be said that an $L(E)$-valued distribution (ultradistribution of $\ast$-class) ${\mathcal G}$ is boundedly equicontinuous iff for every $p\in \circledast$ and for every bounded subset $B$ of ${\mathcal D}$ (${\mathcal D}^{\ast}$), there exist $c>0$ and $q\in \circledast$ such that $$ p({\mathcal G}(\varphi)x)\leq cq(x),\quad \varphi \in B, \ x\in E. $$ If $E$ is barreled, then the uniform boundedness principle \cite[p. 273]{meise} implies that each ${\mathcal G}\in {\mathcal D}'(L(E))$ (${\mathcal G}\in {\mathcal D}'^{*}(L(E))$) is automatically boundedly equicontinuous. Suppose now that the operator $A$ is $C$-wellposed for the abstract Cauchy problem $u'-Au=G$ at $t=0$ in the sense of distributions (ultradistributions of $\ast$-class); for the sake of brevity, we will consider only the ultradistribution case. Put $G_{x}(\varphi):=\varphi(0)x,$ $\varphi \in {\mathcal D}^{\ast}$ ($x\in X$). Then we define ${\mathcal G}(\varphi)x:=U_{G_{x}}(\varphi),$ $\varphi \in {\mathcal D}^{\ast}$ ($x\in X$). Using the fact that the space ${\mathcal D}^{\ast}$ is barreled and the arguments already used in the proof of \cite[Theorem 2.1]{ush1}, we can simply prove the following theorem. \begin{thm}\label{59} Let $A$ be $C$-wellposed, let $C\in L(E)$ be injective, and let $CA\subseteq AC$.
Then there exists a boundedly equicontinuous ${\mathcal G}\in\DD^{\prime}_{0}(L(E))$ $({\mathcal G}\in {\mathcal D}^{\prime \ast}_{0}(L(E)))$ satisfying the following properties: \begin{itemize} \item[(i)] For any $x\in E$ and $\varphi\in\DD$ $(\varphi\in\DD^{\ast})$, we have ${\mathcal G}(\varphi)x\in D(A)$ and ${\Big(}\frac{d}{dt}{\mathcal G}{\Big)}(\varphi)x-A{\mathcal G}(\varphi)x= \delta(\varphi)Cx$; \item[(ii)] For any $x\in D(A)$, $\varphi\in\DD$ $(\varphi\in\DD^{\ast})$, we have ${\mathcal G}(\varphi)Ax=A{\mathcal G}(\varphi)x$; \item[(iii)] For any $\varphi\in\DD$ $(\varphi\in\DD^{\ast})$, we have ${\mathcal G}(\varphi)Cx=C{\mathcal G}(\varphi)x.$ \end{itemize} \end{thm} \begin{proof} If $E$ is sequentially complete, then the opposite direction of Theorem \ref{59} holds. In order to prove that, we need two auxiliary lemmas whose proofs in the ultradistribution case can be deduced by slightly modifying the corresponding proofs of \cite[Proposition 2.1, Proposition 2.2]{ush1}.\end{proof} \begin{lem}\label{lemma1} The algebraic tensor product ${\mathbf D}^{\prime \ast}_0\otimes E$ is dense in ${\mathbf D}^{\prime \ast}_0(E)$. If $A$ is a closed linear operator on $E$, then ${\mathbf D}^{\prime \ast}_0\otimes D(A)$ is dense in $D({\mathbf A})$ topologized by the graph topology of ${\mathbf A}$. \end{lem} \begin{lem}\label{lemma2} (cf. also \cite[Section 3]{ku113} for Banach space valued ultradistributions) For any boundedly equicontinuous ultradistribution ${\mathcal G}\in\DD^{\prime \ast}_0(L(E))$, there exists a unique convolution operator ${\mathcal G}\ast \cdot \in L(\DD^{\prime \ast}_{0}(E))$ satisfying that for $f=F\otimes x$ (defined in the obvious way), with arbitrary $F\in\DD^{\prime \ast}_0$ and $x\in E,$ we have: $$ ({\mathcal G}\ast f)(\varphi)={\mathcal G}_t\bigl(\alpha(t)F_s(\varphi(t+s))\bigr)x, $$ where $\alpha(t)$ is an arbitrary smooth function with \emph{supp}$(\alpha)\subset[a,\infty)$, $a>-\infty$ and $\alpha(t)=1$ for $t\geq0$.
\end{lem} \begin{thm}\label{opositethm} Let $E$ be a sequentially complete locally convex space, let $C\in L(E)$ be an injective operator, and let $CA\subseteq AC$. Then a closed linear operator $A$ on $E$ is $C$-wellposed iff there exists a boundedly equicontinuous ${\mathcal G}\in\DD^{\prime}_0(L(E))$ (${\mathcal G}\in\DD^{\prime \ast}_0(L(E))$) satisfying \emph{(i)-(iii)} of \emph{Theorem \ref{59}}. \end{thm} \begin{proof} The sufficiency can be given directly, by simply showing that for any boundedly equicontinuous ultradistribution ${\mathcal G}\in\DD^{\prime \ast}_0(L(E))$, $G\mapsto U_{G}:={\mathcal G} \ast G$ is a unique mapping belonging to $L({\mathcal D}'^{\ast}_{0}(E))$ and satisfying the properties (i)-(iii) from Definition \ref{C-wellposed}. \end{proof} Observe, finally, that Theorem \ref{59} and Theorem \ref{opositethm} can be simply reformulated in the case that $A$ is exponentially $C$-wellposed. \section{The basic properties of $C$-distribution semigroups and $C$-ultradistribution semigroups in locally convex spaces} \begin{defn}\label{cuds} Let $\mathcal{G}\in\mathcal{D}_0'(L(E))$ ($\mathcal{G}\in\mathcal{D}_0'^{\ast}(L(E))$) satisfy $C\mathcal{G}=\mathcal{G}C,$ and let $\mathcal{G}$ be boundedly equicontinuous. Then it is said that $\mathcal{G}$ is a pre-(C-DS) (pre-(C-UDS) of $\ast$-class) iff the following holds: \[\tag{C.S.1} \mathcal{G}(\varphi*_0\psi)C=\mathcal{G}(\varphi)\mathcal{G}(\psi),\quad \varphi,\;\psi\in\mathcal{D} \ \ (\varphi,\;\psi\in\mathcal{D}^{\ast}). \] If, additionally, \[\tag{C.S.2} \mathcal{N}(\mathcal{G}):=\bigcap_{\varphi\in\mathcal{D}_0}N(\mathcal{G}(\varphi))=\{0\} \ \ \Biggl(\mathcal{N}(\mathcal{G}):=\bigcap_{\varphi\in\mathcal{D}^{\ast}_0}N(\mathcal{G}(\varphi))=\{0\}\Biggr), \] then $\mathcal{G}$ is called a $C$-distribution semigroup ($C$-ultradistribution semigroup of $\ast$-class), (C-DS) ((C-UDS)) in short.
A pre-(C-DS) $\mathcal{G}$ is called dense if \[\tag{C.S.3} \mathcal{R}(\mathcal{G}):=\bigcup\limits_{\varphi\in\mathcal{D}_0}R(\mathcal{G}(\varphi)) \text{ is dense in }E\ \Biggl(\mathcal{R}(\mathcal{G}):=\bigcup\limits_{\varphi\in\mathcal{D}^{\ast}_0}R(\mathcal{G}(\varphi)) \text{ is dense in }E \Biggr). \ \] The notion of a dense pre-(C-UDS) $\mathcal{G}$ of $\ast$-class (and the set $\mathcal{R}(\mathcal{G})$) is defined similarly. \end{defn} \begin{rem}\label{redun} \begin{itemize} \item[(i)] We have assumed that ${\mathcal G}$ is boundedly equicontinuous in order to stay consistent with the notion introduced in \cite{ush1} and our previous analysis. Observe, however, that the assumption on bounded equicontinuity of ${\mathcal G}$ is slightly redundant and that we can rephrase a great part of our results in the case that ${\mathcal G}$ does not satisfy this condition. \item[(ii)] If $C=I,$ then we also write pre-(DS), pre-(UDS), (DS), (UDS), ... , instead of pre-(C-DS), pre-(C-UDS), (C-DS), (C-UDS). \end{itemize} \end{rem} Suppose that $\mathcal{G}$ is a pre-(C-DS) (pre-(C-UDS) of $\ast$-class). Then $\mathcal{G}(\varphi)\mathcal{G}(\psi)=\mathcal{G}(\psi)\mathcal{G}(\varphi)$ for all $\varphi,\,\psi\in\mathcal{D}$ ($\varphi,\,\psi\in\mathcal{D}^{\ast}$), and $\mathcal{N}(\mathcal{G})$ is a closed subspace of $E$. The structural characterization of a pre-(C-DS) $\mathcal{G}$ (pre-(C-UDS) $\mathcal{G}$ of $\ast$-class) on its kernel space $\mathcal{N}(\mathcal{G})$ is described in the following theorem (cf. Theorem \ref{delta}, the paragraph directly after its formulation, as well as \cite[Proposition 3.1.1]{knjigah} and the proofs of \cite[Lemma 2.2]{ku112}, \cite[Proposition 3.5.4]{knjigah}). \begin{thm}\label{delta-point} \begin{itemize} \item[(i)] Let $\mathcal{G}$ be a pre-$($C-DS$)$, and let the space $L(\mathcal{N}(\mathcal{G}))$ be admissible.
Then, with $N=\mathcal{N}(\mathcal{G})$ and $G_1$ being the restriction of $\mathcal{G}$ to $N$ $(G_1=\mathcal{G}_{|N})$, we have: There exist unique operators $T_0$, $T_1,\dots,T_m\in L(\mathcal{N}(\mathcal{G}))$ such that $G_1=\sum_{j=0}^m\delta^{(j)}\otimes T_j$, $T_iC^i=(-1)^iT_0^{i+1}$, $i=0,1,\dots,m-1$ and $T_0T_m=T_0^{m+2}=0$. \item[(ii)] Let $(M_{p})$ satisfy \emph{(M.3)}, let $\mathcal{G}$ be a pre-$($C-UDS$)$ of $\ast$-class, and let the space $\mathcal{N}(\mathcal{G})$ be barreled. Then, with $N=\mathcal{N}(\mathcal{G})$ and $G_1$ being the restriction of $\mathcal{G}$ to $N$ $(G_1=\mathcal{G}_{|N})$, we have: There exists a unique set of operators $(T_{j})_{j\in {\mathbb N}_{0}}$ in $L(\mathcal{N}(\mathcal{G}))$ such that $G_1=\sum_{j=0}^{\infty}\delta^{(j)}\otimes T_j$, $T_jC^j=(-1)^jT_0^{j+1}$, $j\in {\mathbb N}$ and the set $\{M_{j}T_{j}L^{j} : j\in {{\mathbb N}_{0}}\}$ is bounded in $L(\mathcal{N}(\mathcal{G})),$ for some $L>0$ in the Beurling case, resp. for every $L>0$ in the Roumieu case. \end{itemize} \end{thm} Let $\mathcal{G}\in\mathcal{D}_0'(L(E))$ ($\mathcal{G}\in\mathcal{D}_0'^{\ast}(L(E))$) satisfy (C.S.2), and let $T\in\mathcal{E}_0'$ ($T\in\mathcal{E}_0'^{\ast}$), i.e., $T$ is a scalar-valued distribution (ultradistribution of $\ast$-class) with compact support contained in $[0,\infty)$. Define \[ G(T):=\Bigl\{(x,y) \in E\times E : \mathcal{G}(T*\varphi)x=\mathcal{G}(\varphi)y\;\mbox{ for all }\;\varphi\in\mathcal{D}_0 \ \ \bigl(\varphi\in\mathcal{D}^{\ast}_{0}\bigr) \Bigr\}. \] Then it can be easily seen that $G(T)$ is a closed linear operator. Following R. Shiraishi, Y. Hirata \cite{1964} and P. C. Kunstmann \cite{ku112}, we define the (infinitesimal) generator of a (C-DS) $\mathcal{G}$ by $A:=G(-\delta')$ (for some other approaches, see J. L. Lions \cite{li121}, \cite[Remark 3.1.20]{knjigah} and J. Peetre \cite{peet}, T. Ushijima \cite{ush1}).
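As a guiding example (classical, going back to J. L. Lions \cite{li121}; we record it here under the simplifying assumptions that $C=I$ and that $(T(t))_{t\geq 0}$ is an equicontinuous $C_{0}$-semigroup with generator $A$), the formula $$ {\mathcal G}(\varphi)x:=\int^{\infty}_{0}\varphi(t)T(t)x\, dt,\quad \varphi \in {\mathcal D},\ x\in E, $$ defines a (DS): the substitution $u=t+s$ and Fubini's theorem yield $$ {\mathcal G}(\varphi){\mathcal G}(\psi)x=\int^{\infty}_{0}\int^{\infty}_{0}\varphi(t)\psi(s)T(t+s)x\, dt\, ds={\mathcal G}(\varphi \ast_{0}\psi)x, $$ which is (C.S.1), while a partial integration gives ${\mathcal G}(-\varphi^{\prime})x=\varphi(0)x+{\mathcal G}(\varphi)Ax$ for $x\in D(A),$ so that ${\mathcal G}(-\varphi^{\prime})x={\mathcal G}(\varphi)Ax$ for all $\varphi \in {\mathcal D}_{0}$ and, consequently, $A\subseteq G(-\delta^{\prime});$ in fact, the generator of ${\mathcal G}$ coincides with $A.$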
Since for each $\psi\in\mathcal{D}$ ($\psi\in\mathcal{D}^{\ast}$) we have $\psi_+:=\psi\mathbf{1}_{[0,\infty)}\in\mathcal{E}_0'$ ($\mathcal{E}_{0}'^{*}$), where $\mathbf{1}_{[0,\infty)}$ stands for the characteristic function of $[0,\infty)$, the definition of $G(\psi_+)$ is clear. Further on, if $\mathcal{G}$ is a (C-DS) ((C-UDS) of $\ast$-class), $T\in\mathcal{E}_0'$ ($T\in\mathcal{E}_{0}'^{*}$) and $\varphi\in\mathcal{D}$ ($\varphi\in\mathcal{D}^{\ast}$), then ${\mathcal G}(\varphi)G(T)\subseteq G(T)\mathcal{G}(\varphi)$, $CG(T)\subseteq G(T)C$ and $\mathcal{R}(\mathcal{G})\subseteq D(G(T))$. If $\mathcal{G}$ is a pre-(C-DS) (pre-(C-UDS) of $\ast$-class) and $\varphi$, $\psi\in\mathcal{D}$ ($\varphi$, $\psi\in\mathcal{D}^{\ast}$), then the assumption $\varphi(t)=\psi(t)$, $t\geq 0$, implies $\mathcal{G}(\varphi)=\mathcal{G}(\psi)$. As in the Banach space case, we can prove the following: Suppose that $\mathcal{G}$ is a (C-DS) ((C-UDS) of $\ast$-class). Then $G(\psi_+)C=\mathcal{G}(\psi)$, $\psi\in\mathcal{D}$ ($\psi\in\mathcal{D}^{\ast}$) and $C^{-1}AC=A.$ Furthermore, the following holds: \begin{prop}\label{isto} Let ${\mathcal G}$ be a (C-DS) ((C-UDS) of $\ast$-class), $S$, $T\in\mathcal{E}'_0$ ($S$, $T\in\mathcal{E}'^{\ast}_0$), $\varphi\in\mathcal{D}_0$ ($\varphi\in\mathcal{D}^{\ast}_0$), $\psi\in\mathcal{D}$ ($\psi\in\mathcal{D}^{\ast}$) and $x\in E$. Then we have: \begin{itemize} \item[(i)] $(\mathcal{G}(\varphi)x$, $\mathcal{G}(\overbrace{T*\cdots*T}^m*\varphi)x)\in G(T)^m$, $m\in\mathbb{N}$. \item[(ii)] $G(S)G(T)\subseteq G(S*T)$ with $D(G(S)G(T))=D(G(S*T))\cap D(G(T))$, and $G(S)+G(T)\subseteq G(S+T)$. \item[(iii)] $(\mathcal{G}(\psi)x$, $\mathcal{G}(-\psi')x-\psi(0)Cx)\in G(-\delta')$. \item[(iv)] If $\mathcal{G}$ is dense, then its generator is densely defined.
\end{itemize} \end{prop} The assertions (ii)-(vi) of \cite[Proposition 3.1.2]{knjigah} can be reformulated for pre-(C-DS)'s (pre-(C-UDS)'s of $\ast$-class) in locally convex spaces; here it is only worth noting that for any barreled space $E$ and for any bounded subset $B$ of $E^*$ the mapping $x\mapsto \sup_{x^{*}\in B}|\langle x^{\ast},x\rangle |,$ $x\in E$ is a continuous seminorm on $E$ (cf. also the proof of \cite[Theorem 2.3]{ush1}) and that the reflexivity of the state space $E$ (recall that the sequential completeness of $E$ is our standing hypothesis) implies that the spaces $E,$ $E^*$ and $E^{**}=E$ are barreled and sequentially complete. \begin{prop}\label{kuki} Let $\mathcal{G}$ be a pre-(C-DS) (pre-(C-UDS) of $\ast$-class). Then the following holds: \begin{itemize} \item[(i)] $C(\overline{\langle\mathcal{R}(\mathcal{G})\rangle})\subseteq\overline{\mathcal{R}(\mathcal{G})}$, where $\langle\mathcal{R}(\mathcal{G})\rangle$ denotes the linear span of $\mathcal{R}(\mathcal{G})$. \item[(ii)] Assume $\mathcal{G}$ is not dense and $\overline{C\mathcal{R}(\mathcal{G})}=\overline{\mathcal{R}(\mathcal{G})}$. Put $R:=\overline{\mathcal{R}(\mathcal{G})}$ and $H:=\mathcal{G}_{|R}$. Then $H$ is a dense pre-($C_1$-DS) (pre-($C_1$-UDS) of $\ast$-class) on $R$ with $C_1=C_{|R}$. \item[(iii)] Assume $\overline{R(C)}=E$ and $E$ is barreled. Then the dual $\mathcal{G}(\cdot)^*$ is a pre-($C^*$-DS) (pre-($C^*$-UDS) of $\ast$-class) on $E^*$ and $\mathcal{N}(\mathcal{G}^*)=\overline{\mathcal{R}(\mathcal{G})}^{\circ}$. \item[(iv)] If $E$ is reflexive and $\overline{R(C)}=E$, then $\mathcal{N}(\mathcal{G})=\overline{\mathcal{R}(\mathcal{G}^*)}^{\circ}$. \item[(v)] Assume $\overline{R(C)}=E$ and $E$ is barreled. Then $\mathcal{G}^*$ is a ($C^*$-DS) (($C^*$-UDS) of $\ast$-class) in $E^*$ iff $\mathcal{G}$ is a dense pre-(C-DS) (pre-(C-UDS) of $\ast$-class).
If $E$ is reflexive, then $\mathcal{G}^*$ is a dense pre-($C^*$-DS) (pre-($C^*$-UDS) of $\ast$-class) in $E^*$ iff $\mathcal{G}$ is a (C-DS) ((C-UDS) of $\ast$-class). \end{itemize} \end{prop} Now we shall state and prove an extension of \cite[Proposition 2]{ki90} for pre-(C-DS)'s (pre-(C-UDS)'s of $\ast$-class) in locally convex spaces. \begin{prop}\label{kisinski} Suppose that ${\mathcal G}\in {\mathcal D}^{\prime}_{0}(L(E))$ (${\mathcal G}\in {\mathcal D}^{\prime \ast}_{0}(L(E))$) and ${\mathcal G}(\varphi)C=C{\mathcal G}(\varphi),$ $\varphi \in {\mathcal D}$ ($\varphi \in {\mathcal D}^{\ast}$). Then ${\mathcal G}$ satisfies \emph{(C.S.1)} iff \begin{equation}\label{polish} {\mathcal G}\bigl(\varphi^{\prime}\bigr){\mathcal G}(\psi)-{\mathcal G}(\varphi){\mathcal G}\bigl(\psi^{\prime}\bigr)=\psi(0){\mathcal G}(\varphi)C-\varphi(0){\mathcal G} (\psi)C,\quad \varphi,\ \psi \in {\mathcal D} \ \ \bigl( \varphi,\ \psi \in {\mathcal D}^{\ast} \bigr). \end{equation} In particular, ${\mathcal G}$ is a pre-(C-DS) (pre-(C-UDS) of $\ast$-class) iff ${\mathcal G}$ is boundedly equicontinuous and \emph{(\ref{polish})} holds. \end{prop} \begin{proof} The steps of the proof are the same as those of \cite[Proposition 2]{ki90}; because of its significance, we shall include all relevant details of the proof. If ${\mathcal G}$ satisfies (C.S.1), then (\ref{polish}) follows immediately from (C.S.1) and the equality $\varphi^{\prime}\ast_{0} \psi -\varphi \ast_{0} \psi^{\prime}=\psi(0)\varphi -\varphi(0)\psi ,$ $\varphi,\ \psi \in {\mathcal D}$ ($\varphi,\ \psi \in {\mathcal D}^{\ast}$).
Suppose now that (\ref{polish}) holds, $\varphi,\ \psi \in {\mathcal D}$ ($\varphi,\ \psi \in {\mathcal D}^{\ast}$), $a>0$ and supp$(\psi)\subseteq (-\infty,a].$ Since ${\mathcal G}\in {\mathcal D}^{\prime}_{0}(L(E))$ (${\mathcal G}\in {\mathcal D}^{\prime \ast}_{0}(L(E))$) and the function $t\mapsto \int^{a}_{0}[\varphi(t-s)\psi(s)-\varphi(-s)\psi(t+s)]\, ds,$ $t\in {\mathbb R}$ belongs to ${\mathcal D}$ (${\mathcal D}^{\ast}$) with $(\varphi \ast_{0}\psi)(t)=\int^{a}_{0}[\varphi(t-s)\psi(s)-\varphi(-s)\psi(t+s)]\, ds,$ $t\geq 0,$ we have \begin{align} \notag {\mathcal G}(\varphi \ast_{0}\psi)Cx&={\mathcal G}\Bigl(\int^{a}_{0}\bigl[\varphi(\cdot-s)\psi(s)-\varphi(-s)\psi(\cdot+s)\bigr]\, ds\Bigr)Cx \\\notag &=\int^{a}_{0}\bigl[\psi(s){\mathcal G}(\varphi(\cdot-s))Cx-\varphi(-s){\mathcal G}(\psi(\cdot+s))Cx\bigr]\, ds \\\label{polish-1} &=\int^{a}_{0}\Bigl[{\mathcal G}\bigl(\varphi^{\prime}(\cdot-s)\bigr) {\mathcal G}(\psi(\cdot +s))x- {\mathcal G}(\varphi (\cdot-s)) {\mathcal G}\bigl(\psi^{\prime}(\cdot +s)\bigr)x\Bigr]\, ds \\\label{polish-2} & =-\int^{a}_{0}\frac{d}{ds}\bigl[{\mathcal G}(\varphi(\cdot-s)){\mathcal G}(\psi(\cdot +s))x\bigr]\, ds \\\notag &={\mathcal G}(\varphi) {\mathcal G}(\psi)x-{\mathcal G}(\varphi(\cdot -a)){\mathcal G}(\psi (\cdot+a))x \\\notag &={\mathcal G}(\varphi) {\mathcal G}(\psi)x-{\mathcal G}(\varphi(\cdot -a))0x={\mathcal G}(\varphi) {\mathcal G}(\psi)x, \end{align} for any $x\in E$ and $\varphi,\ \psi \in {\mathcal D}$ ($\varphi,\ \psi \in {\mathcal D}^{\ast}$), where (\ref{polish-1}) follows from an application of (\ref{polish}), and (\ref{polish-2}) from an elementary argumentation involving the continuity of ${\mathcal G}$ as well as the facts that for each function $\zeta \in {\mathcal D}$ ($\zeta \in {\mathcal D}^{\ast}$) we have that $\lim_{h\rightarrow 0}(\tau_{h}\zeta)=\zeta$ in ${\mathcal D}$ (${\mathcal D}^{\ast}$), $\lim_{h\rightarrow 0}\frac{1}{h}(\tau_{h}\zeta -\zeta)=\zeta^{\prime}$ in ${\mathcal D}$ (${\mathcal D}^{\ast}$) and
that the set $\{\tau_{h}\zeta : |h|\leq 1\}$ is bounded in ${\mathcal D}$ (${\mathcal D}^{\ast}$). The proof of the proposition is thereby complete. \end{proof} \begin{thm}\label{fundamentalna} Suppose that ${\mathcal G}\in {\mathcal D}^{\prime}_{0}(L(E))$ (${\mathcal G}\in {\mathcal D}^{\prime \ast}_{0}(L(E))$), ${\mathcal G}(\varphi)C=C{\mathcal G}(\varphi),$ $\varphi \in {\mathcal D}$ ($\varphi \in {\mathcal D}^{\ast}$) and $A$ is a closed linear operator on $E$ satisfying that $\mathcal{G}(\varphi)A\subseteq A{\mathcal G}(\varphi),$ $\varphi \in {\mathcal D}$ ($\varphi \in {\mathcal D}^{\ast}$) and \begin{equation}\label{dkenk} A\mathcal{G}(\varphi)x=\mathcal{G}\bigl(-\varphi'\bigr)x-\varphi(0)Cx,\quad x\in E,\ \varphi \in {\mathcal D} \ \ (\varphi \in {\mathcal D}^{\ast}). \end{equation} Then the following holds: \begin{itemize} \item[(i)] ${\mathcal G}$ satisfies \emph{(C.S.1)}. \item[(ii)] If ${\mathcal G}$ is boundedly equicontinuous and satisfies \emph{(C.S.2)}, then ${\mathcal G}$ is a (C-DS) ((C-UDS) of $\ast$-class) generated by $C^{-1}AC.$ \item[(iii)] Consider the distribution case. If $E$ is admissible, then the condition \emph{(C.S.2)} automatically holds for ${\mathcal G}$. \end{itemize} \end{thm} \begin{proof} Let $\varphi,\ \psi \in {\mathcal D}$ ($\varphi,\ \psi \in {\mathcal D}^{\ast}$) be fixed. Using the inclusion $\mathcal{G}(\varphi)A\subseteq A{\mathcal G}(\varphi)$ and the equality (\ref{dkenk}), we get that $ A{\mathcal G}(\varphi){\mathcal G}(\psi)x={\mathcal G}(\varphi)A{\mathcal G}(\psi)x, $ $x\in E,$ i.e., \begin{equation}\label{nahari} {\mathcal G}\bigl( -\varphi^{\prime}\bigr){\mathcal G}(\psi)x-\varphi(0)C{\mathcal G}(\psi)x={\mathcal G}(\varphi)\Bigl[ {\mathcal G}\bigl( -\psi^{\prime} \bigr)x-\psi(0)Cx \Bigr],\quad x\in E. \end{equation} This is, in fact, (\ref{polish}), so that (i) follows immediately from Proposition \ref{kisinski}. The proof of (iii) is the same as in the Banach space case (see e.g.
\cite{li121} and the proof of \cite[Theorem 3.1.27]{knjigah}). Hence, it remains to be proved that the integral generator of ${\mathcal G}$ is the operator $C^{-1}AC,$ if ${\mathcal G}$ is boundedly equicontinuous and satisfies (C.S.2) (cf. item (ii)); for the sake of brevity, we shall consider only the distribution case. Denote by $B$ the integral generator of ${\mathcal G}.$ Then it is checked at once that $C^{-1}AC\subseteq B.$ Suppose now that $(x,y)\in B,$ i.e., that ${\mathcal G}(-\zeta^{\prime})x={\mathcal G}(\zeta)y$ for all $\zeta \in {\mathcal D}_{0}.$ This clearly implies $A{\mathcal G}(\zeta)x={\mathcal G}(\zeta)y$ for all $\zeta \in {\mathcal D}_{0}.$ Consider now the equation (\ref{nahari}) with $\varphi=\xi$ and $\xi(0)=1.$ By (\ref{dkenk}), it readily follows that $C{\mathcal G}(\psi)x\in D(A).$ Since $A{\mathcal G}(\zeta)x={\mathcal G}(\zeta)y$ for all $\zeta \in {\mathcal D}_{0},$ we obtain that ${\mathcal G}(\zeta)AC{\mathcal G}(\eta)x={\mathcal G}(\zeta)C{\mathcal G}(\eta)y$ for all $\zeta \in {\mathcal D}_{0}$ ($\eta \in {\mathcal D}$). By (C.S.2), we get that $AC{\mathcal G}(\eta)x=C{\mathcal G}(\eta)y$ ($\eta \in {\mathcal D}$). This, in turn, implies $CA{\mathcal G}(\eta)x=C{\mathcal G}(\eta)y,$ $A{\mathcal G}(\eta)x={\mathcal G}(\eta)y,$ ${\mathcal G}( -\eta^{\prime})x-\eta(0)Cx={\mathcal G}(\eta)y$ ($\eta \in {\mathcal D}$), and since $\eta$ was arbitrary, $Cx\in D(A).$ Combined with the equality $AC{\mathcal G}(\eta)x=C{\mathcal G}(\eta)y$ ($\eta \in {\mathcal D}$), the above implies $ACx=Cy$ and $C^{-1}ACx=y,$ as claimed. \end{proof} \begin{rem} \begin{itemize} \item[(i)] Even in the case that $E$ is a Banach space and $C=I,$ ${\mathcal G}$ need not satisfy the condition (C.S.2) in the ultradistribution case (\cite{cizi}, \cite{knjigah}). \item[(ii)] Suppose that $\overline{R(C)}=E,$ $E$ is barreled and $A$ generates a dense (C-DS) ((C-UDS) of $\ast$-class) on $E$.
Then Proposition \ref{kuki}(iii) implies that the dual $\mathcal{G}(\cdot)^*$ is a ($C^*$-DS) (($C^*$-UDS) of $\ast$-class) on $E^*.$ Since ${\mathcal G}^{\ast}(\varphi)A^{\ast} \subseteq A^{\ast}{\mathcal G}^{\ast}(\varphi),$ $\varphi \in {\mathcal D}$ ($\varphi \in {\mathcal D}^{\ast}$) and (\ref{dkenk}) holds with $A$ and ${\mathcal G}$ replaced respectively by $A^{\ast}$ and ${\mathcal G}^{\ast},$ Theorem \ref{fundamentalna}(ii) implies that the integral generator of ${\mathcal G}^{\ast}$ is the operator $(C^{\ast})^{-1}A^{\ast}C^{\ast}$ (cf. also \cite[Remark 3.1.22]{knjigah}). \item[(iii)] Using Proposition \ref{kisinski} and the method proposed by J. Kisy\'nski in \cite[Section 6]{ki90}, we can introduce the integral generator of a pre-(C-DS) (pre-(C-UDS) of $\ast$-class) on an arbitrary sequentially complete locally convex space. On the other hand, it seems that the method proposed in \cite[Section 3.5; cf. Definition 3.5.7]{knjigah} can be used to define the integral generator of a pre-(C-DS) (pre-(C-UDS) of $\ast$-class) only in the case that $E$ is a Banach space. It would take too long to go into further details concerning these subjects here. \end{itemize} \end{rem} \begin{prop}\label{kisinski-uniqueness} Every $C$-distribution semigroup ($C$-ultradistribution semigroup of $\ast$-class) is uniquely determined by its generator. \end{prop} \begin{proof} We will prove the assertion only in the distribution case, since the ultradistribution case can be treated analogously.
Suppose that ${\mathcal G}_{1}$ and ${\mathcal G}_{2}$ are two $C$-distribution semigroups generated by $A.$ Put ${\mathcal G}:={\mathcal G}_{1}-{\mathcal G}_{2}.$ Then ${\mathcal G} \in {\mathcal D}^{\prime}_{0}(L(E)),$ $A{\mathcal G}(\varphi)\subseteq {\mathcal G}(\varphi)A,$ $\varphi \in {\mathcal D}$ and $A{\mathcal G}(\varphi)={\mathcal G}(-\varphi^{\prime}),$ $\varphi \in {\mathcal D}.$ This implies \begin{align*} {\mathcal G}\bigl(\varphi^{\prime}\bigr){\mathcal G}(\psi)x=-A{\mathcal G}(\varphi){\mathcal G}(\psi)x=-{\mathcal G}(\varphi)A{\mathcal G}(\psi)x={\mathcal G}(\varphi){\mathcal G}\bigl(\psi^{\prime}\bigr)x, \end{align*} for all $\varphi,\ \psi \in {\mathcal D}$ and $x\in E.$ Keeping in mind this equality, the part of the proof of Proposition \ref{kisinski} starting from the equation (\ref{polish-1}) shows that \begin{align*} 0&=\int^{a}_{0}\Bigl[{\mathcal G}\bigl(\varphi^{\prime}(\cdot-s)\bigr) {\mathcal G}(\psi(\cdot +s))x- {\mathcal G}(\varphi (\cdot-s)) {\mathcal G}\bigl(\psi^{\prime}(\cdot +s)\bigr)x\Bigr]\, ds={\mathcal G}(\varphi){\mathcal G}(\psi)x \end{align*} for all $\varphi,\ \psi \in {\mathcal D}$ and $x\in E$ (here, the number $a>0$ is chosen so that supp$(\psi)\subseteq (-\infty,a]$). In particular, ${\mathcal G}(\psi){\mathcal G}(\varphi)={\mathcal G}(\varphi){\mathcal G}(\psi)=0$ ($\varphi,\ \psi \in {\mathcal D}$), so that \begin{equation}\label{acid-prsute} {\mathcal G}_{1}(\varphi){\mathcal G}_{2}(\psi)+{\mathcal G}_{2}(\varphi){\mathcal G}_{1}(\psi)={\mathcal G}_{1}(\psi){\mathcal G}_{2}(\varphi)+{\mathcal G}_{2}(\psi){\mathcal G}_{1}(\varphi),\quad \varphi,\ \psi \in {\mathcal D}.
\end{equation} Applying the operator $A$ to both sides of (\ref{acid-prsute}), and using (\ref{acid-prsute}) once more for the equality of terms ${\mathcal G}_{1}(-\varphi^{\prime}){\mathcal G}_{2}(\psi)+{\mathcal G}_{2}(-\varphi^{\prime}){\mathcal G}_{1}(\psi)$ and ${\mathcal G}_{1}(\psi){\mathcal G}_{2}(-\varphi^{\prime})+{\mathcal G}_{2}(\psi){\mathcal G}_{1}(-\varphi^{\prime}),$ we get that \begin{align} \notag & {\mathcal G}_{1}(\psi) {\mathcal G}_{2}\bigl(-\varphi^{\prime}\bigr)-\varphi(0){\mathcal G}_{2}(\psi)C+{\mathcal G}_{2}(\psi){\mathcal G}_{1}\bigl(-\varphi^{\prime}\bigr)-\varphi(0){\mathcal G}_{1}(\psi)C \\\label{funk} & ={\mathcal G}_{1}\bigl( -\psi^{\prime} \bigr){\mathcal G}_{2}(\varphi)-\psi(0){\mathcal G}_{2}(\varphi)C+{\mathcal G}_{2}\bigl(-\psi^{\prime}\bigr){\mathcal G}_{1}(\varphi)-\psi(0){\mathcal G}_{1}(\varphi)C,\quad \varphi,\ \psi \in {\mathcal D}. \end{align} Now we shall apply the operator $A$ to both sides of (\ref{funk}). In this way, we get that \begin{align*} {\mathcal G}_{1}\bigl(-\psi^{\prime}\bigr) &{\mathcal G}_{2}\bigl(-\varphi^{\prime}\bigr)-\psi(0){\mathcal G}_{2}\bigl( -\varphi^{\prime}\bigr)-\varphi(0) \bigl[ {\mathcal G}_{2}\bigl(-\psi^{\prime}\bigr) -\psi(0)C^{2} \bigr] \\ & +{\mathcal G}_{2}\bigl(-\psi^{\prime}\bigr){\mathcal G}_{1}\bigl(-\varphi^{\prime}\bigr)-\psi(0){\mathcal G}_{1}\bigl( -\varphi^{\prime}\bigr)-\psi(0)\bigl[ {\mathcal G}_{1}\bigl(-\psi^{\prime}\bigr) -\psi(0)C^{2} \bigr] \\ & = {\mathcal G}_{1}\bigl(-\psi^{\prime} \bigr) \bigl[{\mathcal G}_{2}\bigl(-\varphi^{\prime}\bigr)-\varphi(0)C\bigr] -\psi(0)\bigl[ {\mathcal G}_{2}\bigl(-\varphi^{\prime}\bigr) -\varphi(0)C^{2} \bigr] \\&+{\mathcal G}_{2}\bigl(-\psi^{\prime}\bigr) \bigl[ {\mathcal G}_{1}\bigl(-\varphi^{\prime}\bigr) -\varphi(0)C \bigr]-\psi(0)\bigl[ {\mathcal G}_{1}\bigl(-\varphi^{\prime}\bigr)C-\varphi(0)C^{2} \bigr] \end{align*} for any $ \varphi,\ \psi \in {\mathcal D}.$ Using this equality with $\varphi(0)=1,$ and the injectivity
of $C$, we obtain that ${\mathcal G}_{1}(\psi^{\prime})={\mathcal G}_{2}(\psi^{\prime}),$ $\psi \in {\mathcal D}.$ Hence, ${\mathcal G}^{\prime}=0$ and the standard arguments from the theory of scalar-valued distributions show that there exists a test function $\eta \in {\mathcal D}_{[0,1]}$ such that $\int^{+\infty}_{-\infty}\eta (t)\, dt=1$ and ${\mathcal G}(\psi)={\mathcal G}_{1}(\psi)-{\mathcal G}_{2}(\psi)=\bigl(\int^{+\infty}_{-\infty}\psi (t)\, dt \bigr){\mathcal G}(\eta)$ for all $\psi \in {\mathcal D}.$ Choosing $\psi \in {\mathcal D}_{[-2,-1]}$ with $\int^{+\infty}_{-\infty}\psi (t)\, dt=1,$ we easily get that ${\mathcal G}(\eta)=0,$ so ${\mathcal G}(\psi)=0$ for all $\psi \in {\mathcal D}.$ This completes the proof of the proposition. \end{proof} \begin{rem}\label{vuaq-1977} It should be noticed that M. Ju. Vuvunikjan has observed (without giving a corresponding proof) that for any closed linear operator $A$ on $E$ there exists at most one vector-valued distribution ${\mathcal G}\in {\mathcal D}^{\prime}_{0}(L(E))$ satisfying that $\mathcal{G}(\varphi)A\subseteq A{\mathcal G}(\varphi),$ $\varphi \in {\mathcal D}$ and that (\ref{dkenk}) holds with $C=I$ (cf. \cite[p. 436]{vuaq} for more details). It is also worth noting that we do not use the operation of convolution of vector-valued (ultra-)distributions in the proof of Proposition \ref{kisinski-uniqueness}. \end{rem} \begin{thm}\label{lokal-int-C} Let $\mathcal{G}$ be a (C-DS) generated by $A$, and let $L(E,[D(A)])$ be a quasi-complete \emph{(DF)}-space. Then, for every $\tau>0$, there exists $n_{\tau}\in\mathbb{N}$ such that $A$ is the integral generator of a local $n_{\tau}$-times integrated $C$-semigroup on $E.$ \end{thm} \begin{proof} The proof follows by the use of \cite[Theorem 3.1.7]{knjigah}, Lemma \ref{polinomi} and the structural theorem for locally convex valued distributions.
\end{proof} \begin{rem}\label{finite-order} Let $\mathcal{G}$ be a (C-DS) generated by $A.$ Then we have $A\mathcal{G}(\varphi)x=-\mathcal{G}(\varphi')x-\varphi (0)Cx$, $\varphi\in\mathcal{D}$, $x\in E,$ so that $\mathcal{G}$ can be viewed as a continuous linear mapping from $\mathcal{D}$ into $L(E,[D(A)])$. Theorem \ref{lokal-int-C} continues to hold if we assume that the distribution $\mathcal{G}\in {\mathcal D}'(L(E,[D(A)]))$ is of finite order, instead of assuming that $L(E,[D(A)])$ is a quasi-complete (DF)-space. In order to transfer the assertions of \cite[Theorem 3.1.21, Remark 3.1.22]{knjigah} to $C$-distribution semigroups in locally convex spaces, it seems almost inevitable to assume that the distribution $\mathcal{G}\in {\mathcal D}'(L(E,[D(A)]))$ is of finite order. \end{rem} The proof of the subsequent theorem can be deduced by using Lemma \ref{polinomi}, the proof of \cite[Theorem 3.1.8]{knjigah} and an elementary argumentation regarding the topological properties of the space ${\mathcal D}.$ \begin{thm}\label{cyne} Suppose that there exists a sequence $((p_k,\tau_k))_{k\in \mathbb{N}_{0}}$ in $\mathbb{N}_{0} \times (0,\infty)$ such that $\lim_{k\rightarrow \infty}\tau_{k}=\infty$ and $A$ is a subgenerator of a locally equicontinuous $p_{k}$-times integrated $C$-semigroup $(S_{p_{k}}(t))_{t\in [0,\tau_{k})}$ on $E$ ($k\in \mathbb{N}_{0}$). Then the operator $C^{-1}AC$ generates a (C-DS) ${\mathcal G},$ given by $$ {\mathcal G}(\varphi)x=(-1)^{p_{k}}\int \limits^{\infty}_{0}\varphi^{(p_{k})}(t)S_{p_{k}}(t)x\, dt,\quad \varphi \in {\mathcal D}_{(-\infty ,\tau_{k})},\ x\in E. $$ \end{thm} \begin{rem}\label{Banach} In the case that $C=I,$ it suffices to suppose that the operator $A$ is the integral generator of a locally equicontinuous $p$-times integrated semigroup $(S_{p}(t))_{t\in [0,\tau)}$ for some $p\in {\mathbb N}$ and $\tau>0$ (cf. \cite[Remark 3.1.10]{knjigah} for the Banach space case).
\end{rem} Let $\alpha\in(0,\infty)\setminus \mathbb{N}$, $f\in\mathcal{S}$ and $n=\lceil\alpha\rceil$. Let us recall that the Weyl fractional derivative $W^{\alpha}_+$ of order $\alpha$ (cf. \cite{mija} and \cite{knjigah}) is defined by \begin{align*} W^{\alpha}_+f(t):=\frac{(-1)^n}{\Gamma(n-\alpha)}\frac{d^n}{dt^n}\int\limits^{\infty}_t(s-t)^{n-\alpha-1}f(s)\,ds, \;t\in\mathbb{R}. \end{align*} If $\alpha=n\in\mathbb{N}_{0}$, then we set $W^n_+:=(-1)^n\frac{d^n}{dt^n}.$ It is well known that the following equality holds: $W^{\alpha+\beta}_{+}f=W^{\alpha}_{+}W^{\beta}_{+}f$, $\alpha,\,\beta>0,$ $f\in\mathcal{S}$. Suppose that $\alpha\in(0,\infty)\setminus \mathbb{N}$ and $f\in C([0,\infty) : E).$ Set $f_{n-\alpha}(t):=(g_{n-\alpha}\ast f)(t),$ $t\geq 0.$ Making use of the dominated convergence theorem, and the change of variables $s\mapsto s-t,$ we get that $$ \frac{1}{\Gamma(n-\alpha)}\frac{d^n}{dt^n}\int\limits^{\infty}_t(s-t)^{n-\alpha-1}\varphi(s)\,ds= \int \limits^{\infty}_{0}g_{n-\alpha}(s)\varphi^{(n)}(t+s)\, ds,\quad t\geq 0,\ \varphi \in {\mathcal D}. $$ Hence, \begin{align*} \int \limits^{\infty}_{0}W^{\alpha}_+\varphi(t)f(t)\, dt&=(-1)^{n} \int \limits^{\infty}_{0}\! \! \int \limits^{\infty}_{0}g_{n-\alpha}(s)\varphi^{(n)}(t+s)f(t)\, ds \, dt \\ &=(-1)^{n} \int \limits^{\infty}_{0}\! \! \int \limits^{t}_{0}\varphi^{(n)}(t)g_{n-\alpha}(s)f(t-s)\, ds\, dt \\ &=(-1)^{n} \int \limits^{\infty}_{0}\varphi^{(n)}(t)f_{n-\alpha}(t)\, dt,\quad \varphi \in {\mathcal D}.
\end{align*} Therefore, if $A$ is the integral generator of a locally equicontinuous $\alpha$-times integrated $C$-semigroup $(S_{\alpha}(t))_{t\geq 0}$ on $E,$ then we have that: $$ \int^{\infty}_0W^{\alpha}_+\varphi(t)S_{\alpha}(t)x\,dt= (-1)^{n} \int \limits^{\infty}_{0}\varphi^{(n)}(t)S_{n}(t)x\, dt,\quad x\in E,\ \varphi \in {\mathcal D}, $$ with $(S_{n}(t))_{t\geq 0}$ being the locally equicontinuous $n$-times integrated $C$-semigroup generated by $A.$ Combined with Theorem \ref{cyne}, the above implies: \begin{thm}\label{miana} Assume that $\alpha \geq 0$ and $A$ is the integral generator of a locally equicontinuous $\alpha$-times integrated $C$-semigroup $(S_{\alpha}(t))_{t\geq 0}$ on $E.$ Set \begin{equation}\label{globalne} \mathcal{G}_{\alpha}(\varphi)x:=\int^{\infty}_0W^{\alpha}_+\varphi(t)S_{\alpha}(t)x\,dt,\quad x\in E,\ \varphi\in\mathcal{D}. \end{equation} Then $A$ is the integral generator of the (C-DS) ${\mathcal G}_{\alpha}.$ \end{thm} It is well known that the integral generator of a $C$-distribution semigroup in a Banach space can have empty $C$-resolvent set. On the other hand, the existence and polynomial boundedness of the $C$-resolvent of $A$ on a certain exponential region ensure that the operator $C^{-1}AC$ generates a (C-DS). More precisely, we have the following. \begin{thm}\label{herbs} Let $a>0$, $b>0$, $\alpha>0$ and $e(a,b)\subseteq \rho_C(A).$ Suppose that the mapping $\lambda\mapsto(\lambda-A)^{-1}Cx$, $\lambda\in e(a,b)$ is continuous for every fixed element $x\in E$, as well as that the operator family $\{(1+|\lambda|)^{-\alpha}(\lambda- A)^{-1}C : \lambda \in e(a,b)\}\subseteq L(E)$ is equicontinuous. Set \begin{align}\label{franci} \mathcal{G}(\varphi)x:=(-i)\int_{\Gamma}\hat{\varphi}(\lambda)(\lambda-A)^{-1}Cx\,d\lambda, \;\;x\in E,\;\varphi\in\mathcal{D}, \end{align} with $\Gamma$ being the upwards oriented boundary of the region $e(a,b)$. Then $\mathcal{G}$ is a (C-DS) generated by $C^{-1}AC$.
\end{thm} \begin{proof} Without loss of generality, we may assume that, for every $x\in E,$ the mapping $\lambda\mapsto (\lambda-A)^{-1}Cx$ is analytic on some open neighborhood of the region $e(a,b);$ cf. \cite[Proposition 2.16]{sic}. By the argumentation given in the proof of \cite[Theorem 3.1.27]{knjigah}, it readily follows that $\mathcal{G}\in {\mathcal D}^{\prime}_{0}(L(E)),$ as well as that $\mathcal{G}(\varphi)C^{l}((z-A)^{-1}C)^{m}=C^{l}((z-A)^{-1}C)^{m}\mathcal{G}(\varphi),$ $\varphi \in {\mathcal D}$ ($m,\ l\in {\mathbb N}_{0}$). Suppose that $B$ is a bounded subset of ${\mathcal D}.$ Then there exists $\tau>0$ such that $B$ is contained and bounded in ${\mathcal D}_{[-\tau,\tau]}.$ Since \begin{equation}\label{mos-def} \hat{\varphi}(\lambda)=\frac{(-1)^{n}}{2\pi \lambda^{n}}\int^{\infty}_{-\infty}e^{\lambda t}\varphi^{(n)}(t)\, dt,\quad \varphi \in {\mathcal D},\ \lambda \in {\mathbb C} \setminus \{0\} , \end{equation} we obtain that for each $n\in {\mathbb N}$ there exists $c_{n}>0$ such that for each $\varphi \in B$ we have $|\hat{\varphi}(\lambda)|\leq c_{n}e^{\tau \Re \lambda}|\lambda|^{-n},$ $\lambda \in {\mathbb C} \setminus \{0\} .$ Keeping in mind this estimate and the equicontinuity of the family $\{(1+|\lambda|)^{-\alpha}(\lambda- A)^{-1}C : \lambda \in e(a,b)\}$, it can be simply proved that ${\mathcal G}$ is boundedly equicontinuous. Similarly as in the proof of \cite[Theorem 3.1.27]{knjigah}, we have that $\mathcal{G}(\varphi)A\subseteq A{\mathcal G}(\varphi),$ $\varphi \in {\mathcal D}$ and (\ref{dkenk}) holds. By Theorem \ref{fundamentalna}(i), we get that ${\mathcal G}$ satisfies (C.S.1). 
In order to prove (C.S.2), suppose that ${\mathcal G}(\varphi)x'=0$, $\varphi \in {\mathcal D}_{0}$ for some element $x'\in E.$ Owing to \cite[Theorem 4.4(ii)]{sic}, we know that there exist numbers $n\in {\mathbb N},$ $n>\alpha+1,$ and $\tau'>0$ such that the operator $A$ ($C^{-1}AC$) is a subgenerator (the integral generator) of a locally equicontinuous non-degenerate $n$-times integrated $C$-semigroup $(S_{n}(t))_{t\in [0,\tau')},$ given by $$ S_{n}(t)x=\frac{1}{2\pi i}\int_{\Gamma}e^{\lambda t}\lambda^{-n}\bigl(\lambda-A\bigr)^{-1}Cx\, d\lambda,\quad x\in E,\ t\in [0,\tau'). $$ Integration by parts implies that $\int^{\infty}_{0}\varphi^{(n)}(t)e^{\lambda t}\, dt=(-1)^{n}\lambda^{n}\int^{\infty}_{0}\varphi (t)e^{\lambda t}\, dt,$ $\lambda \in {\mathbb C},$ $\varphi \in {\mathcal D}_{(0,\tau')}.$ Exploiting this equality, the Fubini theorem, and the foregoing arguments, it can be simply verified that: $$ {\mathcal G}(\varphi)x=(-1)^{n}\int^{\infty}_{0}\varphi^{(n)}(t)S_{n} (t)x\, dt,\quad x\in E,\ \varphi \in {\mathcal D}_{(0,\tau')}. $$ This, in particular, holds with $x=x',$ so that Lemma \ref{polinomi} implies that there exist elements $x_{0},\ldots ,x_{n-1}\in E$ such that $S_{n}(t)x'=\sum^{n-1}_{j=0}t^{j}x_{j},$ $t\in [0,\tau').$ Setting $t=0,$ we get that $x_{0}=0.$ Hence, $$ A\sum \limits^{n-1}_{j=1}\frac{t^{j+1}}{j+1}x_{j}=\sum_{j=1}^{n-1}t^{j}x_{j}-\frac{t^{n}}{n!}Cx',\quad t\in [0,\tau'), $$ which implies $x_{1}=\cdots =x_{n-1}=0,$ and consequently, $x'=0,$ because $(S_{n}(t))_{t\in [0,\tau')}$ is non-degenerate. We have proved that ${\mathcal G}$ is a (C-DS).
By Theorem \ref{fundamentalna}(ii), the integral generator of ${\mathcal G}$ is the operator $C^{-1}AC.$ \end{proof} \begin{rem}\label{filipa-1} \begin{itemize} \item[(i)] In the proof of \cite[Theorem 3.1.27]{knjigah}, the equation (\ref{dkenk}) and the structural theorem for vector-valued distributions supported by a point have been essentially used in proving the property (C.S.2) for ${\mathcal G}.$ Observe that we do not assume here that the space $E$ is admissible. \item[(ii)] Now we would like to explain how one can prove the property (C.S.1) for ${\mathcal G}$ by using direct computations. Let us fix two test functions $\varphi,$ $\psi \in {\mathcal D},$ an element $x\in E$ and a number $z\in \rho_{C}(A) \setminus e(a,b).$ Using the generalized resolvent equation \cite[(6)]{knjigaho}, we get that the operator family $\{(1+|\lambda|)^{-1}(\lambda- A)^{-1}C((z-A)^{-1}C)^{\lceil \alpha \rceil +1} : \lambda \in e(a,b)\}\subseteq L(E)$ is equicontinuous. Set $y:=((z-A)^{-1}C)^{\lceil \alpha \rceil +1}x.$ Making use of the equation (\ref{mos-def}) with $n=1,$ integration by parts, and the Cauchy formula, one obtains: \begin{align} \notag {\mathcal G}(\varphi)y&=\frac{(-1)}{2\pi i}\int_{\Gamma}\! \int_{-\infty}^{\infty}\frac{1}{\lambda}e^{\lambda t}\varphi^{\prime}(t)\bigl (\lambda-A\bigr)^{-1}Cy\, dt \, d\lambda \\\notag & = \frac{(-1)}{2\pi i}\int_{\Gamma}\! \int_{0}^{\infty}\frac{1}{\lambda}e^{\lambda t}\varphi^{\prime}(t)\bigl (\lambda-A\bigr)^{-1}Cy\, dt\, d\lambda \\\notag & =\frac{1}{2\pi i}\int_{\Gamma}\Biggl[ \frac{\varphi (0)}{\lambda} + \int^{\infty}_{0}e^{\lambda t}\varphi(t)\, dt \Biggr]\bigl (\lambda-A\bigr)^{-1}Cy\, d\lambda \\\label{mos-def-1} & =\frac{1}{2\pi i}\int_{\Gamma}\! \int^{\infty}_{0}e^{\lambda t}\varphi(t)\bigl (\lambda-A\bigr)^{-1}Cy\, dt \, d\lambda,\quad \varphi \in {\mathcal D}.
\end{align} This simply implies \begin{align}\label{mos-def-2} {\mathcal G}(\varphi \ast_{0}\psi)y=\frac{1}{2\pi i}\int_{\Gamma} \ \int_{0}^{\infty} \int_{0}^{\infty} e^{\lambda (t+s)}\varphi(t)\psi (s)\bigl (\lambda-A\bigr)^{-1}Cy\, dt\, ds\, d\lambda. \end{align} Let $\theta \in (0,\pi/2)$ be a fixed angle. Considering, for a sufficiently large number $R>0,$ the positively oriented curve $\Gamma':=\Gamma_{1}' \cup \Gamma_{2}' \cup \Gamma_{3}' \cup \Gamma_{4}',$ where $\Gamma_{1}':=\{t-iR\cos \theta : t\in [-R\sin \theta , a^{-1}\ln (R\cos \theta)]\},$ $\Gamma_{2}':=\{\lambda \in \Gamma: |\Im \lambda|\leq R\cos \theta\},$ $\Gamma_{3}':=\{t+iR\cos \theta : t\in [-R\sin \theta , a^{-1}\ln (R\cos \theta)]\}$ and $\Gamma_{4}':=\{Re^{i\vartheta} : \vartheta \in [\theta +(\pi/2),(3\pi/2)-\theta]\},$ and applying the Cauchy formula, we get that \begin{equation}\label{mos-def-3} \frac{1}{2\pi i}\int_{\Gamma}\frac{\int^{\infty}_{0}e^{\lambda t}\varphi (t)\, dt}{\lambda-\eta}\, d\lambda=0,\quad \varphi \in {\mathcal D},\ \eta \notin \Gamma. \end{equation} Let $\Gamma_{1}$ be the positively oriented boundary of the region $e(a_{1},b_{1}),$ where $0<a_{1}<a$ and $b_{1}>b.$ Noticing that, for every fixed number $\lambda \in \Gamma,$ the operator $A=\lambda$ generates the strongly continuous semigroup $(e^{\lambda t})_{t\geq 0}$ on ${\mathbb C},$ and that every $C$-distribution semigroup on a Banach space is uniquely determined by its generator, we can apply \cite[Theorem 3.1.27]{knjigah} in order to see that \begin{equation}\label{integral-lkj} (-i)\int_{\eta \in \Gamma_{1}}\frac{\hat{\psi}(\eta)}{\eta-\lambda}\, d\eta=\int^{\infty}_{0}e^{\lambda t}\psi(t)\, dt. 
\end{equation} Set $\hat{\varphi}_{+}(\lambda):=\int^{\infty}_{0}e^{\lambda t}\varphi (t)\, dt,$ $\lambda \in {\mathbb C}.$ Using the Cauchy formula, (\ref{mos-def-1}), (\ref{mos-def-3})-(\ref{integral-lkj}), the resolvent equation, and the Fubini theorem, we obtain: \begin{align*} {\mathcal G}(\varphi ){\mathcal G}(\psi)y&=\frac{(-1)}{2\pi}\int_{\Gamma} \! \int_{\Gamma_{1}}\hat{\varphi}_{+}(\lambda)\hat{\psi}(\eta)\frac{\bigl(\lambda-A\bigr)^{-1}C^{2}y-\bigl(\eta-A\bigr)^{-1}C^{2}y}{\eta -\lambda}\, d\eta \, d\lambda \\& =\frac{1}{2\pi i}\int_{\Gamma}\! \int^{\infty}_{0}\hat{\varphi}_{+}(\lambda)e^{\lambda t}\psi(t) \bigl(\lambda-A\bigr)^{-1}C^{2}y\, dt\, d\lambda \\ &-\frac{1}{2\pi}\int_{\Gamma_{1}} \! \Biggl( \int_{\Gamma}\frac{\int^{\infty}_{0}e^{\lambda t}\varphi (t)\, dt}{\lambda-\eta}\, d\lambda\Biggr) \hat{\psi}(\eta)\bigl(\eta-A\bigr)^{-1}C^{2}y\, d\eta \\&=\frac{C}{2\pi i}\int_{\Gamma}\! \int^{\infty}_{0}\hat{\varphi}_{+}(\lambda)e^{\lambda t}\psi(t) \bigl(\lambda-A\bigr)^{-1}Cy\, dt\, d\lambda \\& =\frac{C}{2\pi i}\int_{\Gamma} \ \int_{0}^{\infty} \int_{0}^{\infty} e^{\lambda (t+s)}\varphi(s)\psi (t)\bigl (\lambda-A\bigr)^{-1}Cy\, ds\, dt\, d\lambda \\&=C{\mathcal G}\bigl(\varphi \ast_{0} \psi \bigr)y. \end{align*} Because ${\mathcal G}(\zeta)$ commutes with the operator $((z-A)^{-1}C)^{\lceil \alpha \rceil +1}$ ($\zeta \in {\mathcal D}$), the above computation, combined with (\ref{mos-def-2}), implies that ${\mathcal G}(\varphi) {\mathcal G}(\psi)x={\mathcal G}(\varphi \ast_{0} \psi)Cx$ and that (C.S.1) holds. \item[(iii)] The assertion of \cite[Proposition 3.1.28(i)]{knjigah} continues to hold in locally convex spaces. \end{itemize} \end{rem} In the remaining part of this section, we shall reconsider the definition of a regular distribution semigroup given by J. L. Lions \cite{li121} and prove some results on dense (C-DS)'s ((C-UDS)'s of $\ast$-class).
Suppose that $\mathcal{G}\in\mathcal{D}'_0(L(E))$ ($\mathcal{G}\in\mathcal{D}'^{\ast}_0(L(E))$) is boundedly equicontinuous. We analyze the following conditions for ${\mathcal G}$: \begin{itemize} \item[$(d_1)$] $\mathcal{G}(\varphi*\psi)C=\mathcal{G}(\varphi)\mathcal{G}(\psi)$, $\varphi,\,\psi\in\mathcal{D}_0$ ($\varphi,\,\psi\in\mathcal{D}^{\ast}_0$); \item[$(d_2)$] the same as (C.S.2); \item[$(d_3)$] $\mathcal{R}(\mathcal{G})$ is dense in $E$; \item[$(d_4)$] for every $x\in\mathcal{R}(\mathcal{G})$, there exists a function $u_x\in C([0,\infty):E)$ such that $u_x(0)=Cx$ and $\mathcal{G}(\varphi)x=\int_0^{\infty}\varphi(t)u_x(t)\,dt$, $\varphi\in\mathcal{D}$ ($\varphi\in\mathcal{D}^{\ast}$); \item[$(d_5)$] provided that $(d_2)$ holds, $\mathcal{G}(\varphi_+)C=\mathcal{G}(\varphi)$, $\varphi\in\mathcal{D}$ ($\varphi\in\mathcal{D}^{\ast}$). \end{itemize} Using the same arguments as in the Banach space case (\cite{knjigah}), we can prove the following theorem. \begin{thm}\label{wu-tang} \begin{itemize} \item[(i)] Suppose that $\mathcal{G}\in\mathcal{D}'_0(L(E))$ ($\mathcal{G}\in\mathcal{D}'^{\ast}_0(L(E))$) is boundedly equicontinuous and $\mathcal{G}C=C\mathcal{G}$. Then $\mathcal{G}$ is a $($C-DS$)$ ($($C-UDS$)$ of $\ast$-class) iff $(d_1)$, $(d_2)$ and $(d_5)$ hold. \item[(ii)] Suppose that $\mathcal{G}\in\mathcal{D}'_0(L(E))$ ($\mathcal{G}\in\mathcal{D}'^{\ast}_0(L(E))$) satisfies $(d_1)$, $(d_2)$, $(d_3)$, $(d_4)$ and $\mathcal{G}C=C\mathcal{G}$. If ${\mathcal G}$ is boundedly equicontinuous, then $\mathcal{G}$ is a $($C-DS$)$ ($($C-UDS$)$ of $\ast$-class). \item[(iii)] Let $\mathcal{G}$ be a $($C-DS$)$ ($($C-UDS$)$ of $\ast$-class). Then $\mathcal{G}$ satisfies $(d_4)$. \end{itemize} \end{thm} \section{Stationary dense operators in locally convex spaces} Following P. C. Kunstmann \cite{ku101}, we introduce the notion of a stationary dense operator in a sequentially complete locally convex space as follows.
\begin{defn}\label{pc-kunst} A closed linear operator $A$ is said to be stationary dense iff $$ n(A):=\inf\bigl\{k\in\mathbb{N}_0:D(A^m)\subseteq\overline{D(A^{m+1})}\text{ for all }m\geq k\bigr\}<\infty. $$ \end{defn} The abstract Cauchy problem \[(ACP_1):\left\{ \begin{array}{l} u\in C([0,\tau):[D(A)])\cap C^1([0,\tau):E),\\ u'(t)=Au(t),\;t\in[0,\tau),\\ u(0)=x, \end{array} \right. \] where $A$ is a closed linear operator on $E$ and $0<\tau \leq \infty,$ has been analyzed in a great number of research papers and monographs (see e.g. \cite{a22}-\cite{b42}, \cite{cha}-\cite{c62}, \cite{d81}-\cite{engel}, \cite{hiber1}-\cite{ki90}, \cite{komatsulocally}-\cite{knjigaho}, \cite{ko98}, \cite{ptica}-\cite{tica}, \cite{ku101}-\cite{li121}, \cite{me152}, \cite{pa11} and \cite{ush}-\cite{yosi}). By a mild solution of problem $(ACP_1)$ we mean any continuous function $t\mapsto u(t;x),$ $t\in [0,\tau)$ such that $A\int^{t}_{0}u(s;x)\, ds=u(t;x)-x,$ $t\in [0,\tau).$ \begin{prop}\label{pc-kunst-isto} Let $0<\tau \leq \infty $ and $n\in {\mathbb N}_{0}.$ \begin{itemize} \item[(i)] Suppose that the abstract Cauchy problem $(ACP_1)$ has a unique mild solution $u(t;x)$ for all $x\in D(A^{n}).$ Then $A$ is stationary dense and $n(A)\leq n.$ \item[(ii)] Suppose that $(S_{n}(t))_{t\in [0,\tau)}$ is a locally equicontinuous $n$-times integrated semigroup generated by $A.$ Then $A$ is stationary dense and $n(A)\leq n.$ \end{itemize} \end{prop} \begin{proof} One has to use the arguments given in the proof of \cite[Lemma 1.7]{ku101} (cf. also \cite[Remark 1.2(i)]{ku101}) and the fact that for any locally equicontinuous $n$-times integrated semigroup $(S_{n}(t))_{t\in [0,\tau)}$ generated by $A$ the abstract Cauchy problem $(ACP_1)$ has a unique mild solution for all $x\in D(A^{n}),$ given by $ u(t;x)=S_{n}(t)A^{n}x+\sum^{n-1}_{j=0}\frac{t^{j}}{j!}A^{j}x,$ $t\in [0,\tau)$ (\cite{a22}).
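For the reader's convenience, let us also indicate how one verifies directly that the function $u(t;x)$ above is a mild solution of $(ACP_1);$ here we only use the standard identity $A\int^{t}_{0}S_{n}(s)y\, ds=S_{n}(t)y-\frac{t^{n}}{n!}y,$ $y\in E,$ $t\in [0,\tau),$ satisfied by any locally equicontinuous $n$-times integrated semigroup generated by $A$:
\begin{align*}
A\int^{t}_{0}u(s;x)\, ds&=S_{n}(t)A^{n}x-\frac{t^{n}}{n!}A^{n}x+\sum \limits^{n-1}_{j=0}\frac{t^{j+1}}{(j+1)!}A^{j+1}x
\\ &=S_{n}(t)A^{n}x+\sum \limits^{n-1}_{j=1}\frac{t^{j}}{j!}A^{j}x=u(t;x)-x,\quad t\in [0,\tau).
\end{align*}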
\end{proof} \begin{lem}\label{lema0} Let $A$ be a closed operator in a sequentially complete locally convex space and let $(\lambda_n)$ be a sequence in $\rho(A)$ such that $\lim_{n\rightarrow\infty}|\lambda_n|=\infty$ and there exist $C>0$ and $k\geq-1$ such that for every $p\in\circledast$ there exists $q\in\circledast$ such that $p(R(\lambda_n:A)x)\leq C|\lambda_n|^kq(x)$ for all $x\in E$ and $n\in\mathbb N$. Then $A$ is stationary dense with $n(A)\leq k+2$.\end{lem} \begin{proof} Let $x\in D(A^{k+1})$. Then for every $p\in\circledast$, there exists $q\in\circledast$, such that $p(\lambda_n R(\lambda_n:A)x)\leq \|R(\lambda_n:A)\| q(Ax)$. Now for $x\in D(A^{k+2})$, it follows that $\lambda_n R(\lambda_n:A)x\in D(A^{k+3})$ for all $n\in\mathbb N$. Furthermore, $$p(\lambda_n R(\lambda_n:A)x-x)=p(R(\lambda_n:A)Ax)\leq \frac{C'}{|\lambda_n|}q(Ax).$$ Hence, $$x=\lim\limits_{n\longrightarrow\infty}\lambda_n R(\lambda_n:A)x$$ and $x$ belongs to the closure of $D(A^{k+3})$, which means that $A$ is stationary dense and $n(A)\leq k+2$. \end{proof} We say that the operator $A$ satisfies the condition $(EQ)$ if:\\ $A_{\infty}$ is the generator of an equicontinuous semigroup $T_{\infty}(t)$ in $D_{\infty}(A)$, i.e. for every $p\in\circledast$ there exist $q\in\circledast$ and $C>0$ such that $$p(T_{\infty}(t)x)\leq C q(x),$$ for every $x\in D_{\infty}(A)$.\\ Using the results given in \cite{ku101} and \cite[Theorem 4.1]{ush}, we can state similar results in our setting ($E$ is a sequentially complete locally convex space). Assume that $A$ is stationary dense, satisfies $(EQ)$, $n=n(A)$ and $F$ is the closure of $D(A^n)$ in $E$. \begin{lem}\label{lema1234} \begin{itemize} \item[a)]$A_F$ is densely defined in $F$, where $A_F$ denotes the restriction of the operator $A$ to $F$; \item[b)] \label{lema2} $\rho(A;L(E))=\rho(A_F;L(F))$.
Additionally, for all $x\in E$ and $p\in\circledast$, there exist $n\in{\mathbb N}$, $C>0$ such that $$p(R(\lambda:A)x)\leq C(1+|\lambda|)^n(p(R(\lambda:A_F)x)+1);$$ \item[c)]\label{lema3} The Fr\'echet spaces $D_{\infty}(A)$ and $D_{\infty}(A_F)$ coincide topologically and $A_{\infty}=(A_F)_{\infty}$; \item[d)]$n(A)=\inf\{k\in{\mathbb N}_0\, :\, \overline{D(A^k)}\subset D_{\infty}(A)\}.$\end{itemize} \end{lem} \begin{proof}\begin{itemize} \item[a)] It is obvious since $D(A^n)$ is dense in $F$ and $D(A^n)\subset D(A_F)$. \item[b)] Since $D(A^n)\subset D(A)$ and $F$ is invariant under the resolvent of $A$, we obtain $\rho(A;L(E))\subset\rho(A_F;L(F))$. Now, let $\lambda\in\rho(A_F;L(F))$. Then $\lambda-A$ is injective, so $R(\lambda:A)$ is an extension of $R(\lambda:A_F)$. Let $\mu\in\rho(A)$. By $R(\lambda:A)=(\mu-A)^nR(\lambda:A_F)R(\mu:A)^n$, we obtain that $\lambda\in\rho(A;L(E))$.\\ It holds, for $x\in E$ and all $p\in\circledast$, $$p(R(\lambda:A)x)\leq p_n(R(\mu:A)^nx)p_n(R(\lambda:A_F)x)p(R(\mu:A)^nx)$$ where $p_n(x)=\sup\limits_{p\in\circledast}\sum\limits_{i=1}^{n}p(A^ix)$. Note that $(D(A^n),p_n)$ is a Banach space. For $x\in D(A^n)$ we have $$p_n(R(\lambda:A_F)x)=\sup\limits_{p\in\circledast}\sum\limits_{i=0}^{n}p(A^iR(\lambda:A_F)x)=$$ $$=\sup\limits_{p\in\circledast}\sum\limits_{i=0}^{n}p(({\lambda}^i+A^i-{\lambda}^i)R(\lambda:A_F)x)\leq$$ $$\leq\sup\limits_{p\in\circledast}\sum\limits_{i=0}^{n}{\Biggl(}{|\lambda|}^i p(R({\lambda}:A_F)x)+ \sum\limits_{j=0}^{i-1}{|\lambda|}^j p(A^{i-1-j}x){\Biggr)}.$$ The last inequality, together with the previous one, gives the statement of the lemma. \item[c)] It holds that $D(A^{n+k})\hookrightarrow D((A_F)^k)\hookrightarrow D(A^k)$ for all $k\in{\mathbb N}_0$.
Then $$\bigcap\limits_{k=1}^{\infty}D(A^{n+k})\hookrightarrow\bigcap\limits_{k=1}^{\infty}D(A_F^k) \hookrightarrow\bigcap\limits_{k=1}^{\infty}D(A^k),$$ which gives that $D_{\infty}(A)$ and $D_{\infty}(A_F)$ coincide topologically and $A_{\infty}=(A_F)_{\infty}$. \item[d)] From part c), $D_{\infty}(A)=D_{\infty}(A_F)$, and $D(A^{n+1})\subset D(A_F)$, we obtain the conclusion of the lemma.\end{itemize}\end{proof} \begin{thm}\label{eqeq} Let $A$ be a stationary dense operator in $E$ with non-empty resolvent. Then $\rho(A;L(E))=\rho(A_{\infty};L(D_{\infty}(A)))$.\end{thm} \begin{proof} By Lemma \ref{lema1234} a) we have that $A_F$ is dense in $F$ and by b) from the same lemma it has non-empty resolvent. Then using the proof of ~\cite[Theorem 2.3]{ku101} and Lemma \ref{lema1234} c), we obtain that $\rho(A;L(E))=\rho(A_{\infty};L(D_{\infty}(A)))$. \end{proof} The next example shows us that if $A$ is not a stationary dense operator in $E$, then the conclusion of Theorem \ref{eqeq} does not hold. \begin{example} Let us define the space $S_j$ as $$S_j=\{\varphi\in{\mathcal C}^{\infty}({\mathbb R})\, :\, p_j(\varphi)=\sup\limits_{\alpha+\beta\leq j}\|x^{\beta}D^{\alpha}\varphi(x)\|_{L^2({\mathbb R})}<\infty\}.$$ Then the test space for tempered distributions ${\mathcal S}({\mathbb R})$ can be defined as ${\mathcal S}({\mathbb R})=\lim\limits_{j\rightarrow\infty}\mbox{proj} S_j$. Let $E={\mathcal S}({\mathbb R})$ ($E$ is a Fr\'echet space as a projective limit of Banach spaces, so $E$ is a sequentially complete locally convex space). Define $A=-\frac{d}{dt}$ on $E$ with domain $D(A)=\{f\in E\, :\, f(0)=0\}$. The operator $A$ is not stationary dense on $E$. Note that $D_{\infty}(A)=\{0\}$ and $\rho(A_{\infty})=\{\lambda\in{\mathbb C}\, :\, \lambda\neq 0\}$. For $f\in E$, $\lambda\in{\mathbb C}$, and $\Re\lambda>s$, we have that $$(\lambda-A)^{-1}f=\int\limits_0^{\infty}e^{-(\lambda-s)t}f(t)\, dt$$ belongs to $E$. Then $\rho(A)=\{\lambda\in{\mathbb C}\, :\, \Re\lambda>s\}$.
Therefore, we obtain that the conclusion of Theorem \ref{eqeq} does not hold. The same conclusion can be drawn in the ultradistribution case. Consider the spaces $$E_{(h)}=\{f\in C^{\infty}({\mathbb R})\, :\, f(0)=0,\, p_n(f)=\sup\limits_{k\in{\mathbb N}_0}\sup\limits_{t\geq k}\frac{h^n|{f}^{(n)}(t)|}{M_n}<+\infty\}$$ $$E_{\{h\}}=\{f\in C^{\infty}({\mathbb R})\, :\, f(0)=0,\, p_n(f)=\sup\limits_{k\in{\mathbb N}_0}\sup\limits_{t\geq k}\frac{h^n|{f}^{(n)}(t)|}{M_n\prod\limits_{i=1}^nr_i}<+\infty\}$$ for a monotonically increasing positive sequence $(r_i)$ and $(M_n)$ satisfying $(M.1)$ and $(M.3)'$. The spaces $E_{(h)}$ and $E_{\{h\}}$ are Fr\'echet spaces. Let $A=-\frac{d}{ds}$ with domain $D(A)=\{f\in{E_h}\, : \, f(0)=0, Af\in E_h\}$, where $E_h$ stands for both spaces. Let $p_n\in\circledast$ be a seminorm in $E_{\{h\}}$. The previous consideration in the distribution case $E={\mathcal S}({\mathbb R})$ is similar to, and simpler than, the case of the spaces $E_{(h)}$ and $E_{\{h\}}$. \end{example} \begin{thm}\label{kor1} Let $A$ be a stationary dense operator in a sequentially complete locally convex space $E$, $n=n(A)$ and $F=\overline{D(A^n)}$. Then $A$ generates a distribution semigroup in $E$ if and only if $A_F$ generates a distribution semigroup in $F$.\end{thm} \begin{proof}The proof of this theorem is a direct consequence of \cite[Theorem 2.7]{tica}, Lemma \ref{lema0} and Lemma \ref{lema1234} b). \end{proof} The following two results are due to T. Ushijima; we restate them in the locally convex case as follows. \begin{thm}\label{pom1} Let $E$ be a sequentially complete locally convex space and $A$ be a closed operator with dense $D(A^{\infty})$.
Then $A$ has the property $(EQ)$ if and only if there exist a logarithmic region $\Omega_{\alpha,\beta}$, $k\in{\mathbb N}$ and $C>0$ such that $$ ||R(\lambda : A)||\leq C(1+|\lambda|)^k, \quad \lambda\in \Omega_{\alpha,\beta}.$$ \end{thm} \begin{thm} Let $E$ be a sequentially complete locally convex space and $A$ a linear operator on $E$. The following conditions are equivalent: \begin{itemize} \item[i)] $A$ is the generator of a distribution semigroup $G$; \item[ii)] $A$ is well-posed and densely defined; \item[iii)] $A$ satisfies the condition $(EQ)$; \item[iv)] $A$ is densely defined, and there exist an adjoint logarithmic region $\Omega_{\alpha,\beta}$, $k\in{\mathbb N}$ and $C>0$ such that $$ ||R(\lambda : A)||\leq C(1+|\lambda|)^k, \quad \lambda\in \Omega_{\alpha,\beta}.$$ \end{itemize} \end{thm} \begin{proof} We will prove that all four conditions are equivalent. The statements $i)\Leftrightarrow ii)$ and $iv)\Rightarrow i)$ follow from \cite[Theorem 2.7]{tica} and $iii)\Rightarrow iv)$ follows by Theorem \ref{pom1}. It remains to show that $i)\Rightarrow iii)$ and $iv)\Rightarrow ii)$.\\ $i)\Rightarrow iii)$: By the definition of the distribution semigroup, we can conclude that $D(A^{\infty})$ contains ${\mathcal R}(G)$. Since ${\mathcal R}(G)$ is dense in $E$, $D(A^{\infty})$ is dense in $E$. By the results in the third section of \cite{ush1}, we obtain that $$({\boldsymbol\lambda}-{\mathbf A}_{\infty})^{-1}(1\otimes x)(\hat{\varphi})=G(\varphi)x,$$ for any $\varphi\in\DD$ and $x\in D_{\infty}(A)$. Note that $({\boldsymbol\lambda}-{\mathbf A}_{\infty})^{-1}$ denotes the generalized resolvent. Let $f_{\lambda}(t)=\mu(t)e^{-\lambda t}$, where $\mu\in\DD$ and $\mu(t)=1$ for $|t|\leq1$. Define the operator $R_{\infty}(\lambda)=G(f_{\lambda})$.
Since $G\in\DD'_0(L(E))$, for all $p\in\circledast$, there exists $q\in\circledast$, $k\in{\mathbb N}$, $C>0$ such that for all $x\in E$, $\lambda\in{\mathbb C}$, $\Re\lambda\geq0$ $$p(R_{\infty}(\lambda)x)\leq C (1+|\lambda|)^kq(x).$$ Now, for any $x\in D_{\infty}(A)$, $\varphi\in\DD$, $a>0$, $$\frac{1}{i}\int\limits_{a-i\infty}^{a+i\infty}\hat{\varphi}(\lambda)R_{\infty}(\lambda)x\, d\lambda=\int\limits_{a-i\infty}^{a+i\infty}{\Bigl(}\frac{1}{2\pi i}\int\limits_{-\infty}^{\infty}\varphi(t) e^{t\lambda}\, dt{\Bigr)}G(f_{\lambda})\, d\lambda\, x=$$ $$=G{\Bigl(}\mu(s)\frac{1}{2\pi i}\int\limits_{a-i\infty}^{a+i\infty}\, d\lambda\int\limits_{-\infty}^{+\infty}\varphi(t)e^{(t-s)\lambda}\, dt{\Bigr)}x=G(\mu\cdot\varphi)x=$$ $$=G(\varphi)x=({\boldsymbol{\lambda}}-{\textbf A}_{\infty})^{-1}(1\otimes x)(\hat{\varphi}).$$ Again, using the third section of \cite{ush1}, we obtain $$\int_0^{\infty}\varphi(t)T_{\infty}(t)x\, dt=G(\varphi)x,$$ for all $\varphi\in\DD$ and $x\in D_{\infty}(A)$, which gives $iii)$. $iv)\Rightarrow ii)$: The proof is the same as that of $(v)\Rightarrow (ii)$ in~\cite[Theorem 4.1]{ush}. \end{proof} \begin{thm} Let $A$ be a closed operator in $E$. Then $A$ generates a distribution semigroup in $E$ if and only if $A$ is stationary dense and $A_{\infty}$ generates an equicontinuous semigroup in $D_{\infty}(A)$.\end{thm} \begin{proof} Suppose that $A$ generates a distribution semigroup in $E$. By a simpler version of \cite[Theorem 2.7]{tica} and Lemma \ref{lema0}, $A$ is stationary dense. Now, let $n=n(A)$ and $F=\overline{D(A^n)}$. Then by Lemma \ref{lema1234} a) and Theorem \ref{kor1}, $A_F$ generates a dense distribution semigroup in $F$. Then $(A_F)_{\infty}$ generates an equicontinuous semigroup in $D_{\infty}(A_F)$. The conclusion then follows by Lemma \ref{lema1234} c).\\ For the opposite direction, let $A$ be stationary dense, let $A_{\infty}$ generate an equicontinuous semigroup, and put $n=n(A)$ and $F=\overline{D(A^n)}$. Then $A_F$ generates a distribution semigroup in $F$.
By Theorem \ref{kor1} we obtain that $A$ generates a distribution semigroup in $E$. \end{proof} \begin{thm} Let $A$ be a closed operator in $E$. Then $A$ generates an exponential distribution semigroup in $E$ if and only if $A$ is stationary dense and $A_{\infty}$ generates a quasi-equicontinuous semigroup in $D_{\infty}(A)$.\end{thm} The proof is a direct consequence of the results on exponential distribution semigroups listed above.
\section{Basic Formulae} Out of the Weisskopf-Wigner approximation we will essentially need only the part which has to do with the eigenvectors of the effective, non-hermitian Hamiltonian which is the result of two approximations made in the Schr\"odinger equation \cite{nach}, \cite{kabir3}. This part defines the $K_S$ and $K_L$ states in the usual way \begin{eqnarray} \label{2.1} &\mbox{$|K_S\rangle$} = p\mbox{$|K^0 \rangle$} + q \mbox{$|\bar{K^0}\rangle$}, \, \, \, \, \mbox{$|K_L\rangle$} = p\mbox{$|K^0 \rangle$} -q\mbox{$|\bar{K^0}\rangle$} \\ \label{2.2} &\mbox{$\langle K_S \ks$} = \mbox{$\langle K_L \kl$} = |p|^2 + |q|^2 =1 \\ \label{2.3} &\mbox{$\langle K_S \kl $} = \mbox{$\langle K_L \ks $} =|p|^2 - |q|^2 \neq 0 \end{eqnarray} The equality $\mbox{$\langle K_S \kl $} =\mbox{$\langle K_L \ks $}$ in eq.(\ref{2.3}) is imposed by CPT-invariance which we will assume to hold throughout the paper. The presence of CP-violation in the mixing is reflected by $|p|^2 - |q|^2 \neq 0$ which enforces the states $K_S$ and $K_L$ to be non-orthogonal to each other. Another often used parametrization of the mixing parameters is given by \begin{equation} \label{2.4} p={1+ \epsilon_K \over \sqrt{2(1 + |\epsilon_K|^2)}},\, \, \, \, \, q={1- \epsilon_K \over \sqrt{2(1 + |\epsilon_K|^2)}} \end{equation} which makes contact with the $\epsilon_K$ parameter mentioned in the introduction. 
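For later use it is convenient to record, in a one-line computation from eq.(\ref{2.4}), how the non-orthogonality in eq.(\ref{2.3}) is expressed through $\epsilon_K$:

```latex
% |p|^2 - |q|^2 in terms of \epsilon_K, from the parametrization (2.4):
\begin{equation*}
|p|^2-|q|^2
=\frac{|1+\epsilon_K|^2-|1-\epsilon_K|^2}{2(1+|\epsilon_K|^2)}
=\frac{4\,\Re e\, \epsilon_K}{2(1+|\epsilon_K|^2)}
=\frac{2\,\Re e\, \epsilon_K}{1+|\epsilon_K|^2}\, .
\end{equation*}
```

In particular, the non-orthogonality is of first order in $\epsilon_K$ and vanishes if and only if $\epsilon_K$ is purely imaginary.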
Since the CP-violation in the $K^0-\bar{K^0}$ system (or equivalently the non-orthogonality of $K_S$ and $K_L$) will play an important role we define for the sake of a short notation \begin{equation} \label{2.5} \mbox{$\Delta_K$} \equiv |p|^2 - |q|^2 \end{equation} Let us also note here that eqs.(\ref{2.1})-(\ref{2.3}) come out naturally in the context of the WW-approximation; however, independently of this approximation, once one assumes $K^0 \leftrightarrow \bar{K^0}$ mixing with CP-violation in the mixing and implements the CPT-constraints, there is not much choice left other than to postulate eqs.(\ref{2.1})-(\ref{2.3}) for the $K_S$ and $K_L$ states, up to possible contributions from continuum states which we neglect (for a different point of view, where in the context of a generalized quantum mechanical vector space $K_S$ and $K_L$ are orthogonal, see \cite{suder2} and references therein). Hence eqs.(\ref{2.1})-(\ref{2.3}) have a much broader applicability than the part of the WW approximation which determines the time dependence of transition and survival amplitudes. Given a full, hermitian Hamiltonian $H$, according to the general principles of Quantum Mechanics the time evolution for $K^0$ and $\bar{K^0}$ can be summarized as follows \begin{eqnarray} \label{2.6} &P_{K_{\alpha}K_{\beta}}(t)=\langle K_{\alpha}|e^{-iHt}|K_{\beta}\rangle =\langle K_{\alpha}| K_{\beta}(t)\rangle \nonumber \\ &|K_{\alpha}(t)\rangle = e^{-iHt}|K_{\alpha}\rangle \nonumber \\ &K_{\alpha}= K^0,\, \bar{K^0}, \end{eqnarray} Due to the non-orthogonality of $K_S$ and $K_L$ there is a subtle difference between the treatment of the time evolution of $K^0$, $\bar{K^0}$ and $K_S$, $K_L$.
For the former the $P_{K_{\alpha}K_{\beta}}(t)$ are expansion coefficients in \begin{eqnarray} \label{2.7} |K^0 (t)\rangle &=& \mbox{$P_{K^0 K^0}(t)$} |K^0 \rangle + \mbox{$P_{\bar{K^0} K^0}(t)$} |\bar{K^0}\rangle \nonumber \\ |\bar{K^0}(t) \rangle &=& \mbox{$P_{\bar{K^0}\bar{ K^0}}(t)$} |\bar{K^0} \rangle + \mbox{$P_{K^0 \bar{K^0}}(t)$} |K^0\rangle \end{eqnarray} which according to the orthogonality of $K^0$ and $\bar{K^0}$ and in agreement with the first equation in (\ref{2.6}) are identical to $\langle K_{\alpha}|K_{\beta}(t)\rangle$ for $K_{\alpha}=K^0, \, \bar{K^0}$. Since the quantum mechanical principle $|A(t)\rangle =\exp (-iHt)|A\rangle$ is valid for any state $|A\rangle$ we can use eqs.(\ref{2.1})-(\ref{2.3}) and eq.(\ref{2.6}) to derive the following time dependence of $K_S$ and $K_L$ \begin{eqnarray} \label{2.8} \mbox{$|K_S(t)\rangle$} = p\left[\mbox{$P_{K^0 K^0}(t)$} \mbox{$|K^0 \rangle$} + \mbox{$P_{\bar{K^0} K^0}(t)$} \mbox{$|\bar{K^0}\rangle$} \right] + q\left[\mbox{$P_{\bar{K^0}\bar{ K^0}}(t)$} \mbox{$|\bar{K^0}\rangle$} + \mbox{$P_{K^0 \bar{K^0}}(t)$} \mbox{$|K^0 \rangle$}\right] \nonumber \\ \mbox{$|K_L(t)\rangle$} = p\left[\mbox{$P_{K^0 K^0}(t)$} \mbox{$|K^0 \rangle$} + \mbox{$P_{\bar{K^0} K^0}(t)$} \mbox{$|\bar{K^0}\rangle$} \right] - q\left[\mbox{$P_{\bar{K^0}\bar{ K^0}}(t)$} \mbox{$|\bar{K^0}\rangle$} + \mbox{$P_{K^0 \bar{K^0}}(t)$} \mbox{$|K^0 \rangle$} \right] \nonumber \\ \end{eqnarray} Note that in this section we are keeping all formulae as general as possible, in accordance with the general principles of Quantum Mechanics. 
In analogy to eq.(\ref{2.7}) and again in full generality we can also define expansion coefficients $\mbox{$P_{K_S K_S}(t)$}$, $\mbox{$P_{K_L K_L}(t)$}$, $\mbox{$P_{K_L K_S}(t)$}$ and $\mbox{$P_{K_S K_L}(t)$}$ through \begin{eqnarray} \label{2.9} \mbox{$|K_S(t)\rangle$} = \mbox{$P_{K_S K_S}(t)$} \mbox{$|K_S\rangle$} + \mbox{$P_{K_L K_S}(t)$} \mbox{$|K_L\rangle$} \nonumber \\ \mbox{$|K_L(t)\rangle$} = \mbox{$P_{K_L K_L}(t)$} \mbox{$|K_L\rangle$} + \mbox{$P_{K_S K_L}(t)$} \mbox{$|K_S\rangle$} \end{eqnarray} Clearly the time dependent functions $\mbox{$P_{K_S K_L}(t)$}$ and $\mbox{$P_{K_L K_S}(t)$}$, absent in the WW approximation, would be, unless identical to zero, responsible for vacuum regeneration of $K_S \leftrightarrow K_L$. Using already the following CPT-constraint (being at same time a quite model-indendent test for CPT conservation \cite{kha3}) \begin{equation} \label{2.10} \mbox{$P_{K^0 K^0}(t)$} = \mbox{$P_{\bar{K^0}\bar{ K^0}}(t)$} \end{equation} the $\mbox{$P_{K_S K_S}(t)$}$ etc can be easily obtained from (\ref{2.8}) by using the inverse transformation of eq.(\ref{2.1}). The result is \begin{eqnarray} \label{2.11} &\mbox{$P_{K_S K_S}(t)$} - \mbox{$P_{K_L K_L}(t)$} ={q \over p} \mbox{$P_{K^0 \bar{K^0}}(t)$} + {p \over q} \mbox{$P_{\bar{K^0} K^0}(t)$} \nonumber \\ &\mbox{$P_{K_S K_S}(t)$} + \mbox{$P_{K_L K_L}(t)$} = \mbox{$P_{K^0 K^0}(t)$} + \mbox{$P_{\bar{K^0}\bar{ K^0}}(t)$} = 2 \mbox{$P_{K^0 K^0}(t)$} \nonumber \\ &\mbox{$P_{K_L K_S}(t)$} = -\mbox{$P_{K_S K_L}(t)$} ={1 \over 2}\left\{{q \over p} \mbox{$P_{K^0 \bar{K^0}}(t)$} -{p \over q}\mbox{$P_{\bar{K^0} K^0}(t)$} \right\} \end{eqnarray} Trivially eqs.(\ref{2.9}) imply a relation between the expansion coefficients $\mbox{$P_{K_S K_S}(t)$}$ etc and the corresponding matrix elements $\mbox{$\langle K_S \kst$}$ etc. 
\begin{eqnarray} \label{2.12} \mbox{$\langle K_S \kst$} = \mbox{$P_{K_S K_S}(t)$} + \mbox{$P_{K_L K_S}(t)$} \mbox{$\Delta_K$} \nonumber \\ \mbox{$\langle K_S \klt$} = \mbox{$P_{K_S K_S}(t)$} \mbox{$\Delta_K$} + \mbox{$P_{K_L K_S}(t)$} \nonumber \\ \mbox{$\langle K_L \klt$} = \mbox{$P_{K_L K_L}(t)$} - \mbox{$P_{K_L K_S}(t)$} \mbox{$\Delta_K$} \nonumber \\ \mbox{$\langle K_L \kst$} = \mbox{$P_{K_L K_L}(t)$} \mbox{$\Delta_K$} - \mbox{$P_{K_L K_S}(t)$} \end{eqnarray} This explicitly displays the above mentioned difference between the $K^0$, $\bar{K^0}$ and the $K_S$, $K_L$ cases. The matrix element e.g. $\mbox{$\langle K_L \kst$}$ is not equal to the corresponding coefficient $\mbox{$P_{K_L K_S}(t)$}$. Only if {\it both}, $\mbox{$P_{K_L K_S}(t)$}=-\mbox{$P_{K_S K_L}(t)$}=0$ {\it and} $\mbox{$\Delta_K$} =0$, are imposed is this equality guaranteed. Hence this property, $\mbox{$\langle K_L \kst$} \neq \mbox{$P_{K_L K_S}(t)$}$, has nothing to do with the generality of our formulae, but in general with the fact that $\mbox{$\Delta_K$} \neq 0$. Let us now come to the main point of the paper. The question which will be addressed in the next sections is whether \begin{equation} \label{2.13} \mbox{$P_{K_L K_S}(t)$} = -\mbox{$P_{K_S K_L}(t)$} =0 \, \, \, {\rm or} \, \, \, \neq 0 \end{equation} As discussed in the introduction Khalfin has proved \cite{kha2}, \cite{kha3} (confirmed in \cite{suder3}) that indeed the second possibilty must be true unless there is no CP-violation in the mixing, i.e. $\mbox{$\Delta_K$} =0$. We will describe Khalfin's result in the next section. 
Before doing so let us state explicitly that in the WW approximation we have $\mbox{$P_{K_L K_S}(t)$} = -\mbox{$P_{K_S K_L}(t)$} =0$ and that the $K_S$ and $K_L$ have the simple time evolution \begin{eqnarray} \label{2.14} \mbox{$P_{K_S K_S}(t)$} |_{{}_{WW}}= e^{-i m_{{}_{S}} t}e^{-{1 \over 2} \Gamma_{{}_{S}} t} \nonumber \\ \mbox{$P_{K_L K_L}(t)$} |_{{}_{WW}}= e^{-i m_{{}_{L}} t}e^{-{1 \over 2}\Gamma_{{}_{L}} t} \end{eqnarray} as would have been expected for physical, unstable particle states (which do not mix). As discussed above even in the WW approximation we have \begin{equation} \label{2.15} \mbox{$\langle K_L \kst$} |_{{}_{WW}} \neq 0, \, \, \, \mbox{$\langle K_S \klt$} |_{{}_{WW}} \neq 0 \end{equation} It is also useful to derive two further relations which will be the cornerstones of the discussion in the next sections. The first one follows immediately from eq.(\ref{2.12}) and reads \begin{equation} \label{2.16} \mbox{$\langle K_S \klt$} + \mbox{$\langle K_L \kst$} = \mbox{$\Delta_K$} \left[ \mbox{$\langle K_L \klt$} + \mbox{$\langle K_S \kst$} \right] \end{equation} This expression will lead in the next section to a relation between the spectral density functions $\rho_{{}_{S}}$ and $\rho_{{}_{L}}$. This in turn will yield a couple of consistency equation when the spectral functions are approximated by a one-pole ansatz. To obtain the second relation we have to essentially invert the formulae (\ref{2.11}) and express the $\mbox{$P_{K^0 K^0}(t)$}$ etc matrix elements through the expansion coeffients $\mbox{$P_{K_S K_S}(t)$}$ etc. 
\begin{eqnarray} \label{2.17} &\mbox{$P_{K^0 \bar{K^0}}(t)$} ={p \over q}\left\{{1 \over 2} \left[\mbox{$P_{K_S K_S}(t)$} -\mbox{$P_{K_L K_L}(t)$} \right] + \mbox{$P_{K_L K_S}(t)$} \right\} \\ \label{2.18} &\mbox{$P_{\bar{K^0} K^0}(t)$} ={q \over p}\left\{{1 \over 2} \left[\mbox{$P_{K_S K_S}(t)$} -\mbox{$P_{K_L K_L}(t)$} \right] - \mbox{$P_{K_L K_S}(t)$} \right\} \\ \label{2.19} &\mbox{$P_{K^0 K^0}(t)$} =\mbox{$P_{\bar{K^0}\bar{ K^0}}(t)$} = {1 \over 2}\left[\mbox{$P_{K_S K_S}(t)$} + \mbox{$P_{K_L K_L}(t)$} \right] \end{eqnarray} Setting therein $\mbox{$P_{K_L K_S}(t)$}=0$ we get \begin{equation} \label{2.20} {\mbox{$P_{K^0 \bar{K^0}}(t)$} \over \mbox{$P_{\bar{K^0} K^0}(t)$}} = {p^2 \over q^2}={\rm const} \end{equation} This last equation will, when rewritten in the spectral language, lead to $\mbox{$\Delta_K$}=0$. Hence the conclusion of Khalfin that $\mbox{$P_{K_S K_L}(t)$} \neq 0$. \setcounter{equation}{0} \section { Spectral Formulation} What we called spectral formalism for unstable quantum mechanical systems is based on two observations. The first one is simply the completeness of the eigenvectors $|q\rangle$ of a hermitian quantum mechanical Hamiltonian. We can then write an unstable state $|\lambda , t\rangle$ (which is never an eigenstate of the Hamiltonian) as \begin{equation} \label{3.1} |\lambda , t\rangle =\sum_q |q,t\rangle \langle q|\lambda \rangle \end{equation} The second observation is the reasonable assumption that the unstable state has only projections on continuum states in which it decays. 
Denoting from now on the continous eigenvalue of a Hamiltonian by $m$ we can write the survival amplitude $A(t)$ (or, as in case of $K^0 \leftrightarrow \bar{K^0}$ oscillations, transition amplitude) as \begin{equation} \label{3.2} A(t) = \int_{{\rm Spec}(H)}dm \, e^{-imt}\rho (m) \end{equation} where the integration extends over the whole spectrum of the Hamiltonian and $\rho (m)$ is \begin{equation} \label{3.3} \rho (m) =|\langle m|\lambda \rangle |^2 \end{equation} Of course the spectrum of any sensible Hamiltonian should be bounded from below. The ground state (vacuum) can be then normalized to have zero energy eigenvalue. The integration range in (\ref{3.2}) is in this case from $0$ to $\infty$. Despite this cut-off in the integral (\ref{3.2}) imposed on us by physical requirements we stress that $A(t)$ and $\rho (m)$ are still Fourier-transforms of each other. This is guaranteed by the Dirichlet-Jordan (see e.g. \cite{fourier}) conditions for Fourier integrals which under certain conditions (which we assume here to be fullfiled) allow us to introduce a finite number of discontinuities in the Fourier integrals. At the discontinuous points the result of the Fourier transform will be $1/2[f(x+0)+f(x-0)]$ and simply $f(x)$ otherwise. \footnote{The other above mentioned conditions are (a) piecewise continuity (except at isolated points), (b) bounded total variation and (c) $\int_{-\infty}^{\infty}dt|A(t)|<\infty$. It is then sufficient to define $\rho(m)\neq 0$ for $m \ge 0$ and $\rho(m)=0$ for $ m <0$. 
The absolute integrability is obvious.} With the following Breit-Wigner ansatz (see \cite{bohm}) \begin{equation} \label{3.4} \rho_{{}_{BW}}(m) = {\Gamma \over 2 \pi}\,{1 \over (m-m_0)^2 + {\Gamma^2 \over 4}} \end{equation} we obtain then for the survival amplitude \begin{equation} \label{3.5} A_{{}_{BW}}(t) = \int^{\infty}_{-\infty} dm \, e^{-imt}\rho_{{}_{BW}}(m) = e^{-im_0 t}\,e^{-{1 \over 2}\Gamma t},\, \, \, \, t \ge 0 \end{equation} which gives for the survival probability the well known exponential decay law, $P_{{}_{BW}}(t)=|A(t)|^2=\exp (-\Gamma t)$. Despite of what has been said about the integration range above we have integrated in (\ref{3.5}) over $(-\infty,\infty)$ for reasons which will be evident in section 5. There it will become apparent that taking the integral from $-\infty$ to $\infty$ is in some sense equivalent to neglecting terms of order $\Gamma /M$ (where $M$ is the mass). The existence of a ground state in ${\rm Spec}(H)$ indroduces non-exponential corrections (and non-oscillatory terms in $\mbox{$P_{K^0 K^0}(t)$}$ etc.) which, however, using the simple ansatz (\ref{3.4}) cannot be trusted \cite{bohm}. We will discuss this ansatz further in section 4. We can now apply the above formalism to the case of $K_S$ and $K_L$ by introducing a hermitian Hamiltonian with, as before, continuous spectrum of the decay products which we label by indices $\alpha, \beta$ etc. \begin{equation} \label{3.6} H |\phi_{\alpha}\rangle = m |\phi_{\alpha} \rangle , \, \, \, \langle \phi_{\beta}(m')|\phi_{\alpha}(m)\rangle =\delta_{\alpha \beta} \delta (m' - m) \end{equation} The unstable states $K_S$ and $K_L$ are then written in accordance with (\ref{3.1}) as superpositions of the eigenkets. 
\begin{eqnarray} \label{3.7} \mbox{$|K_S\rangle$} = \int_0^{\infty} dm \, \sum_{\alpha}\rho_{{}_{S,\alpha}}(m)|\phi_{\alpha} \rangle \nonumber \\ \mbox{$|K_L\rangle$} = \int_0^{\infty} dm \, \sum_{\beta}\rho_{{}_{L,\beta}}(m)|\phi_{\beta} \rangle \end{eqnarray} Note that this can be done for any unstable state. Therefore, strictly speaking, equations (\ref{3.7}) are as such not the definitions of $\mbox{$|K_S\rangle$}$ and $\mbox{$|K_L\rangle$}$. The latter are still defined as linear superposition of $|K^0 \rangle$ and $|\bar{K^0}\rangle$ in eq.(\ref{2.1}). In what follows we convert the general formulae of section 2 into the langauge of spectral functions $\rho (m)$. To do so we first write down the matrix elements from eq.(\ref{2.12}). Using (\ref{3.6}) and (\ref{3.7}) they are given by \begin{eqnarray} \label{3.8} & \mbox{$\langle K_S \kst$} = \int_0^{\infty} dm\, \sum_{\alpha} |\rho_{{}_{S,\alpha}}(m)|^2 e^{-imt} \nonumber \\ & \mbox{$\langle K_L \klt$} = \int_0^{\infty} dm\, \sum_{\beta} |\rho_{{}_{L,\beta}}(m)|^2 e^{-imt} \nonumber \\ & \mbox{$\langle K_S \klt$} = \int_0^{\infty} dm\, \sum_{\gamma} \rho_{{}_{S,\gamma}}^* (m)\rho_{{}_{L,\gamma}}(m) e^{-imt} \nonumber \\ & \mbox{$\langle K_L \kst$} = \int_0^{\infty} dm\, \sum_{\sigma} \rho_{{}_{L,\sigma}}^*(m)\rho_{{}_{S,\sigma}}(m) e^{-imt} \end{eqnarray} Eq.(\ref{2.16}) can be then recast in the following form \begin{eqnarray} \label{3.9} \int_0^{\infty}dm \, \sum_{\alpha} \left[\rho_{{}_{L,\alpha}}^*(m)\rho_{{}_{S,\alpha}}(m) + \rho_{{}_{S,\alpha}}^*(m)\rho_{{}_{L,\alpha}}(m) \right]e^{-imt} \nonumber \\ = \mbox{$\Delta_K$} \int_0^{\infty}dm\, \sum_{\beta} \left[|\rho_{{}_{L,\beta}}(m)|^2 + |\rho_{{}_{L,\beta}}(m)|^2 \right]e^{-imt} \end{eqnarray} Taking the inverse Fourier transform of (\ref{3.9}) we arrive at \begin{equation} \label{3.10} \sum_{\alpha} \left[\rho_{{}_{L,\alpha}}^*(m)\rho_{{}_{S,\alpha}}(m) + \rho_{{}_{S,\alpha}}^*(m)\rho_{{}_{L,\alpha}}(m) \right] \nonumber = \mbox{$\Delta_K$} 
\sum_{\beta} \left[|\rho_{{}_{L,\beta}}(m)|^2 + |\rho_{{}_{L,\beta}}(m)|^2 \right] \end{equation} which is valid for $m \in (0, \infty )$. This equation is one of Khalfin's main results \cite{kha1} and will play an important role in the subsequent discussion. It tells us that the spectral functions $\rho_{{}_{S,\alpha}}$ and $\rho_{{}_{L,\alpha}}$ are inter-related with each other and any reasonable ansatz which approximates these functions should be such that eq.(\ref{3.10}) is true at least to certain accuracy. Indeed an ansatz for $\rho_{{}_{S,\alpha}}$ and $\rho_{{}_{L,\alpha}}$ similar to (\ref{3.4}) does not fulfill this requirements in full generality and in section 4 we address this question in more detail. Note also that since eq.(\ref{3.10}) is an equation in the variable $m$ we might expect that given a certain ansatz for the spectral functions we get more than one consistency equations from it. To obtain the second main result of Khalfin \cite{kha2}, \cite{kha3} it is necessary to derive corresponding spectral expression for $\mbox{$P_{K^0 K^0}(t)$}$ etc. 
From (\ref{2.12}), (\ref{2.17})-(\ref{2.19}), (\ref{2.16}) (alternatively (\ref{3.10})) and (\ref{3.8})we see that \begin{eqnarray} \label{3.11} \mbox{$P_{K^0 K^0}(t)$} &=& \mbox{$P_{\bar{K^0}\bar{ K^0}}(t)$} =\int_0^{\infty}dm \, \rho_{{}_{K^0 K^0}}(m) e^{-imt} \nonumber \\ \quad \quad &=& {1 \over 2}\int_0^{\infty}\sum_{\alpha}\biggl\{|\rho_{{}_{S,\alpha}}(m)|^2 + |\rho_{{}_{L,\alpha}}(m)|^2 \biggr\} e^{-imt} \\ \label{3.12} \mbox{$P_{K^0 \bar{K^0}}(t)$} &=& \int_0^{\infty} dm \,\rho_{{}_{K^0 \bar{K^0}}}(m) e^{-imt} = {1 \over 4p^* q}\int_0^{\infty}dm \, \sum_{\beta}\biggl\{|\rho_{{}_{S,\beta} }(m)|^2 - |\rho_{{}_{L,\beta}}(m)|^2 \nonumber \\ & - & \rho_{{}_{S,\beta}}^*(m) \rho_{{}_{L,\beta}}(m) + \rho_{{}_{L,\beta}}^*(m)\rho_{{}_{S,\beta}} (m)\biggr\}e^{-imt} \\ \label{3.13} \mbox{$P_{\bar{K^0} K^0}(t)$} & =& \int_0^{\infty} dm \,\rho_{{}_{\bar{K^0} K^0}}(m) e^{-imt} = {1 \over 4p q^*}\int_0^{\infty}dm \, \sum_{\sigma}\biggl\{| \rho_{{}_{S,\sigma}}(m)|^2 - |\rho_{{}_{L,\sigma}}(m)|^2 \nonumber \\ & + & \rho^*_{{}_{S,\sigma}}(m) \rho_{{}_{L,\sigma}}(m) - \rho^*_{{}_{L,\sigma}}(m) \rho_{{}_{S,\sigma}} (m)\biggr\}e^{-imt} \end{eqnarray} Here $\rho_{{}_{K^0 K^0}}(m)$ etc. are simply defined by the right hand sides of the corresponding equations. 
As done at the end of the forgoing section if we now set $\mbox{$P_{K_L K_S}(t)$} =-\mbox{$P_{K_S K_L}(t)$} =0$ we obtain the spectral version of (\ref{2.20}) \begin{equation} \label{3.14} \int_0^{\infty}dm \, \rho_{{}_{K^0\bar{K^0}}}(m)e^{-imt} = {p^2 \over q^2}\int_0^{\infty}dm \, \rho_{{}_{\bar{K^0}K^0}}(m)e^{-imt} \end{equation} By observing from (\ref{3.12}) and (\ref{3.13}) that $\rho_{{}_{K^0\bar{K^0}}} =\rho^*_{{}_{\bar{K^0}K^0}}$ and taking again the inverse Fourier transform in (\ref{3.14}) we get \begin{equation} \label{3.15} {p^2 \over q^2} = {\rho_{{}_{K^0\bar{K^0}}} \over \rho^*_{{}_{K^0 \bar{K^0}}}} \end{equation} This, however, immediately leads to \begin{equation} \label{3.16} \mbox{$\Delta_K$} = |p|^2 - |q|^2=0 \end{equation} Hence Khalfin's second result states that putting $\mbox{$P_{K_L K_S}(t)$} =-\mbox{$P_{K_S K_L}(t)$}$ to zero invariably implies that on consistency grounds there can be no CP-violation in the mixing provided the $K_S$ and $K_L$ states are defined as in eqs.(\ref{2.1}). In other words since we know that CP-violation exists in the mixing of $K^0-\bar{K^0}$ we have to allow for {\it vacuum} regeneration of $K_S$ and $K_L$. Note that this conclusion does not depend on a particular choice of $\rho_{{}_{S,\alpha}}$ and $\rho_{{}_{L,\alpha}}$. This is quite an astounding and unexpected result which, using a completely different approach, has also been recently confirmed \cite{suder3}. It is not easy to give an interpretation of this result. Either we accept (\ref{2.1}) and the fact that the non-orthogonality of $K_S$ and $K_L$ makes this system different from any other (recall our discussion of this peculiarity in the introduction) known system (except for similar system with the same properties like $B^0-\bar{B^0}$ or $D^0-\bar{D^0}$) or we can suspect that (\ref{2.1}) is not the complete relation \cite{suder2}. The confirmation of the above result by Chiu and Sudershan \cite{suder3} shows that this is result is indeed reliable. 
We emphasize this because of its rather `exotic' implications. It is also worthwhile noting that the above result has been derived within the context of standard Quantum Mechanics and that CPT-symmetry has been implemented. Suggested tests of CPT and Quantum Mechanics based on terms which are in general forbidden by CPT or QM are then not affected by this result provided the chosen observables assume zero values in the limit of CPT conservation or in the context of QM. Any other tests which rely on standard WW expressions might, however, be affected. This is true regardless of the size of this new effect and more importantly this effect has nothing to do with deviations of the exponential decay law for very small and very large time. The latter will become manifest in the formulae for time evolution in section 5. It is nevertheless mandatory to try to estimate the size of this effect. A first step in this direction will be to make an ansatz for the spectral functions $\rho_{{}_{S,\alpha}}$ and $\rho_{{}_{L,\alpha}}$ and to check the consistency of this ansatz. Therefore we collect below all available expressions which can shed some light on the spectral functions. From (\ref{2.1})-(\ref{2.3}) we get \begin{eqnarray} \displaystyle \label{3.17} & \int_0^{\infty}dm \, \sum_{\alpha} |\rho_{{}_{S,\alpha}}(m)|^2 = \int_0^{\infty}dm \, \sum_{\beta} |\rho_{{}_{L,\beta}}(m)|^2=1 \\ \label{3.18} & \int_0^{\infty}dm \, \sum_{\sigma}\Im m \left(\rho^*_{{}_{S,\sigma}}(m) \rho_{{}_{L,\sigma}}(m)\right)=0 \\ \label{3.19} & \int_0^{\infty}dm \, \sum_{\gamma} \Re e \left(\rho^*_{{}_{S,\gamma}}(m) \rho_{{}_{L,\gamma}}(m)\right)=\mbox{$\Delta_K$} \end{eqnarray} Eqs.(\ref{3.18}) and (\ref{3.19}) follow from (\ref{2.3}) and the fact that $\mbox{$\Delta_K$}$ is real. Together with (\ref{3.10}) these equations is all the information on spectral functions $\rho_{{}_{S,\alpha}}$ and $\rho_{{}_{L,\alpha}}$ which is given to us. 
Any ansatz for the spectral functions has to respect these relations, up to a reasonable accuracy. We mention already at this point that Khalfin in his estimate (see also \cite{suder3} where Khalfin's results and estimate are discussed) used essentially only eq.(\ref{3.17}). We also point out that once eqs.(\ref{3.10}) and (\ref{3.17}) are assumed to hold, eq.(\ref{3.19}) follows. \setcounter{equation}{0} \section{ One-Pole Approximation and its Consistency} We have seen that the Breit-Wigner ansatz led to the well known exponential decay law (up to corrections induced by the existence of the ground state). It is therefore reasonable to assume a similar form for the $\rho_{{}_{S,\alpha}}$ and $\rho_{{}_{L,\alpha}}$. More specifically we write \begin{eqnarray} \label{4.1} \rho_{{}_{S,\beta}}(m)=\sqrt{{\Gamma_{{}_{S}} \over 2 \pi}}\, {A_{{}_{S,\beta}}(K_S \to \beta) \over m - \mbox{$m_{{}_{S}}$} +i{\Gamma_{{}_{S}} \over 2}} \nonumber \\ \rho_{{}_{L,\beta}}(m)=\sqrt{{\Gamma_{{}_{L}} \over 2 \pi}}\, {A_{{}_{L,\beta}}(K_L \to \beta) \over m - \mbox{$m_{{}_{L}}$} +i{\Gamma_{{}_{L}} \over 2}} \end{eqnarray} where $A_{{}_{S,\alpha}}$ and $A_{{}_{L,\alpha}}$ are decay amplitudes. It is convenient to make the following definitions \begin{eqnarray} \label{4.2} & \mbox{$\gamma_{{}_{S}}$} \equiv {\Gamma_{{}_{S}} \over 2}, \, \, \, \mbox{$\gamma_{{}_{L}}$} \equiv {\Gamma_{{}_{L}} \over 2} , \, \, \, \mbox{$\Delta m$} \equiv \mbox{$m_{{}_{S}}$} - \mbox{$m_{{}_{L}}$} \\ \label{4.3} & S \equiv \sum_{\alpha} |A_{{}_{S,\alpha}}|^2, \, \, \, L \equiv \sum_{\alpha} |A_{{}_{L,\alpha}}|^2 \\ \label{4.4} & R \equiv \sum_{\sigma} \Re e \left(A^*_{{}_{S,\sigma}}A_{{}_{L,\sigma}}\right) , \, \, \, I \equiv \sum_{\sigma} \Im m \left(A^*_{{}_{S,\sigma}}A_{{}_{L,\sigma}}\right) \end{eqnarray} The quantities (\ref{4.3}) and (\ref{4.4}) are the only a priori unknown variables which, with the spectral functions given by (\ref{4.1}), will enter e.g. equations like (\ref{3.11})-(\ref{3.13}).
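As a quick plausibility check of the normalization of (\ref{4.1}): inserting it into (\ref{3.17}) leads to the integral $\int_0^{\infty}dm \,(\gamma /\pi )/[(m-m_0)^2+\gamma^2]=1/2+\arctan (m_0/\gamma )/\pi$, which deviates from unity only by a term of order $\gamma /m_0$ coming from the ground state at $m=0$. A minimal numerical sketch (Python; the parameter values are illustrative toy numbers, not the physical kaon ones):

```python
import math

# Closed form of I = ∫_0^∞ (γ/π) dm / ((m - m0)^2 + γ^2):
# the Breit-Wigner normalization integral with the ground-state cutoff at m = 0
def bw_norm(gamma, m0):
    return 0.5 + math.atan(m0 / gamma) / math.pi

gamma, m0 = 1e-3, 1.0              # toy values with γ/m0 << 1
deficit = 1.0 - bw_norm(gamma, m0)
# the deficit equals γ/(π m0) up to O((γ/m0)^3) corrections
assert abs(deficit - gamma / (math.pi * m0)) < (gamma / m0) ** 3
```

Normalizing via (\ref{3.17}) then gives $S=1/(1-{\rm deficit})=1+\gamma /(\pi m_0)+{\cal O}((\gamma /m_0)^2)$, which is precisely the origin of the ${\cal O}(\gamma /m)$ terms kept below.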
As already mentioned at the end of the last section we have to insert (\ref{4.1}) into the expressions (\ref{3.10}) and (\ref{3.17})-(\ref{3.19}) to examine the consistency of the one-pole approximation (\ref{4.1}). We start with eq.(\ref{3.17}) where the integral can be easily performed. The result is \begin{equation} \label{4.5} S= 1+ {\mbox{$\gamma_{{}_{S}}$} \over \pi \mbox{$m_{{}_{S}}$}}+ {\cal O}((\mbox{$\gamma_{{}_{S}}$} / \mbox{$m_{{}_{S}}$})^2), \, \, \, L= 1+ {\mbox{$\gamma_{{}_{L}}$} \over \pi \mbox{$m_{{}_{L}}$}}+ {\cal O}((\mbox{$\gamma_{{}_{L}}$} / \mbox{$m_{{}_{L}}$})^2) \end{equation} For reasons explained below we will keep, up to a certain point, terms of order $\Gamma_{{}_{X}}/m_{{}_{X}}$. Since (\ref{3.10}) contains the variable $m$, plugging the one-pole approximation (\ref{4.1}) into (\ref{3.10}) we get a polynomial of degree two in the variable $m$ which should vanish identically. Therefore the coefficient of each power of $m$ must also vanish. Instead of one equation we have three consistency equations.
\begin{eqnarray} \label{4.6} & m^2 \left[2\sqrt{\mbox{$\gamma_{{}_{S}}$} \mbox{$\gamma_{{}_{L}}$}} \cdot R - \mbox{$\Delta_K$} (\mbox{$\gamma_{{}_{S}}$} \cdot S + \mbox{$\gamma_{{}_{L}}$} \cdot L) \right]=0 \nonumber \\ & m \left[-2\sqrt{\mbox{$\gamma_{{}_{S}}$} \mbox{$\gamma_{{}_{L}}$}}(\mbox{$m_{{}_{L}}$} + \mbox{$m_{{}_{S}}$}) \cdot R - 2\sqrt{\mbox{$\gamma_{{}_{S}}$} \mbox{$\gamma_{{}_{L}}$}}(\mbox{$\gamma_{{}_{S}}$} -\mbox{$\gamma_{{}_{L}}$}) \cdot I + 2\mbox{$\Delta_K$} (\mbox{$\gamma_{{}_{S}}$} \mbox{$m_{{}_{L}}$} \cdot S + \mbox{$\gamma_{{}_{L}}$} \mbox{$m_{{}_{S}}$} \cdot L) \right] =0 \nonumber \\ & \delta_{SL} \equiv \mbox{$\Delta_K$} \left[\mbox{$\gamma_{{}_{S}}$} \cdot S(\mbox{$m_{{}_{L}}$}^2 + \mbox{$\gamma_{{}_{L}}$}^2 ) +\mbox{$\gamma_{{}_{L}}$} \cdot L (\mbox{$m_{{}_{S}}$}^2 + \mbox{$\gamma_{{}_{S}}$}^2 )\right] - 2\sqrt{\mbox{$\gamma_{{}_{S}}$} \mbox{$\gamma_{{}_{L}}$}} (\mbox{$\gamma_{{}_{S}}$} \mbox{$\gamma_{{}_{L}}$} + \mbox{$m_{{}_{S}}$} \mbox{$m_{{}_{L}}$} )\cdot R \nonumber \\ & +2 \sqrt{\mbox{$\gamma_{{}_{S}}$} \mbox{$\gamma_{{}_{L}}$}} (\mbox{$\gamma_{{}_{L}}$} \mbox{$m_{{}_{S}}$} - \mbox{$\gamma_{{}_{S}}$} \mbox{$m_{{}_{L}}$} )\cdot I =0 \end{eqnarray} {}From the first two we easily get \begin{eqnarray} \label{4.7} & R & = {\mbox{$\Delta_K$} \over 2 \sqrt{\mbox{$\gamma_{{}_{S}}$} \mbox{$\gamma_{{}_{L}}$}}} \left(\mbox{$\gamma_{{}_{S}}$} \cdot S + \mbox{$\gamma_{{}_{L}}$} \cdot L \right) \\ \label{4.8} & I & = {\mbox{$\Delta_K$} \over 2\sqrt{\mbox{$\gamma_{{}_{S}}$} \mbox{$\gamma_{{}_{L}}$}}}\, {\mbox{$\Delta m$} \over \mbox{$\gamma_{{}_{L}}$} -\mbox{$\gamma_{{}_{S}}$}}\left (\mbox{$\gamma_{{}_{S}}$} \cdot S -\mbox{$\gamma_{{}_{L}}$} \cdot L \right) \end{eqnarray} whereas the last condition in (\ref{4.6}) needs a more detailed treatment. The reason why we did not neglect till now terms of order $\Gamma_{{}_{X}}/m_{{}_{X}}$ is now apparent. 
Namely, in zeroth order of $\Gamma_{{}_{X}}/m_{{}_{X}}$ we obtain \begin{equation} \label{4.9} \delta_{SL}|_{{}_{S=L=1}}=0 \end{equation} Hence to estimate how badly $\delta_{SL}$ deviates from zero it is necessary to include the next order of $\Gamma_{{}_{X}}/m_{{}_{X}}$. In this order using (\ref{4.5}) $\delta_{SL}$ reads \begin{eqnarray} \label{4.10} & \delta_{SL}= {\mbox{$\Delta_K$} \over \pi}\left({\mbox{$\gamma_{{}_{L}}$} \over \mbox{$m_{{}_{L}}$}}\right){\mbox{$\gamma_{{}_{S}}$} \over \mbox{$\gamma_{{}_{S}}$} - \mbox{$\gamma_{{}_{L}}$}} \left[(\mbox{$\gamma_{{}_{L}}$} - \mbox{$\gamma_{{}_{S}}$} )^2 + \mbox{$\Delta m$}^2 \right]\left[(\mbox{$\gamma_{{}_{S}}$} -\mbox{$\gamma_{{}_{L}}$} )- \mbox{$\gamma_{{}_{S}}$} {\mbox{$\Delta m$} \over \mbox{$m_{{}_{L}}$}}\right] \nonumber \\ & \sim {\mbox{$\Delta_K$} \over \pi} \mbox{$\Delta m$}^3 \left({\mbox{$\gamma_{{}_{L}}$} \over \mbox{$m_{{}_{L}}$} }\right) \end{eqnarray} For the order of magnitude estimate in (\ref{4.10}) we have used $\Gamma_{{}_{S}}/\mbox{$\Delta m$} \sim {\cal O}(1)$. Strictly speaking this amounts to saying that the ansatz (\ref{4.1}) is not consistent. Note, however, the following. The smallest mass scale parameter which appears in calculations involving the $K^0-\bar{K^0}$ system is $\mbox{$\Delta m$}$. $\delta_{SL}$ in (\ref{4.6}) has the canonical dimension 3. What eq.(\ref{4.10}) then tells us is that as compared to the third power of the smallest mass scale $\delta_{SL}$ is zero, up to corrections of order $\Gamma_{{}_{X}}/m_{{}_{X}}$. Therefore to this accuracy everything is consistent so far. Clearly, by assuming $\mbox{$\Delta_K$} = 0$ we obtain $R=I=0$. The reader will have noticed that in making estimates like in eq.(\ref{4.10}) we are relying on measured parameters of the $K^0-\bar{K^0}$ system. In order not to lose track of the main point we will not examine simultaneously the systems $B^0-\bar{B^0}$ and $D^0-\bar{D^0}$.
There the smallest mass scale parameter is not $\mbox{$\Delta m$}$ but the corresponding difference in the widths $\Delta \Gamma$. The investigation of the consistency of (\ref{4.1}) will be then slightly different in those systems. The general (hypothetical) case as well as cases of physical interest other than the $K^0-\bar{K^0}$ system will be treated elsewhere \cite{me}. Using only Khalfin's eq.(\ref{3.10}) and the normalization condition (\ref{3.17}) we have already pinned down the $S$, $L$, $R$ and $I$ in terms of known quantities like widths, masses and $\mbox{$\Delta_K$}$. The equations (\ref{3.10}) and (\ref{3.17})-(\ref{3.19}) represent therefore an over-determined system. In contrast to situations discussed at the end of this section this is equivalent to a consistency check. On account of the validity of eq.(\ref{3.10}), proved for terms up to $\Gamma_{{}_{X}}/m_{{}_{X}}$, eq.(\ref{3.19}) is bound to hold. We are therefore left with one more condition, namely (\ref{3.18}). We will discuss the calculation in connection with (\ref{3.18}) in some more detail since part of the steps will enter also the formulae of time evolution in section 5. The calculation will become more transparent by writing down explicitly the product $\sum_{\beta}\rho^*_{{}_{S,\beta}}(m)\rho_{{}_{L,\beta}}(m)$ with the spectral functions given by (\ref{4.1}).
\begin{eqnarray} \label{4.11} \sum_{\beta}\rho^*_{{}_{S,\beta}}(m)\rho_{{}_{L,\beta}}(m)|_{{}_{BW}} ={\sqrt{\mbox{$\gamma_{{}_{S}}$} \mbox{$\gamma_{{}_{L}}$}} \over \pi \left[(m-\mbox{$m_{{}_{S}}$})^2 + \mbox{$\gamma_{{}_{S}}$}^2 \right]\left[(m-\mbox{$m_{{}_{L}}$})^2 +\mbox{$\gamma_{{}_{L}}$}^2 \right]} \nonumber \\ \cdot \left\{(a_{{}_{R}}m^2+b_{{}_{R}}m + c_{{}_{R}}) +i(a_{{}_{I}} m^2+ b_{{}_{I}}m + c_{{}_{I}})\right\} \end{eqnarray} with \begin{eqnarray} \label{4.12} & a_{{}_{I}}=I, \, \, \, \, b_{{}_{I}}=\left(\mbox{$\gamma_{{}_{S}}$} -\mbox{$\gamma_{{}_{L}}$} \right) \cdot R - \left(\mbox{$m_{{}_{S}}$} + \mbox{$m_{{}_{L}}$} \right)\cdot I \nonumber \\ &c_{{}_{I}}= \left(\mbox{$\gamma_{{}_{L}}$} \mbox{$m_{{}_{S}}$} -\mbox{$\gamma_{{}_{S}}$} \mbox{$m_{{}_{L}}$} \right)\cdot R + \left(\mbox{$m_{{}_{L}}$} \mbox{$m_{{}_{S}}$} + \mbox{$\gamma_{{}_{S}}$} \mbox{$\gamma_{{}_{L}}$} \right) \cdot I \end{eqnarray} and similar expressions for $a_{{}_{R}}$, $b_{{}_{R}}$ and $c_{{}_{R}}$. Next an ansatz for the partial fraction decomposition \begin{equation} \label{4.13} {a_{{}_{I}} m^2+ b_{{}_{I}}m + c_{{}_{I}} \over \left[(m-\mbox{$m_{{}_{S}}$})^2 + \mbox{$\gamma_{{}_{S}}$}^2 \right] \left[(m-\mbox{$m_{{}_{L}}$})^2 +\mbox{$\gamma_{{}_{L}}$}^2 \right]}= {C_I m + D_I \over (m-\mbox{$m_{{}_{S}}$} )^2 +\mbox{$\gamma_{{}_{S}}$}^2} + {E_I m + F_I \over (m-\mbox{$m_{{}_{L}}$} )^2 +\mbox{$\gamma_{{}_{L}}$}^2} \end{equation} leads as usual to a linear system for the coefficients $C_I$, $D_I$, $E_I$ and $F_I$ \begin{eqnarray} \label{4.14} &E_I =-C_I \nonumber \\ & C_I \mbox{$\Delta m$} + D'_I + F'_I = a_{{}_{I}} \nonumber \\ & C_I \left[\left(\mbox{$m_{{}_{L}}$}^2 + \mbox{$\gamma_{{}_{L}}$}^2\right) - \left(\mbox{$m_{{}_{S}}$}^2 +\mbox{$\gamma_{{}_{S}}$}^2 \right) \right] -2D'_I \mbox{$m_{{}_{L}}$} -2F'_I \mbox{$m_{{}_{S}}$} =b_{{}_{I}} \nonumber \\ & D'_I \left(\mbox{$m_{{}_{L}}$}^2 + \mbox{$\gamma_{{}_{L}}$}^2 \right) + F'_I \left(\mbox{$m_{{}_{S}}$}^2 + \mbox{$\gamma_{{}_{S}}$}^2
\right) + C_I \left[\mbox{$m_{{}_{L}}$}\left(\mbox{$m_{{}_{S}}$}^2 +\mbox{$\gamma_{{}_{S}}$}^2 \right) -\mbox{$m_{{}_{S}}$} \left(\mbox{$m_{{}_{L}}$}^2 + \mbox{$\gamma_{{}_{L}}$}^2 \right) \right]=c_{{}_{I}} \nonumber \\ \end{eqnarray} where we have used the redefinitions \begin{equation} \label{4.15} D'_I \equiv D_I + C_I \mbox{$m_{{}_{S}}$} , \, \, \, F'_I \equiv F_I - C_I \mbox{$m_{{}_{L}}$} \end{equation} This system plays a double role in our discussion. It appears here as a middle step in the consistency check and is a necessary ingredient in the calculation of the time dependent transition amplitudes in the next section. Hence we feel that it is sufficiently important to give the explicit solution of this system in appendix A. To perform the integral in (\ref{3.18}) we need also \begin{eqnarray} \displaystyle \label{4.16} &\Lambda (R,I)\equiv \int_0^{\infty}dm \, {a_{{}_{I}} m^2+ b_{{}_{I}}m + c_{{}_{I}} \over \left[(m-\mbox{$m_{{}_{S}}$})^2 + \mbox{$\gamma_{{}_{S}}$}^2 \right] \left[(m-\mbox{$m_{{}_{L}}$})^2 +\mbox{$\gamma_{{}_{L}}$}^2 \right]}= \nonumber \\ & -C_I {\mbox{$\Delta m$} \over \mbox{$m_{{}_{L}}$}} + {D_I + C_I \mbox{$m_{{}_{S}}$} \over \mbox{$\gamma_{{}_{S}}$}} \left(\pi - {\mbox{$\gamma_{{}_{S}}$} \over \mbox{$m_{{}_{S}}$}}\right) + {F_I - C_I \mbox{$m_{{}_{L}}$} \over \mbox{$\gamma_{{}_{L}}$}} \left(\pi - {\mbox{$\gamma_{{}_{L}}$} \over \mbox{$m_{{}_{L}}$}}\right) \nonumber \\ & +{\cal O}((\Gamma_{{}_{X}}/m_{{}_{X}})^2) +{\cal O}((\mbox{$\Delta m$} /\mbox{$m_{{}_{L}}$} )^2) \end{eqnarray} such that the condition (\ref{3.18}) reduces to \begin{equation} \label{4.17} \Lambda (R,I)=0 \end{equation} Taking the solutions for $C_I$, $D'_I$ and $F'_I$ in terms of $R$ and $I$ (see appendix A) and inserting them into (\ref{4.17}) a lengthy calculation yields \begin{eqnarray} \label{4.18} &R \cdot \mbox{$\Delta m$} \left[\mbox{$\Delta m$}^2 +\left(\mbox{$\gamma_{{}_{S}}$} -\mbox{$\gamma_{{}_{L}}$} \right)^2 \right]\left[2\pi + {\mbox{$\gamma_{{}_{S}}$} +
\mbox{$\gamma_{{}_{L}}$} \over \mbox{$m_{{}_{L}}$}}\right] \nonumber \\ &+ I \cdot \left(\mbox{$\gamma_{{}_{S}}$} + \mbox{$\gamma_{{}_{L}}$} \right) \left[\mbox{$\Delta m$}^2 +\left(\mbox{$\gamma_{{}_{S}}$} - \mbox{$\gamma_{{}_{L}}$} \right)^2 \right] \left[2\pi - {\mbox{$\Delta m$} \over \mbox{$m_{{}_{L}}$}}\, {\mbox{$\Delta m$} \over \mbox{$\gamma_{{}_{S}}$} +\mbox{$\gamma_{{}_{L}}$}} \right]=0 \end{eqnarray} In performing this calculation it is not advisable to make overly strong approximations right from the beginning. This is due to some cancellations which can occur. It is now trivial to compare eq.(\ref{4.18}) with (\ref{4.7}) and (\ref{4.8}). In a simplified form eq.(\ref{4.18}) is \begin{equation} \label{4.19} {I \over R}\simeq -{\mbox{$\Delta m$} \over \mbox{$\gamma_{{}_{S}}$} +\mbox{$\gamma_{{}_{L}}$}} +{\cal O}(\Gamma_{{}_{X}}/m_{{}_{X}}) +{\cal O}(\mbox{$\Delta m$} /\mbox{$m_{{}_{L}}$} ) \end{equation} which agrees with (\ref{4.7}) and (\ref{4.8}) when taking the approximation $S=L \simeq 1$. \footnote{Yet a different way of displaying the consistency of (\ref{3.18}) is described in section 5 (see there eqs.(\ref{5.8})-(\ref{5.10})).} The obvious conclusion here is that the one-pole ansatz (\ref{4.1}) indeed passes the consistency check which has been imposed on us by a set of equations in section 3. This check revealed that (\ref{4.1}) is valid up to terms of order ${\cal O}(\Gamma_{{}_{X}}/m_{{}_{X}})$, ${\cal O}(\mbox{$\Delta m$} /\mbox{$m_{{}_{L}}$} )$. We emphasize that this is not a trivial check. To see this let us investigate the situation where we put by hand $\mbox{$\Delta_K$} =0$. In this case we would obtain a homogeneous linear system whose only solution is $R=I=0$. No information on the accuracy of (\ref{4.1}) would follow from this.
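The algebra behind this consistency check is easy to verify numerically. With $S=L=1$ and $R$, $I$ taken from (\ref{4.7}) and (\ref{4.8}), the coefficients of $m^2$, $m$ and $1$ in (\ref{4.6}) all vanish (the last one in accordance with (\ref{4.9})), and the ratio $I/R=-\mbox{$\Delta m$} /(\mbox{$\gamma_{{}_{S}}$} +\mbox{$\gamma_{{}_{L}}$})$ of (\ref{4.19}) comes out exactly. A sketch in Python (all parameter values are arbitrary toy numbers in units of $m_{{}_{L}}$, not the measured kaon parameters):

```python
import math

# toy parameters in units of m_L (illustrative only)
mL, dm = 1.0, 1e-4
mS = mL + dm
gS, gL, DK = 1e-4, 2e-5, 0.1      # γ_S, γ_L, Δ_K
S = L = 1.0                        # eq. (4.20)
root = math.sqrt(gS * gL)

R = DK * (gS * S + gL * L) / (2 * root)                    # eq. (4.7)
I = DK * dm * (gS * S - gL * L) / (2 * root * (gL - gS))   # eq. (4.8)

# coefficients of m^2, m^1 and m^0 in the consistency conditions (4.6)
c2 = 2 * root * R - DK * (gS * S + gL * L)
c1 = (-2 * root * (mL + mS) * R - 2 * root * (gS - gL) * I
      + 2 * DK * (gS * mL * S + gL * mS * L))
c0 = (DK * (gS * S * (mL ** 2 + gL ** 2) + gL * L * (mS ** 2 + gS ** 2))
      - 2 * root * (gS * gL + mS * mL) * R
      + 2 * root * (gL * mS - gS * mL) * I)

assert all(abs(c) < 1e-12 for c in (c2, c1, c0))
# eq. (4.19): I/R = -Δm/(γ_S + γ_L), exact for S = L = 1
assert abs(I / R + dm / (gS + gL)) < 1e-12
```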
On the other hand keeping $\mbox{$\Delta_K$} \neq 0$ but dropping Khalfin's eq.(\ref{3.10}) from the analysis we would end up with four equations (\ref{3.17})-(\ref{3.19}) for the four unknowns $S$, $L$, $R$, and $I$. Again no conclusion on the accuracy could have been reached. This displays once again the different nature of the $K^0-\bar{K^0}$ system and also the usefulness of (\ref{3.10}). As far as the {\it size } of one possible correction term ($\sim \Gamma_{{}_{X}}/m_{{}_{X}}$) is concerned the alert reader might object that this has been known all along as corrections to the exponential decay law. This is only partly true. As we have tried to argue above the presence of CP-violation alters the picture completely as only in this case equations (\ref{3.10}) and (\ref{3.17})-(\ref{3.19}) are an overdetermined system. In this context we remark that: 1. a consistency check has to be performed in any case as (\ref{4.1}) could have been inconsistent for totally different reasons and 2. it is probably safer not to rely on restrictions obtained in the framework of a CP-conserving theory. Corrections of the order ${\cal O}(\Gamma_{{}_{X}}/m_{{}_{X}})$ are of course expected to the {\it exponential decay law}, but the result here is more general as it explicitly states that corrections to {\it oscillatory terms} in $\mbox{$P_{K^0 K^0}(t)$}$ etc. coming from the exact (unknown) spectral functions $\rho_{{}_{S,\alpha}}$ and $\rho_{{}_{L,\alpha}}$ will be of the same order. Both these corrections are totally different in nature since corrections to $\exp (-\Gamma t)$ are associated with the small/large time behaviour of the amplitudes whereas corrections to oscillatory terms might also arise for intermediate time scales. Indeed Khalfin's result on vacuum regeneration of $K_S$ and $K_L$ discussed in section 3 induces corrections of the latter type (see section 5).
The nature of such corrections stemming from beyond (\ref{4.1}) cannot then be known a priori and an analysis is required. That this analysis revealed ${\cal O}(\Gamma_{{}_{X}}/m_{{}_{X}})$ and ${\cal O}(\mbox{$\Delta m$} /\mbox{$m_{{}_{L}}$})$ as limits of applicability of (\ref{4.1}) means also that we can trust terms of order ${\cal O}(\Gamma_{{}_{L}}/\mbox{$\Delta m$})$, should such terms indeed appear along the line of further calculations. From now on we use \begin{equation} \label{4.20} S=L \simeq 1 \end{equation} unless otherwise stated. We close this section by observing that the sum of (\ref{4.7}) and (\ref{4.8}) with $S=L \simeq 1$ is nothing else but the well known Bell-Steinberger unitarity relation \cite{bell2}, namely \begin{equation} \label{4.21} \mbox{$\Delta_K$} \left(\mbox{$\gamma_{{}_{S}}$} +\mbox{$\gamma_{{}_{L}}$} -i \mbox{$\Delta m$} \right) =2\sqrt{\mbox{$\gamma_{{}_{S}}$} \mbox{$\gamma_{{}_{L}}$}} \sum_{\beta} A^*_{{}_{S,\beta}}A_{{}_{L,\beta}} \end{equation} The reason it appears here in a slightly different form (compare e.g. with \cite{maiani}) is the different normalization of the amplitudes. Recently corrections to (\ref{4.21}) of the order ${\cal O}(\mbox{$\Delta m$} /\mbox{$m_{{}_{L}}$})$ have been calculated (see the second reference in \cite{suder2}). As shown above such corrections are indeed expected. Finally we note that for the analysis in this section it is immaterial whether or not $\mbox{$P_{K_L K_S}(t)$}$ is zero. \setcounter{equation}{0} \section{ Time Development} Having convinced ourselves that the one-pole approximation (\ref{4.1}) is consistent up to terms of order ${\cal O}(\Gamma_{{}_{X}}/m_{{}_{X}})$ and ${\cal O}(\mbox{$\Delta m$} /\mbox{$m_{{}_{L}}$})$ we can proceed to calculate the matrix elements (\ref{3.11})-(\ref{3.13}). With equations (\ref{4.7}), (\ref{4.8}) and (\ref{4.20}) we have all necessary information to do so.
We mentioned in section 3 that the ground state in ${\rm Spec}(H)$ induces corrections to the exponential decay law (\ref{3.5}). Since this also implies the integration domain $(0, \infty)$ in (\ref{3.11})-(\ref{3.13}) we should handle such terms with care and make sure that all `new' terms induced by the lower integration limit are indeed of strictly non-oscillatory type in (\ref{3.11})-(\ref{3.13}). This is also important as we want to find out if Khalfin's effect is correlated with small/large time scales. The relevant integrals have been calculated analytically in appendix B. We can infer from the expressions in appendix B that such terms contain the exponential integral function $Ei$ \cite{grad}. We can safely neglect the terms with $Ei$ as it should be clear that the simple ansatz (\ref{4.1}) cannot account for the correctness of such non-oscillatory terms. Let us now have a closer look at (\ref{3.11}). In the one-pole approximation (\ref{4.1}) $\mbox{$P_{K^0 K^0}(t)$}$ can be conveniently written as (see also (\ref{B9}) in appendix B) \begin{eqnarray} \label{5.1} \mbox{$P_{K^0 K^0}(t)$} = \mbox{$P_{\bar{K^0}\bar{ K^0}}(t)$}& =&{1 \over 2 \pi}\biggl\{ e^{-im_{{}_{S}} t}\left(-\int_0^{-m_{{}_{S}} /\gamma_{{}_{S}}} dy \, {e^{-i\gamma_{{}_{S}} ty} \over y^2 + 1} +\int_0^{\infty}dy \, {e^{-i \gamma_{{}_{S}} ty} \over y^2 +1}\right) \nonumber \\ &+ &\left[S \to L \right]\biggr\} \end{eqnarray} We see that we have to calculate integrals of the following type \begin{eqnarray} \label{5.2} K^{(n)}(a) &\equiv & \int_0^{\infty} dx \, {x^n \over x^2 +1}e^{-iax} \nonumber \\ J^{(n)}(a,\eta) & \equiv & \int_0^{\eta}dx \, {x^n \over x^2 + 1}e^{-iax} \end{eqnarray} Collecting only oscillatory terms from the integrals in appendix B we obtain the same expression as in WW-approximation (this of course is not a surprise recalling that our concern here is the last equation in (\ref{2.11}) where only $\mbox{$P_{K^0 \bar{K^0}}(t)$}$ and $\mbox{$P_{\bar{K^0} K^0}(t)$}$ play a role) 
\begin{equation} \label{5.3} \mbox{$P_{K^0 K^0}(t)$} = \mbox{$P_{\bar{K^0}\bar{ K^0}}(t)$} ={1 \over 2} \left\{e^{-im_{{}_{S}} t}e^{-\gamma_{{}_{S}} t} + e^{-im_{{}_{L}} t}e^{-\gamma_{{}_{L}} t}\right\} + N_{{}_{K^0 K^0}}(t) \end{equation} where $N_{{}_{K^0 K^0}}(t)$ denotes all non-oscillatory terms present in the integral. $N_{{}_{K^0 K^0}}(t)$ can, in principle, be extracted from equations (\ref{B1})-(\ref{B5}) but as we said before we cannot trust such terms to be the correct non-oscillatory corrections. One more comment is in order. Putting $\mbox{$\gamma_{{}_{S}}$} /\mbox{$m_{{}_{S}}$}$ to zero the sum of the two integrals in (\ref{5.1}) can be compactly written as \begin{equation} \label{5.4} \int_{-\infty}^{\infty}dy \, {e^{-i\mu y} \over a^2 + y^2}={\pi \over a} e^{-\mu a} \end{equation} which of course means that \begin{equation} \label{5.5} N_{{}_{K^0 K^0}}(t) \to 0 \, \, \, {\rm as}\, \, \, {\Gamma_{{}_{S/L}} \over m_{{}_{S/L}}}\to 0 \end{equation} in agreement with what we said at the beginning of section 3 (see discussion below eq.(\ref{3.5})).
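Eq.(\ref{5.4}) is the familiar Fourier transform of a Lorentzian and is easily confirmed numerically; a small sketch (Python; $a=1$, $\mu =2$ are arbitrary toy values, and only the cosine part is integrated since the sine part vanishes by parity):

```python
import math

# Numerical check of ∫_{-∞}^{∞} dy e^{-iμy}/(a^2 + y^2) = (π/a) e^{-μa} for a, μ > 0.
# The sine (imaginary) part vanishes by parity, so only the cosine part is summed.
a, mu = 1.0, 2.0                 # toy values
Y, n = 400.0, 400_000            # symmetric cutoff and midpoint-rule grid
h = 2.0 * Y / n
approx = 0.0
for k in range(n):
    y = -Y + (k + 0.5) * h
    approx += math.cos(mu * y) / (a * a + y * y)
approx *= h

exact = (math.pi / a) * math.exp(-mu * a)
assert abs(approx - exact) < 1e-4
```

The truncation at $|y|=Y$ and the finite grid both contribute errors well below the tolerance used here, since the integrand falls off like $1/y^2$.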
Similarly the integration in (\ref{3.12}) and (\ref{3.13}) can be done analytically (see (\ref{B10}) in appendix B) and the result reads \begin{eqnarray} \label{5.6} \mbox{$P_{K^0 \bar{K^0}}(t)$} &=&{1 \over 4p^* q}\left\{e^{-im_{{}_{S}} t}e^{-\gamma_{{}_{S}} t} [1+\kappa_{{}_{S}}]- e^{-im_{{}_{L}} t}e^{-\gamma_{{}_{L}} t} [1+\kappa_{{}_{L}}]\right\} +N_{{}_{K^0 \bar{K^0}}}(t) \nonumber \\ \mbox{$P_{\bar{K^0} K^0}(t)$} &=&{1 \over 4p q^*}\left\{e^{-im_{{}_{S}} t}e^{-\gamma_{{}_{S}} t} [1-\kappa_{{}_{S}}]- e^{-im_{{}_{L}} t}e^{-\gamma_{{}_{L}} t} [1-\kappa_{{}_{L}}]\right\} +N_{{}_{\bar{K^0} K^0}}(t) \end{eqnarray} where $N_{{}_{K^0 \bar{K^0}}}(t)$ and $N_{{}_{\bar{K^0} K^0}}(t)$ are again non-oscillatory terms containing the exponential integral function $Ei$ and $\kappa_{{}_{S/L}}$ are given by \begin{eqnarray} \label{5.7} \kappa_{{}_{S}}&=&-2i{\sqrt{\mbox{$\gamma_{{}_{S}}$} \mbox{$\gamma_{{}_{L}}$}} \over \mbox{$\gamma_{{}_{S}}$}}\left[D'_I -i\mbox{$\gamma_{{}_{S}}$} C_I \right] \nonumber \\ \kappa_{{}_{L}}&=&+2i{\sqrt{\mbox{$\gamma_{{}_{S}}$} \mbox{$\gamma_{{}_{L}}$}} \over \mbox{$\gamma_{{}_{L}}$}}\left[F'_I +i\mbox{$\gamma_{{}_{L}}$} C_I \right] \end{eqnarray} The parameters $C_I$, $D'_I$ and $F'_I$ are defined as the solutions of the linear system (\ref{4.14}). Equation (\ref{5.6}) together with (\ref{2.11}) shows that Khalfin's effect depends crucially on the size of the quantities $\kappa_{{}_{S/L}}$. We could, in principle, calculate these quantities taking the solutions $C_I$, $D'_I$ and $F'_I$ from appendix A. There is, however, a more elegant way by going back to (\ref{4.14}). This linear system fixes $C_I$, $D'_I$ and $F'_I$ in terms of $R$ and $I$ (eqs.(\ref{4.7})-(\ref{4.8})), the latter having been kept arbitrary at that stage in section 4, i.e. valid to any order of $\Gamma_{{}_{X}} /m_{{}_{X}}$. But we know now that we are allowed to keep only the zeroth order of $\Gamma_{{}_{X}} /m_{{}_{X}}$.
Then $R$, $I$ taken together with (\ref{4.20}) and a redefinition of the form \begin{equation} \label{5.8} \left( \begin{array}{c} \tilde{C}_I \\ \tilde{D}_I \\ \tilde{F}_I \end{array} \right)= 2{\sqrt{\mbox{$\gamma_{{}_{S}}$} \mbox{$\gamma_{{}_{L}}$}} \over \mbox{$\Delta_K$}} \left( \begin{array}{c} C_I \\ D'_I \\ F'_I \end{array} \right) + \left( \begin{array}{c} 1 \\ 0 \\ 0 \end{array}\right), \, \, \, \left( \begin{array}{c} \tilde{a}_{{}_{I}} \\ \tilde{b}_{{}_{I}} \\ \tilde{c}_{{}_{I}} \end{array} \right) =2 {\sqrt{\mbox{$\gamma_{{}_{S}}$} \mbox{$\gamma_{{}_{L}}$}} \over \mbox{$\Delta_K$}} \left( \begin{array}{c} a_{{}_{I}} \\ b_{{}_{I}} \\ c_{{}_{I}} \end{array} \right) \end{equation} convert (\ref{4.14}) into a homogeneous linear system in the limit $\Gamma_{{}_{X}}/m_{{}_{X}} \to 0$ \begin{equation} \label{5.9} \left( \begin{array}{ccc} -\tilde{a}_{{}_{I}} & 1 & 1 \\ -\tilde{b}_{{}_{I}} & -2\mbox{$m_{{}_{L}}$} & -2\mbox{$m_{{}_{S}}$} \\ -\tilde{c}_{{}_{I}} & \mbox{$m_{{}_{L}}$}^2 + \mbox{$\gamma_{{}_{L}}$}^2 & \mbox{$m_{{}_{S}}$}^2 +\mbox{$\gamma_{{}_{S}}$}^2 \end{array} \right) \left( \begin{array}{c} \tilde{C}_I \\ \tilde{D}_I \\ \tilde{F}_I \end{array} \right)=0 \end{equation} Since the determinant \footnote{Demanding the determinant to be zero gives $\tilde{b}_{{}_{I}}-\tilde{a}_{{}_{I}}\tilde{c}_{{}_{I}}=0$ and in the suitable accuracy $\mbox{$\gamma_{{}_{S}}$}^2 -\mbox{$\gamma_{{}_{L}}$}^2 \simeq 0$ or $(\mbox{$\gamma_{{}_{S}}$}^2 -\mbox{$\gamma_{{}_{L}}$}^2)/\mbox{$\Delta m$}^2 \simeq -2(\mbox{$m_{{}_{S}}$} +\mbox{$m_{{}_{L}}$} )/ \mbox{$\Delta m$}$. 
Clearly both these relations are only hypothetical and not valid in the $K^0-\bar{K^0}$ system.} of the coefficient matrix in (\ref{5.9}) is non-zero we get only the trivial solution \begin{equation} \label{5.10} \tilde{C}_I=\tilde{D}_I=\tilde{F}_I=0 \end{equation} This immediately implies that \begin{equation} \label{5.11} \kappa_{{}_{S}}=\kappa_{{}_{L}}=\mbox{$\Delta_K$} +{\cal O}(\Gamma_{{}_{X}}/m_{{}_{X}}) +{\cal O}(\mbox{$\Delta m$} /\mbox{$m_{{}_{L}}$} ) \end{equation} Equipped with this simple result, eqs.(\ref{5.6}) take the familiar form \begin{eqnarray} \label{5.12} \mbox{$P_{K^0 \bar{K^0}}(t)$} = {p \over 2q}\left\{e^{-im_{{}_{S}} t} e^{-\gamma_{{}_{S}} t}- e^{-im_{{}_{L}} t}e^{-\gamma_{{}_{L}} t} \right\} + {\rm non-osc.\, terms} \nonumber \\ \mbox{$P_{\bar{K^0} K^0}(t)$} = {q \over 2p}\left\{e^{-im_{{}_{S}} t} e^{-\gamma_{{}_{S}} t}- e^{-im_{{}_{L}} t}e^{-\gamma_{{}_{L}} t} \right\} + {\rm non-osc.\, terms} \end{eqnarray} Up to non-oscillatory terms these equations are equivalent to the WW-expressions. What we have shown is that indeed corrections to oscillatory terms due to Khalfin's general result will appear in (\ref{5.12}), but they are necessarily of the order ${\cal O}(\Gamma_{{}_{X}}/m_{{}_{X}})$, ${\cal O}(\mbox{$\Delta m$} /\mbox{$m_{{}_{L}}$} )$. This follows from the fact that the one-pole approximation is reliable only up to such terms. In the calculation with the one-pole ansatz any term whose order of magnitude is much bigger than ${\cal O}(\Gamma_{{}_{X}}/m_{{}_{X}})$, ${\cal O}(\mbox{$\Delta m$} /\mbox{$m_{{}_{L}}$} )$, like $\Gamma_{{}_{L}}/\mbox{$\Delta m$}$, would then still be acceptable. But no such term shows up in the course of the calculation. It should also be appreciated that such corrections have nothing to do with small/large time behaviour of the transition amplitudes (i.e. they are not interrelated to the usual corrections to the exponential decay law).
This is evident from the way $\kappa_{{}_{S/L}}$ enters (\ref{5.6}). Finally the answer to the question we have put forward in the form of equation (\ref{2.13}) can also be given by a simple equation, namely \begin{equation} \label{5.13} \mbox{$P_{K_L K_S}(t)$} = -\mbox{$P_{K_S K_L}(t)$} =0 +{\cal O}(\Gamma_{{}_{X}}/m_{{}_{X}}) +{\cal O}(\mbox{$\Delta m$} /\mbox{$m_{{}_{L}}$} ) \end{equation} Were it not for Khalfin's theorem discussed in section 3, it would be completely legitimate to assume $\mbox{$P_{K_L K_S}(t)$}$ to be strictly zero. Our result agrees with the conclusion of ref. \cite{suder3} reached there in a different way. We postpone any further discussion to the next section where we will give a summary. In the end we compare our result with expressions obtained by Khalfin who arrives at an equation similar to (\ref{5.6}) \cite{kha3}. To obtain his results we have to make only the following replacement in eq.(\ref{5.6}) \begin{equation} \label{5.14} \kappa_{{}_{S}} \to {-2i \sqrt{\Gamma_{{}_{S}}\Gamma_{{}_{L}}} \over \mbox{$\Delta m$} +i\left(\Gamma_{{}_{S}} + \Gamma_{{}_{L}}\right)}, \, \, \, \, \, \, \kappa_{{}_{L}} \to \kappa^*_{{}_{S}} \end{equation} As explicitly shown in \cite{suder3} the numerical value of $\kappa_{{}_{S}}$ would then be \begin{equation} \label{5.15} \kappa_{{}_{S}} \sim 0.06\, e^{i\pi /4} \end{equation} The effect would then indeed be of the order $\Gamma_{{}_{L}}/\mbox{$\Delta m$}$ as can be seen from the equation \begin{equation} \label{5.16} |\mbox{$P_{K^0 \bar{K^0}}(t)$} |^2 \simeq {1 \over 4}\left\{e^{-\Gamma_{{}_{S}}t} + e^{-\Gamma_{{}_{L}}t} -2e^{-(\gamma_{{}_{S}}+ \gamma_{{}_{L}})t}\left[\cos (\mbox{$\Delta m$} t) - 0.4 \times 10^{-3}\sin (\mbox{$\Delta m$} t)\right]\right\} \end{equation} We have, however, shown that this is an overestimate by several orders of magnitude. The difference between Khalfin's approach and ours is essentially our consistent treatment of the one-pole approximation in section 4.
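The chain leading to (\ref{5.11}) can also be traced numerically: taking $R$ and $I$ from (\ref{4.7})-(\ref{4.8}) with $S=L=1$, building $a_{{}_{I}}$, $b_{{}_{I}}$, $c_{{}_{I}}$ via (\ref{4.12}), solving the linear system (\ref{4.14}) and inserting the solution into (\ref{5.7}) indeed returns $\kappa_{{}_{S}}=\kappa_{{}_{L}}=\mbox{$\Delta_K$}$; for $S=L=1$ the reduction is in fact exact, so any residue is pure floating-point noise. A self-contained Python sketch (all parameter values are toy numbers in units of $m_{{}_{L}}$, not physical ones):

```python
import math

def solve3(A, b):
    # 3x3 Gaussian elimination with partial pivoting
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, 3):
            f = M[r][i] / M[i][i]
            for c in range(i, 4):
                M[r][c] -= f * M[i][c]
    x = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        x[i] = (M[i][3] - sum(M[i][c] * x[c] for c in range(i + 1, 3))) / M[i][i]
    return x

# toy parameters in units of m_L, with γ/m and Δm/m small (not physical values)
mL, dm = 1.0, 1e-3
mS = mL + dm
gS, gL, DK = 1e-3, 2e-4, 0.1
root = math.sqrt(gS * gL)

R = DK * (gS + gL) / (2 * root)       # eq. (4.7) with S = L = 1
I = -DK * dm / (2 * root)             # eq. (4.8) with S = L = 1

aI = I                                # eq. (4.12)
bI = (gS - gL) * R - (mS + mL) * I
cI = (gL * mS - gS * mL) * R + (mL * mS + gS * gL) * I

# the linear system (4.14) for C_I, D'_I, F'_I
A = [[dm, 1.0, 1.0],
     [(mL ** 2 + gL ** 2) - (mS ** 2 + gS ** 2), -2 * mL, -2 * mS],
     [mL * (mS ** 2 + gS ** 2) - mS * (mL ** 2 + gL ** 2),
      mL ** 2 + gL ** 2, mS ** 2 + gS ** 2]]
C, Dp, Fp = solve3(A, [aI, bI, cI])

kS = -2j * (root / gS) * (Dp - 1j * gS * C)   # eq. (5.7)
kL = +2j * (root / gL) * (Fp + 1j * gL * C)

# eq. (5.11): both κ's reduce to Δ_K
assert abs(kS - DK) < 1e-5
assert abs(kL - DK) < 1e-5
```

The solver finds $C_I=-\mbox{$\Delta_K$} /(2\sqrt{\mbox{$\gamma_{{}_{S}}$} \mbox{$\gamma_{{}_{L}}$}})$ and $D'_I=F'_I=0$, which is the content of (\ref{5.10}) after the rescaling (\ref{5.8}).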
\setcounter{equation}{0} \section{Conclusions} It is satisfactory to arrive after lengthy calculations at familiar expressions of the Weisskopf-Wigner approximation. More so as our starting point was completely different from the WW-approach. This not only gives us more confidence in the WW-approximation whose equations, as we know, are of utmost importance for the $K^0-\bar{K^0}$ system, but has also the virtue that one is able to derive the limitations of the WW-approximation for the oscillatory as well as for the exponential terms. We have emphasized that corrections to the oscillatory terms are different in nature from corrections arising from small/large time behaviour of the amplitudes. It turned out, however, that both such corrections must be of the order ${\cal O}(\Gamma_{{}_{X}}/m_{{}_{X}})$, ${\cal O}(\mbox{$\Delta m$} /\mbox{$m_{{}_{L}}$} )$. This is not evident a priori due to the specifics of the $K^0-\bar{K^0}$ system where besides $\Gamma_{{}_{X}}/m_{{}_{X}}$ quantities like $\Gamma_{{}_{L}}/\mbox{$\Delta m$}$ do appear. The reanalysis of the present paper was also necessary in view of a claim of Khalfin that new effects in connection with the non-zero vacuum regeneration of $K_S$ and $K_L$ are of the order of $\Gamma_{{}_{L}}/\mbox{$\Delta m$}$. Let us recapitulate the steps which have led to our result. We have presented two of Khalfin's theorems. One was eq.(\ref{3.10}) which played a crucial role in our analysis. Actually without this equation no conclusion on the validity of the one-pole approximation could have been reached. The other one was the surprising result on the existence of $K_S$ and $K_L$ vacuum regeneration, an effect usually associated with interactions of $K_S$ and $K_L$ in matter. Although this result is quite `exotic' the author of the present paper could not find a loophole in the arguments which led to this result.
The vacuum regeneration of $K_S$ and $K_L$ goes against what one would intuitively expect and what one is normally used to. Note, however, that this `intuition' is based on quantum mechanical systems where the unstable states have zero overlap. $\mbox{$|K_S\rangle$}$ and $\mbox{$|K_L\rangle$}$ have non-zero overlap, a singled-out property which is then responsible for counter-intuitive effects. The proof of Khalfin's result relies on the well-established formalism of Quantum Mechanics (eqs.(\ref{3.1})-(\ref{3.3})) and seems therefore hard to dispute once we assume that $\mbox{$|K_S\rangle$}$ and $\mbox{$|K_L\rangle$}$ are given as in (\ref{2.1}). To estimate the size of such an effect we had to perform a consistency check of the one-pole approximation (\ref{4.1}). The outcome of this check provided us with limits of the applicability of (\ref{4.1}) and the determination of a priori unknown variables (combinations of decay amplitudes). Indeed the difference between the present paper and the result obtained by Khalfin can be traced back to exactly this point. In a subsequent step we have derived the time evolution of the system starting from the equations (\ref{3.11})-(\ref{3.13}). The formulae so obtained agreed with expressions from the WW-formalism. This in turn implied that the effect of vacuum regeneration of $K_S$ and $K_L$ is necessarily small and of the order of ${\cal O}(\Gamma_{{}_{X}}/m_{{}_{X}})$, ${\cal O}(\mbox{$\Delta m$} /\mbox{$m_{{}_{L}}$} )$. Our estimate does not render the general result of Khalfin useless as in fact the effect is non-zero. Furthermore we know from this result that on quite general grounds \begin{equation} \label{6.1} {\mbox{$P_{K^0 \bar{K^0}}(t)$} \over \mbox{$P_{\bar{K^0} K^0}(t)$}} \neq {\rm const} \end{equation} Any test therefore which as a starting assumption relies instead on (\ref{2.20}) \cite{dass} should be then carefully reconsidered. \vskip 2cm {\elevenbf \noindent Acknowledgments} \newline The author thanks N. Paver, G.
Pancheri, M. Finkemeier and R.M. Godbole for useful discussions and comments on the subject treated in the paper. The author also wishes to acknowledge financial support by the HCM program under EEC contract number CHRX-CT 920026. \vglue 0.4cm \newpage \setcounter{equation}{0}
\section{The General Dissipative Theory} The typical transport setup, see Fig.\ 1(A), can be described by \begin{eqnarray}\label{H-ms} H &=& H_S(a_{\mu}^{\dagger},a_{\mu})+\sum_{\alpha=L,R}\sum_{\mu k} \epsilon_{\alpha\mu k}d^{\dagger}_{\alpha\mu k}d_{\alpha\mu k} \nonumber \\ & & + \sum_{\alpha=L,R}\sum_{\mu k}(t_{\alpha\mu k}a^{\dagger}_{\mu} d_{\alpha\mu k}+\rm{H.c.}) . \end{eqnarray} $H_S$ is the system Hamiltonian, which can be rather general (e.g. including many-body interaction). $a^{\dagger}_{\mu}$ ($a_{\mu}$) is the creation (annihilation) operator of an electron in the state labelled by ``$\mu$", which runs over both the orbital and spin states of the transport system. The second and third terms describe, respectively, the two (left and right) electrodes and the tunneling between the electrodes and the system. To make contact with the quantum dissipation theory of quantum open systems, let us introduce the reservoir operators $F_{\mu} = \sum_{\alpha k} t_{\alpha\mu k}d_{\alpha\mu k} \equiv f_{L\mu} + f_{R\mu}$. Accordingly, the tunneling Hamiltonian $H'$ reads $ H' = \sum_{\mu} \left( a^{\dagger}_{\mu} F_{\mu} + \rm{H.c.}\right) $. Treating $H'$ perturbatively up to second order in the cumulant expansion yields \cite{Yan98} \begin{equation}\label{cumm-expan} \dot\rho(t)=-i{\cal L} \rho(t)-\int_{0}^{t} d\tau \langle {\cal L}'(t) {\cal G}(t,\tau){\cal L}'(\tau) {\cal G}^{\dagger}(t,\tau)\rangle \rho(t). \end{equation} The reduced density matrix is defined as $\rho(t)=\rm{Tr}_B[\rho_T(t)]$, by tracing out the reservoir degrees of freedom from the total system-plus-reservoirs density matrix. The Liouvillian superoperators are defined as ${\cal L}(\cdots)\equiv [H_S,(\cdots)]$, ${\cal L'}(\cdots)\equiv [H',(\cdots)]$, and ${\cal G}(t,\tau)(\cdots)\equiv G(t,\tau)(\cdots)G^{\dagger}(t,\tau)$ with $G(t,\tau)$ the usual propagator (Green's function) associated with the system Hamiltonian $H_S$.
The integral kernel in \Eq{cumm-expan}, which is in the so-called partial ordering prescription (POP) (or time-local) form \cite{Yan98}, describes the second-order tunneling self-energy. At the second-order level, one may replace $\rho(t)$ in the last term of \Eq{cumm-expan} with ${\cal G}(t,\tau)\rho(\tau)$, which brings the tunneling integral kernel to the chronological ordering prescription (COP) (or memory) form \cite{Yan98}, $\langle {\cal L}'(t) {\cal G}(t,\tau){\cal L}'(\tau) \rangle\rho(\tau)$. The corresponding four terms in the conventional Hilbert space \cite{Li04b,Li04c}, depicted on the real-time Keldysh contour in Fig.\ 2, provide a clear diagrammatic interpretation of the second-order tunneling self-energy process. \begin{figure} \begin{center} \includegraphics*[width=8cm,height=5cm,keepaspectratio]{gra_2} \caption{Diagrammatic illustration for the second-order tunneling self-energy processes, on the Keldysh contour. The upper and lower horizontal lines describe the forward and backward propagation of the transport system, which is treated exactly in terms of the system Green's function $G(t,\tau)$. The dashed line stands for the tunneling between the system and electrodes. } \end{center} \end{figure} Explicitly tracing out the states of the electrodes, \Eq{cumm-expan} gives rise to \begin{eqnarray}\label{rho-t1} \dot{\rho} &=& -i {\cal L}\rho - \sum_{\mu} \left\{ [a_{\mu}^{\dagger},A_{\mu}^{(-)}\rho -\rho A_{\mu}^{(+)}] + \rm{H.c.} \right\}. \end{eqnarray} For a {\it time-independent} system Hamiltonian, $ A^{(\pm)}_{\mu}=\sum_{\alpha=L,R}A^{(\pm)}_{\alpha\mu} = \sum_{\alpha\nu} \int_{-\infty}^{\infty}\!\!dt\, C^{(\pm)}_{\alpha\mu\nu}(\mp t)[i{\tilde a}_{\nu}(t)] $, with $C^{(+)}_{\alpha\mu\nu}(t) \equiv \langle f^{\dagger}_{\alpha\mu}(t) f_{\alpha\nu}(0)\rangle$, $C^{(-)}_{\alpha\mu\nu}(t) \equiv \langle f_{\alpha\mu}(t) f_{\alpha\nu}^{\dagger}(0)\rangle$, and ${\tilde a}_{\nu}(t)=-i\Theta(t){\cal G}(t,0)a_{\nu} \equiv \Pi^{(0)}(t,0)a_{\nu}$.
Note that the step function $\Theta(t)$ extends the lower bound of the time integral from $0$ to $-\infty$, whereas the extension of the upper bound from $t$ to $\infty$ results from the Markovian approximation. For a {\it time-dependent} system Hamiltonian, time-translational invariance breaks down; we thus define ${\tilde a}_{\nu}(t,t')=-i\Theta(t-t'){\cal G}(t,t')a_{\nu}$. The backward evolution of $\tilde{a}_{\nu}(t,t')$ with respect to $t'$, starting from $t'=t$, can be carried out via $\partial_{t'} \tilde{a}_{\nu}(t,t')=i\delta(t-t')a_{\nu} +i [H_S(t'),\tilde{a}_{\nu}(t,t')]$. Thus, the time integral in $A_{\mu}^{(\pm)}$, which now becomes of the type $\int^{t}_{0}dt'C^{(\pm)}_{\alpha\mu\nu}(t-t'){\tilde a}_{\nu}(t,t')$, can be calculated accordingly. Inserting the obtained $A_{\mu}^{(\pm)}$ into \Eq{rho-t1}, the time-dependent phenomena associated with either quantum dissipative dynamics or transport current can be easily treated. For clarity, we hereafter assume the system Hamiltonian to be time-independent, unless otherwise specified. Now we consider the possibility to go beyond the second-order self-energy process diagrammatically shown in Fig.\ 2. An efficient scheme follows the idea of the well-known self-consistent Born approximation (SCBA), i.e., the free propagator defined above, $\Pi^{(0)}(t)\equiv -i\Theta(t){\cal G}(t,0) $, is replaced by an effective propagator $\Pi(t)$. The latter is obtained formally via the Dyson equation \cite{note-2} \begin{eqnarray}\label{pi-2} \dot{\Pi}(t)=-i\delta(t)-i{\cal L}\Pi(t) -i\int^{t}_{-\infty}dt'\Sigma(t-t')\Pi(t') , \end{eqnarray} or $\Pi(\omega) = [\omega-{\cal L}-\Sigma(\omega)]^{-1}$ in the frequency domain, with $\Sigma$ being the irreducible self-energy defined again by Fig.\ 2. Accordingly, $\tilde{a}_{\nu}(t)=\Pi(t)a_{\nu}$, and \begin{eqnarray}\label{Apm-2} A^{(\pm)}_{\mu} = \sum_{\alpha\nu}\int\frac{d\omega}{2\pi} C^{(\pm)}_{\alpha\mu\nu}(\pm\omega) [i\Pi(\omega)a_{\nu}] .
\end{eqnarray} Equations (\ref{rho-t1})--(\ref{Apm-2}) constitute the basic ingredients of the proposed SCBA scheme. This type of self-consistent partial summation has included an infinite number of higher-order tunneling processes into the reduced system dynamics. The resulting non-trivial effect on quantum transport will be demonstrated shortly. So far, the trace is performed over all the electrode states, and the resulting \Eq{rho-t1} is an {\it unconditional} master equation for the reduced system dynamics. To characterize the transport problem, we should keep track of the record of electron numbers entering the right reservoir (electrode). Following Refs.\ \onlinecite{Li04b} and \onlinecite{Li04c}, one can obtain a {\it conditional} master equation for the reduced system density matrix, $\rho^{(n)}(t)$, under the condition that $n$ electrons have arrived at the right electrode up to time $t$. On the basis of $\rho^{(n)}(t)$, one is readily able to compute various transport properties, such as the transport current, the probability distribution function $P(n,t)\equiv\rm{Tr}[\rho^{(n)}(t)]$, and the noise spectrum \cite{Li04b}. The transport current can be evaluated via $I(t)=e \sum_n n \rm{Tr}[\dot{\rho}^{(n)}(t)]$, giving rise to \begin{eqnarray}\label{I-t} I(t) = e \sum_{\mu} \rm{Tr} \left[ \left( a^{\dagger}_{\mu}A^{(-)}_{R\mu}-A^{(+)}_{R\mu}a^{\dagger}_{\mu} \right)\rho(t)+\rm{H.c.} \right] . \end{eqnarray} Compared to other transport formalisms, \Eqs{rho-t1}-(\ref{I-t}) provide a convenient framework for quantum transport. As an illustrative application, we consider the non-trivial problem of quantum transport through a strongly interacting quantum dot, described by the well-known Anderson impurity model Hamiltonian: $ H_S = \sum_{\mu}(\epsilon_0a_{\mu}^{\dagger}a_{\mu} +\frac{U}{2}n_{\mu}n_{\bar{\mu}}) $.
Here the index $\mu$ labels the spin up (``$\uparrow$") and spin down (``$\downarrow$") states, and $\bar{\mu}$ stands for the opposite spin orientation. The electron number operator is $n_{\mu}=a^{\dagger}_{\mu}a_{\mu}$, and the Hubbard term $Un_{\uparrow}n_{\downarrow}$ describes the charging effect. Apparently, the reservoir correlation function is diagonal with respect to the spin indices, i.e., $C^{(\pm)}_{\alpha\mu\nu}(t)=\delta_{\mu\nu}C^{(\pm)}_{\alpha\mu\mu}(t)$, and $C_{\alpha\mu\mu}^{(\pm)}(t)=\sum_k |t_{\alpha\mu k}|^2 e^{\pm i\epsilon_k t}n^{(\pm)}_{\alpha}(\epsilon_k) $. Here $n^{(+)}_{\alpha}(\epsilon_k)= n_{\alpha}(\epsilon_k)$ is the Fermi distribution function, and $n^{(-)}_{\alpha}(\epsilon_k)= 1-n_{\alpha}(\epsilon_k)$. Accordingly, we have $ A^{(\pm)}_{\alpha\mu} = \Gamma_{\alpha\mu} \int\frac{d\epsilon}{2\pi} n_{\alpha}^{(\pm)}(\epsilon)[i\Pi(\epsilon)]a_{\mu} $, where, under the wide-band approximation, we have introduced $\Gamma_{\alpha\mu}= 2\pi g_{\alpha} |t_{\alpha\mu k}|^{2}$ and assumed it to be energy independent. From \Eqs{I-t} and (\ref{rho-t1}), the stationary current is obtained as \begin{eqnarray}\label{IV-2} I &=& \frac{e\Gamma_{L}\Gamma_{R}}{\Gamma_{L}+\Gamma_{R}} \int\frac{d\epsilon}{2\pi}{\rm Im}[\Pi(\epsilon)][n_L(\epsilon)-n_R(\epsilon)] . \end{eqnarray} For the single-level system under study, the propagator in energy space simply reads $\Pi(\epsilon)=[\epsilon-\epsilon_0-\Sigma(\epsilon)]^{-1}$. Within the SCBA scheme, the self-energy $\Sigma$ can be explicitly carried out via Fig.\ 2. However, in the case of strong Coulomb repulsion, the dot can be occupied by at most one electron. As a result, it can be easily proven that only Fig.\ 2(C) and (D) contribute to the self-energy. Physically, replacing the bare system propagator with the effective propagator corresponds to including the infinite sequence of forward and backward tunnelings between the system and the {\it same} electrode.
This is in fact a tunneling-induced quantum fluctuation, which would lead to level broadening and a non-trivial interference between the tunneling and the internal interaction of the system. Explicitly, in the large-$U$ limit, the real and imaginary parts of the self-energy read $ {\rm Re} \Sigma(\epsilon) = (m-1) \sum_{\alpha=L,R}\frac{\Gamma_{\alpha\mu}}{2\pi}\Big[ \ln\Big(\frac{\beta U}{2\pi}\Big) -{\rm Re}\psi\Big(\frac{1}{2} +i\frac{\beta}{2\pi}(\epsilon-\mu_{\alpha})\Big )\Big ] $ and $ {\rm Im} \Sigma (\epsilon) = -\sum_{\alpha=L,R}\frac{\Gamma_{\alpha\mu}}{2}\Big [ 1 + (m-1)n_{\alpha}(\epsilon)\Big ]$, respectively \cite{Sch94,Kon96}. Here $\beta \equiv 1/(k_BT)$ is the inverse temperature, $\mu_{\alpha}$ the chemical potential of the electrode, $\psi$ the digamma function, and $m$ denotes the spin degeneracy. (i) For $m=1$, i.e.\ neglecting the spin degree of freedom, ${\rm Im}\Pi(\epsilon)$ gives the well-known Breit-Wigner formula, which appropriately includes the level broadening effect. (ii) For $m\ge 2$ (e.g., $m=2$ for spin $1/2$), the above self-energy correction would result in rich behaviors, depending on the relative values of the parameters such as the temperature and the position of $\epsilon_0$ with respect to the Fermi levels. Detailed discussions, in particular of the non-equilibrium Kondo effect, are referred to the literature, e.g.\ Refs.\ \onlinecite{Kon96}-\onlinecite{Mei93}. \vspace{3ex} {\it Application to Large Scale Systems}.--- So far, the transport-related density matrix formalism has been constructed in the many-particle Hilbert space, which may restrict its direct application to small systems. For large-scale systems in the absence of many-electron interaction, we first recast the formalism into a very simple version in terms of the reduced {\it single-particle} density matrix (RSPDM), $\sigma_{\mu\nu}(t)\equiv \rm{Tr}[a^{\dagger}_{\nu}a_{\mu}\rho(t)]$, which greatly reduces the dimension of the Hilbert space and thus saves computational expense.
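As a concrete check of the $m=1$ limit discussed above: there the self-energy reduces to $\Sigma=-i\Gamma/2$ with $\Gamma=\Gamma_L+\Gamma_R$, ${\rm Im}\,\Pi(\epsilon)$ is a Lorentzian, and the stationary current takes the familiar resonant-level (Landauer-type) form. The following pure-Python sketch evaluates this limit at zero temperature; units $e=\hbar=1$, and the overall prefactor convention is illustrative rather than a literal transcription of the formulas above.

```python
# Zero-temperature resonant-level current in the m = 1 (Breit-Wigner) limit:
#   T(eps) = GL*GR / ((eps - eps0)^2 + (Gamma/2)^2),  Gamma = GL + GR,
#   I = (1/2pi) * Integral_{muR}^{muL} T(eps) d eps.
# Conventions (e = hbar = 1) are illustrative; only the Lorentzian line
# shape is taken from the text.
from math import atan, pi

def transmission(eps, eps0, GL, GR):
    G = GL + GR
    return GL * GR / ((eps - eps0) ** 2 + (G / 2.0) ** 2)

def current_numeric(muL, muR, eps0, GL, GR, n=200000):
    """Trapezoidal quadrature of the zero-temperature current integral."""
    h = (muL - muR) / n
    s = 0.5 * (transmission(muR, eps0, GL, GR) + transmission(muL, eps0, GL, GR))
    for i in range(1, n):
        s += transmission(muR + i * h, eps0, GL, GR)
    return s * h / (2.0 * pi)

def current_exact(muL, muR, eps0, GL, GR):
    """Closed form: the Lorentzian integrates to an arctan difference."""
    G = GL + GR
    return GL * GR / (pi * G) * (atan(2.0 * (muL - eps0) / G)
                                 - atan(2.0 * (muR - eps0) / G))
```

In this convention the current saturates at large bias at $\Gamma_L\Gamma_R/\Gamma$, the maximum throughput of a single resonance.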
To account for the electron-electron interaction, we then propose an efficient time-dependent density functional theory (TDDFT) scheme. Note that it is quite natural to combine the TDDFT technique with the present RSPDM formalism, since the former accounts self-consistently for the many-body interaction while keeping the single-particle picture \cite{note-5}. (i) {\it Time Independent System Hamiltonian}: For simplicity, we first carry out the derivation in the single-particle eigenstate basis, which is denoted as $\{|\mu\rangle,|\nu\rangle,\cdots\}$. In this representation, $A_{\alpha\mu}^{(\pm)}= \sum_{\nu}C^{(\pm)}_{\alpha\mu\nu}(\mp\epsilon_{\nu})a_{\nu}$, and the equation of motion for the RSPDM can be readily obtained by applying \Eq{rho-t1} directly to $\sigma_{\mu\nu}(t)=\rm{Tr}[a^{\dagger}_{\nu}a_{\mu}\rho(t)]$. We have \cite{note-4,Yok04} \begin{eqnarray}\label{S-ME-2} \dot{\sigma} = -i [h,\sigma]-\frac{1}{2} \left\{\left[C^{(-)}\sigma -C^{(+)}\bar{\sigma}\right]+\rm{H.c.} \right\} . \end{eqnarray} Here, $h$ is the single-particle Hamiltonian, or the Fock matrix within the TDDFT framework, which will be identified soon. $\bar{\sigma}\equiv 1 - \sigma $ denotes the ``hole'' density matrix. The matrix products involved are defined as usual; e.g., $[C^{(-)}\sigma]_{\mu\nu} \equiv \sum_{\alpha=L,R}\sum_{\nu'} C^{(-)}_{\alpha\mu\nu'}(\epsilon_{\nu'})\sigma_{\nu'\nu}$. Straightforwardly, the current can be expressed in terms of the RSPDM as \begin{eqnarray}\label{S-It} I(t) = e \rm{Re}\left\{\rm{Tr}\left[ C^{(-)}_{R}\sigma(t) -C^{(+)}_{R}\bar{\sigma}(t)\right] \right\} . \end{eqnarray} In an arbitrary state basis the derivation is the same as above. The difference lies only in the expression of $A_{\alpha\mu}^{(\pm)}$, which in a non-eigenstate representation is given formally as $A_{\alpha\mu}^{(\pm)}\equiv\sum_{\nu}\tilde{C}^{(\pm)}_{\alpha\mu\nu}a_{\nu} =\sum_{\nu\nu',m} C^{(\pm)}_{\alpha\mu\nu'}(\mp \epsilon_m) D^{-1}_{\nu' m}D_{m\nu}a_{\nu} $.
Here $\epsilon_m$ is the eigen-energy of the eigenstate $|m\rangle$, and $D$ is the transformation matrix from the non-eigenstate representation to the eigenstate one. Obviously, with this identification, the resultant master equation and current formula are the same as \Eqs{S-ME-2} and (\ref{S-It}), only replacing the matrices $C^{(\pm)}$ by $\tilde{C}^{(\pm)}$. As an illustrative application of \Eqs{S-ME-2} and (\ref{S-It}), we consider the simple non-interacting multi-level model studied in Ref.\ \onlinecite{Li04c}. In the non-equilibrium stationary state, $\sigma(t\rightarrow \infty)$ is diagonal in the eigenstate basis, thus $[h,\sigma]=0$. As a consequence, the stationary state solution is determined by $C^{(-)}_{\mu\mu}(\epsilon_{\mu})\sigma_{\mu\mu} =C^{(+)}_{\mu\mu}(-\epsilon_{\mu})(1-\sigma_{\mu\mu})$, leading to the well-known result \cite{Li04c}, $\sigma_{\mu\mu}=[\Gamma_L(\epsilon_{\mu})n_L(\epsilon_{\mu})+n_R(\epsilon_{\mu}) \Gamma_R(\epsilon_{\mu})]/[\Gamma_L(\epsilon_{\mu})+\Gamma_R(\epsilon_{\mu})]$. In particular, in the special case of equilibrium, $\sigma_{\mu\mu}$ reduces to the Fermi-Dirac function. Substituting $\sigma_{\mu\mu}$ into \Eq{S-It}, the well-known resonant tunneling current is obtained. (ii) {\it Time Dependent System Hamiltonian}: In this case, the RSPDM can be introduced in a similar manner. Consider, for example, ${\rm Tr}[a^{\dagger}_{\mu}A^{(-)}_{\nu}\rho(t)] = \sum_{\alpha\nu'} \int^{t}_{0}dt' C^{(-)}_{\alpha\nu\nu'}(t,t')\sigma_{\nu'\mu}(t',t) \equiv [C^{(-)}\sigma]_{\nu\mu}$. Here, $\sigma_{\nu'\mu}(t',t)\equiv {\rm Tr} \{a^{\dagger}_{\mu} [{\cal G}(t,t')a_{\nu'}]\rho(t)\}$, which can be solved via $\partial_{t'}\sigma_{\nu'\mu}(t',t)=-i[h(t')\sigma(t',t)]_{\nu'\mu}$, with the initial condition $\sigma_{\nu'\mu}(t,t)={\rm Tr}[a^{\dagger}_{\mu}a_{\nu'}\rho(t)]$.
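As a quick consistency check of the stationary occupation quoted above for the multi-level model: for a single level $\mu$ the diagonal part of \Eq{S-ME-2} closes into the rate equation $\dot{\sigma}_{\mu\mu}=\Gamma_L n_L+\Gamma_R n_R-(\Gamma_L+\Gamma_R)\sigma_{\mu\mu}$ (Lamb-shift terms dropped), whose fixed point is exactly $(\Gamma_L n_L+\Gamma_R n_R)/(\Gamma_L+\Gamma_R)$ and reduces to the Fermi function in equilibrium. A minimal sketch, with energy-independent $\Gamma$'s assumed:

```python
# Forward-Euler relaxation of the single-level rate equation implied by the
# diagonal part of the RSPDM master equation (Lamb-shift terms dropped):
#   d(sigma)/dt = GL*nL + GR*nR - (GL + GR)*sigma.
from math import exp

def fermi(eps, mu, beta):
    return 1.0 / (exp(beta * (eps - mu)) + 1.0)

def occupation(eps, muL, muR, GL, GR, beta=5.0, dt=1e-3, steps=20000):
    nL, nR = fermi(eps, muL, beta), fermi(eps, muR, beta)
    sigma = 0.0                        # start from an empty level
    for _ in range(steps):
        sigma += dt * (GL * nL + GR * nR - (GL + GR) * sigma)
    target = (GL * nL + GR * nR) / (GL + GR)
    return sigma, target
```

The relaxation rate is $\Gamma_L+\Gamma_R$, so a few tens of inverse linewidths of evolution suffice for convergence.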
Similarly, we have ${\rm Tr}[A^{(+)}_{\nu}a^{\dagger}_{\mu}\rho(t)] = \sum_{\alpha\nu'} \int^{t}_{0}dt' C^{(+)}_{\alpha\nu\nu'}(t,t')\bar{\sigma}_{\nu'\mu}(t',t) \equiv [C^{(+)}\bar{\sigma}]_{\nu\mu}$. Here, $\bar{\sigma}_{\nu'\mu}(t',t)\equiv {\rm Tr} \{[{\cal G}(t,t')a_{\nu'}]a^{\dagger}_{\mu}\rho(t)\}$, satisfying an equation of the same form as $\sigma_{\nu'\mu}(t',t)$, but with the initial condition $\bar{\sigma}_{\nu'\mu}(t,t)=\delta_{\nu'\mu}-\sigma_{\nu'\mu}(t)$. As a result, in the time-dependent case the resultant master equation and transport current can also be expressed as \Eqs{S-ME-2} and (\ref{S-It}), keeping in mind that the matrix products involve not only the inner-state summation but also an ``inner-time'' integration. Now we extend the above RSPDM formalism, i.e., \Eqs{S-ME-2} and (\ref{S-It}), to interacting systems. Within the TDDFT framework \cite{RG84}, this can be straightforwardly done by replacing the single-particle Hamiltonian by the Fock matrix \begin{eqnarray}\label{Fock} h_{mn}(t) = h^{0}_{mn}(t)+v^{\rm xc}_{mn}(t) +\sum_{ij}\sigma_{ij}(t)V_{mnij}. \end{eqnarray} In first-principles calculations the state basis is usually chosen as the local atomic orbitals, $\{ \phi_m({\bf r}), m=1,2, \cdots \}$. Here $h^0(t)$ is the non-interacting Hamiltonian, which can in general be time-dependent; $V_{mnij}$ is the two-electron Coulomb integral, $V_{mnij}=\int d{\bf r}\int d{\bf r'}\phi^*_m({\bf r})\phi_n({\bf r}) \frac{1}{|{\bf r}-{\bf r'}|}\phi^*_i({\bf r'})\phi_j({\bf r'})$; and $v^{\rm xc}_{mn}(t)=\int d{\bf r} \phi^*_m({\bf r})v^{\rm xc}[n]({\bf r},t)\phi_n({\bf r})$, with $v^{\rm xc}[n]({\bf r},t)$ the exchange-correlation potential, which is defined by the functional derivative of the exchange-correlation functional $A^{\rm xc}$.
In practice, especially in the time-dependent case, the unknown functional $A^{\rm xc}$ can be approximated by the energy functional $E^{\rm xc}$ obtained in Kohn-Sham theory, further treated within the local density approximation (LDA). Notice that the density function $n({\bf r},t)$ appearing in the Fock operator is related to the RSPDM via $n({\bf r},t)=\sum_{mn}\phi_m({\bf r})\sigma_{mn}(t)\phi^*_n({\bf r})$. Thus, \Eqs{S-ME-2}-(\ref{Fock}) constitute a closed TDDFT approach for the first-principles study of quantum transport, which is currently an intensive research subject \cite{Bur05}. To summarize, we have proposed a compact transport formalism from the perspective of quantum open systems. The new formulation is constructed in terms of an improved reduced density matrix approach at the SCBA level, which is shown to be accurate enough in practice. Based on the established density matrix formalism, we also developed a new TDDFT scheme for the first-principles study of transport through complex large-scale systems. Systematic applications and numerical implementations are in progress and will be published elsewhere. \vspace{2ex} {\it Acknowledgments.} Support from the National Natural Science Foundation of China and the Research Grants Council of the Hong Kong Government is gratefully acknowledged.
\section{Introduction \label{sec:intro}} The asymptotic iteration method (AIM) is an iterative algorithm for the solution of Sturm--Liouville equations~\cite{CHS03,F04}. Although this method does not seem to be better than other existing approaches, it has been applied to quantum--mechanical~\cite{B05,B06,CHS05} as well as mathematical problems~\cite{B05b}. For example, the AIM has proved suitable for obtaining both accurate approximate and exact eigenvalues~\cite{CHS03,F04,B05,B05b,B06,CHS05} and it has also been applied to the calculation of Rayleigh--Schr\"{o}dinger perturbation coefficients~\cite{B06,CHS05b}. Recently, Barakat applied the AIM to a Coulomb potential with a radial polynomial perturbation~\cite{B06}. By means of a well--known transformation he converted the perturbed Coulomb problem into an anharmonic oscillator. Since the straightforward application of the AIM exhibited considerable oscillations and did not appear to converge, Barakat resorted to perturbation theory in order to obtain acceptable results~\cite{B06}. It is most surprising that the straightforward application of the AIM failed for the anharmonic oscillator studied by Barakat~\cite{B06}, since it had been found earlier that the approach should be accurate in such cases~\cite{F04}. The main purpose of this paper is to verify whether the AIM gives accurate eigenvalues of the perturbed Coulomb model or whether its sequences are oscillatory and divergent as mentioned above. We also discuss the application of perturbation theory to that model. In Sec.~\ref{sec:model} we present the model and discuss useful scaling relations for the potential parameters. In Sec.~\ref{sec:AIM} we apply the AIM to the perturbed Coulomb model directly; that is to say, we do not convert it into an anharmonic oscillator. In Sec.~\ref{sec:PT} we outline alternative perturbation approaches, and in Sec.~\ref{sec:concl} we interpret our results and draw conclusions.
\section{The model \label{sec:model}} The problem studied by Barakat~\cite{B06} is given by the following radial Schr\"{o}dinger equation \begin{eqnarray} \hat{H}\Psi &=&E\Psi , \nonumber \\ \hat{H} &=&-\frac{1}{2}\frac{d^{2}}{dr^{2}}+\frac{l(l+1)}{2r^{2}}+V(r) \nonumber \\ V(r) &=&-\frac{Z}{r}+gr+\lambda r^{2}, \label{eq:Schro} \end{eqnarray} where $l=0,1,2,\ldots $ is the angular--momentum quantum number, and the boundary conditions are $\Psi (0)=\Psi (\infty )=0$. We restrict ourselves to the case $\lambda >0$ in order to have only bound states; on the other hand, $Z$ and $g$ can take any finite real value. It is most useful to take into account the scaling relations \begin{eqnarray} E(Z,g,\lambda ) &=&Z^{2}E(1,gZ^{-3},\lambda Z^{-4})=|g|^{2/3}E(Z|g|^{-1/3},g|g|^{-1},\lambda |g|^{-4/3}) \nonumber \\ &=&\lambda ^{1/2}E(Z\lambda ^{-1/4},g\lambda ^{-3/4},1). \label{eq:scaling} \end{eqnarray} Notice that we can set either $Z$ or $\lambda $ equal to unity without loss of generality, and that, for example, $E(1,-g,\lambda )=E(-1,g,\lambda )$. Following Barakat~\cite{B06} we choose $n=0,1,\ldots $ to be the radial quantum number, and we may define a ``principal'' quantum number $\nu =n+l+1=1,2,\ldots $. \section{Direct application of the AIM\label{sec:AIM}} Barakat mentions that the straightforward application of the AIM does not give reasonable results because the sequences oscillate when the number of iterations is greater than approximately $30$~\cite{B06}. This conclusion is surprising because it has been shown that the AIM yields accurate results for anharmonic oscillators~\cite{F04}, and Barakat converted the perturbed Coulomb model into one of them~\cite{B06}. In this section we apply the AIM directly to the original radial Schr\"{o}dinger equation (\ref{eq:Schro}).
By means of the transformation $\psi (r)=\phi (r)y(r)$ we convert the perturbed Coulomb model (\ref{eq:Schro}) into a Sturm--Liouville equation for $y(r)$: \begin{eqnarray} y^{\prime \prime }(r) &=&Q(r)y^{\prime }(r)+R(r)y(r) \nonumber \\ Q(r) &=&-\frac{2\phi ^{\prime }(r)}{\phi (r)} \nonumber \\ R(r) &=&\left\{ 2[V(r)-E]-\frac{\phi ^{\prime \prime }(r)}{\phi (r)}\right\} , \label{eq:St_Li_gen} \end{eqnarray} where $\phi (r)$ is arbitrary. It seems reasonable to choose \begin{equation} \phi (r)=r^{l+1}e^{-\beta r-\alpha r^{2}} \label{eq:phi(r)} \end{equation} which resembles the asymptotic behaviour of the eigenfunction for a harmonic oscillator when $\beta =0$ or for a Coulomb interaction when $\alpha =0$. It leads to \begin{eqnarray} Q(r) &=&4\alpha r-\frac{2(l+1)}{r}+2\beta \nonumber \\ R(r) &=&\left( 2\lambda -4\alpha ^{2}\right) r^{2}+(2g-4\alpha \beta )r+\frac{2\beta (l+1)-2Z}{r} \nonumber \\ &&+2\alpha (2l+3)-2E-\beta ^{2}. \label{eq:P(r)_Q(r)} \end{eqnarray} We can set the values of the two free parameters $\alpha $ and $\beta $ so as to obtain the greatest rate of convergence of the AIM sequences. From now on we refer to the values of those parameters that remove the terms of $R(r)$ dominating at large $r$ as the asymptotic values; that is to say: $\beta =g/(2\alpha )$ and $\alpha =\sqrt{\lambda /2}$. Since the asymptotic values of the free parameters do not necessarily lead to the greatest convergence rate~\cite{F04}, in what follows we will also look for optimal values of $\alpha $. The Sturm--Liouville equation (\ref{eq:St_Li_gen}) with the functions $Q(r)$ and $R(r)$ (\ref{eq:P(r)_Q(r)}) is suitable for the application of the AIM. We do not show the AIM equations here because they have been developed and discussed elsewhere~\cite{CHS03,F04}.
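Although we do not reproduce the AIM equations, they are short enough to sketch in code. The following pure-Python implementation uses the standard recursion $\lambda_n=\lambda_{n-1}'+s_{n-1}+Q\lambda_{n-1}$, $s_n=s_{n-1}'+R\lambda_{n-1}$ (with $\lambda_0=Q$, $s_0=R$ from Eq.~(\ref{eq:P(r)_Q(r)})) and the quantization condition $\delta_N=\lambda_N s_{N-1}-\lambda_{N-1}s_N=0$, evaluated at the positive root $r_0$ of $\phi'(r)=0$ and solved for $E$ by bisection. This is an illustrative sketch, not the code behind the results quoted below; the default parameters correspond to the $g=-2$ case discussed later.

```python
# Minimal AIM sketch for y'' = Q y' + R y, with Q, R from Eq. (P(r)_Q(r)).
# Laurent polynomials in r are dictionaries {power: coefficient}; Decimal
# arithmetic (120 digits) tames the severe cancellation in the termination
# condition delta_N = lambda_N s_{N-1} - lambda_{N-1} s_N at r = r0.
from decimal import Decimal, getcontext

getcontext().prec = 120

def deriv(p):
    return {k - 1: k * c for k, c in p.items() if k != 0}

def padd(p, q):
    out = dict(p)
    for k, c in q.items():
        out[k] = out.get(k, Decimal(0)) + c
    return out

def pmul(p, q):
    out = {}
    for k1, c1 in p.items():
        for k2, c2 in q.items():
            out[k1 + k2] = out.get(k1 + k2, Decimal(0)) + c1 * c2
    return out

def peval(p, r):
    return sum(c * r ** k for k, c in p.items())

def aim_delta(E, N, Z, g, lam, l, alpha):
    E, Z, g, lam, alpha = (Decimal(str(x)) for x in (E, Z, g, lam, alpha))
    beta = g / (2 * alpha)
    Q = {1: 4 * alpha, -1: Decimal(-2 * (l + 1)), 0: 2 * beta}
    R = {2: 2 * lam - 4 * alpha ** 2, 1: 2 * g - 4 * alpha * beta,
         -1: 2 * beta * (l + 1) - 2 * Z,
         0: 2 * alpha * (2 * l + 3) - 2 * E - beta ** 2}
    r0 = ((8 * alpha * (l + 1) + beta ** 2).sqrt() - beta) / (4 * alpha)
    ln, sn = dict(Q), dict(R)          # lambda_0 = Q, s_0 = R
    for _ in range(N):
        lp, sp = ln, sn
        ln = padd(padd(deriv(lp), sp), pmul(Q, lp))
        sn = padd(deriv(sp), pmul(R, lp))
    return peval(ln, r0) * peval(sp, r0) - peval(lp, r0) * peval(sn, r0)

def aim_energy(Elo, Ehi, N=40, Z=1.0, g=-2.0, lam=1.0, l=0, alpha=0.5):
    flo = aim_delta(Elo, N, Z, g, lam, l, alpha)
    for _ in range(40):                # bisection on the AIM condition
        Em = 0.5 * (Elo + Ehi)
        fm = aim_delta(Em, N, Z, g, lam, l, alpha)
        if flo * fm <= 0:
            Ehi = Em
        else:
            Elo, flo = Em, fm
    return 0.5 * (Elo + Ehi)
```

With the nearly optimal $\alpha=1/2$ of the $g=-2$ case, $N=40$ iterations already reproduce the ground-state energy to several digits.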
Since the AIM quantization condition depends not only on the energy but also on the variable $r$ for non--exactly solvable problems, we have to choose a convenient value for the latter~\cite{CHS03,F04}. Later on we will discuss the effect of the value of $r$ on the convergence of the method; for the time being we follow Barakat~\cite{B06} and select the positive root of $\phi ^{\prime }(r)=0$: \begin{equation} r_{0}=\frac{\sqrt{8\alpha (l+1)+\beta ^{2}}-\beta }{4\alpha } \label{eq:r_0} \end{equation} For concreteness we restrict ourselves to $Z=\lambda =1$ and $n=l=0$, and select $g=-2,-1,1,2$ from Barakat's paper~\cite{B06}. As expected from earlier calculations on anharmonic oscillators~\cite{F04}, the rate of convergence of the AIM depends on the value of $\alpha $. In order to investigate this point we choose $g=-2$ because it is the most difficult of all the cases considered here. More precisely, we focus on the behaviour of the logarithmic error $L_{N}=\log |E^{(N)}-E^{exact}|$, where $E^{(N)}$ is the AIM energy at iteration $N$ and $E^{exact}=-1.1716735847196510437987056$ was obtained by means of the rapidly converging Riccati--Pad\'{e} Method (RPM)~\cite{F95,F96} from sequences of determinants of dimension $D=2$ through $D=22$. We first consider the asymptotic value $\alpha =1/\sqrt{2}$. Fig.~\ref{fig:asympt} shows that $L_{N}$ decreases rapidly with $N$ when $N\lesssim 20$ and then more slowly but more smoothly for $N>20$. In the transition region about $N\approx 20$ we observe oscillations that can mislead one into believing that the AIM starts to diverge. Fig.~\ref{fig:optimal} shows that the behaviour of $L_{N}$ for a nearly optimal value $\alpha =1/2$ is similar to that of the previous case, except that the transition takes place at a larger value of $N$ and the convergence rate is greater.
More precisely, $L_{N}$ decreases rapidly with $N$ when $N\lesssim 50$, approximately as $L_{N}\approx 0.22-0.064N-0.0029N^{2}$, and more slowly and smoothly for $N>50$, as $L_{N}\approx -6.5-0.068N$. Again, the transition region exhibits oscillations. Table~\ref{tab:Table2} shows the ground--state energies for $g=-2,-1,1,2$ and the corresponding nearly optimal values of $\alpha $. We estimated those eigenvalues from the sequences of AIM roots for $N=10$ through $N=80$. Notice that the optimal values of $\alpha $ in Table~\ref{tab:Table2} depend on $g$ and do not agree with the asymptotic value $\alpha =\sqrt{1/2}$. Table~\ref{tab:Table2} also shows that the AIM eigenvalues agree with those calculated by means of the RPM~\cite{F95,F96} from sequences of determinants of dimension $D=2$ through $D=15$. The rate of convergence also depends on the chosen value of $r$. The calculation of $L_{N}$ as a function of $\xi =r/r_{0}$ shows that $L_{N}(\xi )$ exhibits a minimum at $\xi _{N}$ and that $\xi _{N}$ increases with $N$ approximately as $\xi _{N}=0.435+0.005N$ (for $g=-2$). However, in order to keep the application of the AIM as simple as possible we just choose $r=r_{0}$ for all the calculations. \section{Alternative perturbation approaches\label{sec:PT}} Barakat~\cite{B06} first converted the radial Schr\"{o}dinger equation (\ref{eq:Schro}) into another one for an anharmonic oscillator by means of the standard transformations $r=u^{2}$ and $\Phi (u)=u^{-1/2}\Psi (u^{2})$. Finally, he derived the Sturm--Liouville problem \begin{equation} f^{^{\prime \prime }}(u)+2\left( \frac{L+1}{u}-\alpha u^{3}\right) f^{\prime }(u)+\left( \epsilon u^{2}-8gu^{4}+8Z\right) f(u)=0, \label{eq:SL_u} \end{equation} where $L=2l+1/2$ and $\epsilon =8E-(2L+5)\alpha $, through factorization of the asymptotic behaviour of the solution: \begin{equation} \Phi (u)=u^{L+1}e^{-\alpha u^{2}/4}f(u),\;\alpha =\sqrt{8\lambda }.
\label{eq:factor1} \end{equation} Notice that the present $\alpha $ and Barakat's $\alpha $ are not exactly the same, but they have a similar meaning and are related by $\alpha _{asymptotic}^{present}=\alpha _{Barakat}/4$. Since Barakat's application of the AIM to Eq. (\ref{eq:SL_u}) did not appear to converge~\cite{B06}, he opted for a perturbation approach that consists of rewriting Eq. (\ref{eq:SL_u}) as \begin{equation} f^{^{\prime \prime }}(u)+2\left( \frac{L+1}{u}-\alpha u^{3}\right) f^{\prime }(u)+\left[ \epsilon u^{2}+\gamma \left( -8gu^{4}+8Z\right) \right] f(u)=0 \label{eq:SL_u_PT} \end{equation} and expanding the solutions in powers of $\gamma $: \begin{equation} f(u)=\sum_{j=0}^{\infty }f^{(j)}(u)\gamma ^{j},\;\epsilon =\sum_{j=0}^{\infty }\epsilon ^{(j)}\gamma ^{j} \label{eq:AIM_PT_series_u} \end{equation} The perturbation parameter $\gamma $ is set equal to unity at the end of the calculation. The series for the energy exhibits a considerable convergence rate and, consequently, Barakat obtained quite accurate results with just two to five perturbation corrections~\cite{B06}. Barakat calculated the coefficient $\epsilon ^{(0)}$ exactly and all the others approximately~\cite{B06}. The model (\ref{eq:Schro}) is suitable for several alternative implementations of perturbation theory in which we simply write $V(r)=V_{0}(r)+\gamma V_{1}(r)$ and expand the solutions in powers of $\gamma $. If we choose $V_{0}(r)=-Z/r$ (when $Z>0$) and $V_{1}(r)=gr+\lambda r^{2}$ then we can calculate all the perturbation coefficients exactly by means of well known algorithms~\cite{F00}. One easily realizes that the perturbation series can be rearranged as \begin{equation} E=Z^{2}\sum_{i=0}^{\infty }\sum_{j=0}^{\infty }d_{ij}g^{i}\lambda ^{j}Z^{-(3i+4j)}. \label{eq:E_series_Z} \end{equation} It is well known that this series is asymptotically divergent for all values of the potential parameters.
The other reasonable perturbation split of the potential energy is $V_{0}(r)=\lambda r^{2}$, $V_{1}(r)=-Z/r+gr$. In this case we can rearrange the series as \begin{equation} E=\lambda ^{1/2}\sum_{i=0}^{\infty }\sum_{j=0}^{\infty }c_{ij}Z^{i}g^{j}\lambda ^{-(i+3j)/4}. \label{eq:E_series_lambda} \end{equation} One expects that this series has a finite radius of convergence. This is exactly the series obtained by Barakat~\cite{B06} by means of the AIM and, consequently, it is not surprising that he derived accurate results from it. In this case one can obtain exact perturbation corrections at least for the first two energy coefficients. For simplicity we concentrate on the states with $n=0$. The eigenfunctions and eigenvalues of order zero are \begin{eqnarray} \Psi _{0l}^{(0)}(r) &=&\frac{\sqrt{2}(2\lambda )^{(2l+3)/8}}{\Gamma (l+3/2)}r^{l+1}e^{-\sqrt{2\lambda }r^{2}/2} \nonumber \\ E_{0l}^{(0)} &=&\frac{\sqrt{2\lambda }}{2}(2l+3) \label{eq:E^(0)} \end{eqnarray} respectively. With the unperturbed eigenfunctions one easily obtains the perturbation correction of first order to the energy \begin{equation} E_{0l}^{(1)}=\frac{(l+1)!g}{(2\lambda )^{1/4}\Gamma (l+3/2)}-\frac{l!Z(2\lambda )^{1/4}}{\Gamma (l+3/2)} \label{eq:E^(1)} \end{equation} that is the term of the series (\ref{eq:E_series_lambda}) with $i+j=1$. One can easily carry out the same calculation for the states with $n>0$ using the appropriate eigenfunctions of the harmonic oscillator. Equation (\ref{eq:E^(1)}) yields all the numerical results for $\epsilon _{0l}^{(1)}$ in Tables 1-3 of Barakat's paper~\cite{B06}. In particular, $E_{0l}^{(1)}=0$ when $g=\sqrt{2\lambda }Z/(l+1)$ as in Table 1 of Barakat's paper~\cite{B06}. This particular relationship between the potential parameters also leads to exact solutions of the eigenvalue equation (\ref{eq:Schro}).
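The closed form (\ref{eq:E^(1)}) is easy to check by direct numerical quadrature of $\langle gr-Z/r\rangle$ in the unperturbed state (\ref{eq:E^(0)}). The sketch below normalizes the radial weight $r^{2l+2}e^{-\sqrt{2\lambda}\,r^{2}}$ numerically, so the overall normalization constant drops out, and it also confirms that $E_{0l}^{(1)}$ vanishes at $g=\sqrt{2\lambda}\,Z/(l+1)$.

```python
# Quadrature check of the first-order energy coefficient E^{(1)}_{0l}:
# <g r - Z/r> with radial weight r^(2l+2) exp(-sqrt(2 lam) r^2), compared
# with the closed form
#   (l+1)! g / ((2 lam)^(1/4) Gamma(l+3/2)) - l! Z (2 lam)^(1/4) / Gamma(l+3/2).
from math import exp, sqrt, gamma, factorial

def E1_closed(l, Z, g, lam):
    q = (2.0 * lam) ** 0.25
    return (factorial(l + 1) * g / (q * gamma(l + 1.5))
            - factorial(l) * Z * q / gamma(l + 1.5))

def E1_numeric(l, Z, g, lam, rmax=12.0, n=120000):
    a = sqrt(2.0 * lam)
    h = rmax / n
    num = den = 0.0
    for i in range(1, n + 1):          # integrand vanishes at both ends
        r = i * h
        w = r ** (2 * l + 2) * exp(-a * r * r)
        num += w * (g * r - Z / r)
        den += w
    return num / den
```

For $l=0$ and $Z=g=\lambda=1$, for instance, both expressions give $E_{00}^{(1)}\approx -0.3930$.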
Some of them are given by \begin{eqnarray} \Psi _{0l}^{exact}(r) &=&N_{l}r^{l+1}e^{-\alpha r^{2}-\beta r},\;\alpha =\sqrt{\frac{\lambda }{2}},\;\beta =\frac{Z}{l+1}, \nonumber \\ E_{0l}^{exact} &=&\alpha (2l+3)-\frac{Z^{2}}{2(l+1)^{2}},\;g=\frac{\sqrt{2\lambda }Z}{l+1}, \label{eq:QES} \end{eqnarray} where $N_{l}$ is a normalization constant. \section{Conclusions \label{sec:concl}} We have shown that the AIM converges for the perturbed Coulomb model if the values of the free parameters in the factor function that converts the Schr\"{o}dinger equation into a Sturm--Liouville one are not too far from optimal. It is clear that it is not necessary to transform the perturbed Coulomb model into an anharmonic oscillator for a successful application of the AIM. Our results do not exhibit the oscillatory divergence reported by Barakat~\cite{B06}, even when choosing the asymptotic value of $\alpha$. The perturbation approach proposed by Barakat~\cite{B06} is equivalent to choosing the harmonic oscillator as the unperturbed or reference Hamiltonian, and if we apply perturbation theory to the original radial Schr\"{o}dinger equation we easily obtain two energy coefficients exactly instead of just one. It is worth mentioning that the coefficients calculated by Barakat~\cite{B06} are quite accurate and, consequently, the resulting series provide a suitable approach for the eigenvalues of the perturbed Coulomb potential. This application of the AIM to perturbation theory is certainly much more practical than the calculation of exact perturbation corrections proposed earlier~\cite{CHS05b}, which can be carried out more efficiently by other approaches~\cite{F00}. \noindent \textbf{Acknowledgements} \noindent P.A. acknowledges support from Conacyt grant C01-40633/A-1
\section{Introduction} Let $\gamma = \gamma^n$ denote the standard Gaussian probability measure on Euclidean space $(\mathbb{R}^n,\abs{\cdot})$: \[ \gamma^n := \frac{1}{(2\pi)^{\frac{n}{2}}} e^{-\frac{|x|^2}{2}} \, dx =: e^{-W(x)}\, dx. \] More generally, if $\H^{k}$ denotes the $k$-dimensional Hausdorff measure, let $\gamma^k$ denote its Gaussian-weighted counterpart: \[ \gamma^k := e^{-W(x)} \H^{k} . \] The Gaussian-weighted (Euclidean) perimeter of a Borel set $U \subset \mathbb{R}^n$ is defined as: \[ P_\gamma(U) := \sup \left\{\int_U (\div X - \inr{\nabla W}{X}) \, d\gamma: X \in C_c^\infty(\mathbb{R}^n; T \mathbb{R}^n), |X| \le 1 \right\}. \] For nice sets (e.g. open sets with piecewise smooth boundary), $P_\gamma(U)$ is known to agree with $\gamma^{n-1}(\partial U)$ (see, e.g.~\cite{MaggiBook}). The weighted perimeter $P_\gamma(U)$ has the advantage of being lower semi-continuous with respect to $L^1(\gamma)$ convergence, and thus fits well with the direct method of calculus-of-variations. The classical Gaussian isoperimetric inequality, established independently by Sudakov--Tsirelson \cite{SudakovTsirelson} and Borell \cite{Borell-GaussianIsoperimetry} in 1975, asserts that among all Borel sets $U$ in $\mathbb{R}^n$ having prescribed Gaussian measure $\gamma(U) = v \in [0,1]$, halfplanes minimize Gaussian-weighted perimeter $P_\gamma(U)$ (see also \cite{EhrhardPhiConcavity, BakryLedoux, BobkovGaussianIsopInqViaCube, BobkovLocalizedProofOfGaussianIso,MorganManifoldsWithDensity}). Later on, it was shown by Carlen--Kerce \cite{CarlenKerceEqualityInGaussianIsop} (see also \cite{EhrhardGaussianIsopEqualityCases,MorganManifoldsWithDensity, McGonagleRoss:15}), that up to $\gamma$-null sets, halfplanes are in fact the \emph{unique} minimizers for the Gaussian isoperimetric inequality. \medskip In this work, we extend these classical results to the case of $3$-clusters. 
A \emph{$k$-cluster} $\Omega = (\Omega_1, \ldots, \Omega_k)$ is a $k$-tuple of Borel subsets $\Omega_i \subset \mathbb{R}^n$ called cells, such that $\set{\Omega_i}$ are pairwise disjoint, $P_\gamma(\Omega_i) < \infty$ for each $i$, and $\gamma(\mathbb{R}^n \setminus \bigcup_{i=1}^k \Omega_i) = 0$. Note that the cells are not required to be connected. The total Gaussian perimeter of a cluster $\Omega$ is defined as: \[ P_\gamma(\Omega) := \frac 12 \sum_{i=1}^k P_\gamma(\Omega_i) . \] The Gaussian measure of a cluster is defined as: \[ \gamma(\Omega) := (\gamma(\Omega_1), \ldots, \gamma(\Omega_k)) \in \Delta^{(k-1)} , \] where $\Delta^{(k-1)} := \{v \in \mathbb{R}^k: v_i \ge 0 ~,~ \sum_{i=1}^k v_i = 1\}$ denotes the $(k-1)$-dimensional simplex. The isoperimetric problem for $k$-clusters consists of identifying those clusters $\Omega$ of prescribed Gaussian measure $\gamma(\Omega) = v \in \Delta^{(k-1)}$ which minimize the total Gaussian perimeter $P_\gamma(\Omega)$. \smallskip Note that easy properties of the perimeter ensure that for a $2$-cluster $\Omega$, $P_\gamma(\Omega) = P_\gamma(\Omega_1) = P_\gamma(\Omega_2)$, and so the case $k=2$ corresponds to the classical isoperimetric setup when testing the perimeter of a single set of prescribed measure. In analogy to the classical \emph{unweighted} isoperimetric inequality in Euclidean space $(\mathbb{R}^n,\abs{\cdot})$, in which the Euclidean ball minimizes (unweighted) perimeter among all sets of prescribed Lebesgue measure \cite{BuragoZalgallerBook,MaggiBook}, we will refer to the case $k=2$ as the ``single-bubble" case (with the bubble being $\Omega_1$ and having complement $\Omega_2$). Accordingly, the case $k=3$ is called the ``double-bubble" problem. See below for further motivation behind this terminology and related results. 
\smallskip A natural conjecture is then the following: \begin{conjecture*}[Gaussian Multi-Bubble Conjecture] For all $k \leq n+1$, the least (Gaussian-weighted) perimeter way to decompose $\mathbb{R}^n$ into $k$ cells of prescribed (Gaussian) measure $v \in \interior \Delta^{(k-1)}$ is by using the Voronoi cells of $k$ equidistant points in $\mathbb{R}^n$. \end{conjecture*} \noindent Recall that the Voronoi cells of $\set{x_1,\ldots,x_k} \subset \mathbb{R}^n$ are defined as: \[ \Omega_i = \interior \set{ x \in \mathbb{R}^n : \min_{j=1,\ldots,k} \abs{x-x_j} = \abs{x-x_i} } \;\;\; i=1,\ldots, k ~ , \] where $\interior$ denotes the interior operation (to obtain pairwise disjoint cells). Indeed, when $k=2$, the Voronoi cells are precisely halfplanes, and the single-bubble conjecture holds by the classical Gaussian isoperimetric inequality. When $k=3$ and $v \in \interior \Delta^{(2)}$, the conjectured isoperimetric minimizing clusters are tripods (also referred to as ``Y's", ``rotors" or ``propellers" in the literature), whose interfaces are three half-hyperplanes meeting along an $(n-2)$-dimensional plane at $120^\circ$ angles (forming a tripod or ``Y" shape in the plane). When $v \in \partial \Delta^{(2)}$, the 3-cluster has (at least) one null-cell, reducing to the $k=2$ case (and indeed two complementing half-planes may be seen as a degenerate tripod cluster, whose vertex is at infinity). \smallskip Tripod clusters are the naturally conjectured minimizers in the double-bubble case, as their interfaces have constant Gaussian mean-curvature (being flat), and meet at $120^{\circ}$ angles, both of which are necessary conditions for any extremizer of (Gaussian) perimeter under a (Gaussian) measure constraint -- see Section \ref{sec:first-order}. 
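As a quick planar sanity check (illustrative only, and not part of any argument in this paper), one can verify numerically that the Voronoi interfaces of three equidistant points in $\mathbb{R}^2$ are half-lines from the circumcenter meeting pairwise at $120^\circ$:

```python
import math

# Three equidistant points on the unit circle; their circumcenter is the origin.
pts = [(math.cos(2 * math.pi * k / 3), math.sin(2 * math.pi * k / 3))
       for k in range(3)]

# For equidistant points summing to zero, the Voronoi interface between the
# cells of p and q is the ray from the origin in direction p + q = -r, i.e.
# pointing away from the third point r (points on it are equidistant from p
# and q, and strictly farther from r).
def interface_direction(p, q):
    d = (p[0] + q[0], p[1] + q[1])
    n = math.hypot(d[0], d[1])
    return (d[0] / n, d[1] / n)

dirs = [interface_direction(pts[i], pts[j]) for (i, j) in [(0, 1), (1, 2), (2, 0)]]
angles = [math.degrees(math.acos(dirs[i][0] * dirs[j][0] + dirs[i][1] * dirs[j][1]))
          for (i, j) in [(0, 1), (1, 2), (2, 0)]]
print(angles)  # all three pairwise angles equal 120 degrees
```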
Note that tripod clusters are also known to be the global extremizers in other optimization problems, such as the problem of maximizing the sum of the squared lengths of the Gaussian moments of the cells in $\mathbb{R}^3$ (see Heilman--Jagannath--Naor \cite{HeilmanJagannathNaor-PropellerInR3}). On the other hand, for the Gaussian noise-stability question, halfplanes are known to maximize noise-stability in the single-bubble case \cite{Borell-GaussianNoiseStability, MosselNeeman-GaussianNoiseStability}, but tripod-clusters are known to not always maximize noise-stability in the double-bubble case \cite{HeilmanMosselNeeman-GaussianNoiseStability}. \smallskip Our main result in this work is the following: \begin{theorem}[Gaussian Double-Bubble Theorem] \label{thm:main1} \label{thm:main-I-I_m} The Gaussian Double-Bubble Conjecture (case $k=3$) holds true for all $n \geq 2$. \end{theorem} In addition, we resolve the uniqueness question: \begin{theorem}[Uniqueness of Minimizing Clusters] \label{thm:main-uniqueness} Up to $\gamma^n$-null sets, tripod clusters are the \emph{unique} minimizers of Gaussian perimeter among all clusters of prescribed Gaussian measure $v \in \interior \Delta^{(2)}$. \end{theorem} \subsection{Previously Known and Related Results} The Gaussian Multi-Bubble Conjecture is known to experts. Presumably, its origins may be traced to an analogous problem of J.~Sullivan from 1995 in the \emph{unweighted} Euclidean setting \cite[Problem 2]{OpenProblemsInSoapBubbles96}, where the conjectured uniquely minimizing cluster (up to null-sets) is a standard $k$-bubble -- spherical caps bounding connected cells $\set{\Omega_i}_{i=1}^{k}$ which are obtained by taking the Voronoi cells of $k$ equidistant points in $\S^{n} \subset \mathbb{R}^{n+1}$, and applying all stereographic projections to $\mathbb{R}^n$. 
To put our results into appropriate context, let us go over some related results in three settings: the unweighted Euclidean setting $\mathbb{R}^n$, on the $n$-sphere $\S^n$ endowed with its canonical Riemannian metric and measure (normalized for convenience to be a probability measure), and finally in our Gaussian-weighted Euclidean setting $\mathbb{G}^n$. Further results may be found in F.~Morgan's excellent book \cite[Chapters 13,14,18,19]{MorganBook5Ed}. \begin{itemize} \item The classical isoperimetric inequality in the unweighted Euclidean setting, going back (at least in dimension $n=2$ and perhaps $n=3$) to the ancient Greeks, and first proved rigorously by Schwarz in $\mathbb{R}^n$ (see \cite[Chapter 13.2]{MorganBook5Ed}, \cite[Subsection 10.4]{BuragoZalgallerBook} and the references therein), states that the Euclidean ball is a single-bubble isoperimetric minimizer in $\mathbb{R}^n$. It was shown by DeGiorgi \cite[Theorem 14.3.1]{BuragoZalgallerBook} that up to null-sets, it is uniquely minimizing perimeter. Long believed to be true, but appearing explicitly as a conjecture in an undergraduate thesis by J.~Foisy in 1991 \cite{Foisy-UGThesis}, the double-bubble case was considered in the 1990's by various authors \cite{SMALL93, HHS95}, culminating in the work of Hutchings--Morgan--Ritor\'e--Ros \cite{DoubleBubbleInR3}, who proved that up to null-sets, the standard double-bubble is uniquely perimeter minimizing in $\mathbb{R}^3$; this was later extended to $\mathbb{R}^n$ in \cite{SMALL03,Reichardt-DoubleBubbleInRn}. That the standard triple-bubble is uniquely perimeter minimizing in $\mathbb{R}^2$ was proved by Wichiramala in \cite{Wichiramala-TripleBubbleInR2}. \item On $\S^n$, it was shown by P.~L\'evy \cite{LevyIsopInqOnSphere} and Schmidt \cite{SchmidtIsopOnModelSpaces} that geodesic balls are single-bubble isoperimetric minimizers. Uniqueness up to null-sets was established by DeGiorgi \cite[Theorem 14.3.1]{BuragoZalgallerBook}. 
As in the unweighted Euclidean setting, the multi-bubble conjecture on $\S^n$ asserts that the uniquely perimeter minimizing $k$-cluster (up to null-sets) is a standard $k$-bubble: spherical caps bounding connected cells $\set{\Omega_i}_{i=1}^{k}$ and having incidence structure as in the Euclidean and Gaussian cases, already described above. The double-bubble conjecture was resolved on $\S^2$ by Masters \cite{Masters-DoubleBubbleInS2}, but on $\S^n$ for $n\geq 3$ only partial results are known \cite{CottonFreeman-DoubleBubbleInSandH, CorneliHoffmanEtAl-DoubleBubbleIn3D,CorneliCorwinEtAl-DoubleBubbleInSandG}. In particular, we mention a result by Corneli-et-al \cite{CorneliCorwinEtAl-DoubleBubbleInSandG}, which confirms the double-bubble conjecture on $\S^n$ for all $n \geq 3$ when the prescribed measure $v \in \Delta^{(2)}$ satisfies $\max_{i} \abs{v_i - 1/3} \leq 0.04$. Their proof employs a result of Cotton--Freeman \cite{CottonFreeman-DoubleBubbleInSandH} stating that if the minimizing cluster's cells are known to be connected, then it must be the standard double-bubble. \item We finally arrive to the Gaussian setting $\mathbb{G}^n$ which we consider in this work. As already mentioned, halfplanes are the unique single-bubble isoperimetric minimizers (up to null-sets). The original proofs by Sudakov--Tsirelson and independently Borell both deduced the single-bubble isoperimetric inequality on $\mathbb{G}^n$ from the one on $\S^N$, exploiting the classical fact that the projection onto a fixed $n$-dimensional subspace of the uniform measure on a rescaled sphere $\sqrt{N} \S^N$, converges to the Gaussian measure $\gamma^n$ as $N \rightarrow \infty$. 
Building upon this idea, it was shown by Corneli-et-al \cite{CorneliCorwinEtAl-DoubleBubbleInSandG} that verification of the double-bubble conjecture on $\S^N$ for a sequence of $N$'s tending to $\infty$ and a fixed $v \in \interior \Delta^{(2)}$ will verify the double-bubble conjecture on $\mathbb{G}^n$ for the same $v$ and for all $n \geq 2$. As a consequence, they confirmed the double-bubble conjecture on $\mathbb{G}^n$ for all $n \geq 2$ when the prescribed measure $v \in \Delta^{(2)}$ satisfies $\max_{i} \abs{v_i - 1/3} \leq 0.04$. Note that this approximation argument precludes any attempts to establish uniqueness in the double-bubble conjecture, and to the best of our knowledge, no uniqueness in the double-bubble conjecture on $\mathbb{G}^n$ was known for any $v \in \interior \Delta^{(2)}$ prior to our Theorem \ref{thm:main-uniqueness}. \end{itemize} \medskip An essential ingredient in the proofs of the double-bubble results in $\mathbb{R}^n$ (and in many results on $\S^n$), is a symmetrization argument due to B.~White (see Foisy \cite{Foisy-UGThesis} and Hutchings \cite{Hutchings-StructureOfDoubleBubbles}). However, it is not clear to us what type of Gaussian symmetrization will produce a tripod cluster (for a symmetrization which works in the single-bubble case, see Ehrhard \cite{EhrhardPhiConcavity}). A second essential ingredient in the above results is Hutchings' theory \cite{Hutchings-StructureOfDoubleBubbles} of bounds on the number of connected components comprising each cell of a minimizing cluster. \smallskip In contrast, we do not use any of these ingredients in our approach. Furthermore, contrary to previous approaches, we do not directly obtain a lower bound on the perimeter of a cluster having prescribed measure $v \in \Delta^{(2)}$, nor do we identify the minimizing clusters by ruling out competitors. 
Our approach is based on obtaining a matrix-valued partial-differential inequality on the associated isoperimetric profile, concluding the desired lower bound in one fell swoop for \emph{all} $v \in \Delta^{(2)}$ by an application of the maximum-principle. \subsection{Outline of Proof} Let $I^{(k-1)} : \Delta^{(k-1)} \to \mathbb{R}_+$ denote the Gaussian isoperimetric profile for $k$-clusters, defined as: \[ I^{(k-1)}(v) := \inf\{P_\gamma(\Omega): \text{$\Omega$ is a $k$-cluster with $\gamma(\Omega) = v$}\}. \] Our goal will be to show that $I^{(2)} = I^{(2)}_m$ on $\interior \Delta^{(2)}$, where $I^{(2)}_m : \interior \Delta^{(2)} \to \mathbb{R}_+$ denotes the Gaussian double-bubble \emph{model} profile: \[ I^{(2)}_m(v) := \inf\{P_\gamma(\Omega): \text{$\Omega$ is a tripod-cluster with $\gamma(\Omega) = v$}\}. \] Note that for any $v \in \interior \Delta^{(2)}$, there exists a tripod-cluster $\Omega^m$ with $\gamma(\Omega^m) = v$. Indeed, by the product structure of the Gaussian measure, it is enough to establish this in the plane, and after fixing the orientation of the tripod, we actually show using a topological argument that there exists a bijection between the vertex of the tripod in $\mathbb{R}^2$ and $\interior \Delta^{(2)}$. To establish $I^{(2)} = I^{(2)}_m$, let us draw motivation from the single-bubble case. By identifying $\Delta^{(1)}$ with $[0,1]$ using the map $\Delta^{(1)} \ni (v_1,v_2) \mapsto v_1 \in [0,1]$, we will think of the single-bubble profile $I^{(1)}$ as defined on $[0,1]$. The single-bubble Gaussian isoperimetric inequality asserts that $I^{(1)} = I^{(1)}_m$, where $I^{(1)}_m : [0,1] \rightarrow \mathbb{R}_+$ denotes the single-bubble \emph{model} profile: \[ I^{(1)}_m(v) := \inf\{P_\gamma(U) : \text{$U$ is a halfplane with $\gamma(U) = v$} \} . 
\] The product structure of the Gaussian measure implies that $I^{(1)}_m$ may be calculated in dimension one, readily yielding: \[ I^{(1)}_m(v) = \varphi \circ \Phi^{-1}(v), \] where $\varphi(x) = (2\pi)^{-1/2} e^{-x^2/2}$ is the one-dimensional Gaussian density, and $\Phi(x) = \int_{-\infty}^x \varphi(y)\, dy$ is its cumulative distribution function. It is well-known and immediate to check that $I^{(1)}_m$ satisfies the following ODE: \begin{equation} \label{eq:intro-ODE} (I^{(1)}_m)^{\prime\prime} = -\frac{1}{I^{(1)}_m} \text{ on $[0,1]$.} \end{equation} Our starting observation is that $I^{(2)}_m$ satisfies a similar \emph{matrix-valued} differential equality on $\Delta^{(2)}$. Let $E$ denote the tangent space to $\Delta^{(2)}$, which we identify with $\{x \in \mathbb{R}^3: \sum_{i=1}^3 x_i = 0\}$. Given $A = (A_{12},A_{23},A_{13})$ with $A_{ij} \geq 0$, we introduce the following $3 \times 3$ positive semi-definite matrix: \[ L_{A} := \sum_{1 \leq i < j \leq 3} A_{ij} (e_i - e_j) (e_i-e_j)^T = \brac{\begin{matrix} A_{12} + A_{13} & -A_{12} & - A_{13} \\ -A_{12} & A_{12} + A_{23} & -A_{23} \\ -A_{13} & -A_{23} & A_{13} + A_{23} \end{matrix}} \geq 0 . \] In fact, as a quadratic form on $E$, it is easy to see that $L_A$ is strictly positive-definite as soon as at least two $A_{ij}$'s are positive. Given $v \in \interior \Delta^{(2)}$, let $A^m_{ij}(v) := \gamma^{n-1}(\partial \Omega^m_i \cap \partial \Omega^m_j) > 0$ denote the weighted areas of the interfaces of a tripod cluster $\Omega^m$ satisfying $\gamma(\Omega^m) = v$. A calculation then verifies that: \begin{equation} \label{eq:intro-MDE} \nabla^2 I^{(2)}_m(v) = -L_{A^m(v)}^{-1} \text{ on $\interior \Delta^{(2)}$,} \end{equation} where differentiation and inversion are both carried out on $E$. This constitutes the right extension of (\ref{eq:intro-ODE}) to the double-bubble setting. \medskip To establish that $I^{(2)} = I^{(2)}_m$ on $\interior \Delta^{(2)}$, our idea is as follows. 
First, it is not hard to show that $I^{(2)}_m$ may be extended continuously to the entire $\Delta^{(2)}$ by setting $I^{(2)}_m(v) := I^{(1)}_m(\max_i v_i)$ for $v \in \partial \Delta^{(2)}$. We clearly have $I^{(2)} \leq I^{(2)}_m$ on $\Delta^{(2)}$, with equality on the boundary by the single-bubble Gaussian isoperimetric inequality (where $I^{(2)}(v) = I^{(1)}(\max_i v_i) = I^{(1)}_m(\max_i v_i)$). \smallskip Assume for the sake of this sketch that $I^{(2)}$ is twice continuously differentiable on $\interior \Delta^{(2)}$. Given an isoperimetric minimizing cluster $\Omega$ with $\gamma(\Omega) = v \in \interior \Delta^{(2)}$, let $A_{ij}(v) := \gamma^{n-1}(\partial^*\Omega_i \cap \partial^* \Omega_j)$, $i < j$, denote the weighted areas of the cluster's interfaces, where $\partial^* U$ denotes the reduced boundary of a Borel set $U$ having finite perimeter (see Section \ref{sec:prelim}). Assume again for simplicity that $A_{ij}(v)$ are well-defined, that is, depend only on $v$. It is easy to see that at least two $A_{ij}(v)$ must be positive, and hence $L_{A(v)}$ is positive-definite on $E$. We will then show that the following matrix-valued differential inequality holds: \begin{equation} \label{eq:intro-MDI} \nabla^2 I^{(2)}(v) \leq -L_{A(v)}^{-1} \text{ on $\interior \Delta^{(2)}$,} \end{equation} in the positive semi-definite sense on $E$. Consequently: \begin{equation} \label{eq:intro-PDE} - \tr[ (\nabla^2 I^{(2)}(v))^{-1} ] \leq \tr(L_{A(v)}) = 2 \sum_{i<j} A_{ij}(v) = 2 I^{(2)}(v) \text{ on $\interior \Delta^{(2)}$.} \end{equation} On the other hand, by (\ref{eq:intro-MDE}), we have equality above when $I^{(2)}$ and $A$ are replaced by $I^{(2)}_m$ and $A^m$, respectively. Since $I^{(2)} \leq I^{(2)}_m$ on $\Delta^{(2)}$ with equality on the boundary, an application of the maximum principle for the (fully non-linear) second-order elliptic PDE (\ref{eq:intro-PDE}) yields the desired $I^{(2)} = I^{(2)}_m$. 
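The one-dimensional identity (\ref{eq:intro-ODE}) underlying this scheme is easy to test numerically. The following minimal Python sketch (standard library only; illustrative, not part of the argument) compares a central finite difference of $I^{(1)}_m = \varphi \circ \Phi^{-1}$ with $-1/I^{(1)}_m$:

```python
import math

phi = lambda x: math.exp(-x * x / 2) / math.sqrt(2 * math.pi)  # Gaussian density
Phi = lambda x: 0.5 * (1 + math.erf(x / math.sqrt(2)))         # Gaussian CDF

def Phi_inv(v):
    # Invert Phi by bisection -- accurate enough for a sanity check.
    lo, hi = -12.0, 12.0
    for _ in range(200):
        mid = (lo + hi) / 2
        if Phi(mid) < v:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

I1m = lambda v: phi(Phi_inv(v))  # single-bubble model profile I_m^{(1)}

v, h = 0.3, 1e-4
I_second = (I1m(v + h) - 2 * I1m(v) + I1m(v - h)) / h ** 2
print(I_second, -1.0 / I1m(v))   # the two values agree up to discretization error
```

Indeed, $(I^{(1)}_m)'(v) = -\Phi^{-1}(v)$, so $(I^{(1)}_m)''(v) = -1/\varphi(\Phi^{-1}(v)) = -1/I^{(1)}_m(v)$, which is what the finite difference reproduces.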
\smallskip The bulk of this work is thus aimed at establishing a rigorous version of (\ref{eq:intro-MDI}). To this end, we consider an isoperimetric minimizing cluster $\Omega$, and perturb it using a flow $F_t$ along a vector-field $X$. Since: \[ I^{(2)}(\gamma(F_t(\Omega))) \le P_\gamma(F_t(\Omega)) \] with equality at $t=0$, we deduce (at least, conceptually) that the first variations must coincide and that the second variations must obey the inequality. Such an idea in the single-bubble case is not new, and was notably used by Sternberg--Zumbrun in \cite{SternbergZumbrun} to establish concavity of the isoperimetric profile for a convex domain in the unweighted Euclidean setting; see also \cite{MorganJohnson,Kuwert, RosIsoperimetryInCrystals, BayleRosales} for further extensions and applications, \cite{KleinerProofOfCartanHadamardIn3D} for a first-order comparison theorem on Cartan-Hadamard manifolds, and \cite{BayleThesis} where the resulting second-order ordinary differential inequality was integrated to recover the Gromov--L\'evy isoperimetric inequality \cite{GromovGeneralizationOfLevy}, \cite[Appendix C]{Gromov}. On a technical level, it is crucial for us to apply this to a \emph{non-compactly supported} vector-field $X$, and so we spend some time to revisit classical results regarding the first and second variations of (weighted) volume and perimeter. Contrary to the single-bubble setting, the interfaces $\Sigma_{ij} = \partial^*\Omega_i \cap \partial^* \Omega_j$ will meet each other at common boundaries, which may contribute to these variations. Fortunately, for the first variation of perimeter, these contributions cancel out thanks to the isoperimetric stationarity of the cluster (without requiring us to use delicate regularity results regarding the structure of the \emph{boundary} of interfaces -- see Remark \ref{rem:no-higher-regularity}). 
For the second variation, we are not so fortunate, and in general the boundary of the interfaces will have a contribution. \smallskip However, in simplifying our original argument for establishing (\ref{eq:intro-MDI}), we obtain an interesting dichotomy: if the minimizing cluster is effectively one-dimensional (a case we do not a-priori rule out), it is easy to obtain (\ref{eq:intro-MDI}) by a one-dimensional computation directly; otherwise, it turns out we obtain enough information from applying the above machinery to the constant vector-field $X \equiv w$ in all directions $w \in \mathbb{R}^n$, amounting to translation of the minimizing cluster. This simplifies considerably the resulting formulas for the second variation of (weighted) volume and perimeter, and allows us again to bypass the delicate regularity results mentioned before. Consequently, we only need to use the classical results from Geometric Measure Theory on the existence of isoperimetric minimizing clusters and regularity of their interfaces, due to Almgren \cite{AlmgrenMemoirs} (see also Maggi's excellent book \cite{MaggiBook}). \smallskip To establish the uniqueness of minimizers, we observe that all of our inequalities in the derivation of (\ref{eq:intro-MDI}) must have been equalities, and this already provides enough information for characterizing tripod-clusters. \medskip The rest of this work is organized as follows. In Section \ref{sec:model}, we construct the model tripod-clusters and associated model isoperimetric profile, and establish their properties. In Section \ref{sec:prelim} we recall relevant definitions and provide some preliminaries for the ensuing calculations. In Section \ref{sec:first-order} we deduce first-order properties of isoperimetric minimizing clusters (such as stationarity) in the weighted unbounded setting, as well as second-order stability. In Section \ref{sec:second-order}, we calculate the second variations of measure and weighted-perimeter under translations. 
In Section \ref{sec:MDI} we obtain a rigorous version of the matrix-valued differential inequality (\ref{eq:intro-MDI}) in both cases of the above-mentioned dichotomy. In Section \ref{sec:proof}, we conclude the proof of Theorem \ref{thm:main1} by employing a maximum-principle. In Section \ref{sec:uniqueness}, we establish Theorem \ref{thm:main-uniqueness} on the uniqueness of the isoperimetric minimizing clusters. Finally, in Section \ref{sec:conclude}, we provide some concluding remarks on extensions and additional results which will appear in \cite{EMilmanNeeman-GaussianMultiBubbleConj}. For brevity of notation, we omit the superscript $^{(2)}$ in all subsequent references to $I^{(2)}$, $I^{(2)}_m$ and $\Delta^{(2)}$ in this work. In addition, we use $\mathcal{C} = \{(1,2),(2,3),(3,1) \}$ to denote the set of positively oriented pairs in $\{1, 2, 3\}$. \medskip \noindent \textbf{Acknowledgement.} We thank Francesco Maggi for pointing us towards a simpler version of the maximum principle than our original one, Frank Morgan for many helpful references, and Brian White for informing us of the reference to the recent \cite{CES-RegularityOfMinimalSurfacesNearCones}. We also acknowledge the hospitality of MSRI where part of this work was conducted. \section{Model Tripod Configurations} \label{sec:model} In this section, we construct the model tripod clusters which are conjectured to be optimal on $\mathbb{R}^n$. It will be enough to construct them on $\mathbb{R}^2$, since by taking Cartesian product with $\mathbb{R}^{n-2}$ and employing the product structure of the Gaussian measure, these clusters extend to $\mathbb{R}^n$ for all $n \geq 2$. Actually, it will be convenient to construct them on $E$, rather than on $\mathbb{R}^2$, where recall that $E$ denotes the tangent space to $\Delta^{(2)}$, which we identify with $\{x \in \mathbb{R}^3: \sum_{i=1}^3 x_i = 0\}$. 
Consequently, in this section, let $\gamma = \gamma^2$ denote the standard two-dimensional Gaussian measure on $E$, and if $\Psi$ denotes its (smooth, Gaussian) density on $E$, set $\gamma^1 := \Psi \H^1$. \medskip Define $\Omega^m_i = \interior \{x \in E: \max_j x_j = x_i\}$ (``$m$'' stands for ``model''). For any $x \in E$, $x + \Omega^m = (x + \Omega^m_1, x + \Omega^m_2, x + \Omega^m_3)$ is a cluster, which we call a ``tripod" cluster (also referred to as a ``Y" or ``rotor" in the literature). Let $\Sigma^m_{ij} := \partial \Omega^m_i \cap \partial \Omega^m_j$ denote the interfaces of $\Omega^m$. Observe that $x + \Sigma^m_{ij}$, the interfaces of $x + \Omega^m$, are flat, and meet at $120^\circ$ angles at the single point $x$. We denote: \[ A^m_{ij}(x) := \gamma^1(x + \Sigma^m_{ij}) . \] \medskip We begin with some preparatory lemmas: \begin{lemma} \label{lem:model-sigma} For all distinct $i,j,k \in \{ 1,2,3\}$, let $n_{ij} = (e_j - e_i)/\sqrt 2$ and $t_{ij} = (e_i + e_j - 2 e_k) / \sqrt 6$. Then $n_{ij}$ and $t_{ij}$ form an orthonormal basis of $E$, and: \begin{equation} \label{eq:tripod-areas} A^m_{ij}(x) = \varphi(\inr{x}{n_{ij}}) (1 - \Phi(\inr{x}{t_{ij}})) \end{equation} \end{lemma} \begin{proof} It is immediate to check that $\Sigma^m_{ij}$ can be parametrized (with unit speed) as $\{a t_{ij}: a \ge 0\}$, and that $t_{ij}$ and $n_{ij}$ form an orthonormal basis of $E$. Consequently, for any $a \in \mathbb{R}$, $|a t_{ij} + x|^2 = (a + \inr{x}{t_{ij}})^2 + \inr{x}{n_{ij}}^2$. Hence, \begin{align*} \gamma^1(x + \Sigma_{ij}) &= \frac{1}{2\pi} \int_0^\infty e^{-|a t_{ij} + x|^2/2} \, da \notag \\ &= \varphi(\inr{x}{n_{ij}}) \int_0^\infty \varphi(a + \inr{x}{t_{ij}})\, da \notag \\ &= \varphi(\inr{x}{n_{ij}}) (1 - \Phi(\inr{x}{t_{ij}})). 
\end{align*} \end{proof} \begin{lemma} \label{lem:model-upper-lower} For all $x \in E$ and distinct $i,j,k \in \{ 1,2,3\}$: \begin{align} \label{eq:model-upper} \gamma(x + \Omega^m_i) &\leq \min\brac{1 - \Phi(x_i) , 1 - \Phi\brac{\frac{x_i-x_j}{\sqrt{2}}} , 1 - \Phi\brac{\frac{x_i-x_k}{\sqrt{2}}}} ~,\\ \label{eq:model-lower} \gamma(x + \Omega^m_i) & \geq \brac{1 - \Phi\brac{\frac{x_i-x_j}{\sqrt{2}}}}\brac{ 1 - \Phi\brac{\frac{x_i-x_k}{\sqrt{2}}} } ~. \end{align} \end{lemma} \begin{proof} For the upper bound, note that: \[ x + \Omega^m_i = \interior \{z \in E : \max_j (z_j - x_j) = z_i - x_i \} \subseteq \{z \in E : z_i - x_i > 0\}, \] where the last inclusion follows since $y = z-x\in E$ implies that $\max_j y_j \ge \frac{1}{3} \sum_{j=1}^3 y_j = 0$. Similarly, for any $j \neq i$: \[ x + \Omega^m_i = \interior \{z \in E : \max_j (z_j - x_j) = z_i - x_i \} \subseteq \{ z \in E : z_i - z_j > x_i - x_j \} . \] It remains to note that if $Z$ is distributed according to $\gamma$ on $E$, then $Z_i$ and $(Z_i - Z_j) / \sqrt{2}$ are distributed according to the standard Gaussian measure on $\mathbb{R}$, and (\ref{eq:model-upper}) follows. For the lower bound, note that: \[ x + \Omega^m_i = \{z: z_i - z_j > x_i - x_j\} \cap \{z: z_i - z_k > x_i - x_k \}. \] By the Gaussian FKG inequality~\cite{Pitt:82}, \[ \gamma(x + \Omega^m_i) \ge \gamma(\{z: z_i - z_j > x_i - x_j\}) \gamma(\{z: z_i - z_k > x_i - x_k\}) , \] and (\ref{eq:model-lower}) is established. \end{proof} \begin{lemma} \label{lem:V-diffeo} The map: \[ E \ni x \mapsto V(x) := \gamma(x + \Omega^m) \in \interior \Delta \] is a diffeomorphism between $E$ and $\interior \Delta$. Its differential is given by: \begin{equation}\label{eq:DV} D V = -\frac{1}{\sqrt 2} \brac{\begin{matrix} A^m_{12} + A^m_{31} & -A^m_{12} & -A^m_{31} \\ -A^m_{12} & A^m_{12} + A^m_{23} & -A^m_{23} \\ -A^m_{31} & -A^m_{23} & A^m_{23} + A^m_{31} \end{matrix}} = - \frac{1}{\sqrt 2} L_{A^m} . 
\end{equation} \end{lemma} \begin{proof} Clearly $V(x)$ is $C^\infty$, since the Gaussian density is $C^\infty$ and all of its derivatives vanish rapidly at infinity. To see that $V$ is injective, simply note that if $y \neq x$, then there exists $i \in \{1,2,3\}$ such that $y \in x + \Omega^m_i$, and therefore $y + \Omega^m_i \subsetneq x + \Omega^m_i$ and hence $\gamma(y + \Omega^m_i) < \gamma(x + \Omega^m_i)$. To show that $V$ is surjective, fix $R$ and consider the open triangle $K_R = \{x \in E: \max_i x_i < R\}$. The boundary of $K_R$ is made up of three line segments, each of the form $K_{R,i} := \{x \in E : x_i = R, x_j \le R, x_k \le R\}$. By (\ref{eq:model-upper}), we know that for any $x \in K_{R,i}$ we have $\gamma(x + \Omega^m_i) \le 1 - \Phi(R)$, i.e. that $V_i(x) \le 1 - \Phi(R)$ on $K_{R,i}$. It follows that for any $v \in \Delta_R := \set{ u \in \Delta : \min_i u_i > 1 - \Phi(R)}$, as $x$ runs clockwise along $\partial K_R$, $V(x)$ runs clockwise around $v$. By the invariance of domain theorem \cite{Munkres-TopologyBook2ndEd}, $V$ is a homeomorphism between the open triangle $K_R$ and $V(K_R)$, and since $V(\partial K_R) \cap \Delta_R = \emptyset$, it follows that $V(K_R)$ must contain $\Delta_R$. Since every $v \in \interior \Delta$ satisfies $v \in \Delta_R$ for some $R < \infty$, the surjectivity follows. Finally, we compute at $x \in E$: \[ \nabla_v V_i(x) = \int_{x + \Omega^m_i} \nabla_v e^{-W(y)} \, dy = \int_{x + \partial \Omega^m_i} \inr{v}{\mathbf{n}} e^{-W(y)}\, d\mathcal{H}^{n-1}(y), \] where $\mathbf{n}$ denotes the outward unit normal to $x + \partial \Omega^m_i$. But note that $\partial \Omega^m_i = \bigcup_{j \ne i} \Sigma_{ij}$, and that the outward unit normal is the constant vector-field $(e_j - e_i)/\sqrt{2}$ on $\Sigma_{ij}$. Consequently: \[ \nabla_v V_i(x) = \frac{1}{\sqrt 2} \sum_{j \ne i} \inr{v}{e_j - e_i} A^m_{ij}(x) , \] and (\ref{eq:DV}) is established.
Since each of the $A^m_{ij}(x)$ is strictly positive, $L_{A^m(x)}$ is non-singular (in fact, strictly positive-definite) as a quadratic form on $E$, and hence $DV(x)$ is non-singular as well. It follows that $V$ is indeed a diffeomorphism, concluding the proof. \end{proof} We can now give the following: \begin{definition} The model isoperimetric double-bubble profile $I_m : \interior \Delta \to \mathbb{R}$ is defined as: \[ I_m(v) := P_\gamma(x + \Omega^m) = A^m_{12}(x) + A^m_{23}(x) + A^m_{31}(x) \;\;\; \text{such that} \;\;\; \gamma(x + \Omega^m) = v . \] \end{definition} Lemma \ref{lem:V-diffeo} verifies that the above is well-defined. Thanks to the next lemma, we may (and do) extend $I_m$ by continuity to the entire $\Delta$. \begin{lemma}\label{lem:tripod-profile-continuous} $I_m$ is $C^\infty$ on $\interior \Delta$, and continuous up to $\partial \Delta$. Moreover, if $v^{h} \in \interior \Delta$ converge to $v \in \partial \Delta$ then $I_m(v^{h}) \to I^{(1)}_m(\max_i v_i) =: I_m(v)$. \end{lemma} \begin{proof} Since $I_m(v) = P_\gamma(V^{-1}(v) + \Omega^m)$ and both $V^{-1}$ and the map $x \mapsto P_\gamma(x + \Omega^m)$ are $C^\infty$ on their respective domains, it follows that $I_m$ is $C^\infty$ on $\interior \Delta$. Now suppose that $\interior \Delta \ni v^{h} \to v \in \partial \Delta$ and let $x^{h} = V^{-1}(v^{h})$. We first consider the case that $v$ is a corner of the simplex, in which case $v = (1, 0, 0)$ without loss of generality. Since $\gamma(x^{h} + \Omega_1^m) \to 1$, we see from~\eqref{eq:model-upper} that $x_1^{h} - x_2^{h} \to -\infty$ and $x_1^{h} - x_3^{h} \to -\infty$. In particular, $\inr{x^{h}}{n_{12}}$, $\inr{x^{h}}{n_{13}} \to \infty$, and since $t_{23} = \frac{1}{\sqrt{3}} (n_{12} + n_{13})$, it follows that $\inr{x^{h}}{t_{23}} \to \infty$. It follows from~\eqref{eq:tripod-areas} that $\gamma^1(x^{h} + \Sigma_{ij}) \to 0$ for every $i \ne j$, and so $I_m(v^{h}) \to 0 = I^{(1)}_m(1)$. 
Finally, consider the case that $v \in \partial \Delta$ but is not a corner. Without loss of generality, $v = (a, 1-a, 0)$ for some $a \in (0, 1)$. Then $\gamma(x^{h} + \Omega^m_1) \to a$ and $\gamma(x^{h} + \Omega^m_2) \to 1 - a$; it follows from~\eqref{eq:model-upper} that \[ \limsup_{h \to \infty} \frac{x_1^{h} - x_2^{h}}{\sqrt 2} \le \Phi^{-1}(1 - a) ~,~ \limsup_{h \to \infty} \frac{x_2^{h} - x_1^{h}}{\sqrt 2} \le \Phi^{-1}(a). \] Since $\Phi^{-1}(1 - a) = -\Phi^{-1}(a)$, and recalling the notation of Lemma \ref{lem:model-sigma}, we conclude that \begin{equation}\label{eq:12-boundary} \inr{x^{h}}{n_{12}} = \frac{x_2^{h} - x_1^{h}}{\sqrt 2} \to \Phi^{-1}(a). \end{equation} Now by (\ref{eq:model-lower}): \[ \gamma(x^{h} + \Omega^m_3) \ge (1 - \Phi(\inr{x^{h}}{n_{23}})) (1 - \Phi(\inr{x^{h}}{n_{13}})) , \] and since $\gamma(x^{h} + \Omega^m_3) \to 0$, at least one of $\inr{x^{h}}{n_{23}}$ or $\inr{x^{h}}{n_{13}}$ must diverge to $\infty$. But since $\inr{x^{h}}{n_{12}}$ converges to a finite limit by~\eqref{eq:12-boundary}, and since $n_{12} + n_{23} + n_{31} = 0$, it follows that both $\inr{x^{h}}{n_{23}}$ and $\inr{x^{h}}{n_{13}}$ must diverge to $\infty$. By~\eqref{eq:tripod-areas}, we conclude that $\gamma^1(x^{h} + \Sigma_{23}),\gamma^1(x^{h} + \Sigma_{13}) \to 0$. In addition, as $t_{12} = \frac{1}{\sqrt{3}} (n_{31} + n_{32})$, it follows that $\inr{x^{h}}{t_{12}} \to -\infty$. Together with~\eqref{eq:12-boundary}, the representation~\eqref{eq:tripod-areas} implies that $\gamma^1(x^{h} + \Sigma_{12}) \rightarrow \varphi(\Phi^{-1}(a)) = I^{(1)}_m(a)$. We thus confirm that $I_m(v^{h}) \to I^{(1)}_m(a) = I^{(1)}_m(1-a) = I^{(1)}_m(\max_j v_j)$, as asserted. \end{proof} We have constructed the model double-bubble clusters in $E \simeq \mathbb{R}^2$ for $v \in \interior \Delta$ (tripod clusters), and on $\mathbb{R}$ for $v \in \partial \Delta$ (halfline clusters).
Clearly, by taking Cartesian products with $\mathbb{R}^{n-2}$ and $\mathbb{R}^{n-1}$, respectively, and employing the product structure of the Gaussian measure, these clusters extend to $\mathbb{R}^n$ for all $n \geq 2$. Consequently, $I(v) \le I_m(v)$ for all $v \in \Delta$, and our goal in this work is to establish the converse inequality. \medskip To this end, we observe that $I_m$ satisfies a remarkable differential equation: \begin{proposition}\label{prop:I_m-equation} At any point in $\interior \Delta$: \[ \nabla I_m = \frac{1}{\sqrt 2} V^{-1} \text{ and } \nabla^2 I_m = - (L_{A^m \circ V^{-1}})^{-1} \] as tensors on $E$. \end{proposition} \begin{proof} Let $n_{ij} = (e_j - e_i)/\sqrt 2$ and $t_{ij} = (e_i + e_j - 2e_k)/\sqrt 6$ as in Lemma~\ref{lem:model-sigma}. Differentiating~\eqref{eq:tripod-areas}, and recalling that the one-dimensional Gaussian density $\varphi$ satisfies $\varphi'(x) = -x \varphi(x)$, we calculate: \begin{align*} \nabla A^m_{ij}(x) &= - n_{ij} \inr{x}{n_{ij}} A^m_{ij}(x) - t_{ij} \varphi(\inr{x}{n_{ij}}) \varphi(\inr{x}{t_{ij}}) \\ &= - n_{ij} \inr{x}{n_{ij}} A^m_{ij}(x) - t_{ij} \varphi_2(x), \end{align*} where $\varphi_2$ denotes the standard Gaussian density on $E$. Since $\sum_{(i, j) \in \mathcal{C}} t_{ij} = 0$, \[ \nabla \sum_{(i, j) \in \mathcal{C}} A^m_{ij}(x) = - \sum_{(i, j) \in \mathcal{C}} A^m_{ij}(x) n_{ij} n_{ij}^T x = -\frac 12 L_{A^m(x)} x. \] Since $I_m(v) = \sum_{(i, j) \in \mathcal{C}} A^m_{ij}(V^{-1}(v))$, the chain rule yields \[ \nabla I_m(v) = -\frac 12 L_{A^m(V^{-1}(v))} (D V)^{-1}(V^{-1}(v)) V^{-1}(v). \] But according to~\eqref{eq:DV}, $(DV)^{-1} = -\sqrt 2 L_{A^m}^{-1}$, and the first claim follows. The second claim follows by differentiating the first one, writing $D (V^{-1}) = (D V)^{-1} \circ V^{-1}$ and applying once again $(DV)^{-1} = -\sqrt 2 L_{A^m}^{-1}$. 
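As a sanity check (recorded for orientation only, and not used in the sequel), the one-dimensional halfline profile satisfies the scalar analogues of both identities: writing $I^{(1)}_m(a) = \varphi(\Phi^{-1}(a))$ and using $\varphi'(x) = -x \varphi(x)$ together with $(\Phi^{-1})'(a) = 1/\varphi(\Phi^{-1}(a))$, one computes:
\[ (I^{(1)}_m)'(a) = -\Phi^{-1}(a) \;\;\; \text{and} \;\;\; (I^{(1)}_m)''(a) = -\frac{1}{\varphi(\Phi^{-1}(a))} = -\frac{1}{I^{(1)}_m(a)} , \]
in direct analogy with $\nabla I_m = \frac{1}{\sqrt 2} V^{-1}$ and $\nabla^2 I_m = -(L_{A^m \circ V^{-1}})^{-1}$: the first derivative recovers the defining point, and minus the second derivative is the inverse of an interface-area quantity.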
\end{proof} \section{Definitions and Technical Preliminaries} \label{sec:prelim} We will be working in Euclidean space $(\mathbb{R}^n,\abs{\cdot})$ endowed with a measure $\gamma = \gamma^n$ having $C^\infty$-smooth and strictly-positive density $e^{-W}$ with respect to Lebesgue measure. We develop here the preliminaries we require for clusters $\Omega = (\Omega_1,\ldots,\Omega_k)$ with $k$ cells, for general $k \geq 3$, as this poses no additional difficulty over the case $k=3$. Recall that the cells $\Omega_i$ are assumed to be Borel, pairwise disjoint, and satisfy $\gamma(\mathbb{R}^n \setminus \cup_{i=1}^k \Omega_i) = 0$. In addition, they are assumed to have finite $\gamma$-weighted perimeter $P_\gamma(\Omega_i) < \infty$, to be defined below. We denote by $\mathcal{T}$ the collection of all cyclically ordered triplets: \[ \mathcal{T} := \{ \{ (a,b) , (b,c) , (c,a) \} : 1 \leq a < b < c \leq k\} . \] \medskip We write $\div X$ to denote divergence of a smooth vector-field $X$, and $\div_\gamma X$ to denote its weighted divergence: \[ \div_{\gamma} X := \div X - \nabla_X W . \] Given a unit vector $\mathbf{n}$, we write $\div_{\mathbf{n}^{\perp}} X$ to denote $\div X - \inr{\mathbf{n}}{\nabla_\mathbf{n} X}$, and set its weighted counterpart to be: \[ \div_{\mathbf{n}^{\perp},\gamma} X = \div_{\mathbf{n}^{\perp}} X - \nabla_X W. \] For a smooth hypersurface $\Sigma \subset \mathbb{R}^n$ co-oriented by a unit normal vector-field $\mathbf{n}$, let $H_\Sigma: \Sigma \to \mathbb{R}$ denote its mean-curvature, i.e. the sum of its principal curvatures (equivalently, the trace of its second fundamental form). Its weighted mean-curvature $H_{\Sigma,\gamma}$ is defined as: \[ H_{\Sigma,\gamma} := H_{\Sigma} - \nabla_\mathbf{n} W . \] We write $\div_\Sigma X$ for the surface divergence of a vector-field $X$ defined on $\Sigma$, i.e.
$\sum_{i=1}^{n-1} \scalar{\tau_i,\nabla_{\tau_i} X}$ where $\tau_i$ is a local orthonormal frame on $\Sigma$; this coincides with $\div_{\mathbf{n}^{\perp}} X$ for any smooth extension of $X$ to a neighborhood of $\Sigma$. The weighted surface divergence $\div_{\Sigma,\gamma}$ is defined as: \[ \div_{\Sigma,\gamma} X = \div_{\Sigma} X - \nabla_X W. \] Note that $\div_{\Sigma} \mathbf{n} = H_{\Sigma}$ and $\div_{\Sigma,\gamma} \mathbf{n} = H_{\Sigma,\gamma}$. We will also abbreviate $\inr{X}{\mathbf{n}}$ by $X^\mathbf{n}$, and we will write $X^\mathbf{t}$ for the tangential part of $X$, i.e. $X - X^{\mathbf{n}} \mathbf{n}$. \medskip Given a Borel set $U \subset \mathbb{R}^n$ with locally-finite perimeter, its reduced boundary $\partial^* U$ is defined (see e.g. \cite[Chapter 15]{MaggiBook}) as the subset of $\partial U$ for which there is a uniquely defined outer unit normal vector to $U$ in a measure theoretic sense. While the precise definition will not play a role in this work, we provide it for completeness. The set $U$ is said to have locally-finite (unweighted) perimeter, if for any compact subset $K \subset \mathbb{R}^n$ we have: \[ \sup \left\{\int_{U} \div X \; dx : X \in C_c^\infty(\mathbb{R}^n; T \mathbb{R}^n) \; ,\; \text{supp}(X) \subset K \; , \; |X| \le 1 \right\} < \infty . \] With any Borel set with locally-finite perimeter one may associate a vector-valued Radon measure $\mu_U$ on $\mathbb{R}^n$ so that: \[ \int_U \div X \; dx = \int_{\mathbb{R}^n} \inr{X}{d\mu_U} \;\;\; \forall X \in C_c^\infty(\mathbb{R}^n; T \mathbb{R}^n) . 
\] The reduced boundary $\partial^* U$ of a set $U$ with locally-finite perimeter is defined as the collection of $x \in \text{supp } \mu_U$ so that the vector limit: \[ \mathbf{n}_U := \lim_{\epsilon \to 0+} \frac{\mu_U(B(x,\epsilon))}{\abs{\mu_U}(B(x,\epsilon))} \] exists and has length $1$ (here $\abs{\mu_U}$ denotes the total-variation of $\mu_U$ and $B(x,\epsilon)$ is the open Euclidean ball of radius $\epsilon$ centered at $x$). When the context is clear, we will abbreviate $\mathbf{n}_U$ by $\mathbf{n}$. It is known that $\partial^* U$ is a Borel subset of $\partial U$ and that $\abs{\mu_U}(\mathbb{R}^n \setminus \partial^* U) = 0$. The relative perimeter of $U$ in a Borel set $F$, denoted $P(U ; F)$, is defined as $\abs{\mu_U}(F)$. Recall that the $\gamma$-weighted perimeter of $U$ was defined in the Introduction as: \[ P_\gamma(U) := \sup \left\{\int_U \div_\gamma X \, d\gamma: X \in C_c^\infty(\mathbb{R}^n; T \mathbb{R}^n), |X| \le 1 \right\}. \] Clearly, if $U$ has finite weighted-perimeter $P_\gamma(U) < \infty$, it has locally-finite (unweighted) perimeter. It is known \cite[Theorem 15.9]{MaggiBook} that in that case: \[ P_\gamma(U) = \gamma^{n-1}(\partial^* U) , \] where as usual: \[ \gamma^{n-1} := e^{-W} \H^{n-1} . \] In addition, by the Gauss--Green--De Giorgi theorem, the following integration by parts formula holds for any $C_c^1$ vector-field $X$ on $\mathbb{R}^n$, and Borel subset $U \subset \mathbb{R}^n$ with $P_\gamma(U) < \infty$ (the proof in \cite[Theorem 15.9]{MaggiBook} immediately carries over to the weighted setting): \begin{equation}\label{eq:integration-by-parts} \int_U \div_\gamma X \, d\gamma^n = \int_{\partial^* U} X^\mathbf{n} \, d\gamma^{n-1} . 
\end{equation} \subsection{Volume and Perimeter Regular Sets} \begin{definition}[Admissible Vector-Fields] A vector-field $X$ on $\mathbb{R}^n$ is called \emph{admissible} if it is $C^\infty$-smooth and satisfies: \begin{equation} \label{eq:field-bdd} \forall i \geq 0 \;\;\; \max_{x \in \mathbb{R}^n} \norm{\nabla^i X(x)} \leq C_i < \infty . \end{equation} \end{definition} \noindent Any smooth vector-field which is compactly-supported is clearly admissible, but we will need to use more general vector-fields in this work. Let $F_t$ denote the associated flow along an admissible vector-field $X$, defined as the family of maps $\set{F_t : \mathbb{R}^n \to \mathbb{R}^n}$ solving the following ODE: \[ \frac{d}{dt} F_t(x) = X \circ F_t(x) ~,~ F_0(x) = x . \] It is well-known that a unique smooth solution in $t \in \mathbb{R}$ exists for all $x \in \mathbb{R}^n$, and that the resulting maps $F_t : \mathbb{R}^n \rightarrow \mathbb{R}^n$ are $C^\infty$ diffeomorphisms, so that the partial derivatives in $t$ and $x$ of any fixed order are uniformly bounded in $(x,t) \in \mathbb{R}^n \times [-T,T]$, for any fixed $T > 0$. Note that if $\Omega=(\Omega_1,\ldots,\Omega_k)$ is a cluster then so is $F_t(\Omega) = (F_t(\Omega_1),\ldots,F_t(\Omega_k))$, since its cells remain Borel, pairwise disjoint, as well as of $\gamma$-weighted finite-perimeter and satisfying $\gamma(\mathbb{R}^n \setminus \cup_i F_t(\Omega_i)) = \gamma(F_t(\mathbb{R}^n \setminus \cup_i \Omega_i)) = 0$ since $F_t$ is a Lipschitz map. We define the $r$-th variations of weighted volume and perimeter of $\Omega$ as: \begin{align*} \delta_X^r V(\Omega) &:= \left. \brac{\frac{d}{dt}}^r\right|_{t=0} \gamma(F_t(\Omega)) ,\\ \delta_X^r A(\Omega) &:= \left. \brac{\frac{d}{dt}}^r \right|_{t=0} P_\gamma(F_t(\Omega)) , \end{align*} whenever the right-hand sides exist. When $\Omega$ is clear from the context, we will simply write $\delta_X^r V$ and $\delta_X^r A$; when $r = 1$, we will write $\delta_X V$ and $\delta_X A$. 
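For example (a simple illustration, not required in the sequel), any constant field $X \equiv w$ with $w \in \mathbb{R}^n$ is admissible: one may take $C_0 = \abs{w}$ and $C_i = 0$ for all $i \geq 1$ in~\eqref{eq:field-bdd}, and the associated flow is the translation
\[ F_t(x) = x + t w , \;\;\; \text{so that} \;\;\; \delta_w^r V(\Omega) = \left. \brac{\frac{d}{dt}}^r \right|_{t=0} \gamma(\Omega + t w) . \]
By contrast, a linear field $X(x) = A x$ with $A \neq 0$ is not admissible, since $\abs{X}$ is unbounded.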
\smallskip It will be of crucial importance for us in this work to calculate the first and second variations of weighted volume and perimeter for \textbf{non-}compactly supported (albeit simple) vector-fields, for which even the existence of $\delta_X^r V(\Omega)$ and especially $\delta_X^r A(\Omega)$ is not immediately clear. Indeed, even for the case of the standard Gaussian measure, the derivatives of its density are asymptotically larger at infinity than the Gaussian density itself. We consequently introduce the following: \begin{definition}[Volume / Perimeter Regular Set] A Borel set $U$ is said to be \emph{volume regular} with respect to the measure $\gamma$ (which, recall, has density $e^{-W}$ with respect to Lebesgue) if: \[ \forall i,j \geq 0 \;\;\; \exists \delta > 0 \;\; \int_{U} \sup_{z \in B(x,\delta)} \norm{\nabla^i W(z)}^j e^{-W(z)} dx < \infty ~,~ \] It is said to be \emph{perimeter regular} with respect to the measure $\gamma$ if: \[ \forall i,j \geq 0 \;\;\; \exists \delta > 0 \;\; \int_{\partial^* U} \sup_{z \in B(x,\delta)} \norm{\nabla^i W(z)}^j e^{-W(z)} d\H^{n-1}(x) < \infty . \] If $\delta > 0$ above may be chosen uniformly for all $i,j \geq 0$, $U$ is called \emph{uniformly} volume / perimeter regular. \end{definition} \noindent Here $\norm{\cdot}$ denotes the Hilbert-Schmidt norm of a tensor, defined as the square-root of the sum of squares of its coordinates in any local orthonormal frame. Note that volume (perimeter) regular sets clearly have finite weighted volume (perimeter). \medskip Write $JF_t = \text{det}(dF_t)$ for the Jacobian of $F_t$, and observe that by the change-of-variables formula for smooth injective functions: \begin{equation} \label{eq:Jac-vol} \gamma(F_t(U)) = \int_{U} J F_t e^{-W \circ F_t} \, dx, \end{equation} for any Borel set $U$. 
Similarly, if $U$ is in addition of locally finite-perimeter, let $\Phi_t = F_t|_{\partial^* U}$ and write $J \Phi_t = \text{det}((d_{n_U^{\perp}} F_t)^T d_{n_U^{\perp}} F_t)^{1/2}$ for the Jacobian of $\Phi_t$ on $\partial^* U$. Since $\partial^* U$ is locally $\H^{n-1}$-rectifiable, \cite[Theorem 11.6]{MaggiBook} implies: \begin{equation} \label{eq:Jac-area} \gamma^{n-1}(F_t(\partial^* U)) = \int_{\partial^* U} J \Phi_t e^{-W \circ F_t} \, d\H^{n-1} . \end{equation} \begin{lemma} \label{lem:regular} \hfill \begin{enumerate} \item If $U$ is volume regular with respect to $\gamma$ then for any $r \geq 1$, $t \mapsto \gamma(F_t(U))$ is $C^r$ in an open neighborhood of $t=0$, and: \[ \delta_X^r V(U) = \int_{U} \frac{d^r}{(dt)^r} (J F_t e^{-W \circ F_t}) dx . \] Furthermore, if $U$ is uniformly volume regular then there exists an open neighborhood of $t=0$ where $t \mapsto \gamma(F_t(U))$ is $C^\infty$. \item If $U$ is perimeter regular with respect to $\gamma$ then for any $r \geq 1$, $t \mapsto P_\gamma(F_t(U))$ is $C^r$ in an open neighborhood of $t=0$, and: \[ \delta_X^r A(U) = \int_{\partial^* U} \frac{d^r}{(dt)^r} (J \Phi_t e^{-W \circ F_t}) dx . \] Furthermore, if $U$ is uniformly perimeter regular then there exists an open neighborhood of $t=0$ where $t \mapsto P_\gamma(F_t(U))$ is $C^\infty$. \end{enumerate} \end{lemma} For the proof of the second part, we require a simple: \begin{lemma} \label{lem:F-reduced} For any Borel set $U \subset \mathbb{R}^n$ with $P_\gamma(U) < \infty$ and diffeomorphism $F : \mathbb{R}^n \rightarrow \mathbb{R}^n$: \[ \gamma^{n-1}(\partial^* F(U)) = \gamma^{n-1}(F(\partial^* U)) . \] \end{lemma} \begin{proof} This follows from~\cite[Proposition 17.1]{MaggiBook}, along with the fact that $\gamma^{n-1}$ is absolutely continuous with respect to $\mathcal{H}^{n-1}$. 
\end{proof} \begin{proof}[Proof of Lemma \ref{lem:regular}] In view of Lemma \ref{lem:F-reduced}, our task is to justify differentiating inside the integral representations (\ref{eq:Jac-vol}) and (\ref{eq:Jac-area}). Taking difference quotients, applying Taylor's theorem with Lagrange remainder and induction on $r$, it is enough to establish, by the Dominated Convergence Theorem, that for some $\epsilon > 0$: \begin{align} \label{eq:dominant1} & \int_U \sup_{t \in [-\epsilon,\epsilon]} \abs{\frac{d^r}{(dt)^r} (J F_t(x) e^{-W(F_t(x))})} dx < \infty ~,\\ \label{eq:dominant2} & \int_{\partial^* U} \sup_{t \in [-\epsilon,\epsilon]} \abs{\frac{d^r}{(dt)^r} (J \Phi_t(x) e^{-W(F_t(x))})} d\H^{n-1}(x) < \infty . \end{align} By the Leibniz product rule, for $\mathcal{F} = F,\Phi$: \[ \frac{d^r}{(dt)^r} (J \mathcal{F}_t(x) e^{-W(F_t(x))}) = \sum_{p+q=r} {r \choose p} \frac{d^p}{(dt)^p} J \mathcal{F}_t(x) \frac{d^{q}}{(dt)^{q}} e^{-W(F_t(x))} . \] For each $x$, $t \mapsto J \mathcal{F}_t(x)$ is a smooth function of the differential $d F_t(x)$ (note that for $\mathcal{F} = \Phi$, the normal $\mathbf{n}_{U}(x)$ remains fixed for all $t$). This differential satisfies: \[ \frac{d}{dt} dF_t(x) = \nabla X(F_t(x)) dF_t(x) , \] and since $X$ satisfies (\ref{eq:field-bdd}), it follows that $\sup_{t \in [-\epsilon,\epsilon]} \abs{\frac{d^p}{(dt)^p} J \mathcal{F}_t(x)}$ is uniformly bounded in $x \in \mathbb{R}^n$ for all fixed $\epsilon > 0$ and $p$. It remains to handle the $\frac{d^{q}}{(dt)^{q}} e^{-W(F_t(x))}$ term. Repeated differentiation and application of the chain rule results in a polynomial expression in $\nabla^a W$ and $\nabla^b X$ times $e^{-W}$, evaluated at $F_t(x)$, where the degree of the polynomial, as well as $a$ and $b$, are bounded by a function of $q$. Since $X$ satisfies (\ref{eq:field-bdd}), we may bound the magnitudes of $\nabla^b X$ by a constant depending on $q$.
Now let $\delta > 0$ be the minimum of the $\delta$'s given by the assumption that $U$ is volume and perimeter regular with respect to $\gamma$, over the finitely many pairs $i,j$ required below (whose size is bounded in terms of $r$). Since $\abs{F_t(x) - x} \leq \abs{t} \max_{y \in \mathbb{R}^n} \abs{X(y)}$, we may find $\epsilon > 0$ so that $\abs{F_t(x) - x} \leq \delta$ for all $t \in [-\epsilon,\epsilon]$, uniformly in $x \in \mathbb{R}^n$. It follows that for an appropriate constant $D_r >0$ and appropriate $i,j \geq 0$ (depending on $r$): \[ \sup_{t \in [-\epsilon,\epsilon]} \abs{\frac{d^r}{(dt)^r} (J \mathcal{F}_t(x) e^{-W(F_t(x))})} \leq D_r \sup_{z \in B(x,\delta)} \norm{\nabla^i W(z)}^j e^{-W(z)} , \] and so the required (\ref{eq:dominant1}) and (\ref{eq:dominant2}) are established by definition of a regular set. Finally, when the set is assumed to be uniformly regular, $\delta > 0$ and hence $\epsilon > 0$ above may be chosen uniformly in $r \geq 1$, and $C^\infty$ smoothness is established for $t \in (-\epsilon,\epsilon)$. \end{proof} \subsection{Cutoff Function} The following lemma will be very useful in this work for calculating first and second variations along non-compactly-supported vector-fields. In several of these instances the vector-field will not be bounded in magnitude, and so we formulate it quite generally. \begin{definition}[Cutoff function $\eta_R$] Given $R > 0$, we denote by $\eta_R : \mathbb{R}^n \rightarrow [0,1]$ a smooth compactly-supported cutoff function on $\mathbb{R}^n$ with $\eta_R(x) = 1$ for $\abs{x} \leq R$ and $\abs{\nabla \eta_R} \leq 1$. \end{definition} \noindent We also denote by $B_R$ the open Euclidean ball of radius $R>0$ centered at the origin. \begin{lemma} \label{lem:cutoff} \hfill \begin{enumerate} \item Let $X$ denote a $C^1$ vector-field on $\mathbb{R}^n$ so that $\abs{X}, \abs{\nabla X} \leq P(\abs{\nabla W},\ldots,\norm{\nabla^p W})$ for some real-valued polynomial $P$ and $p \geq 1$. Assume that $U$ is volume regular with respect to the measure $\gamma$. Then: \[ \int_U \div_\gamma X \; d\gamma = \lim_{R \rightarrow \infty} \int_U \div_{\gamma}( \eta_R X) \; d\gamma .
\] \item Let $X$ denote a $C^1$ vector-field on a smooth hypersurface $\Sigma \subset \mathbb{R}^n$ so that $\abs{X}, \abs{\nabla_{\Sigma} X} \leq P(\abs{\nabla W},\ldots,\norm{\nabla^p W})$ for some real-valued polynomial $P$ and $p \geq 1$. Assume that $\Sigma \subset \partial^*U$ with $U$ being perimeter regular with respect to the measure $\gamma$. Then: \[ \int_{\Sigma} \div_{\Sigma,\gamma} X \; d\gamma^{n-1} = \lim_{R \rightarrow \infty} \int_\Sigma \div_{\Sigma,\gamma}( \eta_R X) \; d\gamma^{n-1} . \] \end{enumerate} \end{lemma} \begin{proof} We will prove the first part; the proof of the second one is identical. Write: \[ \int_U \div_{\gamma} X d\gamma = \int_U \div_\gamma (\eta_R X) d\gamma - \int_U \scalar{\nabla \eta_R,X} d\gamma + \int_U (1-\eta_R) \div_{\gamma} X d\gamma . \] Next, note that: \[ \abs{\div_\gamma X} \leq \abs{\div X} + \abs{\nabla_X W} \leq Q(\abs{\nabla W},\ldots,\norm{\nabla^p W}) ~,~ \abs{\scalar{\nabla \eta_R,X}} \leq P(\abs{\nabla W},\ldots,\norm{\nabla^p W}) , \] for an appropriate polynomial $Q$. Consequently: \begin{align*} \abs{\int_U (1-\eta_R) \div_{\gamma} X d\gamma} & \leq \int_{U \setminus B_R} Q(\abs{\nabla W},\ldots,\norm{\nabla^p W}) d\gamma ~, \\ \abs{\int_U \scalar{\nabla \eta_R,X} d\gamma} & \leq \int_{U \setminus B_R} P(\abs{\nabla W},\ldots,\norm{\nabla^p W}) d\gamma ~, \end{align*} and volume regularity implies that both of these terms go to zero as $R \rightarrow \infty$, yielding the claim. \end{proof} \subsection{First Variation of Weighted Volume and Perimeter} We now derive formulas for the first variations of (weighted) volume and perimeter. While these are well-known for sets with smooth boundaries and compactly supported vector-fields (see e.g. \cite{RCBMIsopInqsForLogConvexDensities}), we put special emphasis on not assuming anything on the set nor vector-field beyond what we need for the proof, as this will be important for us in the sequel.
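To make the hypotheses of Lemma \ref{lem:cutoff} concrete, consider the standard Gaussian measure (an illustration only), for which $W(x) = \frac{\abs{x}^2}{2} + \frac{n}{2} \log(2\pi)$ and hence $\nabla W(x) = x$, so that:
\[ \div_\gamma X = \div X - \inr{x}{X} . \]
In particular, a constant field $X \equiv w$ satisfies $\div_\gamma X = -\inr{x}{w}$ as well as $\abs{X}, \abs{\nabla X} \leq \abs{w}$, and so Lemma \ref{lem:cutoff} applies to it (with the constant polynomial $P \equiv \abs{w}$) on any volume regular set $U$.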
\begin{proposition}\label{prop:first-variation} Let $X$ be an admissible vector-field on $\mathbb{R}^n$, and let $U \subset \mathbb{R}^n$ denote a Borel subset. \\ \begin{enumerate} \item If $U$ is volume regular with respect to the measure $\gamma$ and satisfies $P_\gamma(U) < \infty$, then: \begin{equation}\label{eq:formula-first-variation-of-volume} \delta_X V(U) = \int_{\partial^* U} X^\mathbf{n} \, d\gamma^{n-1} . \end{equation} \item If $U$ is perimeter regular with respect to the measure $\gamma$ then: \[ \delta_X A(U) = \int_{\partial^* U} \div_{\mathbf{n}_U^{\perp},\gamma} X d\gamma^{n-1} . \] If in addition $\partial^* U = \Sigma \cupdot \Xi$ where $\Sigma$ is a smooth hypersurface and $\H^{n-1}(\Xi) = 0$, then: \begin{equation}\label{eq:formula-first-variation-of-area-before} \delta_X A(U) = \int_{\Sigma} (H_{\Sigma,\gamma} X^\mathbf{n} + \div_{\Sigma,\gamma} X^\mathbf{t}) \, d\gamma^{n-1}. \end{equation} \end{enumerate} \end{proposition} \begin{proof}Let $F_t$ be the flow along $X$. Under the assumptions of the first assertion, Lemma \ref{lem:regular} implies: \[ \delta_X V(U) = \int_U \left . \frac{d}{dt} \right |_{t=0} (J F_t e^{-W \circ F_t}) dx . \] It is well-known (e.g. \cite[(2.13)]{SternbergZumbrun}) that $\frac{d}{dt} |_{t=0} J F_t = \div X$, and therefore: \[ \delta_X V(U) = \int_U \brac{\div X - \nabla_X W} d\gamma = \int_{U} \div_\gamma X \, d\gamma . \] In order to apply integration-by-parts, we need to first make $X$ compactly supported. 
Applying Lemma \ref{lem:cutoff} and integrating by parts the compactly-supported vector-field $\eta_R X$ using~\eqref{eq:integration-by-parts}, we deduce: \[ \int_U \div_{\gamma} X d\gamma = \lim_{R \rightarrow \infty} \int_U \div_\gamma (\eta_R X) d\gamma = \lim_{R \rightarrow \infty} \int_{\partial^* U} \eta_R X^{\mathbf{n}} d\gamma^{n-1} = \int_{\partial^* U} X^{\mathbf{n}} d\gamma^{n-1} , \] where the last equality follows by Dominated Convergence since $X$ is uniformly bounded and $P_\gamma(U) < \infty$. For the second assertion, set as usual $\Phi_t = F_t|_{\partial^* U}$ and recall that $J \Phi_t = \text{det}((d_{\mathbf{n}_{U}^{\perp}} F_t)^T d_{\mathbf{n}_{U}^{\perp}} F_t)^{1/2}$ is the Jacobian of $\Phi_t$. Under the assumptions of the second assertion, Lemma \ref{lem:regular} implies: \[ \delta_X A(U) = \int_{\partial^* U} \left . \frac{d}{dt} \right |_{t=0} (J \Phi_t e^{-W \circ F_t}) d\H^{n-1}(x) . \] It is well-known (e.g. \cite[(2.16)]{SternbergZumbrun}) that $\frac{d}{dt} |_{t=0} J \Phi_t = \div_{\mathbf{n}_U^\perp} X$, and hence: \[ \delta_X A(U) = \int_{\partial^* U} \brac{\div_{\mathbf{n}_U^\perp} X - \nabla_X W} d\gamma^{n-1} = \int_{\partial^* U} \div_{\mathbf{n}_U^\perp,\gamma} X \, d\gamma^{n-1} . \] To establish the last claim, continue as follows, using $\H^{n-1}(\Xi) = 0$ for the first equality: \begin{align*} \delta_X A(U) & = \int_{\Sigma} \div_{\Sigma,\gamma} X \, d\gamma^{n-1} \notag \\ & = \int_{\Sigma} \brac{\div_{\Sigma,\gamma} X^\mathbf{t} + \div_{\Sigma,\gamma} (X^\mathbf{n} \mathbf{n}) } d\gamma^{n-1} \notag \\ &= \int_{\Sigma} \brac{\div_{\Sigma,\gamma} X^\mathbf{t} + H_{\Sigma,\gamma} X^\mathbf{n} } d\gamma^{n-1}. \end{align*} \end{proof} \section{Isoperimetric Minimizing Clusters} \label{sec:first-order} Given a cluster $\Omega = (\Omega_1,\ldots,\Omega_k)$, we define the interface between the $i$-th and $j$-th cells ($i \neq j$) as: \[ \Sigma_{ij} = \Sigma_{ij}(\Omega) := \partial^* \Omega_i \cap \partial^* \Omega_j , \] and set: \[ A_{ij} = A_{ij}(\Omega) := \gamma^{n-1}(\Sigma_{ij}).
\] It is standard to show (see \cite[Exercise 29.7, (29.8)]{MaggiBook}) that for any $S \subset \set{1,\ldots,k}$: \begin{equation} \label{eq:nothing-lost} \H^{n-1}\brac{\partial^*(\cup_{i \in S} \Omega_i) \setminus \cup_{i \in S , j \notin S} \Sigma_{ij}} = 0 . \end{equation} In particular: \[ P_\gamma(\Omega_i) = \sum_{j \neq i} A_{ij}(\Omega) , \] and hence: \[ P_\gamma(\Omega) = \frac{1}{2} \sum_{i=1}^k P_\gamma(\Omega_i) = \sum_{i < j} A_{ij}(\Omega) . \] \medskip A cluster $\Omega$ is called an \emph{isoperimetric minimizer} if $P_\gamma(\Omega') \ge P_\gamma(\Omega)$ for every other cluster $\Omega'$ satisfying $\gamma(\Omega') = \gamma(\Omega)$. The following theorem, due to Almgren \cite{AlmgrenMemoirs} (see also the exposition by Morgan \cite[Chapter 13]{MorganBook5Ed} and simplified presentation by Maggi \cite[Chapters 29-30]{MaggiBook}), summarizes the results we will need from Geometric Measure Theory on the existence and regularity of minimizing clusters: \begin{theorem}[Almgren] \label{thm:Almgren} Let $(\mathbb{R}^n,\abs{\cdot},\gamma = e^{-W} dx)$ with $W \in C^\infty(\mathbb{R}^n)$ so that $\gamma$ is a probability measure. Let $\Delta = \{ v \in \mathbb{R}^k : v_i \geq 0 , \sum_{i=1}^k v_i = 1 \}$. \begin{enumerate}[(i)] \item For any prescribed $v \in \Delta$, an isoperimetric minimizing $k$-cluster $\Omega$ satisfying $\gamma(\Omega) = v$ exists. \item Moreover, $\Omega$ may be chosen so that all of its cells are open, and for every $i$, $\gamma^{n-1}(\partial \Omega_i \setminus \partial^* \Omega_i) = 0$. In particular, for all $i \neq j$, $\gamma^{n-1}(\partial \Omega_i \cap \partial \Omega_j \setminus \Sigma_{ij}) = 0$. \item For all $i \neq j$ the interfaces $\Sigma_{ij}$ are $C^\infty$ smooth $(n-1)$-dimensional submanifolds, relatively open in $\partial \Omega_i \cap \partial \Omega_j$, and for every $x \in \Sigma_{ij}$ there exists $\epsilon > 0$ such that $B(x,\epsilon) \cap \Omega_l = \emptyset$ for all $l \neq i,j$. 
\end{enumerate} \end{theorem} \begin{proof} \hfill \begin{enumerate}[(i)] \item It is well-known (e.g.~\cite[Proposition~12.15]{MaggiBook}) that the (weighted) perimeter is lower semi-continuous with respect to (weighted) $L^1$ convergence: if $U^r \to U$ in $L^1(\gamma)$ then $\liminf_r P_\gamma(U^r) \ge P_\gamma(U)$ for all Borel sets $U^r$ of finite (weighted) perimeter. Clearly, the same applies to clusters, where $L^1(\gamma)$ convergence is understood for each of the individual cells. It is also well-known that, since $\gamma$ has finite mass, the set $\set{ U \in \mathcal{B}(\mathbb{R}^n) : P_\gamma(U) \leq C}$ (where $\mathcal{B}(\mathbb{R}^n)$ denotes the collection of Borel subsets of $\mathbb{R}^n$) is compact in $L^1(\gamma)$ -- for bounded sets, this follows from ~\cite[Theorem~12.26]{MaggiBook}, and the general case follows by truncation with a large ball and a standard diagonalization argument (see e.g. \cite[Theorem 2.1]{RitoreRosalesMinimizersInEulideanCones}). Set $I(v) := \inf \set{ P_\gamma(\Omega) : \text{$\Omega$ is a $k$-cluster with }\gamma(\Omega) = v}$. As the latter set is clearly non-empty, obviously $I(v) < \infty$. Given $v \in \Delta$, let $\Omega^r$ be a sequence of $k$-clusters with $\gamma(\Omega^r) = v$ and $P_\gamma(\Omega^r) \rightarrow I(v)$. As $P_\gamma(\Omega^r_i) \leq P_\gamma(\Omega^r) \leq I(v)+1$ for large enough $r$, by passing to a subsequence, it follows that each of the cells $\Omega^r_i$ converges in $L^1(\gamma)$ to $\Omega_i$. By Dominated Convergence (as the total mass is finite), we must have $\gamma(\Omega_i) = v_i$, and the limiting $\Omega$ is easily seen to be a cluster (possibly after a measure-zero modification to ensure disjointness of the cells). It follows by lower semi-continuity that: \[ I(v) \le P_\gamma(\Omega) \le \liminf_{r \to \infty} P_\gamma(\Omega^r) = I(v), \] and consequently $P_\gamma(\Omega) = I(v)$. Hence $\Omega$ is a minimizing cluster with $\gamma(\Omega) = v$. 
Note that the proof is much simpler than the one in the unweighted setting, where the total mass is infinite, but on the other hand perimeter and volume are translation-invariant. \item That $\Omega$ may be chosen so that $\gamma^{n-1}(\partial \Omega_i \setminus \partial^* \Omega_i) = 0$ for all $i$ follows from \cite[Theorem 30.1]{MaggiBook}; the proof carries over to the weighted setting (see below). In particular, the topological boundary of each cell has zero $\gamma^n$-measure, and so by replacing each cell by its interior, we do not change its measure nor its reduced boundary (and hence its $\gamma$-weighted perimeter), so $\Omega$ remains an isoperimetric minimizer. As this operation can only reduce the topological boundary, it still holds that $\gamma^{n-1}(\partial \Omega_i \setminus \partial^* \Omega_i) = 0$ for all $i$. In particular, since: \[ \partial \Omega_i \cap \partial \Omega_j \setminus [\partial^* \Omega_i \cap \partial^* \Omega_j] \subset [\partial \Omega_i \setminus \partial^* \Omega_i] \cup [\partial \Omega_j \setminus \partial^* \Omega_j] , \] the final assertion follows. \item The assertions follow from \cite[Theorem 30.1, Lemma 30.2 and Corollary 3.3]{MaggiBook}, whose proofs carry over to the weighted setting. Indeed, all of the arguments used in those proofs are local in nature, and so as long as our density $e^{-W}$ is $C^{\infty}$-smooth and positive, and hence locally bounded above and below (away from zero), the proofs of $\gamma^{n-1}(\partial \Omega_i \setminus \partial^* \Omega_i) = 0$, of the relative openness of $\Sigma_{ij}$ in $\partial \Omega_i \cap \partial \Omega_j$, and of the disjointness of $B(x,\epsilon)$ from $\Omega_\ell$ carry over by adjusting constants (which are local). By \cite[Corollary 3.3 and Remark 3.4]{MaggiBook}, we also know that for any $x \in \Sigma_{ij}$ there exists $r_x > 0$ so that $\Omega_i$ and $\Omega_j$ are measure-constrained weighted-perimeter minimizers in $B(x,r_x)$. 
Consequently, by regularity theory for volume-constrained perimeter minimizers (see Morgan \cite[Section 3.10]{MorganRegularityOfMinimizers} for an adaptation to the weighted Riemannian setting), since our density is $C^{\infty}$-smooth, it follows that $\Sigma_{ij} \cap B(x,r_x) = \partial \Omega_i \cap \partial \Omega_j \cap B(x,r_x)$ is a $C^{\infty}$-smooth $(n-1)$-dimensional submanifold. \end{enumerate} \end{proof} Given a minimizing cluster with (smooth) interfaces $\Sigma_{ij}$, let $\mathbf{n}_{ij}$ be the unit normal field along $\Sigma_{ij}$ that points from $\Omega_i$ to $\Omega_j$. We use $\mathbf{n}_{ij}$ to co-orient $\Sigma_{ij}$, and since $\mathbf{n}_{ij} = -\mathbf{n}_{ji}$, note that $\Sigma_{ij}$ and $\Sigma_{ji}$ have opposite orientations. When $i$ and $j$ are clear from the context, we will simply write $\mathbf{n}$. We will typically abbreviate $H_{\Sigma_{ij}}$ and $H_{\Sigma_{ij},\gamma}$ by $H_{ij}$ and $H_{ij,\gamma}$, respectively. \subsection{Volume and Perimeter Regularity} For simplicity, we assume henceforth that $\gamma$ is a probability measure and that: \begin{equation} \label{eq:decreasing} \exists R_* \geq 0 \;\;\; [R_*,\infty) \ni R \mapsto \gamma^{n-1}(\partial B_R) \text{ is decreasing.} \end{equation} \begin{lemma} \label{lem:regular-cond} For each $i \geq 0$, let $f_i : \mathbb{R}_+ \rightarrow \mathbb{R}_+$ denote a $C^1$ increasing function so that: \[ \norm{\nabla^i W(x)} \leq f_i(\abs{x}) \;\;\; \forall x \in \mathbb{R}^n . \] Assume that for all $i,j \geq 0$, there exists $\delta_{ij} > 0$, so that defining $F_{ij} : \mathbb{R}_+ \rightarrow \mathbb{R}_+$ by: \[ F_{ij}(R) := f_i(R + \delta_{ij})^j e^{\delta_{ij} f_1(R+\delta_{ij})} , \] we have: \begin{equation} \label{eq:integrability-conds} \int_{\mathbb{R}^n} F_{ij}(\abs{x}) d\gamma < \infty ~,~ \int_{\mathbb{R}^n} F'_{ij}(\abs{x}) d\gamma < \infty . 
\end{equation} Then any Borel set $U \subset \mathbb{R}^n$ is volume regular, and the cells of any isoperimetric minimizing cluster are perimeter regular, with respect to $\gamma$. \\ Furthermore, if $\delta_{ij} \geq \delta > 0$ uniformly in $i,j \geq 0$, the above sets are in fact uniformly volume and perimeter regular, respectively. \end{lemma} For the proof of the perimeter regularity, we require the following perimeter decay estimate: \begin{lemma}\label{lem:area-decay} If $\Omega = (\Omega_1,\ldots,\Omega_k)$ is an isoperimetric minimizing cluster then: \[ \sum_{i=1}^k \gamma^{n-1}(\partial^* \Omega_i \setminus B_R) \leq 3 k \gamma^{n-1}(\partial B_R) \] for all $R \geq R_*$. \end{lemma} \begin{proof} Let $R \geq R_*$. Since $\gamma(\mathbb{R}^n \setminus B_R) = \sum_{i=1}^k \gamma(\Omega_i \setminus B_R)$, we may choose a non-decreasing sequence $R = R_0 \leq R_1 \leq R_2 \leq \ldots \leq R_{k-1} \leq R_k = \infty$ so that $\gamma(B_{R_i} \setminus B_{R_{i-1}}) = \gamma(\Omega_i \setminus B_R)$ for all $i=1,\ldots,k$. Now define the cells of a competing cluster $\tilde \Omega$ as follows: \[ \tilde \Omega_i := (\Omega_i \cap B_R) \cup (B_{R_i} \setminus B_{R_{i-1}}) . \] Clearly $\gamma(\tilde \Omega) = \gamma(\Omega)$. Now observe that (see e.g. \cite[Lemma 12.22, Theorem 16.3]{MaggiBook}): \begin{align*} P_\gamma(\tilde \Omega_i) &\le P_\gamma(\Omega_i \cap B_R) + P_\gamma(B_{R_i} \setminus B_{R_{i-1}}) \\ & \le \gamma^{n-1}(\partial^* \Omega_i \cap B_R) + \gamma^{n-1}(\partial B_R) + \gamma^{n-1}(\partial B_{R_i}) + \gamma^{n-1}(\partial B_{R_{i-1}}) \\ &\le \gamma^{n-1}(\partial^* \Omega_i\cap B_R) + 3 \gamma^{n-1}(\partial B_R) . \end{align*} Summing in $i$ and dividing by $2$, \[ P_\gamma(\tilde \Omega) \leq \frac{1}{2} \sum_{i=1}^k \gamma^{n-1}(\partial^* \Omega_i\cap B_R) + \frac{3}{2} k \gamma^{n-1}(\partial B_R) .
\] On the other hand, the fact that $\Omega$ is minimizing and $\gamma(\Omega) = \gamma(\tilde \Omega)$ implies that \[ P_\gamma(\tilde \Omega) \ge P_\gamma(\Omega) = \frac{1}{2} \sum_{i=1}^k \gamma^{n-1}(\partial^* \Omega_i) . \] Combining these inequalities yields the assertion. \end{proof} \begin{proof}[Proof of Lemma \ref{lem:regular-cond}] Note that by the mean value theorem: \[ \abs{W(z) -W(x)} \leq \abs{z-x} f_1(\max(\abs{x},\abs{z})) . \] Consequently: \[ \sup_{z \in B(x,\delta_{ij})} \norm{\nabla^i W(z)}^j e^{-W(z)} \leq f_i(\abs{x}+\delta_{ij})^j e^{\delta_{ij} f_1(\abs{x}+\delta_{ij})} e^{-W(x)} = F_{ij}(\abs{x}) e^{-W(x)} , \] and so if for any $i,j \geq 0$, the right-hand function is integrable on $\mathbb{R}^n$, then all Borel sets are volume regular with respect to $\gamma$. Now if $\Omega = (\Omega_1,\ldots,\Omega_k)$ is an isoperimetric minimizing cluster, then for each of its cells $U$: \[ \int_{\partial^* U} \sup_{z \in B(x,\delta_{ij})} \norm{\nabla^i W(z)}^j e^{-W(z)} d\H^{n-1} \leq \int_{\partial^* U} F_{ij}(\abs{x}) d\gamma^{n-1}(x) . \] Integrating by parts and applying Lemma \ref{lem:area-decay}, it follows that: \begin{align*} \int_{\partial^* U} F_{ij}(\abs{x}) d\gamma^{n-1}(x) & \leq F_{ij}(R_*) \gamma^{n-1}(\partial^* U) + \int_{R_*}^\infty F_{ij}'(R) \gamma^{n-1}(\partial^* U \setminus B_R) dR \\ & \leq F_{ij}(R_*) \gamma^{n-1}(\partial^* U) + 3 k \int_{R_*}^\infty F_{ij}'(R) \gamma^{n-1}(\partial B_R) dR \\ & = F_{ij}(R_*) \gamma^{n-1}(\partial^* U) + 3 k \int_{\mathbb{R}^n \setminus B_{R_*}} F'_{ij}(\abs{x}) d\gamma(x) . \end{align*} Since $\gamma^{n-1}(\partial^* U) < \infty$ for any cell of a minimizing cluster, the perimeter regularity of $U$ with respect to $\gamma$ follows as soon as the second term above is finite for all $i,j \geq 0$, as asserted.
\end{proof} \begin{corollary} \label{cor:Gaussian-regular} For the standard Gaussian measure $\gamma$, any Borel set $U \subset \mathbb{R}^n$ is uniformly volume regular, and the cells of any isoperimetric minimizing cluster are uniformly perimeter regular, with respect to $\gamma$. \end{corollary} \begin{proof} Note that (\ref{eq:decreasing}) indeed holds since $\gamma^{n-1}(\partial B_R) = c_n R^{n-1} e^{-R^2/2}$. Setting $f_0(R) = \frac{R^2}{2}$, $f_1(R) = R$, $f_2(R) = \sqrt{n}$, $f_i(R) = 0$ for all $i \geq 3$ and $\delta = 1$, it is immediate to verify the integrability conditions (\ref{eq:integrability-conds}), as the Gaussian density decays faster than any polynomial or exponential function grows. The assertion therefore follows by Lemma \ref{lem:regular-cond}. \end{proof} We henceforth proceed under the following assumption (note that as long as we take only a finite number of variations in our analysis, the uniformity assumption may be dropped): \begin{equation} \label{eq:gamma-regular} \begin{array}{l} \text{$\gamma$ is a probability measure for which all cells of any isoperimetric}\\ \text{minimizing cluster are uniformly volume and perimeter regular.} \end{array} \end{equation} \subsection{Stationarity} In this subsection, we show that a minimizing cluster satisfies the stationarity property: \[ \delta_X V = 0 \;\; \Rightarrow \;\; \delta_X A = 0 , \] for all admissible vector-fields $X$. When $X$ is compactly supported this is well-known and standard in the single-bubble case, and was proved for Euclidean double-bubbles in \cite{DoubleBubbleInR3} assuming higher-order boundary regularity, which we avoid in this work (see Remark \ref{rem:no-higher-regularity} below). In what follows, let (\ref{eq:gamma-regular}) hold, let $\Omega$ be a minimizing cluster, and let $\Sigma_{ij}$ be its interfaces. Recall that $E = \set{x \in \mathbb{R}^k : \sum_{i=1}^k x_i = 0}$.
\begin{lemma}\label{lem:span} If $\gamma(\Omega) \in \interior \Delta$ then the set $\{e_i - e_j: \gamma^{n-1}(\Sigma_{ij}) > 0\}$ spans $E$. \end{lemma} \begin{proof} Suppose in the contrapositive that $\{e_i - e_j: \gamma^{n-1}(\Sigma_{ij}) > 0\}$ does not span $E$. Then there exists some non-zero $v$ in $E$ such that $v_i = v_j$ whenever $\gamma^{n-1}(\Sigma_{ij}) > 0$. Let $S = \{i : v_i < 0\}$ and let $T = \{i : v_i \ge 0\}$. Since $v \in E$ implies $\sum_i v_i = 0$, it follows that both $S$ and $T$ are non-empty. Since $i \in S$ and $j \in T$ imply that $v_i \ne v_j$, it follows that $\gamma^{n-1}(\Sigma_{ij}) = 0$ whenever $i \in S$ and $j \in T$. Now let $U = \bigcup_{i \in S} \Omega_i$. By (\ref{eq:nothing-lost}) it follows that $P_\gamma(U) = \gamma^{n-1} (\partial^* U) = 0$ and hence $\H^{n-1}(\partial^* U) = 0$. But the condition $\gamma(\Omega) \in \interior \Delta$ implies that $\gamma(U) = \sum_{i \in S} \gamma(\Omega_i) \in (0, 1)$ and hence $\H^n(U) > 0$ and $\H^n(\mathbb{R}^n \setminus U) > 0$, in contradiction to $\H^{n-1}(\partial^* U) = 0$. (To see the contradiction, one may simply endow Euclidean space $\mathbb{R}^n$ with the standard Gaussian measure $\gamma_G$, infer that $\gamma_G(U) \in (0,1)$, and apply the single-bubble Gaussian isoperimetric inequality to conclude that $\gamma_G^{n-1}(\partial^* U) > 0$ and hence $\H^{n-1}(\partial^* U) > 0$). \end{proof} \begin{corollary} \label{cor:pos-def} Denoting $A_{ij} := \gamma^{n-1}(\Sigma_{ij})$, the matrix: \[ L_{A} := \sum_{i < j} A_{ij} (e_i - e_j) (e_i-e_j)^T \] is strictly positive-definite on $E$ when $\gamma(\Omega) \in \interior \Delta$. 
\end{corollary} \begin{lemma}\label{lem:compensators} If $\gamma(\Omega) \in \interior \Delta$ then there exists a collection of $C_c^\infty$ vector-fields $Y_1, \dots, Y_{k-1}$ with disjoint supports, so that for every $p=1,\ldots,k-1$, $(Y_p)|_{\cup_{i<j}\Sigma_{ij}}$ is supported in $\Sigma_{ij}$ for some $i<j$, and such that the set $\{\delta_{Y_i} V: i = 1, \dots, k-1\}$ spans $E$. \end{lemma} \begin{proof} For each pair $i, j$ with $\gamma^{n-1}(\Sigma_{ij}) > 0$, choose some $x_{ij} \in \Sigma_{ij}$. By Theorem \ref{thm:Almgren}, there exists some $\epsilon > 0$ such that $B(x_{ij},\epsilon)$ is disjoint from all the interfaces besides $\Sigma_{ij}$ and $\cl(B(x_{ij},\epsilon) \cap \Sigma_{ij}) \subset \Sigma_{ij}$, where $\cl$ denotes closure. By replacing $\epsilon$ by $\epsilon/2$, we can ensure that all of the $B(x_{ij},\epsilon)$ are pairwise disjoint. Let $f_{ij}$ be a non-negative $C_c^\infty$ function supported in $B(x_{ij},\epsilon)$ such that $f_{ij}(x_{ij}) > 0$, and let $X_{ij}$ be a smooth extension of $f_{ij} \mathbf{n}_{ij}$ that is supported in $B(x_{ij},\epsilon)$. Then $\delta_{X_{ij}} V = \alpha_{ij} (e_i - e_j)$ for some $\alpha_{ij} > 0$. By Lemma~\ref{lem:span}, $\{\delta_{X_{ij}} V: \gamma^{n-1}(\Sigma_{ij}) > 0\}$ spans $E$; hence, we may choose $Y_1, \dots, Y_{k-1}$ to be an appropriate subset of the $X_{ij}$. \end{proof} \begin{lemma}[Stationarity] \label{lem:first-order} Let $\Omega$ be a minimizing cluster. For any admissible vector-field $X$, if $\delta_X V = 0$ then $\delta_X A = 0$. \end{lemma} \begin{proof} Let $F_t$ denote the flow along $X$, defined as usual by: \[ \frac{d}{dt} F_t = X \circ F_t ~,~ F_0 = \text{Id} . \] Choose a family $Y_1, \dots, Y_{k-1}$ of vector-fields as in Lemma~\ref{lem:compensators}, having compact and pairwise disjoint supports.
Let $\{F_{t,s}\}_{t \in \mathbb{R} , s \in \mathbb{R}^{k-1}}$ be a family of $C^\infty$ diffeomorphisms defined by solving the following system of flow equations: \begin{align*} \pdiff{}{s_i} F_{t,s} &= Y_i \circ F_{t,s} \;\;\; \forall i=1,\ldots,k-1 \\ F_{t,\vec 0} &= F_t . \end{align*} Observe that the above system is indeed integrable since the $Y_i$'s have disjoint supports, and hence the flows they individually generate necessarily commute (cf. the Frobenius Theorem \cite{Lang-ManifoldsBook}). Consequently: \begin{equation} \label{eq:concat} F_{t,s} = F^{k-1}_{s_{k-1}} \circ \ldots \circ F^{1}_{s_1} \circ F_t , \end{equation} where: \[ \frac{d}{ds} F^i_s = Y_i \circ F^i_s ~,~ F^i_0 = \text{Id} \;\;\; \forall i=1,\ldots,k-1 ~, \] and all of the usual smoothness and uniform boundedness of all partial derivatives of any fixed order apply to $F_{t,s}$ for $t \in [-T,T]$, $s \in [-T,T]^{k-1}$, for any fixed $T > 0$. Let $V(t, s) = \gamma(F_{t,s}(\Omega))$ and $A(t,s) = P_\gamma(F_{t,s}(\Omega))$. By the assumption (\ref{eq:gamma-regular}) and a tedious yet straightforward adaptation of the proof of Lemma \ref{lem:regular} to a concatenation of (partly non-commuting) flows as in (\ref{eq:concat}), it follows that $V$ and $A$ are both $C^\infty$ on $\{(t,s) : t \in (-\epsilon,\epsilon) , s \in (-\epsilon,\epsilon)^{k-1}\}$ for some $\epsilon > 0$. Clearly: \begin{align*} \left.\frac{\partial^m V}{(\partial t)^m}\right|_{t=0,s=0} = \delta_X^m V ~,~ & \left.\frac{\partial^m A}{(\partial t)^m}\right|_{t=0,s=0} = \delta_X^m A \;\;\;\;\;\forall m =1,2,\ldots\\ \left.\frac{\partial V}{\partial s_i}\right|_{t=0,s=0} = \delta_{Y_i} V ~,~ & \left.\frac{\partial A}{\partial s_i}\right|_{t=0,s=0} = \delta_{Y_i} A. \end{align*} Since $\{\delta_{Y_i} V\}_{i=1,\ldots,k-1}$ span $E$, the linear map $\{\partial_{s_i} V_j(0, 0)\}_{ji} : \mathbb{R}^{k-1} \to E$ has full rank.
By the implicit function theorem, there exists a $\delta \in (0,\epsilon)$ and a $C^\infty$ curve $s(t) : (-\delta,\delta) \rightarrow (-\epsilon,\epsilon)^{k-1}$, such that $s(0) = 0$ and $V(t,s(t)) = V(0,0) = \gamma(\Omega)$ for all $|t| < \delta$. Moreover, $\frac{\partial V}{\partial t}(0,0) = \delta_X V = 0$ and the full rank of $\{\partial_{s_i} V_j(0, 0)\}$ imply that $s'(0) = 0$. From this property and the chain rule, \[ \diffat{A(t,s(t))}{t}{t=0} = \pdiffat{A(t,s)}{t}{t=0,s=0} = \delta_X A. \] We conclude that $\delta_X A = 0$: otherwise, there would be some $t \ne 0$ such that the cluster $\tilde \Omega = F_{t,s(t)}(\Omega)$ has $\gamma(\tilde \Omega) = \gamma(\Omega)$ and $P_\gamma(\tilde \Omega) = A(t,s(t)) < A(0,0) = P_\gamma(\Omega)$, contradicting the minimality of $\Omega$. \end{proof} \subsection{First-Order Conditions} First-order conditions for an isoperimetric minimizing cluster are well-understood, and points~\ref{it:first-order-constant},~\ref{it:first-order-cyclic} and~\ref{it:first-order-lambda} below are well-known. We denote by $E^*$ the dual of $E$; as usual, $E^*$ may be identified with $E$, acting by the Euclidean inner product. \begin{theorem}\label{thm:first-order-conditions-expanded} Assume (\ref{eq:gamma-regular}) holds. If $\Omega$ is an isoperimetric minimizing cluster then: \begin{enumerate}[(i)] \item On each $\Sigma_{ij}$, $H_{ij,\gamma}$ is constant. \label{it:first-order-constant} \item For all $\mathcal{C} \in \mathcal{T}$, $\displaystyle \sum_{(i, j) \in \mathcal{C}} H_{ij,\gamma} = 0$. \label{it:first-order-cyclic} \\ Equivalently, there is a unique $\lambda \in E^*$ such that $H_{ij,\gamma} = \lambda_i - \lambda_j$. \item For every admissible vector-field $X$: \label{it:first-order-div} \[ \sum_{i<j} \int_{\Sigma_{ij}} \div_{\Sigma,\gamma} X^\mathbf{t}\, d\gamma^{n-1} = 0.
\] \item For every admissible vector-field $X$: \begin{equation} \delta_X A = \sum_{i<j} H_{ij,\gamma} \int_{\Sigma_{ij}} X^{\mathbf{n}_{ij}}\, d\gamma^{n-1} = \inr{\lambda}{\delta_X V} . \label{eq:formula-first-variation-of-area} \end{equation} \label{it:first-order-lambda} \end{enumerate} \end{theorem} \begin{remark} \label{rem:no-higher-regularity} The third point essentially says that at any point where the interfaces $\Sigma_{ij}$ meet, their boundary normals must sum to zero. In fact, more delicate boundary regularity results are known. In $\mathbb{R}^2$, the interfaces meet at $120^\circ$ angles at a discrete set of points; this was shown by F.~Morgan in \cite{MorganSoapBubblesInR2} building upon the work of Almgren \cite{AlmgrenMemoirs}, and also follows from the results of J.~Taylor~\cite{Taylor-SoapBubbleRegularityInR3}. The regularity results of Taylor also cover the case of $\mathbb{R}^3$, establishing that the interfaces must meet in threes at $120^\circ$ angles along smooth curves, in turn meeting in fours at equal angles of $\cos^{-1}(-1/3) \simeq 109^\circ$. A generalization of this to $\mathbb{R}^n$ for $n \geq 4$ has been announced by B.~White \cite{White-SoapBubbleRegularityInRn} and recently proved in \cite{CES-RegularityOfMinimalSurfacesNearCones}. We will not require these much more delicate boundary regularity results in this work. \end{remark} \begin{proof}[Proof of Theorem~\ref{thm:first-order-conditions-expanded}] We will sketch parts~\ref{it:first-order-constant} and~\ref{it:first-order-cyclic}, which are well-known, and provide a more detailed explanation of part~\ref{it:first-order-div}; part~\ref{it:first-order-lambda} is then an immediate consequence of the previous ones.
Since $\H^{n-1}(\partial^* \Omega_i \setminus \cup_{j \neq i} \Sigma_{ij}) = 0$ by (\ref{eq:nothing-lost}), and since $\Sigma_{ij}$ are all smooth for a minimizing cluster $\Omega$ by Theorem \ref{thm:Almgren}, the smoothness assumption in the second part of Proposition \ref{prop:first-variation} is satisfied and we may appeal to the formulas for first variation of volume and perimeter derived there: \begin{align*} \delta_X V(\Omega_i) & = \sum_{j \neq i} \int_{\Sigma_{ij}} X^{\mathbf{n}_{ij}} \, d\gamma^{n-1} \;\;\; \forall i=1,\ldots,k ~, \\ \delta_X A(\Omega) & = \sum_{i < j} \int_{\Sigma_{ij}} (H_{\Sigma_{ij},\gamma} X^{\mathbf{n}_{ij}} + \div_{\Sigma_{ij},\gamma} X^\mathbf{t}) \, d\gamma^{n-1} ~. \end{align*} Note that whenever the support of $X^{\mathbf{t}}|_{\Sigma_{ij}}$ is contained in $\Sigma_{ij}$, we may integrate by parts to obtain: \begin{equation} \label{eq:tang-zero} \int_{\Sigma_{ij}} \div_{\Sigma_{ij},\gamma} X^\mathbf{t} \, d\gamma^{n-1} = 0 . \end{equation} To establish part~\ref{it:first-order-constant}, observe that if $H_{ij,\gamma}$ were not the same at two different points $x_1,x_2 \in \Sigma_{ij}$, we could find by Theorem \ref{thm:Almgren} an $\epsilon > 0$ so that $B(x_1,\epsilon)$, $B(x_2,\epsilon)$ and all the other interfaces are disjoint, $\gamma^{n-1}(B(x_p,\epsilon) \cap \Sigma_{ij}) > 0$, and $\cl(B(x_p,\epsilon) \cap \Sigma_{ij}) \subset \Sigma_{ij}$, for $p=1,2$. We could then construct a smooth vector-field $X$ supported in $B(x_1,\epsilon) \cup B(x_2,\epsilon)$ (so that (\ref{eq:tang-zero}) applies) with $\delta_X V(\Omega) = 0$ while $\delta_X A(\Omega) < 0$, in violation of Lemma~\ref{lem:first-order}. 
To establish part~\ref{it:first-order-cyclic}, if $\sum_{(i, j) \in \mathcal{C}} H_{ij,\gamma}$ were not zero for some $\mathcal{C} \in \mathcal{T}$, we could construct a smooth vector-field compactly supported around three points (one in each $\Sigma_{ij}$) that would preserve weighted volume to first order, while decreasing weighted perimeter to first order, exactly as above, again in violation of Lemma~\ref{lem:first-order}. Clearly, this implies the existence of $\lambda \in \mathbb{R}^{k}$ so that $H_{ij,\gamma} = \lambda_i - \lambda_j$; as $\lambda$ is defined uniquely up to an additive constant, we may assume that $\sum_{i=1}^k \lambda_i = 0$, i.e. that $\lambda \in E^*$ (it will soon be clear that $\lambda$ acts on $E$). At this point, we have shown that for any admissible vector-field $X$: \begin{align} \notag & \sum_{i<j} \int_{\Sigma_{ij}} H_{ij,\gamma} X^{\mathbf{n}_{ij}}\, d\gamma^{n-1} = \sum_{i < j} (\lambda_i - \lambda_j) \int_{\Sigma_{ij}} X^{\mathbf{n}_{ij}} \, d\gamma^{n-1} \\ \notag & = \sum_{i \neq j} \lambda_i \int_{\Sigma_{ij}} X^{\mathbf{n}_{ij}} \, d\gamma^{n-1} = \sum_{i} \lambda_i \sum_{j \neq i} \int_{\Sigma_{ij}} X^{\mathbf{n}_{ij}}\, d\gamma^{n-1} \\ \label{eq:inr-lambda} &= \sum_{i} \lambda_i \delta_X V_i = \inr{\lambda}{\delta_X V}, \end{align} where we have used above that $\int_{\Sigma_{ij}} X^{\mathbf{n}_{ij}}\, d\gamma^{n-1} = - \int_{\Sigma_{ji}} X^{\mathbf{n}_{ji}}\, d\gamma^{n-1}$. For any such $X$, there exists by Lemma \ref{lem:compensators} a vector-field $Y = \sum_{i=1}^{k-1} c_i Y_i$ with $\delta_Y V(\Omega) = -\delta_X V(\Omega)$ so that $Y|_{\cup_{i<j} \Sigma_{ij}}$ is supported in $\cup_{i<j} \Sigma_{ij}$. In particular, we may integrate by parts and obtain: \begin{equation} \label{eq:lambda1} \int_{\Sigma_{ij}} \div_{\Sigma_{ij},\gamma} Y^\mathbf{t} \, d\gamma^{n-1} = 0\;\;\; \forall i < j .
\end{equation} Since $\delta_{X+Y} V = 0$, it follows by (\ref{eq:inr-lambda}) that: \begin{equation} \label{eq:lambda2} \sum_{i<j}\int_{\Sigma_{ij}} H_{ij,\gamma} (X^{\mathbf{n}_{ij}} + Y^{\mathbf{n}_{ij}}) \, d\gamma^{n-1} = \inr{\lambda}{\delta_{X+Y} V} = 0 . \end{equation} In addition, Lemma \ref{lem:first-order} implies that: \begin{equation} \label{eq:lambda3} \delta_{X+Y} A = 0 . \end{equation} Combining (\ref{eq:lambda1}), (\ref{eq:lambda2}) and (\ref{eq:lambda3}), we obtain: \begin{align*} 0 & = \delta_{X + Y} A \\ &= \sum_{i<j} \int_{\Sigma_{ij}} \brac{H_{ij,\gamma} (X^{\mathbf{n}_{ij}} + Y^{\mathbf{n}_{ij}}) + \div_{\Sigma_{ij},\gamma} (X^\mathbf{t} + Y^\mathbf{t})} \, d\gamma^{n-1} \\ & = \sum_{i<j} \int_{\Sigma_{ij}} \div_{\Sigma_{ij},\gamma} X^\mathbf{t} \, d\gamma^{n-1} , \end{align*} concluding the proof of part~\ref{it:first-order-div}. Part~\ref{it:first-order-lambda} follows immediately since by part~\ref{it:first-order-div} and (\ref{eq:inr-lambda}): \begin{align*} \delta_X A &= \sum_{i<j} \int_{\Sigma_{ij}} \brac{H_{ij,\gamma} X^{\mathbf{n}_{ij}} + \div_{\Sigma_{ij},\gamma} X^\mathbf{t}} \, d\gamma^{n-1} \\ & = \sum_{i<j} \int_{\Sigma_{ij}} H_{ij,\gamma} X^{\mathbf{n}_{ij}} \, d\gamma^{n-1} = \inr{\lambda}{\delta_X V} . \end{align*} \end{proof} \subsection{Stability} Before concluding this section, we show that a minimizing cluster is necessarily stable. \begin{definition}[Index Form $Q$] The Index Form $Q$, associated to a cluster satisfying the first-order conditions of Theorem \ref{thm:first-order-conditions-expanded}, is defined as the following quadratic form: \[ Q(X) := \delta^2_X A - \scalar{\lambda,\delta^2_X V} . \] \end{definition} \begin{lemma}[Stability] \label{lem:stable} Assume (\ref{eq:gamma-regular}) holds, and let $\Omega$ be an isoperimetric minimizing cluster. Then for any admissible vector-field $X$: \[ \delta_X V = 0 \;\; \Rightarrow \;\; Q(X) \geq 0 .
\] \end{lemma} Again, this is well-known and standard for compactly supported vector-fields in the single-bubble case, and was proved in \cite{DoubleBubbleInR3} assuming higher-order boundary regularity, which we avoid in this work. Consequently, for completeness, we provide a proof. \begin{proof}[Proof of Lemma \ref{lem:stable}] Let $Y_1, \dots, Y_{k-1}$, $F_{t,s}$, and $s(t)$ be as in the proof of Lemma~\ref{lem:first-order}. Recall that $\delta_X V = 0$ implies that $s'(0)=0$. By the chain rule, it follows (using $s'(0)=0$) that: \begin{align*} \left.\frac{d^2 A(t,s(t))}{(dt)^2} \right|_{t=0} & = \left.\frac{\partial^2 A}{(\partial t)^2}\right|_{t=0,s=0} + \sum_{i=1}^{k-1} s_i''(0) \pdiffat{A}{s_i}{t=0,s=0} \\ &= \delta_X^2 A + \sum_{i=1}^{k-1} s_i''(0) \delta_{Y_i} A \\ &= \delta_X^2 A + \sum_{i=1}^{k-1} s_i''(0) \inr{\lambda}{\delta_{Y_i} V}, \end{align*} where the last equality follows from (\ref{eq:formula-first-variation-of-area}). Differentiating the relation $V(0,0) = V(t,s(t))$ twice in $t$ (and using again that $s'(0) = 0$), we obtain: \[ 0 = \left.\frac{\partial^2 V}{(\partial t)^2}\right|_{t=0,s=0} + \sum_i s_i''(0) \pdiffat{V}{s_i}{t=0,s=0} = \delta_X^2 V + \sum_i s_i''(0) \delta_{Y_i} V . \] Hence, \[ \left.\frac{d^2 A(t,s(t))}{(dt)^2} \right|_{t=0} = \delta_X^2 A - \inr{\lambda}{\delta_X^2 V} = Q(X). \] It follows that necessarily $Q(X) \ge 0$, since otherwise, recalling that $\left.\frac{dA(t,s(t))}{dt} \right|_{t=0} = 0$ by Lemma~\ref{lem:first-order}, we would find some $t \ne 0$ so that the cluster $\tilde \Omega = F_{t,s(t)}(\Omega)$ satisfies $\gamma(\tilde \Omega) = \gamma(\Omega)$ and $P_\gamma(\tilde \Omega) = A(t,s(t)) < A(0,0) = P_\gamma(\Omega)$, contradicting the minimality of $\Omega$. \end{proof} \section{Second Variations} \label{sec:second-order} Unlike the formulas for first variation above, the following identity for the second variation under translations appears to be new, and moreover, is particular to the Gaussian measure. 
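As a sanity check (not needed in the sequel), one may verify the identity of Theorem \ref{thm:formula} below by hand in the simplest case, the two-cell cluster given by the half-spaces $\Omega_1 = \{x_1 < a\}$, $\Omega_2 = \{x_1 > a\}$, with interface $\Sigma_{12} = \{x_1 = a\}$ and $\mathbf{n}_{12} = e_1$. Writing $\varphi$ for the one-dimensional Gaussian density and $\Phi' = \varphi$, and setting $w_1 = \inr{w}{e_1}$, translation by $t w$ yields $A(t) = \varphi(a + t w_1)$ and $V_1(t) = \Phi(a + t w_1)$, while $\gamma^{n-1}(\Sigma_{12}) = \varphi(a)$. Since $\varphi'(x) = -x \varphi(x)$ and $\varphi''(x) = (x^2 - 1) \varphi(x)$: \begin{align*} \delta_w^2 A &= w_1^2 (a^2 - 1) \varphi(a) ~,~ \delta_w^2 V_1 = - \delta_w^2 V_2 = -a w_1^2 \varphi(a) , \\ \inr{\lambda}{\delta_w^2 V} &= (\lambda_1 - \lambda_2) \delta_w^2 V_1 = H_{12,\gamma} \cdot (-a w_1^2 \varphi(a)) = a^2 w_1^2 \varphi(a) , \end{align*} using that $H_{12,\gamma} = -\inr{x}{\mathbf{n}_{12}} = -a$ on $\Sigma_{12}$. Consequently: \[ Q(w) = (a^2 - 1) w_1^2 \varphi(a) - a^2 w_1^2 \varphi(a) = - \inr{w}{\mathbf{n}_{12}}^2 \gamma^{n-1}(\Sigma_{12}) , \] in accordance with the asserted identity.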
\begin{theorem}\label{thm:formula} If $\Omega$ is an isoperimetric minimizing cluster for the Gaussian measure $\gamma$ then for any $w \in \mathbb{R}^n$, \begin{equation} Q(w) = \delta_w^2 A - \inr{\lambda}{\delta_w^2 V} = - \sum_{i < j} \int_{\Sigma_{ij}} \inr{w}{\mathbf{n}_{ij}}^2\, d\gamma^{n-1}. \label{eq:formula} \end{equation} \end{theorem} Here $\delta_w$ denotes the variation for the constant vector-field $X \equiv w \in \mathbb{R}^n$. Recall that given an isoperimetric minimizing cluster, $\lambda \in E^*$ denotes the unique vector guaranteed by Theorem \ref{thm:first-order-conditions-expanded} to satisfy $\lambda_i - \lambda_j = H_{ij,\gamma}$. We continue to treat the case of a $k$-cell cluster, $k \geq 3$, as this poses no additional effort. \subsection{Second variation of volume} \begin{lemma} \label{lem:delta2-V} Let $\Omega$ be an isoperimetric minimizing cluster for a measure $\gamma = e^{-W} dx$ satisfying (\ref{eq:gamma-regular}). Then for any $i=1,\ldots,k$ and $w \in \mathbb{R}^n$: \begin{equation} \label{eq:delta2-V-cell} \delta_w^2 V(\Omega_i) = - \sum_{j \neq i}\int_{\Sigma_{ij}} \inr{w}{\mathbf{n}_{ij}} \nabla_w W \, d\gamma^{n-1} . \end{equation} In particular: \[ \inr{\lambda}{\delta_w^2 V} = - \sum_{i < j} H_{ij,\gamma} \int_{\Sigma_{ij}} \inr{w}{\mathbf{n}_{ij}} \nabla_w W \, d\gamma^{n-1} . \] \end{lemma} \begin{proof} By (\ref{eq:gamma-regular}) $\Omega_i$ is volume regular, and so by Lemma \ref{lem:regular} we have: \begin{align*} \delta_w^2 V(\Omega_i) &= \int_{\Omega_i} \diffIIat{e^{-W(x + tw)}}{t}{t}{t=0}\, dx \\ &= \int_{\Omega_i} (\nabla_w W)^2 - \nabla^2_{w,w} W\, d\gamma \\ & = - \int_{\Omega_i} \div_{\gamma} (w \nabla_w W) \, d\gamma . \end{align*} In order to apply integration-by-parts, we need to first make $X = w \nabla_w W$ compactly supported. 
Noting that $\abs{X} \leq \abs{w}^2 \abs{\nabla W}$ and $\abs{\div X} = \abs{\nabla^2_{w,w} W} \leq \abs{w}^2 \norm{\nabla^2 W}$, we may apply Lemma \ref{lem:cutoff} to approximate $X$ by the compactly-supported $\eta_R X$. Integrating by parts using~\eqref{eq:integration-by-parts}, we obtain: \begin{align*} &= - \lim_{R \rightarrow \infty} \int_{\Omega_i} \div_\gamma(\eta_R w \nabla_w W)\, d\gamma \\ &= -\lim_{R \rightarrow \infty} \int_{\partial^* \Omega_i} \eta_R \inr{w}{\mathbf{n}} \nabla_w W \, d\gamma^{n-1} \\ & = - \int_{\partial^* \Omega_i} \inr{w}{\mathbf{n}} \nabla_w W \, d\gamma^{n-1} , \end{align*} where the last equality follows by Dominated Convergence since $\Omega_i$ is perimeter regular by (\ref{eq:gamma-regular}). Since $\H^{n-1}(\partial^* \Omega_i \setminus \cup_{j \neq i} \Sigma_{ij}) = 0$, (\ref{eq:delta2-V-cell}) follows. Finally, since swapping $i$ and $j$ changes the sign of $\inr{w}{\mathbf{n}_{ij}}$, we have: \begin{align*} \inr{\lambda}{\delta_w^2 V} & = - \sum_{i=1}^k \lambda_i \sum_{j \ne i} \int_{\Sigma_{ij}} \inr{w}{\mathbf{n}_{ij}} \nabla_w W \, d\gamma^{n-1} \\ &= - \sum_{i < j} (\lambda_i - \lambda_j)\int_{\Sigma_{ij}} \inr{w}{\mathbf{n}_{ij}} \nabla_w W \, d\gamma^{n-1} \notag \\ &= - \sum_{i < j} H_{ij,\gamma} \int_{\Sigma_{ij}} \inr{w}{\mathbf{n}_{ij}} \nabla_w W \, d\gamma^{n-1}. \end{align*} \end{proof} \subsection{Second variation of perimeter} \begin{lemma} Let $\Omega$ be an isoperimetric minimizing cluster for a measure $\gamma = e^{-W} dx$ satisfying (\ref{eq:gamma-regular}). Then for any $w \in \mathbb{R}^n$: \[ \delta_w^2 A = - \sum_{i < j} \int_{\Sigma_{ij}} \brac{ H_{ij,\gamma} \scalar{w,\mathbf{n}_{ij}} \nabla_w W + \scalar{w, \mathbf{n}_{ij}} \nabla^2_{\mathbf{n}_{ij},w} W } d\gamma^{n-1} .
\] \end{lemma} \begin{proof} By (\ref{eq:gamma-regular}) $\Omega_i$ is perimeter regular, and so by Lemma \ref{lem:regular} we have for every $i$: \begin{align*} \delta_w^2 A(\Omega_i) & = \int_{\partial^* \Omega_i} \diffIIat{e^{-W(x + tw)}}{t}{t}{t=0}\, d\mathcal{H}^{n-1}(x) \\ & = \int_{\partial^* \Omega_i} \brac{(\nabla_w W)^2 - \nabla^2_{w,w} W} d\gamma^{n-1} \\ & = \sum_{j \neq i} \int_{\Sigma_{ij}} \brac{(\nabla_w W)^2 - \nabla^2_{w,w} W} d\gamma^{n-1} . \end{align*} On the other hand, we now have: \begin{equation}\label{eq:div-constant} \div_{\Sigma_{ij},\gamma} (w \nabla_w W) = \div_{\Sigma_{ij}} (w \nabla_w W) - (\nabla_w W)^2 = \nabla^2_{w^\mathbf{t},w} W - (\nabla_w W)^2, \end{equation} where $w^\mathbf{t}$ is the tangential part of $w$. Consequently, we obtain: \[ \delta_w^2 A(\Omega_i) = - \sum_{j \neq i} \int_{\Sigma_{ij}} \brac{\div_{\Sigma_{ij},\gamma} (w \nabla_w W) + \nabla^2_{w,w} W - \nabla^2_{w^\mathbf{t},w} W} d\gamma^{n-1} . \] Summing over $i$ and dividing by $2$, we obtain: \[ \delta_w^2 A = - \sum_{i < j} \int_{\Sigma_{ij}} \brac{\div_{\Sigma_{ij},\gamma} (w \nabla_w W) + \scalar{w, \mathbf{n}_{ij}} \nabla^2_{\mathbf{n}_{ij},w} W } d\gamma^{n-1} . \] We now claim that the contribution of the tangential part of divergence terms vanishes. This is essentially the content of Theorem~\ref{thm:first-order-conditions-expanded} part~\ref{it:first-order-div}, applied to the vector-field $X = w \nabla_w W$; however, $X$ is not bounded and so does not satisfy the required assumption (\ref{eq:field-bdd}). To remedy this, note as before that $\abs{X} \leq \abs{w}^2 \abs{\nabla W}$ and $\abs{\div_\Sigma X} = \abs{\nabla^2_{w^{\mathbf{t}},w} W} \leq \abs{w}^2 \norm{\nabla^2 W}$, and so we may apply Lemma \ref{lem:cutoff} to approximate $X$ by the compactly-supported $\eta_R X$. 
Now applying Theorem~\ref{thm:first-order-conditions-expanded} part~\ref{it:first-order-div} to $\eta_R X$, we deduce: \begin{align*} \delta_w^2 A & = - \lim_{R \rightarrow \infty} \sum_{i < j} \int_{\Sigma_{ij}} \brac{\div_{\Sigma_{ij},\gamma} (\eta_R w \nabla_w W) + \scalar{w, \mathbf{n}_{ij}} \nabla^2_{\mathbf{n}_{ij},w} W } d\gamma^{n-1} \\ & = - \lim_{R \rightarrow \infty} \sum_{i < j} \int_{\Sigma_{ij}} \brac{\div_{\Sigma_{ij},\gamma} (\eta_R \mathbf{n}_{ij} \scalar{w,\mathbf{n}_{ij}} \nabla_w W) + \scalar{w, \mathbf{n}_{ij}} \nabla^2_{\mathbf{n}_{ij},w} W } d\gamma^{n-1} \\ & = - \lim_{R \rightarrow \infty} \sum_{i < j} \int_{\Sigma_{ij}} \brac{ \eta_R H_{ij,\gamma} \scalar{w,\mathbf{n}_{ij}} \nabla_w W + \scalar{w, \mathbf{n}_{ij}} \nabla^2_{\mathbf{n}_{ij},w} W } d\gamma^{n-1} \\ & = - \sum_{i < j} \int_{\Sigma_{ij}} \brac{ H_{ij,\gamma} \scalar{w,\mathbf{n}_{ij}} \nabla_w W + \scalar{w, \mathbf{n}_{ij}} \nabla^2_{\mathbf{n}_{ij},w} W } d\gamma^{n-1} , \end{align*} where the last equality follows by Dominated Convergence since the cells $\Omega_i$ are perimeter regular by (\ref{eq:gamma-regular}). \end{proof} \begin{corollary} \label{cor:delta2-A} For $\gamma$ the standard Gaussian measure we have: \[ \delta_w^2 A = - \sum_{i < j} \int_{\Sigma_{ij}} \brac{ H_{ij,\gamma} \scalar{w,\mathbf{n}_{ij}} \nabla_w W + \inr{w}{\mathbf{n}_{ij}}^2 } d\gamma^{n-1} . \] \end{corollary} \begin{proof} Apply Corollary \ref{cor:Gaussian-regular} and use $\nabla^2 W = \text{Id}$. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:formula}] Immediate from combining Lemma \ref{lem:delta2-V} and Corollary \ref{cor:delta2-A}. \end{proof} \section{The differential inequality} \label{sec:MDI} From hereon, we specialize to the case of the standard Gaussian measure $\gamma$ on $\mathbb{R}^n$ and 3-cell clusters (the double-bubble problem).
In this section, we prove a rigorous version of the inequality $\nabla^2 I \le -L_A^{-1}$, formulated so as to avoid derivatives of $I$ (since we do not yet know that $I$ is differentiable). \begin{theorem}[Main Differential Inequality] \label{thm:hessian-bound-for-I} Fix $v \in \interior \Delta$. Let $\Omega$ be an isoperimetric minimizing cluster with $\gamma(\Omega) = v$, and let $A_{ij} = \gamma^{n-1}(\Sigma_{ij})$, where $\Sigma_{ij}$ are the interfaces of $\Omega$. Then for any $y \in E$, there exists an admissible vector-field $X$ such that: \[ \delta_X V = y \;\; \text{ and } \;\; Q(X) \le - y^T L_A^{-1} y , \] where: \[ L_{A} := \sum_{i < j} A_{ij} (e_i - e_j) (e_i-e_j)^T . \] \end{theorem} Recall that by Corollary \ref{cor:pos-def}, $L_A$ is strictly positive-definite on $E$. For the proof, define the ``average'' normals $\overline{\n}_{ij}$ by \[ \overline{\n}_{ij} = \begin{cases} \frac{1}{A_{ij}} \int_{\Sigma_{ij}} \mathbf{n}_{ij}\, d\gamma^{n-1} & A_{ij} > 0 \\ 0 & A_{ij} = 0 \end{cases} . \] By~\eqref{eq:formula-first-variation-of-volume}, we have: \[ (\delta_w V)_i = \sum_{j \neq i} \int_{\Sigma_{ij}} \inr{w}{\mathbf{n}_{ij}} \, d\gamma^{n-1} = \sum_{j \neq i} A_{ij} \inr{w}{\overline{\n}_{ij}} . \] Since $\overline{\n}_{ji} = -\overline{\n}_{ij}$, we can write this as: \[ \delta_w V = \sum_{i<j} A_{ij} (e_i - e_j) \overline{\n}_{ij}^T w = M w , \] where $M: \mathbb{R}^n \to E$ is the following linear operator: \begin{equation} \label{eq:M} M := \sum_{i<j} A_{ij} (e_i - e_j) \overline{\n}_{ij}^T . \end{equation} We first observe the following dichotomy: for a minimizing cluster, either $M$ is surjective or the cluster is ``effectively one-dimensional.'' \begin{definition} The cluster $\Omega$ is called effectively one-dimensional if there exist a cluster $\tilde \Omega$ in $\mathbb{R}$ and $\theta \in S^{n-1}$ such that for all $i$, $\Omega_i$ and $\{x \in \mathbb{R}^n: \inr{x}{\theta} \in \tilde \Omega_i\}$ coincide up to a null-set.
\end{definition} \begin{lemma}\label{lem:dichotomy} Let $n \ge 2$ and let $\Omega$ be an isoperimetric minimizing cluster for the Gaussian measure. Then exactly one of the following possibilities holds: either $M: \mathbb{R}^n \to E$ is surjective, or $\Omega$ is effectively one-dimensional. \end{lemma} \begin{proof} If $\Omega$ is effectively one-dimensional then for all $x \in \Sigma = \cup_{i<j} \Sigma_{ij}$, the normals $\mathbf{n}_{ij}$ coincide with $\pm \theta \in S^{n-1}$, and hence $\overline{\n}_{ij}$ all lie in the linear span of $\theta$, so that $M$ is not surjective (as $n \geq 2$). For the converse direction, we may assume that $\max_i \gamma(\Omega_i) < 1$, otherwise there is nothing to prove. Note that for any $w \in \ker M$, $\delta_w V = M w = 0$, and so by stability (Lemma~\ref{lem:stable}) and~\eqref{eq:formula}, \[ Q(w) = - \sum_{i < j} \int_{\Sigma_{ij}} \inr{w}{\mathbf{n}_{ij}}^2\, d\gamma^{n-1} \geq 0 . \] It follows that $\inr{w}{\mathbf{n}_{ij}}$ must vanish $\gamma^{n-1}$-a.e.\ on each interface $\Sigma_{ij}$. Since $w \in \ker M$ was arbitrary, we see that $\mathbf{n} \in (\ker M)^\perp$ a.e. on $\Sigma$. But since $\gamma^{n-1}(\Sigma) \geq \frac 12 \sum_i I^{(1)}_m(\gamma(\Omega_i)) > 0$ (by applying the single-bubble Gaussian isoperimetric profile $I^{(1)}_m: [0, 1] \to\mathbb{R}_+$ to each cell $\Omega_i$), and since $\abs{\mathbf{n}} = 1$, it follows that $M \neq 0$. Consequently, if $M: \mathbb{R}^n \to E$ is not surjective then necessarily (since $\dim E = 2$) $\dim \ker M = n-1$. This means that there exists $\theta \in S^{n-1}$ such that $\mathbf{n}$ is a.e.\ equal to $\pm \theta$ on $\Sigma$. By rotational invariance of the Gaussian measure, we may assume without loss of generality that $\theta = e_1$. Let us construct the one-dimensional cluster $\tilde \Omega$ witnessing that $\Omega$ is effectively one-dimensional. 
Fix $i \ne j$; since $\mathbf{n}_{ij}$ is continuous on $\Sigma_{ij}$ and takes only two values, it must be locally constant; hence, $H_{ij}(x) = 0$ for all $x \in \Sigma_{ij}$, and so $H_{ij,\gamma}(x) = -\inr{x}{\mathbf{n}_{ij}} \in \{ \pm x_1 \}$ for all $x \in \Sigma_{ij}$. Since $H_{ij,\gamma}$ is constant on $\Sigma_{ij}$, it follows that $\Sigma_{ij}$ is contained in the union of the two hyperplanes $\{x: x_1 = \pm a\}$, for some $a \in \mathbb{R}$. Hence, $\partial^* \Omega_i$ is contained in at most four hyperplanes: call them $\{x: x_1 \in \{\pm a, \pm b\}\}$. Up to modifying $\Omega_i$ on a $\gamma^n$-null set, $\partial^* \Omega_i$ is dense in $\partial \Omega_i$ (see~\cite[Proposition~12.19]{MaggiBook}). Hence, $\partial \Omega_i$ is also contained in $\{x: x_1 \in \{\pm a, \pm b\}\}$. Now, $\mathbb{R} \setminus \{\pm a, \pm b\}$ has at most five connected components. For each of these components $U$, either $U \times \mathbb{R}^{n-1} \subset \Omega_i$ or $(U \times \mathbb{R}^{n-1}) \cap \Omega_i = \emptyset$, because anything else would contradict the fact that $\partial \Omega_i \subset \{x: x_1 \in \{\pm a, \pm b\}\}$. Now define $\tilde \Omega_i$ to be the union of those intervals $U$ satisfying $U \times \mathbb{R}^{n-1} \subset \Omega_i$. Repeating this construction for all $i$, we see that $\Omega$ is effectively one-dimensional. \end{proof} Based on the dichotomy in Lemma~\ref{lem:dichotomy}, we will prove Theorem~\ref{thm:hessian-bound-for-I} in either of the two cases. Note that \emph{a-posteriori} we will show that the unique isoperimetric minimizing clusters $\Omega$ (with $\gamma(\Omega) \in \interior \Delta$) are tripods, which are clearly \emph{not} effectively one-dimensional, but we do not exclude this possibility beforehand.
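To illustrate the non-surjective branch of the dichotomy (this example is for illustration only; such a cluster need not be minimizing), consider a 3-cell cluster of parallel slabs: $\Omega_1 = \{x_1 < a\}$, $\Omega_2 = \{a < x_1 < b\}$, $\Omega_3 = \{x_1 > b\}$ with $a < b$. Then $\Sigma_{13} = \emptyset$ and $\overline{\n}_{12} = \overline{\n}_{23} = e_1$, so that \[ M = \brac{A_{12} (e_1 - e_2) + A_{23} (e_2 - e_3)} e_1^T , \] a map of rank one, whose image is a single line inside the two-dimensional space $E$; in particular, $M$ is not surjective, in accordance with Lemma \ref{lem:dichotomy} for effectively one-dimensional clusters.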
\smallskip We will require the following matrix form of the Cauchy--Schwarz inequality: \begin{lemma}\label{lem:cs} If $X$ is a random vector in $\mathbb{R}^n$ and $Y$ is a random vector in $\mathbb{R}^k$ such that $\mathbb{E} |X|^2 < \infty$, $\mathbb{E} |Y|^2 < \infty$, and $\mathbb{E} Y Y^T$ is non-singular, then \[ (\mathbb{E} XY^T) (\mathbb{E} Y Y^T)^{-1} (\mathbb{E} YX^T) \le \mathbb{E} X X^T \] (in the positive semi-definite sense) with equality if and only if $X = BY$ a.s., for a deterministic matrix $B$. \end{lemma} While this inequality may be found in the literature (e.g.~\cite{Tripathi:99}), we present a short independent proof, including that of the equality case, which we will require for establishing uniqueness of minimizing clusters. \begin{proof}[Proof of Lemma \ref{lem:cs}] Let $Z = X - (\mathbb{E} X Y^T) (\mathbb{E} Y Y^T)^{-1} Y$. Then $Z Z^T \ge 0$ (in the positive semi-definite sense), and since $v^T (\mathbb{E} Z Z^T) v = \mathbb{E} \inr{v}{Z}^2 \geq 0$ for every $v \in \mathbb{R}^n$, also $\mathbb{E} Z Z^T \ge 0$. A direct computation, expanding $Z Z^T$ and taking expectations, verifies that: \[ \mathbb{E} Z Z^T = \mathbb{E} X X^T - (\mathbb{E} XY^T) (\mathbb{E} Y Y^T)^{-1} (\mathbb{E} Y X^T) , \] completing the proof of the inequality. To check the equality case, note that equality occurs iff $\mathbb{E} Z Z^T = 0$, iff $Z = 0$ a.s., iff $X = (\mathbb{E} X Y^T) (\mathbb{E} Y Y^T)^{-1} Y$ a.s. \end{proof} Consider a minimizing cluster $\Omega$ with $\gamma(\Omega) = v \in \interior \Delta$. By~\eqref{eq:formula} and the Cauchy--Schwarz (or Jensen) inequality: \[ Q(w) = \delta_{w}^2 A - \inr{\delta_{w}^2 V}{\lambda} = -\sum_{i<j} \int_{\Sigma_{ij}} \inr{w}{\mathbf{n}_{ij}}^2\, d\gamma^{n-1} \le -\sum_{i<j} \inr{w}{\overline{\n}_{ij}}^2 A_{ij} . \] Denoting: \begin{equation} \label{eq:N} N := \sum_{i<j} A_{ij} \overline{\n}_{ij} \overline{\n}_{ij}^T , \end{equation} we have shown that: \[ Q(w) \leq - w^T N w \;\;\; \forall w \in \mathbb{R}^n .
\] Now choose random vectors $X \in \mathbb{R}^n$ and $Y \in \mathbb{R}^k$ so that with probability $A_{ij}/\sum_{i<j} A_{ij}$, $X = \overline{\n}_{ij}$ and $Y = e_i - e_j$. Then, by definition of $N$, $M$ and $L_A$: \[ \mathbb{E} X X^T = \frac{1}{ \sum_{i<j} A_{ij}} N ~,~ \mathbb{E} Y X^T = \frac{1}{ \sum_{i<j} A_{ij}} M ~,~ \mathbb{E} Y Y^T = \frac{1}{ \sum_{i<j} A_{ij}} L_A . \] Lemma~\ref{lem:cs} implies that $M^T L_A^{-1} M \le N$, yielding: \[ Q(w) \le -w^T N w \leq -w^T M^T L_A^{-1} M w \;\;\; \forall w \in \mathbb{R}^n . \] Consequently, when $M$ is surjective, for any $y \in E$, we may choose $w \in \mathbb{R}^n$ so that: \[ \delta_w V = M w = y \;\; \text{ and } \;\; Q(w) \leq -y^T L_A^{-1} y , \] completing the proof of Theorem~\ref{thm:hessian-bound-for-I} in that case. \medskip Now suppose that $M$ is not surjective. By Lemma~\ref{lem:dichotomy}, $\Omega$ is effectively one-dimensional. To lighten our notation, we will assume that $\Omega$ itself is a cluster in $\mathbb{R}$; by the product structure of the Gaussian measure, everything that we do here can be lifted to the original cluster in $\mathbb{R}^n$ by taking Cartesian product with $\mathbb{R}^{n-1}$. Now, the fact that $H_{ij,\gamma}$ is constant implies that each $\Sigma_{ij}$ can contain at most two points. For each $i \ne j$, define the vector-field $X_{ij}$ by setting $X_{ij} = \mathbf{n}_{ij}$ on $\Sigma_{ij}$, $X_{ij} = 0$ on all other interfaces, and extending $X_{ij}$ to a $C_c^\infty$ vector-field on $\mathbb{R}$ (it is possible to make it have compact support because there are at most finitely many points in $\Sigma_{ij}$). Choose $y \in E$, let $a = L_A^{-1} y$, and let $X = \sum_{i<j} (a_i - a_j) X_{ij}$.
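As a sanity check (not part of the argument), the matrix Cauchy--Schwarz inequality of Lemma~\ref{lem:cs}, together with its equality case, can be verified numerically on finitely supported joint distributions such as the one constructed above. The following sketch uses arbitrary random data; all names and values are illustrative choices:

```python
import numpy as np

def cs_gap(xs, ys, probs):
    """E[X X^T] - (E[X Y^T]) (E[Y Y^T])^{-1} (E[Y X^T]) for a finitely
    supported joint distribution: (X, Y) = (xs[k], ys[k]) w.p. probs[k]."""
    Exx = sum(p * np.outer(x, x) for x, y, p in zip(xs, ys, probs))
    Exy = sum(p * np.outer(x, y) for x, y, p in zip(xs, ys, probs))
    Eyy = sum(p * np.outer(y, y) for x, y, p in zip(xs, ys, probs))
    return Exx - Exy @ np.linalg.inv(Eyy) @ Exy.T

rng = np.random.default_rng(0)
m, n, k = 12, 3, 2                      # support size, dim of X, dim of Y
xs = rng.standard_normal((m, n))
ys = rng.standard_normal((m, k))
probs = rng.random(m)
probs /= probs.sum()

# The gap must be positive semi-definite (the lemma's inequality).
gap = cs_gap(xs, ys, probs)
assert np.linalg.eigvalsh(gap).min() > -1e-10

# Equality case: X = B Y a.s. makes the gap vanish.
B = rng.standard_normal((n, k))
gap0 = cs_gap([B @ y for y in ys], ys, probs)
assert np.abs(gap0).max() < 1e-10
```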
Because the first two derivatives of the one-dimensional Gaussian density $\varphi$ are $\varphi'(x) = -x\varphi(x)$ and $\varphi''(x) = (x^2 - 1) \varphi(x)$, we easily compute that \begin{align*} \delta_X V_i &= \sum_{j \ne i} (a_i - a_j) \sum_{x \in \Sigma_{ij}} \varphi(x) = (L_A a)_i = y_i ~,\\ \delta_X^2 A &= \sum_{i<j} (a_i - a_j)^2 \sum_{x \in \Sigma_{ij}} (x^2 - 1) \varphi(x) ~,\\ \delta_X^2 V_i &= \sum_{j \ne i} (a_i - a_j)^2 \sum_{x \in \Sigma_{ij}} \mathbf{n}_{ij}(x) \cdot (-x \varphi(x)) ~. \end{align*} On the other hand, at $x \in \Sigma_{ij}$ one has $H_{ij,\gamma} = \mathbf{n}_{ij}(x) \cdot (-x)$. Since $\mathbf{n}_{ij} \in \{-1, 1\}$, it follows that $H_{ij,\gamma} \mathbf{n}_{ij}(x) \cdot (-x) = x^2$, and so: \begin{align*} & \inr{\lambda}{\delta_X^2 V} = \sum_{i} \lambda_i \delta_X^2 V_i = \sum_{i<j} (\lambda_i - \lambda_j) (a_i - a_j)^2 \sum_{x \in \Sigma_{ij}} \mathbf{n}_{ij}(x) \cdot (-x \varphi(x)) \\ & = \sum_{i<j} H_{ij,\gamma} (a_i - a_j)^2 \sum_{x \in \Sigma_{ij}} \mathbf{n}_{ij}(x) \cdot (-x \varphi(x)) = \sum_{i<j} (a_i - a_j)^2 \sum_{x \in \Sigma_{ij}} x^2 \varphi(x) . \end{align*} Consequently: \begin{align*} Q(X) &= \delta_X^2 A - \inr{\lambda}{\delta_X^2 V} = -\sum_{i<j} (a_i - a_j)^2 \sum_{x \in \Sigma_{ij}} \varphi(x) \\ &= -\sum_{i<j} (a_i - a_j)^2 A_{ij} = -a^T L_A a = -y^T L_A^{-1} y. \end{align*} This completes the proof of Theorem~\ref{thm:hessian-bound-for-I}. \begin{remark} A slight variation on the above argument actually shows that if the minimizing cluster $\Omega$ is effectively one-dimensional (with each cell of $\tilde \Omega$ consisting of finitely many intervals), then up to null-sets, each cell of $\tilde{\Omega}$ must in fact be a single (connected) interval. 
However, we will show in Section \ref{sec:uniqueness} that minimizing clusters $\Omega$ with $\gamma(\Omega) \in \interior \Delta$ must actually be tripod clusters (up to null-sets), and so in particular they cannot be effectively one-dimensional, and hence there is no need to insist on this additional information here. \end{remark} \section{Proof of the Double-Bubble Theorem} \label{sec:proof} Recall that the Gaussian double-bubble profile $I$ is defined by \[ I(v) = \inf\{P_\gamma(\Omega): \Omega \text{ is a 3-cluster with $\gamma(\Omega) = v$}\}. \] We are finally ready to prove our main theorem: that the Gaussian double-bubble profile $I$ agrees with the model (tripod) profile $I_m$. We begin with two straightforward observations about $I$. \begin{lemma}\label{lem:semi-cont} $I : \Delta \to \mathbb{R}_+$ is lower semi-continuous. \end{lemma} \begin{proof} The proof is identical to that of Theorem \ref{thm:Almgren} (i). If $v_r \to v \in \Delta$, let $\Omega^r$ be a minimizing cluster with $\gamma(\Omega^r) = v_r$. Note that for all $r$, $P_\gamma(\Omega^r) = I(v_r) \leq I_m(v_r) \leq \max_{v \in \Delta} I_m(v) =: C < \infty$, and that $P_\gamma(\Omega^r_i) \leq P_\gamma(\Omega^r)$ for each $i$. Consequently, by passing to a subsequence, each of the cells $\Omega^r_i$ converges in $L^1(\gamma)$ to $\Omega_i$. By Dominated Convergence (as the total mass is finite), we must have $\gamma(\Omega_i) = v_i$, and the limiting $\Omega$ is easily seen to be a cluster (possibly after a measure-zero modification to ensure disjointness of the cells). Consequently: \[ I(v) \le P_\gamma(\Omega) \le \liminf_{r \to \infty} P_\gamma(\Omega^r) = \liminf_{r \to \infty} I(v_r). \] Hence, $I$ is lower semi-continuous. \end{proof} \begin{lemma}\label{lem:agreement-on-boundary} On $\partial \Delta$, $I = I_m$. \end{lemma} \begin{proof} Take $v \in \partial \Delta$. Then some $v_i$ is zero; without loss of generality suppose that $v_1 = 0$.
The (single-bubble) Gaussian isoperimetric inequality states that $I(v) = I^{(1)}_m(v_2) = I^{(1)}_m(v_3)$, where recall $I^{(1)}_m : [0,1] \to \mathbb{R}_+$ denotes the single-bubble Gaussian isoperimetric profile. But by Lemma~\ref{lem:tripod-profile-continuous} this is also equal to $I_m(v)$, concluding the proof. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:main-I-I_m}] By definition, $I \le I_m$, so we will show the other direction. Since $I_m$ is continuous and $I$ is lower semi-continuous on $\Delta$, $I - I_m$ attains a global minimum at $v_0 \in \Delta$. Assume by way of contradiction that $I(v_0) - I_m(v_0) < 0$. By Lemma~\ref{lem:agreement-on-boundary}, we must have $v_0 \in \interior \Delta$. Let $\Omega$ be a minimizing cluster with $\gamma(\Omega) = v_0$. Since $v_0$ is a minimum point of $I - I_m$, for any smooth flow $F_t$ along a vector-field $X$, we have by definition: \begin{equation}\label{eq:I_m-and-I} I_m(\gamma(F_t(\Omega))) \le I(\gamma(F_t(\Omega))) - (I - I_m)(v_0) \le P_\gamma(F_t(\Omega)) - (I - I_m)(v_0) . \end{equation} The two sides above are equal when $t = 0$, and twice differentiable in $t$. By comparing first and second derivatives at $t=0$, we conclude that: \begin{equation} \label{eq:compare-first} \inr{\nabla I_m(v_0)}{\delta_X V} = \delta_X A ~, \end{equation} \begin{equation} \label{eq:compare-second} \nabla^2_{\delta_X V,\delta_X V} I_m(v_0) + \inr{\nabla I_m(v_0)}{\delta_X^2 V} \le \delta_X^2 A. \end{equation} Now recall that by (\ref{eq:formula-first-variation-of-area}), we also have $\inr{\lambda}{\delta_X V} = \delta_X A$ for all admissible vector-fields $X$, where as usual $\lambda$ is the unique element of $E^*$ such that $\lambda_i - \lambda_j = H_{ij,\gamma}(\Omega)$. Since for every $y \in E$ there is a vector-field $X$ as above with $\delta_X V = y$ (e.g. by Theorem~\ref{thm:hessian-bound-for-I}), it follows from~\eqref{eq:compare-first} that necessarily $\nabla I_m(v_0) = \lambda$.
Applying this to~\eqref{eq:compare-second}, we deduce: \[ \nabla^2_{\delta_X V,\delta_X V} I_m(v_0) \le \delta_X^2 A - \inr{\delta_X^2 V}{\lambda} = Q(X). \] By Theorem~\ref{thm:hessian-bound-for-I}, for any $y \in E$ we may choose $X$ so that: \[ \nabla^2_{y,y} I_m(v_0) \le Q(X) \le - y^T L_A^{-1} y \] (where recall $A = A(\Omega) = \set{A_{ij}}$ is the collection of interface measures). It follows that $\nabla^2 I_m(v_0) \le - L_A^{-1}$ in the positive semi-definite sense, and since $L_A$ is positive-definite, we deduce that $-(\nabla^2 I_m(v_0))^{-1} \le L_A$. It follows by Proposition~\ref{prop:I_m-equation} that: \[ 2 I_m(v_0) = -\tr[(\nabla^2 I_m(v_0))^{-1}] \le \tr(L_A) = 2 \sum_{i<j} A_{ij} = 2 I(v_0) , \] in contradiction to the assumption that $I(v_0) < I_m(v_0)$. \end{proof} \section{Uniqueness of Isoperimetric Minimizers} \label{sec:uniqueness} In Section \ref{sec:model} we have constructed tripod clusters on $E \simeq \mathbb{R}^2$. By taking Cartesian product with $\mathbb{R}^{n-2}$ in arbitrary orientation, we obtain tripod clusters in $\mathbb{R}^n$ ($n \geq 2$). Namely, we say that $\Omega$ is a tripod cluster in $\mathbb{R}^n$ if there exist three unit-vectors $\{t_i \in \mathbb{R}^n: 1 \le i \le 3\}$ summing to zero, and $x_0 \in \mathbb{R}^n$, such that up to $\gamma^n$-null sets, \[ \Omega_i = \interior \{x \in \mathbb{R}^n: \max_j \{\inr{x - x_0}{t_j} \} = \inr{x - x_0}{t_i} \}. \] Equivalently, $\Omega$ is a tripod cluster if there exist three unit-vectors $\{n_{ij} \in \mathbb{R}^n: (i,j) \in \mathcal{C} \}$ summing to zero, and three scalars $\{ h_{ij} \in \mathbb{R} : (i,j) \in \mathcal{C} \}$ summing to zero, so that denoting $n_{ji} := -n_{ij}$ and $h_{ji} := - h_{ij}$, we have up to $\gamma^n$-null sets \[ \Omega_i = \bigcap_{j \neq i} \{ x \in \mathbb{R}^n : \inr{x}{n_{ij}} < h_{ij} \} . \] Indeed, the equivalence is easily seen by using $n_{ij} = (t_j - t_i) / \sqrt{3}$ and $h_{ij} = \inr{x_0}{n_{ij}}$.
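The asserted equivalence between the two descriptions of a tripod cluster can also be checked numerically in the planar case $n = 2$. In the following illustrative sketch (not part of the proof), the rotation angle of the fan of directions $t_i$ and the point $x_0$ are arbitrary random choices:

```python
import numpy as np

rng = np.random.default_rng(1)

# Three unit vectors summing to zero: a rotated 120-degree fan.
theta0 = rng.uniform(0, 2 * np.pi)
t = np.array([[np.cos(theta0 + 2 * np.pi * i / 3),
               np.sin(theta0 + 2 * np.pi * i / 3)] for i in range(3)])
assert np.allclose(t.sum(axis=0), 0)

x0 = rng.standard_normal(2)

def n_vec(i, j):
    # n_{ij} = (t_j - t_i)/sqrt(3); these are unit vectors.
    return (t[j] - t[i]) / np.sqrt(3)

for i in range(3):
    for j in range(3):
        if i != j:
            assert np.isclose(np.linalg.norm(n_vec(i, j)), 1.0)

def cell_argmax(x):
    # i such that <x - x0, t_i> is maximal.
    return int(np.argmax((x - x0) @ t.T))

def cell_halfplanes(x):
    # i such that <x, n_{ij}> < h_{ij} = <x0, n_{ij}> for all j != i, if any.
    for i in range(3):
        if all(np.dot(x, n_vec(i, j)) < np.dot(x0, n_vec(i, j))
               for j in range(3) if j != i):
            return i
    return None  # x lies on an interface (a null set)

pts = 3 * rng.standard_normal((2000, 2))
assert all(cell_halfplanes(x) in (None, cell_argmax(x)) for x in pts)
```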
\begin{theorem}\label{thm:unique-minimizer} If $\Omega$ is an isoperimetric minimizing cluster with $\gamma(\Omega) \in \interior \Delta$ then $\Omega$ is a tripod cluster. \end{theorem} Now that we know Theorem~\ref{thm:main-I-I_m}, the idea is to revisit the various inequalities used in that proof, and observe that they must be sharp in the case of minimizing clusters. We begin by restating the relationship between variations and the derivatives of $I$. We essentially already used this relationship in the proof of Theorem~\ref{thm:main-I-I_m}, but this is easier to state now that we know that $I = I_m$ is smooth on $\interior \Delta$. \begin{lemma}\label{lem:minimizer-variation} Let $\Omega$ be an isoperimetric minimizing cluster with $\gamma(\Omega) = v \in \interior \Delta$, and let $\lambda \in E^*$ be given by Theorem \ref{thm:first-order-conditions-expanded}, so that $\lambda_i - \lambda_j = H_{ij,\gamma}$, where $H_{ij,\gamma}$ are the weighted mean-curvatures of $\Omega$'s interfaces. Then $\nabla I(v) = \lambda$, and for any admissible vector-field $X$ we have: \begin{equation} \label{eq:minimizer-variation-conc} (\delta_X V)^T \nabla^2 I(v) \delta_X V \le Q(X) = \delta^2_X A - \inr{\lambda}{\delta^2_X V} . \end{equation} \end{lemma} \begin{proof} If $\set{F_t}_{t \in \mathbb{R}}$ is a flow along $X$, the definition of $I$ implies that \begin{equation}\label{eq:I-inequality-pointwise} I(\gamma(F_t(\Omega))) \le P_\gamma(F_t(\Omega)), \end{equation} with equality at $t=0$ because $\Omega$ is minimizing. It follows that the (smooth) functions on either side above must be tangent at $t=0$, and so equating derivatives at $t=0$ we deduce: \[ \inr{\nabla I(v)}{\delta_X V} = \delta_X A, \] for all admissible vector-fields $X$. Since for every $y \in E$ there is a vector-field $X$ as above with $\delta_X V = y$ (e.g. by Theorem~\ref{thm:hessian-bound-for-I}), it follows from the displayed identity above that necessarily $\nabla I(v) = \lambda$.
In addition, we must have domination between the second derivatives at $t=0$ of the two sides in~\eqref{eq:I-inequality-pointwise}, and so differentiating twice at $t=0$ we obtain: \[ \inr{\nabla^2 I(v) \delta_X V}{\delta_X V} + \inr{\nabla I(v)}{\delta_X^2 V}\leq \delta_X^2 A . \] Rearranging terms and using $\nabla I(v) = \lambda$, (\ref{eq:minimizer-variation-conc}) follows. \end{proof} We immediately deduce from Lemma~\ref{lem:minimizer-variation} and Theorem~\ref{thm:hessian-bound-for-I} that $\nabla^2 I(v) \le - L_A^{-1}$ in the positive semi-definite sense. We now observe that this in fact must be an equality. Note that this characterizes the boundary measures $A_{ij}$ for a minimizing cluster. \begin{lemma}\label{lem:minimizer} Let $\Omega$ be an isoperimetric minimizing cluster with $\gamma(\Omega) = v \in \interior \Delta$, and let $A_{ij} = \gamma^{n-1}(\Sigma_{ij})$, where $\Sigma_{ij}$ are the interfaces of $\Omega$. Let $A^m_{ij} = \gamma^{n-1}(\Sigma^m_{ij})$, where $\Sigma^m_{ij}$ are the interfaces of a model tripod cluster $\Omega^m$ with $\gamma(\Omega^m) = v$. Then: \begin{enumerate} \item $\nabla^2 I(v) = - L_A^{-1}$. \item $A_{ij} = A^m_{ij}$ for all $i \neq j$. \end{enumerate} \end{lemma} \begin{proof} By the preceding comments, we know that $\nabla^2 I(v) \le - L_A^{-1}$, and hence $-(\nabla^2 I(v))^{-1} \leq L_A$. On the other hand, by Proposition~\ref{prop:I_m-equation}: \[ \tr[-(\nabla^2 I(v))^{-1}] = \tr[-(\nabla^2 I_m(v))^{-1}] = \tr(L_{A^m}) = 2 I_m(v) = 2 I(v) = \tr(L_A) . \] Since $B \geq 0$ and $\tr(B) = 0$ imply that $B=0$ for a symmetric matrix $B$ (applied here to $B = L_A + (\nabla^2 I(v))^{-1}$, whose trace vanishes by the above), it follows that necessarily $-(\nabla^2 I(v))^{-1} = L_A$, thereby concluding the proof of the first assertion. The second follows by inspecting the off-diagonal elements of the matrices on either side of the following equality: \[ L_A = - (\nabla^2 I(v))^{-1} = - (\nabla^2 I_m(v))^{-1} = L_{A^m} .
\] \end{proof} \begin{corollary}\label{cor:minimizer} Under the same assumptions and notation as in Lemma \ref{lem:minimizer}, $A_{ij} > 0$ for all $i \neq j$, and for any admissible vector-field $X$: \[ - (\delta_X V)^T L_A^{-1} (\delta_X V) \le Q(X) . \] \end{corollary} \begin{proof} Immediate from the previous lemma since $A^m_{ij} > 0$ and by (\ref{eq:minimizer-variation-conc}). \end{proof} Let us now complete the proof of Theorem~\ref{thm:unique-minimizer}. Recall the definitions (\ref{eq:M}) and (\ref{eq:N}) of $M$ and $N$ from the proof of Theorem~\ref{thm:hessian-bound-for-I}, and recall that $M w = \delta_w V$. In the proof of Theorem~\ref{thm:hessian-bound-for-I}, we proved that for all $w \in \mathbb{R}^n$: \begin{align} Q(w) & = \delta^2_{w} A - \inr{\lambda}{\delta^2_{w} V} = -\sum_{i<j} \int_{\Sigma_{ij}} \inr{w}{\mathbf{n}_{ij}}^2\, d\gamma^{n-1} \notag \\ &\le -\sum_{i<j} \inr{w}{\overline{\n}_{ij}}^2 A_{ij} = - w^T N w \le -(M w)^T L_{A}^{-1} (M w) . \label{eq:steps-in-inequality} \end{align} On the other hand, by Corollary~\ref{cor:minimizer} applied to $X \equiv w$: \[ -(M w)^T L_{A}^{-1} (M w) \le Q(w) . \] It follows that all three inequalities above must be equalities for every $w \in \mathbb{R}^n$. From the first inequality in (\ref{eq:steps-in-inequality}), we conclude that on each $\Sigma_{ij}$, $\mathbf{n}_{ij}$ is $\gamma^{n-1}$-a.e.\ constant; as $\Sigma_{ij}$ are smooth $(n-1)$-dimensional submanifolds, it follows that $\mathbf{n}_{ij}$ is constant. Since $A_{ij} = \gamma^{n-1}(\Sigma_{ij}) > 0$ by Corollary~\ref{cor:minimizer}, it follows that $\overline{\n}_{ij}$ are non-zero and hence unit-vectors for all $i \neq j$. From the second inequality and the characterization of equality cases in Lemma~\ref{lem:cs}, we deduce the existence of an $n \times k$ matrix $B$ such that $B(e_i - e_j) = \overline{\n}_{ij}$ for every $i \ne j$ (using again that $A_{ij} > 0$ for all $i \neq j$).
In particular, this implies that $\sum_{(i,j) \in \mathcal{C}} \overline{\n}_{ij} = B \sum_{(i,j) \in \mathcal{C}} (e_i - e_j) = 0$. Since the normal vectors $\mathbf{n}_{ij}$ are constant, the unweighted mean curvatures $H_{ij}$ of $\Sigma_{ij}$ vanish, and so $H_{ij,\gamma}(x) = -\inr{x}{\mathbf{n}_{ij}(x)} = -\inr{x}{\overline{\n}_{ij}}$ for all $x \in \Sigma_{ij}$. Recalling that each $H_{ij,\gamma}$ is constant, we must have \begin{equation}\label{eq:interface-containment} \Sigma_{ij} \subset \{x \in \mathbb{R}^n: \inr{x}{\overline{\n}_{ij}} = -H_{ij,\gamma}\}. \end{equation} Denoting $S_{ij} := \{x \in \mathbb{R}^n: \inr{x}{\overline{\n}_{ij}} < -H_{ij,\gamma} \}$, we have $\Sigma_{ij} \subset \partial S_{ij}$. We will show that up to a $\gamma^n$-null set, $\Omega_i = \bigcap_{j \ne i} S_{ij}$; as $\sum_{(i,j) \in \mathcal{C}} \overline{\n}_{ij} = 0$ and $\sum_{(i,j) \in \mathcal{C}} H_{ij,\gamma} = 0$ by Theorem~\ref{thm:first-order-conditions-expanded}, this will establish that $\Omega$ is a tripod cluster and conclude the proof of Theorem~\ref{thm:unique-minimizer}. To this end, fix $i$, and consider the open set $S_{ab} \cap S_{cd}$, for all 4 possibilities when $(a,b) \in \set{(i,j) , (j,i)}$ and $(c,d) \in \set{(i,k), (k,i)}$, where $i,j,k$ are all distinct. Note that the relative perimeter $P(\Omega_i ; S_{ab} \cap S_{cd})$ is zero since $\H^{n-1}(\partial^* \Omega_i \setminus \cup_{j \neq i} \Sigma_{ij}) = 0$ and $\Sigma_{ij} \subset \partial S_{ij}$. As $S_{ab} \cap S_{cd}$ is open and connected, it follows by \cite[Exercise 12.17]{MaggiBook} that we may modify $\Omega_i$ by a $\gamma^{n}$-null set, so that $\Omega_i \cap S_{ab} \cap S_{cd}$ is either the empty set or $S_{ab} \cap S_{cd}$, for all 4 possibilities above. 
We now claim that $\Omega_i \cap S_{ij} \cap S_{ik} = S_{ij} \cap S_{ik}$, since otherwise, in all remaining 8 possibilities, we would have $\mathbf{n}_{ij} = -\overline{\n}_{ij}$ or $\mathbf{n}_{ik} = -\overline{\n}_{ik}$, in violation of $A_{ij} , A_{ik} > 0$. We therefore deduce that $\Omega_i \supset \bigcap_{j \ne i} S_{ij}$ for every $i$. Since $\gamma^n(\mathbb{R}^n \setminus \cup_{i} \Omega_i) = 0$, we conclude that $\Omega_i = \bigcap_{j \ne i} S_{ij}$ up to a $\gamma^n$-null set. \section{Concluding Remarks} \label{sec:conclude} \subsection{Extension to measures having strongly convex potentials} Theorem \ref{thm:main-I-I_m} may be immediately extended to probability measures having strongly convex potentials. \begin{definition} A probability measure $\mu$ on $\mathbb{R}^n$ is said to have a $K$-strongly convex potential, $K > 0$, if $\mu = \exp(-W(x)) dx$ with $W \in C^\infty$ and $\mathrm{Hess}\, W \geq K \cdot \mathrm{Id}$. \end{definition} \begin{theorem} \label{thm:CaffarelliCor} Let $\mu$ be a probability measure having a $K$-strongly convex potential. Denote by $I_\mu : \Delta \rightarrow \mathbb{R}_+$ its associated $3$-cluster isoperimetric profile, given by: \[ I_\mu(v) := \inf \set{P_\mu(\Omega) : \text{$\Omega$ is a $3$-cluster with $\mu(\Omega) = v$}} . \] Then: \[ I_\mu \geq \sqrt{K} I_m \text{ on } \Delta . \] \end{theorem} The proof is an immediate consequence of the following remarkable theorem by L.~Caffarelli \cite{CaffarelliContraction} (see also \cite{KimEMilmanGeneralizedCaffarelli} for an alternative proof and an extension to a more general scenario): \begin{theorem}[Caffarelli's Contraction Theorem] If $\mu$ is a probability measure having a $K$-strongly convex potential, there exists a $C^{\infty}$ diffeomorphism $T : \mathbb{R}^n \rightarrow \mathbb{R}^n$ which pushes forward the Gaussian measure $\gamma$ onto $\mu$ and which is $\frac{1}{\sqrt{K}}$-Lipschitz, i.e.
$\abs{T(x) - T(y)} \leq \frac{1}{\sqrt{K}} \abs{x-y}$ for all $x,y \in \mathbb{R}^n$. \end{theorem} We will require the following calculation: \begin{lemma} Let $T : \mathbb{R}^n \to \mathbb{R}^n$ denote a $C^{\infty}$ diffeomorphism pushing forward $\mu_1 = \Psi_1(x) dx$ onto $\mu_2 = \Psi_2(y) dy$, where $\Psi_1,\Psi_2$ are strictly-positive $C^{\infty}$ densities. Let $X$ denote a $C^\infty$ vector-field on $\mathbb{R}^n$, and let $Y$ denote the vector-field obtained by pushing forward $X$ via the differential $dT$, $Y = (dT)_* X$, given by: \[ Y(y) = (dT)_* X(y) := (dT \cdot X)(T^{-1} y) . \] Then $Y$ is $C^{\infty}$-smooth and satisfies: \[ (\div_{\mu_2} Y)(T x) = (\div_{\mu_1} X) (x) \;\;\; \forall x \in \mathbb{R}^n . \] \end{lemma} \noindent The proof, which we leave as an exercise, is a simple application of the change-of-variables formula: \[ \frac{\Psi_1(x)}{\Psi_2(T x)} = \det dT(x) . \] \begin{proof}[Proof of Theorem \ref{thm:CaffarelliCor}] Let $T$ be the map from Caffarelli's theorem, pushing forward $\gamma$ onto $\mu$. For any $C^{\infty}$ compactly-supported vector-field $X$ with $\abs{X} \leq 1$, denote $Y := (dT)_* X$, and observe that since $T$ is $\frac{1}{\sqrt{K}}$-Lipschitz, $Y$ is also $C^{\infty}$, compactly-supported, and satisfies $\abs{Y} \leq \frac{1}{\sqrt{K}}$. In addition, for any Borel subset $U$ in $\mathbb{R}^n$, observe that: \[ \int_U \div_{\mu} Y (y) d\mu(y) = \int_{T^{-1} U} ( \div_{\mu} Y) (Tx) d\gamma(x) = \int_{T^{-1} U} \div_{\gamma^n} X (x) d\gamma(x) . \] Consequently, taking supremum over all such vector-fields $X$, we deduce that: \[ \frac{1}{\sqrt{K}} P_{\mu}(U) \geq P_{\gamma}(T^{-1} U) . \] On the other hand, by definition, $\mu(U) = \gamma(T^{-1} U)$.
Applying these observations to the cells of an arbitrary $3$-cluster $\Omega$, and applying Theorem \ref{thm:main-I-I_m}, we deduce: \[ \frac{1}{\sqrt{K}} P_{\mu}(\Omega) \geq P_{\gamma}(T^{-1} \Omega) \geq I_m(\gamma(T^{-1}(\Omega))) = I_m(\mu(\Omega)) . \] It follows that $I_\mu \geq \sqrt{K} I_m$, as asserted. \end{proof} \begin{remark} It is possible to extend the above argument to measures $\mu$ with \emph{non-smooth} densities having $K$-strongly convex potentials, namely $\mu = \exp(-W(x)) dx$ with $U(x) = W(x) - \frac{K}{2} \abs{x}^2 : \mathbb{R}^n \rightarrow \mathbb{R} \cup \{+\infty\}$ being convex, but we do not insist on this generality here. \end{remark} \subsection{Functional Version} It is also possible to obtain a functional version of the Gaussian double-bubble isoperimetric inequality for locally Lipschitz functions $f : \mathbb{R}^n \rightarrow \Delta^{(2)}$, in the spirit of Bobkov's functional version of the classical single-bubble one \cite{BobkovGaussianIsopInqViaCube}. This, together with additional related analytic directions, will be described in \cite{EMilmanNeeman-FunctionalVersions}. \subsection{Gaussian Multi-Bubble Conjecture} It is also possible to extend our results and to verify the Gaussian Multi-Bubble Conjecture, described in the Introduction, in full generality for all $k \leq n+1$. \smallskip To establish the conjecture when $k \geq 4$, we require additional regularity and structural information on the boundary of the interfaces $\Sigma_{ij}$ in $\mathbb{R}^m$ for $m \leq k-2$. As described in Remark \ref{rem:no-higher-regularity}, such results have been obtained by various authors: F.~Morgan when $m=2$ \cite{MorganSoapBubblesInR2}, J.~Taylor when $m=2,3$ \cite{Taylor-SoapBubbleRegularityInR3}, and B.~White \cite{White-SoapBubbleRegularityInRn} and very recently M.~Colombo, N.~Edelen and L.~Spolaor \cite{CES-RegularityOfMinimalSurfacesNearCones} when $m \geq 4$.
We use these results to construct variations of the isoperimetric minimizers which are more complicated than merely translations. As the argument is much more involved and requires several additional ingredients on top of the ones already introduced in this work, we will develop the proof in a subsequent work \cite{EMilmanNeeman-GaussianMultiBubbleConj}. \subsection{Stable stationary clusters have flat interfaces} As an intermediate step in our proof of the Gaussian Multi-Bubble Conjecture for general $k \leq n+1$, we will also show in \cite{EMilmanNeeman-GaussianMultiBubbleConj} that clusters satisfying the aforementioned regularity assumptions, which are isoperimetrically stationary: \[ \delta_X V = 0 \;\; \Rightarrow \;\; \delta_X A = 0 , \] and stable: \[ \delta_X V = 0 \;\; \Rightarrow \;\; Q(X) \geq 0 , \] necessarily have flat interfaces. In particular, this applies to the case $k=3$ and $n \geq 2$ treated in this work, and adds additional information to the structure of clusters which are not necessarily isoperimetrically minimizing. In the single-bubble case ($k=2$), this was previously shown by McGonagle and Ross in \cite{McGonagleRoss:15}. \bibliographystyle{plain}
\section{Introduction} The Lugiato-Lefever equation \cite{Lugiato_Lefever1987} is the most commonly used model to describe electromagnetic fields inside a resonant cavity that is pumped by a strong continuous laser source. Inside the cavity the electromagnetic field propagates and suffers losses due to curvature and/or material imperfections. Most importantly, the cavity consists of a Kerr-nonlinear material so that, triggered by modulation instability, the field may experience a nonlinear interaction of the pumped and resonantly enhanced modes of the cavity. Under appropriate driving conditions of the resonant cavity and the laser, a stable Kerr-frequency comb may form in the cavity, which is a spatially localized and spectrally broad waveform. Since their discovery by the 2005 Nobel Prize laureate Theodor H\"ansch, frequency combs have seen an enormously wide field of applications, e.g., in high-capacity optical communications \cite{marin2017microresonator}, ultrafast optical ranging \cite{trocha2018ultrafast}, optical frequency metrology \cite{Udem2002}, or spectroscopy \cite{picque2019frequency,yang2017microresonator}. The Lugiato-Lefever equation (LLE) is an amplitude equation for the electromagnetic field inside the cavity derived by means of the slowly varying envelope approximation. In the following we assume that the cavity is a ring-shaped microresonator with normalized perimeter $2\pi$. Using dimensionless quantities and writing $u(x,t)=\sum_{k\in\Z} u_k(t)\mathrm{e}^{\mathrm{i} kx}$ for the slowly varying and $2\pi$-periodic amplitude of the electromagnetic field, the LLE in its original form \cite{Lugiato_Lefever1987} reads as \begin{equation}\label{LLE_original} \mathrm{i} \partial_t u =- d \partial_x^2 u+(\zeta-\mathrm{i}\mu)u-|u|^2u+ \mathrm{i}f_0, \qquad (x,t) \in \mathbb{T} \times \mathbb{R}, \end{equation} where $\mathbb{T}$ is a circle of length $2\pi$.
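For illustration only, the periodic LLE \eqref{LLE_original} can be integrated numerically by a standard split-step Fourier scheme, treating the Kerr nonlinearity and the pump pointwise and the linear part exactly in Fourier space. The following minimal sketch is not taken from the literature cited here, and all parameter values in it are arbitrary illustrative choices:

```python
import numpy as np

def lle_step(u, dt, d, zeta, mu, f0, k):
    """One split step for i u_t = -d u_xx + (zeta - i mu) u - |u|^2 u + i f0,
    equivalently u_t = i d u_xx - (mu + i zeta) u + i |u|^2 u + f0."""
    # Kerr rotation (exact for u_t = i|u|^2 u) plus an Euler step for the pump.
    u = u * np.exp(1j * np.abs(u) ** 2 * dt) + f0 * dt
    # Dispersion, detuning and damping: exact multiplier in Fourier space.
    u_hat = np.fft.fft(u) * np.exp((-1j * d * k ** 2 - mu - 1j * zeta) * dt)
    return np.fft.ifft(u_hat)

N = 256
k = np.fft.fftfreq(N, 1.0 / N)          # integer wavenumbers on the circle
d, zeta, mu, f0 = -0.01, 2.0, 1.0, 1.5  # illustrative values only

u = np.full(N, 0.1 + 0j)                # small constant initial field
for _ in range(2000):
    u = lle_step(u, 1e-3, d, zeta, mu, f0, k)

# The damped, driven field stays bounded (roughly |u| ~ f0/mu).
assert np.isfinite(u).all() and np.abs(u).max() < 10.0
```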
The dispersion relation for the $k$-th Fourier mode of the resonator is given in the form $\omega_k = \omega_0+d_1k+d_2k^2$ with $d := \frac{2}{\kappa}d_2$ being the normalized dispersion coefficient and $\kappa>0$ being the cavity decay rate. The detuning value $\zeta$ represents the offset between the laser frequency $\omega_{p_0}$ and the closest resonance frequency $\omega_0$ of the zero-mode $k_0=0$ of the resonator, and the value $\mu$ quantifies the damping coefficient. Finally, $f_0$ stands for the pump strength, with pump power $|f_0|^2$. More recently, novel pumping schemes have been discussed \cite{Taheri_2017}, where instead of one monochromatic laser pump one uses a dual laser pump with two different frequencies as a source term. Using again dimensionless quantities, the resulting equation is given by \begin{equation}\label{LLE_dual} \mathrm{i} \partial_t u =- d \partial_x^2 u+(\zeta-\mathrm{i}\mu)u-|u|^2u+ \mathrm{i}f_0 + \mathrm{i}f_1\mathrm{e}^{\mathrm{i}(k_1 x-\nu_1 t)}, \qquad (x,t) \in \mathbb{T} \times \mathbb{R}, \end{equation} cf. \cite{Gasmi_Jahnke_Kirn_Reichel,Gasmi_Peng_Koos_Reichel,Taheri_2017} for a detailed derivation. In contrast to \eqref{LLE_original} there is now a second source term with pump strength $f_1$, and $k_1$ denotes the second pumped mode (the first pumped mode is again $k_0=0$). This gives rise to two detuning variables $\zeta=\frac{2}{\kappa}(\omega_0-\omega_{p_0})$, $\zeta_1=\frac{2}{\kappa}(\omega_{k_1}-\omega_{p_1})$, which define $\nu_1=\zeta-\zeta_1+d k_1^2$. One of the main outcomes of \cite{Gasmi_Peng_Koos_Reichel} is that the stationary states of \eqref{LLE_dual} are far more localized than the stationary states of \eqref{LLE_original}, and the best results can be achieved when $f_0=f_1$ among all power distributions such that $f_0^2+f_1^2$ is kept constant. However, there are cases where a power distribution $|f_0|\gg |f_1|$ is more adequate in physical experiments.
In this case, it is shown in Appendix \ref{appA} that one can derive from \eqref{LLE_dual} the perturbed LLE in the form \begin{equation}\label{TWE_dyn} \mathrm{i} \partial_t u = -d \partial_{x}^2 u + \mathrm{i} \epsilon V(x) \partial_x u +(\zeta-\mathrm{i} \mu) u -|u|^2 u + \mathrm{i} f_0, \qquad (x,t) \in \mathbb{T} \times \mathbb{R}, \end{equation} where in the physical context $V(x)=\omega_1-2dk_1^2\frac{f_1}{f_0}\cos(x)$ and $\epsilon=1$. However, if $\omega_1$ and $k_1^2f_1/f_0$ are small, we will consider \eqref{TWE_dyn} as the perturbed LLE with $\epsilon\in \R$ being small and $V\in C^1([-\pi,\pi],\R)$ being a generic periodic potential. Recall that \eqref{TWE_dyn} is already set in a moving coordinate frame. In its stationary form the equation becomes \begin{equation}\label{TWE} -d u''+\mathrm{i} \epsilon V(x) u'+(\zeta-\mathrm{i}\mu)u-|u|^2 u+\mathrm{i}f_0=0, \qquad x \in \mathbb{T}. \end{equation} The main questions addressed in this paper are the existence and stability of stationary solutions of \eqref{TWE_dyn}. Our main results, which are stated in detail in Section~\ref{sec:results}, can be summarized as follows: \begin{itemize} \item In Theorem~\ref{Fortsetzung_nichttrivial} we prove existence of solutions of \eqref{TWE} for small $\epsilon$ provided the effective potential $V_{\text{eff}}$ changes sign, where $V_{\text{eff}}$ is a weighted integrated version of the coefficient function $V$. \item In Theorems~\ref{thm:spectral_stability} and~\ref{thm:nonlinear_stability} we prove stability/instability properties of the solution obtained from Theorem~\ref{Fortsetzung_nichttrivial} under the time evolution of \eqref{TWE_dyn}. \item In Section~\ref{sec:numerical} we illustrate the findings of our theorems by numerical simulations. The numerical simulations show that the location of the intensity extremum of the $\epsilon$-continued solutions does not change significantly for small $\epsilon$.
Therefore, we call this phenomenon \emph{pinning of solutions at zeroes of the effective potential $V_\text{eff}$}. \end{itemize} Existence and bifurcation behavior of solutions of \eqref{LLE_original} have been studied quite well, cf. \cite{Gaertner_et_al,gaertner_reichel_waves,Godey_2017,Godey_et_al2014,Mandel,Miyaji_Ohnishi_Tsutsumi2010,Parra-Rivas2018,Parra-Rivas2014,Parra-Rivas2016} and their stability properties have been investigated in \cite{hara_delcey,DelHara_periodic,Hakkaev_Stefanov,hara_johns_perk, hara_johns_perk_derijk,Perinet,Stanislavova_Stefanov}. Analytical and numerical investigations of \eqref{LLE_dual} have recently been reported \cite{Gasmi_Jahnke_Kirn_Reichel,Gasmi_Peng_Koos_Reichel}. In contrast, we are not aware of any treatment of \eqref{TWE_dyn}. However, a related problem, where instead of $\mathrm{i} \epsilon V(x) u'$ a term of the form $\epsilon V(x) u$ appears in the NLS equation, has been quite well studied, cf. \cite{Alama,Sigal,Kev}. In this case solutions are pinned near nondegenerate critical points of $V_\text{eff}$ instead of the zeroes of $V_\text{eff}$ as in our case. \section{Main results} \label{sec:results} In this section we present our main results regarding existence and stability of stationary solutions of \eqref{TWE_dyn}. For $\epsilon=0$ there is a plethora of non-trivial (non-constant) stationary solutions, cf. \cite{Gaertner_et_al, Mandel}. We start with such a solution under the assumption of its non-degeneracy according to the following definition. \begin{Definition} A non-constant solution $u\in H_\mathrm{per}^2([-\pi,\pi],\C)$ of \eqref{TWE} for $\epsilon=0$ is called non-degenerate if the kernel of the linearized operator $$ L_u\varphi := -d \varphi'' +(\zeta-\mathrm{i}\mu-2|u|^2)\varphi-u^2\bar\varphi, \quad \varphi\in H_\mathrm{per}^2([-\pi,\pi],\C) $$ consists only of $\spann\{u'\}$. 
\label{non_degenerate_solution} \end{Definition} \begin{Remark} \label{fredholm_etc} Note that $L_u: H_\mathrm{per}^2([-\pi,\pi],\C) \to L^2([-\pi,\pi],\C)$ is a compact perturbation of the isomorphism $-d\partial_x^2+\sign(d): H_\mathrm{per}^2([-\pi,\pi],\C) \to L^2([-\pi,\pi],\C)$ and hence a Fredholm operator. Notice also that $\spann\{u'\}$ always belongs to the kernel of $L_u$ due to translation invariance in $x$ for $\epsilon = 0$. Non-degeneracy means that except for the obvious candidate $u'$ (and its real multiples) there is no other element in the kernel of $L_u$. \end{Remark} One can ask whether non-constant non-degenerate solutions at $\epsilon=0$ in Definition \ref{non_degenerate_solution} may be continued into the regime of $\epsilon\not =0$. In order to describe the continuation, we denote such a solution by $u_0$ and its spatial translations by $u_\sigma(x):= u_0(x-\sigma)$. The non-degeneracy assumption implies that $\ker L_{u_{\sigma}}=\spann\{u_\sigma'\}$. Since the adjoint operator $L_{u_\sigma}^*$ also has a one-dimensional kernel, there exists $\phi_\sigma^*\in H_\mathrm{per}^2([-\pi,\pi],\C)$ such that $\ker L_{u_\sigma}^*=\spann\{\phi_\sigma^*\}$. Notice that $\phi_\sigma^*(x) = \phi_0^*(x-\sigma)$. Before stating our existence result, let us clarify the assumption on the potential $V$. \begin{itemize} \item[(A1)] The potential $V:[-\pi,\pi]\to \R,x \mapsto V(x)$ is a $2\pi$-periodic, continuously differentiable function. \end{itemize} The existence result is given by the following theorem. \begin{Theorem} \label{Fortsetzung_nichttrivial} Let $d \in \R\setminus\{0\},f_0,\zeta,\mu \in \R$ be fixed and assume that (A1) holds. Let furthermore $u_0\in H_\mathrm{per}^2([-\pi,\pi],\C)$ be a non-constant, non-degenerate solution of \eqref{TWE} for $\epsilon=0$.
If $\sigma_0$ is a simple zero of the function \begin{equation} \label{eq:sigma_0} \sigma \mapsto V_\mathrm{eff}(\sigma):= \operatorname{Re} \int_{-\pi}^{\pi} \mathrm{i} V(x+\sigma) u_0'\bar\phi_0^*\,dx \end{equation} then there exists a continuous curve $(-\epsilon^\ast,\epsilon^\ast) \ni \epsilon \to u(\epsilon) \in H_\mathrm{per}^2([-\pi,\pi],\C)$ consisting of solutions of \eqref{TWE} with $\|u(\epsilon)-u_0(\cdot-\sigma_0)\|_{H^2} \leq C\epsilon$ for some constant $C>0$. \end{Theorem} \begin{Remark} The value of $\sigma_0$ is determined from the existence of a unique solution $v\in H_\mathrm{per}^2([-\pi,\pi],\C)$ of the linear inhomogeneous equation $$ L_{u_{\sigma_0}}v =-\mathrm{i} V(x) u_{\sigma_0}' $$ with the property that $v\perp_{L^2} u_{\sigma_0}'$. Fredholm's condition shows that $\sigma_0$ is a zero of $V_{\mathrm{eff}}$. Simplicity of the zero of $V_{\mathrm{eff}}$ yields the result of Theorem \ref{Fortsetzung_nichttrivial}. \end{Remark} To investigate the stability of a stationary solution $u$ we introduce the expansion $$ u(x) + v(x,t) = u_1(x) + \mathrm{i} u_2(x) + v_1(x,t) + \mathrm{i} v_2(x,t) $$ and substitute this into the perturbed LLE \eqref{TWE_dyn}. After neglecting the quadratic and cubic terms in $v$ and separating real and imaginary parts we obtain the linearized system for $\bm v = (v_1,v_2)$ which reads as $$ \partial_t \bm v = \widetilde{L}_{u,\epsilon} \bm{v} $$ and the linearization has the form \begin{equation} \label{decomposition} \widetilde{L}_{u,\epsilon} = J A_u - I (\mu -\epsilon V(x) \partial_x) \end{equation} with \begin{align*} J := \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix},\;\; I := \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \;\; A_u:= \begin{pmatrix} -d\partial_x^2 + \zeta - (3 u_1^2 + u_2^2) & -2u_1 u_2 \\ -2u_1 u_2 & -d\partial_x^2 + \zeta - (u_1^2 + 3u_2^2) \end{pmatrix}. 
\end{align*} In the following we will often identify $\C$-valued functions with vector-valued functions in $\R \times \R$ and use the notation $$ u = u_1 + \mathrm{i} u_2 \in \C \quad\leftrightarrow\quad \bm u = \begin{pmatrix}u_1 \\ u_2 \end{pmatrix} \in \R^2. $$ We denote the spectrum of $\widetilde{L}_{u,\epsilon}$ in $L^2([-\pi,\pi]) \times L^2([-\pi,\pi])$ by $\sigma(\widetilde{L}_{u,\epsilon})$ and the resolvent set of $\widetilde{L}_{u,\epsilon}$ by $\rho(\widetilde{L}_{u,\epsilon})$. For our stability results we require one additional spectral assumption on the non-degenerate solution $u_0$ regarding the spectrum of $\widetilde{L}_{u_0,0}$. \begin{itemize} \item[(A2)] The eigenvalue $0 \in \sigma(\widetilde{L}_{u_0,0})$ is algebraically simple and there exists $\xi > 0$ such that $$ \sigma(\widetilde{L}_{u_0,0}) \subset \{z\in \C: \operatorname{Re} z \leq -\xi\} \cup \{0\}. $$ \end{itemize} \begin{Remark} By Fredholm theory, the assumption of simplicity of the zero eigenvalue of $\widetilde{L}_{u_0,0}$ is equivalent to $\bm u_0' \not\in \range \widetilde{L}_{u_0,0} = \spann\{J \bm\phi_0^*\}^\perp$. It will be convenient to use the normalization $\langle \bm u_0', J\bm\phi_0^* \rangle_{L^2} = \int_{-\pi}^\pi \bm u_0' \cdot J\bm\phi_0^* \,dx = 1$. We also note that $$ \int_{-\pi}^\pi \bm u_0' \cdot J\bm\phi_0^* \,dx = \operatorname{Re} \int_{-\pi}^{\pi} \mathrm{i} u_0'\bar\phi_0^*\,dx. $$ \label{rem-kernel} \end{Remark} Before stating the stability results, let us clarify that the generators $u_0'$ of $\ker L_{u_0}$ and $\phi_0^*$ of $\ker L_{u_0}^*$ are linearly independent, so that $V_{\mathrm{eff}}$ is generically nonzero. We also clarify the parity of the eigenfunctions in $\ker L_{u_0}^*$ and $\ker L_{u_0}$ if $u_0$ is even in $x$. This is used in many practical computations. \begin{Lemma}\label{lem:parity} Let $u_0 \in H^2_{\text{per}}([-\pi,\pi],\C)$ be a non-constant, non-degenerate solution of \eqref{TWE} for $\epsilon=0$.
Then the following holds: \begin{itemize} \item[(i)] $u_0'$ and $\phi_0^*$ are linearly independent, \item[(ii)] if $u_0$ is even then $\phi_0^*$ is odd. \end{itemize} \end{Lemma} \begin{proof} Part (i): By using the decomposition (\ref{decomposition}) with $u = u_0$ and $\epsilon = 0$, the eigenvalue problems $L_{u_0} u_0' = 0$ and $L_{u_0}^* \phi_0^* = 0$ are equivalent to $$ JA_{u_0} \begin{pmatrix} u_{01}' \\ u_{02}' \end{pmatrix} = \mu \begin{pmatrix} u_{01}' \\ u_{02}' \end{pmatrix} ,\quad\quad JA_{u_0} \begin{pmatrix} \phi_{01}^* \\ \phi_{02}^* \end{pmatrix} =-\mu \begin{pmatrix} \phi_{01}^* \\ \phi_{02}^* \end{pmatrix}. $$ But since $(u_{01}',u_{02}')$ and $(\phi_{01}^*,\phi_{02}^*)$ are eigenvectors of $JA_{u_0}$ corresponding to the distinct eigenvalues $\mu$ and $-\mu$, respectively, they are linearly independent. \medskip Part (ii): By assumption we have that $\ker L_{u_0} = \spann\{u_0'\}$ and $u_0'$ is an odd function. Let us define the restriction of $L_{u_0}$ onto the odd functions $$ L_{u_0}^\#: H_{\text{per,odd}}^2 \to L_{\text{per,odd}}^2,\, \varphi \mapsto L_{u_0} \varphi. $$ Then $L_{u_0}^\#$ is again an index $0$ Fredholm operator with $\ker L_{u_0}^\#= \spann\{u_0'\}$. Further we have $(L_{u_0}^\#)^* = (L_{u_0}^*)^\#$ where $$ (L_{u_0}^*)^\#: H_{\text{per,odd}}^2 \to L_{\text{per,odd}}^2, \, \varphi \mapsto L_{u_0}^* \varphi $$ is the restriction of the adjoint onto the odd functions. But since $1=\dim \ker (L_{u_0}^*)^\# = \dim \ker L_{u_0}^*$ it follows that $\ker (L_{u_0}^*)^\# = \ker L_{u_0}^*$ and hence $\phi_0^* \in H_{\text{per,odd}}^2$ as claimed. \end{proof} The stability results are given by the following two theorems. A stationary solution $u$ of \eqref{TWE} is called spectrally stable if $\operatorname{Re}(\lambda) \leq 0$ for all eigenvalues $\lambda$ of $\widetilde{L}_{u,\epsilon}$. It is called spectrally unstable if there exists at least one eigenvalue $\lambda$ with $\operatorname{Re}(\lambda)>0$.
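The equivalence between the complex and the vectorial eigenvalue problems invoked in the proof of Lemma~\ref{lem:parity}(i) follows from a short computation which we record here; it is a routine verification using only the definition of $L_u$ and the matrices in \eqref{decomposition}.

```latex
% Write \varphi = \varphi_1 + i\varphi_2 and \bm\varphi = (\varphi_1,\varphi_2)^T.
% Separating real and imaginary parts in the definition of L_u gives
\begin{pmatrix} \operatorname{Re}(L_u\varphi) \\ \operatorname{Im}(L_u\varphi) \end{pmatrix}
  = A_u \bm\varphi + \mu J \bm\varphi,
% and hence, using J^2 = -I,
L_u\varphi = 0
  \;\Longleftrightarrow\; A_u\bm\varphi = -\mu J\bm\varphi
  \;\Longleftrightarrow\; JA_u\bm\varphi = \mu\bm\varphi
  \;\Longleftrightarrow\; \widetilde{L}_{u,0}\,\bm\varphi = 0.
% Since A_u is symmetric and J is antisymmetric, the adjoint problem
% L_u^*\phi^* = 0 corresponds to A_u\bm\phi^* - \mu J\bm\phi^* = 0,
% i.e. JA_u\bm\phi^* = -\mu\bm\phi^*.
```

In particular, $\ker L_u$ corresponds exactly to the kernel of $\widetilde{L}_{u,0} = JA_u - \mu I$, consistent with Remark~\ref{rem-kernel}.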
\begin{Theorem}\label{thm:spectral_stability} Let $d \in \R\setminus\{0\}, f_0,\zeta,\mu\in \R$ be fixed and assume that (A1) and (A2) hold. Let $\sigma_0$ be a simple zero of $V_{\mathrm{eff}}$ as in Theorem~\ref{Fortsetzung_nichttrivial}, so that $$ V_{\mathrm{eff}}'(\sigma_0)=\operatorname{Re} \int_{-\pi}^{\pi} \mathrm{i} V'(x+\sigma_0) u_0'\bar\phi_0^* \,dx = \langle V'(\cdot + \sigma_0) \bm u_0', J\bm\phi_0^* \rangle_{L^2} \not = 0. $$ Then there exists $\epsilon_0>0$ such that on the solution branch $(-\epsilon_0,\epsilon_0) \ni \epsilon \to u(\epsilon) \in H_{\mathrm{per}}^2([-\pi,\pi],\C)$ of \eqref{TWE} with $u(0)=u_{\sigma_0}$ the solutions $u(\epsilon)$ are spectrally stable for $V'_{\mathrm{eff}}(\sigma_0) \cdot\epsilon >0$ and spectrally unstable for $V'_{\mathrm{eff}}(\sigma_0) \cdot\epsilon<0$. \end{Theorem} \begin{Theorem}\label{thm:nonlinear_stability} Let $u(\epsilon) \in H_\mathrm{per}^2([-\pi,\pi], \C)$ be a spectrally stable stationary solution of \eqref{TWE_dyn} for a small value of $\epsilon$ as in Theorem \ref{thm:spectral_stability}. Then $u(\epsilon)$ is asymptotically stable, i.e., there exist $\eta, \delta, C>0$ with the following properties. If $\varphi \in C([0,T),H^1_{\text{per}}([-\pi,\pi],\C))$ is a solution of \eqref{TWE_dyn} with maximal existence time $T$ and $$ \|\varphi(\cdot,0) - u(\epsilon) \|_{H^1} < \delta $$ then $T=\infty$ and $$ \|\varphi(\cdot,t) - u(\epsilon) \|_{H^1} \leq C \mathrm{e}^{-\eta t} \|\varphi(\cdot,0) - u(\epsilon) \|_{H^1} \quad\text{for all } t\geq 0. $$ \end{Theorem} \begin{Remark} Due to the periodicity of $V_{\mathrm{eff}}$ on $\mathbb{T}$, simple zeros of $V_{\mathrm{eff}}$ come in pairs. By Theorems~\ref{thm:spectral_stability} and~\ref{thm:nonlinear_stability}, for each sign of $\epsilon$ one of the two zeroes gives rise to a solution branch consisting of asymptotically stable solutions.
Moreover, at the bifurcation point $\epsilon = 0$ there is an exchange of stability, i.e., the zero eigenvalue crosses the imaginary axis with nonzero speed. \label{rem:approx_stability} \end{Remark} \begin{Remark} In \cite{DelHara_periodic,Hakkaev_Stefanov} the authors constructed spectrally stable solutions $u$ of \eqref{TWE} for $\epsilon=0$ in the case of anomalous dispersion $d>0$. These solutions satisfy the spectral condition $ \sigma(\widetilde{L}_{u,0}) \subset \{-2\mu\} \cup \{\operatorname{Re} z = -\mu\} \cup \{0\}$ and are therefore non-degenerate starting solutions for which our main results from Theorems~\ref{Fortsetzung_nichttrivial}, \ref{thm:spectral_stability}, and \ref{thm:nonlinear_stability} hold. \end{Remark} \begin{Remark} If $u$ is a solution of \eqref{TWE} then the relation $$ \int_{-\pi}^{\pi} (u'\bar{u}-\bar{u}'u) dx = 0 $$ holds. This constraint is satisfied by every even function $u$. In fact, the only solutions of equation \eqref{TWE} for $\epsilon = 0$ that we are aware of are even around $x=0$ (up to a shift). \end{Remark} \begin{Remark} In the limit where $u_0$ is highly localized around $0$ (e.g.~the limit $d \to 0 \pm$) and the potential $V$ is wide, the effective potential $V_{\text{eff}}$ is well approximated by the actual potential $V$. More precisely we find the asymptotics $$ V_{\text{eff}}(\sigma) = \operatorname{Re} \int_{-\pi}^\pi \mathrm{i} V(x+\sigma) u_0' \bar{\phi}_0^* \, dx \approx V(\sigma) \operatorname{Re} \int_{-\pi}^\pi \mathrm{i} u_0' \bar{\phi}_0^* \, dx = V(\sigma) $$ provided $\langle \mathrm{i} u_0',\phi_0^*\rangle_{L^2}=1$. Thus, the asymptotically stable branch bifurcates from a simple zero $\sigma_0$ of $V$ with $V'(\sigma_0) \epsilon > 0$. \label{rem:approx_potential} \end{Remark} \begin{Remark} The criterion for stability of stationary solutions in Theorem \ref{thm:spectral_stability} can be written in a more precise form for small $\mu$ in the case of solitary waves.
This limit is considered in Appendix \ref{appB}. \end{Remark} To summarize, our main results show that non-degenerate solutions of \eqref{TWE} for $\epsilon=0$ can be extended locally for small $\epsilon\not =0$ provided the effective potential $V_{\mathrm{eff}}$ has a simple zero. Depending on the sign of the derivative of $V_{\mathrm{eff}}$ at such a simple zero we determined the stability properties of these solutions. It remains an open problem to give a criterion on $V$ or $V_{\mathrm{eff}}$ for the existence/stability of stationary solutions which applies when $|\epsilon|$ is large. \section{Numerical simulations} \label{sec:numerical} In the following we describe numerical simulations of solutions to \eqref{TWE}. We choose $f_0 = 2$, $\mu =1$, $V(x)=0.1+0.5\cos(x)$ and $d=\pm 0.1$. All computations are done with the help of the Matlab package \texttt{pde2path} (cf. \cite{DOH14, UEC14}), which has been designed to numerically treat continuation and bifurcation in boundary value problems for systems of PDEs. We begin with the description of the stationary solutions of the LLE \eqref{LLE_original}, which are the same as the solutions of \eqref{TWE} for $\epsilon=0$. The corresponding results are mainly taken from \cite{Gaertner_et_al, Mandel}. There is a curve of trivial, spatially constant solutions, cf. black line in Figure~\ref{fig:review}, and this is the same curve for anomalous dispersion ($d=0.1$) and normal dispersion ($d=-0.1$). Next one finds that there are finitely many bifurcation points on the curve of trivial solutions (blue dots). Depending on the sign of the dispersion parameter $d$ one can now find the branches of the single solitons on the periodic domain $\mathbb{T}$. In the following descriptions we always follow the path of trivial solutions by starting from negative values of $\zeta$.
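The trivial branch itself can be reproduced independently of \texttt{pde2path}: a spatially constant $u$ solves \eqref{TWE} for every $\epsilon$ (the terms $u''$ and $u'$ vanish) if and only if $(\zeta-\mathrm{i}\mu)u - |u|^2u + \mathrm{i} f_0 = 0$, so that $\rho = |u|^2$ satisfies the cubic $\rho\big((\zeta-\rho)^2+\mu^2\big) = f_0^2$. The following Python sketch is an illustration only; the choice $\zeta = 3.7$ anticipates the value used below for $d=0.1$.

```python
import numpy as np

# Spatially constant solutions of (zeta - i*mu)*u - |u|^2*u + i*f0 = 0:
# rho = |u|^2 solves the cubic rho*((zeta - rho)^2 + mu^2) = f0^2.
f0, mu, zeta = 2.0, 1.0, 3.7

coeffs = [1.0, -2.0 * zeta, zeta**2 + mu**2, -f0**2]
rhos = sorted(r.real for r in np.roots(coeffs) if abs(r.imag) < 1e-12)

# Recover u = -i*f0 / (zeta - rho - i*mu) and check the residual of the
# stationary equation for each root.
solutions = [-1j * f0 / (zeta - rho - 1j * mu) for rho in rhos]
for u in solutions:
    res = (zeta - 1j * mu) * u - abs(u)**2 * u + 1j * f0
    print(f"|u|^2 = {abs(u)**2:.6f}, residual = {abs(res):.2e}")
```

For $\zeta = 3.7$ the cubic has three positive roots, i.e., three constant solutions coexist at this detuning.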
\begin{figure*}[h] \centering \begin{minipage}[t]{0.32\textwidth} \begin{tikzpicture}[overlay] \node at (2.655,2.07) {\includegraphics[width=\columnwidth]{bif_review_pos.png}}; \node at (4.55,2.55) {\includegraphics[width=0.3\columnwidth]{bif_review_pos_zoom.png}}; \draw [draw=black] (4.54,1.11) -- (3.94,2.06); \draw [draw=black] (4.634,1.11) -- (5.34,2.06); \end{tikzpicture} \end{minipage}\hspace*{0.5cm} \begin{minipage}[t]{0.32\textwidth} \includegraphics[width=\columnwidth]{bif_review_neg.png} \end{minipage} \caption{Bifurcation diagram for the case $\epsilon = 0$. Blue dots indicate bifurcation points on the line of trivial solutions (black). The red curve denotes the single soliton solution branch. The point BP is chosen as a starting point for Theorem~\ref{Fortsetzung_nichttrivial}. Further solutions on the same branch for the same value of $\zeta$ are denoted by C (left panel) and A, C (right panel). Left panel for $d=0.1$, right panel for $d=-0.1$.} \label{fig:review} \end{figure*} For $d=0.1$ (left panel in Figure~\ref{fig:review}) along the trivial branch there is a last bifurcation point which gives rise to a single bright soliton branch (red line). This branch has a turning point, at which the solutions change from unstable (dashed) to stable (solid), and after the turning point it tends back towards the trivial branch. Thus, the red line in the left panel of Figure~\ref{fig:review} represents two different but almost identical curves, which can be seen in the enlarged inset. We have chosen a solution at the point $BP$ on the stable branch as a starting point for the illustration of Theorems~\ref{Fortsetzung_nichttrivial} and \ref{thm:spectral_stability}. In the case where $d=-0.1$ (right panel in Figure~\ref{fig:review}) along the trivial branch there is a first bifurcation point from which a single dark soliton branch (red line) bifurcates. 
The most strongly localized single solitons are found near the second turning point of this branch, and we have chosen a stable dark soliton solution at the point $BP$ as a starting point for the illustration of Theorems~\ref{Fortsetzung_nichttrivial} and \ref{thm:spectral_stability}. Next we explain the global picture in Figure~\ref{fig:Global_Bif_diag} of the continuation in $\epsilon$ of the chosen points BP from the $\epsilon=0$ case in Figure~\ref{fig:review}. The local picture is covered by Theorem~\ref{Fortsetzung_nichttrivial}. First we note the following symmetry: since $V(x)$ is even around $x=0$ we find that $(u(x),\epsilon)$ solves \eqref{TWE} if and only if $(u(-x),-\epsilon)$ satisfies \eqref{TWE}. Since reflecting $u$ does not affect the $L^2$-norm, the continuation diagram for $\epsilon>0$ is an exact mirror image of the one for $\epsilon<0$. \begin{figure*}[ht] \centering \begin{minipage}[t]{0.32\textwidth} \begin{tikzpicture}[overlay] \node at (2.655,2.07) {\includegraphics[width=\columnwidth]{bif_global_stab_gestrichelt.png}}; \node at (3.045,3.5) {\includegraphics[width=0.3\columnwidth]{bif_local_stab_gestrichelt.png}}; \draw [draw=black] (2.95,1.26) rectangle (3.15,1.7); \draw [draw=black] (2.95,1.7) -- (2.25,2.9); \draw [draw=black] (3.15,1.7) -- (3.83,2.9); \end{tikzpicture} \end{minipage}\hspace*{0.15cm} \begin{minipage}[t]{0.32\textwidth} \includegraphics[width=\columnwidth]{all_solitons.png} \end{minipage} \\ \begin{minipage}[h]{0.3\textwidth} \includegraphics[width=\columnwidth]{loop_global_stab_gestrichelt.png} \end{minipage} \hspace*{0.15cm} \begin{minipage}[h]{0.3\textwidth} \includegraphics[width=\columnwidth]{loop_local_stab_gestrichelt.png} \end{minipage} \hspace*{0.15cm} \begin{minipage}[h]{0.3\textwidth} \includegraphics[width=\columnwidth]{all_solitons_neg.png} \end{minipage} \caption{Continuation diagrams w.r.t.\ $\epsilon$ with stability regions (solid = stable; dashed = unstable) and solutions at designated points.
The two different zeroes of $V_\text{eff}$ give rise to two different continuation curves (blue and green). Top panels: $d=0.1$, $\zeta=3.7$. Bottom panels: $d=-0.1$, $\zeta=4.5$ with zoom (middle panel) of the continuation curve near the starting point. } \label{fig:Global_Bif_diag} \end{figure*} Next we observe that the continuation curves in $\epsilon$ appear to be unbounded for $d=0.1$ (upper left panel of Figure~\ref{fig:Global_Bif_diag}) and closed and bounded for $d=-0.1$ (lower left panel of Figure~\ref{fig:Global_Bif_diag}). In our example the map $\sigma\mapsto V_\text{eff}(\sigma):= \operatorname{Re} \int_{-\pi}^{\pi} \mathrm{i} V(x+\sigma) u_0'\bar\phi_0^*\,dx$ has two zeroes in the periodic domain $\mathbb{T}$, denoted by $\sigma_0$ and $\sigma_1$. Since moreover $u_0$ is even and consequently $u_0'$ and (by Lemma~\ref{lem:parity}) $\phi_0^*$ are odd, we see that the effective potential $V_\text{eff}$ is also even and hence $\sigma_0=-\sigma_1$. Thus, continuation in $\epsilon$ works for the starting points $u_0(\cdot-\sigma_0)$ (blue curve) and $u_0(\cdot+\sigma_0)$ (green curve) with $\sigma_0 < 0$. As predicted by Theorem~\ref{thm:spectral_stability}, locally we have stable solutions on one side of $\epsilon=0$ and unstable solutions on the other. On the top and bottom right panels of Figure~\ref{fig:Global_Bif_diag} we see the graph of $|u|^2$ for several solutions on the continuation diagram. The top left panel and the bottom left panel indicate that the $\epsilon$-continuation curves meet all other nontrivial points (C for $d=0.1$ and A, C for $d=-0.1$) at $\epsilon=0$ from Figure~\ref{fig:review}. In Figure~\ref{fig:Effective_potential} we show the starting solutions $u_0(x-\sigma_0)$ and $u_0(x-\sigma_1)$ together with the potential $V(x)$. Here the zeroes $\sigma_0<0<\sigma_1$ of the effective potential $V_\text{eff}$ are shown as blue and green dots and we already observed $\sigma_0=-\sigma_1$ due to the evenness of both $V$ and $V_\text{eff}$.
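In the approximation of Remark~\ref{rem:approx_potential} the two zeroes $\sigma_0 = -\sigma_1$ become explicit: replacing $V_\text{eff}$ by $V(x) = 0.1 + 0.5\cos(x)$ gives $\cos\sigma = -0.2$, i.e.~$\sigma_1 = \arccos(-0.2) \approx 1.77$. The short check below uses the zeroes of $V$ as proxies for the zeroes of $V_\text{eff}$ and is an illustration of the approximation, not a computation of the exact zeroes.

```python
import numpy as np

# Zeroes of V(x) = 0.1 + 0.5*cos(x) as proxies for the zeroes of V_eff
# (Remark rem:approx_potential; valid for strongly localized u_0).
V  = lambda x: 0.1 + 0.5 * np.cos(x)
dV = lambda x: -0.5 * np.sin(x)

sigma1 = np.arccos(-0.2)   # positive zero (green dot)
sigma0 = -sigma1           # negative zero (blue dot); V and V_eff are even

print(sigma0, dV(sigma0))  # slope > 0: stable continuation for eps > 0
print(sigma1, dV(sigma1))  # slope < 0: stable continuation for eps < 0
```

The signs of the slopes match the stability regions reported in Figure~\ref{fig:Effective_potential}.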
Since $u_0$ is sufficiently strongly localized, the zeroes of $V_\text{eff}$ are well approximated by the zeroes of $V$ and the starting solutions are thus centered near the zeroes of $V$. Therefore, by applying Remark~\ref{rem:approx_potential}, we see that the positive slope of $V$ at the center of the soliton at the blue bifurcation point indicates that the $\epsilon$-continuation will be stable for $\epsilon>0$ and unstable for $\epsilon<0$. The stability behavior is exactly opposite for the green bifurcation point. The stability considerations are valid both for $d=0.1$ and $d=-0.1$. \begin{figure*} \centering \begin{minipage}[h]{0.32\textwidth} \includegraphics[width=\columnwidth]{shift_pos_blau.png} \end{minipage} \hspace*{0.15cm} \begin{minipage}[h]{0.32\textwidth} \includegraphics[width=\columnwidth]{shift_pos_green.png} \end{minipage}\\ \begin{minipage}[h]{0.32\textwidth} \includegraphics[width=\columnwidth]{shift_neg_blau.png} \end{minipage} \hspace*{0.15cm} \begin{minipage}[h]{0.32\textwidth} \includegraphics[width=\columnwidth]{shift_neg_green.png} \end{minipage} \caption{Top row: $d=0.1$, bottom row: $d=-0.1$. Left panels: starting solutions $u_0(x-\sigma_0)$ together with $V(x)$ and negative zero $\sigma_0$ of $V_\text{eff}$ (blue dot). Stability for $\epsilon>0$, instability for $\epsilon<0$. Right panels: starting solutions $u_0(x+\sigma_0)$ together with $V(x)$ and positive zero $\sigma_1=-\sigma_0$ of $V_\text{eff}$ (green dot). Stability for $\epsilon<0$, instability for $\epsilon>0$.} \label{fig:Effective_potential} \end{figure*} Finally, let us illustrate the spectral stability properties of the $\epsilon$-continuations in Figure~\ref{fig:Spectra}. For $\epsilon=0$ we see in the left panels the spectrum of the linearization around $u_0$, with most of the spectrum having real part $-1$ due to the damping $\mu=1$ and further spectrum in the left half plane together with the zero eigenvalue caused by shift-invariance.
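The observation that the bulk of the spectrum has real part $-\mu$ can be verified exactly for the spatially constant solutions (for the soliton $u_0$ itself we rely on the numerically computed spectrum in Figure~\ref{fig:Spectra}): around a constant solution the linearization \eqref{decomposition} with $\epsilon=0$ decouples into $2\times 2$ blocks per Fourier mode. A minimal sketch, assuming the lowest-intensity constant solution and the parameters $d=0.1$, $\zeta=3.7$ from above:

```python
import numpy as np

f0, mu, zeta, d = 2.0, 1.0, 3.7, 0.1

# Lowest-intensity constant solution from rho*((zeta-rho)^2 + mu^2) = f0^2.
rho = min(r.real for r in np.roots([1.0, -2.0 * zeta, zeta**2 + mu**2, -f0**2])
          if abs(r.imag) < 1e-12)
u = -1j * f0 / (zeta - rho - 1j * mu)
u1, u2 = u.real, u.imag

# Per Fourier mode k, -d*dx^2 acts as multiplication by d*k^2 and the
# linearization reduces to the 2x2 block J*A_k - mu*I.
J = np.array([[0.0, 1.0], [-1.0, 0.0]])
eigs = []
for k in range(-50, 51):
    A_k = np.array([[d * k**2 + zeta - (3 * u1**2 + u2**2), -2 * u1 * u2],
                    [-2 * u1 * u2, d * k**2 + zeta - (u1**2 + 3 * u2**2)]])
    eigs.extend(np.linalg.eigvals(J @ A_k - mu * np.eye(2)))
eigs = np.array(eigs)

# For this solution every block has a complex-conjugate pair of eigenvalues,
# so the whole spectrum sits on the line Re(z) = -mu.
print(np.max(eigs.real), np.min(eigs.real))
```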
Now we consider how the critical eigenvalue behaves when $\epsilon$ varies. We do this for the case where the starting soliton sits at a zero of $V_\text{eff}$ with positive slope, cf. blue bifurcation point in Figure~\ref{fig:Effective_potential}. As predicted, the critical eigenvalue moves into the complex left half plane for $\epsilon>0$ rendering the $\epsilon$-continuations stable. Since the starting solitons are sufficiently localized $-V'(\sigma_0)$ predicts well the slope of the critical eigenvalue, cf. Lemma~\ref{lem:eig_0} and Remark~\ref{rem:approx_potential}. \begin{figure*} \centering \begin{minipage}[h]{0.32\textwidth} \includegraphics[width=\columnwidth]{spec_pos_os.png} \end{minipage} \hspace*{0.15cm} \begin{minipage}[h]{0.32\textwidth} \includegraphics[width=\columnwidth]{pos_tracking.png} \end{minipage}\\ \begin{minipage}[h]{0.32\textwidth} \includegraphics[width=\columnwidth]{spec_neg_os.png} \end{minipage} \hspace*{0.15cm} \begin{minipage}[h]{0.32\textwidth} \includegraphics[width=\columnwidth]{neg_tracking.png} \end{minipage} \caption{Top: $d=0.1$, bottom: $d=-0.1$. Left: spectrum for $\epsilon=0$. Right: critical eigenvalue $\lambda_0(\epsilon)$ together with $-V'(\sigma_0)\epsilon$ as functions of $\epsilon$.} \label{fig:Spectra} \end{figure*} \begin{comment} \section{Hamiltonian limit} In this section we study equation $$ -du''+\mathrm{i} \epsilon V(x)u' + (\zeta-\mathrm{i}\mu ) u -|u|^2u + \mathrm{i} f = 0 $$ as a perturbation of the case $\mu = 0$, i.e., we consider small values of $\mu$. Let us assume that $u \in H_{\text{per}}^2([-\pi,\pi],\C)$ is a non-degenerate even solution for $\epsilon = \mu =0$. By the Implicit Function Theorem this solution can be continued to a family of solution in the space of even functions provided that $|\mu| \ll 1$ and thus giving rise to a solution branch $(-\mu^\ast , \mu^\ast) \ni \mu \mapsto u(\mu) \in H_{\text{per}}^2([-\pi,\pi],\C)$. 
Since this map is analytic in $\mu$, we can use analytic perturbation theory (see Kato) to study the spectrum and eigenvectors of the operators $L_{u(\mu)}$ and $L_{u(\mu)}^*$ as perturbations of a self-adjoint operator. In particular we are interested into the kernel of the adjoint $\ker L_{u(\mu)}^* = \spann\{\phi^*(\mu)\}$ and derive an asymptotic formula for $\phi^*(\mu)$. More precisely we have the following perturbation result. \begin{Theorem}\label{thm:ham_limit} Let $\mu \mapsto u(\mu)$ be a family of solutions of equation \eqref{TWE} for $\epsilon = 0$, analytic in $\mu$. Then the following expansions hold: \begin{itemize} \item[(i)] $\phi^*(\mu) = \partial_x u(\mu) + \mathcal{O}(\mu)$ as $\mu \to 0$. \item[(ii)] $\operatorname{Re} \int_{-\pi}^{\pi} \mathrm{i} V(x+\sigma) \partial_x u(\mu) \bar{\phi}^*(\mu) dx = - \mu \operatorname{Re} \int_{-\pi}^{\pi} V(x+\sigma) \big(L_{u(0)}\varphi_0 \big) \bar\varphi_0 dx + \mathcal{O}(\mu^2)$ as $\mu \to 0$. \item[(iii)] $\operatorname{Re} \int_{-\pi}^{\pi} \mathrm{i} V'(x+\sigma) \partial_x u(\mu) \bar{\phi}^*(\mu) dx = - \mu \operatorname{Re}\int_{-\pi}^{\pi} V'(x+\sigma) \big( L_{u(0)}\varphi_0 \big) \bar\varphi_0 dx + \mathcal{O}(\mu^2)$ as $\mu \to 0$. \end{itemize} Here the vector $\varphi_0 \in H_{\text{per}}^2([-\pi,\pi])$ is given by the formula $\varphi_0 = \big( \partial_\mu\partial_x u(0)-\partial_{\mu} \phi^*(0) \big)/2$. {\color{blue} Not yet finished.} \end{Theorem} \begin{proof} Part (i): By assumption we have that $\mu \mapsto L_{u(\mu)}$ and $\mu \mapsto L_{u(\mu)}^*$ are analytic. Let us denote $P_\mu:L^2([-\pi,\pi]) \to \ker L_{u(\mu)} \subset L^2([-\pi,\pi])$ and $Q_\mu:L^2([-\pi,\pi]) \to \ker L_{u(\mu)}^* \subset L^2([-\pi,\pi])$ the projections onto $\ker L_{u(\mu)}$ and $\ker L_{u(\mu)}^*$ respectively. 
Then by analytic perturbation theory we find that $P_\mu$ and $Q_\mu$ are analytic in $\mu$ and satisfy \begin{align*} P_\mu = P_0 + \mathcal{O}(\mu) \text{ as }\mu \to 0, \quad Q_\mu = P_0 + \mathcal{O}(\mu) \text{ as }\mu \to 0 \end{align*} with $P_0$ being the $L^2$ projection onto $\ker L_{u(0)} = \spann\{\partial_x u(0)\}$. Therefore it follows that $$ \phi^*(\mu) = \partial_x u(\mu) + \mathcal{O}(\mu) \text{ as } \mu \to 0 . $$ \medskip Part (ii): We have the following expansion \begin{align}\label{eq:proof_ham_1} &\operatorname{Re} \int_{-\pi}^{\pi} \mathrm{i} V(x+\sigma) \partial_x u(\mu) \bar{\phi}^*(\mu) dx = \operatorname{Re} \int_{-\pi}^{\pi} \mathrm{i} V(x+\sigma) |\partial_x u(0)|^2 dx \\ &+ \mu \left( \operatorname{Re} \int_{-\pi}^{\pi} \mathrm{i} V(x+\sigma) \partial_\mu\partial_x u(0) \partial_x \bar{u}(0) dx + \operatorname{Re} \int_{-\pi}^{\pi}\mathrm{i} V(x+\sigma) \partial_{x} u(0) \partial_{\mu} \bar\phi^*(0) dx \right) + \mathcal{O}(\mu^2) \notag\\ &= - \mu \operatorname{Re} \int_{-\pi}^{\pi} \mathrm{i} V(x+\sigma) \partial_{x} u(0) \big( \partial_\mu\partial_x \bar{u}(0) -\partial_{\mu} \bar\phi^*(0) \big) dx + \mathcal{O}(\mu^2) \text{ as }\mu \to 0 \notag \end{align} and want to derive equations for the functions $\partial_\mu\partial_{x} u(0)$ and $\partial_{\mu} \phi^*(0)$. Observe that for all $\mu$ we have $$ L_{u(\mu)}\partial_{x} u(\mu) = 0 \quad\text{and}\quad L_{u(\mu)}^* \phi^*(\mu) = 0. $$ Differentiating with respect to $\mu$ and evaluating at $\mu = 0$ yields $$ L_{u(0)} \partial_\mu\partial_x u(0) = \mathrm{i} \partial_x u(0) + 2 \left( \bar{u}(0) \partial_\mu u(0) \partial_x u(0) + u(0) \partial_\mu \bar{u}(0) \partial_x u(0) + u(0) \partial_\mu u(0) \partial_x \bar{u}(0) \right) $$ and $$ L_{u(0)} \partial_{\mu} \phi^*(0) = - \mathrm{i} \partial_x u(0) + 2 \left( \bar{u}(0) \partial_\mu u(0) \partial_x u(0) + u(0) \partial_\mu \bar{u}(0) \partial_x u(0) + u(0) \partial_\mu u(0) \partial_x \bar{u}(0) \right). 
$$ This yields $$ L_{u(0)} (\partial_\mu\partial_x u(0)-\partial_{\mu} \phi^*(0)) = 2 \mathrm{i} \partial_x u(0) $$ or after defining $\varphi_0 := \big( \partial_\mu\partial_x u(0)-\partial_{\mu} \phi^*(0) \big)/2$, \begin{equation}\label{eq:proof_ham_2} L_{u(0)} \varphi_0 = \mathrm{i} \partial_x u(0). \end{equation} Combining \eqref{eq:proof_ham_1} and \eqref{eq:proof_ham_2} we find $$ \operatorname{Re} \int_{-\pi}^{\pi} \mathrm{i} V(x+\sigma) \partial_x u(\mu) \bar{\phi}^*(\mu) dx = - \mu \operatorname{Re} \int_{-\pi}^{\pi} V(x+\sigma) \big(L_{u(0)}\varphi_0 \big) \bar\varphi_0 dx + \mathcal{O}(\mu^2) \text{ as }\mu \to 0. $$ \medskip Part (iii): The proof follows as in (ii). \end{proof} \begin{Corollary} If $0 \in \sigma(-\mathrm{i} L_{u(0)})$ has algebraic multiplicity 2, then $0 \in \sigma(-\mathrm{i} L_{u(\mu)})$ has algebraic multiplicity 1, i.e., the zero eigenvalue splits into one nonzero eigenvalue and the zero eigenvalue. \end{Corollary} \begin{proof} The fact that $0 \in \sigma(-\mathrm{i} L_{u(\mu)})$ has algebraic multiplicity 1 is equivalent to $\langle \partial_x u(\mu), \phi^*(\mu) \rangle_{L^2} \not = 0$. Expanding in $\mu$ yields $$ \langle \mathrm{i} \partial_x u(\mu), \phi^*(\mu) \rangle_{L^2} = -2\mu \langle \mathrm{i}\partial_x u(0), \varphi_0\rangle_{L^2} + \mathcal{O}(\mu^2) \text{ as }\mu \to 0. $$ Since $-\mathrm{i} L_{u(0)} \varphi_0 = \partial_x u(0) \in \ker -\mathrm{i} L_{u(0)}$ it follows that $\varphi_0 \in \ker (-\mathrm{i} L_{u(0)})^2$ and $\varphi_0 \notin \range -\mathrm{i} L_{u(0)}$ if and only if $\varphi_0 \not\perp_{L^2} \ker (-\mathrm{i} L_{u(0)})^* = \spann \{\mathrm{i} \partial_x u(0)\}$. Hence we can deduce that $0 \in \sigma(-\mathrm{i} L_{u(0)})$ has multiplicity 2 if $\langle \mathrm{i}\partial_x u(0), \varphi_0\rangle_{L^2} \not = 0$. 
\end{proof} \end{comment} \section{Proof of the existence result} Theorem~\ref{Fortsetzung_nichttrivial} will be proved via Lyapunov-Schmidt reduction and the Implicit Function Theorem. Fix the values of $d, \zeta,\mu$ and $f_0$. Let $u_0\in H^2_\text{per}([-\pi,\pi],\C)$ be a non-degenerate solution of \eqref{TWE} for $\epsilon=0$ and recall that for $\sigma\in\R$ its shifted copy $u_\sigma(x):=u_0(x-\sigma)$ is also a solution of \eqref{TWE} for $\epsilon=0$. \begin{proof}[Proof of Theorem \ref{Fortsetzung_nichttrivial}:] We seek solutions $u$ of \eqref{TWE} of the form $$ u = u_\sigma + v,\quad \langle v,u_{\sigma}' \rangle_{L^2} = 0, \quad v \in H_\mathrm{per}^2([-\pi,\pi],\C). $$ Inserting it into \eqref{TWE} we obtain the following equation for the correction term $v$: \begin{align}\label{eq:ansatz} L_{u_\sigma} v + \mathrm{i} \epsilon V(u_\sigma' + v') - N(v,\sigma) = 0 \end{align} with nonlinearity given by \begin{align*} N(v,\sigma) = \bar{u}_\sigma v^2 + 2 u_\sigma |v|^2 + |v|^2 v. \end{align*} The nonlinearity is a sum of quadratic and cubic terms in $v$. Since $H^2_{\rm per}$ is a Banach algebra, it is clear that for every $R > 0$, there exists $C_R > 0$ such that \begin{equation} \label{bound-nonlinear} \|N(v,\sigma)\|_{L^2} \leq C_R \|v\|_{H^2}^2, \quad \mbox{\rm for every } \; v \in H^2_{\rm per} : \;\; \| v \|_{H^2} \leq R. \end{equation} Moreover, since $V \in L^\infty$ it follows that $$ \|\mathrm{i} \epsilon V(u_\sigma'+ v')\|_{L^2} \leq |\epsilon| \|V\|_{L^\infty} \|u_\sigma + v\|_{H^2}. $$ Next we solve \eqref{eq:ansatz} according to the Lyapunov-Schmidt reduction method. Define the orthogonal projections $$ P_\sigma : L^2\to \spann\{u_{\sigma}'\} \subset L^2, \quad Q_\sigma: L^2\to \spann\{\phi_{\sigma}^*\}^\perp \subset L^2 $$ onto $\ker L_{u_\sigma}$ and $(\ker L_{u_\sigma}^*)^\perp = \spann\{\phi_{\sigma}^*\}^\perp = \range L_{u_\sigma}$, respectively. 
Then \eqref{eq:ansatz} can be decomposed into a non-singular and a singular equation \begin{align} Q_\sigma \left(L_{u_\sigma} (I - P_\sigma) v + \mathrm{i} \epsilon V(u_{\sigma}'+v') - N(v,\sigma)\right) &= 0, \label{eq:LS1} \\ \langle \mathrm{i} \epsilon V u_\sigma', \phi_{\sigma}^* \rangle_{L^2} + \langle \mathrm{i} \epsilon V v' - N(v,\sigma), \phi_{\sigma}^* \rangle_{L^2} &= 0. \label{eq:LS2} \end{align} Notice that the linear part $Q_\sigma L_{u_\sigma} (I - P_\sigma)$ in \eqref{eq:LS1} is invertible between the $\sigma$-dependent subspaces $(\ker L_{u_\sigma})^\perp$ and $\range L_{u_\sigma}$. Since these subspaces vary with $\sigma$, the Implicit Function Theorem cannot be applied directly to solve \eqref{eq:LS1}. However, \eqref{eq:LS1} is equivalent to $F(v,\epsilon,\sigma)=0$ with $$ F(v,\epsilon,\sigma):= Q_\sigma \left(L_{u_\sigma} (I - P_\sigma) v + \mathrm{i} \epsilon V(u_{\sigma}'+v') - N(v,\sigma)\right) + \phi_\sigma^* \langle v , u_\sigma' \rangle_{L^2} $$ and $F:H_\mathrm{per}^2([-\pi,\pi],\C) \times \R\times \R \to L^2([-\pi,\pi],\C)$. Here the added term $\phi_\sigma^* \langle v , u_\sigma' \rangle_{L^2}$ enforces $v\perp u_\sigma'$. For any fixed $\sigma_0 \in \R$ we have $F(0,0,\sigma_0) = 0$. Since $$ D_v F(0,0,\sigma_0) \varphi = L_{u_{\sigma_0}} \varphi + \phi_{\sigma_0}^* \langle \varphi , u_{\sigma_0}' \rangle_{L^2} $$ is an isomorphism from $H_\mathrm{per}^2$ to $L^2$, we can apply the Implicit Function Theorem to the function $F$, which gives the existence of a smooth function $v = v(\epsilon,\sigma)$ solving the problem $F(v(\epsilon,\sigma),\epsilon,\sigma) = 0$ for $(\epsilon,\sigma)$ in a neighborhood of $(0,\sigma_0)$. Then, by construction, $v$ is a solution of \eqref{eq:LS1} and satisfies the orthogonality condition $$ \langle v(\epsilon,\sigma) , u_\sigma' \rangle_{L^2} = 0 $$ as required at the beginning of the proof.
Moreover, $F(0,0,\sigma)=0$ for all $\sigma$, so that $v(0,\sigma)=0$, which by smoothness of $v$ implies the bound \begin{align}\label{eq:bounds_epsilon} \|v(\epsilon,\sigma)\|_{H^2} \leq C |\epsilon|. \end{align} As a consequence, $\|v'(\epsilon,\sigma)\|_{L^2} \leq C |\epsilon|$, where $v'(\epsilon,\sigma)$ denotes the derivative of $v$ with respect to $x$. Inserting $v(\epsilon,\sigma)$ into the singular equation \eqref{eq:LS2} we end up with the reduced problem $$ f(\epsilon,\sigma) := \langle \mathrm{i} \epsilon V u_\sigma', \phi_{\sigma}^* \rangle_{L^2} + \langle \mathrm{i} \epsilon V v'(\epsilon,\sigma) - N(v(\epsilon,\sigma),\sigma), \phi_{\sigma}^* \rangle_{L^2} = 0. $$ For all $\sigma \in \R$ we have the asymptotics $$ |\langle \mathrm{i} \epsilon V v'(\epsilon,\sigma) - N(v(\epsilon,\sigma),\sigma), \phi_{\sigma}^* \rangle_{L^2}| = \mathcal{O}(\epsilon^2) \;\; \text{ as } \; \epsilon \to 0 $$ which follows from the bounds \eqref{bound-nonlinear} and \eqref{eq:bounds_epsilon}. Thus $f$ can be written as $$ f(\epsilon,\sigma) = \epsilon \langle \mathrm{i} V u_\sigma', \phi_{\sigma}^* \rangle_{L^2} + \mathcal{O}(\epsilon^2) \;\; \text{ as } \;\; \epsilon \to 0. $$ Note that if $\langle \mathrm{i} V u_\sigma', \phi_{\sigma}^* \rangle_{L^2} \not= 0$ then $f$ has no roots with $\epsilon \not= 0$ near $(0,\sigma)$. However, by our assumption on the effective potential $V_{\text{eff}}$ there exists $\sigma_0 \in \R$ such that $$ \langle \mathrm{i} V u_{\sigma_0}', \phi_{\sigma_0}^* \rangle_{L^2} = \operatorname{Re} \int_{-\pi}^\pi \mathrm{i} V(x) u_{\sigma_0}' \bar{\phi}_{\sigma_0}^* dx =V_{\text{eff}}(\sigma_0) = 0 $$ and $$ \left.\partial_\sigma\langle \mathrm{i} V u_\sigma', \phi_{\sigma}^* \rangle_{L^2} \right\vert_{\sigma=\sigma_0} = \left.\partial_\sigma \operatorname{Re} \int_{-\pi}^\pi \mathrm{i} V(x) u_{\sigma}' \bar{\phi}_\sigma^* dx \right\vert_{\sigma=\sigma_0} =V'_{\text{eff}}(\sigma_0) \not= 0.
$$ Hence the Implicit Function Theorem can be applied to the function $\epsilon^{-1} f(\epsilon,\sigma)$ and yields a curve of unique non-trivial solutions $\sigma = \sigma(\epsilon)$ to the singular equation $f(\epsilon,\sigma) = 0$ such that $\sigma(0) = \sigma_0$. Finally we conclude that $u(\epsilon) = u_0(\cdot-\sigma(\epsilon))+ v(\epsilon,\sigma(\epsilon))$ solves \eqref{TWE} for small $\epsilon$. \end{proof} \begin{comment} Theorem~\ref{Fortsetzung_nichttrivial} will follow from the Crandall-Rabinowitz Theorem of {\color{blue} transcritical }bifurcation from a simple eigenvalue which we recall next. \begin{Theorem}[Crandall-Rabinowitz \cite{CrRab_bifurcation,Kielhoefer}] \label{Thm Crandall-Rabinowitz} Let $I\subset\R$ be an open interval, $X,Y$ Banach spaces and let $F:X\times I\to Y$ be twice continuously differentiable such that $F(0,\lambda)=0$ for all $\lambda\in I$ and such that $D_x F(0,\lambda_0):X\to Y$ is an index-zero Fredholm operator for $\lambda_0\in I$. Moreover assume: \begin{itemize} \item[(H1)] there is $\phi\in X,\phi\neq 0$ such that $\ker(D_x F(0,\lambda_0))=\spann\{\phi\}$, \item[(H2)] $D^2_{x,\lambda} F(0,\lambda_0)(\phi,1) \not \in \range(D_x F(0,\lambda_0))$. \end{itemize} Then there exists $\epsilon>0$ and a continuously differentiable curve $(x,\lambda): (-\epsilon,\epsilon)\to X\times \R$ with $\lambda(0)=\lambda_0$, $x(0)=0$, $x'(0)=\phi$ and $x(t)\neq 0$ for $0<|t|<\epsilon$ and $F(x(t),\lambda(t))=0$ for all $t\in (-\epsilon,\epsilon)$. Moreover, there exists a neighborhood $U\times J\subset X\times I$ of $(0,\lambda_0)$ such that all non-trivial solutions in $U\times J$ of $F(x,\lambda)=0$ lie on the curve. Finally $$ \lambda'(0)= -\frac{1}{2} \frac{\langle D^2_{xx} F(0,\lambda_0)[\phi,\phi],\phi^*\rangle}{\langle D^2_{x,\lambda} F(0,\lambda_0)\phi,\phi^*\rangle} $$ where $\spann\{\phi^*\}=\ker D_x F(0,\lambda_0)^*$ and $\langle\cdot,\cdot\rangle$ is the duality pairing between $Y$ and its dual $Y^*$. 
\end{Theorem} Next we provide the functional analytic setup. Fix the values of $d, \zeta$ and $f$. If $u_0\in C^2_\text{per}([0,2\pi],\C)$ is the non-degenerate solution of \eqref{TWE} for $\epsilon=0$ then for $\sigma\in\R$ we denote by $u_\sigma(x):=u_0(x-\sigma)$ its shifted copy, which is also a solution of \eqref{TWE} for $\epsilon=0$. Consider the mapping $$ G:\left\{\begin{array}{rcl} C^2_{\text{per}}([0,2\pi],\C)\times \R & \to & C_{\text{per}}([0,2\pi],\C), \vspace{\jot} \\ (u,\epsilon) & \mapsto & -d u''+\mathrm{i} \epsilon V(x) u'+(\zeta-\mathrm{i})u-|u|^2 u+\mathrm{i}f. \end{array} \right. $$ Then $G$ is twice continuously differentiable since the map $\C=\R^2\to \C=\R^2$ given by $u \mapsto |u|^2u$ is a polynomial. The linearized operator $\grad G(u_\sigma,0)= (\partial_u G(u_\sigma,0), \partial_\epsilon G(u_\sigma,0))=(L_{u_\sigma}, \mathrm{i} V(x)u_{\sigma}')$ is a Fredholm operator and $(u_\sigma',0)\in \ker \grad G(u_\sigma,0)$. As we shall see there may be more elements in the kernel. Next we fix the value $\sigma_0$ (its precise value will be given later) and let $C^2_\text{per}([0,2\pi],\C) = \spann\{u_{\sigma_0}'\}\oplus X$, e.g., $$X=\spann\{u_{\sigma_0}'\}^{\perp_{L^2}}=\{\varphi-\langle \varphi, u_{\sigma_0}'\rangle_{L^2} u_{\sigma_0}' : \varphi\in C^2_\text{per}([0,2\pi],\C)\}. $$ Here $\langle f, g\rangle_{L^2}:= \operatorname{Re} \int_0^{2\pi} f(x) \bar{g}(x) dx$ denotes the scalar product of the vector space $L^2([0,2\pi],\C)$ over $\R$. Next we define $$ F:\left\{\begin{array}{rcl} X\times \R\times\R \ & \to & C_{\text{per}}([0,2\pi],\C), \vspace{\jot} \\ (v,\epsilon,\sigma) & \mapsto & G(u_\sigma+v,\epsilon) \end{array} \right. $$ which is also realy-analytic and where $\grad_{v,\epsilon} F(0,0,\sigma_0)$ is a Fredholm operator of index $0$. Our goal will be to solve \begin{equation} \label{bifurcation_s} F(v,\epsilon,\sigma)=0 \end{equation} by means of bifurcation theory, where $\sigma\in\R$ is the bifurcation parameter. 
Notice that $F(0,0,\sigma)=0$ for all $\sigma\in \R$, i.e., $(v,\epsilon)=(0,0)$ is a trivial solution of \eqref{bifurcation_s}. \begin{Lemma} If $\sigma_0\in [0,2\pi]$ is a zero of \eqref{eq:sigma_0} then $\dim\ker\grad_{v,\epsilon} F(0,0,\sigma_0)=1$ and $\range\grad_{v,\epsilon} F(0,0,\sigma_0) = \spann\{\phi_{\sigma_0}^*\}^{\perp_{L^2}}$. \label{one_d_kernel} \end{Lemma} \begin{proof} The fact that $\grad_{v,\epsilon} F(0,0,\sigma_0)$ is a Fredholm operator follows from Remark~\ref{fredholm_etc}. For $(\psi,\alpha)\in X\times \R$ belonging to the kernel of $\grad_{v,\epsilon} F(0,0,\sigma_0)$ we have \begin{equation} \label{eq:one_d_kernel} \grad_{v,\epsilon} F(0,0,\sigma_0)[\psi,\alpha]= L_{u_{\sigma_0}}\psi+\mathrm{i}\alpha V(x) u_{\sigma_0}'=0. \end{equation} If $\alpha=0$ then by non-degeneracy we find $\psi\in \spann\{u_{\sigma_0}'\}\cap X=\{0\}$, which is impossible. Hence we may assume w.l.o.g. that $\alpha=1$ and $\psi$ has to solve \begin{equation} \label{def_psi} L_{u_{\sigma_0}}\psi = -\mathrm{i} V(x) u_{\sigma_0}'. \end{equation} By the Fredholm alternative this is possible if and only if $\mathrm{i} V(x) u_{\sigma_0}'\perp_{L^2} \phi_{\sigma_0}^*$. If this $L^2$-ortho\-gonality holds then there exists $\psi\in C^2_\text{per}([0,2\pi],\C)$ solving \eqref{def_psi} and $\psi$ is unique up to adding a multiple of $u_{\sigma_0}'$. Hence there is a unique $\psi\in X$ solving \eqref{def_psi}. The $L^2$-orthogonality means \begin{align*} 0&=\operatorname{Re}\int_0^{2\pi}\mathrm{i} V(x) u_{\sigma_0}' \bar \phi_{\sigma_0}^*\,dx = \operatorname{Re}\int_0^{2\pi}\mathrm{i} V(x+\sigma_0) u_0' \bar \phi_0^*\,dx \\ &= \frac{\mathrm{i}}{2} \int_0^{2\pi} V(x+\sigma_0)(u_0'\bar \phi_0^*-\bar u_0'\phi_0^*)\,dx. \end{align*} Finally, it remains to determine the range of $\grad_{v,\epsilon} F(0,0,\sigma_0)$. 
Let $\tilde\psi\in C_\text{per}([0,2\pi],\C)$ be such that $\tilde\psi = \grad_{v,\epsilon} F(0,0,\sigma_0)[\tilde\phi,\alpha]$ with $\tilde\phi\in X$ and $\alpha \in \R$. Thus \begin{equation} \label{eq:range} L_{u_{\sigma_0}}\tilde\phi+\mathrm{i}\alpha V(x)u_{\sigma_0}'=\tilde\psi \end{equation} and since $\mathrm{i} V(x)u_{\sigma_0}'\perp_{L^2} \phi_{\sigma_0}^*$ by the definition of $\sigma_0$, the Fredholm alternative says that a necessary and sufficient condition for $\tilde\psi$ to satisfy \eqref{eq:range} is that $\tilde\psi\in \spann\{\phi_{\sigma_0}^*\}^{\perp_{L^2}}$ as claimed. Note that in this case $\tilde\phi\in C^2_\text{per}([0,2\pi],\C)=X\oplus\ker L_{u_{\sigma_0}}$ and hence, for every given $\alpha\in \R$ and $\tilde\psi\in\spann\{\phi_{\sigma_0}^*\}^{\perp_{L^2}}$ there is a unique element $\tilde\phi\in X$ that solves \eqref{eq:range}. \end{proof} \begin{proof}[Proof of Theorem~\ref{Fortsetzung_nichttrivial}:] We begin by verifying for \eqref{bifurcation_s} the conditions for the local bifurcation theorem of Crandall-Rabinowitz, cf. Theorem~\ref{Thm Crandall-Rabinowitz}. By Lemma~\ref{one_d_kernel} $\partial_{v,\epsilon} F(0,0,\sigma_0): X\times \R\to C_\text{per}([0,2\pi],\C)$ is an index $0$ Fredholm operator and it satisfies (H1). To see (H2) note that $$ \partial_\sigma\grad_{v,\epsilon} F(0,0,\sigma_0)[\psi,1]=-2u_{\sigma_0}'\bar u_{\sigma_0}\psi-2 \bar u_{\sigma_0}' u_{\sigma_0} \psi -2 u_{\sigma_0}u_{\sigma_0}'\bar\psi+\mathrm{i} V(s)u_{\sigma_0}''. $$ At the same time, differentiation of \eqref{def_psi} w.r.t. $x$ yields $$ L_{u_{\sigma_0}} \psi' = 2u_{\sigma_0}'\bar u_{\sigma_0}\psi+2 \bar u_{\sigma_0}' u_{\sigma_0} \psi +2 u_{\sigma_0}u_{\sigma_0}'\bar\psi-\mathrm{i} V(s)u_{\sigma_0}'' -\mathrm{i} V'(x)u_{\sigma_0}' $$ so that $$ \partial_\sigma\grad_{v,\epsilon} F(0,0,\sigma_0)[\psi,1]= -L_{u_{\sigma_0}} \psi'-\mathrm{i} V'(x)u_{\sigma_0}'. 
$$ Hence the characterization of $\range\grad_{v,\epsilon} F(0,0,\sigma_0)$ from Lemma~\ref{one_d_kernel} implies that the trans\-versality condition (H2) is satisfied if and only if $\operatorname{Re}\int_0^{2\pi} \mathrm{i} V'(x)u_{\sigma_0}'\bar\phi_{\sigma_0}^*\,dx\not =0$, which is exactly the simplicity of the zero $\sigma_0$ of \eqref{eq:sigma_0}. Therefore we are able to apply Theorem~\ref{Thm Crandall-Rabinowitz} and we obtain the existence of a local curve $t \mapsto (v(t), \epsilon(t),\sigma(t))$, $\epsilon'(0)=1$, $\epsilon(0)=0$, $v(0)=0$, $\sigma(0)=\sigma_0$ as claimed. \end{proof} \end{comment} \section{Proof of the stability result} In this section we determine the condition under which the stationary solutions obtained in Theorem~\ref{Fortsetzung_nichttrivial} as a continuation of a stable solution $u_0$ of the LLE \eqref{LLE_original} are spectrally stable against co-periodic perturbations in the perturbed LLE \eqref{TWE_dyn}. Moreover, we prove the nonlinear asymptotic stability of stationary spectrally stable solutions. \subsection{Preliminary notes} For our stability analysis we consider \eqref{TWE_dyn} as a two-dimensional system by decomposing the function $u = u_1 + \mathrm{i} u_2$ into its real and imaginary parts. This leads us to the system of dynamical equations \begin{align}\label{eq:2d_LLE} \left\{ \begin{array}{l} \partial_t u_1 = -d \partial_{x}^2 u_2 + \epsilon V(x) \partial_x u_1 + \zeta u_2 - \mu u_1 - (u_1^2+u_2^2) u_2 + f_0, \\ \partial_t u_2 = d \partial_{x}^2 u_1 + \epsilon V(x) \partial_x u_2 - \zeta u_1 - \mu u_2 + (u_1^2+u_2^2) u_1, \end{array}\right. \end{align} equipped with $2\pi$-periodic boundary conditions on $\mathbb{R}$.
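Multiplying the second equation in \eqref{eq:2d_LLE} by $\mathrm{i}$ and adding the first one shows that, for $u = u_1 + \mathrm{i} u_2$, the system is equivalent to the single complex equation $$ \partial_t u = \mathrm{i} d \partial_x^2 u + \epsilon V(x) \partial_x u - (\mu + \mathrm{i} \zeta) u + \mathrm{i} |u|^2 u + f_0, $$ a compact form which is useful to keep in mind for checking signs in the computations below.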
The spectral problem associated with the nonlinear system \eqref{eq:2d_LLE} can be written as $$ \widetilde{L}_{u,\epsilon} \bm{v} = \lambda \bm{v}, \quad\lambda \in \C, \quad \bm{v} \in H_\mathrm{per}^2([-\pi,\pi],\C) \times H_\mathrm{per}^2([-\pi,\pi],\C), $$ where the linearized operator $\widetilde{L}_{u,\epsilon}$ is given by \eqref{decomposition}. Note that the operator $A_u$ in the decomposition \eqref{decomposition} is self-adjoint on $L^2([-\pi,\pi],\C) \times L^2([-\pi,\pi],\C)$ and $\widetilde{L}_{u,\epsilon}$ is an index $0$ Fredholm operator. Moreover, we see that if $u_0$ is a non-degenerate solution of \eqref{TWE} for $\epsilon = 0$ then the following relations for the linearized operators are true: $$ \ker \widetilde{L}_{u_0,0} = \spann\{\bm{u}_0'\}, \quad \ker \widetilde{L}_{u_0,0}^* = \spann\{J \bm{\phi}_0^*\}, $$ where the vectors $\bm{u}_0' = (u_{01}',u_{02}')$ and $\bm{\phi}_0^* = (\phi_{01}^*, \phi_{02}^*)$ are obtained from $u_0'=u_{01}' + \mathrm{i} u_{02}'$ and $\phi_0^* = \phi_{01}^* + \mathrm{i} \phi_{02}^*$. We recall that $\langle \bm u_0', J\bm\phi_0^* \rangle_{L^2} = 1$ due to normalization, cf. Remark \ref{rem-kernel}. Finally we observe that since the embedding $$ H_\mathrm{per}^2([-\pi,\pi],\C) \times H_\mathrm{per}^2([-\pi,\pi],\C) \hookrightarrow L^2([-\pi,\pi],\C) \times L^2([-\pi,\pi],\C) $$ is compact, the linearization has a compact resolvent and thus the spectrum of $\widetilde{L}_{u,\epsilon}$ consists of isolated eigenvalues of finite multiplicity whose only possible accumulation point is at $\infty$. In the following we will use the spaces \begin{align*} H_\mathrm{per}^2([-\pi,\pi],\C) =: X, \quad H_\mathrm{per}^1([-\pi,\pi],\C) =: Y,\quad L^2([-\pi,\pi],\C) =: Z.
\end{align*} Both the proof of Theorem~\ref{thm:spectral_stability} and that of Theorem~\ref{thm:nonlinear_stability} rely on the next lemma for the linearized operator $\widetilde{L}_{u(\epsilon),\epsilon}$, where $u(\epsilon)$ lies on the solution branch of Theorem~\ref{Fortsetzung_nichttrivial} and $|\epsilon|$ is small. The lemma gives spectral bounds for eigenvalues with large imaginary part together with a uniform resolvent estimate. The proof is presented in Section~\ref{sec:proof_resolvent_est}. \begin{Lemma}\label{lem:resolvent_est}\label{lem:resolvent_estimate} Denote $\Lambda_{\lambda^*}:=\{\lambda \in \C: \operatorname{Re}(\lambda) \geq 0 , |\operatorname{Im}(\lambda)|\geq \lambda^* \}$. Given $\epsilon_1>0$ sufficiently small there exists $\lambda^* >0$ such that we have the uniform resolvent bound $$ \sup_{\lambda \in \Lambda_{\lambda^*}} \|(\lambda I - \widetilde{L}_{u(\epsilon),\epsilon})^{-1}\|_{L^2 \to L^2} < \infty $$ for all $\epsilon\in [-\epsilon_1,\epsilon_1]$. \end{Lemma} \begin{Remark} \label{rem:extension_to_left} The uniformity of the resolvent estimate on the imaginary axis allows us to sharpen the above result as follows. If we define $S$ as the supremum from Lemma~\ref{lem:resolvent_estimate} and let $0<\delta<1/S$ then the estimate $$ \sup_{\lambda \in \Lambda_{\lambda^*}-\delta} \|(\lambda I - \widetilde{L}_{u(\epsilon),\epsilon})^{-1}\|_{L^2 \to L^2} < \infty $$ holds. This follows from taking inverses in the identity $$ (\lambda-\delta-\widetilde{L}_{u(\epsilon),\epsilon}) = (\lambda- \widetilde{L}_{u(\epsilon),\epsilon})(I-\delta(\lambda-\widetilde{L}_{u(\epsilon),\epsilon})^{-1}), $$ where the second factor is invertible by a Neumann series since $\|\delta(\lambda-\widetilde{L}_{u(\epsilon),\epsilon})^{-1}\|_{L^2 \to L^2} \leq \delta S < 1$; this also gives the quantitative bound $S/(1-\delta S)$ for the shifted resolvent. \end{Remark} \subsection{Proof of Theorem~\ref{thm:spectral_stability}} For $\lambda \in \C$ we study the spectral problem \begin{equation}\label{eq:spectral_problem} \widetilde{L}_{u,\epsilon} \bm{v} = \lambda \bm{v}. \end{equation} Since \eqref{TWE} possesses translational symmetry in the case $\epsilon=0$, we find $$ \widetilde{L}_{u,0} \bm{u}'=0.
$$ For $\epsilon \not = 0$, this symmetry is broken, and the zero eigenvalue is expected to move into either the stable or the unstable half-plane. In our stability analysis, it is therefore important to understand how the critical zero eigenvalue behaves along the bifurcating solution branch given by $(-\epsilon^*,\epsilon^*) \ni \epsilon \mapsto u(\epsilon) \in X$ with $u(0) = u_{\sigma_0}$, where $\sigma_0$ is a simple zero of $V_{\mathrm{eff}}$ as in Theorem \ref{Fortsetzung_nichttrivial}. For the following calculations we will identify $u(\epsilon)$ with a vector-valued function $\bm{u}(\epsilon): \mathbb{T}\to\R^2$ and write this as $\bm{u}(\epsilon) \in X \times X$. \medskip We start with the tracking of the simple critical zero eigenvalue and set up the equation for the perturbed eigenvalue $\lambda_0 = \lambda_0(\epsilon)$, which reads $$ \widetilde{L}_{u(\epsilon),\epsilon} \bm{v}(\epsilon) = \lambda_0(\epsilon) \bm{v}(\epsilon). $$ After a possible re-scaling we find that $\bm{v}(0) = \bm{u}_{\sigma_0}'$ and using regular perturbation theory for simple eigenvalues, cf. \cite{Kato,Kielhoefer}, the mapping $(-\epsilon^*,\epsilon^*)\ni\epsilon \mapsto \lambda_0(\epsilon) \in \R$ is continuously differentiable. Our first goal is to derive a formula for $\lambda_0'(0)$. If $\lambda_0'(0) > 0$, this means that the solutions $u(\epsilon)$ for $\epsilon >0$ are spectrally unstable. In contrast, if $\lambda_0'(0) < 0$, the solutions $u(\epsilon)$ for $\epsilon > 0$ are spectrally stable. \begin{Lemma}\label{lem:eig_0} Let $\epsilon \mapsto \lambda_0(\epsilon)$ be the $C^1$ parametrization of the perturbed zero eigenvalue. Then the following formula holds true: $$ \lambda_0'(0) = - \int_{-\pi}^{\pi} V'(x) \bm{u}_{\sigma_0}' \cdot J\bm{\phi}_{\sigma_0}^* dx . $$ \end{Lemma} \begin{proof} On the one hand, if we differentiate the equation $$ \widetilde{L}_{u(\epsilon),\epsilon} \bm{v}(\epsilon) = \lambda_0(\epsilon) \bm{v}(\epsilon).
$$ with respect to $\epsilon$ and evaluate at $\epsilon = 0$ we find $$ \widetilde{L}_{u_{\sigma_0},0} \partial_\epsilon\bm{v}(0) - J N_u \bm{u}_{\sigma_0}' + V(x) \bm{u}_{\sigma_0}'' = \lambda_0'(0) \bm{u}_{\sigma_0}', $$ where $N_u$ is given by $$ N_u = 2 \begin{pmatrix} 3 u_{\sigma_01} \partial_\epsilon u_1(0) + u_{\sigma_02} \partial_\epsilon u_2(0) & u_{\sigma_01} \partial_\epsilon u_2(0) + u_{\sigma_02} \partial_\epsilon u_1(0) \\ u_{\sigma_01} \partial_\epsilon u_2(0) + u_{\sigma_02} \partial_\epsilon u_1(0) & u_{\sigma_01} \partial_\epsilon u_1(0) + 3 u_{\sigma_02} \partial_\epsilon u_2(0) \end{pmatrix}. $$ On the other hand, if we differentiate \eqref{TWE} with respect to $\epsilon$ at $\epsilon=0$, then we obtain $$ \widetilde{L}_{u_{\sigma_0},0} \partial_\epsilon\bm{u}(0) + V(x) \bm{u}_{\sigma_0}'=0. $$ If we differentiate this equation with respect to $x$ we find $$ \widetilde{L}_{u_{\sigma_0},0} \partial_\epsilon\bm{u}'(0) + V(x) \bm{u}_{\sigma_0}'' + V'(x) \bm{u}_{\sigma_0}' - JN_u \bm{u}_{\sigma_0}'= 0. $$ Combining both equations yields $$ \widetilde{L}_{u_{\sigma_0},0} [\partial_\epsilon\bm{v}(0) - \partial_\epsilon\bm{u}'(0)] -V'(x) \bm{u}_{\sigma_0}' = \lambda_0'(0) \bm{u}_{\sigma_0}' $$ and testing this equation with $J \bm{\phi}_{\sigma_0}^*\in \ker \widetilde{L}^*_{u_{\sigma_0},0}$ we obtain $$ -\int_{-\pi}^{\pi} V'(x) \bm{u}_{\sigma_0}'\cdot J \bm{\phi}_{\sigma_0}^* dx= -\langle V'(x) \bm{u}_{\sigma_0}', J \bm{\phi}_{\sigma_0}^* \rangle_{L^2} = \lambda_0'(0) \langle \bm{u}_{\sigma_0}', J \bm{\phi}_{\sigma_0}^* \rangle_{L^2} = \lambda_0'(0) $$ which finishes the proof. \end{proof} By Lemma~\ref{lem:eig_0} we can control the critical part of the spectrum close to the origin along the bifurcating solution branch. In fact, using standard perturbation theory, cf. \cite{Kato}, we know that all the eigenvalues of $\widetilde{L}_{u(\epsilon),\epsilon}$ depend continuously on the parameter $\epsilon$.
However, this dependence is in general not uniform w.r.t. all eigenvalues, so we have to make sure that no unstable spectrum occurs far from the origin. At this point, it is worth mentioning that we have an a priori bound on the spectrum of the form $$ \exists \lambda_* = \lambda_*(u(\epsilon),\epsilon)>0: \quad \lambda \in \sigma(\widetilde{L}_{u(\epsilon),\epsilon}) \implies \operatorname{Re}(\lambda) \leq \lambda_*. $$ This bound follows from the Hille-Yosida Theorem since $\widetilde{L}_{u(\epsilon),\epsilon}$ generates a $C_0$-semigroup on $Z\times Z$, cf. Lemma~\ref{lem:gen_in_L2} below. It can also be shown directly by testing the eigenvalue problem with the corresponding eigenfunction and integrating by parts. As a conclusion, spectral stability holds if we can prove that there exists $\lambda^* >0$ such that $$ \{ \lambda \in \C: 0 \leq \operatorname{Re}(\lambda)\leq \lambda_*, |\operatorname{Im}(\lambda)|\geq \lambda^* \} \subset \rho(\widetilde{L}_{u(\epsilon),\epsilon}). $$ This relation is shown as part of Lemma~\ref{lem:resolvent_est} and it is extended to the left of the origin by the subsequent Remark~\ref{rem:extension_to_left}. Since in any rectangle $\{ \lambda \in \C: -M \leq \operatorname{Re}(\lambda)\leq \lambda_*, |\operatorname{Im}(\lambda)|\leq \lambda^* \}$ there are only finitely many eigenvalues of $\widetilde{L}_{u(\epsilon),\epsilon}$ and they depend (uniformly) continuously on $\epsilon$, our assumption (A2) on $\widetilde{L}_{u_0,0}$ shows that none of these eigenvalues (except possibly the critical one) can move into the right half plane if $|\epsilon|$ is small. Hence only the movement of the critical eigenvalue determines the spectral stability, and Theorem~\ref{thm:spectral_stability} follows.
\subsection{Proof of Theorem~\ref{thm:nonlinear_stability}} In order to prove nonlinear asymptotic stability of stationary solutions of \eqref{eq:2d_LLE} it is enough to show exponential stability of the semigroup of the linearization in $Y \times Y$, see e.g. \cite{Cazenave}. For the proof of Theorem~\ref{thm:nonlinear_stability} we will show the following three steps: \begin{itemize} \item[(i)] Prove that $\widetilde{L}_{u(\epsilon),\epsilon}$ is the generator of a $C_0$-semigroup on $Z \times Z$. \item[(ii)] Show exponential decay of $(\mathrm{e}^{\widetilde{L}_{u(\epsilon),\epsilon} t})_{t \geq 0}$ in $Z \times Z$. \item[(iii)] Show exponential decay of $(\mathrm{e}^{\widetilde{L}_{u(\epsilon),\epsilon} t})_{t \geq 0}$ in $Y \times Y$. \end{itemize} For step (i), we establish the generator properties of the linearization in $Z \times Z$. \begin{Lemma}\label{lem:gen_in_L2} The operator $\widetilde{L}_{u(\epsilon),\epsilon}$ generates a $C_0$-semigroup on $Z \times Z$. \end{Lemma} \begin{proof} We split the operator into $$ \widetilde{L}_{u(\epsilon),\epsilon} = L_1 + L_2 + L_3, $$ where $L_1: X \times X \to Z \times Z$, $L_2: Y \times Y \to Z \times Z$, and $L_3: Z \times Z \to Z \times Z$ are defined by $$ L_1 \begin{pmatrix} \varphi_1 \\ \varphi_2 \end{pmatrix} := \begin{pmatrix} -d \varphi_2'' - \mu \varphi_1 \\ d \varphi_1'' - \mu \varphi_2 \end{pmatrix}, $$ $$ L_2 \bm{\varphi} := \epsilon V(x) \bm{\varphi}' - \frac{|\epsilon|}{2} \|V'\|_{L^\infty} \bm{\varphi}, $$ and $$ L_3 \begin{pmatrix} \varphi_1 \\ \varphi_2 \end{pmatrix} := \begin{pmatrix} \frac{|\epsilon|}{2} \|V'\|_{L^\infty}-2u_1 u_2 & \zeta-(u_1^2+3u_2^2) \\ -\zeta + 3u_1^2+ u_2^2 & \frac{|\epsilon|}{2} \|V'\|_{L^\infty} +2 u_1 u_2 \end{pmatrix} \begin{pmatrix} \varphi_1 \\ \varphi_2 \end{pmatrix} $$ We will show that \begin{itemize} \item[(i)] $L_1$ generates a contraction semigroup. \item[(ii)] $L_2$ is dissipative and bounded relative to $L_1$. 
\item[(iii)] $L_3$ is a bounded operator on $Z \times Z$. \end{itemize} By standard perturbation results from semigroup theory, this proves that the sum $L_1 + L_2 + L_3$ is the generator of a $C_0$-semigroup on $Z \times Z$. \medskip Part (i): A direct computation shows that $\operatorname{Re} \langle L_1 \bm{\varphi},\bm{\varphi}\rangle_{L^2} = -\mu \|\bm{\varphi}\|_{L^2}^2 \leq 0$ for every $\bm{\varphi} \in X \times X$, and $\lambda-L_1$ is invertible for every $\lambda>0$, which can be seen using the Fourier transform. By the Lumer-Phillips Theorem we find that $L_1$ generates a contraction semigroup on $Z \times Z$. \medskip Part (ii): We have to show that $$ \forall \bm{\varphi} \in Y \times Y: \quad\operatorname{Re}\langle L_2 \bm{\varphi},\bm{\varphi} \rangle_{L^2} \leq 0 $$ and $$ \forall a>0, \, \exists b>0:\quad \|L_2 \bm{\varphi}\|_{L^2} \leq a \|L_1\bm{\varphi}\|_{L^2} + b \|\bm{\varphi}\|_{L^2} \quad\forall \bm{\varphi}\in X \times X. $$ Let $\bm{\varphi} = (\varphi_1,\varphi_2) \in Y \times Y$ and observe that integration by parts yields \begin{align*} \operatorname{Re} \int_{-\pi}^{\pi} \epsilon V(x) (\varphi_1' \bar\varphi_1 + \varphi_2' \bar\varphi_2 ) - \frac{|\epsilon|}{2} \|V'\|_{L^\infty} |\bm\varphi|^2 dx = \int_{-\pi}^{\pi} -\frac{\epsilon}{2} V'(x) |\bm\varphi|^2 - \frac{|\epsilon|}{2} \|V'\|_{L^\infty} |\bm\varphi|^2 dx \leq 0 \end{align*} which shows that $L_2$ is dissipative.
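In the relative bound for $L_2$ below we will use the elementary interpolation inequality $\|\bm\varphi'\|_{L^2} \leq a \|\bm\varphi''\|_{L^2} + \frac{1}{4a} \|\bm\varphi\|_{L^2}$, whose short proof we record for convenience: integration by parts (with no boundary terms due to periodicity), the Cauchy-Schwarz inequality and the arithmetic-geometric mean inequality $\sqrt{xy} \leq a x + \frac{1}{4a} y$ give $$ \|\bm\varphi'\|_{L^2}^2 = -\langle \bm\varphi'', \bm\varphi \rangle_{L^2} \leq \|\bm\varphi''\|_{L^2} \|\bm\varphi\|_{L^2} \leq \left( a \|\bm\varphi''\|_{L^2} + \frac{1}{4a} \|\bm\varphi\|_{L^2} \right)^2 $$ for all $\bm\varphi \in X \times X$ and $a>0$.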
Further, if $\bm{\varphi} \in X \times X$, then for every $a>0$ we have \begin{align*} \Big\|\epsilon V \bm\varphi' - \frac{|\epsilon|}{2} \|V'\|_{L^\infty} \bm\varphi\Big\|_{L^2} &\leq |\epsilon| \|V\|_{L^\infty} \|\bm\varphi'\|_{L^2} + \frac{|\epsilon|}{2} \|V'\|_{L^\infty} \|\bm\varphi\|_{L^2} \\ &\leq |\epsilon| a \|V\|_{L^\infty} \|\bm\varphi''\|_{L^2} + \frac{|\epsilon|}{4a} \|V\|_{L^\infty} \|\bm\varphi\|_{L^2} + \frac{|\epsilon|}{2} \|V'\|_{L^\infty} \|\bm\varphi\|_{L^2} \\ &\leq \frac{|\epsilon| a}{|d|} \|V\|_{L^\infty} \|L_1 \bm\varphi\|_{L^2} + |\epsilon| \left( \left(\frac{a \mu}{|d|} +\frac{1}{4a} \right) \|V\|_{L^\infty} + \frac{1}{2} \|V'\|_{L^\infty} \right) \|\bm\varphi\|_{L^2} \end{align*} where we used the inequality $$ \forall \bm\varphi \in X \times X, \,\forall a>0:\quad \|\bm\varphi'\|_{L^2} \leq a \|\bm\varphi''\|_{L^2} + \frac{1}{4a} \|\bm\varphi\|_{L^2}. $$ Hence, by the perturbation theorem for dissipative operators, cf.~Chapter III, Theorem~2.7 in \cite{Engel_Nagel}, the operator $L_1+L_2 : X \times X \to Z \times Z$ generates a contraction semigroup. \medskip Part (iii): The operator $L_3$ is a multiplication operator with bounded coefficients and hence bounded on $Z \times Z$. Then the bounded perturbation theorem for generators, cf.~Chapter III, Theorem~1.3 in \cite{Engel_Nagel}, yields that $\widetilde{L}_{u(\epsilon),\epsilon} = L_1+L_2+L_3$ generates a $C_0$-semigroup on $Z \times Z$ as desired. \end{proof} \begin{Remark} Using similar arguments, one can show that $\widetilde{L}_{u(\epsilon),\epsilon}$ is the generator of a $C_0$-semigroup on $Y \times Y$. \end{Remark} For step (ii), we use a characterization of exponential decay of semigroups in Hilbert spaces known as the Gearhart-Greiner-Pr\"uss~Theorem, cf.~Chapter V, Theorem~1.11 in \cite{Engel_Nagel}. \begin{Theorem}[Gearhart-Greiner-Pr\"uss Theorem]\label{Thm:GP} Let $L$ be the generator of a $C_0$-semigroup $(\mathrm{e}^{Lt})_{t\geq 0}$ on a complex Hilbert space $H$.
Then $(\mathrm{e}^{Lt})_{t\geq 0}$ is exponentially stable in $H$ if and only if $$ \{\lambda \in \C:\operatorname{Re} (\lambda) \geq 0\} \subset \rho(L) \quad\text{and}\quad \sup_{\operatorname{Re} \lambda \geq 0} \|(\lambda I-L)^{-1}\|_{H \to H} < \infty. $$ \end{Theorem} By the assumption of Theorem~\ref{thm:nonlinear_stability}, spectral stability of the solution $u(\epsilon)$ is guaranteed and we are left with the proof of the uniform resolvent estimate on $\{\lambda\in \C:\operatorname{Re}(\lambda)\geq 0\}$. Using Lemma~\ref{lem:resolvent_est}, we find $\lambda^* \gg 1$ such that $(\lambda I-\widetilde{L}_{u(\epsilon),\epsilon})^{-1}$ is uniformly bounded on the set $\Lambda_{\lambda^*}$ for sufficiently small $\epsilon$. Moreover, since $\widetilde{L}_{u(\epsilon),\epsilon}$ is the generator of a $C_0$-semigroup on the state space $Z \times Z$, the Hille-Yosida~Theorem ensures a uniform bound of the resolvent on $\{\lambda \in \C : \operatorname{Re}(\lambda)>\lambda_*\}$ for some constant $\lambda_*>0$. Since $\lambda \mapsto (\lambda I-\widetilde{L}_{u(\epsilon),\epsilon})^{-1}$ is a meromorphic function with no poles in $\{ \lambda \in \mathbb{C}: \; \operatorname{Re}(\lambda) \geq 0 \}$, the resolvent is uniformly bounded on compact subsets of this half-plane. Thus, we can conclude that $\widetilde{L}_{u(\epsilon),\epsilon}$ satisfies the Gearhart-Greiner-Pr\"uss resolvent bound and exponential stability in $Z \times Z$ follows. \medskip Finally, for step (iii), we will interpolate the decay estimate between the spaces $Z \times Z$ and $X \times X$. To do so, we have to establish bounds in $X \times X$, which is done in the next lemma. The interpolation argument is then in the spirit of Lemma~5 in \cite{Stanislavova_Stefanov} and will also lead to decay estimates in the more general interpolation spaces $H_\mathrm{per}^s \times H_\mathrm{per}^s$ for $s \in [0,2]$.
\begin{Lemma}\label{lem:Lin_Stability_Hs} For any $s\in [0,2]$ and sufficiently small $\epsilon$ the semigroup $(\mathrm{e}^{\widetilde{L}_{u(\epsilon),\epsilon} t})_{t\geq 0}$ has exponential decay in $H_\mathrm{per}^s([-\pi,\pi],\C) \times H_\mathrm{per}^s([-\pi,\pi],\C)$, i.e., there exists $C_s> 0$ such that $$ \|\mathrm{e}^{\widetilde{L}_{u(\epsilon),\epsilon} t}\|_{H^s \to H^s} \leq C_s \mathrm{e}^{-\eta t} \quad\text{for } t \geq 0, $$ where $-\eta<0$ is the previously established growth bound of the semigroup in $Z \times Z$. \end{Lemma} \begin{proof} We consider only the case $d>0$, since the other case can be shown by rewriting $JA_u$ as $-J(-A_u)$ and using the same arguments as presented below. If $d>0$, the operator $A_{u(\epsilon)} + \gamma I$ is positive and self-adjoint provided $\gamma>0$ is sufficiently large. Hence, for $z \in \C$ we can define the complex powers by $$ (A_{u(\epsilon)} + \gamma I)^z \bm v = \int_0^\infty \lambda^{z} d E_\lambda \bm v, \quad \text{for }\bm v \in \mathrm{dom}(A_{u(\epsilon)} + \gamma I)^z, $$ with domain given by $$ \mathrm{dom}(A_{u(\epsilon)} + \gamma I)^z = \left\{ \bm v \in Z \times Z : \|(A_{u(\epsilon)} + \gamma I)^z\bm v\|_{L^2}^2 = \int_0^\infty \lambda^{2 \operatorname{Re} z} d \|E_\lambda \bm v\|_{L^2}^2 < \infty \right\}, $$ where $E_\lambda$ for $\lambda \in \R$ is the family of self-adjoint spectral projections associated to $A_{u(\epsilon)} + \gamma I$. Note that for $\theta \in [0,1]$ the relation $$ \mathrm{dom}(A_{u(\epsilon)} + \gamma I)^\theta = H_\mathrm{per}^{2\theta}([-\pi,\pi],\C) \times H_\mathrm{per}^{2\theta}([-\pi,\pi],\C) $$ is true, cf.~\cite{Lunardi} Theorem~4.36, and further for any $r \in \R$ the operator $(A_{u(\epsilon)} + \gamma I)^{\mathrm{i} r}$ is unitary on $Z \times Z$.
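The unitarity of the purely imaginary powers follows directly from the spectral representation: since $|\lambda^{\mathrm{i} r}| = 1$ for all $\lambda > 0$ and $r \in \R$, we have $$ \|(A_{u(\epsilon)} + \gamma I)^{\mathrm{i} r} \bm v\|_{L^2}^2 = \int_0^\infty |\lambda^{\mathrm{i} r}|^2 \, d \|E_\lambda \bm v\|_{L^2}^2 = \|\bm v\|_{L^2}^2 \quad \text{for all } \bm v \in Z \times Z. $$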
For $\theta \in \{0,1\}$ we will show that there exists $C_\theta >0$ such that \begin{align*} \forall r \in \R,\, \forall t \geq 0, \,\forall \bm v \in X \times X:\quad \|(A_{u(\epsilon)}+\gamma I)^{\theta + \mathrm{i} r} \mathrm{e}^{\widetilde{L}_{u(\epsilon),\epsilon} t} \bm v\|_{L^2} \leq C_\theta \mathrm{e}^{-\eta t} \|\bm v\|_{H^{2\theta}}, \end{align*} which implies $$ \forall r \in \R, \,\forall t \geq 0,\, \forall \theta \in (0,1),\,\forall \bm v \in X \times X: \;\|(A_{u(\epsilon)}+\gamma I)^{\theta + \mathrm{i} r} \mathrm{e}^{\widetilde{L}_{u(\epsilon),\epsilon} t} \bm v\|_{L^2} \leq C_0^{1-\theta} C_1^{\theta} \mathrm{e}^{-\eta t} \|\bm v\|_{H^{2\theta}}, $$ by complex interpolation, cf.~\cite{Lunardi} Theorem~2.7. In particular, setting $\theta = s/2$, we see that $$ \|\mathrm{e}^{\widetilde{L}_{u(\epsilon),\epsilon} t}\|_{H^{s} \to H^s} \leq C_0^{1-s/2} C_1^{s/2} \mathrm{e}^{-\eta t}, $$ which is precisely our claim. The estimate for $\theta=0$ has already been shown in the preceding discussion, so it remains to check the estimate for $\theta=1$.
Let $\bm v \in X \times X$ and observe that \begin{align*} \|(A_{u(\epsilon)}+\gamma I)^{1 + \mathrm{i} r } \mathrm{e}^{\widetilde{L}_{u(\epsilon),\epsilon} t} \bm v\|_{L^2} &= \|(A_{u(\epsilon)}+\gamma I)\mathrm{e}^{\widetilde{L}_{u(\epsilon),\epsilon} t} \bm v\|_{L^2}\\ &=\|(\widetilde{L}_{u(\epsilon),\epsilon} + J\gamma +I(\mu - \epsilon V(x) \partial_x)) \mathrm{e}^{\widetilde{L}_{u(\epsilon),\epsilon} t} \bm v \|_{L^2} \\ &\leq \| \mathrm{e}^{\widetilde{L}_{u(\epsilon),\epsilon} t} \widetilde{L}_{u(\epsilon),\epsilon}\bm v\|_{L^2} + C \| \mathrm{e}^{\widetilde{L}_{u(\epsilon),\epsilon}t} \bm v\|_{L^2} + |\epsilon| \|V\|_{L^\infty} \|\partial_x \mathrm{e}^{\widetilde{L}_{u(\epsilon),\epsilon}t} \bm v\|_{L^2} \\ &\leq C \mathrm{e}^{-\eta t} \|\widetilde{L}_{u(\epsilon),\epsilon} \bm v\|_{L^2} + C \mathrm{e}^{-\eta t} \|\bm v\|_{L^2} + |\epsilon| \|V\|_{L^\infty} \|\mathrm{e}^{\widetilde{L}_{u(\epsilon),\epsilon}t} \bm v\|_{H^1} \\ &\leq C \mathrm{e}^{-\eta t} \|\bm v\|_{H^2} + |\epsilon| C \|\mathrm{e}^{\widetilde{L}_{u(\epsilon),\epsilon}t} \bm v\|_{H^2}, \end{align*} which yields $\|(A_{u(\epsilon)}+\gamma I)^{1 + \mathrm{i} r }\mathrm{e}^{\widetilde{L}_{u(\epsilon),\epsilon} t} \bm v\|_{L^2} \leq C \mathrm{e}^{-\eta t} \|\bm v\|_{H^2}$ if $\epsilon$ is sufficiently small because of the norm equivalence $\|\bm v\|_{H^2} \sim \|(A_{u(\epsilon)}+\gamma I)\bm v\|_{L^2}$. \end{proof} In particular Lemma~\ref{lem:Lin_Stability_Hs} establishes exponential stability of the linearization in $Y \times Y$, thus we have proved Theorem~\ref{thm:nonlinear_stability}. \subsection{Proof of Lemma~\ref{lem:resolvent_est}}\label{sec:proof_resolvent_est} The uniform resolvent estimate is proved if we can find a constant $C>0$ independent of $\lambda \in \Lambda_{\lambda^*}$ such that \begin{equation}\label{eq:resolvent_reversed} \forall \bm\varphi \in X \times X:\quad \|(\lambda I - \widetilde{L}_{u(\epsilon),\epsilon}) \bm\varphi\|_{L^2} \geq C \|\bm\varphi\|_{L^2}. 
\end{equation} In order to simplify the situation, let us introduce the rotation on $Z \times Z$ as follows: $$ R \begin{pmatrix} \varphi_1 \\ \varphi_2 \end{pmatrix}:= \begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix} \begin{pmatrix} \varphi_1 \\ \varphi_2 \end{pmatrix} $$ with the spatially varying angle $\theta(x) = \frac{\epsilon}{2d} \int_{-\pi}^x [V(y) - \hat{V}_0]dy$ where $\hat{V}_0 = \frac{1}{2\pi} \int_{-\pi}^{\pi} V(y) \,dy$ is the mean of the potential $V$. Since $R$ is an isometry on $Z \times Z$, the resolvent estimate \eqref{eq:resolvent_reversed} is equivalent to $$ \forall \bm\varphi \in X \times X:\quad \|(\lambda I - R\widetilde{L}_{u(\epsilon),\epsilon} R^{-1}) \bm\varphi\|_{L^2} \geq C \|\bm\varphi\|_{L^2}, $$ where we note that $\sigma(\widetilde{L}_{u(\epsilon),\epsilon}) = \sigma(R\widetilde{L}_{u(\epsilon),\epsilon}R^{-1})$. The advantage of considering the operator $R\widetilde{L}_{u(\epsilon),\epsilon}R^{-1}$ becomes clear if we calculate $$ R\widetilde{L}_{u(\epsilon),\epsilon}R^{-1} = J \tilde{A}_{u(\epsilon),\epsilon,V} - I (\mu - \epsilon \hat{V}_0 \partial_x), $$ where the operator $\tilde{A}_{u(\epsilon),\epsilon,V}$ is given by $$ \tilde{A}_{u(\epsilon),\epsilon,V} := \begin{pmatrix} -d\partial_x^2 + W_1 & W_2+W_4 \\ W_2-W_4 & -d\partial_x^2 + W_3 \end{pmatrix} $$ with potentials \begin{align*} W_1 &= \zeta +\cos^2\theta U_1 + 2 \cos\theta \sin\theta U_2 + \sin^2\theta U_3 + d \theta'^2 - \epsilon\theta' V,\\ W_2 &= (\cos^2\theta - \sin^2\theta) U_2 + \cos\theta \sin\theta (U_3-U_1),\\ W_3 &= \zeta + \sin^2\theta U_1 - 2 \cos\theta \sin\theta U_2 + \cos^2\theta U_3 + d \theta'^2 - \epsilon\theta' V ,\\ W_4 &= d \theta'', \end{align*} and functions \begin{align*} U_1 =-( 3 u_1^2(\epsilon) + u_2^2(\epsilon)), \,\, U_2 = -2 u_1(\epsilon) u_2(\epsilon), \,\, U_3 = -(u_1^2(\epsilon)+3u_2^2(\epsilon)).
\end{align*} Clearly, the first-order derivative is now multiplied by a constant rather than by a spatially varying potential; this fact will be used in the following calculations. We also note that the functions $W_i \in X$, $i=1,2,3$, depend upon the solution $u$ and the potential $V$, whereas $W_4 \in X$ only depends upon the potential $V$. For the proof of the resolvent estimate we use techniques presented in \cite{Stanislavova_Stefanov}, where the authors construct resolvents for the unperturbed LLE \eqref{LLE_original}. We need the following proposition, which is Lemma~4 in \cite{Stanislavova_Stefanov}. \begin{Proposition}\label{prop:estimate_for_determinant} Let $d\not=0$ and $\mu > 0$. Then there exists $\lambda^*>0$ depending on $d$ and $\mu$ with the property that for all $\omega \geq \lambda^* $ there is at most one $k_0=k_0(\omega,\mu) \in \N$ such that $$ \omega \geq |d^2k_0^4+{\mu}^2 -\omega^2 |. $$ For all other $k \in \Z \setminus \{\pm k_0(\omega,\mu)\}$ we have $$ |d^2k^4 +{\mu}^2 -\omega^2 | \geq \frac{1}{10} \max\{d^2k^2,\omega\}^{3/2}. $$ Moreover, we find $k_0(\omega,\mu) = \mathcal{O}(\omega^{1/2})$ as $\omega \to \infty$. \end{Proposition} \begin{comment} \begin{proof} It is enough to consider the case $k\geq 0$ and $|d|=1$. If $\lambda > 100$ then we find that the inequality $$ \lambda \geq |-\lambda^2 +\mu^2 + k^4| \geq |-\lambda +k^2| (\lambda + k^2) $$ implies $ 1 \geq |k^2 - \lambda| = |k-\sqrt{\lambda}|(k + \sqrt{\lambda})$ which then gives $|k-\sqrt{\lambda}| \leq \sqrt{\lambda}^{-1} \leq 10^{-1}$. Let $k_0 = k_0(\lambda) \in \N$ denote the integer satisfying $|k-\sqrt{\lambda}| \leq 10^{-1}$, if it exists. Then for $k \not = k_0$ we find \begin{align*} |-\lambda^2 +\mu^2 + k^4| &\geq |-\lambda +k^2| (\lambda + k^2)\\ &= |k - \sqrt{\lambda}| (k + \sqrt{\lambda}) (\lambda + k^2) \\ &\geq \frac{1}{10} (k + \sqrt{\lambda}) (\lambda + k^2) \\ &\geq \frac{1}{10} \max\{k^3, \lambda^{3/2}\}. 
\end{align*} \end{proof} \end{comment} Now we can start to construct and bound the resolvent. By the Hille--Yosida Theorem, a uniform resolvent estimate holds whenever $\operatorname{Re}\lambda$ is sufficiently large. It therefore remains to consider $\lambda = \delta + \mathrm{i} \omega \in \Lambda_{\lambda^*}$ for some $\lambda^*>0$ and $\delta \geq 0$ on a compact set. Since $\delta$ replaces $\mu$ in $\lambda I - \widetilde{L}_{u(\epsilon),\epsilon}$ by $\mu + \delta$ and the estimates of Proposition \ref{prop:estimate_for_determinant} hold for any $\mu > 0$ on a compact set, it suffices to prove the uniform estimates for $\delta = 0$. For now, we do not specify the value of $\lambda^*$, since this will be done later in the proof. We can restrict to the case $\omega \geq \lambda^*$, since the proof for $\omega \leq -\lambda^*$ follows from symmetries of the spectral problem under complex conjugation. For $\bm v \in X \times X$ we define \begin{equation}\label{eq:resolvent_est} (\lambda I- R\widetilde{L}_{u(\epsilon),\epsilon} R^{-1}) \bm v =: \bm\psi \in Z \times Z \end{equation} and show that there exist bounded operators $T_1$ and $T_2$ on $Z \times Z$ depending on $\lambda$ with norms satisfying $\|T_1\|_{L^2 \to L^2} = \mathcal{O}(\omega^{-1/2})$ and $\|T_2\|_{L^2 \to L^2}=\mathcal{O}(1)$ as $\omega \to \infty$ such that \eqref{eq:resolvent_est} implies \begin{equation}\label{eq:neumann_form} (I+T_1) \bm v = T_2 \bm\psi. \end{equation} If $\lambda^*$ is sufficiently large, we then deduce that $I+T_1$ is a small perturbation of the identity, and hence invertible with norm uniformly bounded in $\lambda$, which is our claim. Therefore, it remains to show \eqref{eq:neumann_form}. We introduce the matrix-valued potential $$ W = \begin{pmatrix} W_1 & W_2+W_4 \\ W_2 - W_4 & W_3 \end{pmatrix} $$ in order to write $$ \lambda I- R\widetilde{L}_{u(\epsilon),\epsilon} R^{-1} = \mathrm{i} \omega I- J(-d \partial_x^2 + W ) + I (\mu - \epsilon \hat{V}_0 \partial_x). 
$$ Now, let $A = \lambda I- R\widetilde{L}_{u(\epsilon),\epsilon} R^{-1} + JW$ and observe that $A\bm v(x) = \sum_{k \in \Z} A_k \hat{\bm v}_k \mathrm{e}^{\mathrm{i} k x}$ with $\bm v (x) = \sum_{k \in \Z} \hat{\bm v}_k \mathrm{e}^{\mathrm{i} k x}$ and Fourier multiplier $$ A_k = A_k^1 + A_k^2 = \begin{pmatrix} \mathrm{i} \omega + \mu & -dk^2 \\ dk^2 & \mathrm{i} \omega + \mu \end{pmatrix} + \begin{pmatrix} - \mathrm{i}\epsilon \hat{V}_0k & 0 \\ 0 & - \mathrm{i} \epsilon \hat{V}_0k \end{pmatrix}. $$ The inverse of $A_k^1$ is given by $$ (A_k^1)^{-1} = \frac{1}{\text{det($A_k^1$)}} \begin{pmatrix} \mathrm{i} \omega + \mu & dk^2 \\ -dk^2 & \mathrm{i} \omega + \mu \end{pmatrix} $$ and by Proposition~\ref{prop:estimate_for_determinant} there exists at most one $k_0 = k_0(\omega,\mu) \in \N$ such that $$ |\mathrm{det}(A_k^1)| \geq |d^2k^4 + \mu^2 -\omega^2| \geq \frac{1}{10} \max\{d^2k^2,\omega\}^{3/2} \text{ for all $k \not= \pm k_0$} $$ provided that $\lambda^*$ is sufficiently large. Thus $A_k^1$ is invertible with bound $\|(A_k^1)^{-1}\|_{\C^{2\times 2}} \leq C / \max\{\omega^{1/2}, k\}$ for all $k \not = \pm k_0$. Using again Proposition~\ref{prop:estimate_for_determinant}, we have the asymptotic $k_0 = k_0(\omega) = \mathcal{O}(\omega^{1/2})$ as $\omega \to \infty$. Consequently, if $|\epsilon|$ is sufficiently small, then $A_k=A_k^1(I+(A_k^1)^{-1} A_k^2)$, $k\not= \pm k_0$, is also invertible with the bound $\|(A_k)^{-1}\|_{\C^{2\times 2}} =\mathcal{O}(\omega^{-1/2})$ as $\omega \to \infty$. Next, for the above $k_0 \in \N$, we introduce the orthogonal projections $P, Q, Q_1, Q_2: Z\times Z\to Z\times Z$ as follows: $$ Q_1 \bm v = \hat{\bm v}_{k_0} \mathrm{e}^{\mathrm{i} k_0(\cdot)}, \quad Q_2 \bm v = \hat{\bm v}_{-k_0} \mathrm{e}^{-\mathrm{i} k_0(\cdot)} $$ and $$ Q = Q_1+ Q_2, \quad P=I-Q. 
$$ This allows us to decompose \eqref{eq:resolvent_est} as follows: \begin{align} PAP \bm v - P JW \bm v &= P \bm\psi, \label{eq:split_1} \\ Q A Q \bm v - Q JW \bm v &= Q \bm\psi. \label{eq:split_2} \end{align} From the preceding arguments we find $$ \|(PAP)^{-1}\|_{L^2 \to L^2} = \mathcal{O}(\omega^{-1/2}) \text{ as }\omega\to \infty $$ which implies that \eqref{eq:split_1} is equivalent to \begin{align}\label{eq:proof_step1} P \bm v - (PAP)^{-1} P JW \bm v = (PAP)^{-1} \bm\psi \end{align} with bound $\|(PAP)^{-1} JW\|_{L^2 \to L^2} = \mathcal{O}(\omega^{-1/2})$ as $\omega \to \infty$. \medskip Next we investigate \eqref{eq:split_2} which we decompose a second time to find \begin{align} Q_1 A Q_1 \bm v - Q_1 JW Q_1 \bm v - Q_1 JW Q_2 \bm v - Q_1 JW P \bm v &= Q_1 \bm\psi, \label{eq:split_2_1}\\ Q_2 A Q_2 \bm v - Q_2 JW Q_1 \bm v - Q_2 JW Q_2 \bm v - Q_2 JW P \bm v &= Q_2 \bm\psi. \label{eq:split_2_2} \end{align} Both equations can be handled similarly and thus we focus on the first one. Using \eqref{eq:proof_step1} we can write \eqref{eq:split_2_1} as $$ [Q_1 A Q_1 - Q_1 JW Q_1 ] \bm v - Q_1 JW Q_2 \bm v - Q_1 JW (PAP)^{-1} P JW \bm v = Q_1 JW (PAP)^{-1} \bm\psi + Q_1 \bm\psi. $$ The operator $B:=Q_1 A Q_1 - Q_1 JW Q_1$ acts like a Fourier-multiplier on $\range Q_1$ with matrix $$ B_{k_0} = \begin{pmatrix} \mathrm{i} (\omega - \epsilon \hat{V}_0 k_0) +\mu - (\hat{W_2})_{0} + (\hat{W_4})_{0} & -dk_0^2 - (\hat{W_3})_{0} \\ dk_0^2 + (\hat{W_1})_0 & \mathrm{i} (\omega - \epsilon \hat{V}_0 k_0) + \mu + (\hat{W_2})_{0} + (\hat{W_4})_{0} \end{pmatrix} $$ and we observe that $$ |\mathrm{det}(B_{k_0})| \geq |\operatorname{Im} \mathrm{det}(B_{k_0})| = 2 |\omega - \epsilon \hat{V}_0 k_0| |\mu + \hat{(W_4)}_0| \sim \omega $$ since $k_0 = \mathcal{O}(\omega^{1/2})$ and $\omega \gg 1$. This means that $B_{k_0}$ is invertible with $\|B_{k_0}^{-1}\|_{\C^{2 \times 2}}$ uniformly bounded in $\omega \gg 1$, and thus the same holds for the operator $B$. 
Inverting $B$ yields \begin{align*} Q_1 \bm v - B^{-1}[Q_1 JW Q_2 + Q_1 JW (PAP)^{-1} P JW]\bm v = B^{-1} Q_1 JW (PAP)^{-1} \bm\psi + B^{-1}Q_1 \bm\psi \end{align*} and since we have $W_i \in Y$ for $i=1,2,3,4$ we can exploit decay of the Fourier-coefficients $$ |(\hat{W_i})_{k}| \leq \frac{C}{\sqrt{1+k^2}} \quad\text{for all $k\in \Z$} $$ to bound $Q_1 JW Q_2\bm v = (\hat{JW})_{2k_0} \hat{\bm v}_{-k_0} \mathrm{e}^{\mathrm{i} k_0 (\cdot)}$: $$ \|Q_1 JW Q_2\|_{L^2 \to L^2} = \mathcal{O}(k_0(\omega,\mu)^{-1}) = \mathcal{O}(\omega^{-1/2}) \text{ as } \omega \to \infty. $$ Finally from the bounds of the first part we infer that \begin{align*} \|Q_1 JW (PAP)^{-1} P JW\|_{L^2 \to L^2} &= \mathcal{O}(\omega^{-1/2}) \text{ as }\omega \to \infty,\\ \|Q_1 JW (PAP)^{-1}\|_{L^2 \to L^2} &= \mathcal{O}(\omega^{-1/2}) \text{ as }\omega \to \infty \end{align*} and as a conclusion we arrive at \eqref{eq:neumann_form} which is all we had to prove.
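For the reader's convenience we spell out the Neumann-series step implicit above; it is a standard argument containing nothing specific to the present setting. Choosing $\lambda^*$ so large that $\|T_1\|_{L^2 \to L^2} \leq \tfrac{1}{2}$ for all $\omega \geq \lambda^*$, the operator $I + T_1$ is invertible with \begin{displaymath} (I+T_1)^{-1} = \sum_{k=0}^{\infty} (-T_1)^k, \qquad \|(I+T_1)^{-1}\|_{L^2 \to L^2} \leq \frac{1}{1-\|T_1\|_{L^2 \to L^2}} \leq 2, \end{displaymath} so that \eqref{eq:neumann_form} gives $\|\bm v\|_{L^2} \leq 2\|T_2\|_{L^2 \to L^2} \|\bm\psi\|_{L^2} \leq C \|\bm\psi\|_{L^2}$ with $C$ independent of $\lambda$, which is exactly the reversed estimate \eqref{eq:resolvent_reversed}.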
\section{Introduction}\label{sec:intro} Let $Q = (I,\Omega)$ be a quiver. Fix a vector $q\in (\mathbb{C}^\times)^I$. Associated to these data is a noncommutative algebra $\Lambda^q$, the {\em multiplicative preprojective algebra} \cite{CBS} of $Q$ with parameter $q$. Letting $\alpha\in \mathbb{Z}_{\geq 0}^I$ be a dimension vector for $Q$ and choosing a stability condition $\theta\in \mathbb{Z}^I$, we get a moduli space $\mathcal{M}_\theta^q(\alpha)$ of $\theta$-semistable representations of $\Lambda^q$ with dimension vector $\alpha$, called a {\em multiplicative quiver variety}, investigated in \cite{CBS, Yamakawa} (and both investigated and substantially generalized in \cite{Boalch}). Multiplicative quiver varieties provide concrete realizations of character varieties and related spaces: see \cite{BoalchYamakawa, BezKap, ST} among others. \subsection{Results} As for its cousins, the Nakajima quiver varieties, the multiplicative quiver variety $\mathcal{M}_{\theta}^q(\alpha)$ is defined as a GIT quotient (at a character $\chi_\theta: \mathbb{G}\rightarrow \Gm$) of an affine algebraic variety $\operatorname{Rep}(\Lambda^q,\alpha)$ by the group $\mathbb{G} = \big(\prod_i GL(\alpha_i)\big)/\Delta(\Gm)$, a product of general linear groups modulo the diagonal copy of $\Gm$; when it is a {\em free} quotient, this endows $\mathcal{M}_{\theta}^q(\alpha)$ with a map $\mathcal{M}_{\theta}^q(\alpha)\rightarrow B\mathbb{G}$. The rational cohomology $H^*(B\mathbb{G},\mathbb{Q})$ is pure in the sense of Hodge theory: $H^m(B\mathbb{G},\mathbb{Q}) = W_mH^m(B\mathbb{G},\mathbb{Q})$, where $ W_mH^m(B\mathbb{G},\mathbb{Q})$ denotes the weight $m$ part. 
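As a point of orientation (a standard computation, recalled here rather than proved): for a single general linear group one has \begin{displaymath} H^*(BGL_n(\mathbb{C}),\mathbb{Q}) \cong \mathbb{Q}[c_1,\dots,c_n], \qquad c_i \in H^{2i}(BGL_n(\mathbb{C}),\mathbb{Q}), \end{displaymath} where the Chern class $c_i$ is of Hodge--Tate type $(i,i)$, hence pure of weight $2i$; the same purity holds for the classifying space of any connected reductive group, in particular for $\mathbb{G}$.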
Thus, the image of the pullback map on cohomology must land in the pure part of $H^*\big(\mathcal{M}_{\theta}^q(\alpha), \mathbb{Q}\big)$ in the Hodge-theoretic sense, namely in the subspace \begin{displaymath} PH^*(\mathcal{M}_{\theta}^q(\alpha)) \overset{\operatorname{def}}{=} \displaystyle\bigoplus_m W_mH^m\big(\mathcal{M}_{\theta}^q(\alpha), \mathbb{Q}\big). \end{displaymath} The main result of the present paper is: \begin{thm}\label{main thm} \mbox{} \begin{enumerate} \item Suppose that $U\subseteq \mathcal{M}_{\theta}^q(\alpha)^{\operatorname{s}}$ is any connected open subset of the stable locus of the multiplicative quiver variety $\mathcal{M}_{\theta}^q(\alpha)$. Then the induced map on cohomology \begin{displaymath} H^*(B\mathbb{G}, \mathbb{Q})\rightarrow H^*\big(U, \mathbb{Q}\big) \end{displaymath} defines a surjection onto the pure cohomology $PH^*(U) = \displaystyle\bigoplus_m W_mH^m\big(U, \mathbb{Q}\big)$. \item In particular, if $\mathcal{M}_{\theta}^q(\alpha) = \mathcal{M}_{\theta}^q(\alpha)^{\operatorname{s}}$ and $\mathcal{M}_{\theta}^q(\alpha)$ is connected, then \begin{displaymath} H^*(B\mathbb{G}, \mathbb{Q})\rightarrow H^*\big(\mathcal{M}_{\theta}^q(\alpha), \mathbb{Q}\big) \end{displaymath} surjects onto $PH^*\big(\mathcal{M}_{\theta}^q(\alpha)\big)$. \end{enumerate} \end{thm} \noindent In light of Theorem 1.2 of \cite{McNKirwan}, Theorem \ref{main thm} is nicely consonant with Hausel's ``purity conjecture'' (cf. \cite{Hausel} as well as \cite[Theorem~1.3.1 and Corollary~1.3.2]{HLV}, and the discussion around Conjecture 1.1.3 of \cite{HWW}), which predicts that when $\mathcal{M}_{\theta}^q(\alpha) = \mathcal{M}_{\theta}^q(\alpha)^{\operatorname{s}}$, one should have an isomorphism $PH^*(\mathcal{M}_{\theta}^q(\alpha)^{\operatorname{s}}) \cong H^*\big(\mathfrak{M}_{\theta}(\alpha)^{\operatorname{s}},\mathbb{Q}\big)$, where $\mathfrak{M}_{\theta}(\alpha)^{\operatorname{s}}$ denotes the corresponding Nakajima quiver variety. 
In the special case in which $Q$ is a quiver with a single vertex and $g\geq 1$ loops, the dimension vector is $\alpha=n$, and $q\in \mathbb{C}^\times$ is a primitive $n$th root of unity, the multiplicative quiver variety $\mathcal{M}_{\theta}^q(\alpha)$ is identified with the $GL_n$-character variety $\operatorname{Char}(\Sigma_g, GL_n, q\operatorname{Id})$ of a genus $g$ surface with a single puncture with residue $q\operatorname{Id}$, sometimes called a genus $g$ {\em twisted character variety} \cite{HR2}. We obtain: \begin{corollary}\label{character var} The pure cohomology $PH^*\big(\operatorname{Char}(\Sigma_g, GL_n, q\operatorname{Id})\big)$ is generated by tautological classes. \end{corollary} Corollary \ref{character var} has already appeared in \cite{Shende}, where it was deduced, via the non-abelian Hodge theorem, from Markman's theorem \cite{Markman} that the cohomology of the moduli space of $GL_n$-Higgs bundles of degree $1$ on a smooth projective genus $g$ curve is generated by tautological classes. A novelty of our result, compared to \cite{Shende}, is that we avoid invoking non-abelian Hodge theory: instead, we deduce Corollary \ref{character var} (as well as Theorem \ref{main thm}) via a more direct and concrete method that invokes only basic facts of ordinary mixed Hodge theory as in \cite{Deligne}.\footnote{On the other hand, a major source of interest in twisted character varieties lies \cite{HR2} in non-abelian Hodge theory, specifically the $P=W$ conjecture.} Theorem \ref{main thm} has the following slightly different but equivalent formulation. Choose a subgroup $\bS\subset \prod_i GL(\alpha_i)$ whose projection $\bS\rightarrow \mathbb{G}$ is a finite covering. 
Then one can form the stack quotient $\operatorname{Rep}(\Lambda^q,\alpha)^{\theta\operatorname{-s}}/\bS$, which comes with a morphism $\pi: \operatorname{Rep}(\Lambda^q,\alpha)^{\theta\operatorname{-s}}/\bS\rightarrow \mathcal{M}_{\theta}^q(\alpha)^{\operatorname{s}}$ that is a gerbe, in fact a torsor over the commutative group stack $BH$ where $H=\ker(\mathbb{S}\rightarrow\mathbb{G})$. We have an isomorphism $H^*(B\bS,\mathbb{Q})\cong H^*(B\mathbb{G},\mathbb{Q})$ and $\pi$ induces an isomorphism $H^*\big(\mathcal{M}_{\theta}^q(\alpha)^{\operatorname{s}},\mathbb{Q}\big)\cong H^*\big(\operatorname{Rep}(\Lambda^q,\alpha)^{\theta\operatorname{-s}}/\bS,\mathbb{Q}\big)$. Thus Theorem \ref{main thm} can be restated as: \begin{thm}\label{stack main thm} For each connected open substack $U\subseteq \operatorname{Rep}(\Lambda^q,\alpha)^{\theta\operatorname{-s}}/\mathbb{S}$, the pure cohomology $PH^*(U)$ is generated as a $\mathbb{Q}$-algebra by the Chern classes of tautological bundles $\operatorname{Rep}(\Lambda^q,\alpha)^{\theta\operatorname{-s}}\times_{\bS} V$ associated to finite-dimensional representations $V$ of $\bS$. \end{thm} It is Theorem \ref{stack main thm} that we prove directly: the tautological bundles $\operatorname{Rep}(\Lambda^q,\alpha)^{\theta\operatorname{-s}}\times_{\bS} V$ that appear naturally and geometrically in our proof do not themselves descend to the multiplicative quiver variety in general, so it is more convenient to work on the Deligne-Mumford stack $\operatorname{Rep}(\Lambda^q,\alpha)^{\theta\operatorname{-s}}/\bS$. Unlike the situation of quiver varieties in \cite{McNKirwan}, we know of no obvious generalizations of Theorems \ref{main thm} and \ref{stack main thm} to other even-oriented cohomology theories (such as topological $K$-theory or elliptic cohomology). However, we do obtain the following analogue of Theorem 1.6 of \cite{McNKirwan}. 
\begin{thm}\label{derived cat} Suppose there is some vertex $i\in I$ for which the dimension vector $\alpha$ satisfies $\alpha_i=1$, and let $\mathcal{M} = \mathcal{M}_{\theta}^q(\alpha)^{\operatorname{s}}$. Let $D(\mathcal{M})$ denote the unbounded quasicoherent derived category of $\mathcal{M}$, and $D^b_{\operatorname{coh}}(\mathcal{M})$ its bounded coherent subcategory. \begin{enumerate} \item The category $D(\mathcal{M})$ is generated by tautological bundles. \item There is a finite list of tautological bundles from which every object of $D^b_{\operatorname{coh}}(\mathcal{M})$ is obtained by finitely many applications of (i) direct sum, (ii) cohomological shift, and (iii) cone. \end{enumerate} \end{thm} As for the analogous result in \cite{McNKirwan}, we emphasize that Theorem \ref{derived cat}(2) is {\em not} simply a formal consequence of Theorem \ref{derived cat}(1), since we do {\em not} include taking direct summands (i.e., retracts) among the operations (i)-(iii). It would be interesting to know generators for $D^b_{\operatorname{coh}}(\mathcal{M})$ for more general dimension vectors $\alpha$ than in Theorem \ref{derived cat}. \subsection{Method of Proof} The proof of Theorem \ref{stack main thm} is broadly similar to the proof used in \cite{McNKirwan} to establish that tautological classes generate the cohomology of Nakajima quiver varieties. A main part of the proof consists in producing a suitable modular compactification of the multiplicative quiver variety (or rather its Deligne-Mumford stack analogue). One major difference from the Nakajima quiver variety case arises already at this stage: one frequently relies on $q$ being an appropriate tuple of primitive roots of unity to deduce that $\mathcal{M}_\theta^q(\alpha)$ parameterizes only stable representations, independently of the choice of $\theta$; whereas in \cite{McNKirwan}, we assumed, without significant loss of generality, that $\theta$ was a generic stability condition. 
We note that such a genericity assumption here would exclude the possibility of applications to the character variety $\operatorname{Char}(\Sigma_g, GL_n, q\operatorname{Id})$; hence we avoid it. Instead we identify a compactification by a ``projective Artin stack'' $\overline{\mathcal{M}}$, a quotient of a quasiprojective scheme by a reductive group whose coarse moduli space is a projective scheme. Known techniques \cite{Kirwan, Edidin} allow us to replace the Artin stack compactification by a projective Deligne-Mumford stack at no cost to the validity of our approach. The second stage is to identify a complex on $\mathcal{M}_\theta^q(\alpha)\times\overline{\mathcal{M}}$ that, roughly speaking, resolves the graph of the embedding $\mathcal{M}_\theta^q(\alpha)\hookrightarrow \overline{\mathcal{M}}$. Again, while this is morally similar to \cite{McNKirwan}, the actual construction and proofs are more complicated and subtle. This is essentially because our compactification of the Nakajima quiver variety relied on a graded 3-Calabi-Yau algebra, whereas the compactification of $\mathcal{M}_\theta^q(\alpha)$ uses an algebra, denoted by $\sA$ in the body of this paper, that may (conjecturally) be what one might call a ``relative $2g$-Koszul algebra'' in most cases but (as far as we know) is not known to be so. Fortunately it turns out that we can proceed as if the algebra $\sA$ were known to have certain desired properties, carry out some constructions, and check by hand that the resulting complex behaves as hoped. Unfortunately, in the generality in which we work here (and again unlike \cite{McNKirwan}), it seems one cannot expect the complex to actually provide a resolution of the structure sheaf of the graph of the embedding: instead, we rely on work of Markman \cite{Markman} to show that an appropriate Chern class of the complex we built is the Poincar\'e dual of the fundamental class of the graph. 
The final step is to deduce the theorem via usual integral transform arguments. In \cite{McNKirwan}, we used Nakajima's result that the (integral) cohomology of a quiver variety is generated by algebraic cycles, hence is surjected onto by the cohomology of any compactification. Such an assertion is not true of the multiplicative quiver varieties $\mathcal{M}_\theta^q(\alpha)$. Instead, what is always true is that the cohomology of any reasonable smooth compactification---which is always Hodge-theoretically pure---surjects onto the pure part of the cohomology of any open subset. This yields the assertion of the theorem, which in any case would be the best possible result, given that the cohomology $H^*(B\mathbb{G},\mathbb{Q})$ is pure; but its Hodge-theoretic nature also necessitates working with rational cohomology. It is an interesting question to characterize the image of $H^*(B\mathbb{G},\mathbb{Z})$ in $H^*\big(\mathcal{M}_\theta^q(\alpha),\mathbb{Z}\big)$. \subsection{Acknowledgments} We are grateful to Gwyn Bellamy, Ben Davison, Tam\'as Hausel, and Travis Schedler for helpful conversations, and to Donu Arapura and Ajneet Dhillon for help with references. The first author was supported by EPSRC programme grant EI/I033343/1 and a Fisher Visiting Professorship at the University of Illinois at Urbana-Champaign. The second author was supported by NSF grants DMS-1502125 and DMS-1802094 and a Simons Foundation fellowship. The authors are also grateful to the Department of Mathematics of the University of Notre Dame for its hospitality during part of the preparation of this paper. \subsection{Notation} Throughout, $k$ denotes a field of characteristic $0$. In Sections \ref{sec:intro} and \ref{sec:coh}, $k=\mathbb{C}$. \section{Quivers and Multiplicative Preprojective Algebras} \subsection{Truncations of Graded Algebras} We will frequently use certain ``truncations'' of a $\mathbb{Z}_{\geq 0}$-graded algebra $A$ in what follows. 
For a $\mathbb{Z}$-graded vector space $V$ and integer $n$, we write $V_{\geq n} = \oplus_{m\geq n} V_m$, a vector space graded by $\{n, n+1, \dots\}$. We note the vector space injection $V_{\geq n}\rightarrow V$ that is the identity on the $m$th graded piece for $m\geq n$. \begin{defn} For a $\mathbb{Z}_{\geq 0}$-graded algebra $A$ and each $N\geq 0$, we define: $A\intN := A/A_{\geq N+1}.$ \end{defn} \subsection{Quivers, Doubles, and Triples} Let $Q = (I,\Omega)$ be a finite quiver, so that $s, t: \Omega\rightrightarrows I$ are the source and target maps: for $a\in\Omega$ we have $\xymatrix{\overset{s(a)}{\bullet}\ar[r]^{a} & \overset{t(a)}{\bullet}}$. The {\em double} of $Q$ is a quiver $\Qdbl = (I,H = \Omega\sqcup\overline{\Omega})$ with the same vertex set $I$ as for $Q$ and the set of arrows $H = \Omega\sqcup\overline{\Omega}$ where $\Omega$ is the arrow set of $Q$ and $\overline{\Omega}$ is a set equipped with a bijection to $\Omega$, written $\Omega\ni a \leftrightarrow a^*\in \overline{\Omega}$. We extend this bijection canonically to an involution on $H = \Omega\sqcup\overline{\Omega}$, still written $a\mapsto a^*$, and decree $s(a^*) = t(a), t(a^*)=s(a)$. For each arrow $a\in H$ we write \begin{displaymath} \epsilon(a) = \begin{cases} 1 & \text{if}\hspace{.5em} a\in\Omega,\\ -1 & \text{if}\hspace{.5em} a\in\overline{\Omega}.\end{cases} \end{displaymath} Fix an integer $N\geq 1$. The {\em graded tripled quiver} $Q^{\operatorname{gtr}}$ associated to $Q$ (cf. Section 4 of \cite{McNKirwan}) is a quiver defined as follows. We give $\Qgtr$ the vertex set $I^{\operatorname{gtr}} = I\times [0, N]$ where $I$ is the vertex set of $Q$. If $\Omega$ is the edge set of $Q$ and $H = \Omega\sqcup \overline{\Omega}$ the associated set of pairs of an edge together with an orientation, we give $\Qgtr$ the arrow set \begin{displaymath} \big(H\times [0, N-1]\big) \sqcup \big(I\times [0, N-1]\big). 
\hspace{3em} \text{Thus,} \end{displaymath} \begin{enumerate} \item for each $h\in H$, $n\in [0, N-1]$ we have arrows $(h,n)$ with $\xymatrix{\overset{(s(h),n)}{\bullet} \ar[r]^{(h,n)} & \overset{(t(h),n+1)}{\bullet}}$, i.e. \begin{displaymath} s(h,n) = (s(h), n) \hspace{1em} \text{and} \hspace{1em} t(h,n) = (t(h), n+1); \end{displaymath} \item for each $i\in I$, $n\in [0, N-1]$ we have arrows $t_{(i,n)}$ with $\xymatrix{\overset{(i,n)}{\bullet} \ar[r]^{t_{(i,n)}} & \overset{(i,n+1)}{\bullet}}$, i.e. \begin{displaymath} s(t_{(i,n)}) = (i,n) \hspace{1em} \text{and} \hspace{1em} t(t_{(i,n)}) = (i,n+1). \end{displaymath} \end{enumerate} More discussion can be found in \cite{McNKirwan}. \subsection{Path Algebras}\label{sec:path algebras} Let $S = \bigoplus_i S e_i$ be a semisimple algebra with an orthogonal system of idempotents $\{e_i\}$. Suppose $A$ is an algebra with a homomorphism $S\rightarrow A$. We say that $x\in A$ {\em has diagonal Peirce decomposition} if $\displaystyle x\in \bigoplus_{i\in I} e_iAe_i$, or equivalently if it lies in the centralizer $Z_{A}(S)$. Given a quiver $Q$, we let $kQ$ denote the path algebra of the quiver. Thus, we have a finite-dimensional semisimple $k$-algebra $S = \bigoplus_{i\in I} ke_i$ with idempotents $e_i$ labelled by the vertices $i\in I$. We define an $S$-bimodule $B = B(Q)$, with $k$-basis labelled by the arrows, and ``arrows written left-to-right,'' so $e_iae_j =0$ unless $i = s(a), j=t(a)$, and so that $e_{s(a)}ae_{t(a)} = a$. Then $kQ = T_S(B(Q))$ (the tensor algebra). It is natural to grade the path algebra $kQ$ of any quiver $Q = (I,H)$---for example, $k\Qdbl$---by taking the semisimple algebra $S$ to lie in degree $0$ and the arrows $h\in H$ to lie in degree $1$: this is the standard nonnegative grading on the tensor algebra. The algebra $k\Qdbl\langle t\rangle$ is thus naturally bi-graded, and hence has a total grading with $\deg(t)=1$. 
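To illustrate the conventions (an elementary example, not used later): let $Q$ have vertex set $I = \{1,2\}$ and a single arrow $a$ with $s(a) = 1$, $t(a) = 2$. Then \begin{displaymath} kQ = T_S(B(Q)) = ke_1 \oplus ke_2 \oplus ka, \qquad e_1 a e_2 = a, \quad e_2 a e_1 = 0, \quad a^2 = 0, \end{displaymath} with $e_1, e_2$ in degree $0$ and $a$ in degree $1$; here $\bigoplus_{i\in I} e_i (kQ) e_i = ke_1 \oplus ke_2 = S$, so the only elements with diagonal Peirce decomposition are those of $S$.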
We can also grade $k\Qgtr$ by putting the semisimple algebra $\displaystyle\bigoplus_{i\in I^{\operatorname{gtr}}} ke_i$ in degree $0$ and the arrows in degree $1$. We obtain a graded algebra homomorphism \begin{displaymath} k\Qdbl\langle t\rangle\longrightarrow k\Qgtr \hspace{3em} \text{by taking } \end{displaymath} \begin{displaymath} e_i\mapsto \sum_n e_{(i,n)}, \;\; i\in I, \hspace{2em} h\mapsto \sum_n (h,n), \;\; h\in H, \hspace{2em} t\mapsto \sum_{(i,n)}t_{(i,n)}. \end{displaymath} The graded algebra $k\Qgtr$ has the property $(k\Qgtr)_{\geq N+1} = 0$, so we obtain a homomorphism \begin{equation}\label{dbl to gtr} k\Qdbl\langle t\rangle\intN := k\Qdbl\langle t\rangle/k\Qdbl\langle t\rangle_{\geq N+1}\longrightarrow k\Qgtr. \end{equation} \begin{lemma}\label{lem:ind of quiver reps} Let $k\Qgtr\operatorname{-mod}$ denote the category of finite-dimensional left modules, and furthermore let $k\Qdbl\langle t\rangle\intN\operatorname{-gr}_{[0,N]}$ denote the category of finite-dimensional graded left modules concentrated in degrees $[0,N]$. Then the homomorphism \eqref{dbl to gtr} determines an equivalence of categories: \begin{displaymath} k\Qgtr\operatorname{-mod} \xrightarrow{\simeq} k\Qdbl\langle t\rangle\intN\operatorname{-gr}_{[0,N]}. \end{displaymath} \end{lemma} \noindent This equivalence identifies representations of $k\Qdbl[t]\intN$ with representations of the quotient $k\Qgtr/J$, where $J$ denotes the two-sided ideal \begin{equation}\label{eq:t commutes} J = \Big(\big\{ t_{(s(h), n)}\cdot(h, n+1) - (h,n)\cdot t_{(t(h),n+1)}\; \big| \; h\in H, \hspace{.3em} n\in [0,N-2]\big\}\Big). \end{equation} \subsection{Universal Localizations} We briefly review some aspects of universal localizations that may be unfamiliar to the reader, using Chapter 4 of \cite{Schofield} as our reference; see also \cite{Cohn}. Suppose that $R$ is a ring with $1$ and $\Sigma$ is a set of elements of $R$. 
Then there is a ring $R_\Sigma$ with a homomorphism $R\rightarrow R_\Sigma$ that is universal with respect to the property that for every $r\in \Sigma$, $r$ becomes invertible in $R_\Sigma$. The ring $R_\Sigma$ is called the {\em universal localization} of $R$ at $\Sigma$; an alternative notation that is sometimes preferable is $\Sigma^{-1}R$. The universal localization is constructed as follows: letting $\Sigma^{-1}$ denote the set of symbols $a^{-1}$ for $a\in \Sigma$, we define \begin{displaymath} \Sigma^{-1} R = R_\Sigma := R\langle\Sigma^{-1}\rangle/(\{a^{-1} a -1, \; a a^{-1} - 1 \; |\; a\in\Sigma\}). \end{displaymath} This has the universal property claimed. We will need the following properties, which follow immediately from the universal property. \begin{prop}\label{prop:univ loc} Suppose $R$ is a ring with $1$. \begin{enumerate} \item If $t\in Z(R)$ is central, then $R_{\{t\}}$ is isomorphic to the Ore localization of $R$ at $t$. \item If $\Sigma, \Sigma'\subseteq R$ are subsets, let $\overline{\Sigma}'$ denote the image of $\Sigma'$ in $R_\Sigma$. Then $(R_\Sigma)_{\overline{\Sigma}'} \cong R_{\Sigma\cup\Sigma'}$. \item Given a two-sided ideal $I\subseteq R$, let $\overline{\Sigma}$ denote the image of $\Sigma$ in $R/I$ and $I_\Sigma$ denote the two-sided ideal in $R_\Sigma$ generated by $I$. Then $(R/I)_{\overline{\Sigma}} \cong R_\Sigma/I_\Sigma$. \end{enumerate} \end{prop} \subsection{Multiplicative Preprojective Algebras} We review the multiplicative preprojective algebra of a quiver $Q$ as defined in \cite{CBS}. Given a quiver $Q$ with double $\Qdbl = (I,H)$, for each arrow $a\in H$ of $\Qdbl$, we define $g_a = 1 + aa^*\in k\Qdbl$. Write $L_Q$ for the algebra obtained by universal localization of $k\Qdbl$ inverting $\Sigma = \{g_a\; |\; a\in H\}$. Identify the tuple $q\in (k^\times)^I$ with the element $\sum_{i\in I} q_i e_i\in S$. 
Crawley-Boevey and Shaw choose an ordering of the arrows in $H$ and define $\displaystyle \rho_{\operatorname{CBS}} = \prod^{\longrightarrow}_{a\in H} g_a^{\epsilon(a)} - q$ (the arrow over the product indicates that it is taken in the chosen order). It is proven in \cite{CBS} that, up to isomorphism, the quotient algebra $L_Q/(\rho_{\operatorname{CBS}})$ does not depend on the choice of ordering. Thus, in this paper we specifically fix an ordering $\Omega = \{a_1, \dots, a_g\}$ on the arrows in $Q$, and let \begin{equation}\label{rho} \rho_{\operatorname{CBS}} = g_{a_1}g_{a_2}\dots g_{a_g} g_{a_1^*}^{-1} \dots g_{a_g^*}^{-1} -q. \end{equation} \begin{defn} The associated multiplicative preprojective algebra is \begin{displaymath} \Lambda^q = \Lambda^q(Q) = L_Q/(\rho_{\operatorname{CBS}}), \end{displaymath} where $\rho_{\operatorname{CBS}}$ is defined as in \eqref{rho}. \end{defn} \subsection{Homogenized Multiplicative Preprojective Algebras} A principal tool in this paper is a certain graded algebra $\sA$ that ``homogenizes'' the multiplicative preprojective algebra $\Lambda^q$ of \cite{CBS}. Here we construct the algebra $\sA$ and collect some basic facts about $\sA$ and its relation to the multiplicative preprojective algebra $\Lambda^q$. Thus, fix a quiver $Q$. We consider $k\Qdbl[t] = k\Qdbl\langle t\rangle/(ta-at \; | \; a\in k\Qdbl)$ as a nonnegatively graded algebra, with the generators $a\in H, t$ all in degree 1, and $S = \oplus_{i\in I} ke_i$ in degree $0$. We let \begin{displaymath} G_a := t^2 + a a^* \in k\Qdbl[t] \hspace{2em} \text{for all} \hspace{2em} a\in H. \end{displaymath} \begin{remark} Each $G_a$ has diagonal Peirce decomposition: more precisely, \begin{displaymath} e_{s(a)}G_a = e_{s(a)}t^2 + aa^* = G_a e_{s(a)}, \hspace{2em} \text{and} \hspace{2em} e_iG_a = e_it^2 = t^2 e_i = G_a e_i \hspace{1em} \text{for $i\neq s(a)$}. 
\end{displaymath} \end{remark} We note the obvious equalities \begin{equation}\label{G_a composed a} G_a a = aG_{a^*}, \hspace{3em} a^* G_a = G_{a^*} a^*. \end{equation} \noindent Given $q \in (k^\times)^I$, we identify $q$ with $q:= \sum_{i\in I} q_ie_i\in k\Qdbl$, a sum of idempotents in the path algebra (which thus also has diagonal Peirce decomposition). Analogously to \cite{CBS}, the algebra $k\Qdbl[t]$ admits a universal localization in which the elements $G_a$, $a\in H$, and $t$ are inverted: we write $\sL_t$ for this universal localization. The algebra $\sL_t$ contains invertible elements $\displaystyle g_a = t^{-2}G_a = 1 + \frac{a}{t}\frac{a^*}{t}$ in graded degree $0$. We have $(\sL_t)_0 \cong L_Q$, where $L_Q$ is the universal localization of $k\Qdbl \cong k\Qdbl[t^{\pm 1}]_0$ at the elements $g_a$, $a\in H$, as in \cite{CBS} and reviewed above. As above, fix an ordering $\Omega = \{a_1, \dots, a_g\}$ on the arrows in $Q$. Write \begin{equation}\label{D defn} D = G_{a_1}\dots G_{a_g}, \hspace{1em} D^* = q(G_{a_g^*}\dots G_{a_1^*}), \end{equation} \begin{equation}\label{rho defn} \rho = D - D^* = (G_{a_1}\dots G_{a_g}) - q(G_{a_g^*}\dots G_{a_1^*}) \in k\Qdbl[t]. \end{equation} \begin{defn} We write $\sA = k\Qdbl[t]/(\rho)$, where $(\rho)$ denotes the two-sided ideal generated by $\rho$. \end{defn} The element $\rho$ has diagonal Peirce decomposition, and so $\rho e_i = e_i \rho$, and $(\rho) = (\{\rho e_i| i\in I\})$. \begin{prop}\label{prop:algebra A} Write $\Sigma = \{G_a \; | \; a\in H\}\cup\{t\}$. We have: \begin{enumerate} \item $\sA$ is a graded algebra where $a_i, a_i^*$ and $t$ have degree $1$ (and $S = \sum_{i\in I} ke_i$ lies in degree $0$). 
\item The universal localization \begin{equation}\label{universal localization} \Lambda_t:= \Sigma^{-1}\sA \end{equation} of $\sA$ obtained by inverting all $G_a, a\in H$, and $t$, is a graded algebra, and $\Lambda_t \cong \Lambda^q(Q)[t^{\pm 1}]$ where $\Lambda^q(Q)=: \Lambda^q$ denotes the multiplicative preprojective algebra of \cite{CBS}. \end{enumerate} \end{prop} The isomorphism \eqref{universal localization} of part (2) of Proposition \ref{prop:algebra A} follows from Proposition \ref{prop:univ loc}. \section{Representations and their Moduli} \subsection{Representations of $k\Qdbl$ and $k\Qgtr$}\label{reps of kQdbl and kQgtr} Fixing some $N\geq 2g$, where $g$ is the number of arrows in $Q$, we form the graded-tripled quiver $\Qgtr$ associated to $Q$ as above.\footnote{Thus, in particular, $N$ is at least as large as the degree of the relation $\rho$.} Given a dimension vector $\alpha \in \mathbb{Z}_{\geq 0}^I$ for the quiver $\Qdbl$, we write $\algtr \in \mathbb{Z}_{\geq 0}^{I\times [0,N]}$ for the dimension vector for $k\Qgtr$ for which $\algtr_{i,n} = \alpha_i$ for all $n\in [0,N]$. We write $\operatorname{Rep}(k\Qdbl,\alpha)$ for the space of representations of $k\Qdbl$ with dimension vector $\alpha$ and $\mathbb{G} = \prod_i GL(\alpha_i)$ for the automorphism group; thus \begin{displaymath} \operatorname{Rep}(k\Qdbl,\alpha) := \prod_{h\in H} \operatorname{Hom}(k^{\alpha_{s(h)}}, k^{\alpha_{t(h)}}). \end{displaymath} Similarly we write $\operatorname{Rep}(k\Qgtr,\algtr)$ for the space of representations of $k\Qgtr$ with dimension vector $\algtr$, and $\Ggtr$ for the automorphism group. As in the construction of Section 4.3 of \cite{McNKirwan}, there is a natural ``induction functor'' from the category of representations of $k\Qdbl$ with dimension vector $\alpha$ to the category of representations of $k\Qgtr$ of dimension vector $\algtr$. The construction proceeds as follows.
To a representation $V$ of $k\Qdbl$ we may associate the $\mathbb{Z}_{\geq 0}$-graded vector space $V[t]$, and let arrows $h$ of $\Qdbl$ act as multiplication followed by shift-of-grading. This makes $V[t]$ into a graded left $k\Qdbl[t]$-module. We then form $V[t]/V[t]_{\geq N+1}$, a graded left $k\Qdbl[t]\intN$-module, and finally apply Lemma \ref{lem:ind of quiver reps} to get a representation of $k\Qgtr$: in fact, a representation of the quotient $k\Qgtr/J$ where $J$ is as in \eqref{eq:t commutes}. More concretely, the above construction is the following. Suppose we have a representation $V = (V_i)_{i \in I}$ of $k\Qdbl$ of dimension vector $\alpha$. We obtain a representation of $k\Qdbl[t]$ on a vector space $V_{\bullet, \bullet}$ of dimension vector $\algtr$ defined by: \begin{enumerate} \item setting $V_{i,n} := V_i$ for all $n\in [0,N]$; \item defining $t_{i,n} = t\cdot -: V_{i,n} = V_i \xrightarrow{\operatorname{id}} V_i = V_{i,n+1}$ to act by shift of $\mathbb{Z}$-grading; and \item defining each generator of $k\Qdbl[t]$ corresponding to $h\in H$ to act as the composite \begin{displaymath} (h,n): V_{s(h),n} = V_{s(h)} \xrightarrow{h\cdot -} V_{t(h)} = V_{t(h),n}\xrightarrow{t\cdot -} V_{t(h),n+1}. \end{displaymath} \end{enumerate} The construction determines a morphism of algebraic varieties (``induction'') \begin{displaymath} \mathsf{Ind}^\circ: \operatorname{Rep}(k\Qdbl,\alpha)\longrightarrow \operatorname{Rep}(k\Qgtr, \algtr). \end{displaymath} \begin{displaymath} \text{Write } \hspace{2em} \mathbb{G} = \prod_i GL(V_{i})\hspace{1em}\text{and}\hspace{1em} \Ggtr = \prod_{(i,n)\in I\times[0,N]} GL(V_{i,n}) \cong \prod_{n\in [0,N]}\mathbb{G}, \end{displaymath} with the diagonal homomorphism $\displaystyle \operatorname{diag}: \mathbb{G}\rightarrow \Ggtr \cong \prod_{n\in [0,N]} \mathbb{G}$. Then the morphism $\mathsf{Ind}^\circ$ is $(\mathbb{G}, \Ggtr)$-equivariant.
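\begin{remark}
As a consistency check, the relations in $J$ hold automatically for representations in the image of $\mathsf{Ind}^\circ$: each $t_{i,n}$ is the identity on $V_i$, so the two composites
\begin{displaymath}
t_{t(h),n+1}\circ (h,n) \hspace{1em}\text{and}\hspace{1em} (h,n+1)\circ t_{s(h),n}: V_{s(h),n}\longrightarrow V_{t(h),n+2}
\end{displaymath}
are both equal to $h\cdot -$ on underlying vector spaces. This is exactly the relation ``$ta = at$'' of \eqref{eq:t commutes}.
\end{remark}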
We thus get a natural $\Ggtr$-equivariant morphism \begin{equation}\label{Ind-map} \mathsf{Ind}: \Ggtr\times_{\mathbb{G}} \operatorname{Rep}(k\Qdbl,\alpha)\longrightarrow \operatorname{Rep}(k\Qgtr, \algtr). \end{equation} Thus, given a representation $(a_h: V_{s(h)}\rightarrow V_{t(h)})_{h \in H}$ of $k\Qdbl$ on $V$, and $(g_{i,n})\in \Ggtr$, we have \begin{displaymath} \mathsf{Ind}\big((g_{i,n}), a_h\big) = \big((h,n), t_{i,n}\big) \, \text{where} \, (h,n) = g_{t(h),n+1}a_h g_{s(h),n}^{-1} \, \text{and} \, t_{i,n} = g_{i,n+1} g_{i,n}^{-1}. \end{displaymath} \begin{prop} The map $\mathsf{Ind}$ of \eqref{Ind-map} defines a $\Ggtr$-equivariant open immersion of $\Ggtr\times_{\mathbb{G}} \operatorname{Rep}(k\Qdbl,\alpha)$ in $\operatorname{Rep}(k\Qgtr/J,\algtr)$, whose image consists of those $\big((h,n), t_{i,n}\big)$ for which: \begin{equation} \text{$t_{i,n}$ is an isomorphism for all $n\in [0,N-1]$.} \tag{$\dagger$} \end{equation} \end{prop} \subsection{Representations of $\sA$ and $\sA\intN$}\label{induction and truncation} Let $\sA\operatorname{-Gr}$ denote the category of graded left $\sA$-modules. We also consider the category $\sA\intN\operatorname{-Gr}_{\geq 0}$ of those graded left $\sA\intN$-modules $M$ for which $M_i=0$ for $i\notin [0,N]$. We remark that $\sA\intN\operatorname{-Gr}_{\geq 0}$ can naturally be viewed as a full subcategory of the category $\sA\intN\operatorname{-Gr}$ of all graded left $\sA\intN$-modules, hence also of $\sA\operatorname{-Gr}$. Define a functor of truncation, \begin{displaymath} \tau_{[0,N]}: \sA\operatorname{-Gr} \longrightarrow \sA\intN\operatorname{-Gr}_{\geq 0}, \end{displaymath} by $M\mapsto \tau_{[0,N]}M := M_{\geq 0}/M_{\geq N+1}$. As above, we have a graded vector space injection $\tau_{[0,N]}(M)\rightarrow M$ that is the identity on the $m$th graded piece for $m\in [0,N]$ and is zero elsewhere; this map is $\sA_{\leq m}$-linear on $M_{N-m}$. 
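We note a basic example for later use (see Section \ref{sec:Ext complex}): if the underlying graded $k[t]$-module of $M$ is $\overline{M}\otimes_k k[t]$ for a finite-dimensional vector space $\overline{M}$, with $t$ acting by the evident shift (as for the modules $V[t]$ appearing in the induction construction above), then $\tau_{[0,N]}M$ has a copy of $\overline{M}$ in each degree $n\in [0,N]$; the element $t$ acts as the identity map from degree $n$ to degree $n+1$ for $n<N$, and annihilates the degree-$N$ piece, since $M_{\geq N+1}$ has been factored out.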
\subsection{Representations of $\sA$ and $\Lambda^q$} We note: \begin{remark} The functor $\Lambda_t\operatorname{-Gr}\longrightarrow \Lambda^q\operatorname{-Mod}$, $M\mapsto M_0$, is an equivalence of categories. \end{remark} Recall from \eqref{universal localization} that, letting $\Sigma = \{G_a \; | \; a\in H\}\cup\{t\}$, we have a graded algebra isomorphism $\Sigma^{-1}\sA \cong \Lambda_t = \Lambda^q[t^{\pm 1}]$, and hence a graded algebra homomorphism $\sA\rightarrow \Lambda^q[t^{\pm 1}]$. Given a left or right $\Lambda^q$-module $\overline{\mathcal{M}}$, we form a graded left or right $\Lambda_t$-module $\mathcal{M} = \overline{\mathcal{M}}[t^{\pm 1}]$, and thus a graded $\sA$-module $M = \overline{\mathcal{M}}[t]=\mathcal{M}_{\geq 0}$. This defines a functor \begin{displaymath} R: \Lambda^q\operatorname{-Mod}\longrightarrow \sA\operatorname{-Gr}_{\geq 0}. \end{displaymath} In the opposite direction, we have a functor $(\Lambda_t\otimes_{\sA} -)_0: \sA\operatorname{-Gr}_{\geq 0}\rightarrow \Lambda^q\operatorname{-Mod}$. We have: \begin{lemma}\label{dual of fin diml lambda module} \mbox{} \begin{enumerate} \item The functors $\xymatrix{(\Lambda_t\otimes_{\sA} -)_0: \sA\operatorname{-Gr}_{\geq 0}\ar@<.5ex>[r] & \Lambda^q\operatorname{-Mod}:R\ar@<.5ex>[l] }$ form an adjoint pair. \item If $\overline{\mathcal{M}}$ is a finite-dimensional left $\Lambda^q$-module then the graded left $\sA$-module $M = \overline{\mathcal{M}}[t]$ is finitely generated and projective as a left $S_t$-module and as a left $k[t]$-module. Moreover, we have $\operatorname {Hom}_{k[t]}(M,k[t]) \cong \operatorname {Hom}_k(\overline{\mathcal{M}},k)[t]$ as a graded right $\sA$-module.
\end{enumerate} \end{lemma} \subsection{Representation Spaces and Group Actions} Because the multiplicative preprojective algebra $\Lambda^q$ is the quotient $L_Q/(\rho_{\operatorname{CBS}})$ of the localization $L_Q$ of $k\Qdbl$ by the ideal generated by $\rho_{\operatorname{CBS}}$, the space $\operatorname{Rep}(\Lambda^q,\alpha)$ of left $\Lambda^q$-modules with dimension vector $\alpha$ is naturally a locally closed subscheme of $\operatorname{Rep}(k\Qdbl,\alpha)$: it is the closed subset, defined by vanishing of $\rho_{\operatorname{CBS}}$, of the open set defined by invertibility of the elements $g_a$. Similarly, the algebra $\sA\intN$ is a quotient of $k\Qdbl[t]\intN$ and thus, via Lemma \ref{lem:ind of quiver reps}, the space $\operatorname{Rep}_{\operatorname{gr}}(\sA\intN, \algtr)$ of graded left $\sA\intN$-modules concentrated in degrees $[0,N]$ is identified with a closed subscheme of $\operatorname{Rep}(k\Qgtr, \algtr)$ defined by the vanishing of the images of $\rho$ and $J$ in $k\Qgtr$. It is immediate from the construction of Section \ref{reps of kQdbl and kQgtr} that: \begin{prop}[cf. Prop. 4.7 of \cite{McNKirwan}] The morphism $\mathsf{Ind}$ of \eqref{Ind-map} restricts to an open immersion: \begin{displaymath} \mathsf{Ind}: \Ggtr \times_{\mathbb{G}} \operatorname{Rep}(\Lambda^q,\alpha) \rightarrow \operatorname{Rep}_{\operatorname{gr}}(\sA\intN, \algtr). \end{displaymath} Its image consists of those representations on which the elements $t, G_a$ act invertibly whenever their domain and target lie in the range $[0,N]$. \end{prop} \begin{corollary} The map $\mathsf{Ind}$ defines an open immersion of moduli stacks \begin{displaymath} \operatorname{Rep}(\Lambda^q,\alpha)/\mathbb{G}\rightarrow \operatorname{Rep}_{\operatorname{gr}}(\sA\intN, \algtr)/\Ggtr. \end{displaymath} \end{corollary} \subsection{Semistability and Stability}\label{sec:semistability} We next discuss (semi)stability of representations and the corresponding GIT quotients. 
For any quiver $Q = (I,\Omega)$ with dimension vector $\alpha\in \mathbb{Z}^I_{\geq 0}$, a GIT stability condition is given by $\theta\in\mathbb{Z}^I$ satisfying $\sum_i \theta_i\alpha_i = 0$. The vector $\theta$ determines a character $\chi_{\theta}: \prod_i GL(\alpha_i)\rightarrow\Gm$, $\chi_{\theta}\big((g_i)_{i\in I}\big) = \prod_i \det(g_i)^{\theta_i}$, and the condition $\sum_i \theta_i\alpha_i = 0$ guarantees that the diagonal copy $\Delta(\Gm)$ of $\Gm$ in $\prod_i GL(\alpha_i)$ lies in the kernel of $\chi_{\theta}$; we require this because $\Delta(\Gm)$ acts trivially on $\operatorname{Rep}(Q,\alpha)$. Given dimension vectors $\beta,\alpha$, we write $\beta <\alpha$ if $\beta\neq\alpha$ and $\beta_i\leq \alpha_i$ for all $i\in I$. We now turn to stability conditions for the doubled and tripled quivers $\Qdbl$ and $\Qgtr$ for a fixed quiver $Q$. Suppose $\theta$ is a stability condition for $\Qdbl$ and dimension vector $\alpha$. We construct a stability condition $\thetagtr$ for $\Qgtr$ with dimension vector $\algtr$ as follows. For a representation $M$ of $k\Qgtr$ of dimension vector $\algtr$, we write $\delta_{i,n}(M) := \dim(M_{i,n})$; we will write $\thetagtr$ as a linear combination of the $\delta_{i,n}$. Also, we note that it suffices to construct a {\em rational} linear functional $\thetagtr$, since any positive integer multiple of $\thetagtr$ evidently defines the same stable and semistable loci. We fix an ordering on the vertices of $Q$, identifying $I = \{1, \dots, r\}$, and a positive integer $\displaystyle T \gg 0$. We define: \begin{displaymath} \thetagtr := \sum_{i=1}^r T^i\big[\delta_{i,N}-\delta_{i,0}\big] + \sum_{i\in I} \theta_i \delta_{i,0}. \end{displaymath} \begin{prop} Suppose $M = \mathsf{Ind}(V)$ for some representation $V$ of $k\Qdbl$ with dimension vector $\alpha$. Then $M$ is semistable, respectively stable, with respect to $\thetagtr$ if and only if $V$ is semistable, respectively stable, with respect to $\theta$.
\end{prop} \noindent The proof is an easy adaptation of that of Proposition 4.12(4) of \cite{McNKirwan}. We remark that the above construction does {\em not} match \cite{McNKirwan}: there we chose to construct a stability $\thetagtr$ for $\Qgtr$ that would be nondegenerate if $\theta$ was, whereas here we ignore this possible requirement. While it would be possible to copy the construction of a stability $\thetagtr$ from \cite{McNKirwan} and prove analogues of the statements of \cite{McNKirwan}, there are cases important to multiplicative quiver varieties in which it is not possible to find a stability condition for $k\Qdbl$ that is nondegenerate in the sense used in \cite{McNKirwan}: for example, the case when $Q$ has a single vertex and loops based at that vertex, with dimension vector $\alpha = n>1$. However, again for multiplicative quiver varieties, in some interesting cases the choice of the parameter $q$ can guarantee that every semistable representation of $\Lambda^q$ is automatically stable (though not for numerical reasons, as nondegeneracy guarantees). Indeed, we say $q = (q_i)_{i\in I}\in (k^\times)^I$ is a {\em primitive $\alpha$th root of unity} if $q^\alpha :=\prod q_i^{\alpha_i} = 1$ and $q^\beta\neq 1$ for all $0<\beta<\alpha$. We have: \begin{lemma}[\cite{CBS}, Lemma 1.5] \mbox{} \begin{enumerate} \item Suppose that $M$ is a representation of $\Lambda^q$ with dimension vector $\alpha$. Then $q^\alpha = 1$. \item In particular, if $q$ is a primitive $\alpha$th root of $1$, then every representation of $\Lambda^q$ of dimension vector $\alpha$ is $\theta$-stable for every $\theta$. 
\end{enumerate} \end{lemma} For example, if $Q= (\{\ast\},E)$ where $E$ has $g$ loops at $\ast$, $\alpha = n$, and $q$ is a primitive $n$th root of $1$, then every representation of $\Lambda^q$ of dimension $n$ is stable for every $\theta$; the corresponding moduli space of representations of $\Lambda^q$ is the character variety $\operatorname{Char}(\Sigma_g, GL_n, q\operatorname{Id})$ of the introduction. \begin{remark} It would be interesting to characterize those stability conditions $\thetagtr$ for $k\Qgtr$ with the property that there is a stability condition $\theta$ for $k\Qdbl$ so that if $M = \mathsf{Ind}(V)$ then $M$ is $\thetagtr$-(semi)stable if and only if $V$ is $\theta$-(semi)stable. \end{remark} \begin{notation} We write \begin{displaymath} \mathcal{M}_{\theta}^q(\alpha) := \operatorname{Rep}(\Lambda^q,\alpha)/\!\!/_{\theta}\mathbb{G} \;\;\text{and}\;\; \mathcal{M}_{\theta}^q(\alpha)^{\operatorname{s}} := \operatorname{Rep}(\Lambda^q,\alpha)^{\theta\operatorname{-s}}/\!\!/_{\theta}\mathbb{G} \end{displaymath} for the coarse moduli spaces determined by a stability condition $\theta$. \end{notation} \subsection{Moduli Stacks and Resolutions}\label{sec:moduli stacks} The moduli stacks \begin{center} $\operatorname{Rep}(\Lambda^q,\alpha)^{\theta\operatorname{-ss}}/\mathbb{G}$ and $\operatorname{Rep}_{\operatorname{gr}}(\sA\intN, \algtr)^{\thetagtr\operatorname{-ss}}/\Ggtr$ \end{center} are never Deligne-Mumford stacks: the diagonal copy of $\Gm$ in $\mathbb{G}$, respectively $\Ggtr$, always acts trivially on $\operatorname{Rep}(\Lambda^q,\alpha)$, respectively $\operatorname{Rep}_{\operatorname{gr}}(\sA\intN, \algtr)^{\thetagtr\operatorname{-ss}}$. Thus, the moduli stack of stable representations $\operatorname{Rep}(\Lambda^q,\alpha)^{\theta\operatorname{-s}}/\mathbb{G}$ is always a $\Gm$-gerbe over the moduli space $\mathcal{M}_{\theta}^q(\alpha)^{\operatorname{s}}$ of stable representations.
However, one can make a choice of subgroup $\bS\subset \mathbb{G}$ that ensures that the quotient stack $\operatorname{Rep}(\Lambda^q,\alpha)^{\theta\operatorname{-s}}/\bS$ is a Deligne-Mumford stack and that $\operatorname{Rep}(\Lambda^q,\alpha)^{\theta\operatorname{-s}}/\bS \rightarrow\mathcal{M}_{\theta}^q(\alpha)^{\operatorname{s}}$ is a finite gerbe (indeed a principal $BH$-bundle for a finite abelian group $H$). Indeed, for example, we can choose any character $\chi: \Ggtr\rightarrow \Gm$ for which the composite with the diagonal embedding $\chi\circ\Delta: \Gm\rightarrow \Gm$ is nontrivial, hence surjective. Then $\Sgtr:=\ker(\chi)$ has the property that $\Ggtr = \Sgtr\cdot \Delta(\Gm)$ and similarly letting $\bS = \mathbb{G}\cap \Sgtr$ we have $\mathbb{G} = \bS\cdot\Delta(\Gm)$. Moreover, since $\Delta(\Gm)$ is the stabilizer of every point of $\operatorname{Rep}(\Lambda^q,\alpha)^{\theta\operatorname{-s}}$ and $H := \Delta(\Gm)\cap \bS$ is finite, we get: \begin{lemma} The quotient $\operatorname{Rep}(\Lambda^q,\alpha)^{\theta\operatorname{-s}}/\bS$ is a Deligne-Mumford stack and the natural morphism \begin{displaymath} \operatorname{Rep}(\Lambda^q,\alpha)^{\theta\operatorname{-s}}/\bS\rightarrow \mathcal{M}_{\theta}^q(\alpha)^{\operatorname{s}} \end{displaymath} is a torsor for the commutative group stack $BH$ (in particular, is a finite gerbe over $\mathcal{M}_{\theta}^q(\alpha)^{\operatorname{s}}$).
\end{lemma} By construction, we have an open immersion: \begin{displaymath} \operatorname{Rep}(\Lambda^q,\alpha)^{\theta\operatorname{-s}}/\bS\hookrightarrow \operatorname{Rep}_{\operatorname{gr}}(\sA\intN, \algtr)^{\thetagtr\operatorname{-ss}}/\Sgtr, \end{displaymath} and the coarse space of the target $\operatorname{Rep}_{\operatorname{gr}}(\sA\intN, \algtr)^{\thetagtr\operatorname{-ss}}/\Sgtr$ is the projective moduli scheme $\operatorname{Rep}_{\operatorname{gr}}(\sA\intN, \algtr)/\!\!/_{\thetagtr}\Ggtr$: it is projective because it is a closed subscheme of $\operatorname{Rep}(k\Qgtr,\algtr)/\!\!/_{\thetagtr}\Ggtr$, which (as in \cite{McNKirwan}) is itself projective because $k\Qgtr$ has no oriented cycles. As in \cite{McNKirwan}, since our goal is to compactify $\operatorname{Rep}(\Lambda^q,\alpha)^{\theta\operatorname{-s}}/\bS$ appropriately, we will replace the quotient stack $\operatorname{Rep}_{\operatorname{gr}}(\sA\intN, \algtr)^{\thetagtr\operatorname{-ss}}/\Sgtr$ by its closed substack defined as the closure of $\operatorname{Rep}(\Lambda^q,\alpha)^{\theta\operatorname{-s}}/\bS$. \begin{notation} We denote the closure of $\operatorname{Rep}(\Lambda^q,\alpha)^{\theta\operatorname{-s}}/\bS$ in $\operatorname{Rep}_{\operatorname{gr}}(\sA\intN, \algtr)^{\thetagtr\operatorname{-ss}}/\Sgtr$ by $\overline{\mathcal{M}}_{\operatorname{st}}$. \end{notation} \begin{lemma} The stack $\operatorname{Rep}(\Lambda^q,\alpha)^{\theta\operatorname{-s}}/\bS$ is smooth. The stack $\overline{\mathcal{M}}_{\operatorname{st}}$ is integral and its coarse moduli space is a projective scheme. The natural morphism $\operatorname{Rep}(\Lambda^q,\alpha)^{\theta\operatorname{-s}}/\bS\hookrightarrow \overline{\mathcal{M}}_{\operatorname{st}}$ is an open immersion. \end{lemma} \begin{proof} The smoothness of $\operatorname{Rep}(\Lambda^q,\alpha)^{\theta\operatorname{-s}}/\bS$ is Theorem 1.10 of \cite{CBS}. The remaining assertions are immediate. 
\end{proof} \noindent We may apply the results of \cite{Kirwan} or \cite{Edidin} to $\operatorname{Rep}(k\Qgtr, \algtr)^{\thetagtr\operatorname{-ss}}/\Sgtr$ and its closed substack $\overline{\mathcal{M}}_{\operatorname{st}}$ to obtain a projective Deligne-Mumford stack (i.e., a Deligne-Mumford stack whose coarse space is a projective scheme) $\overline{\mathcal{M}}_{\operatorname{st}}'$ equipped with a projective morphism $\overline{\mathcal{M}}_{\operatorname{st}}'\rightarrow \overline{\mathcal{M}}_{\operatorname{st}}$ that is an isomorphism over $\operatorname{Rep}(\Lambda^q,\alpha)^{\theta\operatorname{-s}}/\bS$. The stack $\overline{\mathcal{M}}_{\operatorname{st}}'$ is itself, by construction, a global quotient of a quasiprojective variety by $\bS$, and thus we may apply equivariant resolution to resolve the singularities of $\overline{\mathcal{M}}_{\operatorname{st}}'$, to obtain: \begin{prop}\label{prop:compactification} The smooth Deligne-Mumford stack $\operatorname{Rep}(\Lambda^q,\alpha)^{\theta\operatorname{-s}}/\bS$ admits an open immersion \begin{displaymath} \operatorname{Rep}(\Lambda^q,\alpha)^{\theta\operatorname{-s}}/\bS\hookrightarrow\overline{\operatorname{Rep}(\Lambda^q,\alpha)^{\theta\operatorname{-s}}/\bS} \end{displaymath} in a smooth projective Deligne-Mumford stack equipped with a projective morphism $\overline{\operatorname{Rep}(\Lambda^q,\alpha)^{\theta\operatorname{-s}}/\bS}\rightarrow\overline{\mathcal{M}}_{\operatorname{st}}$ that is compatible with the open immersion $\operatorname{Rep}(\Lambda^q,\alpha)^{\theta\operatorname{-s}}/\bS\hookrightarrow \overline{\mathcal{M}}_{\operatorname{st}}$. \end{prop} \section{The Diagonal of the Algebra $\sA$} \subsection{Bimodule of Derivations} Recall that we have fixed an ordering $\Omega = \{a_1, \dots, a_g\}$ on the arrows in $Q$. 
For $j=1, \dots, g$ we write \begin{displaymath} L_{a_j} = G_{a_1}\dots G_{a_{j-1}}, \hspace{1em} R_{a_j} = G_{a_{j+1}}\dots G_{a_g}, \hspace{1em}\text{so}\hspace{1em} D = L_{a_j}G_{a_j}R_{a_j}, \end{displaymath} \begin{displaymath} L_{a_j^*} = G_{a_g^*}\dots G_{a_{j+1}^*}, \hspace{1em} R_{a_j^*} = G_{a_{j-1}^*}\dots G_{a_1^*}, \hspace{1em}\text{so}\hspace{1em} D^* = qL_{a_j^*}G_{a_j^*}R_{a_j^*}. \end{displaymath} Let $B$ denote the sub-$(S[t],S[t])$-bimodule of $k\Qdbl[t]$ spanned by the arrows, so that $k\Qdbl[t]$ is identified with the tensor algebra $T_{S[t]}(B)$. As in \cite[p.~190]{CBS}, the bimodule that is the target of the universal $S[t]$-linear bimodule derivation of $k\Qdbl[t]$ satisfies \begin{displaymath} \Omega_{S[t]}(k\Qdbl[t]) \cong k\Qdbl[t]\otimes_{S[t]} B\otimes_{S[t]} k\Qdbl[t], \end{displaymath} under which the universal derivation $\delta_{k\Qdbl[t]/S[t]}: k\Qdbl[t] \rightarrow\Omega_{S[t]}(k\Qdbl[t])$ is identified with $a \mapsto 1\otimes a\otimes 1$. As in \cite[p.~190]{CBS}, for the universal localization $\sL_t$ we also get $\Omega_{S[t]}(\sL_t) \cong \sL_t \otimes_{k\Qdbl[t]} \Omega_{S[t]}(k\Qdbl[t]) \otimes_{k\Qdbl[t]} \sL_t$ with the obvious identification of the universal derivation $\delta_{\sL_t/S[t]}$. We write: \begin{equation}\label{P_1-eq} P_1 = \sA\otimes_{S[t]} B\otimes_{S[t]} \sA \cong \sA \underset{k\Qdbl[t]}{\otimes} \Omega_{S[t]}(k\Qdbl[t]) \underset{k\Qdbl[t]}{\otimes} \sA. \end{equation} The module $P_1$ is evidently projective as a bimodule. Via the above description, we obtain a collection of bimodule basis elements \begin{displaymath} \eta_a, a\in H, \;\;\; \text{via} \;\;\; \eta_a = 1\otimes a \otimes 1 \in \sA\otimes_{S[t]} B\otimes_{S[t]} \sA = P_1. \end{displaymath} \subsection{An Exact Sequence} We write \begin{displaymath} P_0 = \sA \otimes_{S[t]} \sA. \end{displaymath} Write $\eta_i = e_i\otimes 1 = 1\otimes e_i$, $i\in I$, for the obvious bimodule generators of $P_0$.
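Since the universal derivation $\delta = \delta_{k\Qdbl[t]/S[t]}$ vanishes on $S[t]$, and in particular on $t^2$, the Leibniz rule gives, for each $a\in H$,
\begin{displaymath}
\delta(G_a) = \delta(t^2 + aa^*) = \delta(a)\,a^* + a\,\delta(a^*) = \eta_a a^* + a\,\eta_{a^*},
\end{displaymath}
where we identify $\Omega_{S[t]}(k\Qdbl[t])$-valued expressions with their images in $P_1$ via \eqref{P_1-eq}; we will use this expansion repeatedly in what follows.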
Define graded bimodule maps \begin{equation}\label{the-complex} P_0(-2g) \xrightarrow{\alpha} P_1(-1) \xrightarrow{\beta} P_0 \end{equation} by $\beta(\eta_a) = a \eta_{s(a)} - \eta_{t(a)} a$ for arrows $a$ of $\Qdbl$, and \begin{equation}\label{alpha formula} \alpha(\eta_i) = \sum_{a\in\Omega, s(a) = i} L_a \Delta_a R_a - \sum_{a\in\Omega, t(a)=i} qL_{a^*}\Delta_{a^*}R_{a^*}, \end{equation} where $\Delta_a = \delta(G_a)$ and $\delta$ denotes the universal derivation. It is then immediate that $\alpha(\eta_i) = e_i\cdot\delta(\rho)$; in particular, letting $\theta: P_0(-2g)\rightarrow (\rho)/(\rho^2)$ denote the map defined by $\theta(p\otimes q) = p\rho q$ and writing $\phi$ for the isomorphism defined by \eqref{P_1-eq}, we have: \begin{equation}\label{Omega equation} \phi\circ\alpha = \delta\circ\theta. \end{equation} Imitating the proof of Lemma 3.1 of \cite{CBS} gives: \begin{lemma} The sequence \begin{equation}\label{complex P} P_0(-2g)\xrightarrow{\alpha} P_1(-1) \xrightarrow{\beta} P_0 \xrightarrow{\gamma} \sA\rightarrow 0, \end{equation} where $\gamma (p\otimes q) = pq$, is an exact sequence of $\mathbb{Z}$-graded bimodules. \end{lemma} \begin{proof} As in \cite[Theorem~10.3]{Schofield}, one gets an exact sequence \begin{displaymath} (\rho)/(\rho^2) \xrightarrow{\delta} \Omega_{S[t]}\big(k\Qdbl[t]\big) \rightarrow \Omega_{S[t]} \sA \rightarrow 0. \end{displaymath} As in \cite{CBS}, splicing this sequence and the defining sequence for $\Omega_{S[t]}\big(\sA\big)$ and applying \eqref{Omega equation} gives a commutative diagram \begin{displaymath} \xymatrix{ P_0(-2g)\ar[d]^{\theta} \ar[r]^{\alpha} & P_1(-1) \ar[d]_{\cong}^{\phi} \ar[r]^{\beta} & P_0 \ar[d]_{\cong}^{\psi} \ar[r] & \sA \ar[d]^{=} \ar[r] & 0\\ (\rho)/(\rho^2) \ar[r]^{\hspace{-4.5em}\delta} & \sA \underset{k\Qdbl[t]}{\otimes} \Omega_{S[t]}(k\Qdbl[t]) \underset{k\Qdbl[t]}{\otimes} \sA \ar[r]^{\hspace{4.5em}\xi} & \sA \otimes_{S[t]} \sA \ar[r] & \sA \ar[r] & 0.
} \end{displaymath} The vertical arrows $\phi, \psi$ are isomorphisms and $\theta$ is surjective, yielding the assertion. \end{proof} \subsection{Dual of the Map $P_0(-2g)\xrightarrow{\alpha} P_1(-1)$} Recall that the {\em enveloping algebra} of $\sA$ over $k[t]$ is \begin{displaymath} \sA^e := \sA\otimes_{k[t]}\sA^{\operatorname{op}}. \end{displaymath} We consider $\sA^e$ as a left $\sA^e$-module where $a\otimes a'\in\sA^e$ acts by \begin{displaymath} a\otimes a'\cdot (x\otimes x') = ax\otimes x'a'. \end{displaymath} We remark that $\sA^e$ naturally also has a {\em right} $\sA^e$-module structure commuting with the left $\sA^e$-action, where $a\otimes a'\in \sA^e$ acts on the right by \begin{displaymath} (x\otimes x')\cdot a\otimes a' = xa\otimes a'x'. \end{displaymath} Given a finitely generated left $\sA^e$-module $P$, we form $P^\vee = \operatorname {Hom}_{\sA^e}(P, \sA^e)$, the dual over the enveloping algebra; by the above discussion, this module has a right $\sA^e$-module structure, which we can identify with a left $\sA^e$-module structure via the isomorphism \begin{displaymath} (\sA^e)^{\operatorname{op}} \rightarrow \sA^e, \hspace{2em} a\otimes a' \mapsto a' \otimes a. \end{displaymath} We now want to calculate the dual $\alpha^\vee$ of the map $\alpha$ of \eqref{the-complex} using the formula \eqref{alpha formula}. Note that \begin{displaymath} \Delta_a = \delta(G_a) = a\delta(a^*) + \delta(a)a^* = a \eta_{a^*} + \eta_a a^*. \end{displaymath} We thus find from Formula \eqref{alpha formula} that the $\eta_a$-component of $\alpha$ is given by \begin{displaymath} \alpha(\eta_i)_{\eta_a} = \begin{cases} L_a \eta_a a^* R_a - qL_{a^*}a^*\eta_a R_{a^*} & \text{if $a\in\Omega$, $i=s(a)$},\\ L_{a^*} a^* \eta_a R_{a^*} - qL_a \eta_a a^* R_a & \text{if $a\in\overline{\Omega}$, $i=t(a)$} \end{cases} \end{displaymath} and zero otherwise.
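For orientation, consider the simplest case of the quiver with one vertex $\ast$ and a single loop $a$, so that $g=1$ and $L_a = R_a = L_{a^*} = R_{a^*} = 1$. Then \eqref{alpha formula} reads
\begin{displaymath}
\alpha(\eta_\ast) = \Delta_a - q\Delta_{a^*} = \eta_a a^* + a\,\eta_{a^*} - q\big(\eta_{a^*}a + a^*\eta_a\big),
\end{displaymath}
which is precisely $\delta(\rho) = \delta(G_a - qG_{a^*})$, in accordance with the identity $\alpha(\eta_i) = e_i\cdot\delta(\rho)$.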
Let $\{\eta_a^\vee\}$ denote the basis of $P_1^\vee$ dual to the basis $\{\eta_a\}$ of $P_1$; we note that \begin{equation} \eta^\vee_a \in e_{t(a)}P_1^\vee e_{s(a)}. \end{equation} It follows from the above formulas: \begin{equation}\label{alpha check formulas} \alpha^\vee(\eta_a^\vee) = \begin{cases} a^*R_a \eta_{s(a)}^\vee L_a - qR_{a^*} \eta_{t(a)}^\vee L_{a^*}a^* & \text{if $a\in\Omega$,}\\ R_{a^*}\eta_{t(a)}^\vee L_{a^*}a^* - qa^*R_a \eta_{s(a)}^\vee L_a & \text{if $a\in\overline{\Omega}$.} \end{cases} \end{equation} \begin{lemma}\label{alpha vee formulas} For all $a\in \Omega$, we have \begin{align} \alpha^\vee\big(\eta^\vee_a a- a^*\eta_{a^*}^\vee\big) & = G_{a^*}\big(qR_{a^*}\eta^\vee_{t(a)} L_{a^*}\big) - \big(qR_{a^*}\eta^\vee_{t(a)}L_{a^*}\big)G_{a^*},\\ \alpha^\vee\big(a\eta_a^\vee - \eta_{a^*}^\vee a^*\big) & = G_a\big(R_a\eta_{s(a)}^\vee L_a\big) - \big(R_a\eta_{s(a)}^\vee L_a\big) G_a. \end{align} \end{lemma} \begin{lemma}\label{easy commuting case} If $a\in H$, $s(a)\neq i$, then $G_a D\eta^\vee_i = D\eta_i^\vee G_a$ in $P_0^\vee$. \end{lemma} \begin{proof} The element $D$ is a product of elements of diagonal Peirce type, hence itself is of diagonal Peirce type. Thus, using $e_{s(a)}\eta_i^\vee = 0 = \eta_i^\vee e_{s(a)}$, we get \begin{multline*} G_a D\eta^\vee_i = \big(G_ae_{s(a)} + (1-e_{s(a)})t^2\big)D\eta^\vee_i = (1-e_{s(a)})t^2D\eta^\vee_i \\ = D\eta^\vee_i(1-e_{s(a)})t^2 = D\eta^\vee_i\big(e_{s(a)}G_a + (1-e_{s(a)})t^2\big) = D\eta^\vee_i G_a. \end{multline*} This completes the proof. \end{proof} Suppose now that $\mathcal{M}$ is a graded right $\Lambda_t$-module; then $M = \mathcal{M}_{\geq 0}$ is a graded right $\sA$-submodule of $\mathcal{M}$. For example, we could take $\mathcal{M} = \Lambda_t$ itself, as in \eqref{universal localization}. We consider the map \begin{displaymath} M\otimes_{\sA} P_1^\vee(1) \xrightarrow{1_{M}\otimes\alpha^\vee} M\otimes_{\sA} P_0^\vee(2g).
\end{displaymath} \begin{remark}\label{defined operators} We note that, under the above hypothesis on $M$, for any product $Q$ of elements $G_a$, $a\in H$, of degree $\deg(Q)$, the elements $Qt^{-\deg(Q)}$ and $t^{\deg(Q)}Q^{-1}$ of $\Lambda_t$ give well defined operators of right multiplication on $M$ that satisfy all relations in $\Lambda_t$. \end{remark} \begin{prop}\label{big formulas} Suppose that $M = \mathcal{M}_{\geq 0}$ for a graded right $\Lambda_t$-module $\mathcal{M}$. Then for all $m\in M$ and all $i\in I$ and $1\leq j \leq g$, \begin{enumerate} \item the elements $m\big(G_{a_j} D \eta^\vee_i - D\eta_i^\vee G_{a_j}\big)$, $m\big(G_{a_j^*} D \eta^\vee_i - D\eta_i^\vee G_{a_j^*}\big)$, and \item the elements $m\big(a_j^* Dt^{-2}\eta_{s(a_j)}^\vee - Dt^{-2}\eta_{t(a_j)}^\vee a_j^*\big)$, $m\big(a_j Dt^{-2}\eta_{t(a_j)}^\vee - Dt^{-2}\eta_{s(a_j)}^\vee a_j\big)$ \end{enumerate} lie in $\operatorname{Im}(1_{M}\otimes\alpha^\vee) \subseteq M\otimes_{\sA} P_0^\vee(2g)$. \end{prop} \begin{proof} (1) We first prove that $m\big(G_{a_j} D \eta^\vee_i - D\eta_i^\vee G_{a_j}\big) \in \operatorname{Im}(1_M \otimes\alpha^\vee)$ by (strong) induction on $j$. \vspace{.5em} \noindent {\em Base Case.} $j=1$. By Lemma \ref{easy commuting case}, the assertion is true for $i\neq s(a_1)$. From Lemma \ref{alpha vee formulas}, we have \begin{displaymath} mG_{a_1}\alpha^\vee\big(a_1\eta_{a_1}^\vee - \eta_{a_1^*}^\vee a_1^*\big) = mG_{a_1}G_{a_1}\big(R_{a_1}\eta_{s(a_1)}^\vee L_{a_1}\big) - mG_{a_1}\big(R_{a_1}\eta_{s(a_1)}^\vee L_{a_1}\big) G_{a_1} = mG_{a_1}D\eta_{s(a_1)}^\vee - mD\eta_{s(a_1)}^\vee G_{a_1}. \end{displaymath} This completes the base case. \vspace{.5em} \noindent {\em Induction Step.} Assume $m\big(G_{a_k}D\eta^\vee_i - D\eta^\vee_i G_{a_k}\big)\in\operatorname{Im}(1_{M}\otimes\alpha^\vee)$ for all $i\in I$ and $k<j$.
Again, by Lemma \ref{easy commuting case}, we have $mG_{a_j}D\eta^\vee_i - mD\eta^\vee_i G_{a_j}\in\operatorname{Im}(1_M\otimes\alpha^\vee)$ for $i\neq s(a_j)$. Applying Lemma \ref{alpha vee formulas} gives \begin{multline*} mG_{a_j}\alpha^\vee\big(a_j\eta_{a_j}^\vee - \eta_{a_j^*}^\vee a_j^*\big) = mG_{a_j}G_{a_j}\big(R_{a_j}\eta_{s(a_j)}^\vee L_{a_j}\big) - mG_{a_j}\big(R_{a_j}\eta_{s(a_j)}^\vee L_{a_j}\big) G_{a_j} \\ = mG_{a_j}\big(t^{\deg(L_{a_j})}L_{a_j}^{-1} Dt^{-\deg(L_{a_j})}\eta_{s(a_j)}^\vee L_{a_j}\big) - m\big(t^{\deg(L_{a_j})}L_{a_j}^{-1} Dt^{-\deg(L_{a_j})}\eta_{s(a_j)}^\vee L_{a_j}\big) G_{a_j}\\ = m\big(G_{a_j} D\eta_{s(a_j)}^\vee - D\eta_{s(a_j)}^\vee G_{a_j}\big), \end{multline*} where the last equality applies the inductive hypothesis. This completes the induction step, thus proving the assertion for the elements $G_{a_j} D \eta^\vee_i - D\eta_i^\vee G_{a_j}$. The proof for $G_{a_j^*} D \eta^\vee_i - D\eta_i^\vee G_{a_j^*}$ follows the analogous descending induction on $j$. (2) Taking note of Remark \ref{defined operators}, from \eqref{alpha check formulas} we have $\alpha^\vee(mG_{a_j}t^{-2}\eta_{a_j}^\vee) = mG_{a_j}t^{-2}a_j^*R_{a_j} \eta_{s(a_j)}^\vee L_{a_j} - mG_{a_j}t^{-2}qR_{a_j^*} \eta_{t(a_j)}^\vee L_{a_j^*}a_j^*$.
Applying part (1) of the proposition to the right-hand side of this formula gives \begin{align*} \alpha^\vee(mG_{a_j}t^{-2}\eta_{a_j}^\vee) & = mG_{a_j}t^{-2}a_j^*R_{a_j}L_{a_j} \eta_{s(a_j)}^\vee - mqG_{a_j}t^{-2}R_{a_j^*} L_{a_j^*}\eta_{t(a_j)}^\vee a_j^* + \operatorname{Im}(1_{M}\otimes\alpha^\vee)\\ & = mG_{a_j}a_j^* G_{a_j}^{-1} Dt^{-2}\eta_{s(a_j)}^\vee - mG_{a_j}G_{a_j^*}^{-1} Dt^{-2}\eta_{t(a_j)}^\vee a_j^* + \operatorname{Im}(1_{M}\otimes\alpha^\vee) \\ & = mG_{a_j}G_{a_j^*}^{-1}\big(a_j^* Dt^{-2}\eta_{s(a_j)}^\vee - Dt^{-2}\eta_{t(a_j)}^\vee a_j^* \big) + \operatorname{Im}(1_{M}\otimes\alpha^\vee) \end{align*} where the last equality uses \eqref{G_a composed a}; in particular this gives the first assertion of Part (2) of the proposition. The second assertion follows similarly. \end{proof} \section{Analysis of the Ext-Complex} \subsection{The Complex \eqref{complex P} and the $\operatorname {Hom}$-Functor} Let $M, N$ be graded left $\sA$-modules such that $M$ is finitely generated and projective as a $k[t]$-module. To the exact sequence \begin{displaymath} P_0(-2g)\otimes_{\sA} M \xrightarrow{\alpha\otimes 1} P_1(-1)\otimes_{\sA} M \xrightarrow{\beta\otimes 1} P_0\otimes_{\sA} M\xrightarrow{\gamma\otimes 1} M\rightarrow 0 \end{displaymath} we apply the functor $\operatorname {Hom}_{\sA}(-, N)$ to obtain an exact sequence \begin{equation}\label{Hom sequence beginning} 0\rightarrow \operatorname {Hom}_{\sA}(M, N)\rightarrow \operatorname {Hom}_{\sA}(P_0\otimes_{\sA} M, N) \xrightarrow{(\beta\otimes 1)^*} \operatorname {Hom}_{\sA}(P_1(-1)\otimes_{\sA} M, N). \end{equation} We continue the sequence \eqref{Hom sequence beginning} using \begin{equation}\label{used alpha dual} \operatorname {Hom}_{\sA}(P_1(-1)\otimes_{\sA} M, N)\xrightarrow{(\alpha\otimes 1)^*} \operatorname {Hom}_{\sA}(P_0(-2g)\otimes_{\sA} M, N). \end{equation} Thus, we would like to compute the cokernel of the map \eqref{used alpha dual}.
\begin{prop}\label{Hom equals tensor} Let $M, N$ be graded left $\sA$-modules such that $M$ is finitely generated and projective as a $k[t]$-module, and write $M^* = \operatorname {Hom}_{k[t]}(M,k[t])$. Consider the contravariant functors of finitely generated projective $\sA^e$-modules $P$, \begin{displaymath} P\mapsto \big(N\otimes_{k[t]} M^*\big) \otimes_{\sA^e} P^\vee \hspace{2em} \text{and} \hspace{2em} P\mapsto \operatorname {Hom}_{\sA}(P\otimes_{\sA} M, N). \end{displaymath} The natural transformation $\big(N\otimes_{k[t]} M^*\big) \otimes_{\sA^e} P^\vee \xrightarrow{\Psi} \operatorname {Hom}_{\sA}(P\otimes_{\sA} M, N)$ of these functors of projective $\sA^e$-modules $P$ is a natural isomorphism. \end{prop} \begin{proof} By projectivity, it suffices to check for $P=\sA^e$, where it follows by adjunction. \end{proof} \begin{corollary} Under the hypotheses of Proposition \ref{Hom equals tensor}, the cokernel of the map \eqref{used alpha dual} is \begin{displaymath} \operatorname{coker}\big(1_{M^*}\otimes \alpha^\vee \otimes 1_N: M^*\otimes_{\sA} P_1^\vee(1)\otimes_{\sA} N\rightarrow M^*\otimes_{\sA} P_0^\vee(2g)\otimes_{\sA} N\big). \end{displaymath} \end{corollary} We note the following identities, which are immediate from adjunction: \begin{lemma}\label{adjunction identities} Suppose that $M = \overline{M}[t]$ is the graded left $\sA$-module associated to a finite-dimensional left $\Lambda^q$-module $\overline{M}$. Then: \begin{displaymath} \operatorname {Hom}_{\sA}(P_1\otimes_{\sA} M, N)\cong \operatorname {Hom}_{\sA}(\sA\otimes_{S_t} B[t]\otimes_{S_t} \sA\otimes_{\sA} M, N) \cong \operatorname {Hom}_{S_t}(B\otimes_S M, N)\cong \operatorname {Hom}_S(B\otimes_S \overline{M}, N), \end{displaymath} \begin{displaymath} \operatorname {Hom}_{\sA}(P_0\otimes_{\sA} M, N)\cong \operatorname {Hom}_{\sA}(\sA\otimes_{S_t} \sA\otimes_{\sA} M, N) \cong \operatorname {Hom}_{S_t}(M, N)\cong \operatorname {Hom}_S(\overline{M}, N). 
\end{displaymath} \end{lemma} \subsection{The Ext-Complex}\label{sec:Ext complex} Fix $N\geq 2g$. Let $\overline{V}$ be a finite-dimensional representation of $\Lambda^q$ of dimension vector $\alpha$, and let $V=\overline{V}[t]$ be the corresponding graded $\sA$-module as in Section \ref{induction and truncation}, and specifically as in Lemma \ref{dual of fin diml lambda module}. Suppose $W$ is a $\mathbb{Z}_{\geq 0}$-graded $\sA\intN = \sA/\sA_{\geq N+1}$-module, identified with a representation of $\Qgtr$ that has dimension vector $\algtr$. Thus $\tau_{[0,N]}V$ is also identified with a representation of $\Qgtr$ that has dimension vector $\algtr$. Let $P_\bullet$ denote the complex of \eqref{the-complex}. We consider the complex $\operatorname {Hom}_{\sA}(P_\bullet\otimes_{\sA} V, W)$. Since the sources and target of the Homs in this complex are graded $\sA$-modules, each Hom-space can be regarded as a graded vector space; we write \begin{displaymath} \mathsf{Ext} = \left[ \operatorname {Hom}_{\sA\operatorname{-Gr}}(P_0\otimes_{\sA} V, W) \xrightarrow{\beta^\vee} \operatorname {Hom}_{\sA\operatorname{-Gr}}(P_1\otimes_{\sA} V, W(1)) \xrightarrow{\alpha^\vee} \operatorname {Hom}_{\sA\operatorname{-Gr}}(P_0\otimes_{\sA} V, W(2g))\right] \end{displaymath} for its degree $0$ graded piece. As in \cite{McNKirwan}, using Lemma \ref{adjunction identities} we may identify $\mathsf{Ext}$ with: \begin{equation}\label{eq:perfect-complex} L(V_{0},W_{0}) \xrightarrow{\partial_0} E(V_{0}, W_{1})\xrightarrow{\partial_1} L(V_{0}, W_{2g}), \end{equation} where $\partial_0 = \beta^\vee_0$ and $\partial_1 = \alpha^\vee_0$. \begin{prop}\label{prop:Ext properties} Suppose that $\tau_{[0,N]}V$ and $W$ are graded $\sA\intN$-modules. Then: \begin{enumerate} \item We have an isomorphism $\operatorname{coker}(\partial_1) \cong \operatorname {Hom}_k\big(\operatorname {Hom}_{\sA\intN\operatorname{-Gr}}(W,\tau_{[0,N]}V), k\big)$. 
\end{enumerate} If, in addition, $\tau_{[0,N]}V$ is $\theta$-stable and $W$ is $\theta$-semistable, both of dimension vector $\algtr$, then: \begin{enumerate} \item[(2)] We have $\operatorname{ker}(\partial_0) = 0$ unless $\tau_{[0,N]}V\cong W$, in which case $\operatorname{ker}(\partial_0) \cong k$. \item[(3)] We have that $\operatorname{coker}(\partial_1)$ is zero unless $\tau_{[0,N]}V\cong W$, in which case $\operatorname{coker}(\partial_1)\cong k$. \end{enumerate} \end{prop} \begin{proof} Assertion (2) follows from the exactness of \eqref{Hom sequence beginning} and stability. Similarly, assertion (3) is immediate from assertion (1) by stability of $\tau_{[0,N]}V$ and semistability of $W$. Thus it remains to prove assertion (1). Similarly to Lemma \ref{adjunction identities}, we use Proposition \ref{Hom equals tensor} to identify \begin{align}\label{first adjunction identity} \operatorname {Hom}_{\sA\operatorname{-Gr}}(P_0\otimes_{\sA} V, W(2g)) & \cong V^*\otimes_{S_t} W\cong V_0^*\otimes_S W_{2g}\cong \operatorname {Hom}_S(V_0, W_{2g}),\\ \label{second adjunction identity} \operatorname {Hom}_{\sA\operatorname{-Gr}}(P_1\otimes_{\sA} V, W(1)) & \cong (B\otimes_S V_0)^*\otimes_S W_1 \cong \operatorname {Hom}_S(B\otimes_S V_0, W_1). \end{align} Specifically, we use \eqref{first adjunction identity} to identify $\sum_r \lambda_r\otimes w_r\in V_0^* \otimes_S W_{2g}$ with an element $\phi\in L(V_0,W_{2g})$, i.e., an $I$-graded homomorphism $(\phi_i): V_0\rightarrow W_{2g}$; and we use \eqref{second adjunction identity} to identify $\sum_r\lambda_r\otimes w_r \in (B\otimes_S V_0)^*\otimes_S W_1$ with an element $\psi \in E(V_0, W_1)$. 
Under these identifications, the elements \begin{displaymath} \sum_r \lambda_r\big(a_j^* Dt^{-2}\eta_{s(a_j)}^\vee - Dt^{-2}\eta_{t(a_j)}^\vee a_j^*\big)w_r, \hspace{2em} \sum_r \lambda_r\big(a_j Dt^{-2}\eta_{t(a_j)}^\vee - Dt^{-2}\eta_{s(a_j)}^\vee a_j\big)w_r \end{displaymath} of Proposition \ref{big formulas} are identified with \begin{displaymath} \psi_{a_j} a_j^* t^{-2}D - a_j^*\psi_{a_j}t^{-2}D \hspace{2em} \text{and} \hspace{2em} \psi_{a_j^*} a_j t^{-2}D - a_j\psi_{a_j^*}t^{-2}D \end{displaymath} for $\psi\in E(V_0,W_1) \cong \operatorname {Hom}_{\sA\operatorname{-Gr}}(P_1\otimes_{\sA} V,W(1))$. Via the trace pairings, the $k$-linear dual of $\partial_1$ is a map $L(W_{2g}, V_0) \xrightarrow{\partial_1^*} E(W_1,V_0)$; an element $\phi^*\in L(W_{2g}, V_0)$ satisfies $\partial_1^*(\phi^*) = 0$ only if \begin{displaymath} \operatorname{tr}\left[\phi^* \psi_{a_j} a_j^* t^{-2}D - \phi^* a_j^*\psi_{a_j}t^{-2}D\right] = 0 \hspace{1em} \text{and} \hspace{1em} \operatorname{tr}\left[\phi^* \psi_{a_j^*} a_j t^{-2}D - \phi^* a_j\psi_{a_j^*}t^{-2}D\right] = 0 \end{displaymath} for all $\psi\in E(V_0,W_1)$. Since each $G_{a_j}t^{-2}$ acts as an isomorphism on $V^*$, the elements $\lambda G_{a_j}t^{-2}\eta_{a_j}w$ and $\lambda G_{a_j^*}t^{-2}\eta_{a_j^*}w$, for $\lambda\in V_0^*, w\in W_1$, collectively generate $\operatorname {Hom}_{\sA\operatorname{-Gr}}(P_1\otimes_{\sA} V, W(1))$; it follows that an element $\phi^*\in L(W_{2g}, V_0)$ satisfies $\partial_1^*(\phi^*) = 0$ if and only if the above conditions are satisfied for all $\psi\in E(V_0,W_1)$. Cyclically permuting, these conditions become \begin{equation}\label{phi star conditions} a_j^*t^{-2}D\phi^* - t^{-2}D\phi^*a_j^* = 0 \hspace{1em} \text{and} \hspace{1em} a_jt^{-2}D\phi^* - t^{-2}D\phi^*a_j = 0. \end{equation} Given $\phi^*\in L(W_{2g}, V_0)$ satisfying these conditions, define $\Phi^*: W\rightarrow \tau_{[0,N]} V$ by taking $\Phi^*|_{W_{2g-m}} = t^{-m}D\phi^*t^{m}$. 
It is immediate from the conditions \eqref{phi star conditions} that on $W_{2g-m}$, $m\geq 2$, we have that $\Phi^*$ commutes with all $a_j$ and $a_j^*$, whereas for $m=1$ we may write $\Phi^*|_{W_{2g-1}} = t t^{-2}D\phi^*t$ and again $\Phi^*$ commutes with $a_j, a_j^*$. Thus $\Phi^*$ defines an $\sA\intN$-linear homomorphism $W\rightarrow \tau_{[0,N]} V$, yielding a linear map $\operatorname{ker}(\partial_1^*)\hookrightarrow \operatorname {Hom}_{\sA\intN\operatorname{-Gr}}(W, \tau_{[0,N]} V)$. Conversely, given a graded $\sA\intN$-module homomorphism $\Phi^*: W\rightarrow \tau_{[0,N]} V$, defining $\phi^*: W_{2g}\rightarrow V_0$ by $\phi^* = D^{-1}\Phi^*|_{W_{2g}}$, we see that $\phi^*\in\operatorname{ker}(\partial_1^*)$. This completes the proof. \end{proof} \section{Cohomology of Varieties and Stacks}\label{sec:coh} \begin{center} {\bf In the remainder of the paper, the base field $k$ is assumed to be $\mathbb{C}$. } \end{center} Here as throughout the paper, we use $H^*(X)$ to denote cohomology with $\mathbb{Q}$-coefficients, and $H^{\operatorname{BM}}_*(X)$ to denote Borel-Moore homology with $\mathbb{Q}$-coefficients; if $X$ is a smooth Deligne-Mumford stack, there is a canonical isomorphism $H^*(X) \cong H^{\operatorname{BM}}_*(X)$. \subsection{Mixed Hodge Structure on the Cohomology of an Algebraic Stack} Suppose that $\mathfrak{X}$ is an algebraic stack of finite type over $\mathbb{C}$. It follows from Example 8.3.7 of \cite{hodge-III} that the cohomology $H^*(\mathfrak{X})$ comes equipped with a functorial mixed Hodge structure. \begin{prop} Suppose $\mathfrak{X}$ is a complex Deligne-Mumford stack with the action of the commutative group stack $BH$ for some finite group $H$, and that $\mathfrak{X}$ has a coarse moduli space $\mathfrak{X}\rightarrow\operatorname{sp}(\mathfrak{X})$ with an isomorphism $\mathfrak{X}\rightarrow\operatorname{sp}(\mathfrak{X}) = \mathfrak{X}/BH$. 
Then $H^*(\mathfrak{X},\mathbb{Q}) = H^*\big(\operatorname{sp}(\mathfrak{X}),\mathbb{Q}\big)$ as mixed Hodge structures.\footnote{We explicitly write the $\mathbb{Q}$-coefficients to emphasize that they are essential.} \end{prop} \begin{proof} Use the Leray spectral sequence and the fact that $H^*(BH, \mathbb{Q}) = \mathbb{Q}$ for a finite group $H$. \end{proof} \subsection{Pushforwards and the Projection Formula} Suppose $f: X\rightarrow Y$ is a proper morphism of relative dimension $d$ of smooth, connected Deligne-Mumford stacks. Then there is a pushforward, or Gysin, map $f_*: H^*(X)\rightarrow H^{*-d}(Y)$. \begin{prop}[\cite{dCM}]\label{dCM} If $X$ and $Y$ are of finite type (so their cohomologies support canonical mixed Hodge structures), the Gysin map $f_*$ is a morphism of mixed Hodge structures. \end{prop} The Gysin map satisfies the projection formula: for classes $c\in H^*(X), c'\in H^*(Y)$, we have \begin{equation}\label{projection formula} f_*(c\cup f^*c') = f_*(c)\cup c'. \end{equation} Suppose $X$ and $Y$ are smooth Deligne-Mumford stacks and $C\in H^*(X\times Y)$ is a cohomology class. By the K\"unneth theorem we have $H^*(X\times Y)\cong H^*(X)\otimes H^*(Y)$, and thus we may write $C = \sum x_i\otimes y_i$ with $x_i\in H^*(X)$, $y_i\in H^*(Y)$. The classes $x_i$, $y_i$ are the {\em K\"unneth components} of $C$ (with respect to $X$ or $Y$ respectively). Now suppose that $f: X\rightarrow Y$ is a representable morphism from a smooth Deligne-Mumford stack $X$ to a smooth, proper Deligne-Mumford stack $Y$. The graph morphism $X\xrightarrow{(1,f)} X\times Y$ is not usually a closed immersion. \begin{prop}[cf. Proposition 2.1 of \cite{McNKirwan}]\label{prop:image} The image of $f^*: H^*(Y)\rightarrow H^*(X)$ is contained in the span of the K\"unneth components of $(1,f)_*[X]$ with respect to the left-hand factor $X$. \end{prop} \begin{proof} Write $\xymatrix{X & X\times Y \ar[l]_{p_X} \ar[r]^{p_Y} & Y}$ for the projections. 
Write $p: Y\rightarrow \operatorname{Spec}(\mathbb{C})$ for the projection to a point; then $(p_X)_*$ exists since $Y$ is proper. We have $f^* = (1,f)^*p_Y^*$ and $(p_X)_* (1,f)_* = \operatorname{id}$. Using the projection formula, then, we get \begin{displaymath} f^* = (p_X)_*(1,f)_*f^* = (p_X)_*(1,f)_*(1,f)^*p_Y^* = (p_X)_*\big((1,f)_*[X] \cap p_Y^*(-)\big). \end{displaymath} This proves the claim. \end{proof} \subsection{Cohomology of Compactifications} A finite-type Deligne-Mumford stack $\resol$ is {\em quasi-projective} if its coarse space $\operatorname{sp}(\resol)$ is a quasi-projective scheme. For example, if a reductive group $\bS$ acts on a polarized quasiprojective variety $\mathbb{M}$, then any open substack of $\mathbb{M}^s/\bS$ is a quasi-projective Deligne-Mumford stack.\footnote{Here $\mathbb{M}^s$ means stable points in the GIT sense: in particular, stabilizers are finite.} The cohomology $H^k(M)$ is {\em pure} if its mixed Hodge structure is pure of weight $k$: that is, $W_k\big(H^k(M)\big) = H^k(M)$. We say {\em $H^*(M)$ is pure} if each $H^k(M)$ is pure. \begin{prop}\label{prop:semiproj-coh} Suppose $\mathfrak{Y} = Y/\mathbb{G}$ is a quotient stack (i.e., the quotient of an algebraic space by a linear algebraic group scheme) and that $\resol^\circ\subset\resol\subset\mathfrak{Y}$ are open, separated, quasi-projective, smooth Deligne-Mumford substacks of $\mathfrak{Y}$. Then the image of the restriction map $H^k(\resol)\rightarrow H^k(\resol^\circ)$ contains $W_k\big(H^k(\resol^\circ)\big)$; in particular, if $H^*(\resol^\circ)$ is pure, then the restriction map is surjective. \end{prop} \begin{proof} Consider first the case of smooth quasi-projective varieties $\resol^\circ\subset\resol$. 
Then, for any smooth projective compactification $\overline{\resol}$ of $\resol$, the image of $H^*(\overline{\resol})\rightarrow H^*(\resol^\circ)$ is independent of the choice of $\overline{\resol}$: for example, by the Weak Factorization theorem, any two such $\overline{\resol}, \overline{\resol}'$ are related by a sequence of blow-ups and blow-downs along smooth centers in the complement of $\resol^\circ$, and the claimed independence follows from the usual formula for the cohomology of a blow-up. Since the image of $H^k(\overline{\resol})$ in $H^k(\resol^\circ)$ is $W_k\big(H^k(\resol^\circ)\big)$ by Corollaire 3.2.17 of \cite{Deligne}, the claim follows in this case. We now consider the general case. By the assumptions, $\resol$ and $\resol^\circ$ are (separated) quasi-projective smooth Deligne-Mumford stacks that are global quotients. By Theorem 1 of \cite{KreschVistoli}, there exist a smooth quasi-projective scheme $\mathsf{W}$ and a finite flat LCI morphism $\mathsf{W}\rightarrow \resol$; the fiber product $\resol^{\circ}\times_{\resol} \mathsf{W} \rightarrow \resol^{\circ}$ is then also finite, flat, and LCI. Using the commutative square \begin{displaymath} \xymatrix{\resol^{\circ}\times_{\resol} \mathsf{W} \ar[r]^{\hspace{2em}\wt{j}} \ar[d]^{q^\circ} & \mathsf{W}\ar[d]^{q}\\ \resol^{\circ} \ar[r]^{j} & \resol } \end{displaymath} and base change, we find: \begin{enumerate} \item $H^k(\mathsf{W}) \xrightarrow{q_*} H^k(\resol)$ and $H^k(\resol^{\circ}\times_{\resol} \mathsf{W})\xrightarrow{q^\circ_*} H^k(\resol^\circ)$ are surjective (indeed, $q_*q^*$ and $q^\circ_*(q^\circ)^*$ are multiplication by the degree of $q$). \item Since the Gysin maps $q^\circ_*, q_*$ are morphisms of mixed Hodge structures by Proposition \ref{dCM}, \begin{displaymath} W_k\big(H^k(\resol^{\circ}\times_{\resol} \mathsf{W})\big)\xrightarrow{q^\circ_*} W_k\big(H^k(\resol^\circ)\big) \hspace{1em} \text{is surjective}. 
\end{displaymath} \item The image of $H^k(\mathsf{W})$ in $H^k(\resol^{\circ}\times_{\resol} \mathsf{W})$ contains $W_k\big(H^k(\resol^{\circ}\times_{\resol} \mathsf{W})\big)$, by the conclusion of the previous paragraph. \end{enumerate} The assertion is now immediate. \end{proof} \subsection{Markman's Formula for Chern Classes of Complexes} Suppose that $\mathfrak{M}$ is a smooth Deligne-Mumford stack and \begin{equation}\label{Markman complex} C: V_{-1}\xrightarrow{g} V_0\xrightarrow{f} V_1 \end{equation} is a complex of locally free sheaves on $\mathfrak{M}$ of ranks $r_{-1}, r_0, r_1$ respectively. \begin{prop}[Lemma 4 of \cite{Markman}]\label{prop:Markman} Suppose that $\Gamma\subset \mathfrak{M}$ is a smooth closed substack of pure codimension $m$, and that the complex $C$ of \eqref{Markman complex} satisfies: \begin{enumerate} \item $\mathcal{H}^{-1}(C) = 0$, \item $\mathcal{H}^1(C)$ and $\mathcal{H}^1(C^\vee)$ are line bundles on $\Gamma$, \item $m\geq 2$ and $\operatorname{rk}(C) = m-2$. \end{enumerate} Then if $m$ is even, $c_m(C) = [\Gamma]$ and $c_m\big(\mathcal{H}^0(C)\big) = \left(1-(m-1)!\right)[\Gamma]$. \end{prop} \begin{remark} Markman's Lemma 4 is ostensibly stated for smooth varieties $M$, but Section 3 of {\em op. cit.} generalizes the assertion to smooth Deligne-Mumford stacks. \end{remark} \subsection{Proofs of Theorems \ref{stack main thm} and \ref{main thm}} Fix a quiver $Q$, stability condition $\theta$ for $\Qdbl$ and the corresponding stability condition $\thetagtr$ for $\Qgtr$ as in Section \ref{sec:semistability}. Choosing a subgroup $\bS\subset\mathbb{G}$ as in Section \ref{sec:moduli stacks}, we obtain a ``graph immersion'' in a product of Deligne-Mumford stacks \begin{equation} \operatorname{Rep}(\Lambda^q,\alpha)^{\theta\operatorname{-s}}/\bS \xrightarrow{\iota} \operatorname{Rep}(\Lambda^q,\alpha)^{\theta\operatorname{-s}}/\bS \times \operatorname{Rep}_{\operatorname{gr}}(\sA\intN, \algtr)^{\thetagtr\operatorname{-ss}}/\Sgtr. 
\end{equation} We write $\iota$ for the immersion and $\Gamma = \operatorname{Im}(\iota)$ for its image, a smooth closed substack. We remark that $\iota$ is {\em not} a closed immersion unless $H$ is trivial; however, the morphism $\iota$ identifies \begin{displaymath} \Gamma \cong \operatorname{Rep}(\Lambda^q,\alpha)^{\theta\operatorname{-s}}/\bS\times BH. \end{displaymath} It follows that $(1\times \iota)_*[\operatorname{Rep}(\Lambda^q,\alpha)^{\theta\operatorname{-s}}]$ is a nonzero rational multiple of $[\Gamma]$, and thus we may apply Proposition \ref{prop:image} with $(1\times \iota)_*[\operatorname{Rep}(\Lambda^q,\alpha)^{\theta\operatorname{-s}}]$ replaced by $[\Gamma]$, and we do this below. The factors $\operatorname{Rep}(\Lambda^q,\alpha)^{\theta\operatorname{-s}}/\bS$ and $\operatorname{Rep}_{\operatorname{gr}}(\sA\intN, \algtr)^{\thetagtr\operatorname{-ss}}/\Sgtr$ come equipped with universal representations $V$, $W$ respectively. The complex $\mathsf{Ext}$ defined in Section \ref{sec:Ext complex} descends to the product $\operatorname{Rep}(\Lambda^q,\alpha)^{\theta\operatorname{-s}}/\bS \times \operatorname{Rep}_{\operatorname{gr}}(\sA\intN, \algtr)^{\thetagtr\operatorname{-ss}}/\Sgtr$. We recall from Proposition \ref{prop:compactification} the compactification $\overline{\operatorname{Rep}(\Lambda^q,\alpha)^{\theta\operatorname{-s}}/\bS}$ of $\operatorname{Rep}(\Lambda^q,\alpha)^{\theta\operatorname{-s}}/\bS$, which maps to $\operatorname{Rep}_{\operatorname{gr}}(\sA\intN, \algtr)^{\thetagtr\operatorname{-ss}}/\Sgtr$ and induces an isomorphism on the open substack $\operatorname{Rep}(\Lambda^q,\alpha)^{\theta\operatorname{-s}}/\bS$. Pulling the complex $\mathsf{Ext}$ back to the product $\operatorname{Rep}(\Lambda^q,\alpha)^{\theta\operatorname{-s}}/\bS \times \overline{\operatorname{Rep}(\Lambda^q,\alpha)^{\theta\operatorname{-s}}/\bS}$, we get a complex that we will denote $C$. 
Direct calculation shows that the rank of $C$ is $m-2 = \operatorname{codim}(\Gamma) -2$ (we note that its rank depends only on $Q$ and $\alpha$: only the differentials distinguish between the ordinary and multiplicative preprojective algebras). It follows from Proposition \ref{prop:Ext properties} that $C$ has the following properties: \begin{enumerate} \item $\mathcal{H}^{-1}(C) = 0$, \item $\mathcal{H}^1(C)$ and $\mathcal{H}^1(C^\vee)$ are set-theoretically supported on $\Gamma$, and their scheme-theoretic restrictions to $\Gamma$ are line bundles. \end{enumerate} Thus, in order to show that $\Gamma$ satisfies the hypotheses of Proposition \ref{prop:Markman}, it suffices to show that $\Gamma$ is the scheme-theoretic support of both $\mathcal{H}^1(C)$ and $\mathcal{H}^1(C^\vee)$. We do this by considering a morphism \begin{displaymath} \operatorname{Spec}(k[\epsilon])\rightarrow \operatorname{Rep}(\Lambda^q,\alpha)^{\theta\operatorname{-s}}/\bS \times \overline{\operatorname{Rep}(\Lambda^q,\alpha)^{\theta\operatorname{-s}}/\bS} \end{displaymath} (where here and throughout the remainder of the proof, $k[\epsilon]$ denotes the ring of dual numbers) with the property that the closed point maps to $\Gamma$. Then it will suffice to show that either $\operatorname{Spec}(k[\epsilon])$ maps scheme-theoretically to $\Gamma$, or that the pullbacks of $\mathcal{H}^1(C)$ and $\mathcal{H}^1(C^\vee)$ to $\operatorname{Spec}(k[\epsilon])$ are scheme-theoretically supported at $\operatorname{Spec}(k) \subset \operatorname{Spec}(k[\epsilon])$. We thus consider representations $\overline{V}_\epsilon, \overline{V}'_{\epsilon}$ of $\Lambda^q[\epsilon]$ that are flat over $k[\epsilon]$ and have dimension vector $\alpha$ after tensoring with $k\otimes_{k[\epsilon]}-$; and let $V_\epsilon =\overline{V}_\epsilon[t]$, $V_\epsilon' =\overline{V}'_\epsilon[t]$. Assume $\tau_{[0,N]}V_{\epsilon}$, $\tau_{[0,N]}V_{\epsilon}'$ are $\thetagtr$-stable. 
The complex $C_\epsilon$ defined as in \eqref{eq:perfect-complex} becomes a complex of free $k[\epsilon]$-modules, and $\mathcal{H}^{-1}(C_\epsilon) = \operatorname {Hom}_{\sA_\epsilon\operatorname{-Gr}}\big(\tau_{[0,N]}V_{\epsilon}, \tau_{[0,N]}V_{\epsilon}'\big)$. This cohomology is isomorphic to $k[\epsilon]$ if and only if $\tau_{[0,N]}V_{\epsilon}\cong \tau_{[0,N]}V_{\epsilon}'$. Thus, $\mathcal{H}^1(C_\epsilon^\vee)$ is isomorphic to $k[\epsilon]$ if and only if $\tau_{[0,N]}V_{\epsilon}\cong \tau_{[0,N]}V_{\epsilon}'$. It follows that the scheme-theoretic support of $\mathcal{H}^1(C^\vee)$ is the reduced diagonal $\Gamma$. It remains to check that the same is true of $\mathcal{H}^1(C)$. To do that, we again start with $\tau_{[0,N]}V_{\epsilon}$, $\tau_{[0,N]}V_{\epsilon}'$ as above, but consider them as graded $\sA$-modules (i.e., forgetting the $k[\epsilon]$-module structure) and form the complex $C$. We have a short exact sequence of graded $\sA$-modules \begin{equation}\label{eq:extension} 0\rightarrow \epsilon\tau_{[0,N]}V_{\epsilon} \rightarrow \tau_{[0,N]}V_{\epsilon} \rightarrow k\otimes_{k[\epsilon]}\tau_{[0,N]}V_{\epsilon} \rightarrow 0, \end{equation} where by $k[\epsilon]$-flatness we have $ \epsilon\tau_{[0,N]}V_{\epsilon} \cong k\otimes_{k[\epsilon]}\tau_{[0,N]}V_{\epsilon}$, both stable; and similarly for $V'$. Assume without loss of generality that $k\otimes_{k[\epsilon]}\tau_{[0,N]}V_{\epsilon}\cong k\otimes_{k[\epsilon]}\tau_{[0,N]}V_{\epsilon}'$ as graded $\sA$-modules. Suppose there is a nonzero map of graded $\sA$-modules, $\phi: \tau_{[0,N]}V_{\epsilon}\rightarrow \tau_{[0,N]}V_{\epsilon}'$. 
If the composite \begin{equation}\label{eq:composite} \epsilon\tau_{[0,N]}V_{\epsilon} \hookrightarrow \tau_{[0,N]}V_{\epsilon} \xrightarrow{\phi} \tau_{[0,N]}V_{\epsilon}' \twoheadrightarrow k\otimes_{k[\epsilon]}\tau_{[0,N]}V_{\epsilon}' \end{equation} is nonzero, it is an isomorphism, since both its domain and target are stable of dimension vector $\algtr$; in which case both \eqref{eq:extension} and its analogue for $\tau_{[0,N]}V_{\epsilon}'$ are split extensions. This means that the tangent vector to $\operatorname{Rep}(\Lambda^q,\alpha)^{\theta\operatorname{-s}}/\bS \times \operatorname{Rep}(\Lambda^q,\alpha)^{\theta\operatorname{-s}}/\bS$ determined by $(\overline{V}_\epsilon, \overline{V}_\epsilon')$ is zero, and thus irrelevant to our analysis of the scheme-theoretic support of $\mathcal{H}^1(C)$. Thus we may assume that the composite \eqref{eq:composite} is zero, and so the morphism $\phi$ is a homomorphism of $1$-extensions. Now if $\phi(\epsilon\tau_{[0,N]}V_{\epsilon})\neq 0$, then again by stability it maps isomorphically onto $\epsilon\tau_{[0,N]}V_{\epsilon}'$. Since \eqref{eq:extension} is non-split, it follows that $\phi$ is an isomorphism, implying that the tangent vector determined by $(\overline{V}_\epsilon, \overline{V}_\epsilon')$ is tangent to $\Gamma$, and again irrelevant to our analysis of the scheme-theoretic support of $\mathcal{H}^1(C)$. Finally then, we may assume that $\phi(\epsilon\tau_{[0,N]}V_{\epsilon}) = 0$. It follows that $\phi$ factors through the quotient $k\otimes_{k[\epsilon]}\tau_{[0,N]}V_{\epsilon}$; similarly its image lies in $\epsilon\tau_{[0,N]}V_{\epsilon}'$. It follows that $\operatorname {Hom}_{\sA\operatorname{-Gr}}\big(\tau_{[0,N]}V_{\epsilon}, \tau_{[0,N]}V_{\epsilon}'\big)$ is scheme-theoretically supported over $\operatorname{Spec}(k)\subset \operatorname{Spec} k[\epsilon]$, and hence by Proposition \ref{prop:Ext properties}(1) that the same is true of $\mathcal{H}^1(C)$. 
Since this is true for every $\operatorname{Spec} k[\epsilon]\rightarrow \operatorname{Rep}(\Lambda^q,\alpha)^{\theta\operatorname{-s}}/\bS \times \operatorname{Rep}(\Lambda^q,\alpha)^{\theta\operatorname{-s}}/\bS $ not tangent to $\Gamma$, we conclude that $\mathcal{H}^1(C)$ has scheme-theoretic support equal to $\Gamma$, as required. By Proposition \ref{prop:Markman}, then, we conclude that $[\Gamma] = c_m(C)$. By Proposition \ref{prop:image}, the K\"unneth components of $c_m(C)$ thus span the image of the restriction map \begin{displaymath} H^*\big(\overline{\operatorname{Rep}(\Lambda^q,\alpha)^{\theta\operatorname{-s}}/\bS}\big) \longrightarrow H^*\big(\operatorname{Rep}(\Lambda^q,\alpha)^{\theta\operatorname{-s}}/\bS\big), \end{displaymath} which by Proposition \ref{prop:semiproj-coh} is exactly $\oplus_m W_m\Big(H^m\big(\operatorname{Rep}(\Lambda^q,\alpha)^{\theta\operatorname{-s}}/\bS\big)\Big)$. Since the Chern classes of $C$ are polynomials in the Chern classes of the tautological bundles (see the proof of Proposition 2.4(ii) of \cite{McNKirwan}), this completes the proof of Theorem \ref{stack main thm}, hence also of Theorem \ref{main thm}.\hfill\qedsymbol \subsection{Proof of Theorem \ref{derived cat}} The proof of Theorem \ref{derived cat} is essentially identical to that of Theorem 1.6 of \cite{McNKirwan} (and we note that Theorem \ref{derived cat} holds whenever $k$ is any field of characteristic zero and $q\in k^\times$). Indeed, the assumption that there is a vertex $i_0\in I$ for which $\alpha_{i_0}=1$ guarantees the following. First, we may take $\mathbb{S} = \prod_{i\neq i_0} GL(\alpha_i)$, which acts freely on the stable locus: thus, $\mathcal{M}_\theta^q(\alpha)^{\operatorname{s}}$ is a fine moduli space for stable representations of $\Lambda^q$. 
Second, exactly as in the proof of Theorem 1.6 of \cite{McNKirwan}, in the complex \eqref{eq:perfect-complex} there are direct sum decompositions \begin{displaymath} L(V_0,W_0) = \operatorname {Hom}(V_{0, i_0}, W_{0, i_0}) \oplus \big(\oplus_{i\neq i_0} \operatorname {Hom}(V_{0,i}, W_{0,i})\big) \; \text{and} \; \end{displaymath} \begin{displaymath} L(V_0, W_{2g}) = \operatorname {Hom}(V_{0,i_0}, W_{2g, i_0}) \oplus \big(\oplus_{i\neq i_0} \operatorname {Hom}(V_{0,i}, W_{2g,i})\big), \end{displaymath} so that the modification of \eqref{eq:perfect-complex} given by \begin{displaymath} \oplus_{i\neq i_0} \operatorname {Hom}(V_{0,i}, W_{0,i}) \xrightarrow{\partial_0} E(V_0,W_1) \xrightarrow{\partial_1} L(V_0,W_{2g})/\operatorname {Hom}(V_{0,i_0}, W_{2g, i_0}) \end{displaymath} has no cohomology at the ends, and in the middle has cohomology $\mathcal{H}$ that is a rank $m = \operatorname{codim}(\Gamma)$ vector bundle. Moreover, the remaining map $k=\operatorname {Hom}(V_{0, i_0}, W_{0, i_0}) \rightarrow E(V_0,W_1)$ defines a section $s$ of $\mathcal{H}$ whose scheme-theoretic zero locus is $Z(s) = \Gamma$. The remainder of the proof now follows that of Theorem 1.6 of \cite{McNKirwan}.\hfill\qedsymbol
\section{Introduction} \noindent The first account of syllogisms was given, in the field of correct reasoning, by the Greek philosopher Aristotle, who wrote in his Prior Analytics: \textquotedblleft syllogism is discourse in which, certain things being stated, something other than what is stated follows of necessity from their being so. I mean by the last phrase that they produce the consequence, and by this, that no further term is required from without in order to make the consequence necessary\textquotedblright \cite{jon}. A syllogism is a formal logical pattern for obtaining a conclusion from a set of premises. A categorical syllogism can be defined as a logical inference made up of three categorical propositions: two statements called the major premise and the minor premise, and a conclusion. Each of them expresses a quantified relationship between two objects. The positions of the objects in the premises give rise to a classification into syllogistic figures, of which there are $4$ different types. In each figure, the possible orderings of the quantifiers yield $64$ different combinations. Therefore, the categorical syllogistic system consists of $256$ syllogistic moods, of which $15$ are valid unconditionally and $9$ conditionally; in total, $24$ of them are valid. The syllogisms in the conditional group are also said to be \textit{strengthened}, or valid under \textit{existential import}, which is an explicit assumption of the existence of some \textit{S}, \textit{M} or \textit{P}. To handle them, we add a rule to SLCD, \textit{Some X is X when $X$ exists}, and consequently obtain the formal system SLCD$^\dagger$. For centuries, the categorical syllogistic system was a paramount part of logic. With the innovations in mathematical logic in the 19th and early 20th centuries, the situation changed. However, when J. 
\L{}ukasiewicz introduced syllogistic as an axiomatic system built on classical propositional calculus \cite{Lukasiewicz}, the situation became reversed once again. Thereby, the categorical syllogistic system plays an important role in the mainstream of contemporary formal logic. Furthermore, questions about \L{}ukasiewicz's axiomatization of syllogisms are still open, and new ideas arise from time to time. In recent years, the use of syllogisms has been studied extensively and investigated under different treatments, such as computer science \cite{Kumova, Hartmann}; engineering \cite{Kulik, jet}; artificial intelligence \cite{kryvyi}, \cite{zadeh}; etc. Computer science oriented logicians have also begun to take part \cite{Rocha}. The use of diagrams in formal logical reasoning has attracted interest for years, owing to the need to visualize complex logic problems that are otherwise difficult to understand. For example, at the end of the 1800s, Lewis Carroll used an original diagrammatic scheme to visualize categorical syllogisms in his book \cite{Lewis}. Instead of Venn diagrams, he used literal diagrams to solve categorical syllogistic problems containing 2 terms, 3 terms and so on. Moreover, the use of diagrams in computer systems is a significant topic today, since it has the potential to yield systems that are clearer and more flexible to operate. A common problem of many systems nowadays is that they are complicated, hard to understand and hard to use. So, we need diagrams or other graphical representations to support more effective and efficient problem solving \cite{nakatsu}. While applications of diagrammatic reasoning in the cognitive sciences seek ways to support learners in complex tasks, typically with paper-based or more \textquotedblleft static\textquotedblright \ diagrams \cite{Mayer1, Mayer2}, applications in artificial intelligence more typically concern how to program a computer to carry out these tasks \cite{Glaskow}. 
Besides, there are also some related works on the use of diagrams for syllogisms in different areas, such as \cite{moktefi2013beyond, castro2017re, alternativetovennn, manzano}. In this paper, we show how categorical syllogistic statements are expressed using Carroll's literal diagrams. Then, we give a new algorithm that decides whether a syllogism or a strengthened syllogism is valid, with the help of the calculus systems SLCD and SLCD$^{\dagger}$, respectively. \section{Preliminaries} In this section, we sketch the notation and terminology used throughout this manuscript. A categorical syllogism can be defined as a deductive argument consisting of two logical propositions and a conclusion obtained from them. It contains exactly three terms, each of which occurs in exactly two of the three constituent propositions, and each proposition (including the conclusion) expresses a quantified relationship between two terms. The terms in a categorical proposition may be related in one of the following four distinct forms, given in Table \ref{tab-1}. \begin{table}[h!] \centering \caption{Categorical Syllogistic Propositions} \label{tab-1} \begin{tabular}{c c c } \hline Symbol & Statement & Generic Term \\ \hline $A$ & All $X$ are $Y$ & Universal Affirmative\\ $E$ & No $X$ are $Y$ & Universal Negative\\ $I$ & Some $X$ are $Y$ & Particular Affirmative\\ $O$ & Some $X$ are not $Y$ & Particular Negative\\ \hline \end{tabular} \end{table} For any syllogism, the categorical propositions are built from three terms, a subject term, a predicate term, and a middle term: the subject term is the subject of the conclusion and is denoted by $S$; the predicate term modifies the subject in the conclusion and is denoted by $P$; and the middle term, which occurs in both premises and links the subject and predicate terms, is denoted by $M$. The subject and predicate terms occur in different premises, but the middle term occurs once in each premise. 
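The four proposition types of Table \ref{tab-1} admit a standard set-theoretic reading, which the following Python sketch records; the reading is the modern (Boolean) one, and the function names are ours, introduced only for illustration and not part of the system SLCD developed later.

```python
# Set-theoretic semantics of the four categorical propositions.
# Each term (e.g. S, M, P) is interpreted as a finite set of individuals.

def prop_A(x, y):
    """All X are Y."""
    return x <= y

def prop_E(x, y):
    """No X are Y."""
    return not (x & y)

def prop_I(x, y):
    """Some X are Y."""
    return bool(x & y)

def prop_O(x, y):
    """Some X are not Y."""
    return bool(x - y)

# With X = {1, 2} and Y = {1, 2, 3}: A and I hold, while E and O fail.
X, Y = {1, 2}, {1, 2, 3}
assert prop_A(X, Y) and prop_I(X, Y)
assert not prop_E(X, Y) and not prop_O(X, Y)
```

Note that under this reading the universal forms $A$ and $E$ hold vacuously for an empty term, so existential import is a genuinely separate assumption, matching its separate treatment via SLCD$^\dagger$.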
The premise which consists of the predicate term and the middle term is called the \textit{major premise}; the premise which consists of the subject term and the middle term is called the \textit{minor premise}. Categorical syllogisms are grouped in four different ways, traditionally called figures, depending on the positions of the term-variables $S$, $P$ and $M$, as shown in Table \ref{tab-2}. \begin{table}[h!] \centering \caption{Categorical Syllogistic Figures} \label{tab-2} \begin{tabular} [c]{|c|c|c|c|}\hline Major & Minor & Conclusion & Figure\\\hline\hline $M-P$ & $S-M$ & $S-P$ & 1\\\hline $P-M$ & $S-M$ & $S-P$ & 2\\\hline $M-P$ & $M-S$ & $S-P$ & 3\\\hline $P-M$ & $M-S$ & $S-P$ & 4\\\hline \end{tabular} \end{table} Aristotle identified only the first three figures; the last one was discovered in the Middle Ages. He examined each mood and figure to determine whether it was valid or not. Afterwards, he obtained some common properties of these syllogisms, which are called rules of deduction. These rules are as follows: $\textbf{Step 1:}$ Relating to the premises, irrespective of conclusion or figure: \begin{itemize} \item[(a)]No inference can be made from two particular premises. \item[(b)]No inference can be made from two negative premises. \end{itemize} $\textbf{Step 2:}$ Relating to the propositions, irrespective of figure: \begin{itemize} \item[(a)]If one premise is particular, the conclusion must be particular. \item[(b)]If one premise is negative, the conclusion must be negative. \end{itemize} $\textbf{Step 3:}$ Relating to the distribution of terms: \begin{itemize} \item[(a)]The middle term must be distributed at least once. \item[(b)]A predicate distributed in the conclusion must be distributed in the major premise. \item[(c)]A subject distributed in the conclusion must be distributed in the minor premise. \end{itemize} In the categorical syllogistic system, there are 64 different syllogistic forms for each figure. These are called \textit{moods}.
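The rules of deduction above can be checked mechanically for any mood-figure pair. The following Python sketch is our own encoding (function names are illustrative); it uses the classical account of distribution, on which $A$ distributes its subject, $E$ both terms, $I$ neither, and $O$ its predicate:

```python
# A sketch of the rules of deduction (Steps 1-3 above).
# `passes_rules` and `FIGURES` are our own names; the distribution table
# (A: subject; E: both; I: none; O: predicate) is the classical one.

FIGURES = {  # (major premise terms, minor premise terms); conclusion is (S, P)
    1: (("M", "P"), ("S", "M")),
    2: (("P", "M"), ("S", "M")),
    3: (("M", "P"), ("M", "S")),
    4: (("P", "M"), ("M", "S")),
}

def distributed(prop, subject, predicate):
    """Terms distributed by a proposition 'subject-prop-predicate'."""
    return {"A": {subject}, "E": {subject, predicate},
            "I": set(), "O": {predicate}}[prop]

def passes_rules(mood, figure):
    major, minor, conclusion = mood
    (mj_s, mj_p), (mn_s, mn_p) = FIGURES[figure]
    # Step 1: premises irrespective of conclusion or figure
    if major in "IO" and minor in "IO":
        return False                  # (a) two particular premises
    if major in "EO" and minor in "EO":
        return False                  # (b) two negative premises
    # Step 2: propositions irrespective of figure
    if (major in "IO" or minor in "IO") and conclusion not in "IO":
        return False                  # (a) particular premise, particular conclusion
    if (major in "EO" or minor in "EO") and conclusion not in "EO":
        return False                  # (b) negative premise, negative conclusion
    # Step 3: distribution of terms
    d_major = distributed(major, mj_s, mj_p)
    d_minor = distributed(minor, mn_s, mn_p)
    if "M" not in d_major | d_minor:
        return False                  # (a) middle term distributed at least once
    d_conc = distributed(conclusion, "S", "P")
    if "P" in d_conc and "P" not in d_major:
        return False                  # (b) predicate distributed in major premise
    if "S" in d_conc and "S" not in d_minor:
        return False                  # (c) subject distributed in minor premise
    return True
```

For instance, `passes_rules("AAA", 1)` (Barbara) holds, while `passes_rules("AAA", 2)` fails because the middle term is never distributed there. These rules are necessary conditions; in particular, all 24 valid forms of the next section pass them.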
Therefore, the categorical syllogistic system is composed of 256 possible syllogisms. Only 24 of them are valid in this system, and they are divided into two groups of 15 and of 9.\\ The syllogisms in the first group are valid \textit{unconditionally}; they are given in Table \ref{tab-3}. \begin{table}[h!] \centering \caption{Unconditionally Valid Forms} \label{tab-3} \begin{tabular}{ c c c c} \hline Figure I & Figure II & Figure III & Figure IV \\ \hline $AAA$ & $EAE$ & $IAI$ & $AEE$\\ $EAE$ & $AEE$ & $AII$ & $IAI$\\ $AII$ & $EIO$ & $OAO$ & $EIO$\\ $EIO$ & $AOO$ & $EIO$ & \\ \hline \end{tabular} \end{table} The syllogisms in the second group, called \textit{strengthened syllogisms}, are valid \textit{conditionally}, that is, valid under \textit{existential import}, which is an explicit supposition of the existence of some term; they are shown in Table \ref{tab-4}. \pagebreak \begin{table}[h!] \centering \caption{Conditionally Valid Forms} \label{tab-4} \begin{tabular}{c c c c c} \hline Figure I & Figure II & Figure III & Figure IV & Necessary Condition\\ \hline $AAI$ & $AEO$ & & $AEO$ & \textit{S} exists\\ $EAO$ & $EAO$ & & & \textit{S} exists\\ & & $AAI$ & $EAO$ & \textit{M} exists\\ & & $EAO$ & & \textit{M} exists\\ & & & $AAI$ & \textit{P} exists\\ \hline \end{tabular} \end{table} \section{Representation of Categorical Syllogisms via Carroll's Diagrams and a Calculus System SLCD} Carroll's diagrams, devised in 1884, are Venn-type diagrams in which the universe is represented by a square. Nevertheless, it is not clear whether Carroll developed his diagrams independently or as a modification of John Venn's. Carroll's scheme is a productive method that sums up several developments introduced by researchers in this area. For the categorical syllogistic system, we describe a homomorphic mapping between the categorical syllogistic propositions and Carroll's diagrams. Let $X$ and $Y$ be two terms and let $X'$ and $Y'$ be the complements of $X$ and $Y$, respectively.
For two terms, Carroll divides the square into four cells; by this means he obtains the so-called bilateral diagram, as shown in Table \ref{tab-5}. \begin{table}[h!] \centering \caption{Relation of Two Terms} \label{tab-5} \begin{tabular}{|c|c|c|} \hline & $X'$ & $X$ \\ \hline $Y'$ & $X'Y'$ & $XY'$ \\ \hline $Y$ & $X'Y$ & $XY$ \\ \hline \end{tabular} \end{table} When we explain the relations between two terms, each of these four cells can have three possibilities: $0$, $1$, or \textit{blank}. Here, $0$ means that the intersection cell of the two terms contains no element, $1$ means that it is not empty, and a \textit{blank} cell means that we have no information about the content of the cell; it could be $0$ or $1$. In the same manner, let $X$, $Y$, and $M$ be three terms and let $X'$, $Y'$, and $M'$ be the complements of $X$, $Y$, and $M$, respectively. To examine all relations among three terms, Carroll added one more square in the middle of the bilateral diagram; the result is called the trilateral diagram, as in Figure \ref{fig-1}. \begin{figure}[h!] \centering {\scalebox{0.70}{ \includegraphics[ ]{trilateral.jpg}}}% \caption{Relations of three terms} \label{fig-1} \end{figure} Each cell in a trilateral diagram is marked with a $0$ if it contains no element and with an $\textbf{I}$ if it is not empty. There is another use of $\textbf{I}$: it can be placed on the line where two cells meet, which means that at least one of these cells is not empty. Thus, $\textbf{I}$ is different from $1$. In addition, if any cell is \textbf{blank}, it has two possibilities: $0$ or $\textbf{I}$. To obtain the conclusion of a syllogism, the information of the two premises is carried onto a trilateral diagram. This presentation is more useful for the elimination method than the Venn diagram view. In this way, one can read off the conclusion of the premises more accurately and quickly from a trilateral diagram.
By means of this method, we transfer the data from a trilateral diagram to a bilateral diagram involving only the two terms that occur in the conclusion, thereby eliminating the middle term. This method is applied according to the rules below \cite{Lewis}: \noindent\textit{\textbf{First Rule:}} The marks $0$ and $\textbf{I}$ are placed on the trilateral diagram. \noindent\textit{\textbf{Second Rule:}} If the quarter of the trilateral diagram contains an \textquotedblleft$\textbf{I}$\textquotedblright \ in either cell, then it is certainly occupied, and one may mark the corresponding quarter of the bilateral diagram with a \textquotedblleft$1$\textquotedblright \ to indicate that it is occupied. \noindent\textit{\textbf{Third Rule:}} If the quarter of the trilateral diagram contains two \textquotedblleft$0$\textquotedblright s, one in each cell, then it is certainly empty, and one may mark the corresponding quarter of the bilateral diagram with a \textquotedblleft$0$\textquotedblright \ to indicate that it is empty. We obtain the required conclusion of a syllogism by using these rules. The effect of Carroll's method of transfer, unknown to Venn, should not be underestimated: it shows how to extract the conclusion from the premises of a syllogism \cite{moktefi2012history}. Now, we give the set-theoretical representation of syllogistic arguments by means of bilateral diagrams. To build such a model, we draw on Carroll's diagrammatic method. We define a map which assigns a set to each bilateral diagram. Our main purpose is thereby to construct a complete bridge between sets and categorical syllogisms, as in Table \ref{tab-6}.
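The Second and Third Rules amount to a simple cell-wise reduction. The following Python sketch is our own simplified encoding (it stores only $0$, $\textbf{I}$, and blank marks inside cells, and omits the case of an $\textbf{I}$ placed on the border line between two cells):

```python
# Reduce a trilateral diagram to a bilateral one over S and P.
# Cells are keyed by bits (s, p, m); marks are 0, 1 (for an I inside a
# cell), or None (blank).  Simplified sketch: Carroll's "I on the border
# line between two cells" marks are not modelled here.

def to_bilateral(trilateral):
    quarters = {}
    for s in (0, 1):
        for p in (0, 1):
            marks = [trilateral[(s, p, m)] for m in (0, 1)]
            if 1 in marks:             # Second Rule: some cell is occupied
                quarters[(s, p)] = 1
            elif marks == [0, 0]:      # Third Rule: both cells are empty
                quarters[(s, p)] = 0
            else:                      # otherwise the quarter stays blank
                quarters[(s, p)] = None
    return quarters
```

For instance, a trilateral diagram in which only the cell $(s{=}1, p{=}0, m{=}0)$ carries an $\textbf{I}$ and every other cell carries $0$ reduces to the bilateral diagram with a single $1$ in the $SP'$ quarter.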
\begin{table}[h] \centering \caption{\textit{The Paradigm for the representation of syllogistic arguments by using sets}} \label{tab-6} \begin{tabular}{c|c|c|c} & LOGIC& DIAGRAMS & SETS \\ \hline PREMISES& Propositions & $\xrightarrow{Translate}$& Sets\\ & & & $\downarrow$\\ CONCLUSIONS & Propositions & $\xleftarrow{Translate}$ & Sets \\ \end{tabular} \end{table} Let $X$ and $Y$ be two terms whose complements are denoted by $X'$ and $Y'$, respectively. Assume that $p_i$ denotes a possible form of a bilateral diagram, where $1\leq i \leq k$ and $k$ is the number of possible forms of the bilateral diagram, as in Table \ref{tab-7}. \begin{table}[h] \centering \caption{Bilateral diagram for a quantity relation between $X$ and $Y$} \label{tab-7} \begin{tabular}{|c|c|c|} \hline $p_i$ & $X'$ & $X$ \\ \hline $Y'$ & $n_1$ & $n_2$ \\ \hline $Y$ & $n_3$ & $n_4$ \\ \hline \end{tabular} \end{table} \noindent where $n_1, n_2, n_3, n_4\in\{0,1\}$. Throughout this paper, $R_{(A)}$, $R_{(E)}$, $R_{(I)}$ and $R_{(O)}$ correspond to the \textquotedblleft$All$\textquotedblright, \textquotedblleft$No$\textquotedblright, \textquotedblleft$Some$\textquotedblright and \textquotedblleft$Some-not$\textquotedblright \ statements, respectively. \begin{example}\label{example1} We analyze the statement $\textit{``No X are Y''}$, which means that there is no element in the intersection cell of $X$ and $Y$. We show it in the bilateral diagram in Table \ref{tab-8}. From Table \ref{tab-8}, we obtain all possible bilateral diagrams which have $0$ in the intersection cell of $X$ and $Y$; thus, Table \ref{tab-9} shows all possible forms of $\textit{``No X are Y"}$.\\ \begin{table}[h!] \centering \caption{\textit{Bilateral diagram for ``$No$ $X$ $are$ $Y$"}} \label{tab-8} $R_{(E)}=$ \begin{tabular}{|c|c|c|} \hline & $X'$ & $X$ \\ \hline $Y'$ & & \\ \hline $Y$ & & 0 \\ \hline \end{tabular} \end{table} \begin{table}[h!]
\centering \caption{\textit{All possible forms of ``$No$ $X$ $are$ $Y$"}} \label{tab-9} \begin{tabular}{|c|c|c|} \hline $\boldsymbol{p_1}$ & $X'$ & $X$ \\ \hline $Y'$ & 0 & 0 \\ \hline $Y$ & 0 & 0 \\ \hline \end{tabular}\ \ \begin{tabular}{|c|c|c|} \hline $\boldsymbol{p_2}$ & $X'$ & $X$ \\ \hline $Y'$ & 0 & 0 \\ \hline $Y$ & 1 & 0 \\ \hline \end{tabular}\ \ \begin{tabular}{|c|c|c|} \hline $\boldsymbol{p_3}$ & $X'$ & $X$ \\ \hline $Y'$ & 0 & 1 \\ \hline $Y$ & 0 & 0 \\ \hline \end{tabular}\ \ \begin{tabular}{|c|c|c|} \hline $\boldsymbol{p_4}$ & $X'$ & $X$ \\ \hline $Y'$ & 1 & 0 \\ \hline $Y$ & 0 & 0 \\ \hline \end{tabular}\ \ \begin{tabular}{|c|c|c|} \hline $\boldsymbol{p_5}$ & $X'$ & $X$ \\ \hline $Y'$ & 0 & 1 \\ \hline $Y$ & 1 & 0 \\ \hline \end{tabular}\ \ \begin{tabular}{|c|c|c|} \hline $\boldsymbol{p_6}$ & $X'$ & $X$ \\ \hline $Y'$ & 1 & 0 \\ \hline $Y$ & 1 & 0 \\ \hline \end{tabular}\ \ \begin{tabular}{|c|c|c|} \hline $\boldsymbol{p_7}$ & $X'$ & $X$ \\ \hline $Y'$ & 1 & 1 \\ \hline $Y$ & 0 & 0 \\ \hline \end{tabular}\ \ \begin{tabular}{|c|c|c|} \hline $\boldsymbol{p_8}$ & $X'$ & $X$ \\ \hline $Y'$ & 1 & 1 \\ \hline $Y$ & 1 & 0 \\ \hline \end{tabular} \end{table} \end{example} Now, in order to define a relation between bilateral diagrams and sets, let us form a set consisting of the numbers which correspond to the possible forms that each bilateral diagram possesses. For this aim, we first define a value mapping in which each possible bilateral diagram corresponds to exactly one value. \begin{definition}\label{definition1}\cite{rus} Let $p_j$ be a possible bilateral diagram and let $n_i$ be the value that its $i$-th cell possesses. The value $r^{\mathit{val}}_j$ corresponding to $p_j$ is calculated by using the formula $$r^{\mathit{val}}_j=\sum_{i=1}^4 2^{(4-i)}n_i, \ \ \ 1\leq j\leq k,$$ where $k$ is the number of all possible forms.
\end{definition} \begin{definition} Let $R^{\mathit{set}}$ be the set of the values which correspond to all possible forms of any bilateral diagram; that is, $R^{\mathit{set}}=\{r^{\mathit{val}}_j: 1\leq j \leq k, \text{ where $k$ is the number of all possible forms}\}$. The set of all these $R^{\mathit{set}}$'s is denoted by $\mathcal{R}^{\mathit{Set}}$. \end{definition} \begin{corollary} We obtain the set representations of all categorical propositions as follows: \begin{itemize} \item[-] \textit{All X are Y:} This means that the intersection cell of $X$ and $Y'$ is empty. We illustrate this statement in Table \ref{tab-10}. \begin{table}[h!] \centering \caption{\textit{$X$ intersection with $Y'$ is empty set}} \label{tab-10} $R_{(A)}=$\begin{tabular}{|c|c|c|} \hline & $X'$ & $X$ \\ \hline $Y'$ & &0 \\ \hline $Y$ & & \\ \hline \end{tabular} \end{table} \noindent From Table \ref{tab-10}, we obtain all possible forms by the same method as in Example \ref{example1}. With the help of Definition \ref{definition1}, the set representation of ``\textit{All X are Y}" corresponds to $R^{\mathit{set}}_{(A)}=\{0,1,2,3,8,9,10,11\}$. \item[-]\textit{No X are Y:} There is no element in the intersection cell of $X$ and $Y$, as in Table \ref{tab-11}. \begin{table}[h] \centering \caption{\textit{$X$ intersection with $Y$ is empty set}} \label{tab-11} $R_{(E)}=$\begin{tabular}{|c|c|c|} \hline & $X'$ & $X$ \\ \hline $Y'$ & & \\ \hline $Y$ & &0 \\ \hline \end{tabular} \end{table} \noindent By Example \ref{example1}, we have all possible forms of ``\textit{No X are Y}". Then, we obtain $R^{\mathit{set}}_{(E)}=\{0,2,4,6,8,10,12,14\}$. \newpage \item[-]\textit{Some X are Y:} There is at least one element in the intersection cell of $X$ and $Y$, as in Table \ref{tab-12}. \begin{table}[h!]
\centering \caption{\textit{$X$ intersection $Y$ has at least one element}} \label{tab-12} $R_{(I)}=$\begin{tabular}{|c|c|c|} \hline & $X'$ & $X$ \\ \hline $Y'$ & & \\ \hline $Y$ & & 1 \\ \hline \end{tabular} \end{table} By using the possible bilateral diagrams of $R_{(I)}$, we have $R^{\mathit{set}}_{(I)}=\{1,3,5,7,9,11,13,15\}$. \item[-]\textit{Some X are not Y:} If some elements of $X$ are not in $Y$, then they have to be in $Y'$. So, the intersection cell of $X$ and $Y'$ is not empty, as in Table \ref{tab-13}. \begin{table}[h!] \centering \caption{\textit{$X$ intersection $Y'$ has at least one element}} \label{tab-13} $R_{(O)}=$\begin{tabular}{|c|c|c|} \hline & $X'$ & $X$ \\ \hline $Y'$ & & $1$ \\ \hline $Y$ & & \\ \hline \end{tabular} \end{table} \noindent From the bilateral diagram of $R_{(O)}$, we get $R^{\mathit{set}}_{(O)}=\{4,5,6,7,12,13,14,15\}$. \end{itemize} \end{corollary} Let us consider the relationship between the possible bilateral diagrams of the premises before discussing categorical syllogisms via Carroll's diagrams. \begin{example} Let $p_i$ and $p_j$ be two possible forms of the bilateral diagrams of the major and minor premises, respectively. We take the possible forms of the bilateral diagrams as in Table \ref{tab-14}. \begin{table}[h] \centering \caption{The possible forms of bilateral diagrams} \label{tab-14} $p_i=$ \begin{tabular}{|c|c|c|} \hline & $P'$ & $P$ \\ \hline $M'$ & 1 & 0 \\ \hline $M$ & 0 & 0 \\ \hline \end{tabular} \ \ and \ \ \ $p_j=$ \begin{tabular}{|c|c|c|} \hline & $S'$ & $S$ \\ \hline $M'$ & 0 & 1 \\ \hline $M$ & 0 & 0 \\ \hline \end{tabular} \end{table} We input the data into a trilateral diagram as in Figure \ref{fig-2}. \begin{figure}[h!] \centering \caption{The relation of two possible forms} \label{fig-2} {\scalebox{0.70}{ \includegraphics[]{trilateral1.jpg}}}% \end{figure} By using the elimination method, we obtain the relation between $S$ and $P$ as in Table \ref{tab-15}.
\begin{table}[h] \centering \caption{The relation between $S$ and $P$} \label{tab-15} $p_k=$ \begin{tabular}{|c|c|c|} \hline & $P'$ & $P$ \\ \hline $S'$ & 0 & 0 \\ \hline $S$ & 1 & 0 \\ \hline \end{tabular} \end{table} Here, $r^{\mathit{val}}_i=8$ corresponds to the possible form $p_i$ and $r^{\mathit{val}}_j=4$ corresponds to the possible form $p_j$; we then obtain that $r^{\mathit{val}}_k=2$ corresponds to $p_k$, which is a possible conclusion. \end{example} Let $r^{\mathit{val}}_i$ and $r^{\mathit{val}}_j$ be the numbers corresponding to possible forms of bilateral diagrams which have a common term. Then we can get the relation between the two other terms by using this method. After these examples, we generalize them by a formula. \begin{definition} The syllogistic possible conclusion mapping, denoted by $\ast$, is a mapping which gives us the set of possible conclusions deduced from the possible forms of the major and minor premises. \end{definition} \begin{theorem} Let $r^{\mathit{val}}_i$ and $r^{\mathit{val}}_j$ correspond to the numbers of possible forms of the major and minor premises, respectively. Then $r^{\mathit{val}}_i\ast r^{\mathit{val}}_j$ equals the value given by the intersection of the row and column corresponding to $r^{\mathit{val}}_i$ and $r^{\mathit{val}}_j$ in Table \ref{tab-16}.
\end{theorem} \begin{table}[h] \centering \caption{Operation table} \label{tab-16} {\small \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline $\ast$& 0 & 1 & 2 & 3 & 4 & 8 & 12 & 5 & 10 & 6 & 9 & 7 & 11 & 13 & 14 & 15 \\ \hline 0& 0 & & & & & & & & & & & & & & & \\ \hline 1& & 1 & 4 & 5 & & & & & & & & & & & & \\ \hline 2& & 2 & 8 & 10 & & & & & & & & & & & & \\ \hline 3& & 3 & 12 & $H$ & & & & & & & & & & & & \\ \hline 4& & & & & 1 & 4 & 5 & & & & & & & & & \\ \hline 8& & & & & 2 & 8 & 10 & & & & & & & & & \\ \hline 12& & & & & 3 & 12 & $H$ & & & & & & & & & \\ \hline 5& & & & & & & & 1 & 4 & 5 & 5 & 5 & 5 & 5 & 5 & 5 \\ \hline 10& & & & & & & & 2 & 8 & 10 & 10 & 10 & 10 & 10 & 10 & 10 \\ \hline 6& & & & & & & & 3 & 12 & 9 & 6 & 11 & 14 & 7 & 13 & 15\\ \hline 9& & & & & & & & 3 & 12 & 6 & 9 & 7 & 13 & 11 & 14 & 15 \\ \hline 7& & & & & & & & 3 & 12 & 13 & 7 & $H_4$ & $H'_3$ & 7 & 13 & $H'_1$ \\ \hline 11& & & & & & & & 3 & 12 & 14 & 11 & $H_3$ & $H'_4$ & 11 & 14 & $H'_2$ \\ \hline 13& & & & & & & & 3 & 12 & 7 & 13 & 7 & 13 & $H_4$ & $H'_3$ & $H'_1$ \\ \hline 14& & & & & & & & 3 & 12 & 11 & 14 & 11 & 14 & $H_3$ & $H'_4$ & $H'_2$ \\ \hline 15& & & & & & & & 3 & 12 & 15 & 15 & $H_1$ & $H_2$ & $H_1$ & $H_2$ & $H$ \\ \hline \end{tabular}} \end{table} In Table \ref{tab-16}, considering the possible conclusion operation, some possible forms of premises have more than one possible conclusion, as given below: \begin{gather*} H=\{6, 7, 9, 11, 13, 14, 15\},\ H'_1=\{7, 11, 15\},\ H'_1=\{6, 7, 9, 11, 13, 15\},\\ H_2=\{13, 14, 15\},\ H'_2=\{11, 14, 15\},\ H_3=\{6, 7, 11, 14, 15\},\\ H'_3=\{6, 7, 13, 14, 15\},\ H_4=\{7, 9, 11, 13, 15\},\ H'_4=\{9, 11, 13, 14, 15\} \end{gather*} Therefore, we scrutinise all possible cases between two terms and their conclusions. Note that Table \ref{tab-16} is used as the $Syllogistic\_Mapping()$ subalgorithm in Section \ref{sec4}.
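The entries of Table \ref{tab-16} can also be recomputed semantically: each possible-form value encodes a fully specified bilateral diagram, and a pair of premise diagrams constrains which of the eight regions of the trilateral diagram may be occupied. The following Python sketch is our own reconstruction (names are illustrative); it enumerates all occupancy patterns and projects the consistent ones onto the $S$--$P$ diagram:

```python
from itertools import product

# Cell order n1..n4 of a bilateral diagram over (X, Y), weights 8, 4, 2, 1:
# n1 = X'Y', n2 = XY', n3 = X'Y, n4 = XY.
CELLS = [(0, 0), (1, 0), (0, 1), (1, 1)]
WEIGHTS = [8, 4, 2, 1]

def bits(value):
    """Decode a possible-form value into its cell bits (n1, n2, n3, n4)."""
    return [(value >> shift) & 1 for shift in (3, 2, 1, 0)]

def star(r_major, r_minor):
    """The possible-conclusion mapping *: enumerate the occupancy of the
    eight trilateral regions (s, p, m) consistent with both premise
    diagrams, and project each consistent model onto the S-P diagram."""
    regions = list(product((0, 1), repeat=3))          # keys (s, p, m)
    results = set()
    for occupancy in product((0, 1), repeat=8):
        model = dict(zip(regions, occupancy))
        # Major diagram over (P, M): cell (p, m) covers regions with any s;
        # a 0-cell forbids occupancy, a 1-cell requires some occupancy.
        major_ok = all(
            any(model[(s, p, m)] for s in (0, 1)) == bool(b)
            for (p, m), b in zip(CELLS, bits(r_major)))
        # Minor diagram over (S, M): cell (s, m) covers regions with any p.
        minor_ok = all(
            any(model[(s, p, m)] for p in (0, 1)) == bool(b)
            for (s, m), b in zip(CELLS, bits(r_minor)))
        if major_ok and minor_ok:
            # Project onto the conclusion diagram over (P, S).
            results.add(sum(
                w for (p, s), w in zip(CELLS, WEIGHTS)
                if any(model[(s, p, m)] for m in (0, 1))))
    return results
```

For instance, `star(8, 4)` returns `{2}`, matching the worked example above, and `star(3, 3)` returns the set $H$; an inconsistent pair of premises yields the empty set, corresponding to a blank entry of the table.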
\begin{definition} The universes of the value sets of the major premises, the minor premises, and the conclusions are denoted by $\mathcal{R}^{\mathit{set}}_{\textit{Maj}}$, $\mathcal{R}^{\mathit{set}}_{\textit{Min}}$ and $\mathcal{R}^{\mathit{set}}_{\textit{Con}}$, respectively. \end{definition} Let $R^{\mathit{set}}_{(k)}$ be an element of $\mathcal{R}^{\mathit{set}}_{\textit{Maj}}$ and let $R^{\mathit{set}}_{(l)}$ be an element of $\mathcal{R}^{\mathit{set}}_{\textit{Min}}$. The main problem is what the conclusion of these premises is. In syllogistic, we have the patterns mentioned in Table \ref{tab-3} and Table \ref{tab-4} above. Now, we explain them by using bilateral diagrams with an algebraic approach. \begin{definition} The syllogistic mapping, denoted by $\circledast$, is a mapping which gives us the conclusion of the major and the minor premises, as in Table \ref{tab-17}. \begin{table}[h!] \centering \caption{The conclusion of the major and the minor premises} \label{tab-17} \begin{tabular}{|c|c|c|} \hline & $P'$ & $P$ \\ \hline $M'$ & & \\ \hline $M$ & & \\ \hline \end{tabular} $\circledast$ \begin{tabular}{|c|c|c|} \hline & $S'$ & $S$ \\ \hline $M'$ & & \\ \hline $M$ & & \\ \hline \end{tabular} = \begin{tabular}{|c|c|c|} \hline & $P'$ & $P$ \\ \hline $S'$ & & \\ \hline $S$ & & \\ \hline \end{tabular} \end{table} \end{definition} \begin{theorem}\label{theorem4.12} Let $R^{\mathit{set}}_{(k)}=\{r^{\mathit{val}}_{k_1},\dots, r^{\mathit{val}}_{k_n}\}$ and $R^{\mathit{set}}_{(l)}=\{r^{\mathit{val}}_{l_1},\dots, r^{\mathit{val}}_{l_t}\}$ be two sets corresponding to the major and minor premises.
Then $\circledast: \mathcal{R}^{\mathit{set}}_{\textit{Maj}}\times\mathcal{R}^{\mathit{set}}_{\textit{Min}}\rightarrow \mathcal{R}^{\mathit{set}}_{\textit{Con}}$ $$R^{\mathit{set}}_{(k)} \circledast R^{\mathit{set}}_{(l)}:= \bigcup^n_{j=1} \bigcup^t_{i=1} r^{\mathit{val}}_{k_j}\ast r^{\mathit{val}}_{l_i}$$ is the conclusion of the premises $R^{\mathit{set}}_{(k)}$ and $R^{\mathit{set}}_{(l)}$. \end{theorem} \begin{theorem}\cite{senturkoner} A syllogism is valid if and only if it is provable in \textit{SLCD}. \end{theorem} \begin{remark} For the conditionally valid forms, we need an additional rule, namely \textit{\textquotedblleft Some $X$ are $X$"}. We can use the above theorem by taking this rule into consideration. \end{remark} \begin{remark} If the rule \textit{\textquotedblleft Some X are X, when X exists"} (i.e., $\vdash\boldsymbol{I}_{XX}$) is added to SLCD, then the resulting calculus system is denoted by $\textit{SLCD}^{\dagger}$. \end{remark} \begin{definition}\cite{senturkoner} Let $R_{(k)}$ be the bilateral diagram presentation of a premise. The \textit{transposition} of a premise is obtained by reflecting its cells across the main diagonal. It is shown by $Trans(R_{(k)})$.
\begin{eqnarray*} Trans:\mathcal{R}^{\mathit{set}}&\rightarrow& \mathcal{R}^{\mathit{set}},\\ {R}^{\mathit{set}}_{(k)}&\rightarrow& Trans({R}^{\mathit{set}}_{(k)}) =\{r^{\mathit{val}}_{k^T_1},\dots, r^{\mathit{val}}_{k^T_n}\}.\nonumber \end{eqnarray*} \end{definition} \newpage \begin{theorem}\label{theorem4.17}\cite{senturkoner} Let $R^{\mathit{set}}_{(k)}=\{r^{\mathit{val}}_{k_1},\dots, r^{\mathit{val}}_{k_n}\}$ and $R^{\mathit{set}}_{(l)}=\{r^{\mathit{val}}_{l_1},\dots, r^{\mathit{val}}_{l_t}\}$ be the two sets corresponding to the value sets of the major and minor premises, and let $R^{\mathit{set}}_{(s)}=\{r^{\mathit{val}}_{s_1},\dots, r^{\mathit{val}}_{s_m}\}$ be the set corresponding to the constant set values, which mean \textquotedblleft Some S are S", \textquotedblleft Some M are M" and \textquotedblleft Some P are P". Then $\circledast^{\dagger}: \mathcal{R}^{\mathit{set}}_{\textit{Maj}}\times\mathcal{R}^{\mathit{set}}_{\textit{Min}}\rightarrow \mathcal{R}^{\mathit{set}}_{\textit{Con}}$ $$ R^{\mathit{set}}_{(k)} \circledast^{\dagger} R^{\mathit{set}}_{(l)}:= \begin{cases} \bigcup^n_{j=1} \bigcup^t_{i=1} \bigcup^m_{h=1} (r^{\mathit{val}}_{k_j}\ast (r^{\mathit{val}}_{s_h}\ast r^{\mathit{val}}_{l^T_i})), \; \; & \textit{if S exists}, \\ \bigcup^n_{j=1} \bigcup^t_{i=1} \bigcup^m_{h=1} (r^{\mathit{val}}_{k_j}\ast (r^{\mathit{val}}_{l_i}\ast r^{\mathit{val}}_{s_h} )), \; \; & \textit{if M exists}, \\ \bigcup^n_{j=1} \bigcup^t_{i=1} \bigcup^m_{h=1} ((r^{\mathit{val}}_{s_h} \ast r^{\mathit{val}}_{k^T_j})\ast r^{\mathit{val}}_{l_i}), \; \; & \textit{if P exists} \end{cases}$$ is the conclusion of the premises $R^{\mathit{set}}_{(k)}$ and $R^{\mathit{set}}_{(l)}$ under the condition \textit{S exists}, \textit{M exists} or \textit{P exists}. \end{theorem} \begin{theorem}\cite{senturkoner} A strengthened syllogism is valid if and only if it is provable in \textit{SLCD}$^{\dagger}$.
\end{theorem} \section{An Algorithmic Decision for Categorical Syllogisms in SLCD}\label{sec4} In this part of the manuscript, we give, for the first time in the literature, an algorithm that decides whether a categorical syllogism is valid or not in the calculus system SLCD or SLCD$^{\dagger}$. \\ The global variables used by all functions are given below:\\ $Conc[\ ][\ ]:$ a two-dimensional array, the set of all possible bilateral diagrams \\ $Const\_set:$ the constant set for each of the conditions S exists, M exists and P exists $\bullet$ Algorithm Syllogism:\\ This is the main algorithm. In it, the $MPSM()$ and $Decision()$ subalgorithms are run for each state (Unconditional, S exists, M exists and P exists) and for each figure (Figure 1, Figure 2, Figure 3 and Figure 4). The algorithm sends the related figure as a parameter to the subalgorithm $MPSM()$, and it sends the related state and figure as parameters to the subalgorithm $Decision()$. \begin{algorithm}[H] \DontPrintSemicolon \caption{Syllogism\label{A1}} \KwData{All states for each Figure} \KwResult{Obtain the conclusion set of the syllogisms and decide for each syllogism whether it is \textquotedblleft Valid" or \textquotedblleft Invalid".} \BlankLine \emph{\textbf{Syllogism()}} \; \ForEach{$cond$ in $Conditions\{Unconditional, S\_exists, M\_exists, P\_exists\}$}{ \ForEach{$fig$ in $Figures\{Figure1, Figure2, Figure3, Figure4\}$}{ $MPSM(fig)$\; $Decision(cond, fig)$\; } } \end{algorithm} \vspace{0.5cm} $\bullet$ Subalgorithm MPSM:\\ This subalgorithm determines the positions of the subject, middle and predicate terms with respect to the figure given as the input parameter.\\ \begin{algorithm}[H] \DontPrintSemicolon \caption{MPSM \label{A2}} \KwData{The specified figure} \KwResult{Positions of the major and minor terms are determined} \BlankLine \emph{\textbf{MPSM(fig)}} \; \uIf{$fig= ``Figure 1"$}{ $mj_1 = ``M"; mj_2 = ``P"$\; $mn_1 = ``S"; mn_2 = ``M"$\;} \uElseIf{$fig=``Figure 2"$}{ $mj_1 = ``P"; mj_2 = ``M"$\; $mn_1 = ``S"; mn_2 = ``M"$\;}
\uElseIf{$fig=``Figure 3"$}{ $mj_1 = ``M"; mj_2 = ``P"$\; $mn_1 = ``M"; mn_2 = ``S"$\;} \uElseIf{$fig=``Figure 4"$}{ $mj_1 = ``P"; mj_2 = ``M"$\; $mn_1 = ``M"; mn_2 = ``S"$\;} \end{algorithm} \vspace{0.5cm} $\bullet$ Subalgorithm Decision:\\ This algorithm determines major and minor sets for each prepositions (A, E, I and O) of major and minor premises by using $Set\_Interpretation()$ subalgorithm. We obtain premises conclusion via $Syllogistic\_Mapping()$ using major set and minor set values with respect to Table \ref{tab-16}. Later, for the analysed figure the premises conclusion set is compared to all conclusion sets under the corresponding state. If these are equal to each other, then the algorithm prints ``valid" output for the related syllogism.\\ \begin{algorithm}[H] \DontPrintSemicolon \caption{Decision\label{A3}} \KwData{The set interpretations of major and minor premises of syllogisms.} \KwResult{Obtain the Conclusion set of syllogisms and make a decision for syllogisms whether ``Valid" or ``Invalid".} \BlankLine \emph{\textbf{Decision(cond,fig)}} \ForEach{$mj\_prep$ in $Prepositions\{A, E, I, O\}$}{ $major\_set = Set\_Interpretation(mj\_prep,``major",cond)$\; \ForEach{$mn\_prep$ in $Prepositions\{A, E, I, O\}$}{ $minor\_set = Set\_Interpretation(mn\_prep,``minor",cond)$\; $premises\_conclusion=Syllogistic\_Mapping(major\_set,minor\_set)$\; \ForEach{$conc\_prep$ in $Prepositions\{A, E, I, O\}$}{ \If {$premises\_conclusion=Conc[cond][conc\_prep]$}{$Print \ mj\_prep\ \& \ mn\_prep\ \& \ conc\_prep\ \& \ ``-Valid"$} } } } \end{algorithm} \vspace{0.5cm} $\bullet$ Subalgorithm Set\_Interpretation:\\ In this algorithm, temp set is determined for premise type and premise preposition as unconditional state. 
\\ - If the state is ``S exists" and the premise type is ``minor", then a new temporary set is determined by taking the transpose of the temporary set, and the result of the subalgorithm $Syllogistic\_Mapping()$, which takes the constant set and the new temporary set as inputs, respectively, is returned.\\ - If the state is ``M exists" and the premise type is ``minor", then the result of the subalgorithm $Syllogistic\_Mapping()$, which takes the temporary set and the constant set as inputs, respectively, is returned.\\ - If the state is ``P exists" and the premise type is ``major", then a new temporary set is determined by taking the transpose of the temporary set, and the result of the subalgorithm $Syllogistic\_Mapping()$, which takes the constant set and the new temporary set as inputs, respectively, is returned.\\ - If the state is ``Unconditional", then the temporary set itself is returned. \begin{algorithm}[H] \DontPrintSemicolon \caption{Set Interpretation\label{A4}} \KwData{The specified premise preposition, premise type and state} \KwResult{The conclusion set} \BlankLine \emph{\textbf{$Set\_Interpretation(premise\_prep, premise\_type, cond)$}} \; Determine $Temp\_set$ using premise\_type and premise\_prep for Unconditional state with respect to the Diagram\; \If{$premise\_type=``minor"$}{ \If{$cond=``S \ Exists"$} {$NTemp\_set=transpose\ the\ diagram\ of\ Temp\_set$\; $return \ Syllogistic\_Mapping(Const\_set, NTemp\_set)$} \If{$cond=``M \ Exists"$} {$return \ Syllogistic\_Mapping(Temp\_set, Const\_set)$} } \If{$premise\_type=``major"$}{ \If{$cond=``P \ Exists"$} {$NTemp\_set=transpose\ the\ diagram\ of\ Temp\_set$\; $return \ Syllogistic\_Mapping(Const\_set, NTemp\_set)$} } $return \ Temp\_set$ \end{algorithm} \section{Conclusion} In this paper, we present, for the first time in the literature, a new effective algorithm for categorical syllogisms by using the calculus system SLCD.
In accordance with this purpose, we explain categorical syllogisms with the help of Carroll's diagrams, and we identify the unconditionally and conditionally valid syllogisms via this algorithmic approach. As a result, our aim in this paper is to design an algorithm that is useful to researchers in different areas of science that employ categorical syllogisms, such as artificial intelligence, engineering, computer science and mathematics. \section*{References}
\subsubsection*{Acknowledgments} \small{ \new{ We thank Abhishek Kadian and Prithviraj Ammanabrolu for their help in the initial stages of the project. The Georgia Tech effort was supported in part by NSF, AFRL, DARPA, ONR YIPs, ARO PECASE. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the U.S. Government, or any sponsor.} }
\section{Introduction and Specification} A disruptive event is an event that obstructs routine processes from fulfilling their goals and is instigated by many unauthorized sources~\cite{alsaedi2015feature,alsaedi2017can}. Its duration is often unpredictable: it may last for one or two days or continue for several days. It can disturb law and order, which may lead to civil unrest~\cite{panagiotopoulos20125,bahrami2018twitter}. The objectives of such events are often unclear; therefore, they happen in a very unplanned and unstructured manner. Nowadays, social media has become a primary source of information, and both the common man and the authorities report every event or incident on social media. For example, social media became a tool for the protests against the Citizenship Amendment Act (CAA) in India~\cite{web2}. In such scenarios, any misleading information spread over social microblogging sites can turn a peaceful protest into violence~\cite{web1}. However, if we can get an early indication of disruptive events using social media information, then preventive measures can be taken at an early stage. In this work, we first collect disruptive event data from social media (Twitter), which will be updated and gathered continuously. A part of this dataset is now available and published. Table~\ref{specification} shows the specification of the dataset. The description of the dataset is given in the subsequent sections.
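Once a tweet object is available, the attributes listed later in Table~\ref{attributes} can be assembled into one dataset row. The sketch below uses our own helper names (`to_record`, `label_tweet`); the attribute layout follows Tweepy's `Status`/`User` objects (`created_at`, `retweet_count`, `user.followers_count`, and so on), and the keyword-based labeling is a simplified stand-in for the collection procedure described below:

```python
# Build one dataset row (the 7 attributes of the attributes table) from a
# tweet-like object and a keyword list.  Helper names are ours; the field
# names mirror Tweepy's Status/User attributes.

def label_tweet(text, event_keywords):
    """Return 1 if the tweet mentions any disruptive-event keyword, else 0."""
    lowered = text.lower()
    return int(any(k.lower() in lowered for k in event_keywords))

def to_record(status, event_keywords):
    return {
        "created_at": status.created_at,
        "retweet_count": status.retweet_count,
        "follower_count": status.user.followers_count,
        "location": status.user.location,
        "username": status.user.screen_name,
        "statuses_count": status.user.statuses_count,
        "label": label_tweet(status.text, event_keywords),
    }
```

Any object exposing these attributes (for example, a `Status` returned by Tweepy's search) can be passed to `to_record` directly.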
\begin{table}[htb] \caption{Specifications Table} \centering \label{specification} \begin{tabular}{p{4.5cm}p{8.0cm}} \hline Subject Area & Computer science \\ Specific subject Area & Artificial Intelligence \\ Type of data & Textual tweets along with numeric attributes \\ How data was acquired & Data were acquired by extracting tweets, along with their features, using the open-source Tweepy library \\ Data Format & Raw csv file \\ Parameters for data collection & Only those tweets having keywords from the event and non-event sets were extracted. \\ Description of data collection & The data (a total of 2 files belonging to the event and non-event categories) comprises features extracted from the tweet and the user's profile using the Tweepy API. The data consists of 6 features, namely the time at which the tweet was posted, retweet count, follower count, location (only for users whose location services were turned on at the time of posting), username, and statuses count \\ Data source location & Worldwide \\ Data accessibility & The dataset can be accessed through the URL: \textit{\url{https://vj-creation.github.io/krl-webpage/resources.html}} or GitHub Link \textit{\url{https://github.com/devmehta01/DiPD}} \\ Related research article & Given in the references \\ \hline \end{tabular} \end{table} \section{Value of the Data} \begin{itemize} \item This data consists of a collection of eventful and non-eventful tweets. Each tweet is assigned a value of 1 or 0, where 1 means eventful and 0 means non-eventful. The data can be used as input for machine learning systems. \item Machine learning researchers can benefit from this dataset, while governmental and security agencies can benefit from the machine learning models resulting from it. Government organizations can apply such models to future tweets to keep track of events and mitigate them before they become violent.
\item Features such as tweet location are extracted and can be used to determine where the events are occurring. The data also includes features such as user followers and retweet count, which can be used to estimate the impact of a tweet. \item The provided dataset can also be used as a performance benchmark for developing state-of-the-art machine learning systems for disruptive event prediction. \end{itemize} \section{Data Description} This paper presents twitter data for the prior prediction of disruptive events. The target class contains two labels - event and non-event. The dataset contains 7 attributes and 263,561 records, out of which 94,855 records are of the event class and 168,706 records are of the non-event class. The attributes, described in Table~\ref{attributes}, contain details about the tweet and information about the user. The data contains numerical and continuous fields to be used for analysis based on classification, prediction, segmentation, and association algorithms. The dataset folder contains four csv files: two for event records (containing both raw and preprocessed tweets) and two for non-event records (also with raw and preprocessed tweets). \begin{table}[htb] \caption{Attributes Table} \centering \label{attributes} \begin{tabular}{lp{2.0cm}p{6.8cm}p{2cm}p{1cm}} \hline Nr. & Attribute & Description & Format & Values \\ \hline 1. & created\_at & The time and date at which the tweet was posted & YYYY/MM/DD HH:MM:SS & \\ 2. & retweet\_count & The number of times the given tweet was retweeted & Numeric & \\ 3. & follower\_count & The number of followers the tweeter has & Numeric & \\ 4. & location & The approximate location of the place from where the tweet was posted & Alphanumeric & \\ 5. & username & The twitter handle of the user & Alphanumeric & \\ 6. & statuses\_count & The number of tweets (including retweets) issued by the user & Numeric & \\ 7. & label & Whether the tweet lies in the event or non-event category & Boolean & {[}0,1{]} \\ \hline \end{tabular} \end{table} \subsection{Data Extraction} In order to extract the tweets, the Python library Tweepy, which wraps the Twitter API, was used. Event-class data was gathered using keywords related to major disruptive events such as the Farmers' protests in India and the Black Lives Matter protests. Similarly, non-event-class data was obtained using a different set of keywords. The algorithm avoids storing retweets. The stored attributes are of two types: user-specific information, such as the screen name, and tweet-specific information, such as the text, the number of retweets, and the date. Extraction was performed multiple times over a few weeks so as to collect as many unique tweets as possible. The data was then preprocessed to remove duplicate tweets. The dataset is deliberately kept slightly unbalanced, reflecting the fact that most tweets are usually non-events. A few examples from the dataset are presented in Table~\ref{examples}. \begin{table}[htb] \caption{Example instances from the dataset} \centering \scriptsize \label{examples} \begin{tabular}{p{10.8cm}p{2.5cm}} \hline Tweet & Label \\ \hline @DschlopesIsBack @lickofcow @DoomerCoomer At a black lives matter protest. This isn't about self defense, this is about rittenhouse not liking BLM standing up to police brutality. Not liking people taking a knee, not liking the civil disturbances and protest. Thats why he killed them & 1(Event) \\ \hline @ArvindKejriwal Ideally pollution should stop after \#Diwali What is your excuse now? & 1(Event) \\ \hline "*BABRI MASJID* The judgement given over the Babri Masjid verdict was utterly biased just make the majority happy!!
\#BabriMasjidVictimOfInjustice https://t.co/qoX2z92ghS" & 1(Event) \\ \hline This affair has become a toxic combination of the methods of the \#MeToo movement and the escalating aggression by the United States ruling elite toward China. & 1(Event) \\ \hline \#Bitcoin is at 63636 USD & 0(Non-event) \\ \hline Play the \#TheLastofUsPartll on grounded difficulty with permadeath enabled... With a PS4 that random ejects the disk! It's a different kinda rush cuz. & 0(Non-event) \\ \hline so glad seeing taeyong spending his time with his friends & 0(Non-event) \\ \hline Sorry Zomato we are not same.... Prefer Dal Bhaath or Kaanji Bhaath...... Your choices in vast India is very limited. @zomato @zomatocare https://t.co/btHnI91Rts & 0(Non-event) \\ \hline \hline \end{tabular} \end{table} \begin{figure}[htb] \centering \includegraphics[height = 4.5 in, width = 5.0 in]{bar.pdf} \caption{Percentage of tweets of various topics present in the event category} \label{bar} \end{figure} \begin{figure}[htb] \centering \includegraphics[height = 1.8 in, width = 2.2 in]{pie.pdf} \caption{Proportion of eventful and non-eventful tweets present in the dataset} \label{pie} \end{figure} \subsection{Distribution of tweets} As illustrated in Figure~\ref{bar}, tweets from various countries and domains are extracted, and their shares of the whole dataset are presented. The shares of the different topics have been drawn out, and one can clearly see the prominence of \#metoo and climate change. The list of event and non-event keywords used for extraction is given in Table~\ref{eventnonevent}.
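The per-tweet feature mapping used during collection (see the Data Extraction subsection) can be sketched as a small helper that maps a Tweepy status object to the seven stored attributes. This is an illustrative sketch assuming Tweepy's standard Status/User attribute names; the function names are ours, not from the dataset code:

```python
from types import SimpleNamespace

def extract_features(status, label):
    """Map a Tweepy-style status object to the seven dataset attributes."""
    return {
        "created_at": status.created_at,
        "retweet_count": status.retweet_count,
        "follower_count": status.user.followers_count,
        "location": status.user.location,
        "username": status.user.screen_name,
        "statuses_count": status.user.statuses_count,
        "label": label,  # 1 = event, 0 = non-event
    }

def is_retweet(status):
    """Retweets are skipped during collection; Tweepy exposes a
    'retweeted_status' attribute only on statuses that are retweets."""
    return hasattr(status, "retweeted_status")
```

In practice such helpers would be fed by a Tweepy cursor over keyword searches; here a mock object (SimpleNamespace) suffices to illustrate the mapping.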
\begin{table}[htb] \caption{Event and non-event keywords used for extraction} \centering \scriptsize \label{eventnonevent} \begin{tabular}{p{6.5cm}p{6.5cm}} \hline Event Keywords & Non-Event Keywords \\ \hline \#farmersprotest, Black lives matter, Citizenship, \#CAAProtest, \#BLM, \#KashmirBleeds, \#WeDemandJusticeForAltaaf, Nabanna, \#ChennaiRain, \#TamilNaduRain, \#DelhiRiot, \#BangaloreRiot, \#TripuraRiot, \#UAPA, \#LakhimpurKheri, \#IranProtests, \#BabriMasjidVictimOfInjustice, \#RamMandir, Ayodhya, \#MaharashtraRiots, \#JusticeForNirbahaya, \#MaharashtraUnsafe4Women, rape, 26/11, \#StandWithTaiwan, \#FreeHongKong, \#JusticeForpontharani, section 144, kashmir, article 370, \#PetrolDieselPriceHike, animal rights, \#metoo, \#LGBT, climate change, yellow vest, NRF, \#rotterdamprotest, netherland, belgium, brussels, Kyle Rittenhouse, cop26, green pass & Bitcoin, Cricket, Football, Tennis, Clothes, Vacation, Crypto, Sports, Guitar, Keyboard, Happy Birthday, Movies, music, stocks, leisure, galaxy, NASA, Science, School, KPop, Fruits, Mango, Gym, Workout, Decoration, Mothers day, Teachers day, phone, violin, youtube, hiking, Exam, Haircut, Outfit, Diwali, Shopping, Spotify, Facebook, Samsung, Apple, Phone, Marvel, DC, Pizza\\ \hline \end{tabular} \end{table} The overall distribution of the event and non-event classes is shown in Figure~\ref{pie}. About 36\% of the tweets belong to the event class and the rest to the non-event category. Even though there were more keywords for the event category, fewer tweets were extracted for it. \section*{Acknowledgment} This work is supported by a Research Grant under National Supercomputing Mission (India), Grant number: \textit{DST/NSM/R\&D\_HPC\_Applications/2021/24}. \bibliographystyle{ACM-Reference-Format}
\section{Introduction} In this study we investigate the bound state spectrum of the low-lying triplet states in the four-electron Be atom. As is well known (see, e.g., \cite{Sob}), the bound state spectra of the four-electron Be atom and Be-like ions include two separate series: singlet states and triplet states. In the lowest-order approximation in the fine-structure constant $\alpha$, where $\alpha = \frac{e^2}{\hbar c} \approx \frac{1}{137}$, there are no optical transitions between these two series of bound states. The bound triplet states in different four-electron atoms and ions have been neglected for quite some time, while various theoretical, computational and experimental works performed recently for the four-electron atoms and ions were mainly oriented towards the singlet states in such systems. In contrast with the singlet states in the four-electron atoms and ions, for the triplet bound states only a few highly accurate results are now known (see, e.g., \cite{Fro1}, \cite{Galv} and references therein). On the other hand, the different triplet bound states in the four-electron atoms and ions are of great interest in a large number of physical problems, including the spectral analysis of stellar and laboratory plasmas, the accurate prediction of the properties of light-element plasmas at high temperatures and arbitrary pressures, etc. For instance, many spectral lines which correspond to the four-electron B$^{+}$, C$^{2+}$, N$^{3+}$ and O$^{4+}$ ions are observed in the emission spectra of the hot Wolf-Rayet stars (see, e.g., \cite{Beals}, \cite{Allen1} and references therein). The Wolf-Rayet stars are known as the brightest radiating objects in our Galaxy \cite{Sobolev}. They can be used in the future to create a very accurate and reliable navigation system in our Galaxy.
Currently, the bound state spectrum of the triplet states in the four-electron atomic systems (atoms and ions) is not well known, and the present work provides important data in this area. The total energies of many excited states are known only approximately. The order of the different bound (triplet) states is often wrongly predicted and only approximately known. Published spectroscopic papers (see, e.g., \cite{Kramida}) do not include any of the rotationally excited bound states with $L \ge 4$. This situation has motivated us to perform accurate numerical computations of various $S(L = 0)$, $P(L = 1)$, $D(L = 2)$, $F(L = 3)$, $G(L = 4)$, $H(L = 5)$ and $I(L = 6)$ states in the Be atom with an infinitely heavy nucleus. Such a system is usually designated as the ${}^{\infty}$Be atom. As follows from Table 4.1 given in \cite{FrFish}, the bound states in various four-electron atoms and ions are difficult atomic systems for the Hartree-Fock method and all methods based on the Hartree-Fock approximation. The results of our calculations for the excited states are significantly more accurate than analogous results known from Hartree-Fock based methods. Based on the computed total energies of different bound (triplet) states we were able to draw a diagram of the triplet bound state spectrum of the four-electron Be atom (see Fig.1). In general, to draw such a spectral diagram one needs accurate computational results for a large number of bound states. Such a spectral diagram was never presented in earlier papers, which usually deal with one, or very few, bound states in the beryllium atom. In contrast with those approaches, our method allows one to determine and investigate all low-lying triplet states in the Be atom. In addition, with the computational results of this paper one can observe the actual transition from the low-lying bound states to the weakly-bound (or true Rydberg) states in the triplet series of the bound state spectrum of the Be atom.
For the $2^3S$, $3^3S$ and $4^3S$ states in the ${}^{\infty}$Be atom we also perform a separate series of highly accurate computations. The results of such calculations are used to predict a number of bound state properties in these states. Concluding remarks can be found in the final Section. \section{Hamiltonian and bound state wave functions in CI-method} The Hamiltonian $H$ of the four-electron atomic system is written in the form \begin{eqnarray} H = -\frac{\hbar^2}{2 m_e} \Bigl[ \sum^{4}_{i=1} \nabla^2_i + \frac{1}{M_n} \nabla^{2}_{5} \Bigr] - \sum^4_{i=1} \frac{Q e^2}{r_{i5}} + \sum^{3}_{i=1} \sum^{4}_{j=i+1} \frac{e^2}{r_{ij}} \label{eq1} \end{eqnarray} where $\hbar$ is the reduced Planck constant, $m_e$ is the electron mass and $e$ is the absolute value of the electron charge. Also, in this equation $Q$ and $M_n$ are the electric charge and mass of the nucleus expressed in $e$ and $m_e$, respectively. It is clear that $M_n \gg 1$, and bound states in the four-electron atoms/ions can be observed if $Q \ge 3$ in Eq.(\ref{eq1}). This means that the ground (singlet) state of the Li$^{-}$ ion is bound, while the negatively charged He$^{2-}$ ion does not exist as a bound system in any state. In Eq.(\ref{eq1}) and everywhere below the subscript 5 denotes the atomic nucleus, while the subscripts 1, 2, 3 and 4 stand for the electrons. As mentioned above, the four-electron Be atom has two independent series of bound states: singlet states and triplet states. The ground singlet state with $L = 0$ (also called the $2^1S$ state) has the lowest total energy $E$ = -14.667 355 3(5) $a.u.$ The total energies of the two bound $2^1P(L = 1)$ and $2^3P(L = 1)$ states are: $E$ = -14.473 44 $a.u.$ and $E$ = -14.567 24 $a.u.$ (see discussion and references in \cite{Galv,Chung}), respectively. It follows from these energy values that the two $P$ states lie well above the energy of the ground $2^1S$ state (see, e.g., \cite{Sob}).
In turn, the bound $2^3S$ state of the Be atom ($E$ = -14.430 060 015 $a.u.$ \cite{Fro1}) is located above these two $P$ states. In other words, in the four-electron atoms (and ions) the lowest bound state in the singlet series is the ground $2^1S$ state, while the analogous lowest bound state in the triplet series is the $2^3P$ state, i.e. the bound state with $L = 1$. In this work the total energies of different triplet states in the neutral Be atom have been obtained with the use of the Configuration Interaction (CI) method, employing $LS$ eigenfunctions and Slater orbitals in the same way as in our work of Ref. \cite{Li}, except that here we select among the $LS$ configurations, because otherwise their number would be too large. The orbitals used were $s$, $p$, $d$, $f$, $g$, $h$, and $i$ Slater orbitals defined as $\phi (\mathbf{r}) = r^{n-1}e^{-\alpha r}Y_l^m(\theta , \varphi )$. One set of two orbital exponents has been effectively optimized for all configurations. The CI wave function takes the form \begin{equation} \Psi =\sum_{p=1}^NC_p\Phi _p,\qquad \Phi _p=\hat{O}(\hat{L}^2)\hat{\mathcal{A}} \prod_{k=1}^n\phi _k(r_k,\theta _k,\varphi _k) \chi^{(0)}_1 \end{equation} where $\hat{\mathcal{A}}$ is the antisymmetrizer and $\hat{O}(\hat{L}^2)$ is a projection operator: it indicates that all such configurations are eigenfunctions of the $\hat{L}^2$ operator with the eigenvalue $L (L + 1)$. Among all possible symmetry-adapted configurations we have selected the ones whose energy contribution to the total energy of the state under consideration is $> 1\cdot 10^{-7}$ $a.u.$ (see Table I). In the last equation $\chi^{(0)}_1$ is the spin eigenfunction of the triplet state with $S = 1$ and $S_z = 0$, which has been chosen in the form \begin{equation} \chi^{(0)}_1 =\left[ (\alpha \beta -\beta \alpha )(\alpha \beta + \beta \alpha ) \right] \ \end{equation} In this work the total energies of a number of bound triplet $S$, $P$, $D$, $F$, $G$, $H$ and $I$ states have been determined to relatively high accuracy.
The total energies of these states are shown in Table II, ordered with respect to their numerical values. As follows from this Table, the triplet rotationally excited states, i.e. the $P$, $D$, $F$, $G$, $H$ and $I$ states, are slightly more stable than the corresponding $S$ states with the same principal quantum number $n$ ($n$ = 2, 3, 4, 5, $\ldots$). Also, a direct comparison of the total energies of the triplet and singlet states would be interesting for studying the fulfillment of Hund's rule of maximum multiplicity in the four-electron Be atom. The total energies of some triplet $P$ states in the Be atom were evaluated in a few earlier studies using the variational full-core plus correlation wave function (FCPC) \cite{Chung,Chen} (note that the non-relativistic variational FCPC energy values are also corrected to include the relativistic effects of the $1s^2$ electrons; we call this method `FCPC corrected') and others based on explicitly correlated Monte Carlo (MC) \cite{Galv,Bertini} and Multiconfiguration Hartree-Fock (MCHF) \cite{Froese} methods. These calculations are summarized in Table III. Figure 1 shows the approximate bound state spectrum of the triplet states of $^9$Be. For this picture we have used our best-to-date results (total energies) and the best energies known from the literature. The graphical representation of the excited $P$, $D$, $F$, $G$, $H$ and $I$ triplet states is approximate and is based on the CI calculations of this work. It is clear that the total energy of an arbitrary bound state in the four-electron Be atom must be lower than the total energy of the ground (doublet) $2^2S$ state of the three-electron Be$^{+}$ ion (the dissociation or threshold energy $E_{tr}$), which equals -14.324 763 176 47 $a.u.$ \cite{Yan}. If (and only if) this condition is obeyed, then such a state is a truly bound state. As follows from Table II, all states considered in this study are bound.
\section{General structure of the bound state spectra} The method described above allows one to conduct accurate computations of different bound states in the four-electron atoms and ions. Such states include both singlet and triplet bound states and rotationally excited bound states with arbitrary (in principle) angular momentum $L$. It opens a new avenue in the study of the bound state spectra of four- and many-electron atoms and ions, since in all earlier works only one (or very few) bound states with $L \le 1$ were considered. Based on the results of those studies it was very difficult to obtain an accurate and realistic picture of the bound state spectra of the four-electron atoms and ions. Now, we can investigate the whole bound state spectra of different four-electron atomic species. For the triplet states in the Be atom considered in this study we determined the total energies of a large number of bound states. For a better understanding of the relative positions of these bound states we plotted their total energies in one diagram (see Fig.1). In old books on atomic spectroscopy such pictures (or diagrams) were called `spectral diagrams'. Thus, our Fig.1 is the spectral diagram of the triplet states of the ${}^{\infty}$Be atom. Later, we found that such a spectral diagram is a very useful tool for studying some effects (e.g., electron-electron correlations and electron-electron repulsion) which essentially determine the actual order of the bound states in the spectrum and the energy differences between them. Formally, by performing numerical calculations of a large number of bound states in atomic systems one always needs to solve the two following problems: (1) to predict the correct order of the low-lying bound states, and (2) to describe the transitions between the low-lying bound states and the weakly-bound Rydberg states. To solve the first problem we can compare our results with the known experimental data for the Be atom \cite{Kramida}.
In general, the agreement between our computational results (see Fig.1) and the data for the beryllium atom presented in \cite{Kramida} can be considered as very good. It is clear that we have calculated only the non-relativistic (total) energies, i.e. all relativistic and lowest-order QED corrections were ignored. Note also that our CI-method is substantially more accurate than various procedures based on the Hartree-Fock approximation, but it still provides a restricted description of the electron-electron correlations in actual atoms and ions. Nevertheless, the observed agreement with the actual bound state spectrum of the triplet states in the Be atom (or ${}^{9}$Be atom) is very good for the low-lying bound states. The second problem formulated above is related to the weakly-bound Rydberg states, which were called the `hydrogenic' states in old atomic physics books. To explain the situation we note that the total energies of such states in the Be atom are described by the following formula (in atomic units) \begin{eqnarray} E({\rm Be}; n L) = E({\rm Be}^{+}; 2^2S) - \frac{m_e e^4}{2 \hbar^2} \frac{1}{(n + \Delta_{\ell})^2} = -14.324 763 176 47 - \frac{1}{2 (n + \Delta_{\ell})^2} \label{Rydb} \end{eqnarray} where $L = \ell$ (in this case), $n$ is the principal quantum number of the $n L$ state ($L$ is the angular quantum number) of the Be atom and $\Delta_{\ell}$ is the Rydberg correction, which explicitly depends upon $\ell$ (the angular momentum of the outermost electron) and the total electron spin of this atomic state. It can be shown that the Rydberg correction rapidly vanishes as $\ell$ grows (for given $n$ and $L$). Moreover, this correction $\Delta_{\ell}$ also decreases when the principal quantum number $n$ grows. It follows from this that the energy differences between the corresponding singlet and triplet bound states with the same $n$ and $L$ rapidly converge to zero when either of these two quantum numbers increases.
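Eq.(\ref{Rydb}) is easy to evaluate numerically; a minimal sketch (the variable and function names are ours), which also shows that the Rydberg series converges to the Be$^{+}$ threshold from below:

```python
E_THRESHOLD = -14.32476317647  # E(Be+; 2^2S) in a.u., the threshold energy

def rydberg_energy(n, delta_l=0.0):
    """Hydrogenic estimate of E(Be; nL) from the formula above;
    delta_l is the Rydberg correction for the outermost electron."""
    return E_THRESHOLD - 0.5 / (n + delta_l) ** 2
```

For $\Delta_{\ell} = 0$ and $n = 2$ this gives $-14.44976...$ $a.u.$, and the energies increase monotonically towards the threshold as $n$ grows.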
This criterion is important in applications, since it allows one to classify all bound states as Rydberg, pre-Rydberg or non-Rydberg states. As follows from the results of our calculations, all bound states with $n \ge 6$ in the beryllium atom are weakly-bound Rydberg states. On the other hand, all bound triplet states in the Be atom with $n \ge 4$ can be considered as pre-Rydberg states. To illustrate this we have determined the total energies of the triplet $4^{3}F$ and $5^{3}G$ states and compared them with the total energies of the singlet $4^{1}F$ and $5^{1}G$ states (see Table IV). As follows from Table IV the energy difference between the $4^{3}F$ and $4^{1}F$ states is $\approx 9.04 \cdot 10^{-4}$ $a.u.$, while the analogous difference between the total energies of the $5^{3}G$ and $5^{1}G$ states is $\approx 1.35 \cdot 10^{-4}$ $a.u.$ The ratios of these differences to the threshold energy $E_{tr}$ mentioned above are $\approx 6.31 \cdot 10^{-5}$ and $9.42 \cdot 10^{-6}$, respectively. These numerical values are very small in comparison with unity. On the other hand, they are comparable to, or larger than, the value $\approx 1 \cdot 10^{-5}$ found for actual Rydberg states. Therefore, the bound states of the Be atom with $n = 4$ and $n = 5$ can be considered as pre-Rydberg states. \section{Variational expansion in multi-dimensional gaussoids} It should be mentioned here that some bound $n^3S$ and $n^3P$ states in the Be atom have also been evaluated (see below) with the use of the basis set of multi-dimensional gaussoids proposed by Kolesnikov and Tarasov in the mid-1970s for nuclear few-body problems. In the next section we shall call this wave function the `KT-expansion' \cite{KT}. This method is different from the CI method mentioned above, but it allows one to evaluate total energies very accurately for some low-lying $S$ and $P$ states in the four-electron atoms and ions.
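The pre-Rydberg classification above rests on simple ratios; the quoted numbers can be checked in a few lines (an illustrative sketch; the names are ours):

```python
E_TR = 14.32476317647  # |E_tr| of the Be+ 2^2S threshold, in a.u.

def splitting_ratio(delta_e):
    """Ratio of a singlet-triplet energy difference to the threshold energy,
    used here to classify pre-Rydberg states."""
    return delta_e / E_TR

ratio_4F = splitting_ratio(9.04e-4)  # 4^3F vs 4^1F splitting
ratio_5G = splitting_ratio(1.35e-4)  # 5^3G vs 5^1G splitting
```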
This method is discussed below for the bound triplet ${}^3S$ states in the Be atom. First, note that the wave function of an arbitrary bound ${}^3S$ states in the Be atom can always be represented as the sum of products of the radial and spin functions (or configurations) \cite{LLQ}. For a triplet we use the spin eigenfunction with $S = 1$ and $S_z = 1$, where $S$ is the total electron spin, i.e. ${\bf S} = {\bf s}_1 + {\bf s}_2 + {\bf s}_3 + {\bf s}_4$, of four-electrons and $S_z$ is its $z-$projection. Therefore, our spin function $\chi_{11}(1,2,3,4)$ is defined by the following equalities: ${\bf S}^2 \chi_{11}(1,2,3,4) = 1 (1 + 1) \chi_{11}(1,2,3,4) = 2 \chi_{11}(1,2,3,4)$ and $S_z \chi_{11}(1,2,3,4) = \chi_{11}(1,2,3,4)$. In general, there are two such spin functions for each four-electron atom/ion in the triplet state. Below, we chose such functions in the form $\chi^{(1)}_{11} = (\alpha \beta - \beta \alpha) \alpha \alpha$ and $\chi^{(2)}_{11} = (2 \alpha \alpha \beta - \beta \alpha \alpha - \alpha \beta \alpha) \alpha$, where the notations $\alpha$ and $\beta$ denote spin-up and spin-down functions \cite{LLQ}, respectively. The total four-electron wave function of the triplet states is represented as the following sum \begin{eqnarray} \Psi = {\cal A}_e [\psi(A;\{r_{ij}\}) (\alpha \beta - \beta \alpha) \alpha \alpha] + {\cal A}_e [\phi(B;\{r_{ij}\}) (2 \alpha \alpha \beta - \beta \alpha \alpha - \alpha \beta \alpha) \alpha] \label{eq2} \end{eqnarray} where the notation $\{r_{ij}\}$ designates all ten relative coordinates (electron-nuclear and electron-electron coordinates) in the four-electron Be atom, while the notation ${\cal A}_e$ means the complete four-electron antisymmetrizer. 
The explicit formula for the ${\cal A}_e$ operator is \begin{eqnarray} {\cal A}_e = \hat{e} - \hat{P}_{12} - \hat{P}_{13} - \hat{P}_{23} - \hat{P}_{14} - \hat{P}_{24} - \hat{P}_{34} + \hat{P}_{123} + \hat{P}_{132} + \hat{P}_{124} + \hat{P}_{142} + \hat{P}_{134} + \hat{P}_{143} \nonumber \\ + \hat{P}_{234} + \hat{P}_{243} - \hat{P}_{1234} - \hat{P}_{1243} - \hat{P}_{1324} - \hat{P}_{1342} - \hat{P}_{1423} - \hat{P}_{1432} + \hat{P}_{12} \hat{P}_{34} + \hat{P}_{13} \hat{P}_{24} + \hat{P}_{14} \hat{P}_{23} \label{eq3} \end{eqnarray} Here $\hat{e}$ is the identity permutation, while $\hat{P}_{ij}$ is the permutation of the spin and spatial coordinates of the $i$th and $j$th identical particles. Analogously, the notations $\hat{P}_{ijk}$ and $\hat{P}_{ijkl}$ stand for the consecutive permutations of the spin and spatial coordinates of three and four identical particles (electrons). In actual bound state calculations one needs to know the explicit expressions for the spatial projectors only. These spatial projectors are usually obtained by applying the ${\cal A}_e$ operator to each component of the wave function in Eq.(\ref{eq2}). In the second step, one determines the scalar product (or spin integral) of the result with the initial spin function. After the integration over all spin variables one finds the corresponding spatial projector. For instance, in the case of the first term in Eq.(\ref{eq2}) we obtain \begin{eqnarray} {\cal P}_{\psi\psi} = \frac{1}{2 \sqrt{6}} (2 \hat{e} + 2 \hat{P}_{12} - \hat{P}_{13} - \hat{P}_{23} - \hat{P}_{14} - \hat{P}_{24} - 2 \hat{P}_{34} - 2 \hat{P}_{12} \hat{P}_{34} - \hat{P}_{123} - \hat{P}_{124} - \hat{P}_{132} \nonumber \\ - \hat{P}_{142} + \hat{P}_{134} + \hat{P}_{143} + \hat{P}_{234} + \hat{P}_{243} + \hat{P}_{1234} + \hat{P}_{1243} + \hat{P}_{1342} + \hat{P}_{1432}) \label{eq4} \end{eqnarray} Analogous formulas have been found for the two other spatial projectors ${\cal P}_{\psi\phi} = {\cal P}_{\phi\psi}$ and ${\cal P}_{\phi\phi}$.
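The antisymmetrizer in Eq.(\ref{eq3}) contains all $4! = 24$ permutations of the four electrons, each entering with the sign of its parity: the identity, the 8 three-cycles and the 3 double transpositions with $+$, the 6 transpositions and 6 four-cycles with $-$. This sign structure is easy to verify programmatically; a short sketch:

```python
from itertools import permutations

def parity(perm):
    """Sign of a permutation of 0..n-1: +1 if even, -1 if odd
    (computed by sorting the permutation with swaps)."""
    perm = list(perm)
    sign = 1
    for i in range(len(perm)):
        while perm[i] != i:
            j = perm[i]
            perm[i], perm[j] = perm[j], perm[i]
            sign = -sign
    return sign

# The 24 permutations of four electrons split evenly: 12 enter the
# antisymmetrizer with + and 12 with -, as in Eq. (eq3).
signs = [parity(p) for p in permutations(range(4))]
```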
The explicit formulas for the ${\cal P}_{\psi\phi} = {\cal P}_{\phi\psi}$ and ${\cal P}_{\phi\phi}$ spatial projectors are significantly more complicated and are not presented here. These formulas can be requested from the authors. In actual bound state calculations we can always restrict ourselves to one spin function, i.e. $\chi^{(1)}_{11}$ (or one spin configuration), and use the formula, Eq.(\ref{eq4}). The functions $\psi(A;\{r_{ij}\})$ and $\phi(B;\{r_{ij}\})$ in Eq.(\ref{eq2}) are the radial parts (or components) of the total wave function $\Psi$. For the bound states in various five-body systems these functions are approximated with the use of the KT-variational expansion written in ten-dimensional gaussoids \cite{KT}, e.g., for the $\psi(A;\{r_{ij}\})$ function we have \begin{eqnarray} \psi(A;\{r_{ij}\}) = {\cal P} \sum^{N_A}_{k=1} C_k \exp(-\sum_{ij} a_{ij} r^{2}_{ij}) \label{eq5} \end{eqnarray} where $N_A$ is the total number of terms, $C_k$ are the linear variational coefficients and ${\cal P}$ is the spatial projector defined in Eq.(\ref{eq4}). The notation $A$ (and similarly $B$ in Eq.(\ref{eq2})) stands for the corresponding set of non-linear parameters $\{ a^{(k)}_{ij} \}$ (and $\{ b^{(k)}_{ij} \}$) in the radial wave functions, Eq.(\ref{eq5}). It is assumed here that these two sets of non-linear parameters $A$ and $B$ are optimized independently of each other. In general, the KT-variational expansion is very effective for various few-body systems known in atomic, molecular and nuclear physics. A large number of fast algorithms developed recently for the optimization of the non-linear parameters in the trial wave functions, Eq.(\ref{eq5}), allow one to approximate the total energies and variational wave functions to high accuracy. In some cases, however, the overall convergence rate for some bound state properties is considerably lower than for the total energies.
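The role of the non-linear parameters in Eq.(\ref{eq5}) can be illustrated on the simplest one-electron analogue (this example is ours, not part of the KT calculations): for the hydrogen atom the single-Gaussian trial function $e^{-a r^2}$ gives the well-known variational energy $E(a) = \frac{3}{2} a - 2 \sqrt{2a/\pi}$ $a.u.$, minimized at $a = 8/(9 \pi)$ with $E_{min} = -4/(3 \pi) \approx -0.4244$ $a.u.$, above the exact value $-0.5$ $a.u.$ A sketch of the one-parameter optimization:

```python
from math import pi, sqrt

def energy(a):
    """Variational energy (a.u.) of hydrogen for the trial function exp(-a r^2)."""
    return 1.5 * a - 2.0 * sqrt(2.0 * a / pi)

# Crude grid search over the single non-linear parameter a; in real KT
# calculations many exponents a_ij^(k) are optimized simultaneously.
grid = [0.001 * k for k in range(1, 2001)]
a_best = min(grid, key=energy)
e_best = energy(a_best)
```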
In particular, it was found that the expectation values of the two- and three-particle delta-functions, e.g., $\langle \delta({\bf r}_{eN}) \rangle$, converge slowly. In other words, it takes a long time to approximate these expectation values to high accuracy. The total energy and other expectation values do not change drastically during such an additional optimization of the non-linear parameters in the wave functions. In this study we report a number of expectation values, computed with the use of the KT-expansion, for the bound triplet $2^3S, 3^3S, 4^3S$ and $5^3S$ states of the Be atom (see Table IV). The expectation values of the electron-nuclear delta-function $\langle \delta({\bf r}_{eN}) \rangle$ for these states are important for an approximate numerical evaluation of the hyperfine structure splitting of each of these states in the ${}^{9}$Be atom. \section{Bound state properties} The variational KT-expansion in multi-dimensional gaussoids allows one to obtain very accurate numerical values of the total energies. The fastest convergence of the KT-expansion is observed for the low-lying ${}^{3}S$ states. In this study, by using the KT-expansion, we have determined the total energies and some other bound state properties of a few low-lying triplet ${}^{3}S$ states in the ${}^{\infty}$Be atom. The current total energies of these states are: $E$ = -14.430 060 025 $a.u.$ ($2^3S$ state), -14.372 858 590 $a.u.$ ($3^3S$ state), -14.351 112 516 $a.u.$ ($4^3S$ state), -14.337 598 153 $a.u.$ ($5^3S$ state) and -14.328 903 15 $a.u.$ ($6^3S$ state). In general, the total energies of the excited $n^{3}S$ states are less accurate than our total energy obtained for the $2^3S$ state. By using our variational wave functions constructed in the KT-method we can determine a number of bound state properties of the Be atom in these bound states. The bound state properties of the $2^3S$, $3^3S, 4^3S$ and $5^3S$ states can be found in Tables IV and V.
The expectation values from Tables IV and V represent the basic geometrical and dynamical properties of the four-electron Be atom in these $S$ states. The physical meaning of all notations used in Tables IV and V to designate the bound state properties (or expectation values) is clear. All expectation values in these Tables are given in atomic units ($\hbar = 1, m_e = 1$ and $e = 1$). Note that these expectation values have never been determined in earlier studies. Moreover, these expectation values can be used in various applications, e.g., the electron-nuclear delta-functions are needed to determine the hyperfine structure splittings in the triplet $S$ states (see, e.g., \cite{Fro1}). Another interesting problem is to study the changes in the bound state properties of the triplet $S$ states which occur when the electric charge $Q$ of the central nucleus increases. Since the parameter $Q$ in Eq.(\ref{eq1}) is an integer, we can write for an arbitrary expectation value $\langle X \rangle$ \begin{eqnarray} \langle X(Q) \rangle = a_2 Q^2 + a_1 Q + a_0 + b_1 Q^{-1} + b_2 Q^{-2} + b_3 Q^{-3} + b_4 Q^{-4} + \ldots \label{Loran} \end{eqnarray} where the coefficients $a_2, a_1, a_0$ and $b_1, b_2, b_3, b_4, \ldots$ are real numbers and $Q \ge 4$ for the bound triplet states in the four-electron atoms/ions. The first three terms in Eq.(\ref{Loran}) form the regular part of this expansion (series), while all terms with the coefficients $b_{i}$ ($i = 1, 2, \ldots$) form the principal part of the expansion, Eq.(\ref{Loran}). The explicit form of the expansion, Eq.(\ref{Loran}), follows (see, e.g., \cite{Epst}) from the fact that the non-relativistic Coulomb Hamiltonian, Eq.(\ref{eq1}), is a quadratic form of the electron momenta ${\bf p}_i$ ($i = 1, 2, \ldots, N$) for $N-$electron atoms.
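Because Eq.(\ref{Loran}) is linear in the coefficients $a_2, a_1, a_0, b_1, b_2, b_3$, six total energies $E(Q)$ determine the truncated six-term expansion through a linear system (with more data points a least-squares fit is used instead). A self-checking sketch in pure Python; the sample coefficients below are arbitrary test values, not the fitted values of Table VI:

```python
def solve(A, y):
    """Solve the linear system A x = y by Gaussian elimination
    with partial pivoting (A given as a list of rows)."""
    n = len(A)
    M = [row[:] + [yi] for row, yi in zip(A, y)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def design_row(Q):
    """Basis functions of the six-term expansion: Q^2, Q, 1, 1/Q, 1/Q^2, 1/Q^3."""
    return [Q * Q, float(Q), 1.0, 1.0 / Q, 1.0 / Q ** 2, 1.0 / Q ** 3]

# Arbitrary test coefficients (a2, a1, a0, b1, b2, b3) used to verify
# that the fit recovers them exactly from six E(Q) values:
true_coeffs = [-0.9, 1.1, -0.6, 0.3, -0.05, 0.01]
charges = [4, 5, 6, 7, 8, 9]  # Q for Be, B+, C2+, N3+, O4+, F5+
energies = [sum(c * b for c, b in zip(true_coeffs, design_row(Q)))
            for Q in charges]
fitted = solve([design_row(Q) for Q in charges], energies)
```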
In actual applications to atomic systems the formula, Eq.(\ref{Loran}), is used to predict the bound state properties, including the total energies, for those atomic systems for which direct numerical calculations are either difficult or impossible. In reality, the numerical values of the coefficients $a_2, a_1, a_0$ and $b_1, b_2, b_3, \ldots$ are determined by fitting the results of accurate numerical computations with Eq.(\ref{Loran}). In this study, by using the total energies of a number of bound $2^3S-$states in the four-electron Be atom and some Be-like ions (B$^{+}$, C$^{2+}$, \ldots, Na$^{7+}$ and Mg$^{8+}$), we determine the coefficients $a_2, a_1, a_0$ and $b_1, b_2, b_3$ for the total energies. In other words, we determine the actual six-term expansion, Eq.(\ref{Loran}), for the function $E(Q)$ (see Table VI). Note that the total energies of the $2^3S$ states in these ions (see Table VI) are now known to much better accuracy than follows from earlier studies. Therefore, we conclude that our coefficients $a_2, a_1, a_0, b_1, b_2$ and $b_3$ are also substantially more accurate and reliable than the values obtained in earlier works. In principle, by using the same formula, Eq.(\ref{Loran}), we can evaluate other bound state properties for various bound states in the four-electron ions. \section{Conclusion} We have investigated the bound state spectrum of the low-lying triplet states in the four-electron Be atom. The total energies of the bound $S(L = 0)$, $P(L = 1)$, $D(L = 2)$, $F(L = 3)$, $G(L = 4)$, $H(L = 5)$ and $I(L = 6)$ states have been determined to the accuracy $\approx 1 \cdot 10^{-3} - 2 \cdot 10^{-3}$ a.u., which is much better than methods based on the Hartree-Fock approximation can provide. The results of our calculations are accurate and they agree with the experimental data known for these states from \cite{Kramida}. 
Note that our advanced approach has no restrictions in applications and allows one to investigate the whole spectrum of bound (triplet) states in the four-electron Be atom and Be-like ions. Such an analysis includes rotationally excited and highly excited states, and states with different (total) electron spin. By using our method it is possible to observe and investigate the actual transition from the low-lying bound states to the weakly-bound (or Rydberg) states in the spectra of the four-electron Be atom. These important advantages of our approach allowed us to draw the first spectral diagram of the triplet states in the four-electron beryllium atom. It appears that such spectral diagrams can be very useful in theoretical research and experimental applications. Briefly, we can say that this work opens a new avenue in the accurate numerical analysis of the bound triplet states in four-electron atoms and ions. It is expected that other theoretical and experimental works on the triplet states in four-electron atoms and ions will follow. In particular, in our next study we want to consider the low-lying triplet bound states in the four-electron Be-isoelectronic ions, including such ions of boron, carbon and nitrogen, which are important in stellar astrophysics. An obvious achievement of our study for few-body physics is the analysis of the whole spectra of bound (triplet) states in the four-electron atom(s), while in all earlier works only a very few bound states were investigated.
\section{Introduction} \label{sec:intro} Logic optimization approaches can be divided into {\em algorithmic-based methods}, which are based on global transformations, and {\em rule-based methods}, which are based on local transformations~\cite{espr}. Rule-based methods, also called {\em rewriting}, use a set of rules which are applied when certain patterns are found. A rule transforms a pattern for a local sub-expression, or a sub-circuit, into another equivalent one. Since rules need to be described, and hence the available types of operations/gates must be known, the rule-based approach usually requires that the description of the logic is confined to a limited number of operation/gate types such as AND, OR, XOR, and NOT. In addition, the transformations have limited optimization capability since they are local in nature. Examples of rule-based systems include LSS~\cite{lss} and SOCRATES~\cite{socrates}. Algorithmic methods use global transformations such as decomposition or factorization, and therefore they are much more powerful than rule-based methods. However, general Boolean methods, including don't care optimization, do not scale well for large functions. Algebraic methods are fast and robust, but they are not complete and thus often give lower quality results. For these reasons, industrial logic synthesis systems normally use algebraic restructuring methods in combination with rule-based methods. In this paper, we propose a new rewriting algorithm based on 5-input cuts. In the algorithm, the best circuits are pre-computed for a subset of NPN classes of 5-variable functions. A cut enumeration technique~\cite{Cong99} is used to find 5-input cuts for all nodes, and some of them are replaced with a best circuit. A Boolean matcher~\cite{Chai06} is used to map a 5-input function to its canonical form. The presented approach is expected to complement existing rewriting approaches, which are usually based on 4-input cuts. 
Our experimental results show that, by adding the new rewriting algorithm to the ABC synthesis tool~\cite{abc}, we can further reduce the area of heavily optimized large circuits by 5.57\% on average. The paper is organized as follows. Section~\ref{sec:bg} describes the main notions and definitions used in the sequel. Section~\ref{sec:prev} summarises previous work. Section~\ref{sec:main} presents the proposed approach. Section~\ref{sec:exp} shows experimental results. Section~\ref{sec:conc} concludes the paper and discusses open problems. \section{Background} \label{sec:bg} A \emph{Boolean network} is a directed acyclic graph in which the nodes represent logic gates and the directed edges represent connections between the gates. A network is also referred to as a \emph{circuit}. A node of the network has zero or more \emph{fanins}, and zero or more \emph{fanouts}. A \emph{fanin} of a node $n$ is a node $n_\text{in}$ such that there exists an edge from $n_\text{in}$ to $n$. Similarly, a \emph{fanout} of a node $n$ is a node $n_\text{out}$ such that there is an edge from $n$ to $n_\text{out}$. The \emph{primary inputs} (PIs) of a network are the zero-fanin nodes of the network. The \emph{primary outputs} (POs) of a network are a subset of all nodes. If a network contains flip-flops, the inputs/outputs of the flip-flops are treated as POs/PIs of the network. An \emph{And-Inverter graph} (AIG) is a network in which every node is either a PI or a 2-input AND gate, and every edge can carry a complementation (inverter) attribute. An AIG is structurally hashed~\cite{Ganai00} to ensure uniqueness of the nodes. The \emph{area} of an AIG is measured by the number of nodes in the network. A \emph{cut} of a node $n$ is a set $C$ of nodes such that any path from a PI to $n$ must pass through at least one node in $C$. Node $n$ itself forms a \emph{trivial cut}. The nodes in $C$ are called the \emph{leaves} of cut $C$. A cut $C$ is \emph{$K$-feasible} if $|C| \leq K$; additionally, $C$ is called a \emph{$K$-input cut} if $|C| = K$. 
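To make the cut terminology concrete, the following minimal Python sketch enumerates all $K$-feasible cuts of a toy AND-only network bottom-up, in the spirit of the cut enumeration technique of~\cite{Cong99}. The node names and the tiny example network are illustrative only, and edge complementation attributes are ignored here.

```python
def enumerate_cuts(fanins, K):
    """fanins maps an AND node to its pair of fanins; PIs do not appear as
    keys. Returns a dict: node -> set of K-feasible cuts, where each cut is
    a frozenset of leaf nodes."""
    cuts = {}

    def cuts_of(n):
        if n in cuts:
            return cuts[n]
        if n not in fanins:                      # primary input
            cuts[n] = {frozenset([n])}           # only the trivial cut
            return cuts[n]
        a, b = fanins[n]
        result = {frozenset([n])}                # trivial cut of n itself
        for ca in cuts_of(a):
            for cb in cuts_of(b):
                merged = ca | cb
                if len(merged) <= K:             # prune non-K-feasible merges
                    result.add(merged)
        cuts[n] = result
        return result

    for n in fanins:
        cuts_of(n)
    return cuts

# Toy network computing f = (x1 AND x2) AND (x3 AND x4).
fanins = {"n1": ("x1", "x2"), "n2": ("x3", "x4"), "f": ("n1", "n2")}
cuts = enumerate_cuts(fanins, K=4)
assert frozenset(["x1", "x2", "x3", "x4"]) in cuts["f"]   # a 4-input cut
assert len(cuts["f"]) == 5  # {f}, {n1,n2}, {n1,x3,x4}, {x1,x2,n2}, {x1..x4}
```

The cut set of a node is built from the cut sets of its two fanins; this bottom-up composition is the key property exploited by cut enumeration.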
The \emph{level} of a node $n$ is the number of edges on the longest path from any PI to $n$. The \emph{depth} of a network is the largest level among all internal nodes of the network. Two Boolean functions, $F$ and $G$, are \emph{NPN-equivalent} and belong to the same \emph{NPN equivalence class}, if $F$ can be transformed into $G$ through negation of inputs (N), permutation of inputs (P), and negation of the output (N)~\cite{hurst2}. \section{Previous Work} \label{sec:prev} Rewriting of networks was introduced in the early logic synthesis systems. SOCRATES~\cite{socrates} and the IBM system~\cite{lss}\cite{ibm} performed rewriting using a set of rules to replace a combination of library gates with another combination of gates which had a smaller area or delay. In SOCRATES, these rules were managed in an expert system deciding which ones to apply and when. The rules in SOCRATES were written by human designers, based on personal experience and observation of experimental results. In the MIS system~\cite{mis}, which later developed into SIS~\cite{sis}, local transformations such as \emph{simplification} were used to locally optimize a multi-level network after global optimization. Two-level minimization methods such as ESPRESSO~\cite{espr} were used to minimize the functions associated with the nodes in the network. Similar methods~\cite{Brayton90} were also included in the works of~\cite{bold,Malik88,Savoj89}. A rule-based rewriting method was used to simplify AND-OR-XOR networks in the multi-level synthesis approach presented in~\cite{Sasao95}. The AIG-based rewriting technique presented in~\cite{Bjesse04} is used to compress circuits before formal verification. Rewriting is performed in two steps. In the first step, which happens only once when the program starts, all two-level AIG subgraphs are pre-computed and stored in a table by their Boolean functions. In the second step, the AIG is traversed in topological order. 
The two-level AIG subgraphs of each node are found and the functionally equivalent pre-computed subgraphs are tried as the implementation of the node, while logic sharing with existing nodes is considered. The subgraph leading to the least overall number of nodes is used as the replacement of the original subgraph. An improved AIG rewriting technique for pre-mapping optimization is presented in~\cite{rwr}. It uses 4-input cuts instead of two-level subgraphs in rewriting, and preserves the number of logic levels, so that area is reduced without increasing delay. Additionally, AIG balancing, which minimizes delay without increasing area, is used together with rewriting to achieve better results. Iterating these two processes forms a new technology-independent optimization flow, which is implemented in the sequential logic synthesis and verification system ABC~\cite{abc}. Experiments show that this implementation scales to very large designs and is much faster than SIS~\cite{sis} and MVSIS~\cite{mvsis}, while resulting in circuits with the same or better quality. \section{AIG Rewriting Using 5-Input Cuts} \label{sec:main} The presented algorithm can be divided into two parts: \begin{enumerate} \item Best circuit generation \label{part:cgen} \item Cut enumeration and replacement \label{part:enum} \end{enumerate} Part \ref{part:cgen} of the algorithm tries to find the optimal circuits for a subset of ``practical'' 5-variable NPN classes, and stores these circuits. Part \ref{part:enum} of the algorithm enumerates all 5-input cuts in the target circuit, and chooses to replace a cut with a suitable best circuit. In the implementation of rewriting using 4-input cuts in~\cite{rwr}, pre-computed tables of canonical forms and the transformations are kept for all $2^{16}$ 4-input functions~\cite{abc}\cite{rwr}. As we extend rewriting to 5-input cuts, the size of these tables becomes $2^{32}$ entries, i.e., too large to be used in a program that runs on a regular computer. 
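To illustrate what NPN canonicalization computes, the following Python sketch finds the canonical form of a small truth table by exhaustively applying all $2 \cdot 2^n \cdot n!$ NPN transformations (shown for $n = 3$). This brute force is for illustration only and is not the Boolean matcher of~\cite{Chai06}; the convention of taking the numerically smallest truth table as the canonical form is an assumption of this sketch.

```python
from itertools import permutations, product

def apply_npn(tt, n, perm, neg):
    """Apply an input permutation/negation to truth table tt (an integer
    with 2**n bits): old input i takes the value of new input perm[i],
    complemented when neg[i] is 1."""
    g = 0
    for y in range(1 << n):
        x = 0
        for i in range(n):
            bit = ((y >> perm[i]) & 1) ^ neg[i]
            x |= bit << i
        if (tt >> x) & 1:
            g |= 1 << y
    return g

def npn_canonical(tt, n):
    """Smallest truth table in the NPN class of tt, found by exhausting
    all input permutations/negations and the output negation."""
    mask = (1 << (1 << n)) - 1
    best = mask
    for perm in permutations(range(n)):
        for neg in product((0, 1), repeat=n):
            g = apply_npn(tt, n, perm, neg)
            best = min(best, g, g ^ mask)    # g ^ mask: output negation
    return best

AND3 = 0b10000000   # 1 only when all three inputs are 1
OR3 = 0b11111110    # 0 only when all three inputs are 0
# By De Morgan's law, AND3 and OR3 are NPN-equivalent.
assert npn_canonical(AND3, 3) == npn_canonical(OR3, 3)
```

Even for $n = 4$ this brute force is cheap, but a full table indexed by all $2^{32}$ 5-input truth tables is not, which motivates computing the canonical form (and the accompanying transformation) on the fly.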
In our implementation, we use a Boolean matcher~\cite{Chai06} to dynamically calculate the canonical form of a truth table and the corresponding transformation from the original truth table. \subsection{Best circuit generation} Similarly to~\cite{rwr}, we pre-compute the candidate circuits for each NPN class so they can be directly used later. There are $616126$ NPN equivalence classes for 5-input functions, among which only $2749$ classes appear in all IWLS 2005 benchmarks~\cite{iwls2005} as 5-feasible cuts. We picked $1185$ of them with more than 20 occurrences, and generated best circuits for representative functions of these classes. Due to the increased complexity of the problem, we had to make some trade-offs between the quality of the circuits and the time and memory usage of our algorithm. Our implementation has the following differences compared to~\cite{rwr}: \begin{itemize} \item Use of a Boolean matcher to calculate the canonical form, instead of table look-up. \item Use of a hash map to store the candidate best circuits, instead of a full table. \item When deciding whether to store a node in the node list, a node with the same cost as an existing node is discarded, instead of being stored in the list. \item Nodes of both the canonical functions and the complements of the canonical functions are used as candidate circuits, while in~\cite{rwr} complement functions are not used. \item When the number of nodes reaches an upper limit, a reduction procedure is performed before the generation continues, leaving only the nodes used in the circuit table. \end{itemize} We use two structures to store the best circuits: the \emph{forest}, a list of all nodes, and the \emph{table}, which stores only pointers to the nodes in the list that represent canonical functions or their complements. In the \emph{forest}, a node can either be an AND node or an XOR node, and the two incoming edges of a node have complementation attributes. 
The \emph{cost} of a node is the number of AND nodes plus twice the number of XOR nodes that are reachable from this node towards the inputs. First, the constant zero node and five nodes for single variables are added into the \emph{forest}. The constant node and one of the variable nodes are added to the \emph{table}, since all variable nodes are NPN equivalent. Then, for each pair of nodes in the \emph{forest}, five types of 2-input gates are created, using the pair as inputs: \begin{itemize} \item AND gate \item AND gate with first input complemented \item AND gate with second input complemented \item AND gate with both inputs complemented \item XOR gate \end{itemize} A newly created node is stored in the \emph{forest} if the following conditions are met, otherwise it is discarded: \begin{itemize} \item The cost of the node is lower than any other node with the same functionality. \item The cost of the node is lower than or equal to any other node with NPN-equivalent functionality. \end{itemize} In addition, the pointer to this node is added to the \emph{table} if the following condition is also met: \begin{itemize} \item The function of the node is the canonical form representative, or its complement, in the NPN-equivalence class it belongs to. \end{itemize} When the number of nodes in the \emph{forest} reaches an upper limit, a node reduction procedure is performed, where only the nodes reachable from the nodes in the \emph{table} are left in the \emph{forest}. The algorithm stops when the number of uncovered ``practical'' classes is smaller than a threshold value. Finally, the generated best circuits are stored, so they can be used later when rewriting takes place. The pseudo-code of the proposed best circuit generation algorithm is shown in Algorithm~\ref{alg:gen}. The \texttt{GenerateBestCircuits} procedure returns a node list $N$ and a table of nodes $C$ recording the candidate best circuits for a subset of NPN classes. It takes three parameters. 
Parameter $P$ is a set of truth tables of ``practical'' 5-variable functions. This set contains about $1200$ 5-input canonical NPN representatives with 20 or more occurrences in IWLS 2005 benchmarks. Parameter $u$ is an integer indicating the acceptable number of uncovered practical NPN classes; $n_\text{max}$ is an integer indicating the number of nodes at which a node reduction is triggered. In our implementation, $u$ is set to $60$, and $n_\text{max}$ is set to $10000000$. The pseudo-code for procedure \texttt{TryNode} is shown in Algorithm~\ref{alg:node}. \texttt{TryNode} creates a node, and determines whether to put it into the node list and the circuit table. Parameter $T \in \{\text{AND}, \text{XOR}\}$ indicates whether the new gate should be an AND gate or an XOR gate. Parameters $n_0$ and $n_1$ are the two fanins of the new gate. Procedure \texttt{ReduceNodes} reduces the node list by removing the nodes that are not used in any circuit in the circuit table. Procedure \texttt{Canonicalize} calculates the canonical form of the truth table of a given function. In the algorithms, variables $N$, $C$ and $M$ are globally accessible. $N$ denotes the list of all nodes. $C$ is a hash map of the candidate circuits; each of its entries is a set of nodes storing the root nodes of the candidate circuits for the NPN class of this entry. $M$ is a temporary hash map storing the currently minimum costs of all functions. \begin{algorithm}[t] \caption{ \texttt{GenerateBestCircuits}($P$, $u$, $n_\text{max}$): Generate candidate best circuits for a subset of NPN classes of 5-input Boolean functions. 
} \label{alg:gen} \begin{algorithmic}[1] \STATE Add constant zero node to $N$ and $C$ \STATE Add variable nodes to $N$ \STATE Add node of variable 0 to $C$ \FOR{each $i$ from 2 \TO $|N|$} \FOR{each $j$ from 1 \TO $i - 1$} \STATE \texttt{TryNode}(AND, $N_i$, $N_j$) \STATE \texttt{TryNode}(AND, \texttt{Not}($N_i$), $N_j$) \STATE \texttt{TryNode}(AND, $N_i$, \texttt{Not}($N_j$)) \STATE \texttt{TryNode}(AND, \texttt{Not}($N_i$), \texttt{Not}($N_j$)) \STATE \texttt{TryNode}(XOR, $N_i$, $N_j$) \IF{num. of uncovered practical NPN classes $\leq u$} \RETURN \ENDIF \IF{$|N| > n_\text{max}$} \STATE \texttt{ReduceNodes}() \STATE $i \gets 1$ \STATE break \ENDIF \ENDFOR \ENDFOR \end{algorithmic} \end{algorithm} \begin{algorithm}[h] \caption{ \texttt{TryNode}($T$, $n_0$, $n_1$): Create a node of type $T$ with fanins $n_0$ and $n_1$, and determine whether to put it into $N$ or $C$. } \label{alg:node} \begin{algorithmic}[1] \STATE $n_\text{new} \gets$ \texttt{CreateNode}($T$, $n_0$, $n_1$) \STATE $t \gets \texttt{GetTruth}(n_\text{new})$ \IF{$M_t$ does not exist \OR $M_t > \texttt{Cost}(n_\text{new})$} \STATE $M_t \gets \texttt{Cost}(n_\text{new})$ \ELSE \RETURN \ENDIF \STATE $t_\text{canon} \gets \texttt{Canonicalize}(t)$ \IF{$\exists n \in C_{t_\text{canon}}$ such that $\texttt{Cost}(n) < \texttt{Cost}(n_\text{new})$} \RETURN \ENDIF \STATE add $n_\text{new}$ to the end of list $N$ \IF{$t \neq t_\text{canon}$ \AND $t \neq \texttt{Complement}(t_\text{canon})$} \RETURN \ENDIF \IF{$\exists n \in C_{t_\text{canon}}$ such that $\texttt{Cost}(n) > \texttt{Cost}(n_\text{new})$} \STATE $C_{t_\text{canon}} \gets \emptyset$ \ENDIF \IF{$t = t_\text{canon}$} \STATE $C_{t_\text{canon}} \gets C_{t_\text{canon}} \bigcup \{n_\text{new}\}$ \ELSE \STATE $C_{t_\text{canon}} \gets C_{t_\text{canon}} \bigcup \{\texttt{Not}(n_\text{new})\}$ \ENDIF \RETURN \end{algorithmic} \end{algorithm} \subsection{Cut enumeration and replacement} We use a cut enumeration and replacement technique quite similar to the one 
in~\cite{rwr}. The main difference is that we use a Boolean matcher to calculate the canonical form of the NPN representative as well as the transformation to the canonical form from the original function, while in~\cite{rwr}, a faster table look-up is used. The Boolean matcher proposed in~\cite{Chai06} calculates only the canonical form representation. We modified the program so that it can simultaneously generate the NPN transformation, which is needed when connecting the replacement graph to the whole circuit. Nodes are traversed in topological order. For each node, starting from the PIs to the POs, all of its 5-input cuts are listed~\cite{Cong99}. The canonical form truth table and the corresponding NPN transformation of each cut are calculated using the Boolean matcher~\cite{Chai06}. Each cut is then evaluated to determine whether there is a suitable replacement that does not increase the area of the network. Finally, the cut with the greatest gain is replaced by a best circuit. In the presented algorithm, zero-cost replacement is accepted, since it is a useful approach for re-arranging the AIG structure to create more opportunities in subsequent rewriting~\cite{Bjesse04}. The pseudo-code of the rewriting procedure is shown in Algorithm~\ref{alg:rwr}. For each node in the network, $N_\text{best}$ denotes the largest number of nodes saved by replacing a cut of the node by a pre-computed candidate circuit; $c_\text{best}$ and $u_\text{best}$ denote the corresponding candidate circuit and the original cut, respectively. These three variables are updated simultaneously if there exists a possible replacement. Procedure $\texttt{ConnectToLeaves}(N, c, u, {Trans})$ connects the fanins of candidate circuit $c$ to the leaves of cut $u$, following the NPN transformation ${Trans}$. Procedure $\texttt{Reference}(N, c)$ increases the reference count of the nodes belonging to sub-circuit $c$ in network $N$, whereas $\texttt{Dereference}(N, c)$ decreases the reference count. 
When the reference count of a node becomes zero, the node does not belong to the network. \begin{algorithm}[h] \caption{ \texttt{RewriteNetwork}($N$, $C$): Rewrite a Boolean network $N$ using candidate circuits stored in hash map $C$. } \label{alg:rwr} \begin{algorithmic}[1] \FOR{each node $n$ in $N$, in topological order} \STATE $N_\text{best} \gets -1$ \STATE $c_\text{best} \gets \text{NULL}$ \STATE $u_\text{best} \gets \text{NULL}$ \FOR{each 5-input cut $u$ of $n$} \STATE $t \gets \texttt{GetTruth}(u)$ \STATE $(t_\text{canon},{Trans}) \gets \texttt{Canonicalize}(t)$ \FOR{each candidate circuit $c$ in $C_{t_\text{canon}}$} \STATE $\texttt{ConnectToLeaves}(N, c, u, {Trans})$ \STATE $N_\text{saved} \gets \texttt{Dereference}(N, u)$ \STATE $N_\text{added} \gets \texttt{Reference}(N, c)$ \STATE $N_\text{gain} \gets N_\text{saved} - N_\text{added}$ \STATE $\texttt{Dereference}(N, c)$ \STATE $\texttt{Reference}(N, u)$ \IF{$N_\text{gain} \geq 0$ \AND $N_\text{best} < N_\text{gain}$} \STATE $N_\text{best} \gets N_\text{gain}$ \STATE $c_\text{best} \gets c$ \STATE $u_\text{best} \gets u$ \ENDIF \ENDFOR \ENDFOR \IF{$N_\text{best} = -1$} \STATE continue \ENDIF \STATE $\texttt{Dereference}(N, u_\text{best})$ \STATE $\texttt{Reference}(N, c_\text{best})$ \ENDFOR \end{algorithmic} \end{algorithm} In~\cite{rwr}, the authors proposed an optimization flow composed of \emph{balance}, \emph{rewrite} and \emph{refactor} processes, and implemented it in the tool ABC~\cite{abc} with the script \emph{resyn2}. Compared to~\cite{rwr}, rewriting using 5-input cuts exploits larger cuts and more replacement options, and thus has the potential for getting the \emph{resyn2} script out of local minima, providing better rewriting opportunities. \section{Experimental Results} \label{sec:exp} The presented algorithm is implemented using a structurally hashed AIG as an internal circuit representation and integrated in the ABC synthesis tool as a command \emph{rewrite5}. 
To evaluate its effectiveness, we performed a set of experiments using IWLS 2005 benchmarks~\cite{iwls2005} with more than 5000 AIG nodes after structural hashing. All experiments were carried out on a laptop with an Intel Core i7 1.6GHz (2.8GHz maximum frequency) quad-core processor, 6 MB cache, and 4 GB RAM. First, for each benchmark, we applied the sequence of commands \emph{resyn2; rewrite5; resyn2} in the modified ABC and compared the result to two consecutive runs of \emph{resyn2} without \emph{rewrite5} in between. The results are summarized in Table~\ref{tab:exp1}. Columns labeled by $A$ give the area in terms of AIG nodes. Columns labeled by $t$ give the runtime. The improvement in area and the increase in runtime are then calculated and shown in the last two columns. Table~\ref{tab:exp1} shows that the average improvement in area achieved by adding \emph{rewrite5} between two \emph{resyn2} runs is 3.50\%, at the cost of 33.18\% extra runtime. This result indicates that the proposed \emph{rewrite5} method is effective in bringing ABC's \emph{resyn2} optimization script out of local minima, leading to better optimization possibilities. The second experiment is performed similarly, except that we used a longer optimization flow: \emph{resyn2; rewrite5; resyn2; rewrite5; resyn2}. The result is compared to three consecutive runs of the \emph{resyn2} script. The result of the second experiment is shown in Table~\ref{tab:exp2}, which has the same structure as Table~\ref{tab:exp1}. The average improvement in area using the new optimization flow is 4.88\%, at the cost of 46.11\% extra runtime. This result shows that the \emph{resyn2} sequence can be further extended by inserting \emph{rewrite5} runs to achieve even better optimization. Even longer optimization flows were also tested. The comparison of average results is summarized in Table~\ref{tab:sum}. The improvement in area converges after a certain number of \emph{resyn2}-\emph{rewrite5} iterations. 
The increase of improvement is insignificant for more than four runs of \emph{resyn2}. \begin{table*}[p] \centering \footnotesize \begin{tabular}{cr|rr|rr|rr} \hline & & \multicolumn{ 2}{|c}{resyn2;resyn2} & \multicolumn{ 2}{|c|}{resyn2;rewrite5;resyn2} & & \\ \cline{3-6} benchmark & nodes & $A_1$ & $t_1$, sec & $A_2$ & $t_2$, sec & $(A_1-A_2)/A_1$ & $(t_2-t_1)/t_1$ \\ \hline ac97\_ctrl & 14244 & 10222 & 0.759 & 10212 & 0.921 & 0.10\% & 21.34\% \\ aes\_core & 21522 & 20153 & 3.125 & 19945 & 4.079 & 1.03\% & 30.53\% \\ b14\_1 & 9471 & 5902 & 1.299 & 4712 & 1.929 & 20.16\% & 48.50\% \\ b15\_1 & 17015 & 10215 & 2.067 & 10012 & 2.204 & 1.99\% & 6.63\% \\ b17\_1 & 51419 & 31447 & 5.364 & 30943 & 6.948 & 1.60\% & 29.53\% \\ b18\_1 & 130418 & 81185 & 18.947 & 78430 & 25.344 & 3.39\% & 33.76\% \\ b19\_1 & 254960 & 153796 & 37.618 & 149269 & 47.708 & 2.94\% & 26.82\% \\ b20\_1 & 21074 & 13635 & 2.666 & 12048 & 3.819 & 11.64\% & 43.25\% \\ b21\_1 & 20538 & 12845 & 2.618 & 10940 & 3.900 & 14.83\% & 48.97\% \\ b22\_1 & 31251 & 19698 & 4.109 & 16986 & 5.870 & 13.77\% & 42.86\% \\ des\_perf & 82650 & 73724 & 15.717 & 73224 & 23.228 & 0.68\% & 47.79\% \\ DMA & 24389 & 22306 & 2.524 & 20269 & 3.129 & 9.13\% & 23.97\% \\ DSP & 44759 & 37976 & 5.635 & 37728 & 7.734 & 0.65\% & 37.25\% \\ ethernet & 86650 & 55925 & 5.790 & 55838 & 7.879 & 0.16\% & 36.08\% \\ leon2 & 788737 & 774919 & 142.645 & 774065 & 187.660 & 0.11\% & 31.56\% \\ mem\_ctrl & 15325 & 8518 & 1.255 & 8449 & 1.511 & 0.81\% & 20.40\% \\ netcard & 803723 & 516124 & 93.952 & 516001 & 122.749 & 0.02\% & 30.65\% \\ pci\_bridge32 & 22790 & 16362 & 1.719 & 16271 & 2.288 & 0.56\% & 33.10\% \\ s35932 & 8371 & 7843 & 0.755 & 7843 & 1.003 & 0.00\% & 32.85\% \\ s38417 & 9062 & 7969 & 0.812 & 7936 & 1.149 & 0.41\% & 41.50\% \\ s38584 & 8477 & 7224 & 0.720 & 7188 & 0.921 & 0.50\% & 27.92\% \\ systemcaes & 12384 & 9614 & 1.705 & 9391 & 2.602 & 2.32\% & 52.61\% \\ tv80 & 9635 & 7084 & 1.169 & 6970 & 1.498 & 1.61\% & 28.14\% \\ 
usb\_funct & 15826 & 13082 & 1.439 & 12892 & 1.858 & 1.45\% & 29.12\% \\ vga\_lcd & 126696 & 88641 & 10.517 & 88659 & 14.268 & -0.02\% & 35.67\% \\ wb\_conmax & 47853 & 39163 & 4.748 & 38701 & 5.791 & 1.18\% & 21.97\% \\ \hline Average & & & & & & 3.50\% & 33.18\% \\ \hline \end{tabular} \caption{Effectiveness of improving double \emph{resyn2} optimization flow using \emph{rewrite5}, on IWLS 2005 benchmarks.} \label{tab:exp1} \end{table*} \begin{table*}[p] \centering \footnotesize \begin{tabular}{cr|rr|rr|rr} \hline & & \multicolumn{ 2}{|c}{resyn2;resyn2;resyn2} & \multicolumn{ 2}{|p{25mm}|}{resyn2;rewrite5;resyn2; rewrite5;resyn2} & & \\ \cline{3-6} benchmark & nodes & $A_1$ & $t_1$, sec & $A_2$ & $t_2$, sec & $(A_1-A_2)/A_1$ & $(t_2-t_1)/t_1$ \\ \hline ac97\_ctrl & 14244 & 10202 & 1.084 & 10180 & 1.396 & 0.22\% & 28.78\% \\ aes\_core & 21522 & 20044 & 4.562 & 19554 & 6.646 & 2.44\% & 45.68\% \\ b14\_1 & 9471 & 5652 & 1.702 & 4350 & 2.526 & 23.04\% & 48.41\% \\ b15\_1 & 17015 & 10029 & 2.335 & 9796 & 3.231 & 2.32\% & 38.37\% \\ b17\_1 & 51419 & 30107 & 7.446 & 29248 & 10.530 & 2.85\% & 41.42\% \\ b18\_1 & 130418 & 79204 & 24.658 & 74827 & 38.047 & 5.53\% & 54.30\% \\ b19\_1 & 254960 & 149177 & 49.815 & 143633 & 70.876 & 3.72\% & 42.28\% \\ b20\_1 & 21074 & 13405 & 3.811 & 10732 & 5.878 & 19.94\% & 54.24\% \\ b21\_1 & 20538 & 12240 & 3.603 & 9379 & 5.437 & 23.37\% & 50.90\% \\ b22\_1 & 31251 & 18967 & 5.614 & 15186 & 8.595 & 19.93\% & 53.10\% \\ des\_perf & 82650 & 73248 & 23.235 & 72322 & 36.941 & 1.26\% & 58.99\% \\ DMA & 24389 & 22288 & 3.573 & 20214 & 4.874 & 9.31\% & 36.41\% \\ DSP & 44759 & 37634 & 8.055 & 37273 & 12.465 & 0.96\% & 54.75\% \\ ethernet & 86650 & 55803 & 8.287 & 55794 & 12.067 & 0.02\% & 45.61\% \\ leon2 & 788737 & 774560 & 213.921 & 773399 & 352.054 & 0.15\% & 64.57\% \\ mem\_ctrl & 15325 & 8408 & 1.726 & 8313 & 2.260 & 1.13\% & 30.94\% \\ netcard & 803723 & 515961 & 133.294 & 515771 & 181.877 & 0.04\% & 36.45\% \\ pci\_bridge32 & 22790 & 
16313 & 2.385 & 16235 & 3.650 & 0.48\% & 53.04\% \\ s35932 & 8371 & 7843 & 1.034 & 7843 & 1.457 & 0.00\% & 40.91\% \\ s38417 & 9062 & 7947 & 1.158 & 7886 & 1.725 & 0.77\% & 48.96\% \\ s38584 & 8477 & 7217 & 1.021 & 7199 & 1.312 & 0.25\% & 28.50\% \\ systemcaes & 12384 & 9595 & 2.258 & 9248 & 4.043 & 3.62\% & 79.05\% \\ tv80 & 9635 & 7030 & 1.618 & 6879 & 2.308 & 2.15\% & 42.65\% \\ usb\_funct & 15826 & 13041 & 2.037 & 12784 & 2.880 & 1.97\% & 41.38\% \\ vga\_lcd & 126696 & 88621 & 15.258 & 88687 & 22.223 & -0.07\% & 45.65\% \\ wb\_conmax & 47853 & 38676 & 6.759 & 38095 & 9.032 & 1.50\% & 33.63\% \\ \hline Average & & & & & & 4.88\% & 46.11\% \\ \hline \end{tabular} \caption{Effectiveness of improving triple \emph{resyn2} optimization flow using \emph{rewrite5}, on IWLS 2005 benchmarks.} \label{tab:exp2} \end{table*} \begin{table}[h] \centering \footnotesize \begin{tabular}{c|cc} \hline & improvement in area & extra runtime \\ \hline SS $\rightarrow$ SWS & 3.50\% & 33.18\% \\ SSS $\rightarrow$ SWSWS & 4.88\% & 46.11\% \\ SSSS $\rightarrow$ SWSWSWS & 5.39\% & 47.48\% \\ SSSSS $\rightarrow$ SWSWSWSWS & 5.57\% & 51.21\% \\ \hline \multicolumn{2}{l}{NOTE: S stands for \emph{resyn2}; W stands for \emph{rewrite5}.} \end{tabular} \caption{Summary of average results.} \label{tab:sum} \end{table} \section{Conclusion} \label{sec:conc} In this paper, we present an AIG-based rewriting technique that uses 5-input cuts. The technique extends the approach of AIG rewriting using 4-input cuts presented in~\cite{rwr}. Experimental results show that our algorithm is effective in driving other optimization techniques, such as \emph{resyn2} script in ABC, out of local minima. The proposed rewriting technique might be useful in a new optimization flow combining rewriting of both 4-input and 5-input cuts. \balance \bibliographystyle{IEEEtran}